Metawhy / jserv_hf_fast.py
Created July 1, 2021 13:53 — forked from kinoc/jserv_hf_fast.py
Run a HuggingFace-converted GPT-J-6B checkpoint using FastAPI and Ngrok on a local GPU (3090 or TITAN)
# So you want to run GPT-J-6B using HuggingFace+FastAPI on a local rig (3090 or TITAN) ... tricky.
# special help from the Kolob Colab server https://colab.research.google.com/drive/1VFh5DOkCJjWIrQ6eB82lxGKKPgXmsO5D?usp=sharing#scrollTo=iCHgJvfL4alW
# Conversion to HF format (12.6GB tar image) found at https://drive.google.com/u/0/uc?id=1NXP75l1Xa5s9K18yf3qLoZcR6p4Wced1&export=download
# Uses gdown to download the image
# You will need ~26 GB of free space: 12+ GB for the tar and 12+ GB expanded (you can nuke the tar after expansion)
# Near-simplest language model API, with room to expand!
# Runs GPT-J-6B on a 3090 or TITAN and serves it using FastAPI
# Change "seq" (the context size) to adjust the memory footprint
Metawhy / jserv.py
Created July 1, 2021 12:18 — forked from kinoc/jserv.py
Simplest FastAPI endpoint for EleutherAI GPT-J-6B
# Near-simplest language model API, with room to expand!
# Runs GPT-J-6B on a 3090 or TITAN and serves it using FastAPI
# Change "seq" (the context size) to adjust the memory footprint
#
#  seq   VRAM usage
#  512   14.7 GB
#  900   15.3 GB
# Uses FastAPI, so install it first:
# https://fastapi.tiangolo.com/tutorial/
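#
# A minimal sketch of the endpoint described above. Hedged: the /generate
# route and the GenerateRequest schema are my placeholders, and `model` and
# `tokenizer` are assumed to be loaded on the GPU as in the earlier sketch.

import torch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
seq = 512  # context size; see the VRAM table above for the footprint

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: GenerateRequest):
    # truncate the prompt to the configured context window
    ids = tokenizer(
        req.prompt, return_tensors="pt", truncation=True, max_length=seq
    ).input_ids.to("cuda")
    with torch.no_grad():
        out = model.generate(ids, do_sample=True, max_new_tokens=req.max_new_tokens)
    return {"text": tokenizer.decode(out[0], skip_special_tokens=True)}

# Serve with: uvicorn jserv:app --host 0.0.0.0 --port 8000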