@jrknox1977
Last active April 14, 2024 08:42
DSPy - using TGI for a local model
# Install DSPy: pip install dspy
import dspy

# Create a client for a local Mistral 7B model served by TGI
# (Text Generation Inference, HuggingFace's model-serving toolkit).
mistral = dspy.HFClientTGI(model='mistralai/Mistral-7B-v0.1', port=8080, url='http://localhost')

# Make this the default language model for DSPy.
dspy.settings.configure(lm=mistral)
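# NOTE: this assumes a TGI server is already serving the model on
# localhost:8080. One common way to start one is via Docker (the image
# and cache path below are illustrative, not part of this gist):
#
#   docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data \
#     ghcr.io/huggingface/text-generation-inference \
#     --model-id mistralai/Mistral-7B-v0.1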
# This example is not required, but it helps show what is happening.
my_example = {
    "question": "What system was Final Fantasy 1 made for?",
    "answer": "NES",
}
# The signature for the predictor: a simple question-and-answer task.
class BasicQA(dspy.Signature):
    """Answer questions with short factoid answers."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often between 1 and 5 words")
# Define the predictor.
generate_answer = dspy.Predict(BasicQA)
# Call the predictor on a particular input.
pred = generate_answer(question=my_example['question'])
# Print the answer...profit :)
print(pred.answer)
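# Optional: to debug what DSPy actually sent to the model, the LM client
# keeps a history of calls; inspect_history prints the most recent prompt
# and completion (sketch; requires the TGI server to be reachable):
#
#   mistral.inspect_history(n=1)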