
@lambdaofgod
Last active April 16, 2023 18:23
OpenAssistant Pythia 6.9B

openassistant/pythia

pythia-6.9b

import transformers
import torch

model_id = "dvruette/oasst-pythia-6.9b-4000-steps"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
# Load the model in float16 and move it to the GPU.
model = transformers.AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()
pipeline = transformers.pipeline(task="text-generation", model=model, tokenizer=tokenizer, device="cuda:0")
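The OpenAssistant Pythia checkpoints expect the user message to be wrapped in chat-markup tokens, as in the prompts below. A tiny helper (hypothetical, not part of the original gist) makes that pattern reusable:

```python
def format_prompt(user_message: str) -> str:
    """Wrap a user message in the OpenAssistant chat markup
    expected by the oasst-pythia checkpoints."""
    return f"<|prompter|>{user_message}<|endoftext|><|assistant|>"

prompt = format_prompt("What is Pythia?")
# prompt == "<|prompter|>What is Pythia?<|endoftext|><|assistant|>"
```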
input_text = "<|prompter|>Write an emacs lisp function that asks user for input and stores it to a list `foo-list`<|endoftext|><|assistant|>"

pipeline(input_text, max_new_tokens=128)
generated_text: <|prompter|>Write an emacs lisp function that asks user for input and stores it to a list `foo-list`<|endoftext|><|assistant|>Sure, here's an example of an emacs lisp function that prompts the user for input and stores it to a list:

```lisp
(defun ask-for-input ()
  (interactive)
  (let ((input (read-from-string (prompt "Enter something: ")))
        (foo-list nil))
    (while input
      (push (string-to-number input) foo-list)
      (setq input (read-from-string (prompt "Enter something else: ")))
    foo-list))
```

This
input_text = "How to make postgres docker container not to expose 5432 port?"
prompt_template = "<|prompter|>{}<|endoftext|><|assistant|>"
prompt = prompt_template.format(input_text)

pipeline(prompt, max_new_tokens=256)
generated_text: <|prompter|>How to make postgres docker container not to expose 5432 port?<|endoftext|><|assistant|>To prevent PostgreSQL from exposing the 5432 port, you can modify the container's configuration file and add the following line to the "ports" section:

"ports": {
    "postgres": "3306:3306",
}

This will prevent PostgreSQL from listening on the 5432 port, which is used by the default "postgres" user to connect to the database.
input_text = """
Write Pydantic BaseModel classes that correspond to the following JSON

Request body (JSON)
prompt: string.
The input text to complete.

max_tokens: optional int (default = 100)
Maximum number of tokens to generate. A token represents about 4 characters for English texts. The total number of tokens (prompt + generated text) cannot exceed the model's maximum context length. It is 2048 for GPT-J and 1024 for the other models.

If the prompt length is larger than the model's maximum context length, the beginning of the prompt is discarded.

stream: optional boolean (default = false)
If true, the output is streamed so that it is possible to display the result before the complete output is generated. Several JSON answers are output. Each answer is followed by two line feed characters.
"""
prompt_template = "<|prompter|>{}<|endoftext|><|assistant|>"
prompt = prompt_template.format(input_text)

pipeline(prompt, max_new_tokens=256)
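For reference, here is a hand-written sketch of the Pydantic models the spec above describes (the class name `CompletionRequest` is my own choice, not the model's output; field names and defaults follow the spec):

```python
from typing import Optional
from pydantic import BaseModel

class CompletionRequest(BaseModel):
    """Request body for the completion endpoint described in the prompt."""
    prompt: str                      # the input text to complete
    max_tokens: Optional[int] = 100  # cap on the number of generated tokens
    stream: Optional[bool] = False   # if true, stream partial JSON answers

req = CompletionRequest(prompt="Hello")
# req.max_tokens == 100, req.stream == False
```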