Nathan Lambert natolambert

🎯
Focusing
View GitHub Profile
from typing import Dict, List

from rich.console import Console
from rich.panel import Panel
from datasets import load_dataset


def print_hf_messages(messages: List[Dict[str, str]]):
    # Pretty-print a Hugging Face-style chat (list of {"role", "content"} dicts),
    # alternating panel colors so consecutive turns are easy to tell apart.
    console = Console()
    colors = ["red", "green"]
    color_idx = 0
    console.rule(f"[bold yellow]The number of turns is {len(messages)}")
    for message in messages:
        console.print(
            Panel(message["content"], title=message["role"], border_style=colors[color_idx])
        )
        color_idx = (color_idx + 1) % len(colors)
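The snippet sets up a `colors` list and a `color_idx` counter to alternate panel colors across conversation turns. That cycling pattern can be sketched in plain Python without `rich`; `turn_colors` is a hypothetical helper name, not from the gist:

```python
from typing import List, Tuple


def turn_colors(n_turns: int, colors: Tuple[str, ...] = ("red", "green")) -> List[str]:
    # Assign each turn a color, cycling through the palette with modulo arithmetic.
    return [colors[i % len(colors)] for i in range(n_turns)]


messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
]
print(turn_colors(len(messages)))
```

With two colors the modulo simply flips between them, so user and assistant turns always render in different colors as long as roles strictly alternate.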
hamelsmu / is_fine_tuning_valuable.md
Last active April 4, 2024 01:22
My thoughts re: Is fine tuning still valuable?

Here is my personal opinion about the questions I posed in this tweet:


I think that fine-tuning is still very valuable in many situations. After some more digging, I find that the people who say fine-tuning isn't useful are often working on exactly the kinds of products where it isn't likely to help:

  • They are making developer tools - foundation models have been trained extensively on coding tasks.
  • They are building foundation models and testing for the most general cases. But the foundation models themselves are also being trained for the most general cases.
  • They are building a personal assistant that isn’t scoped to any particular domain or use case, which is essentially the same general-purpose problem the foundation-model builders are already tackling.