Insert your Hugging Face access token into the following Python code and execute it. This step essentially clones the model's Git repository into a local folder named llama3-8b-instruct-hf.
from huggingface_hub import snapshot_download

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
access_token = "hf_XYZ"  # replace with your own Hugging Face token

# Download the full model repository into ./llama3-8b-instruct-hf,
# copying real files rather than symlinks into the cache.
snapshot_download(
    repo_id=model_id,
    local_dir="llama3-8b-instruct-hf",
    local_dir_use_symlinks=False,
    revision="main",
    token=access_token,
)