
# Running LLaMA models locally on Apple Silicon M1/M2 chips using a nice Web UI

**Disclaimer:** I'm not a data scientist or an expert in LLaMA models or LLMs, so I won't cover the technical details of LLaMA models, Vicuna, or the settings used in my tests. I just wanted to play with LLaMA models and share my setup and results with the community.
I also assume you have some basic knowledge of using a terminal and running Python scripts.
I won't cover the installation process of the tools and libraries used in this document either, but I will provide links to the documentation I used to make everything work on my computer.
Finally, I'm not a native English speaker, so please excuse my English mistakes 🙃

## Introduction

I wanted to try running a LLaMA model on my computer. Since I had absolutely no knowledge about this, I started by reading a lot of documentation and articles on the Internet.