# Three one-liner variants that print the same 10x10 pattern:
# solid top and bottom rows joined by both diagonals.
print("\n".join("".join("*" if i in (0, 9, j, 9-j) else " " for j in range(10)) for i in range(10)))
print("====")
print("".join(map(lambda i: ("*" if i//10 in (0, 9, i%10, 9-(i%10)) else " ") + ("\n" if (i+1) % 10 == 0 else ""), range(100))))
print("====")
print("".join(map(lambda i: ([""]*9+["\n"])[(i>>1)%10] if i&1 else "*" if (i>>1)//10 in (0, 9, (i>>1)%10, 9-((i>>1)%10)) else " ", range(200))))
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text-prediction model, similar to GPT-2 or to the base version of GPT-3 that has not been fine-tuned yet. It should also be possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.
- Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
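The clone-and-pin step above can be sketched as follows. The repository URL is the well-known llama.cpp GitHub location; the `LLAMA_CUBLAS` make flag and the model path are assumptions based on how llama.cpp worked around May 2023 and may have changed since:

```shell
# Clone llama.cpp and pin to the commit mentioned above
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d

# Build with CUDA/cuBLAS support (flag name as of May 2023)
make LLAMA_CUBLAS=1

# Offload e.g. 20 transformer layers to the GPU with -ngl;
# adjust the number to fit your VRAM (model path is illustrative)
./main -m models/13B/ggml-model-q4_0.bin -ngl 20 -p "Hello"
```

The `-ngl` value is the knob that makes 6GB cards viable: fewer layers on the GPU means less VRAM used, at the cost of speed.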
# turn until you get three hexagons changed to get here
H1 = ["AC", "AB", "ACD"]
# turn until you get two hexagons changed to get here
H2 = ["BCF", "ACF", "BF"]
# turn until you get three hexagons changed to get here
H3 = ["CD", "BC", "AB", "AF", "EF", "CDE"]
# this is my status, look at the hexagons to determine which ones are open
status = set("ABDE") # 110110
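If each move in the lists above toggles the hexagons it names (my assumption; the puzzle mechanics aren't spelled out here), then applying a move is just a set symmetric difference:

```python
# Hypothetical sketch: assume a move toggles each hexagon letter it names.
def apply_move(status: set, move: str) -> set:
    """Toggle every hexagon in `move` via symmetric difference."""
    return status ^ set(move)

status = set("ABDE")                       # 110110, as in the snippet above
print(sorted(apply_move(status, "AC")))    # → ['B', 'C', 'D', 'E']
```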
I was drawn to programming, science, technology and science fiction
ever since I was a little kid. I can't say it's because I wanted to
make the world a better place. Not really. I was simply drawn to it
because I was drawn to it. Writing programs was fun. Figuring out how
nature works was fascinating. Science fiction felt like a grand
adventure.
Then I started a software company and poured every ounce of energy
into it. It failed. That hurt, but that part is ok. I made a lot of
mistakes and learned from them. This experience made me much, much
This is a collection of the things I believe about software development. I have worked for years building backend and data processing systems, so read the below within that context.
Agree? Disagree? Feel free to let me know at @JanStette. See also my blog at www.janvsmachine.net.
Keep it simple, stupid. You ain't gonna need it.
Mute these words in your settings here: https://twitter.com/settings/muted_keywords
ActivityTweet
generic_activity_highlights
generic_activity_momentsbreaking
RankedOrganicTweet
suggest_activity
suggest_activity_feed
suggest_activity_highlights
suggest_activity_tweet
⚠️ Note 2023-01-21
Some things have changed since I originally wrote this in 2016. I have updated a few minor details, and the advice is still broadly the same, but there are some new Cloudflare features you can (and should) take advantage of. In particular, pay attention to Trevor Stevens' comment here from 22 January 2022, and Matt Stenson's useful caching advice. In addition, Backblaze, with whom Cloudflare are a Bandwidth Alliance partner, have published their own guide detailing how to use Cloudflare's Web Workers to cache content from B2 private buckets. That is worth reading.