Laeeth Isharc (Laeeth)
Hacker, trader, aspiring live player. Former co-CEO and Executive Committee member of Symmetry Investments ($11bn hedge fund), where I directed technology across the firm.
Training script for LongGPT; fine-tunes GPT-2 (335M) on The Pile dataset with a context size of 8k tokens (requires more than 16 GB of RAM).
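The script itself is not reproduced in this excerpt. One standard ingredient of fine-tuning GPT-2 beyond its native 1024-token window is stretching the learned positional-embedding table before training; a minimal NumPy sketch of that single step (an illustration of the idea, not the gist's actual code; the function name and sizes are assumptions) might look like:

```python
import numpy as np

def extend_positions(wpe: np.ndarray, new_len: int) -> np.ndarray:
    """Stretch a learned positional-embedding table of shape (old_len, dim)
    to (new_len, dim) by per-dimension linear interpolation."""
    old_len, dim = wpe.shape
    old_x = np.linspace(0.0, 1.0, old_len)
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.stack(
        [np.interp(new_x, old_x, wpe[:, d]) for d in range(dim)], axis=1
    )

# GPT-2's native table has 1024 positions; stretch it to 8192 for 8k contexts.
# (dim is 8 here only to keep the demo small; GPT-2 medium uses 1024.)
wpe = np.random.randn(1024, 8).astype(np.float32)
wpe_8k = extend_positions(wpe, 8192)
print(wpe_8k.shape)  # (8192, 8)
```

The interpolated table is then loaded back into the model and fine-tuned so the new positions acquire meaningful values.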
Can Large Language Models do Abduction? ChatGPT answers
Are LLMs able to do abductive reasoning?
Yes, LLMs (Large Language Models) such as GPT-3 have demonstrated the ability to perform abductive reasoning to a certain extent. Abductive reasoning involves making an inference based on the best available explanation of observed phenomena or data.
LLMs achieve this by analyzing large amounts of data and generating probabilistic predictions based on patterns and relationships found in the data. In the context of natural language processing, this can involve generating hypotheses about the meaning of a sentence or paragraph based on the surrounding context.
However, it's important to note that LLMs are not perfect and their abductive reasoning abilities are not always reliable. They may make incorrect inferences or struggle with more complex reasoning tasks. Additionally, their reasoning abilities may be limited by the quality and diversity of the data they are trained on.
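To make "inference to the best explanation" concrete, abduction can be caricatured as scoring candidate hypotheses by how well each would explain an observation and picking the best-scoring one; the hypotheses and probabilities below are invented purely for the example:

```python
def best_explanation(observation, hypotheses):
    """Toy abduction: score each hypothesis h by prior(h) * P(observation | h)
    and return the one that best explains the observation."""
    return max(
        hypotheses,
        key=lambda h: h["prior"] * h["likelihood"].get(observation, 0.0),
    )

# Observation: the lawn is wet. Which hypothesis best explains it?
hypotheses = [
    {"name": "it rained",        "prior": 0.3, "likelihood": {"wet lawn": 0.9}},
    {"name": "sprinkler ran",    "prior": 0.2, "likelihood": {"wet lawn": 0.8}},
    {"name": "nothing happened", "prior": 0.5, "likelihood": {"wet lawn": 0.01}},
]
print(best_explanation("wet lawn", hypotheses)["name"])  # it rained
```

An LLM has no such explicit table of priors and likelihoods; the question is whether comparable preferences over explanations emerge implicitly from its training data.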
Can LLMs reason like Sherlock Holmes in The Dog That Did Not Bark in the Night?
Why I think Immediate Mode GUI is way to go for GameDev tools
Prerequisites
Before you continue: if you don't know what IMGUI is, don't bother reading this post. Just ignore it, don't write anything in the comments section, etc. If you're curious about IMGUI, see the bottom of this post; otherwise continue with whatever you were doing, because this post is not for you. Thanks!
If you know what IMGUI is, read the following presentations and blog posts for context:
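To make the term concrete for readers who followed the links: in an immediate-mode GUI, widgets are plain function calls re-issued every frame, and the library retains only minimal interaction state (essentially which widget the mouse is holding). A stripped-down Python sketch of that core idea (the ids, rects, and single-button logic are illustrative, not any real library's API):

```python
class IMGUI:
    """Minimal immediate-mode core: no retained widget tree, only
    per-frame input plus the id of the widget being interacted with."""

    def __init__(self):
        self.active = None       # widget currently held by the mouse
        self.prev_down = False
        self.mouse = (0, 0)
        self.down = False
        self.pressed = False

    def begin_frame(self, mouse, down):
        self.mouse = mouse
        self.pressed = down and not self.prev_down  # press edge this frame
        self.down = down
        self.prev_down = down

    def button(self, wid, rect):
        """Declared fresh each frame; returns True on the frame the
        mouse is released over the button it was pressed on."""
        x, y, w, h = rect
        mx, my = self.mouse
        inside = x <= mx < x + w and y <= my < y + h
        clicked = False
        if self.active == wid and not self.down:
            clicked = inside
            self.active = None
        if self.pressed and inside and self.active is None:
            self.active = wid
        return clicked

ui = IMGUI()
for down in (False, True, False):          # hover, press, release
    ui.begin_frame((5, 5), down)
    if ui.button("play", (0, 0, 10, 10)):  # re-declared every frame
        print("clicked")                   # fires once, on release
```

Because the button exists only as a call inside the frame loop, showing or hiding it is just an `if` statement in application code, which is exactly the property the post argues makes IMGUI attractive for fast-changing gamedev tools.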
Another recursive range possibility, using simple linked-list stack.
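The gist body is not shown here, but the idea named in the title can be sketched: expose a tree as a lazy forward range whose traversal state lives in an explicit linked-list stack rather than on the call stack. This is a Python sketch of the concept (the D original would presumably be a range struct with `empty`/`front`/`popFront`; the names and the pre-order choice here are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:                      # simple binary tree
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

@dataclass
class Cell:                      # one cell of the linked-list stack
    node: Node
    below: "Optional[Cell]" = None

class PreorderRange:
    """Lazy pre-order traversal; recursion is replaced by an explicit
    linked-list stack, mirroring D's empty/front/popFront protocol."""

    def __init__(self, root):
        self.top = Cell(root) if root else None

    @property
    def empty(self):
        return self.top is None

    @property
    def front(self):
        return self.top.node.value

    def pop_front(self):
        node, self.top = self.top.node, self.top.below
        if node.right:           # push right first so the left
            self.top = Cell(node.right, self.top)
        if node.left:            # subtree is visited first
            self.top = Cell(node.left, self.top)

tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
r = PreorderRange(tree)
out = []
while not r.empty:
    out.append(r.front)
    r.pop_front()
print(out)  # [1, 2, 4, 5, 3]
```

The appeal of the linked-list stack is that popping and pushing are O(1) pointer swaps and no pre-sized buffer is needed, at the cost of one allocation per pushed node.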