
Oscar m0o0scar

💻 Oscar + coffee + AI => code
  • Sea
  • Singapore

[youtube] NodeJS Evolves

Source

Syntax

Duration: 00:55:46 Published: 2024-08-21

In this episode of Syntax, Wes and Scott talk about the latest features in Node.js, including native support for TypeScript, .env parsing, a built-in test runner, watch mode, SQLite integration, glob support, and top-level await. They also discuss wishlist items and experimental features like WebSocket support and require() for ES modules.
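As a quick, hedged illustration of a few of these features (not code from the episode): the sketch below uses the built-in test runner (node:test) and the experimental node:sqlite module, and runs TypeScript directly via type stripping. Module and flag names follow the Node.js docs, but several are still experimental, so availability depends on your Node version.

```ts
// demo.test.ts
// Run with something like:
//   node --experimental-strip-types --experimental-sqlite demo.test.ts
// .env parsing: add --env-file=.env to load environment variables without dotenv.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { DatabaseSync } from 'node:sqlite';

test('built-in SQLite round-trips a row', () => {
  const db = new DatabaseSync(':memory:'); // no npm install needed
  db.exec('CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)');
  db.prepare('INSERT INTO notes (body) VALUES (?)').run('hello from node:sqlite');
  const row = db.prepare('SELECT body FROM notes WHERE id = 1').get() as { body: string };
  assert.equal(row.body, 'hello from node:sqlite');
});
```

Watch mode (node --watch) and glob patterns for the test runner round out the list; top-level await has been stable for some time.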

@m0o0scar
m0o0scar / 📖 Pieter Levels! Programming, Viral AI Startups, and Digital Nomad Life ! Lex Fridman Podcast #4.md
Created August 20, 2024 22:27
Pieter Levels: Programming, Viral AI Startups, and Digital Nomad Life | Lex Fridman Podcast #440. Continue this conversation at https://readfm.vercel.app?gist=1bc64f4fe050147f0c45155f05cb5e54

[youtube] Pieter Levels: Programming, Viral AI Startups, and Digital Nomad Life | Lex Fridman Podcast #440

Source

Lex Fridman

Duration: 03:43:34 Published: 2024-08-20

Pieter Levels (aka levelsio on X) is a self-taught developer and entrepreneur who has designed, programmed, and launched over 40 startups, many of which are highly successful. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep440-sb

[github] LuanRT/YouTube.js

Source

TypeScript / 32.0K lines of code. A wrapper around YouTube's internal API — reverse engineering InnerTube

URL: https://github.com/LuanRT/YouTube.js
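For context, a minimal usage sketch of the library. Innertube.create(), getInfo(), and search() are the entry points documented in the project's README; the exact result fields shown here are best-effort assumptions, so check the repo's docs before relying on them.

```ts
// Sketch of typical YouTube.js (youtubei.js) usage.
import { Innertube } from 'youtubei.js';

async function main() {
  const yt = await Innertube.create();          // boots an InnerTube session
  const info = await yt.getInfo('dQw4w9WgXcQ'); // video metadata by ID
  console.log(info.basic_info.title, info.basic_info.view_count);

  const results = await yt.search('Node.js new features');
  console.log(results.videos?.length ?? 0, 'video results');
}

main().catch(console.error);
```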


@m0o0scar
m0o0scar / 📖 Can Large Language Models Understand Symbolic Graphics Programs!.md
Last active August 20, 2024 03:28
Can Large Language Models Understand Symbolic Graphics Programs?. Continue this conversation at https://readfm.vercel.app?gist=5f9519e3b83727b7c241cf9d1f4f7259

[arxiv] Can Large Language Models Understand Symbolic Graphics Programs?

Source

Authors: Zeju Qiu, Weiyang Liu, Haiwen Feng, Zhen Liu, Tim Z. Xiao, Katherine M. Collins, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf

Published on: 15 Aug 2024

Abstract: Assessing the capabilities of large language models (LLMs) is often challenging, in part, because it is hard to find tasks to which they have not been exposed during training. We take one step to address this challenge by turning to a new task: focusing on symbolic graphics programs, which are a popular representation for graphics content that procedurally generates visual data. LLMs have shown exciting promise towards program synthesis, but do they understand symbolic graphics programs? Unlike conventional programs, symbolic graphics programs can be translated to graphics content. Here, we characterize an LLM's understanding of symbolic graphics programs in terms of its ability to answer questions related to the graphics content.
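To make the task concrete, here is a hypothetical instance in the spirit of the paper (the program, question, and answer are my own illustration, not items from the paper's benchmark): the model sees only the program text, never the rendered image, and must answer a semantic question about what the graphic depicts.

```ts
// Illustrative only: an SVG "symbolic graphics program" plus a question about its rendering.
const symbolicProgram = `
<svg width="100" height="100" xmlns="http://www.w3.org/2000/svg">
  <rect x="10" y="40" width="80" height="50" fill="brown"/>  <!-- body -->
  <polygon points="10,40 50,10 90,40" fill="red"/>           <!-- roof -->
  <rect x="45" y="60" width="15" height="30" fill="black"/>  <!-- door -->
</svg>`;

const question = 'What everyday object does this program draw, and what color is its roof?';
const expectedAnswer = 'A house with a red roof.';

// A benchmark harness would send symbolicProgram + question to the LLM and
// grade its free-form answer against expectedAnswer, without ever rendering the SVG.
```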

@m0o0scar
m0o0scar / 📖 Diversity Empowers Intelligence! Integrating Expertise of Software Engineering Agents.md
Created August 15, 2024 05:11
Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents. Continue this conversation at http://localhost:3000?gist=331ed08c89ef938f79faaf485cbd2bec

[arxiv] Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents

Source

Authors: Kexun Zhang, Weiran Yao, Zuxin Liu, Yihao Feng, Zhiwei Liu, Rithesh Murthy, Tian Lan, Lei Li, Renze Lou, Jiacheng Xu, Bo Pang, Yingbo Zhou, Shelby Heinecke, Silvio Savarese, Huan Wang, Caiming Xiong

Published on: 13 Aug 2024

Abstract: Large language model (LLM) agents have shown great potential in solving real-world software engineering (SWE) problems. The most advanced open-source SWE agent can resolve over 27% of real GitHub issues in SWE-Bench Lite. However, these sophisticated agent frameworks exhibit varying strengths, excelling in certain tasks while underperforming in others. To fully harness the diversity of these agents, we propose DEI (Diversity Empowered Intelligence), a framework that leverages their unique expertise. DEI functions as a meta-module atop existing SWE agent frameworks, managing agent collectives for enhanced problem-solving. Experimental results show that a DEI-guided committee of agents is able to surpass the best individual agents by a large margin.
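A rough sketch of the meta-module idea. The committee structure is the paper's framing, but the interfaces, names, and the simple averaged scoring rule below are my own illustration rather than DEI's actual (LLM-based) candidate ranking.

```ts
// Illustrative committee-over-agents stub: several SWE agents each propose a patch,
// reviewers score every candidate, and the highest-scoring patch is selected.
interface SweAgent {
  name: string;
  solve(issue: string): Promise<string>; // returns a candidate patch (diff)
}

type Reviewer = (issue: string, patch: string) => Promise<number>; // score in [0, 1]

async function resolveWithCommittee(
  issue: string,
  agents: SweAgent[],
  reviewers: Reviewer[],
): Promise<string> {
  // 1. Gather one candidate patch per agent (the "diverse committee").
  const candidates = await Promise.all(agents.map((a) => a.solve(issue)));

  // 2. Score every candidate with every reviewer and average the scores.
  const scores = await Promise.all(
    candidates.map(async (patch) => {
      const s = await Promise.all(reviewers.map((r) => r(issue, patch)));
      return s.reduce((a, b) => a + b, 0) / s.length;
    }),
  );

  // 3. Return the highest-scoring patch.
  return candidates[scores.indexOf(Math.max(...scores))];
}
```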

@m0o0scar
m0o0scar / 📖 Amuro & Char! Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Languag.md
Created August 15, 2024 02:48
Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models. Continue this conversation at https://readfm.vercel.app?gist=74feb6f45f195a6d1f3adeaa6f39a969

[arxiv] Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models

Source

Authors: Kaiser Sun, Mark Dredze

Abstract: The development of large language models leads to the formation of a pre-train-then-align paradigm, in which the model is typically pre-trained on a large text corpus and undergoes a tuning stage to align the model with human preference or downstream tasks. In this work, we investigate the relationship between pre-training and fine-tuning by fine-tuning multiple intermediate pre-trained model checkpoints. Our results on 18 datasets suggest that i) continual pre-training improves the model in a latent way that is only revealed after fine-tuning; ii) with extra fine-tuning, the datasets on which the model does not demonstrate capability during pre-training gain much more than those on which it already performs well; iii) although the model benefits significantly from supervised fine-tuning, it may forget previously known domain knowledge and tasks that are not seen during fine-tuning.

@m0o0scar
m0o0scar / 📖 THUDM!LongWriter.md
Created August 15, 2024 02:16
THUDM/LongWriter. Continue this conversation at http://localhost:3000?gist=8c2133b056318b6bc34f9f2c5237030a

[github] THUDM/LongWriter

Source

Python / 4.0K lines of code. LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs

URL: https://github.com/THUDM/LongWriter


@m0o0scar
m0o0scar / 📖 KGLens! Towards Efficient and Effective Knowledge Probing of Large Language Models with Knowle.md
Created August 12, 2024 00:20
KGLens: Towards Efficient and Effective Knowledge Probing of Large Language Models with Knowledge Graphs. Continue this conversation at https://readfm.vercel.app?gist=84d1f309cb89fe42d00f993e4601c289

[arxiv] KGLens: Towards Efficient and Effective Knowledge Probing of Large Language Models with Knowledge Graphs

Source

Authors: Shangshang Zheng, He Bai, Yizhe Zhang, Yi Su, Xiaochuan Niu, Navdeep Jaitly

Abstract: Large Language Models (LLMs) might hallucinate facts, while curated Knowledge Graphs (KGs) are typically factually reliable, especially for domain-specific knowledge. Measuring the alignment between KGs and LLMs can effectively probe the factualness and identify the knowledge blind spots of LLMs. However, verifying LLMs over extensive KGs can be expensive. In this paper, we present KGLens, a Thompson-sampling-inspired framework aimed at effectively and efficiently measuring the alignment between KGs and LLMs. KGLens features a graph-guided question generator for converting KGs into natural language, along with a carefully designed importance sampling strategy based on parameterized KG structure to expedite KG traversal. Our simulation experiment compares the brute force method with KGLens…
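As a rough illustration of the Thompson-sampling flavor (my own simplification, not the paper's algorithm): keep a Beta posterior over each KG edge's error rate, sample from the posteriors to decide which edge to verbalize into the next question, and update the chosen edge with the LLM's pass/fail outcome, so the probing budget concentrates on edges the LLM keeps getting wrong.

```ts
// Simplified Thompson-sampling probe loop over KG edges (illustrative only).
type Edge = { subject: string; relation: string; object: string; pass: number; fail: number };

// Standard normal sample via Box-Muller.
function randn(): number {
  const u = Math.random() || 1e-12;
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Marsaglia-Tsang gamma sampler (valid for shape >= 1, which always holds below).
function gammaSample(shape: number): number {
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    const x = randn();
    const v = (1 + c * x) ** 3;
    if (v <= 0) continue;
    if (Math.log(Math.random()) < 0.5 * x * x + d - d * v + d * Math.log(v)) return d * v;
  }
}

// Beta(a, b) sample from two gamma draws.
const betaSample = (a: number, b: number) => {
  const x = gammaSample(a);
  const y = gammaSample(b);
  return x / (x + y);
};

// One probing step: pick the edge with the highest sampled error rate,
// ask the LLM about it (askLLM would verbalize the edge into a question),
// and update that edge's pass/fail counts.
async function probeOnce(edges: Edge[], askLLM: (e: Edge) => Promise<boolean>): Promise<void> {
  const sampledErrorRates = edges.map((e) => betaSample(1 + e.fail, 1 + e.pass));
  const pick = edges[sampledErrorRates.indexOf(Math.max(...sampledErrorRates))];
  const correct = await askLLM(pick);
  if (correct) pick.pass++; else pick.fail++;
}
```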