SvelteKit

Philosophy, Rationality

The Last Question

sam lessin 🏴‍☠️ on Twitter: “FB’s has an interface problem, not an algorithm problem… because people won’t affirmatively click on the things they actually deep down want to watch https://t.co/QfEiaOf13j” / Twitter

sam lessin 🏴‍☠️ on Twitter: “The coming fall of the Kardashians in context of how entertainment is evolving… (aka why they are so pissed about tiktok) https://t.co/wtYrvxbS35” / Twitter

The End of Social Media - Michael Mignano | Medium

Shameless Samsung – Stratechery by Ben Thompson

Messaging: Mobile’s Killer App – Stratechery by Ben Thompson

Aggregation Theory – Stratechery by Ben Thompson

Snapchat’s Ladder – Stratechery by Ben Thompson

Facebook, Phones, and Phonebooks – Stratechery by Ben Thompson

Goodbye Gatekeepers – Stratechery by Ben Thompson

The Internet and the Third Estate – Stratechery by Ben Thompson

The TikTok War – Stratechery by Ben Thompson

Mistakes and Memes – Stratechery by Ben Thompson

Instagram’s Evolution – Stratechery by Ben Thompson

Metaverses – Stratechery by Ben Thompson

Three Trends Follow-Up, The Question of “Cool”, TikTok and the Sinicization of the Internet – Stratechery by Ben Thompson

Stratechery by Ben Thompson – On the business, strategy, and impact of technology.

Asymptotic safety in quantum gravity - Wikipedia

Physics applications of asymptotically safe gravity - Wikipedia

Induced gravity - Wikipedia

Quantum gravity - Wikipedia

Modified Newtonian dynamics - Wikipedia

Causal sets - Wikipedia

Twistor theory - Wikipedia

Appromoximate

Venkatesh Rao ☀️ (@vgr) / Twitter

ribbonfarm – constructions in magical thinking

Ribbonfarm Studio | Venkatesh Rao | Substack

AMMDI: Protocol Thinking

The Unreasonable Sufficiency of Protocols - Summer of Protocols

Computational Law, Symbolic Discourse and the AI Constitution—Stephen Wolfram Writings

Multicomputation: A Fourth Paradigm for Theoretical Science—Stephen Wolfram Writings

The Concept of the Ruliad—Stephen Wolfram Writings

Galactica: an AI trained on humanity’s scientific knowledge (by Meta) | Hacker News

Chief scientist of major corporation can’t handle criticism of the work he hypes | Hacker News

Writing/the_double_edged_sword_of_AI.md at main · Liu-Eroteme/Writing · GitHub

The End of Programming | January 2023 | Communications of the ACM

Large Language Model: world models or surface statistics?

Scoring forecasts from the 2016 “Expert Survey on Progress in AI” - EA Forum

More Is Different | Science

Transcript: Ezra Klein Interviews Gary Marcus - The New York Times

What does it mean when an AI fails? A Reply to SlateStarCodex’s riff on Gary Marcus

The Road to AI We Can Trust | Gary Marcus | Substack

A reply to Michael Huemer on AI - Matthew Barnett’s Blog

Matthew Barnett’s Blog | Substack

Erich Grunewald’s Blog

Meditations On Moloch | Slate Star Codex

Raikoth: Laws, Language, and Society | Slate Star Codex

Searching For One-Sided Tradeoffs | Slate Star Codex

Archipelago and Atomic Communitarianism | Slate Star Codex

Poor Folks Do Smile…For Now | Slate Star Codex

GPT-2 As Step Toward General Intelligence | Slate Star Codex

The Book of Sand - Wikipedia

The Aleph. Borgean fantastic hyperreality… | by The Sandbook | Medium

Mechanical Sympathy: Understanding the Hardware Makes You a Better Developer - DZone

Evidential decision theory - Wikipedia

Now you can (try to) serve five terabytes, too

Rekt - Value DeFi - REKT 2

Crypto Firm Nomad Loses Nearly $200 Million in Bridge Hack - Bloomberg

Federated learning - Wikipedia

The Dirty Pipe Vulnerability — The Dirty Pipe Vulnerability documentation

CVE-2022-21449: Psychic Signatures in Java – Neil Madden

Thomas H. Ptacek (@tqbf): “It is nevertheless funny that there is a Wycheproof test for this bug (of course there is, it’s the most basic implementation check in ECDSA) and nobody bothered to run it against one of the most important ECDSA’s until now.” | nitter

CVE-2022-34718 - Security Update Guide - Microsoft - Windows TCP/IP Remote Code Execution Vulnerability

Deconstructing Deathism - Answering Objections to Immortality - ImmortalLife.net

anishmaxxing (@thiteanish): “@ggerganov’s LLaMA works on a Pixel 6! LLaMAs been waiting for this, and so have I” | nitter

Community Alert: Ronin Validators Compromised

Honey, I hacked the Empathy Machine!

Brandolini’s law - Wikipedia

Apple, Meta Gave User Data to Hackers With Forged Legal Requests (AAPL, FB) - Bloomberg

Hackers Gaining Power of Subpoena Via Fake “Emergency Data Requests” – Krebs on Security

Mirai (malware) - Wikipedia

Uber apparently hacked by teen, employees thought it was a joke - The Verge

2020 Twitter account hijacking - Wikipedia

The Billion Dollar AI Problem That Just Keeps Scaling

1.1 - Fermi estimate of future training runs

Factored Cognition - AI Alignment Forum

The Toxoplasma Of Rage | Slate Star Codex

xkcd: Duty Calls

Sort By Controversial | Slate Star Codex

CoreWeave — The GPU Cloud

Target Hackers Broke in Via HVAC Company – Krebs on Security

Chinese Spies Hacked a Livestock App to Breach US State Networks | WIRED

harry,whg.eth 🦊💙 (@sniko_): “Supply chain attacks” | nitter

China Has Already Reached Exascale – On Two Separate Systems

John Carmack (@ID_AA_Carmack): “today, but if challenges demanded it, there is a world with a zetta scale, tightly integrated, low latency matrix dissipating a gigawatt in a swimming pool of circulating fluorinert.” | nitter

NYU Accidentally Exposed Military Code-breaking Computer Project to Entire Internet

Flatiron Institute - Wikipedia

Is Programmable Overhead Worth The Cost?

Cerebras - Wikipedia

Extrapolating GPT-N performance - AI Alignment Forum

Computer Scientists Achieve ‘Crown Jewel’ of Cryptography | Quanta Magazine

Rapid Locomotion via Reinforcement Learning

Cerebro-cerebellar networks facilitate learning through feedback decoupling | bioRxiv

Experience curve effects - Wikipedia

Thread: Differentiable Self-organizing Systems

Self-Organising Textures

Growing Neural Cellular Automata

Adversarial Reprogramming of Neural Cellular Automata

The Future of Artificial Intelligence is Self-Organizing and Self-Assembling – Sebastian Risi

Bioelectric Networks: Taming the Collective Intelligence of Cells for Regenerative Medicine - Foresight Institute

On Having No Head: Cognition throughout Biological Systems - PMC

Flying Fish and Aquarium Pets Yield Secrets of Evolution | Quanta Magazine

Synthetic living machines: A new window on life: iScience

Fundamental behaviors emerge from simulations of a living minimal cell: Cell

An Account of Electricity and the Body, Reviewed | The New Yorker

Is Bioelectricity the Key to Limb Regeneration? | The New Yorker

‘Amazing science’: researchers find xenobots can give rise to offspring | Science | The Guardian

A synthetic protein-level neural network in mammalian cells | bioRxiv

Cells Form Into ‘Xenobots’ on Their Own | Quanta Magazine

9 Missile Commanders Fired, Others Disciplined In Air Force Scandal : The Two-Way : NPR

Security troops on US nuclear missile base took LSD | AP News

Joan Rohlfing on how to avoid catastrophic nuclear blunders - 80,000 Hours

The Bitter Lesson

[D] Instances of (non-log) capability spikes or emergent behaviors in NNs? : mlscaling

In-context Learning and Induction Heads

SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient | OpenReview

Robert Oppenheimer - Wikiquote

DeepMind and Google: the battle to control artificial intelligence | The Economist

Boosting Search Engines with Interactive Agents | OpenReview

Learning Robust Real-Time Cultural Transmission without Human Data

What Are Bayesian Neural Network Posteriors Really Like?

Recurrent Experience Replay in Distributed Reinforcement Learning | OpenReview

Microsoft researchers win ImageNet computer vision challenge - The AI Blog

A Recipe for Training Neural Networks

Solving (some) formal math olympiad problems

OpenAI Five defeats Dota 2 world champions

AI and compute

AI and efficiency

Scaling Laws for Language Transfer Learning

DALL·E: Creating images from text

Fine-tuning GPT-2 from human preferences

DeepSpeed: Accelerating large-scale model inference and training via system optimizations and compression - Microsoft Research

ZeRO-Infinity and DeepSpeed: Unlocking unprecedented model scale for deep learning training - Microsoft Research

Effect of scale on catastrophic forgetting in neural networks | OpenReview

v2appf

Reward is enough - ScienceDirect

Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis – Off the convex path

The neural architecture of language: Integrative modeling converges on predictive processing | bioRxiv

GPT-3 Samples - JustPaste.it

experience curves tag · Gwern.net

Why Tool AIs Want to Be Agent AIs · Gwern.net

preference learning tag · Gwern.net

Codex tag · Gwern.net

MuZero tag · Gwern.net

Fully-Connected Neural Nets · Gwern.net

Surprisingly Turing-Complete · Gwern.net

How Many Computers Are In Your Computer? · Gwern.net

NN sparsity tag · Gwern.net

Computer Optimization: Your Computer Is Faster Than You Think · Gwern.net

economics/automation tag · Gwern.net

end-to-end tag · Gwern.net

Complexity no Bar to AI · Gwern.net

cognitive biases/illusion-of-depth tag · Gwern.net

inner monologue (AI) tag · Gwern.net

meta-learning tag · Gwern.net

Technology Forecasting: The Garden of Forking Paths · Gwern.net

On Seeing Through and Unseeing: The Hacker Mindset · Gwern.net

Slowing Moore’s Law: How It Could Happen · Gwern.net

The Neural Net Tank Urban Legend · Gwern.net

Evolution as Backstop for Reinforcement Learning · Gwern.net

Fake Journal Club: Teaching Critical Reading · Gwern.net

Why Do Hipsters Steal Stuff? · Gwern.net

Machine Learning Scaling · Gwern.net

The Scaling Hypothesis · Gwern.net

GPT-3 Nonfiction · Gwern.net

GPT-3 Creative Fiction · Gwern.net

40a93946b61c16a861bb5d277c89bdf07c507d09.pdf

[1806.11146] Adversarial Reprogramming of Neural Networks

080e52b3e827dd0c10a822c22935f62305ee1b8f.pdf

[1809.01829] Adversarial Reprogramming of Text Classification Neural Networks

Magna Alta Doctrina - LessWrong

The Brain as a Universal Learning Machine - LessWrong

Bing Chat is blatantly, aggressively misaligned - LessWrong

Moore’s Law, AI, and the pace of progress - LessWrong

Proposal: Scaling laws for RL generalization - LessWrong

Raising the Sanity Waterline - LessWrong

Matt Botvinick on the spontaneous emergence of learning algorithms - LessWrong

Taboo Your Words - LessWrong

Truthful and honest AI - LessWrong

But is it really in Rome? An investigation of the ROME model editing technique - LessWrong

A Mechanistic Interpretability Analysis of Grokking - LessWrong

Critique of some recent philosophy of LLMs’ minds - LessWrong

Simulators - LessWrong

An Equilibrium of No Free Energy - LessWrong

MIRI announces new “Death With Dignity” strategy - LessWrong

Optimal Employment - LessWrong

Orthogonality Thesis - Arbital

Instrumental convergence - Arbital

Let’s See You Write That Corrigibility Tag - AI Alignment Forum

AGI Ruin: A List of Lethalities - AI Alignment Forum

Where I agree and disagree with Eliezer - AI Alignment Forum

Some of my disagreements with List of Lethalities - AI Alignment Forum

“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments - AI Alignment Forum

(My understanding of) What Everyone in Technical Alignment is Doing and Why - AI Alignment Forum

ARC’s first technical report: Eliciting Latent Knowledge - AI Alignment Forum

[AN #81]: Universality as a potential solution to conceptual difficulties in intent alignment - AI Alignment Forum

Mundane solutions to exotic problems - AI Alignment Forum

Optimization daemons - Arbital

Clarifying “AI alignment”. Clarifying what I mean when I say that… | by Paul Christiano | AI Alignment

Oversight Misses 100% of Thoughts The AI Does Not Think - AI Alignment Forum

The Main Sources of AI Risk? - AI Alignment Forum

Distinguishing AI takeover scenarios - AI Alignment Forum

My Overview of the AI Alignment Landscape: Threat Models - AI Alignment Forum

What does it take to defend the world against out-of-control AGIs? - AI Alignment Forum

Conjecture Home

Palimpsest - Wikipedia

AI Alignment

My research methodology - AI Alignment Forum

Testing The Natural Abstraction Hypothesis: Project Update - AI Alignment Forum

Basic Foundations for Agent Models - AI Alignment Forum

Gears Which Turn The World - AI Alignment Forum

Cartesian Frames - AI Alignment Forum

Finite Factored Sets - AI Alignment Forum

The ground of optimization - AI Alignment Forum

evhub - AI Alignment Forum

Stuart_Armstrong - AI Alignment Forum

Intro to Brain-Like-AGI Safety - AI Alignment Forum

Epistemic Cookbook for Alignment - AI Alignment Forum

Productive Mistakes, Not Perfect Answers - AI Alignment Forum

Epistemological Vigilance for Alignment - AI Alignment Forum

Why Agent Foundations? An Overly Abstract Explanation - AI Alignment Forum

A central AI alignment problem: capabilities generalization, and the sharp left turn - AI Alignment Forum

Refining the Sharp Left Turn threat model, part 1: claims and mechanisms - AI Alignment Forum

Refining the Sharp Left Turn threat model, part 2: applying alignment techniques - AI Alignment Forum

Paradigms of AI alignment: components and enablers | Victoria Krakovna

Our approach to alignment research

The case for how and why AI might kill us all

How I’m thinking about GPT-N - LessWrong

[2107.14795] Perceiver IO: A General Architecture for Structured Inputs & Outputs

[2008.02217] Hopfield Networks is All You Need

Yann LeCun | May 18, 2021 | The Energy-Based Learning Model - YouTube

[1905.10985] AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence

[2105.08050] Pay Attention to MLPs

Patches Are All You Need? | OpenReview

[2110.00476] ResNet strikes back: An improved training procedure in timm

The academic contribution to AI safety seems large - EA Forum

GPT-3: a disappointing paper - LessWrong

interpreting GPT: the logit lens - LessWrong

larger language models may disappoint you [or, an eternally unfinished draft] - LessWrong

Erik Hanchett on Twitter: “UnoCSS is it worth replacing Tailwind in my next project? 👇 https://t.co/TJX1grdtW2” / Twitter

Alejandro Piad Morffis on Twitter: “Ok guys, please listen. LLMs have no memory, no recall of past events, no mutable internal state. They are complicated functions that map input strings to output strings. They are incredible nonetheless, no need to overcomplicate things. And no, humans are not “maybe also that”.” / Twitter

john stuart chill on Twitter: “eliezer: AI risk is real ok but he doesn’t have a degree stephen hawking: AI risk is real ok but not a computer scientist stuart russell: AI risk is real ok he hasn’t won awards geoffrey hinton: AI risk is real ok but he didn’t invent cs the ghost of alan turing: AI ris—” / Twitter

Manuela Malasaña on Twitter: “Q: what is a shader? A: a kind of instructions we can give the computer to tell it what to make something look like and now, needlessly complicated “shaders for beginners”, a thread” / Twitter

“Non-Player Character” – Eliezer S. Yudkowsky

A Conversation

Noosphere - Wikipedia

Stuart J. Russell - Wikipedia

Terrence Deacon - Wikipedia

Rob Bensinger 🔍 on Twitter: “I’ve been citing https://t.co/jVrdg2mIgz to explain why the situation with AI looks doomy to me. But that post is relatively long, and emphasizes specific open technical problems over “the basics”. Here are 10 things I’d focus on if I were giving “the basics” on why I’m worried:” / Twitter

Rob Bensinger 🔍 on Twitter: “@moskov @adamdangelo @ESYudkowsky @ylecun To properly answer your question, @moskov (and @jasoncrawford): I think Eliezer’s best write-up on “the basics” is https://t.co/pJjocKqHPQ. Here’s my own stab at listing out ten relatively important things behind my high p(doom): https://t.co/00My5hXQBR.” / Twitter

Gogolian/open-humanity: An Open Source Project that will gather consensual info from people about traits of their characters, views and beliefs, to found a database that can be used in the future to provide those people, or their descendants, with chatbots as digital twins of these people. Saving humanity before they disappear.

Train ChatGPT on Your Data - AlphaVenture Experiments

Discussion with Nate Soares on a key alignment difficulty - LessWrong

“Carefully Bootstrapped Alignment” is organizationally hard - LessWrong

On AutoGPT - LessWrong

GPTs are Predictors, not Imitators - LessWrong

Evolution provides no evidence for the sharp left turn - LessWrong

Scaffolded LLMs as natural language computers - LessWrong

Four mindset disagreements behind existential risk disagreements in ML - LessWrong

Killing Socrates - LessWrong