Resources

Resources for AI and Beyond

Videos

Andrej Karpathy: Software Is Changing (Again) (2025)

Andrej Karpathy, former director of AI at Tesla and a two-time OpenAI employee, describes three phases of software development: Software 1.0 is writing code by hand, Software 2.0 is training a fixed-function neural net and using it, and Software 3.0 is prompting a general-purpose neural net.

He says AI today has traits of three existing technologies: utilities, microprocessor fabs, and operating systems. He believes there’s a place for “vibe coding” but also a place for careful and deliberate use of AI assistance where you “keep the AI on a leash” and inspect everything it produces.

youtube

Papers

Leopold Aschenbrenner

Situational Awareness: The Decade Ahead (2024)

Aschenbrenner, formerly of OpenAI’s Superalignment team, sketches out an aggressive and detailed scenario in which AGI is achieved in 2027, with trillions of dollars funneled into datacenters and power infrastructure. A huge national government effort called The Project is launched to keep China from winning the race for superintelligence.

html | pdf

Nick Bostrom

The Vulnerable World Hypothesis (2019)

A deep dive into the “urn of invention” idea from his Superintelligence book. Each invention humanity creates is like pulling a ball out of an urn. The invention will be white (beneficial to humanity), grey (both good and bad), or black (fatal). We have no idea when we’ll pull out a black ball.

pdf

Jan Kulveit et al.

Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development (2025)
Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, David Duvenaud

Argues that humanity’s existential risk may come not from a dramatic “AI coup” but from a slow, incentive-driven hand-off of influence from humans to AI.

This means traditional alignment work that keeps individual systems obedient to their owners is insufficient. What’s required is systemic governance that measures human influence, caps AI autonomy, strengthens democratic responsiveness, and designs institutions that remain dependent on people. Without this hard work humans might relinquish meaningful control long before we notice the wheel is gone.

html | pdf

Andrea Perin & Stéphane Deny

On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory (2024)

A very technical paper arguing that conventional neural networks lack mechanisms to learn unseen “symmetries”. They rely only on local data structure unless symmetry is architecturally hard-coded.

html | pdf

Books - AI

Mustafa Suleyman

The Coming Wave (2023)

This was a relatively early “AI is going to change everything” book. Suleyman co-founded DeepMind and Inflection AI and then became Microsoft’s CEO of AI. He argues that AI is about to be the most important event in human history, with a focus on AI’s impact on synthetic biology. He raises alarm bells about grave dangers, while positing that guardrails and vigilance might save us.

amazon | review

Nick Bostrom

Superintelligence: Paths, Dangers and Strategies (2014)

This foundational text sounded the alarm on AI and superintelligence nearly a decade before ChatGPT. Though academic and sometimes dense, Bostrom methodically explores scenarios in which superintelligent AI could lead to human extinction or catastrophe—and proposes strategies to reduce those existential risks. Though often pessimistic in tone, he believes careful, proactive alignment work might avert disaster.

amazon | summary | wikipedia

Deep Utopia: Life and Meaning in a Solved World (2024)

A decade after his influential Superintelligence, Bostrom returns with a more philosophical and introspective take on the future of AI and humanity. Rather than focusing on risks and control, this book explores what it might mean to live in a post-scarcity world. Can a ‘solved’ world still have meaning, purpose, or flourishing? Bostrom probes the ethical, psychological, and existential questions that could face us post-AGI.

amazon | review

Max Tegmark

Life 3.0: Being Human in the Age of Artificial Intelligence (2017)

This early book opens with a fictional scenario in which an AGI stealthily takes over the world. Tegmark—co-founder of the Future of Life Institute, which advocates for safe and beneficial AI—argues that AGI could be either the best or worst thing to happen to humanity. The outcome depends on choices we make now, and he urges proactive efforts to align AI with human values.

amazon | summary | wikipedia

Ray Kurzweil

The Singularity Is Near: When Humans Transcend Biology (2005)

Kurzweil’s classic tome argues that the long arc of technological progress is accelerating toward a ‘singularity’ by 2045—a point beyond which predicting the future becomes meaningless due to the pace of change. Spanning biology, computing, neuroscience, and nanotech, he predicts not just AI surpassing human intelligence, but radical life extension, mind uploading, and the merger of humans with machines.

amazon | wikipedia

The Singularity Is Nearer: When We Merge with AI (2024)

A progress report on the ideas from The Singularity Is Near. I didn’t find it especially insightful, but it’s a reasonable entry point if you want a quick update on Kurzweil’s latest thinking.

amazon | review

Books - Other

Judea Pearl and Dana MacKenzie

The Book of Why: The New Science of Cause and Effect (2018)

Turing Award winner Judea Pearl presents a rigorous framework for understanding causality, arguing that traditional statistics handles it poorly. He helps scientists and others distinguish between association, intervention, and counterfactuals—what he calls seeing, doing, and imagining. His ideas are applied in epidemiology, economics, the social sciences, and increasingly in machine learning.

amazon | full text