geohot blog — themes

A thematic digest of geohot's writing. Each section distills a recurring thread with linked supporting posts.

116 posts indexed
Updated 2026-04-03 09:10 UTC

AI, Compute, and the Scaling Frontier

12 posts

At the heart of geohot's technological worldview lies a deceptively simple thesis: compute is the new oil, and those who control its production control the future. Across posts spanning years, he traces the Bitter Lesson's implications to their logical extreme—that raw computational power, not clever algorithms or human insight, determines the trajectory of artificial intelligence.

His analysis of AI scaling reveals a mind grappling with exponential curves and their consequences. The question of brain FLOPS—how much compute matches human cognition—becomes a meditation on what it means to be surpassed. Yet geohot resists doomerism even as he acknowledges the stakes. His p(doom) calculations are notably restrained; he sees no hard takeoff, no sudden discontinuity where humanity loses the plot.

The AI control problem, as he frames it, is less about containing superintelligence than about ensuring compute remains distributed rather than captured. When he asks whether he'll ever own a zettaflop, he's asking whether individuals can remain players in a game increasingly dominated by nation-states and trillion-dollar corporations. Intel's tragic decline serves as a cautionary tale—technical excellence means nothing if you cede the architectural high ground.
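The brain-FLOPS and zettaflop questions can be put on one axis with a back-of-envelope sketch. The constants below are generic order-of-magnitude estimates (86 billion neurons, ~1,000 synapses each, ~10 Hz firing), not figures taken from the posts:

```python
# Hedged Fermi estimate: these constants are common textbook-style guesses,
# used purely for illustration -- not geohot's numbers.
neurons = 8.6e10            # ~86 billion neurons in a human brain
synapses_per_neuron = 1e3   # low-end estimate (often quoted as 1e3 to 1e4)
firing_rate_hz = 10         # rough average firing rate

brain_ops = neurons * synapses_per_neuron * firing_rate_hz
print(f"brain: ~{brain_ops:.1e} synaptic ops/s")   # ~8.6e14, about a petaFLOP

zettaflop = 1e21  # the compute geohot asks whether he'll ever own
print(f"one zettaflop ~= {zettaflop / brain_ops:.1e} such brains")  # ~1.2e6
```

On these (loose) assumptions, a zettaflop is roughly a million brain-equivalents, which is why owning one reads as a question about individual relevance rather than hardware.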

His post on tiny corp's product crystallizes the compute sovereignty thesis: a training box that doesn't just run models but learns from its owner. The distinction between inference and training is the distinction between renting a mind and growing one. "Every minute you aren't running 69 agents" makes the corollary explicit: AI is search and optimization, always has been, with knowable limits for those who paid attention in CS class.

"Polynomial Time Factoring Algorithm" (March 16, 2026) pushes this frontier to its most dramatic conclusion: AI is just a few models away from breaking all asymmetric cryptography. The ethical valence is striking: releasing polynomial-time factoring on GitHub would be "the greatest (legal) freedom fighting act in history." Class power enforced by mathematical asymmetry—dissolved by math.

Tinygrad, Anti-Cloud, and Building Outside the Stack

9 posts

Tinygrad is not merely a machine learning framework but a manifesto in code—a sustained argument that the modern software stack has become a prison of unnecessary complexity. Five years into the project, geohot's reflections reveal both the technical vision and the existential stakes: can a small team, working outside FAANG and the venture-industrial complex, produce something that matters?

The anti-cloud philosophy pervading these posts rejects the subscription model of existence. When geohot dives into AMD driver workflows or contemplates building his own laptop, he's performing technological archaeology—understanding the machine at every layer so that no abstraction can hold him hostage. Replacing his MacBook isn't a consumer choice but an ideological statement.

"tiny corp's product" makes the anti-cloud thesis concrete: a training box, hardware that updates weights based on interactions with its owner. If genuine per-user learning requires gradient descent on local hardware, then owning your GPUs becomes the only path to a mind that's actually yours.

"anticloud hopecore" captures the peculiar optimism of this position. Against learned helplessness that accepts cloud dependency as inevitable, geohot insists that individuals can still own their compute, their data, their tools. "Technology without Industry" crystallizes the distinction: technology as craft versus technology as extraction.

Rent-Seeking, Class Warfare, and the New Regime

19 posts

Geohot's class analysis identifies a tripartite structure that renders traditional left-right politics obsolete. The three-class society—those who own, those who manage, and those who serve—maps poorly onto income brackets but perfectly onto agency. The consumer class divide isn't about what you can afford but about whether your consumption patterns mark you as person or product.

"The Last Gasps of the Rent Seeking Class" identifies AI as the force that finally breaks the extraction machine. For decades, the American economy built a trillion-dollar friction layer atop human limitations. AI equalizes time itself—your model can sit on hold forever, check every price, read every clause. The rent-seekers' moat was human cognitive bandwidth, and that moat just evaporated.

"The Insane Stupidity of UBI" (February 2026) argues UBI betrays fundamental illiteracy about what money is—confusing the map for the territory. "Changing my mind on UBI" (March 12, 2026) is a Swiftian sequel: support UBI as the only rhetorical path to destroying Social Security, let hyperinflation reduce entitlements to worthlessness.

"Democracy is a Liability" (March 21, 2026) tightens the class-manipulation loop. Even after you've been economically consumed, you retain the ability to vote, and that residual agency keeps the manipulation industry fully operational. The rent on your mind becomes the last extraction available.

"Changing the World" (March 23, 2026) delivers the philosophical ground: money is just bytes in a SQL database. Spending your life chasing a higher number in someone else's ledger is the most cucked possible existence. "Changing the world" means building things that don't exist yet.

"The Reckoning" (April 3, 2026) marks the arrival of the class analysis geohot has been building toward for a decade. The Professional Managerial Class faces its "abrupt fall from grace at the hands of machines." The post diagnoses broken AI marketing: ratchet fear to 11, then express shock when Americans are concerned. The Dune quote lands heavy: "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." The culling projection is stark: 90% minimum, possibly 99%, maybe 99.99%. "The only way out is through."

Politics, State Power, and Institutional Decay

12 posts

The problem of the state, as geohot frames it, is that government has become a self-perpetuating system optimized for its own survival rather than its nominal function. His call to disrupt the government is an engineering critique: these are legacy systems running on ancient code, resistant to patches, desperately in need of a rewrite.

His analysis of the Fourth Estate reveals how media capture completes institutional capture. Influence agents operate through manipulation of salience—determining what questions get asked, what framings seem natural, what alternatives appear unthinkable. Dangerous misinformation becomes whatever threatens narrative control.

"We have always been at war with Eastasia" marks not paranoid delusion but pattern recognition. History is rewritten in real-time; contradictions are memory-holed. The Elon swing voter isn't a demographic but a symptom—people voting against rather than for, seeking disruption because normal channels have failed.

"Democracy is a Liability" (March 21, 2026) delivers the sharpest critique. Democracy isn't merely a flawed institution—it is an attack surface. As long as citizens vote, the manipulation industry has reason to operate. Democracy, in this frame, is not the solution to rent-seeking; it is rent-seeking's final redoubt.

"Closed Source AI = Neofeudalism" (March 31, 2026) extends institutional critique into AI governance. Closed-source AI companies demanding "coordination" against distillation are the new feudal lords, trying to enclose intelligence as private property. Open source becomes the only escape from digital serfdom.

Psychology of Control, Demoralization, and Escape

14 posts

The deepest layer of geohot's critique concerns minds—the psychological operations that convince people their cages are choices, their helplessness is realism. The demoralization he diagnoses is the systematic destruction of the belief that action matters, that resistance is possible. This is control at the root level, installing the censor inside.

His posts trace the mechanisms with clinical precision. Resentment becomes a trap. Pathetic losers aren't born but made. "How do I stop participating?" is the first real question, but the system ensures it feels unanswerable. You will blame the wrong people because the right targets are obscured.

"Every minute you aren't running 69 agents" extends this diagnosis into the AI era. Social media has weaponized FOMO—manufactured anxiety that if you haven't orchestrated 37 agents before breakfast, you are worth zero. The antidote: stop playing zero-sum games. Create value for others. If you produce more than you consume, you are welcome in any well-operating community.

Against demoralization, geohot proposes individual sovereignty—not as a political ideology but as a psychological prerequisite. "You are a good person" isn't affirmation but ammunition against internalized worthlessness. The acknowledgment that yes, perhaps we are the baddies, becomes a strange liberation: if we're responsible, we can change.

"I Told You So" arrives as vindication and warning. The singularity he predicted has arrived, but corrupted at birth—captured by motivations of power over people rather than power for people. The solution is simple, but you aren't demoralized enough yet.

"Changing the World" (March 23, 2026) is the psychological inverse of demoralization. Real world-changing means sending the world on a different trajectory, building things that don't exist yet. Once you see that money is bytes and the journey is the only real value, the demoralization trap dissolves.

"The Reckoning" (April 3, 2026) adds a grim observation: "Are we going to remember we live in a society? Probably. But after we cull at least 90% of people." The post's terse conclusion—"the only way out is through"—is either nihilism or the first step of escape.

Acceleration, Risk, and the Ethics of Progress

17 posts

Geohot's engagement with e/acc reveals a mind wrestling with acceleration's double bind: progress is necessary but not sufficient, speed creates both opportunity and catastrophe. His e/acc posts reject naive techno-optimism while refusing doomer paralysis. The universe is entropic; negentropy requires effort; standing still is not an option. But nuke/acc—acceleration toward maximum destruction—proves that velocity without vector is suicide.

The wireheading posts explore acceleration's dark attractor: technological capacity to satisfy every desire without effort, to simulate meaning while draining it. "Wireheading City" isn't dystopian fiction but present-day diagnosis. Tech Heroin names the business model: addiction-as-service. Gambling is bad not as moral judgment but as economic analysis.

"AI is the Best Thing to Happen to Art" extends the wireheading diagnosis with contrarian twist. AI will flood the world with slop—generated songs with lyrics like "from quiet roots, a garden grows." Ninety-five percent will happily consume it. But this is precisely why AI is good for art: it makes algorithmically-driven sellouts obsolete. Art is defined by what is expensive, rare, expectation-breaking—everything a statistical parrot cannot produce.

"The Last Gasps of the Rent Seeking Class" adds the economic dimension. The five-tier AI supply chain analysis—electricity, chip manufacturing, chip design, models, applications—reveals which layers can sustain rent-seeking. The model tier was the real worry, but Chinese open-source is dissolving that moat. Acceleration, when directed toward commodity intelligence, becomes liberation.

"Polynomial Time Factoring Algorithm" (March 16, 2026) poses acceleration ethics in their starkest form. The argument: AI will discover polynomial-time factoring within a decade, and when that algorithm is found, release it immediately. Asymmetric cryptography has been weaponized to enforce class divides. A polynomial-time factoring algorithm would be the Jubilee.

"The Reckoning" (April 3, 2026) delivers the darkest acceleration ethic yet. The AI revolution people are least excited for has arrived: "highly targeted email spam is way up. The feeds are more addictive. All PRs on GitHub need to just immediately be closed." Our problems won't be fixed by AI. The question geohot leaves us with: when technology can dissolve a power structure, is it moral to withhold it? The Reckoning's answer: there's never been a revolution people are less excited for, and they aren't wrong. "The only way out is through."