We need to completely reimagine how society works.
The old models of economics and polity will be rendered impotent as the upcoming technological tidal wave washes over every aspect of our practical experience.
We need to reimagine everything from the ground up, so let’s start from the first principles that our current social systems are built on…
Humans want things. We make decisions that move us towards getting what we want.
In general, what do we want?
To live another day
To be happy
We achieve these goals through:
Resources we obtain from our environment
Relationships we form with others
Let’s focus on the role of Resources.
Principle 1: People want some things in order to live
How do resources help us live another day?
I use the resource of food to eat. I use water to drink.
Clothing to stabilize my body temperature.
Shelter for security.
These are the primary “things” that we want for survival: Food/Drink, Clothing, Shelter.
We also want the secondary “things” that help us get the primary things.
I want a car so that I can transport myself to go get these other things, which are currently somewhere farther away in the environment (I have to move myself to different points on earth, since not every thing I want is located in my backyard).
I want a phone so I can communicate and coordinate with other people who aren’t in my immediate environment.
And so on.
Principle 2: People want some things in order to be happy
Next, we ask, “How do resources (things) help us be happy?”
Now, of course, any treatment of happiness needs to point to the transcendent pursuits of Truth, Goodness, and Beauty… the invisible and spiritual realities of life, and the richness that can only be found in Relationship.
But again, for the purposes of analyzing something like the material structure of society and economic organization, let’s table that notion for now.
How do we use things to make us happy?
On the surface, this is to say, how do we use things to make us feel good?
Once we move beyond the things that keep us alive, everything else can be considered a toy. The bouncing ball we had as a kid becomes the kitchen gadget we have as an adult.
And so are the movies we watch. The equipment we buy to experience the things we like (a new pair of basketball shoes, a chessboard, books).
The furniture and clothes that go beyond function but also give us some other sense of aesthetic satisfaction, pleasure, or comfort.
These are all toys. We use them to experience the “play” of life.
The purpose of some toys is to signal “status”, which is to say, to let you play an exclusive game you otherwise wouldn’t be able to.
And there are also the things that are services. Like the service of education. The service of a mechanic who fixes your things, and so on.
Lastly, the possession of things, in general, can also give you some feeling of agency, optionality, and control, which can contribute to (or detract from… but that’s another story) happiness.
+ + + + +
The Logic of Exchange
Ok, so everyone wants things—some for living, others for loving.
You can make some of the things you want. And you can make some of the things that other people want.
And some people can make the things you want that you can’t or won’t make yourself.
And so to achieve a situation where you get more of what you want AND your neighbors get more of what they want, we created a system of exchange, which started out as bartering thing-for-thing and then evolved into a SuperThing-(money)-for-any-thing.
More on that in a minute. But here’s the thing…
To get the thing you want, you either have to make it yourself or you have to get it from someone else who makes that thing, typically by giving them something that they want in exchange.
But AI will soon make most of the things, which means it will no longer be you making them. And AI itself doesn’t “want” any things, so there’s nothing you can give it to “earn” your right to the thing it makes. That means the existing exchange mechanism will break.
Well… ok. AI may not want things but whoever owns the AI will want things.
But what will you have to give the owners of AI that they can’t get for themselves? In other words, why should AI make anything for you?
And yet if you, the laborer, built the machine that makes labor obsolete, shouldn’t you have some share in that AI?
Read what I wrote 2.5 years ago on this topic and see how it’s tracking today.
Is there a Limit to Innovation?
Jill says, “This new era of LLM AI spells an end for humanity. Already, GPT can do basically everything a person can, as far as tasks that require use of language. So many people will be left unable to work.”
+ + + + +
AI (and robots) will take away human jobs and this will have two major impacts:
Reduce the cost of making something (production)
Remove the need for labor
To illustrate this, imagine that in the pre-AI economy, you had a corporation, ABC Corp, that produces and sells a $100 widget. And ABC Corp employs and pays John $20 to make a widget.
P (price) = $100
L (labor cost per widget) = $20
Once the post-AI economy transition reaches its critical point, the cost of production will fall because John is no longer necessary for building the widget. If there is enough competition, ABC Corp will lower its price (since otherwise, the other competitors—assuming they still exist and new entrants are viable—can lower their prices to win ABC Corp’s customers).
So, hypothetically speaking, we’ll have a scenario of:
P = $1
L = $0
On the one hand: “Wow! Everything is so much more affordable! It’s like I can have anything I want at my fingertips!”
On the other hand: “Even though everything is so ridiculously cheap, I still can’t afford it, because I literally cannot generate any money; AI is producing everything, so no one needs me to make anything.”
So who, other than those who already have money saved up, will be able to buy ABC Corp’s widget?
We will run into the problem of not having enough baseline demand to satisfy the existing production capacity.
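To make the gap concrete, here’s a minimal sketch in Python of the ABC Corp arithmetic. The 1,000-widget volume is a made-up number, and the toy model deliberately ignores owner income, savings, and transfers; it only asks how many widgets the workers’ own wages can buy back:

```python
# Toy model of the ABC Corp widget economy, before and after automation.
# All numbers are illustrative, extending the $100 / $20 example above.

def buyback_capacity(price, wage_per_widget, widgets_made):
    """How many widgets can the workers' own wages buy back?"""
    wages_paid = wage_per_widget * widgets_made   # total household income from widget-making
    return wages_paid, wages_paid / price

# Pre-AI economy: P = $100, L = $20, say 1,000 widgets produced per month
wages, affordable = buyback_capacity(price=100, wage_per_widget=20, widgets_made=1_000)
print(f"Pre-AI : wages=${wages:>7,.0f} -> workers can afford {affordable:>5,.0f} widgets")

# Post-AI economy: P = $1, L = $0 -- widgets are nearly free, but so is the paycheck
wages, affordable = buyback_capacity(price=1, wage_per_widget=0, widgets_made=1_000)
print(f"Post-AI: wages=${wages:>7,.0f} -> workers can afford {affordable:>5,.0f} widgets")
```

Cheap widgets don’t help if the wage line that used to fund purchases goes to zero.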
+ + + + +
Thus far, jobs have done two things:
(1) give us fungible resource power (money)
(2) give us vocation/meaning
In the next social system, we need a substitute for both.
Let’s tackle (1) — and in some sense we already have.
Why is money relevant to our society? Because we want things, and we want the flexibility to manage our things over time.
Money is the “SuperThing” that stands in for all other things… $100 of money can represent a night out at a restaurant, a new outfit, some groceries, or a dental appointment.
It’s fungible.
And you can choose to either save or spend your money, which is to say that you can manage your life by balancing the things you want today versus the things you may want tomorrow.
This ability to save versus spend is, socially speaking, a very important feature of money because it’s what separates the responsible person from the irresponsible person.
John saves some of his $20 to prepare for a rainy day whereas Bill decides to squander all his $20 away today.
Money allows you to express your character. Possession of money—and the things it represents in general—enables you to express your degree of stewardship.
Are you a wise and generous person? A reckless fool? A greedy bastard?
How one handles his money is one of the clearest signals of agency.
So if we will no longer have jobs, which are, at the moment, more or less the only way to make money (unless you own the company that makes the things)… then we will no longer have money.
If we don’t have money, then we won’t be able to have things.
No things means no stewardship.
No stewardship means no signal for honor and agency.
Hmm… ok. But you can’t have a society of starving people. So there must inevitably be some guaranteed floor that provides people with the things that keep them alive. We should count on at least that much.
But what about the extra discretionary category of things that keep people “happy”?
Are humans as a race and social collective entitled to that second class of things? Or will we have to adjust to be content with only the bare minimum that enables basic survival?
Well, the epicurean consumer-driven excesses of the last 60 years or so have almost certainly made the concept of “survive on the bare minimum” unconscionable… people would rather storm and burn down the streets than go without their video games or access to sportsball.
So there will need to be a mechanism that provides people both with some essential things and some non-essential things.
And since the existing mechanism—jobs—will no longer be around (at least in the form we’ve understood them in the last two millennia), what mechanism will take its place?
And who will own that mechanism?
It seems to me that the mechanism will inevitably be a centralized one… whether the government owns it or the few giant megacorps own it will be practically irrelevant, since, in the daily life of the average Joe, it will feel like something that flows as a gift from the abstract “State”, as opposed to a fruit of his own hands working the land, so to speak.
And separate from having access to resources (things), there is still the second problem of vocation and meaning, which we’ll come back to below.
+ + + + +
These are just some of the problems we’ll face… I could go on, but let’s propose some loose solution structures that may be necessary, else we slide into some sort of technocratic Communism.
For starters, it’s hard to imagine any other way than to have some form of Universal Basic Income.
A good analogue of this in action today is the Alaska Permanent Fund, a state-owned investment fund created in 1976 to preserve a portion of Alaska’s oil revenues for future generations.
Royalties from oil production are put into the fund, which then pays out $1,000–$2,000 per year to each Alaska resident. The principle here is that the oil is partly viewed as a shared natural resource belonging to all Alaskans.
Similarly, despite “AI” being owned by a few megacorps, we could make the ethical case that AI should be a shared natural resource belonging to all humans.
And so, there could be an AI royalty that gets paid out based on every terawatt-hour or petaflop of “frontier inference”. This could be deposited into a special account… something like a Citizen Capital Account (CCA).
In this account you would have two buckets:
1) The essentials bucket
2) The discretionary bucket
The essentials bucket covers your need for things to live, and the discretionary bucket is for things you want for facilitating your happiness.
That discretionary bucket (#2) is the one where you can save and spend so that you can differentiate yourself and your own agency from those who would make other kinds of lifestyle decisions.
The purpose of having a CCA would also be to provide that guarantee of baseline demand I mentioned earlier, since we still need people to be buying things (unless we do away with the concept of money altogether, which seems completely untenable).
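Here’s a rough sketch, in Python, of how such an account might flow, purely to show the mechanics. The royalty rate, annual compute volume, population, and the 60/40 split between buckets are all invented placeholders, not estimates:

```python
# Hypothetical Citizen Capital Account (CCA) funded by an AI compute royalty.
# Every figure below is a placeholder chosen to show the mechanics, not an estimate.

ROYALTY_PER_PETAFLOP_HOUR = 0.05    # $ levied per petaflop-hour of "frontier inference" (assumed)
ANNUAL_PETAFLOP_HOURS     = 2e13    # total frontier inference per year (assumed)
POPULATION                = 330e6   # citizens sharing the royalty pool (assumed)
ESSENTIALS_SHARE          = 0.60    # portion routed to the essentials bucket (assumed)

class CitizenCapitalAccount:
    def __init__(self):
        self.essentials = 0.0       # bucket 1: things needed to live
        self.discretionary = 0.0    # bucket 2: things wanted for happiness (save or spend)

    def deposit(self, dividend):
        # Each royalty dividend is split across the two buckets.
        self.essentials += dividend * ESSENTIALS_SHARE
        self.discretionary += dividend * (1 - ESSENTIALS_SHARE)

    def spend_discretionary(self, amount):
        # Only the discretionary bucket expresses the save-versus-spend choice.
        if amount > self.discretionary:
            raise ValueError("insufficient discretionary balance")
        self.discretionary -= amount

royalty_pool = ROYALTY_PER_PETAFLOP_HOUR * ANNUAL_PETAFLOP_HOURS
per_citizen = royalty_pool / POPULATION

account = CitizenCapitalAccount()
account.deposit(per_citizen)
print(f"Annual dividend per citizen: ${per_citizen:,.0f}")
print(f"Essentials: ${account.essentials:,.0f} | Discretionary: ${account.discretionary:,.0f}")
```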
Now, people will still need something useful to do with their time (there is great honor in work), so that would lead us to create something like a Human-Time Exchange (HTX).
This would be something like:
Maria, a 35-year-old in Denver, teaches conversational Spanish and mentors teens.
She lists her hours on the HTX.
A family books her twice a week → 8 hours/month → she earns 8 HTCs (Human-Time Credits) = $600 discretionary income.
Her rating rises; she unlocks “Master Mentor” tier → higher exchange rate next quarter.
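A minimal sketch of the bookkeeping this implies, with invented numbers chosen to reproduce Maria’s figures above (a base rate of $75 per HTC and a hypothetical 1.2× “Master Mentor” multiplier); none of this is a spec:

```python
# Hypothetical Human-Time Exchange (HTX) bookkeeping for verified human hours.
# The base rate and tier multipliers are invented to match the example above.

from dataclasses import dataclass

BASE_RATE = 75.0                                            # $ per Human-Time Credit (assumed)
TIER_MULTIPLIER = {"Standard": 1.0, "Master Mentor": 1.2}   # assumed tier structure

@dataclass
class Provider:
    name: str
    tier: str = "Standard"
    credits: float = 0.0

    def book_hours(self, hours):
        # One verified human-hour earns one Human-Time Credit (HTC).
        self.credits += hours

    def payout(self):
        # Credits convert to discretionary income at a tier-adjusted rate.
        return self.credits * BASE_RATE * TIER_MULTIPLIER[self.tier]

maria = Provider("Maria")
maria.book_hours(8)            # booked twice a week -> 8 hours this month
print(f"{maria.name}: {maria.credits:.0f} HTCs -> ${maria.payout():,.0f} discretionary income")

maria.tier = "Master Mentor"   # rating rises -> higher exchange rate next quarter
print(f"As {maria.tier}: ${maria.payout():,.0f} for the same hours")
```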
In the pre-AI economy, labor scarcity and competition for productivity made price signals work naturally.
But in a post-AI economy:
Labor for 90% of production is no longer scarce — AI can produce near-zero-marginal-cost goods and services.
Prices for those goods collapse, destroying the feedback loop that once connected effort → wages → demand.
That breaks the market’s ability to distribute income and coordinate motivation.
The role of the HTX would be to re-introduce scarcity where it still legitimately exists: authentic human presence, trust, empathy, courage, care, culture.
We need incentivized differentiation.
Remember that without differential reward, virtue decays as irresponsibility imposes no cost.
+ + + + +
If we don’t proactively design the post-AI system, then what would probably happen is some slippery slide into an implementation of Communism (functionally irrespective of whether it’s the government officials or techno-billionaires running it).
And maybe, to make it seem “fair”, the all-seeing and all-knowing Algorithm will be the thing that decides who gets access to what things and whose desires should be prioritized over others.
Let’s see!
Drago
P.S. If you want to try to design the post-AI society, here are some questions I’d hope you can answer:
What is the system for? (Human flourishing? Family formation? Security? Innovation?)
What cannot be sacrificed? (Dignity, liberty, subsidiarity, common good, rule of law.)
What does “success” measure (beyond GDP): literacy, safety, time-use, apprenticeship rate, energy reliability, birth/formation metrics?
What is the baseline income mechanism (UBI/NIT/owner dividend) and why? Flat vs cost-indexed?
What creates differentiation above the floor so prudence/effort still matter?
How do we keep personal resource management (budgeting, saving, failing safely) alive?
What are the non-offshorable bases? (Energy congestion rents, land/location value, spectrum, carbon, compute/inference royalties.)
Who owns compute, models, data, and energy? What anti-concentration thresholds and unbundling rules apply?
What portion is citizen-owned (public funds, co-ops, pensions) vs private capital? How is it governed?
Interop & portability: can users exit without losing identity, data, or reputation?
What remains scarce when marginal cost → ~0? (Authentic human time, trust, energy/compute, land.)
Where do we keep prices vs set guardrails? What markets stay “as-is,” which need protocols (e.g., verified human-time)?
How do we avoid platform feudalism (closed algorithms mediating all exchange)?
If “jobs” shrink, what gives meaning and status? (Care, craft, culture, service, risk-taking.)
Do we need exchanges/protocols for verified human-time? How are quality and outcomes proven without bureaucracy?
How do we make apprenticeship (formation) central again?
What are the appeal rights for any automated, life-altering decision?
If you add cash, what prevents it from being captured by landlords/utility monopolies?
What’s the plan for abundant energy (generation + transmission) and by-right housing?
How do infrastructure timelines align with income rollout to avoid inflation?
How do you compensate the displaced generation that built the automation?
What stake do newborns get at birth? How do inheritance rules work?
What prevents perverse incentives in family formation supports?
National fund vs federated funds (municipal/faith/union/co-op) with passive mandates?
What are the hard constitutional guardrails (sunsets, audit ports, conflict rules)?
How do you publish telemetry (public dashboards) to keep trust and course-correct?
How do you handle compute offshoring and imports of AI services? “Access-for-access” rules?
What automatic stabilizers kick in during recessions/energy shocks?
What’s the rollback plan if metrics deteriorate (fraud spikes, rent capture, platform monopolization)?
How will institutions (schools, parishes, guilds, civic corps) form character for freedom with abundance?
What protects attention ecology for youth (phones, schools, Sabbath windows)?