Jill says, “This new era of LLM AI spells an end for humanity. Already, GPT can do basically everything a person can, as far as tasks that require use of language. So many people will be left unable to work.”
Jack says, “Don’t worry. Every time there’s been a technological breakthrough, new industries and jobs emerge to replace those that were disrupted. We’ll all have brand new industries to work in.”
Right now, are you leaning more towards Jill’s or Jack’s point of view? How confident are you and where do you think these differences in viewpoints come from?
I believe that at the core of this current debate is one’s belief about whether there is a limit to innovation.
Pfft… what do you mean a “limit” to innovation? That’s absurd. We can always change things.
Fair enough. More precisely, the question we all need to be asking is:
Is there a limit to useful, human-led innovation?
The related, implied questions are:
Is there a category of innovative activities uniquely performed by humans that can never be replaced by AI?
If yes, is the size of this category unlimited / perpetually sufficient to ensure the availability of full-time employment for every worker who would ever want it?
Will humans always have productive stuff to do? Or will technology one day be able to handle all of society’s needs for tangible and intangible production?
Will technology, which has thus far only been a tool to boost the productivity of existing labor, end up being a direct substitute for labor itself?
e.g. Consider the Cobb-Douglas production function and hypothesize what the flourishing of AI will do to the variables A, β, and α.
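For reference, here is the standard textbook form of the Cobb-Douglas production function; the AI reading that follows is my own gloss, not settled economics:

```latex
% Cobb-Douglas production function (standard textbook form)
% Y: total output          A: total factor productivity (technology)
% K: capital input         L: labor input
% \alpha, \beta: output elasticities of capital and labor
Y = A \, K^{\alpha} L^{\beta}
```

One hedged reading: AI that merely boosts worker productivity raises A, while AI that substitutes for workers shifts output's dependence away from labor (β) and toward capital in the form of autonomous agents (α).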
For the FIRST TIME IN HISTORY, we are entering an era in which the role of human innovation diminishes: of the incremental innovation achieved in any given period of time, less and less will be the direct output of human ingenuity.
Up until now, all ideas generated (and things produced) in society came directly from the thoughts that human beings processed within the confines of their own minds. Now, we have a tool, large language models, which can, in effect, “generate” ideas and perform the processing tasks associated with “thinking” outside of the human mind.
Here’s a visual I put together for you that might communicate this more intuitively:
As far as the things being innovated, the human contribution will get crowded out.
Before 2023, we humans were the sole generators of the things happening at the frontier of innovation. Or framed alternatively, one could say that we had a mix of visionary “ideas” people combined with operator, “executing” people who could take the top-level ideas and break them down into actionable tasks and sub-tasks.
Now, AI is ready to handle the role of the operator / “executor”, and—with a small bit of prompted direction—it can convincingly play the role of “ideas person” as well.
So then, does this mean that in the Next Era, everyone will get to play the role of top-level, creative visionary, while the “AI minds” of the autonomous agents do all of the execution and task processing?
Some have said that “the future of autonomous agents looks like everybody becoming a manager”.
Sounds nice. But how many managers do we really need?
One kitchen can support only so many chefs, and it’s not necessarily a stable societal solution to open as many kitchens as possible purely to accommodate the growing number of people who want to be chefs.
By “kitchen” of course I mean company or any type of organization. With autonomous agents, the barriers for anyone to start and scale an organization become almost nonexistent, irrespective of skillset or experience.
Traditionally, when functioning optimally and in an unbiased way (critical caveats to be sure 😄), our societies organize hierarchies within groups based on competence.
You attain the position of creative visionary, CEO, founder of a startup, etc. because the players in your ecosystem trust the evidence and track record of you delivering, solving difficult problems, making good decisions, and generally behaving in a way that demonstrates (at least) above-average competence.
By further analogy, not everybody gets access to the nuclear codes, nor do we want everyone to have access to the nuclear codes. There are some “powers” that should be accessible only to those who have the requisite capacity for responsibility. 🕸️
But going forward, this is all going to become murky.
The Dark Side of Democratization
A piece of art created by Me + My Wife + DALL·E, which we printed onto a canvas and hung.
In recent years, we’ve heard of countless startups “democratizing” (or making broadly accessible) a wide variety of items.
Robinhood democratizes investing. Canva democratizes design. Kickstarter democratizes funding. Patreon democratizes content monetization. And so on.
In essence, large language models like GPT-4 are democratizing thinking (specifically, the higher-level cognitive processing tasks).
Loosely speaking, these tools will soon allow someone of average IQ and experience to perform tasks that previously could only be performed by high-IQ, high-experience individuals.
In time, the path of apprenticeship will no longer be necessary on the journey to mastery.
This has massive implications.
Traditionally, if you wanted to produce first-class output in any particular field, you would have had to go through the gauntlet of accumulating years of experience and putting in the requisite 10,000+ hours.
Only then would you have achieved mastery, rightfully earning the respect and exclusive opportunities that come with such a skillset.
With this newly unleashed power of AI, you could put fewer than 100 hours into a new field and, if you’re clever enough, produce something that’s 99% of the way to expert-level.
From the perspective of a neophyte entering a new industry, there’s never been a technology so empowering.
With AI, the junior software developer can produce code just as good as the 10x developer’s, without needing the same degree of time investment and skill.
The nascent content marketer, with AI, can put together sales copy that sounds so persuasive, you would have thought Robert Cialdini wrote it.
With an image generator like DALL·E or Midjourney, you could produce a beautiful work of art just by writing out the idea, without the technical training in brush strokes, blending methods, color coordination, etc.
I’m not going to claim that the painting my wife and I created with AI is masterful, but at the same time, if someone told you it had been auctioned off for $500K, would you have dismissed that claim without hesitation?
It’s no exaggeration to suggest that this new dynamic may lead to the death of specialization. Or at least, the concept of investing decades of your life pursuing one industry will become an obsolete relic.
For those at the front end of a learning curve, this prospect can bring great joy, as the new technology will allow them to experience as much intellectual variety and richness as their personal interests would allow. This is the vision that our “Jack” character intuitively feels.
However, for the existing master craftsmen and craftswomen, there will be a necessary period of great mourning. I touched on a microcosm of this feeling in my January ChatGPT post.
What’s at stake here is the confidence you currently feel in your existing expertise, the thing that makes you “special” and distinct among your tribe through the utility your “role” provides to others.
The master artisan spent decades honing their particular craft at the substantial opportunity cost of forgoing all the other paths they said “no” to. The reward of becoming one of the few experts in their group offset the foregone opportunity of accumulating other skills.
Now, the distribution and playing field will be leveled, and for many people the early sacrifice of specialization might feel as if it were made in vain.
Is any of this “fair” to the person who spent their life on one path and gave up opportunities to do something else? Does a newbie have the “right” to produce such high-quality output?
Regardless, we step into this interesting paradox, where the newbies will reap a massively disproportionate benefit at the expense of the existing class of experts.
This is precisely the ultimate outcome that should accompany any successful endeavor to “democratize”. Will we be ready to reap these fruits?
As a word of encouragement to all experts and specialists out there, remember that our expertise is always contextual. We all wear the hat of newbie in some domain, and the hat of experienced person in another.
In this next chapter of humanity, look to use AI to fill in your gaps and 10x your newbie domains, enjoying the immediate gratification of expertise without the hard work of having had to build it.
Want to see my profile picture talk? Press play on this clip
To bring it back full circle, allow me to offer a big picture lens for us to think about:
All work entails a conversion of the potential to the actual.
When you engage in “productive” activities, you are taking something from its potential state (how the world could be) and bringing it forth into the reality of the present (how the world is), thereby making an impact and, literally, changing the world.
The question we’re faced with now is: what percentage of productive activities will be left for humans once LLM technology is widely adopted? Will we see a massive slide away from “white collar” tasks towards more hands-on physical labor? Will the massive spike in productivity lead to a reduced work week?
What does all this look like for you?
When I look at the activities I do in a given week, there’s maybe only 10% that an adequately trained, autonomous GPT agent couldn’t do. At the moment in my own work, I can still drive excess value because I know what questions to ask / what prompts to provide the AI.
i.e. The quality of the LLM’s answer is directly proportional to the quality of the question being asked of it, so if you can ask great questions, you will thrive.
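To make “asking great questions” concrete, here is a minimal sketch (a hypothetical helper, not any real library’s API) of how the same request gains quality when you add role, context, constraints, and an output format:

```python
# Hypothetical prompt-builder: the same request, enriched with the
# elements that tend to raise the quality of an LLM's answer.

def build_prompt(question, role=None, context=None,
                 constraints=None, output_format=None):
    """Assemble a structured prompt from optional quality-boosting parts."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {question}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if output_format:
        parts.append(f"Answer format: {output_format}")
    return "\n".join(parts)

# A bare question vs. a well-framed one:
naive = build_prompt("Write sales copy for my studio software.")
expert = build_prompt(
    "Write sales copy for my studio software.",
    role="a direct-response copywriter",
    context="audience: owners of small dance studios",
    constraints=["under 120 words", "one clear call to action"],
    output_format="a subject line plus two short paragraphs",
)
```

The names and fields here are illustrative assumptions; the point is simply that the second prompt carries far more usable signal than the first.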
But how long will such an advantage last? Not too far off in the future, there will be enough training data on prompting that the LLM will not only provide the optimal answer to the question it’s given; it will also identify the optimal questions to ask of it, based on the features and context of its deployed environment.
Right now, the AI needs your human observations to determine what problems there are to solve in the present environment. The next step will be that it can make its own observations (and infer which problems need to be solved) when deployed into the real world, with live data.
Before concluding, I promised you some more AI tools, so here you go 😄:
AgentGPT: A GPT that assigns tasks to other GPTs. Give it a high-level objective and it will create the necessary subtasks and execute them. I tried “Reach out to Dance Studios letting them know about the best studio software, Swyvel”, and what it did was more impressive than a sales professional fresh out of college.
This is an example implementation of the kind of stuff that AutoGPTs will do. In fact, AutoGPT is the term I recommend you get up to speed on ASAP, so you can make sense of what’s about to come down the pipeline.
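The AutoGPT pattern above boils down to a plan-then-execute loop. Here is a toy sketch of that loop; the `plan_subtasks` and `execute_subtask` functions are stand-ins for real LLM calls, not any actual API:

```python
# Toy sketch of the AutoGPT / AgentGPT pattern: a "planner" decomposes
# a high-level objective into subtasks, and an "executor" handles each.
# Both functions below are mock stand-ins for real LLM calls.

def plan_subtasks(objective):
    """Stand-in for an LLM call that breaks an objective into steps."""
    return [
        f"Research: gather background needed for '{objective}'",
        f"Draft: produce a first version of '{objective}'",
        f"Review: check the draft against '{objective}' and revise",
    ]

def execute_subtask(subtask):
    """Stand-in for an LLM call that carries out one step."""
    return f"[done] {subtask}"

def run_agent(objective):
    """The core loop: plan once, then execute each subtask in order."""
    return [execute_subtask(s) for s in plan_subtasks(objective)]

for line in run_agent("Reach out to dance studios about Swyvel"):
    print(line)
```

Real systems add re-planning after each result and memory across steps, but the plan/execute split is the essential shape to understand.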
HeyGen: This is the tool responsible for making it look like my still picture is talking. With better input, it can do far better than that, creating a video avatar you can use to give convincing marketing copy and presentations.
AI Music Covers: This isn’t a tool per se, but I wanted to highlight some of the mind-blowing stuff that’s happening with music now, including an explosion of Kanye West AI covers.
Example:
And an example of Michael Jackson:
To be clear, this means you can now have any artist sing any song … listening to some of these will blow your mind 🤯 (and remember, it’s only going to get more realistic from here).
As usual, thanks for reading and please leave some comments with your thoughts below.
—Drago
PS - I lol’d at the Pew survey results here.
62% believe AI will have a major impact on workers generally, but only 28% say it will impact “me personally”. Kind of reminds me of the statistic that most people think they are better than average (hint: by definition, only half of people can be above the median).
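For the pedants: which “average” matters. Half of a population always sits on each side of the median, but for a skewed quantity most people really can sit below (or above) the mean. A tiny example with made-up numbers:

```python
# Made-up scores to show why "better than average" claims need care:
# a single outlier drags the mean up, so most values sit below it,
# while the median still splits the group in half.
from statistics import mean, median

scores = [1, 1, 1, 1, 10]

print(mean(scores))    # 2.8
print(median(scores))  # 1
below_mean = sum(s < mean(scores) for s in scores)
print(below_mean)      # 4 of the 5 are "below average"
```

So the Pew respondents aren’t necessarily being irrational, but the 62%-vs-28% gap is still hard to square with any version of “average”.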