Investing in AI: A Framework To Avoid Losing All Your Money
A conceptual framework to understand what happens next in AI
When most people look at AI, they probably get confused.
And rightly so.
They see people selling ‘AI agents’ that don’t quite work yet, people with no technical background posting AI slop on LinkedIn, and a raft of seriously questionable VC-backed companies that don’t seem to have any valuable product.
So I don’t blame people for getting confused.
However, what’s happening in AI is real, it is going to impact your job soon enough (no matter what you do), and you must understand it if you want to be able to profit from it, particularly if you’re a private investor (I know a few of you follow me).
To help you understand what’s actually going on in this space I’d like to explain a simple concept: convergence.
Convergence is the process of two or more things coming together - unifying as a whole.
And if you understand this one concept I guarantee you will be able to predict what happens next in this space better than almost everyone else.
Because what’s happening in AI, and will continue to happen, is convergence.
AI is a black hole.
Remember when ChatGPT first came out?
Well, before that, there was GPT-3 - a model that software engineers could access via API, but it wasn’t a chatbot yet.
People were using GPT-3 for stuff like ‘we optimise your real estate listing descriptions using AI’, or ‘we optimise your website’s copy using AI’ - and that was their entire startup.
What happened to these companies?
By and large, they died.
Why did they die?
Because their entire company was just a feature within a future model released by OpenAI.
They fell victim to convergence.
Why pay for that feature when the end user or business can just use ChatGPT and fix their descriptions themselves?
People close to the AI space call this being ‘washed away by scale’, meaning: whatever you build now, no matter what the software is, a future AI model will be so much bigger and smarter that it can do whatever your software did that made it special.
And if we think about this from first principles, it makes complete sense.
Humans do some version of search over a problem space, combined with program synthesis.
What I mean is: we can create ‘programs’ on the fly, through learning, and then store these programs in memory so that the next time we have to perform them, it’s quicker.
If the end goal of AI progress is AGI - artificial general intelligence, i.e. as smart as a human, or smarter in the case of artificial superintelligence (ASI) - then this makes perfect sense.
Think it through with me.
The smartest person you have met is extraordinarily good at learning anything.
We can say they have a very good ‘learning algorithm’.
They can solve almost any problem, given enough time and resources at their disposal.
So, if we are on the path to AGI, then what we should expect, by definition, is that AGI will eventually be able to produce any software - any useful program - to solve a problem.
In the previous era of software, there was tremendous friction in producing a web/mobile app.
It was difficult to do.
The friction came from the limited supply of talented software engineers, which made them costly to hire.
This meant the supply of software in the world was always limited by the supply of human software engineers that could produce it.
But AI flips that on its head.
If AGI can produce programs at the cost of compute, the supply of software trends to infinity in the long-run (in the limit).
What this also means is the pricing power of companies trying to sell software trends to 0 (in the limit) because their competitor is now AGI, not just other companies run by humans.
The profit margins of existing software companies will fall in the long run, because an AGI will be able to go into any mobile/web app, scan the entire feature set and architecture, estimate exactly how much it would cost in compute to rebuild, and then rebuild it whilst you - the human - do literally nothing.
In this world, why would you pay to use the software of an existing SaaS company, when you can direct your own AGI to rebuild it through nothing but capital?
There are a few valid answers to this question:
- If the cost of compute to rebuild the app exceeds the cost of paying for the app over the time period you care about, the economically rational decision is to pay for the incumbent’s app.
- If the app is more than just software - if it has network effects (e.g. all of the major social media networks), built-in community/distribution, or other real-world ‘pipes’ - then paying for the incumbent app makes sense. By ‘pipes’ I mean connections out to the real world. In the previous paradigm, software was just a tool that humans used to help themselves achieve outcomes: human first, AI second. In an AGI world this inverts to AI first, human second: the AI produces whatever software is needed to achieve your goal, and then connects you to a human in the real world if necessary.
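The trade-off in the first answer above can be sketched as a simple break-even calculation. All the figures and the function name here are hypothetical, purely for illustration:

```python
# Break-even sketch: keep paying the incumbent's subscription, or pay
# compute once to have an AGI rebuild the app (plus ongoing upkeep)?
# Every number below is made up for illustration.

def breakeven_months(rebuild_compute_cost: float,
                     monthly_maintenance: float,
                     monthly_subscription: float):
    """Months of subscription after which rebuilding becomes cheaper.

    Returns None if rebuilding never pays off, i.e. when maintaining
    your own copy costs at least as much per month as subscribing.
    """
    saving_per_month = monthly_subscription - monthly_maintenance
    if saving_per_month <= 0:
        return None
    return rebuild_compute_cost / saving_per_month

# Hypothetical numbers: $50k one-off rebuild in compute, $500/month
# to keep it running, versus a $3,000/month SaaS subscription.
months = breakeven_months(50_000, 500, 3_000)
print(f"Rebuilding pays off after {months:.1f} months")  # 20.0 months
```

If the period you care about is shorter than the break-even point, the economically rational choice is still the incumbent’s subscription - which is exactly the first bullet’s condition.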
The timeline for AGI can be debated (we don’t have it yet), but I’m trying to emphasise a point: