So all this amazing recent progress in AI really raises the question: How far will it go?
I like to think about this question in terms of this abstract landscape of tasks,
where the elevation represents how hard it is for AI to do each task at human level,
and the sea level represents what AI can do today.
The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape.
And the obvious takeaway is to avoid careers at the waterfront -- which will soon be automated and disrupted.
But there's a much bigger question as well. How high will the water end up rising?
Will it eventually rise to flood everything, matching human intelligence at all tasks?
This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception.
By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines,"
are simply saying that we'll never get AGI.
Sure, we might still choose to keep some human jobs, or to give people income and purpose through jobs,
but AGI will in any case transform life as we know it with humans no longer being the most intelligent.
Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI,
which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years,
raising the controversial possibility of an intelligence explosion
where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.