I think the signaling and principal-agent parts of employment will massively expand in importance. I get Kelsey's point about the decline in touch in the economy. However, I think it can leave you with a miscalibrated perception.
So, per task, the level of touch drops dramatically in the world that I, and I think Tim as well, are describing. Indeed, many tasks become zero-touch as AIs simply interface with each other through the universal protocol of natural language.
However, human time will become dramatically more high-touch.
There are several ways to view this. I think the most fundamental is to realize that there are zero long-term profits to using unaugmented AI. AI specialization is completely replicable at essentially zero cost. AI time will ultimately drop to the price of electricity, but for now, the price of GPUs and electricity. So there is no way for a company to make a profit from this. Charging anything above the electricity cost just gets your customer to purchase (or rent) the AI for themselves.
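The zero-profit logic here is essentially Bertrand competition over an undifferentiated good: any seller charging above marginal cost gets undercut, so the price of raw AI time converges to its input cost. A minimal sketch in Python, with all prices hypothetical (in cents per unit of AI time):

```python
# Toy Bertrand competition between two sellers of identical, perfectly
# replicable AI time. Each round, the higher-priced seller undercuts the
# other by one cent, but never prices below marginal cost. The long-run
# margin on undifferentiated AI time goes to zero.

MARGINAL_COST = 100  # hypothetical cost of electricity + GPUs, in cents


def bertrand(price_a: int, price_b: int, rounds: int = 500, step: int = 1):
    """Run undercutting rounds until no profitable undercut remains."""
    for _ in range(rounds):
        if price_a > price_b and price_b - step >= MARGINAL_COST:
            price_a = price_b - step  # seller A undercuts B
        elif price_b > price_a and price_a - step >= MARGINAL_COST:
            price_b = price_a - step  # seller B undercuts A
        else:
            break  # no undercut left that still covers cost
    return price_a, price_b


pa, pb = bertrand(200, 190)
print(min(pa, pb) - MARGINAL_COST)  # long-run margin per unit: 0
```

The point of the sketch is only the limit, not the numbers: whatever prices the sellers start at, the market price of the commodity input ends at marginal cost, which is why all durable profit has to come from something that is not replicable.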
So everything is about augmentation. Some of that will be in the domain of specialized data or industry knowledge that went into fine-tuning the AI. However, this kind of thing is rare and challenging. What augmentation primarily means is the human representative who works to ensure that the AI is doing what the customer needs it to do.
Virtually all of the long-term profits in business will come from this: from creating an organization of people who do really well at translating human desires into AI tasks. It is possible that there are only weak agglomeration effects at play, leading organizations to shrink in size. Both large and small agglomeration effects are totally consistent with my model.
The crucial thing to see is that all profits must stem from augmentation of some sort or another, because every other aspect of the business will be crushed down to the raw price of AI time.
In a similar way, you can see precisely this in a manufacturer. My mother was a textile worker, a weaver for Cone Mills. But what the weaver did was walk back and forth down a long line of weaving machines, checking for knots, cutting a knot out and resetting the machine when one appeared. Highly skilled weavers could tell you when a knot was about to form, though they could not say exactly how they knew.
This augmentation of the weaving machines was the source of both my mother's job and Cone's profits, because the actual output of the machines themselves was priced down to machine time. It was the weaver's ability to maximize machine time that made all the difference.
What stands out to me is how often the AI–jobs debate gets framed as replacement or preservation. If we think of work as a shifting pattern of tasks, the real question becomes: how do we adapt the human side while the task side keeps changing? In my experience, clarity comes only in cycles - observe, test, reflect, confirm - and maybe the workplace needs the same rhythm. Is resilience in the AI era less about prediction, and more about continuous re-alignment?