Discussion about this post

Harry Dean Hudson:

One idea I’ve been thinking about lately is really leaning into MCP as a data boundary: if the AI agent uses tools to access data and system permissions, we should design that access carefully in the tools themselves rather than just making them thin wrappers.

The slightly longer version is here: https://open.substack.com/pub/harrydeanhudson/p/use-mcp-tools-as-a-data-fence
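The "data fence" idea can be sketched in a few lines. This is a hypothetical illustration, not the MCP SDK's actual API: all names (`lookup_customer`, `ALLOWED_FIELDS`, the scope string) are made up, and a real server would register the function as an MCP tool. The point is that the filtering and permission check live inside the tool, so the agent never sees more than the tool chooses to return:

```python
# Hypothetical sketch of an MCP-style "data fence" tool.
# The tool enforces field- and permission-level access itself,
# rather than passing through the raw backend record.

ALLOWED_FIELDS = {"id", "name", "plan"}  # PII (email, ssn) never crosses the boundary

# Stand-in for a real data source.
FAKE_DB = {
    "42": {"id": "42", "name": "Ada", "plan": "pro",
           "email": "ada@example.com", "ssn": "000-00-0000"},
}

def lookup_customer(customer_id: str, caller_scope: set[str]) -> dict:
    """Tool entry point: the fence lives here, not in the caller."""
    if "customers:read" not in caller_scope:
        raise PermissionError("tool requires customers:read scope")
    record = FAKE_DB.get(customer_id)
    if record is None:
        return {}
    # Project down to the allow-listed fields before returning to the agent.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

A thin wrapper would return the whole record and trust the agent (or a prompt) to ignore the sensitive fields; here the tool's return shape is the boundary.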

Ethan Heppner:

💯. Management is the original solution to the alignment problem. And so far, as technology has improved, the percentage of the workforce in managerial roles has only increased: https://www.2120insights.com/i/150163373/management

I'm also reminded of these figures from Google where even though 30% of code is AI-generated, engineering velocity has only increased 10%: https://x.com/krishnanrohit/status/1933010655965294944

Even if they are really beating that METR RCT, deciding what you want and then validating/accepting the output is more work than many people assume!

Maybe more of this gets automated over time, but at a high enough level, the buck is always going to stop somewhere with a human, unless AI agents are somehow given property rights (but I don't see this ever being a popular political view).

13 more comments...