Barn-mucking #52,340,039,140:
I find it hilarious that this guy has unironically pasted a soy-boy meme of himself in the thumbnail without even realizing it ... LMAO
Here are his phases of AI acceptance:
1) Ignore it
2) Skepticism
3) Curiosity
4) Acceptance
5) Optimization
I fast-forwarded through some snippets of this because I'm not going to listen to the one-millionth AI hype-pitch about why I should believe that 2025 is the year that literally everything becomes AI. He doesn't seem to be as irrational as most AI hype-bros, but he's still missing the actual stakes involved and ends up wide of the mark with respect to how AI is actually going to change software (among other things).
Let's break it down in terms of economics. Let's suppose Bob wants to build a house. Bob has no house-building skills or equipment (or time), so he's going to have to pay somebody to build his house for him. So, Bob calls up Charlie and hires him to build a house. Now, Bob is going to give Charlie, say, $250,000 and Bob expects to receive in return from Charlie one completely built house. That includes all the things that a house is supposed to have, and none of the defects that are unwanted in a house. Charlie is supposed to build a house that Bob wants, according to Bob's specifications. If Charlie can't do this, he's supposed to tell Bob right at the outset and this will even be written into the contract to MAKE SURE that Charlie is going to hold up his side of the deal. Bob's reason for believing that Charlie can and will deliver the house in exchange for the $250k is that Charlie has done this kind of project before, successfully, with positive reviews from satisfied customers. Perhaps Bob got Charlie's number by a word-of-mouth reference from a trusted friend who had earlier hired Charlie.
Why am I talking about house-building? What does that have to do with AI and software? Well, notice that everything in the previous paragraph is
inextricably human. Every single component of what is happening, here, is thoroughly human ... human agreements, human expectations, human exchange, human communication, human friendships and social networks, etc. What can AI add to these building-blocks? Absolutely nothing. It can facilitate them (just as the Internet or phones facilitate them) but it cannot
add to them or enrich them in any way. The AI can
at best be a matchmaker, a go-between, a finder, and so on, and so forth. Those are valid activities, but they are all service activities, they are not legally agentic activities. And that's where the rubber meets the road:
legal agency.
The important thing about Charlie's agreement to build the house in exchange for the $250k, from Bob's perspective, is that if Charlie for some reason
doesn't do what he has agreed to do, Bob has punitive recourse against Charlie. Bob can sue Charlie or even have him charged with criminal fraud, depending on the severity of the contractual breach. Charlie is a
legal agent, he is a decision-maker who can be held to legal account in a court-of-law for his decisions. In this specific respect -- legal agency -- AI can, by definition, add absolutely nothing whatsoever, not even as a service! AI is strictly out-of-scope in respect to legal agency because AI cannot be punished!
Now, let's say that Bob wants to build a new mobile app. Bob has no skills in app-building, any more than he does in house-building. So, Bob calls up Dave and contracts with Dave to build an app for him. Bob is going to pay Dave $25,000 for the app that is to be delivered, according to XYZ specification, which is fixed when the agreement is signed and the payment made. With respect to the economic transaction occurring here, this is
precisely the same situation as the house-building; the only difference is that the product being delivered is digital rather than physical. Once Dave is paid, he spins up an AI bot and has the AI write the entire app for him. With just 10 hours of total work, start-to-finish, Dave has a fully functioning app ready for Bob to start testing. To avoid shocking Bob too much, Dave waits until the end of the week to make final delivery of the app. Bob is still astonished at how quickly Dave was able to do the work, but pleased that he can start testing the app right away and get it ready to be formally launched to his customers.
Notice how the AI's role in this example is
purely incidental to the fundamental transaction that took place. Does Bob care whether Charlie hires humans to build his house, or whether Charlie utilizes some advanced, whiz-bang home-building robotics to build it? Absolutely not; all that Bob cares about is that the house is
built to specification, according to the contractual agreement, on time, and that this work is completed for the agreed price ($250k). The same applies to the mobile app that Bob pays Dave to write. It does not matter to Bob whether Dave built the app in 10 hours or 40 hours, nor does it matter to Bob whether Dave used AI or not. What matters is that the app works to specification, and is delivered on time for the agreed price.
All of this "AI will replace us all!" crap is made-up nonsense being pushed by delusional people who probably shouldn't be allowed outside of a padded cell. They are confusing the shovel with the man who uses the shovel to dig a ditch. If I pay you to dig a ditch, I'm not paying you to "use a shovel"; I'm paying you to
dig a ditch. If you rent a Ditch-Witch to do the job more quickly and efficiently, so be it, that's your choice in how to do the work that was contracted.
Understanding the boundary between "what" and "how" is absolutely crucial to understanding the future of AI. In engineering, we call this "specification domain" versus "solution space". The specification domain is
WHAT is to be done. The solution space is
HOW it is done. AI may change the solution space ever so much, but it cannot -- by definition! -- alter the core components of human agency that comprise the specification domain. Bob is paying Charlie to
build a house, not to use a particular building-technique, such as hand-hammering all the nails, versus using nail-guns. Or, if that were to be specified in the contract because Bob cares so much about it for some reason, Charlie is going to adjust the price for the job accordingly. But no matter how much of the "HOW" is specified in the contract (which determines "WHAT" is to be delivered), the WHAT-ness of human agency can never be eradicated or even altered! Charlie remains a legal agent who is
liable for his delivery (or non-delivery) of the contracted goods! All of this applies to the software industry with the same force as it does to home-building. AI cannot add one iota of value to human legal agency because AI cannot be punished. It really is
that simple.
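The WHAT/HOW split above has a direct analogue in everyday software practice: an acceptance test pins down the specification domain (what must be delivered), while the implementation behind it lives entirely in the solution space and is interchangeable. A minimal sketch in Python (all names are mine, purely illustrative):

```python
# The WHAT: an acceptance check Bob could attach to the contract.
# It says nothing about HOW the function was built -- by hand, by AI, whatever.
def meets_spec(sort_fn) -> bool:
    """Contract: given any list of ints, return them in ascending order."""
    cases = [[3, 1, 2], [], [5, 5, 1], [-1, 0, -2]]
    return all(sort_fn(c) == sorted(c) for c in cases)

# One HOW: Dave writes it himself (a simple insertion sort).
def handwritten_sort(items):
    out = list(items)
    for i in range(len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

# Another HOW: Dave delegates the work (a stand-in for "the AI wrote it").
def delegated_sort(items):
    return sorted(items)

# Bob only ever inspects the WHAT; both HOWs satisfy it equally.
print(meets_spec(handwritten_sort), meets_spec(delegated_sort))  # True True
```

The point of the sketch: swapping `handwritten_sort` for `delegated_sort` changes nothing in the contract or in what Bob receives, which is exactly why the tool used inside the solution space is economically invisible to the buyer.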
So, regardless of how much software developers dig in their heels against AI -- or don't -- the reality will remain that
AI is just a tool. The Silicon Valley push to recast AI, in the public perception, as something
beyond merely a tool is really a religious agenda. They are trying to push a new religion and a new god onto you, all without even disclosing that they are preaching a new gospel to you. They are trying to pass this all off as "technological progress", but it is no such thing. It is a new entrant into the pantheon of world religions, trying to camouflage itself as "the inevitable march of technology". As soon as AI is presented as crossing over from mere solution-space, into the specification domain -- that is, as soon as AI is no longer a glorified shovel, but is actually telling you what ditch you may or may not dig -- it is no longer a tool or a technology of any kind, it is a religion making theological claims about the powers and authorities of its new-fangled deity: the AI.
Before all is said and done, this is going to get
UGLY...