
AI is No Big Deal: or Laundry Buddy Isn’t Coming to Kill You

AI is no big deal and Laundry Buddy isn't coming to kill you.

When artificial intelligence is depicted in media, it is almost always tied to some ultimate, world-changing event. Judgment Day in Terminator, where humanity is nearly wiped out by an AI system seizing control of the world. The Butlerian Jihad in Dune, where humanity’s rebellion against AI leads to the galactic destruction of all computers and their replacement by specially trained humans known as Mentats. The Matrix, where, well, you get my point. I tend to think popular views on technology, economics, and politics are overly informed by popular media.  There’s a meme on /tv/ that goes “I just saw this movie, tell me what I think about it,” but the meme should probably be, “I just saw this movie, now tell me what I think about society.”

It’s in keeping, then, with my established tradition of being a perennial contrarian when I speak publicly that I predict AI is arriving in the real world with what will amount to a whimper, not a bang.  This tendency of mine is not because I relish being a contrarian; on the contrary, I’d argue that most of my views simply align with what most people throughout history have either believed or known to be true.  I simply don’t see much point in writing or speaking about issues that a million people before me have debated more eloquently than I care to, and where millions or even billions of people hold the same viewpoint I do.

At the moment there are three widely staked-out positions on AI.  There’s the view held by most people who work at AI companies, their investors, and their cheerleaders in the recently developed “effective accelerationist” (e/acc) movement, which counts among its ranks tech barons like Marc Andreessen, Garry Tan, and Elon Musk: that AI is going to radically change everything, every industry, everyone’s life, and usher in some brilliant new tech utopia that will generate trillions upon trillions in new shareholder value for a couple dozen or so already gigantic companies.

Then there’s the “doomer” viewpoint held largely among overeducated pseudointellectuals like Nick Bostrom and the less recently developed “effective altruist” (EA) movement, which counts among its ranks recently convicted mega-fraudster Sam Bankman-Fried, washed-up poker players like Liv Boeree and Igor Kurganov, and the infamous Twitter whore Aella.  These people hold the view that further development in artificial intelligence will likely, or almost certainly, condemn all of humanity to a dystopic future under a silicon tyrant, or to outright extinction.  This is the type you’ll see arguing about what they believe the p(doom) is, that is, what they think the probability is of hell on earth being unleashed by thinking computer chips.  They’ll throw around wild numbers, ranging from 5% all the way up to 100%, numbers they’ve in every case pulled directly out of their ass with no evidence, experimentation, or testable hypothesis behind them, and use these numbers to make arguments ranging from the claim that NATO should airstrike data centers (like Tucker Carlson) to the claim that any gamer with a GPU capable of getting over 240 FPS in Counter-Strike should be sent to a gulag and have their property confiscated.  They talk about concepts like “foom,” or a Fast Out-Of-the-gate Model, where an AI model that is proficient at telling people how to do their laundry suddenly has an exponential explosion of intelligence driven by its ability to improve upon itself, becomes Skynet overnight, and we all grip the fence.

The “doomer” viewpoint is also frequently promoted by many key figures at the most prominent AI and tech companies, which to outsiders seems paradoxical.  To people who know how the game is played, their motivations are transparently obvious: they’re simply lobbying for regulatory capture.  Regulatory capture, because in the nascent state most of this technology is in, none of these companies has much of an actual defensible technology moat that would prevent some upstart company from making a better or cheaper mousetrap and stealing market share away from them.  They go to D.C. and to the E.U. and give these long, flowery, often behind-closed-doors speeches that basically all amount to “regulate me harder, daddy,” and try to pull the ladder up behind themselves.

There’s a third view, most prevalent in the conspiratorial circles I find myself most comfortable in, which goes, basically, that we are already living in a dystopia and that AI will be used by governments, the Jews, and the technoautistocracy to make everyone’s life worse and replace the original cast of Twin Peaks with black people in the name of DEI.  Of the three most common viewpoints, I would concede that this one is the most plausible, since it probably best describes the actual motivations of most of the people behind the development of this technology, but it ignores a key part of the puzzle: motivation alone is not enough to commit a crime, you also need the means and the opportunity.

I’d like to present a fourth viewpoint.  It’s not entirely unique to me, though I was one of the first to begin voicing it over a year ago now, during the initial AI hype craze, and it is becoming increasingly common among already jaded and cynical people who’ve been around technology for at least a decade and been through multiple hype cycles.  The view is simply that, for a multitude of reasons that can be condensed into a single explanation, the explosive progress we’re witnessing in AI development will fizzle out relatively quickly, and that almost all of the contemporary conversation around it is based on marketing hype, and on the exuberance or panic that hype produces.

I’ve been around tech, and in particular tech marketing, long enough to know that almost everything you see in a demo video or keynote speech is complete, unadulterated bullshit being presented by a textbook narcissistic sociopath.  Of these sociopaths, the most obvious example is Sam Altman, the gay and Jewish founder and CEO of OpenAI.  Sam Altman gives interviews where he knows he’s just lying his ass off and doesn’t hesitate for a moment throughout any of it.  Ever since his original startup, Loopt, an app for gooners to IRL stalk girls they got a mutual with on Facebook, failed, Altman has been nothing but a pathological liar who has made a career out of telling people the bullshit they want to hear, or the bullshit his investors want to hear him tell people.  He recently admitted in an interview that while he was the CEO of Y Combinator, a “tech accelerator” that takes 7% of a company’s stock in exchange for some cash and some advice, he gave nothing but bad advice to his portfolio companies and followed exactly none of his own advice when building his own company.

I’ve created, and been a part of, some extraordinary cap in my own career, but I’ve never seen anything as flagrant as what you see in AI marketing today.  OpenAI literally tells its investors that they don’t need to figure out how to become profitable because they’re going to invent God, and then they’ll just ask God how they can be profitable, and God will tell them.  That’s a quote. Google rolls out demo videos which are nothing but green-screened cellphones showing off all kinds of technological magic tricks, then when they launch their search engine AI it tells people that a week has eight days in it, that the cure for depression is jumping off a bridge, and that Chris Paul’s “CP3” nickname stands for “Child Porn 3”.  Notwithstanding this, there’s obviously quite a lot that’s impressive about current AI models, but they’re impressive in the way that it’s impressive to see a retarded kid do a backflip on a dirt bike.  You know it’s technically skillful, it might be far beyond what you expected, but there’s also something undeniably broken about it.  Broken in a way that’s probably beyond anyone’s ability to fix.

At the core of the optimist, exponential-growth, e/acc point of view lies an argument that basically goes like this.  You take an AI image recognition model, you feed it pictures of apples and oranges, and it learns to recognize what an apple is and what an orange is.  That much has basically been proven now.  They then take it a step further and say that if you keep increasing this dataset of apples and oranges, give it as much information about apples and oranges as you can, eventually it reaches a level where you can show it a picture of a banana and it recognizes that it’s a banana, because the banana is implied by all of the other data it has.  Of course, it’s a little more complicated than this, but that’s the core argument: train an AI to solve a specific kind of problem well enough and it will eventually be able to shed that specificity and apply itself to solving very large, general problems.  At that point your AI has gained the ability to extrapolate and has progressed from a weak, narrow capability to a strong, general capability, or “Artificial General Intelligence” (AGI).
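To make the apples-and-oranges setup concrete, here’s a minimal sketch of the narrow half of that argument, the part that has actually been demonstrated, using scikit-learn and invented “redness/roundness” features (every number below is made up for illustration and stands in for no particular real model). Note what happens at the end: the narrow classifier can only ever answer “apple” or “orange,” and it will do so with complete confidence even when you hand it a banana.

```python
# Minimal sketch of the "apples and oranges" classifier described above.
# Features and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features: [redness, roundness]. Apples are red and round,
# oranges are less red but just as round. 200 samples of each.
apples  = rng.normal(loc=[0.9, 0.9], scale=0.05, size=(200, 2))
oranges = rng.normal(loc=[0.5, 0.9], scale=0.05, size=(200, 2))

X = np.vstack([apples, oranges])
y = np.array([0] * 200 + [1] * 200)   # 0 = apple, 1 = orange

clf = LogisticRegression().fit(X, y)

# The model is genuinely good at the narrow task it was trained on...
print(clf.score(X, y))                 # ~1.0 accuracy on apples vs. oranges

# ...but it has no concept of "banana" at all. Shown something yellow and
# elongated, it can only ever answer "apple" or "orange", confidently.
banana = np.array([[0.7, 0.2]])
print(clf.predict(banana), clf.predict_proba(banana))
```

The leap the optimists make is from this, which works, to the claim that enough apples and oranges eventually imply the banana, which is precisely the part that has not been demonstrated.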

AGI is also at the core of the doomer view, and more or less the conspiratorial view as well.  All these groups take note that there has been an explosive upward trend of progress in AI development in a relatively short period of time, and then presume that the only way that trend can go is further upward, at the same pace, forever.  However, just because you’ve seen a massive change in one area in a short period of time, that is no reasonable evidence that you can expect an equally massive change to suddenly happen again, and again, and again, in equally short periods of time.  As an example, if you compare Half-Life 1 to Half-Life 2 you can see massive changes.  The game went from being a clever, if relatively basic, 90s hallway shooter to having a revolutionary physics engine where, for the first time, you could pick up almost any object in the game world and throw it around.  Fast forward 20 years and there is no Half-Life 3, and the most popular video game is still a version of Counter-Strike that is minimally different from the Counter-Strike: Source that shipped as a prelude to Half-Life 2.

The core of the perception issue is that people suddenly went from a world where there was no deepfaked Alexandria Ocasio-Cortez porn generator to a world where there was one, and it seemed to develop over the course of a couple of months, so now people expect that level of progress to keep coming every couple of months, or even faster if they’re working under the presumption that AI itself will enhance the rate at which AI programs develop.  Going from 0 to 1 is an enormous step, and human perceptual biases generally lead people to believe that 0 to 1 also implies 1 to 10.

The fundamental problem with this trend projection, however, is that the experimental evidence for what you can do with AI models currently only solves for 1, and getting from 1 to 10 involves an entire list of exceptionally harder problems.  One of these is the rare event problem: how do you train an AI to recognize an event that happens maybe only 1 out of 10 million times?  You might think an event that happens only 1 out of 10 million times isn’t worth thinking about, but it certainly is, because people witness exceptionally rare events every day.  Take, for example, the formation of raindrops.  Given the motion of all the water vapor molecules in a cloud, the incidence of several of those molecules colliding with each other to form a raindrop is an extremely rare event within the context of all the possible paths those molecules could take.  Yet people see rain so often that a single raindrop is regarded as a banally common thing, even though, within the system that created it, its creation was an exceptionally rare event.  The sum total of events you witness in a day is the product of billions upon billions of interactions that slip by your perception, to the point that 1-in-10-million events start to be perceived as common occurrences.  Even if you shrink the scope of the data you’re analyzing substantially, and have, say, a Doctor AI replace doctors working in hospitals, the rare event problem pops up with alarming frequency.  Around 93,000 patients are admitted to US hospitals every day. At that rate, if every patient were examined by an AI, you could expect a patient with a diagnosis that affects 1 in 10 million people to show up about once every four months.
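That arithmetic is easy to check. Here’s a quick back-of-the-envelope in Python using the 93,000 admissions-per-day figure from above; the second number adds a simple Poisson assumption (cases arriving independently at a constant rate), which is my simplification, not something claimed in the text:

```python
import math

# Back-of-the-envelope check on the hospital example above.
admissions_per_day = 93_000      # US hospital admissions per day (figure from the text)
rarity = 1 / 10_000_000          # a condition affecting 1 in 10 million people

expected_per_day = admissions_per_day * rarity
days_between_cases = 1 / expected_per_day
print(f"~1 case every {days_between_cases:.0f} days "
      f"(~{days_between_cases / 30:.1f} months)")
# -> ~1 case every 108 days (~3.6 months), i.e. roughly once every four months.

# Probability of seeing at least one such case in a 120-day window,
# modelling arrivals as a Poisson process.
lam = expected_per_day * 120
print(f"P(at least one case in 120 days) = {1 - math.exp(-lam):.2f}")   # ~0.67
```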

Another problem for further exponential growth in AI that you can test experimentally is the problem of imprecise vs. precise work.  AI is quite good at doing work where there is a wide tolerance for imprecision. If you ask an AI model to draw you a picture of a car dashboard, it can do this perfectly well.  Ask it to draw a 1993 BMW M3 dashboard with 85 mph on the speedometer, 5,000 rpm on the tachometer, and 40% fuel in the tank, and it quickly fucks up and can’t do any of that, and this is only asking for slightly more precision.  Now take this example of the car dashboard and imagine it’s one small detail in a drawing you want to render where there are 20 cars on a street, all different makes and models, with unique details about each of them, their drivers, the trees along the road, the buildings, and so on. Then extrapolate further and imagine you’re trying to render a Grand Theft Auto-style game with 11,000 streets. New York City has 32,000 streets, for reference.
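One way to see why precision compounds so badly is to treat every precisely specified detail as something the model gets right with some probability less than one. The 95% per-detail accuracy and the independence assumption below are invented purely to illustrate the shape of the problem; they aren’t measurements of any real model:

```python
# Rough illustration of how demands for precision compound.
# All probabilities here are assumptions for the sake of argument.
per_detail_accuracy = 0.95   # assume any single precise detail comes out right 95% of the time

def p_scene_correct(num_details: int, p: float = per_detail_accuracy) -> float:
    """Chance that every precisely specified detail in a scene comes out
    right, treating the details as independent."""
    return p ** num_details

print(p_scene_correct(3))       # one dashboard, three gauges:        ~0.86
print(p_scene_correct(20 * 5))  # 20 cars x 5 precise details each:   ~0.006
print(p_scene_correct(11_000))  # one detail per street in the game:  ~0.0
```

Wide-tolerance work survives this; precise work doesn’t.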

The comic writer Alan Moore is notorious for producing exceptionally precise scripts. He’ll use an entire page of text or more for a single panel, dictating to the artist precisely what he wants to see in it. This has pissed his editors off a few times, but he’s been able to work for decades in this manner. Feed one page of an Alan Moore script to an image generation model and it will completely shit itself.  Yet Alan Moore has written dozens of graphic novels working with dozens of artists, and he’s considered one of the best, if not the very best, in the entire industry at writing comic scripts.  He demands an exacting level of precision from the artists he works with, but they deliver it time and time again to produce award-winning collaborative work.

A single page of Alan Moore script.

In general, I predict that AI development is going to run into, or has already run into, what I’d refer to as a singularity of complexity.  That is, while the first chunk of the work toward this goal of “AGI” has seemingly come quickly, and at a seemingly exponential rate, I expect this progress will level out and reach a point where each further step becomes exponentially more difficult, until that difficulty effectively approaches infinity.  Maybe you get 20%, or 40%, or even 80% of the way there; maybe we’re already at that level.  However, I predict that eventually it hits a point where getting the next 1% becomes exponentially more difficult, and then the next 0.1%, and then the next 0.01%, until you hit a point where the diminishing returns make further efforts at progress entirely futile.
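If you want to see what that curve looks like numerically, here’s a toy model in which each additional percentage point of progress costs a fixed factor more than the one before it. The 1.3x factor is pulled out of thin air to show the shape of the curve, nothing more:

```python
# Toy model of the "singularity of complexity": every additional percentage
# point of progress costs GROWTH times more than the previous one.
# GROWTH is an arbitrary assumption chosen for illustration.
GROWTH = 1.3

def cumulative_cost(percent: int) -> float:
    """Total cost of reaching `percent`% progress, in units of the first 1%."""
    return sum(GROWTH ** i for i in range(percent))

for p in (20, 40, 80, 99, 100):
    print(f"{p:>3}% of the way there costs ~{cumulative_cost(p):,.0f}x the first 1%")

# With this growth factor, the last five percentage points alone account for
# roughly three quarters of the total cost of the whole journey.
```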

Each additional level of precision you demand from an AI model requires exponentially more training data, and the rarer the event you ask a model to predict, the more data it needs.  As you work further and further along these paths toward this goal of “general intelligence,” it’s quite likely that either the amount of data you need to collect is impossibly large, surpassing humanity’s ability to actually observe and record it, or the amount of storage and processing power you need for a model capable of working on all of that data is impossibly large, surpassing the total amount of silicon on the planet.  Or both.  And these are just two of the multitude of problems you encounter in trying to produce an AI model of that level of complexity.  Others include hardware constraints and the rate of progress in hardware.  Software companies have become a darling of investors because they’re perceived as lending themselves nicely to exponential growth: you don’t need factories, supply chains, logistics, or even many workers to grow a software company at an exponential rate.  AI companies are not simply software companies, though.  They’re also enormous consumers of hardware, the hardware constraints are very real, and progress in computer hardware has been coming at an increasingly slower rate.  The memory chips in NVIDIA’s latest line of consumer GPUs are based on what is effectively 6-year-old technology at this point.  Assuming a new technology is introduced in the next line, can we expect the one after that to take another 6 years?  8 years?  20 years?  For what, based on previous results, we could expect to be at best a 2x increase in performance?
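The data-requirement half of that claim can be sketched in the same spirit as the earlier examples. If you accept the rule of thumb (an assumption on my part, not a measured constant) that a model needs to see an event on the order of a hundred times before it learns anything useful about it, the raw data you have to collect scales directly with the event’s rarity:

```python
# Rough sketch of how data requirements scale with event rarity.
# EXAMPLES_NEEDED is an illustrative assumption, not a measured constant.
EXAMPLES_NEEDED = 100

for one_in_n in (1_000, 1_000_000, 10_000_000):
    samples = EXAMPLES_NEEDED * one_in_n
    print(f"1-in-{one_in_n:,} event: roughly {samples:,} labelled samples needed")
```

And that’s for a single rare event; a model expected to handle every 1-in-10-million event at once multiplies this requirement across all of them.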

If you acknowledge that there has been significant progress in AI research in a short period of time to date, there are essentially three different scenarios you can predict for how it will continue to progress over time.

In the graph above, the optimistic OpenAI-stockholder-shill scenario is represented in yellow as exponential progress, with a more balanced linear progression represented in red.  It’s important to note that literally zero technologies have ever progressed over time in a linear fashion.  CPU clock speeds have not progressed over time in a linear fashion.  VRAM capacities on GPUs have not progressed over time in a linear fashion.  The scenario I think we’ll experience instead is outlined in green: what looks like exponential growth over a very short period of time that just as quickly flattens out, so that as time progresses you end up seeing very little further progress.
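For anyone who wants to reproduce the three curves, here’s a quick matplotlib sketch. The functional forms (exponential, linear, logistic) and every constant in them are chosen only to match the qualitative description above; nothing is fitted to real data:

```python
# Sketch of the three progress-over-time scenarios described above.
# All curve shapes and constants are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)

exponential = np.exp(0.5 * t) - 1           # the e/acc shareholder scenario
linear      = 5 * t                         # steady progress (no technology actually does this)
plateau     = 50 / (1 + np.exp(-(t - 2)))   # fast early gains that flatten out

plt.plot(t, exponential, color="gold",  label="exponential (optimist)")
plt.plot(t, linear,      color="red",   label="linear")
plt.plot(t, plateau,     color="green", label="rapid start, then plateau")
plt.xlabel("time")
plt.ylabel("progress")
plt.ylim(0, 60)
plt.legend()
plt.show()
```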

I think I’ve firmly established that I’m not an AI doomer, because I don’t see a scenario where AI advances to the point of being capable of threatening doom.  Nor am I necessarily an AI hater; I’m just entirely skeptical of the prospect of much further exponential growth happening in the field.  I think AI technologies will be enormously transformative across many industries, but not to the extent that the cheerleaders or doomsayers anticipate.  The area where I envision AI products having the most impact is on solo entrepreneurs and small teams.  Say, for example, you’re an expert in product marketing with a crude understanding of product development.  AI tools can bring your competency in product development up to the point that you can create your own products and market them without needing the help of other people.  The result might be a mediocre product, but you can then bring in a consultant to help iterate on it and refine it over time, while you get to work selling the initial mid-pack product to fund those costs.  Or you might be a music producer who is competent at playing keyboards but can’t program drums well or sing worth a damn.  AI tools can help fill in those gaps to mask your inadequacies and let you get songs out that put a spotlight on your competencies and get you collaborations with people whose talents synergize with yours.  So the people who benefit the most from these technologies will be those crafty enough to use these tools to build around what they’re weak at, and those skilled enough to step in and fill the gaps that AI models won’t be strong enough to cover.

I see no future, however, where AI tools reach such a level of general competence that they perform better than any one human at every single task, and I very much doubt that any of us will be gripping the fence anytime soon.  To bring up Alan Moore again, he once asked in Marvelman and the Electronic Brain, “What happens when an information machine learns to think for itself?  What happens when that machine, an electronic brain, turns criminal?”  Before we need to answer that question, though, I think we’ll first have to consider another one: “What happens when investors who have pumped over a trillion dollars into a machine learn that it’s worse at figuring out how many days are in a week than Bodybuilding.com forum posters?”

1 Comment

Very interesting article. I don’t know anyone using it. I’m blue collar, and even in CAD modelling (we work with technical drawings, do we not), I’d be surprised if an AI could get it right. Like its human counterpart, most of the time the AI won’t be dealing with the far too many factors that only appear in situ. Hence the examples chosen for the article.
How many times have I seen technical drawings that don’t show some 30 ft long structural beam, or data lines given on drawings that don’t refer to anything that exists or ever would exist in the building.
And I’m just on the tools, haha.

A fourth theory could be that we’re only being shown a tenth of the product, as has been known to happen in various media. And the real silicon is harvested from humans.
Game over, Typhus, we will have our Skynet!

Joke aside, it’s somewhat of a white pill. I’ll still have to argue with and physically threaten whoever fucked it up on the tertiary (?) side. God bless you, son of a bitch.
