Interesting where ChatGPT goes for ‘high performing system’. Mechanization seems the narrowest of views, ‘efficiency’ in a vacuum. One might view my methods of growing a number of fruits and vegetables as absolutely not a high performing system. AI would zero in on labor hours, yield, speed… unless you are obsessively focused on taste. Then everything changes. Beating nature’s rhythms gives way to respecting them. A neighbor bit into a tomato variety I grow and almost wept with joy at the mouth-feel and taste, saying she’d never imagined a tomato could be so delicious. But they’re supposed to taste that way, and most people haven’t noticed that they no longer know what a tomato can or should taste like. And they can thank high-performing systems… that are utterly misdefined. I’m sure your people-focused brain is grasping a much larger point here. BTW, I love AI/ML and have been developing some pretty amazing applications in healthcare. It is misunderstood, as you say. It is not magic. It’s just bloody damn good at certain things, and that’s where I focus.
Performance towards what outcome?
I’m reminded of the worst parts of the business process re-engineering movement. A lot of good things were accomplished, but there were also a lot of people doing useless processes five times faster. Your comment was targeted elsewhere, to be sure. I note that AI often operates under assumptions that are not properly tested or examined. So much is taken on faith, which is dangerous when the user is not a subject-matter expert in the area where they are engaging AI. A scarier version of my great-grandmother stating the famous, “If it’s printed in the papers, then it’s true.”
Aptly enough the AI generated image of a high performing system features a large number of unconnected gears...
AI needs to be given more credit - clearly hidden depths of nuance and hidden meaning.
First rule when using AI: get your guardrails in place.
I’m actually a big AI fan. Five years ago, when we were kicking around an idea that became our startup, we had a big box on the whiteboard called the “affinity engine”. It was going to do all kinds of magic, and we thought, shit, that’s going to be tricky... Fast forward three years and a critical part was solved by an early Google AI; fast forward to today and pretty much every use case can be handled by LLMs (and guardrails).
'Sam' needs you.
Hello John
Good article, John.
I sometimes remind my wife that it's much easier out there now because "average" is at such a low level of performance now that anyone who performs at the old standard of average (even from 10 years ago) is now "outstanding". Our kids will have it much easier than we did for standing out from the crowd.
I agree with your assessment of machines ("AI" and LLM and ML). Soulless corporations might rely on them more and more, but not to their, or anyone else's, benefit - only in order to seek increased "profits". The problem is analogous to the fallacy American manufacturing fell for in outsourcing everything to China - there wasn't really the pretended benefit in quality or even profit, only a shifting around of beans for the bean counters. And in the end the result, gutting American manufacturing, was detrimental even to the companies who participated in it.
AI isn't really AI. I hate the common misuse of the term. It should be called something like "Predictive Modeling". Contrary to what most technologists believe, it never will be true AI. General AI is a myth that will be chased to the detriment of all. The "AI" we do have, and will continue getting more of, will execute an incestuous, rapidly accelerating cycle of lower- and lower-quality output. AI cannot create; it relies entirely on creative information (created by humans). As humans create less information (because we will rely on AI more and more), the quality will degrade. Cycles of AI being trained on its own output will lower the quality until it's fairly useless, or at least very suboptimal.
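The self-training degradation cycle described here is sometimes called "model collapse", and the basic dynamic can be illustrated with a deliberately simple simulation. This is a toy sketch, not anyone's actual training pipeline: the Gaussian distribution, sample size, and generation count are all assumptions chosen purely for illustration.

```python
import random
import statistics

random.seed(0)  # deterministic run for reproducibility

def collapse_demo(n_samples=20, generations=500):
    """Toy 'model collapse': repeatedly fit a Gaussian to samples,
    then draw the next generation's training data from the fit.
    Finite-sample estimation error compounds across generations,
    and the learned distribution's spread decays toward zero --
    the model 'forgets' the diversity of the original data."""
    mu, sigma = 0.0, 1.0  # generation-0 "real" (human-made) data
    history = [sigma]
    for _ in range(generations):
        # Train the next model only on the previous model's output.
        data = [random.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # biased MLE: shrinks on average
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"spread after 0 generations:   {hist[0]:.4f}")
print(f"spread after 500 generations: {hist[-1]:.4f}")  # collapsed toward zero
```

The mechanism is only an analogy for what happens with LLMs, but it captures the commenter's point: each generation can only reproduce (a noisy, narrowed estimate of) what the previous one emitted, so variety is lost and never recovered without fresh human-created input.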
I see it already. White papers, news articles, studies, etc. all written by AI (it's very obvious) and the quality is getting worse.
I could cry over it but instead I take a defiant stance and I welcome it - bring it on. The creative humans left will be outstanding and in high demand. I'm preparing my kids for this.
Per your AI data comment: https://alexandrabarr.beehiiv.com/p/synthetic-data
Many thanks Richard and lovely to hear from you.
With you on 'AI'. I recently attended a panel that was ostensibly about "whether we invest in ‘technology’ or ‘people’ first". (Guess why I went - and guess what my position was?)
It resulted in a long document that I wrote to get it off my chest - though it remains unpublished. But here is a relevant extract.
> The evening turned out not to be about technology and people and all about AI and corporate staff.
> The broad term ‘A.I.’ was overused, because the conversation was largely about LLMs .. which are only part of a part of a part of AI - and AI in turn is but a fraction of ‘software’, which is a fraction of ‘hi-tech’, which is a fraction of ‘technology’ … and much of it had nothing to do with the vast majority of people who live in New Zealand.
> In fact it even had little to do with the attendees on the night, since again - the focus was actually staff, employees, partners .. that work inside ‘Corporate New Zealand’.
Yes - they are people - but a tiny, tiny subset of ‘people in New Zealand’, let alone attendees.
I'm glad you're posting again! Always thought provoking.
It's worrisome - the feverish, breakneck pace at which a small handful of individuals are building things that will have far-reaching impact on every human on the planet. And without so much as a pause to ask, "but should we?" Humanity needs to mature in its scientific and engineering discipline to include philosophy, ask the hard questions, and then have the discipline NOT to do something. Otherwise, I'm afraid we will not make it past The Great Filter.