AI isn’t taking off now, because it already took off in the 60s. Heck, they were even working on neural nets back then. Same as in the 90s, when they actually got them to be useful in a production environment.
We got a deep learning craze in the 2010s and then bolted that onto neural nets to get the current wave of “transformers/diffusion models will solve all problems”. They’re really just today’s LISP machines: expected to take over everything, but unlikely to actually succeed.
Notably, deep learning assumes that better results come from bigger datasets, but we have already trained our existing models on the sum total of humanity’s writings. What’s more, current training is hampered by the fact that a substantial amount of all new content is itself AI-generated.
Despite how much the current approach is hyped by the tech companies, I can’t see it delivering further substantial improvements by just throwing more data (which doesn’t exist) or processing power at the problem.
We need a systemically different approach, and while it seems like there’s all the money in the world to fund the necessary research, the same seemed true in the 50s, the 60s, the 80s, the 90s, the 10s… In the end, a new AI winter will come as people realize that the current approach won’t live up to their unrealistic expectations. Ten to fifteen years later, some new approach will come out of underfunded basic research.
And it’s all just a little bit of history repeating.
“in the 60s. Heck, they were even working on neural nets back then”
I remember playing with neural nets in the late 1980s. They had optical character recognition going even back then. The thing was, their idea of “big networks” was nowhere near a big enough scale to do anything as impressive as categorizing images: cats vs. birds.
We’ve hit the point where supercomputers in your pocket are…
The Cray-1, a pioneering supercomputer from the 1970s, achieved a peak performance of around 160 MFLOPS. It cost $8 million (about $48 million in today’s dollars) and weighed 5 tons.
Modern smartphones, even mid-range models, are significantly faster than the Cray-1. For example, back in 2019 a Google Pixel 3 achieved 19 GFLOPS.
19 GFLOPS is 19,000 MFLOPS, and 19,000 / 160 ≈ 119, so over 100x as powerful as a Cray from the 1970s.
I just started using a $110 HAILO-8 for image classification. It can perform 26 TOPS, which is over 160,000x a 1970s Cray (granted, the image processor works with 8-bit ints while the Cray worked with 64-bit floats… but even discounting 8x for the precision difference, that’s still about 20,000x the operational power for 1/436,000th the cost and 1/100,000th the weight).
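For anyone who wants to sanity-check those ratios, here’s a quick back-of-the-envelope sketch. The Cray-1, Pixel 3, and HAILO-8 numbers are the ones quoted above; the 8x discount for 8-bit ints vs 64-bit floats is my own assumption for how the ~20,000x figure falls out.

```python
# Back-of-the-envelope check of the figures quoted above.
cray1_mflops = 160                 # Cray-1 peak: ~160 MFLOPS (64-bit floats)
cray1_cost_usd_today = 48_000_000  # $8M then, ~$48M in today's dollars

pixel3_gflops = 19                 # quoted Pixel 3 figure
hailo8_tops = 26                   # HAILO-8: 26 TOPS (8-bit integer ops)
hailo8_cost_usd = 110

# Pixel 3 vs Cray-1, both converted to MFLOPS
print(pixel3_gflops * 1_000 / cray1_mflops)      # ~119x

# HAILO-8 vs Cray-1, raw operations per second
raw_ratio = hailo8_tops * 1_000_000 / cray1_mflops
print(raw_ratio)                                 # ~162,500x

# Discount 8x for 8-bit ints vs 64-bit floats (my assumption)
print(raw_ratio / 8)                             # ~20,300x

# Cost ratio
print(cray1_cost_usd_today / hailo8_cost_usd)    # ~436,000x cheaper
```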
There were around 60 Crays delivered by 1983; HAILO alone is selling on the order of a million chips a year…
Things have sped up significantly in the last 50 years.
There are AI experts (I could air-quote that, but really, people who work in the industry) who are legitimately skeptical about the speed, power, and real impact AI will have in the short term, so it isn’t the case that everyone who “really knows” thinks we’re getting doomsday AGI tomorrow.
Only the uneducated don’t see AI “taking off” right now.
Every idiot who says this thinks that ChatGPT encompasses all of “AI”. They’re the same people who didn’t get internet in their household until 2010.
I didn’t even imply it hasn’t happened, but ok…
What do I know? I’m just uneducated, right?
Right!