

I just found out the other day that items pinned to the taskbar are in %AppData%\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar. 🤦‍♂️
Microsoft is a sad parody of itself.
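If anyone wants to poke at that folder themselves, here's a minimal sketch in Python. It just builds the path from %AppData%; the fallback value is a made-up example for machines where the variable isn't set.

```python
import os

# Build the pinned-taskbar shortcut folder from %AppData%
# (typically C:\Users\<user>\AppData\Roaming on Windows).
# The fallback path below is a hypothetical example.
appdata = os.environ.get("APPDATA", r"C:\Users\example\AppData\Roaming")
pinned = os.path.join(
    appdata,
    "Microsoft", "Internet Explorer", "Quick Launch", "User Pinned", "TaskBar",
)
print(pinned)  # the .lnk files for pinned apps live here
```

On Windows you can drop the .lnk files in this folder straight into a backup script.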
Hear me out, Eliza. It’ll be equally useless and for orders of magnitude less cost. And no one will mistakenly or fraudulently call it AI.
This might work:
They add capabilities, not replace them.
They poison all repositories of knowledge with their useless slop.
They are plummeting us into a dark age which we are unlikely to survive.
Sure, it’s not the LLMs’ fault specifically, it’s the bastards who are selling them as sources of information instead of information-shaped slop, but they’re still being used to murder the future in the name of short-term profits.
So, no, they’re not useless. They’re infinitely worse than that.
Of course it doesn’t, everyone is wasting time and money on LLMs instead of on proper AI research.
That’s not a reason to call them AI or AGI, though. On the contrary, it’s poisoning the term, because once the LLM bubble bursts no one will want to invest in AI research for decades, because they’ll associate it with LLMs. (Not to mention how hard it’ll be to research anything when all sources of information have been poisoned with LLM slop.)
It’s neither general nor intelligence.
That’s all I want out of AI.
The ability to hurt my computer when it isn’t working properly.
Yes I use AI
No you don’t.
You used a large language model, which is a very fancy statistics-based autocomplete algorithm, but has absolutely nothing whatsoever to do with artificial intelligence, other than by harming public opinion of it and sucking off all the funding that could be used on actual AI research.
I’m fairly certain desktop computers (and PCs in general, including laptops) are a very small portion of the devices that use Linux.
I expect most Linux devices are phones and tablets (Android), followed by embedded devices (though BSD probably outweighs Linux on those), followed by servers, followed by desktop computers.
Doesn’t PlayStation use BSD…?
No, LLMs have always been an evident dead end when it comes to general AI.
They’re hampering research in actual AI, and the fact that they’re being marketed as AI ensures that no one will invest in actual AI research for decades after the bubble bursts.
We were on track for a technological singularity in our lifetimes, until those greedy bastards derailed us and murdered the future by poisoning the Internet with their slop for some short-term profits.
Now we’ll go extinct due to ignorance and global warming long before we have time to invent something smart enough to save us.
But, hey, at least, for a little while, their line did go up, and that’s all that matters, it seems.
Damn cyclists… they ruined cycling!
That is not dead which can eternal lie,
and with strange aeons even death may die.
Oxidane.
Sure, but they’re a minority. Millions, at most, out of billions. Probably less than that.
All modern LLMs are as good as professional mentalists at convincing most of their users that they know what they’re saying.
That’s what they’re designed, trained, and selected for. Engagement, not correctness.
But LLMs truly excel at making their answers look correct. And at convincing their users that they are.
Humans are generally notoriously bad at that kind of thing, especially when our answers are correct.
And if LLMs don’t have the actual answer, they blabber like a redditor, and if someone can’t get an accurate answer they start asking forums and socmed.
LLMs are completely incapable of giving a correct answer, except by random chance.
They’re extremely good at giving what looks like a correct answer, and convincing their users that it’s correct, though.
When LLMs are the only option, people won’t go elsewhere to look for answers, regardless of how nonsensical or incorrect they are, because the answers will look correct, and we’ll have no way of checking them for correctness.
People will get hurt, of course. And die. (But we won’t hear about it, because the LLMs won’t talk about it.) And civilization will enter a truly dark age of mindless ignorance.
But that doesn’t matter, because the company will have already got their money, and the line will go up.
Yes, but search engines will serve you LLM-generated slop instead of search results, and sites like Stack Overflow will die due to lack of visitors, so the internet will become a reddit-like useless LLM-ridden hellscape completely devoid of any human users, and we’ll have to go back to our grandparents’ old dusty paper encyclopedias.
Eventually, in a decade or two, once the bubble has burst and google, meta, and all those bastards have starved each other to death, we might be able to start rebuilding a new internet, probably reinventing usenet over ad-hoc decentralised wifi networks, but we won’t get far, we’ll die in the global warming wars before we get it to any significant size.
At least some bastards will have made billions out of the scam, though, so there’s that, I suppose. 🤷‍♂️
Dragon 32.