More veggies and less fruit; too much sugar. Update: Sorry for the duplicate; my Lemmy client glitched.
I'm from Europe, and I always assumed that America does that because it's by far the cheapest option.
wischi@programming.dev to Technology@lemmy.world • WhatsApp provides no cryptographic management for group messages • English · 4 · 10 days ago
It's not called Meta data by accident 🤣
Shut that section down and ground the wires. Not really that dangerous. It’s only dangerous if you don’t follow protocol.
wischi@programming.dev to Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals • English · 3 · 15 days ago
"Amazingly" fast for biochemistry, but insanely slow compared to electrical signals, chips, and computers. To be fair, though, the energy usage really is almost magic.
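For scale, a rough order-of-magnitude sketch of that comparison (ballpark textbook figures, not numbers from the comment):

```python
# Rough, order-of-magnitude comparison of biological vs. electronic signaling.
# All figures are ballpark textbook values, not measurements.

neuron_velocity_m_s = 100   # fast myelinated axon, roughly 100 m/s
wire_velocity_m_s = 2e8     # signal propagation in copper, roughly 2/3 of c

brain_power_w = 20          # human brain, roughly 20 W
gpu_power_w = 700           # high-end datacenter GPU, roughly 700 W

print(f"Signal speed ratio: ~{wire_velocity_m_s / neuron_velocity_m_s:,.0f}x")  # ~2,000,000x
print(f"Power ratio:        ~{gpu_power_w / brain_power_w:.0f}x")               # ~35x
```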
wischi@programming.dev to Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals • English · 2 · 16 days ago
But by that definition, passing the Turing test might be the same as superhuman intelligence. There are things humans can do that computers can't, but there is nothing a computer can do where it is still slower than a human. That's because our biological brains are insanely slow compared to computers. So once a computer is as accurate as a human at a task, it's almost instantly superhuman at that task because of its speed. So if we had something that's as smart as humans (which is practically implied, because it's indistinguishable), we would have superhuman intelligence, because it's as smart as humans but (numbers made up) can do 10 days of cognitive human work in just 10 minutes.
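Taking the comment's made-up numbers literally, the implied speedup is easy to work out (a toy calculation, nothing more):

```python
# Toy calculation using the made-up numbers from the comment above.
human_work_minutes = 10 * 24 * 60  # 10 days of cognitive work, in minutes
machine_time_minutes = 10          # the same work done in 10 minutes

print(f"Implied speedup: {human_work_minutes / machine_time_minutes:.0f}x")  # 1440x
```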
wischi@programming.dev to Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals • English · 111 · 16 days ago
AI isn't even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would score high in their training process. We don't even know what their goals are (they're likely not even expressible in language), but (anthropomorphized) they're probably something like "answer in a way that the humans who designed and oversaw the training process would approve of".
wischi@programming.dev to Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals • English · 11 · 16 days ago
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist, you'd probe them differently. I'm pretty sure even the first public GPT release would have fooled Alan Turing personally, so I think it's fair to say these systems have passed the test at least since that point.
wischi@programming.dev to Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals • English · 24 · 16 days ago
We don't know how to train them to be "truthful", or how to make that part of their goal(s). Almost every AI we train is trained by example, so we often don't even know what the goal is, because it's implied in the training. In a way, AI "goals" are pretty fuzzy because of the complexity. A tiny bit like real nervous systems, where you can't just state in language what a person's or an animal's "goals" are.
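A minimal sketch of what "trained by example" means here, using a hypothetical toy setup (the names and the lookup-table "model" are illustrative, not how real systems work): the objective only rewards matching the examples, so nothing in it states a goal like "be truthful".

```python
# Toy sketch of training by example. The only training signal is
# "match the example outputs"; no explicit goal such as "be truthful"
# appears anywhere, so whatever "goal" emerges is implicit in the data.

examples = [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
]

# Toy "model": a lookup table we fill in by imitating examples.
model: dict[str, str] = {}

def loss(prediction: str | None, target: str) -> float:
    # 0 if the model reproduces the example, 1 otherwise.
    return 0.0 if prediction == target else 1.0

for prompt, target in examples:
    if loss(model.get(prompt), target) > 0:
        model[prompt] = target  # the "update" is pure imitation

print(model["capital of France?"])  # Paris
```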
This logic doesn't quite add up for me. You could just as well have invented a logo for the CCC yourself, with the justification that it fits the template better.
Especially to "differentiate yourself" 🤣. It's basically the exact opposite.