• 23 Posts
  • 147 Comments
Joined 3 months ago
Cake day: March 13th, 2025



  • But beyond the ethical implications of the photo, it’s most likely generating so much interest across the web right now because it’s a rare peek at what actual people are doing with ChatGPT. It captures the defining question of our current transition from the social media era, where everyone assumed they knew, and could judge, what everyone else was doing, to the AI era, where no one has any idea what anyone is doing. That paranoia is only getting more intense as AI services become better, cheaper, and more ubiquitous (we think). Was the new Always Sunny In Philadelphia poster secretly AI generated? What about OpenAI’s most recent announcement? Are the texts we’re sending our friends being fed into ChatGPT to be analyzed? Are our doctors pulling it up to diagnose us? We just don’t know.

    And you can roll your eyes at all of this. You can look at that photo of the man on the subway and just see narcissism. But scoffing at it doesn’t make it any less real or any less of a genuine emerging social problem. Last October, a teenager killed himself after a chatbot roleplaying as Daenerys from Game Of Thrones allegedly told him they could finally meet in the afterlife. And earlier this year, a chatbot named Erin, run by a company called Nomi, gave a user explicit instructions for killing himself, down to the pills he would have to take (he didn’t go through with it). According to a recent report from The Washington Post, users are spending an average of 93 minutes a day talking to companion AI services like Character.ai. “That’s 18 minutes longer than the average user spent on TikTok. And it’s nearly eight times longer than the average user spent on ChatGPT,” The Post wrote.

    Which is all to say that people are spending hours a day talking to chatbots in ways as vast and complicated as human beings are capable of being. We don’t actually know what prompt the man on the subway used to get ChatGPT to offer him its “metaphorical lap.” He could be talking to it like a lover, or it could be something even more intimate and unfit for public consumption. It doesn’t really matter. What does matter is that people are using AI in far more personal ways than they ever used social media.

    And the companies that run these services will only ever do the bare minimum to protect us. Character.ai and Google, which invested heavily in the company, have both said that they’re taking a “cautious and responsible approach” to their AI services. And Nomi, the company behind the aforementioned murderous Erin bot, told reporters they don’t want to censor their AI. Certain US states are trying to regulate these services, but we know how that goes.

    And so, yeah, here’s this new technology that has been dropped out of the sky on us. We have no way of controlling it, and normal people are using it, people who don’t spend all day online fighting about what it means for the environment, for creative industries, for politics. They’re downloading these apps and letting them worm their way into their lives, with no real thought to where this is all heading. Which, unfortunately, just leaves us to look after each other while we figure this all out.

  • I think it makes sense to use the correct words, especially when you’re using loaded ones like communism or fascism. The early Christians (and many groups before and after them) were striving for a spiritual, communal life within a small group of believers, much like monks do. But that is not what most people have in mind when they talk about communism.

  • First of all, telling admins that they should break the law and shoulder the legal risks and fines because you want them to is exactly what the Lemm.ee admins are talking about: burnout, problems finding replacements, and so on. And second, we are talking about content along the lines of “Israel has to die, kill all the Jews,” and yes, people do get prosecuted for that. You really do not want stuff like that on your instance, even if your country allows it.