• dxdydz@slrpnk.net · 3 days ago

    LLMs are trained to do one thing: produce statistically likely sequences of tokens given a certain context. This won’t do much even to poison the well, because we already have models that would be able to clean this up.
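
    (For the curious: "statistically likely sequences of tokens given a certain context" boils down to something like the toy sketch below. The probability table is made up for illustration; a real LLM learns a conditional distribution over a huge vocabulary from data rather than storing a hand-written lookup table.)

    ```python
    import random

    # Hypothetical toy "model": P(next token | last two tokens).
    # A real LLM learns these relationships; this table is invented for the sketch.
    cond_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
        ("cat", "sat"): {"on": 0.9, "down": 0.1},
    }

    def next_token(context):
        # Condition on recent context, then sample a statistically likely continuation.
        dist = cond_probs.get(tuple(context[-2:]), {"<eos>": 1.0})
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    context = ["the", "cat"]
    while len(context) < 6:
        tok = next_token(context)
        if tok == "<eos>":
            break
        context.append(tok)
    print(" ".join(context))  # e.g. "the cat sat on"
    ```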

    Far more damaging is the proliferation and repetition of false facts that appear on the surface to be genuine.

    Consider the kind of mistake AI makes: it hallucinates probable-sounding nonsense. That's exactly the kind of mistake you can lure an LLM into producing more of.

    • Raltoid@lemmy.world · 3 days ago

      Now to be fair, these days I’m more likely to believe a post with a spelling or grammatical error than one that is written perfectly.

        • Smee@poeng.link · 3 days ago

          Have you considered that you might be an AI living in a simulation, with no way of knowing it yourself, just going about modern human life unaware that everything we are and experience is electrons flying around in a giant alien space computer?

          If you haven’t, you should try.

          • Lolseas@lemmy.world · 2 days ago

            I remember my first acid trip, too, Smee. But wait, there’s more sticking in my eye bottles to the ground. Piss!