• Match!!@pawb.social · edited · 9 days ago

    just one more terawatt-hour of electricity and it’ll be accurate and creative i swear!!

    • surph_ninja@lemmy.world · 8 days ago

      This particular anti-AI stance always reminds me of religion gradually losing ground to science.

      It’s been pointed out by some folks that if religion’s domain is only ‘what science can’t explain,’ then the domain of religion is continuously shrinking as science grows to explain more and more.

      If your anti-AI stance is centered on ‘it wastes power and is wrong too often,’ then your criticism becomes increasingly irrelevant as accuracy improves and models become more efficient.

      • hark@lemmy.world · 8 days ago

        The assumption here is that the AI will improve. Under the current approach, that might not be the case: it could be hitting its limits, and this article may be pointing out a symptom of those limits.

        • surph_ninja@lemmy.world · 8 days ago

          You’re obviously not interacting with AI much. It is improving, and at an alarming rate. I’m astounded at the difference between AI now vs 3 years ago. They’re moving to new generations in a matter of months.

          • hark@lemmy.world · edited · 8 days ago

            My point is that the rate of improvement is slowing down. Also, its capabilities are often overblown. On the surface it does something amazing, but then flaws are pointed out by those with a better understanding of the subject matter, and those flaws are excused with fluff words like “hallucinations”.

            • surph_ninja@lemmy.world · edited · 8 days ago

              All it needs to do is produce fewer flaws than the average human. It’s already passed that mark for many general use cases (which many people said would never happen). The criticism is now moving to more and more specialized work, but the AI continues to improve in those areas as well.

  • just_another_person@lemmy.world · 10 days ago

    No shit.

    The fact that this is news, and not inherently understood, just tells you how uninformed people are being kept in order to sell idiots another subscription.

  • ansiz@lemmy.world · 9 days ago

    This is a big reason why I continue to cringe whenever I hear one of the endless news stories or podcasts about how AI is going to revolutionize our society any day now. It’s clear they are getting better with image generation, but text ‘thinking’ is way too unreliable to use as a replacement for human knowledge workers or therapists, etc.

    • keegomatic@lemmy.world · edited · 9 days ago

      This is an increasingly bad take. If you worked in an industry where LLMs are becoming very useful, you would realize that hallucinations are at worst a minor inconvenience for the applications they are well suited for, and that the tools are getting better by leaps and bounds, week by week.

      edit: Like it or not, it’s true. I use LLMs at work, most of my colleagues do too, and none of us use the output raw. Hallucinations are not an issue when you are actively collaborating with the model and not using it to either “know things for you” or “do the work for you.” Neither of those things are what LLMs are really good at, but that’s what most laypeople use them for, so these criticisms are very obviously short-sighted to those of us who have real-world experience with them in a domain where they work well.
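
      As a minimal sketch of that collaboration loop: the snippet below only illustrates the idea that no draft ships without a human pass. `llm_complete` is a hypothetical placeholder, not a real library call; substitute whichever model API you actually use.

      ```python
      # Sketch of "collaborating with the model" instead of trusting raw output.
      # llm_complete is a hypothetical stand-in, not a real API.

      def llm_complete(prompt: str) -> str:
          # Placeholder for an actual model call; returns a canned draft here.
          return f"[model draft responding to: {prompt}]"

      def draft_with_review(prompt: str) -> str:
          draft = llm_complete(prompt)                  # the model proposes
          print(f"--- model draft ---\n{draft}")
          verdict = input("accept / edit / retry? ").strip().lower()
          if verdict == "accept":
              return draft                              # human verified it
          if verdict == "edit":
              return input("your corrected version: ")  # human rewrites it
          return draft_with_review(prompt)              # human rejects; retry
      ```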

      • Captain Poofter@lemmy.world · edited · 9 days ago

        you’re getting downvoted because you accurately conceive of and treat LLMs the way they should be treated: as tools. the people downvoting you do not have this perspective, because the only perspective pushed to people outside of technical careers or research is “it’s artificial intelligence and it will revolutionize society, but lol, it hallucinates if you ask it stuff”. This is essentially propaganda, because the real message should be “it’s an imperfect tool like all tools, but boy will it make certain types of work way more efficient, so we can redistribute our own efforts to other tasks quicker and take advantage of LLMs’ advanced information-processing capabilities”.

        tldr: people disagree about AI/LLMs because one group thinks about them like Dr. Know from the movie A.I. and the other thinks about them like a TI-86+ on steroids

        • KeenFlame@feddit.nu · 8 days ago

          Well, there is also the group that thinks they are “based”, “fire”, and so on. As always, fanatics ruin everything. They aren’t God, nor a plague. Find another interest if this bores you.

        • Captain Poofter@lemmy.world · 9 days ago

          it’s not a pacemaker though, it’s a hammer. and sometimes the head flies off a hammer and hits someone in the skull. but no one disputes the fact that hammers are essential tools.

  • Halcyon@discuss.tchncs.de · 8 days ago

    It’s not “hallucination”. Those are false calculations, leading to incorrect text outputs. Let’s stop anthropomorphizing computers.
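
    That framing can be made concrete: a language model computes a probability distribution over next tokens and samples from it, so a fluent-but-false continuation is an ordinary numerical outcome rather than a perception error. A minimal sketch, with made-up logits and a toy three-word vocabulary:

    ```python
    import numpy as np

    # Made-up next-token logits after a prompt like "The capital of Australia is".
    # The wrong answer scores almost as high as the right one.
    vocab = ["Canberra", "Sydney", "Melbourne"]
    logits = np.array([2.0, 1.9, 0.5])

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Standard temperature sampling: softmax over scaled logits, then draw.
        if rng is None:
            rng = np.random.default_rng(0)
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs), probs

    idx, probs = sample_next_token(logits)
    print(dict(zip(vocab, probs.round(3))))  # {'Canberra': 0.47, 'Sydney': 0.425, 'Melbourne': 0.105}
    print("sampled:", vocab[idx])
    ```

    With these numbers, “Sydney” comes out roughly 42% of the time: an arithmetic fact about the distribution, not a mental state of the machine.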