• Jimmycrackcrack@lemmy.ml · 1 month ago

    I realise the dumbass here is the guy saying programmers are ‘cooked’, but there’s something kind of funny about how the programmer talks about people misunderstanding the complexities of their job, and about how LLMs easily make mistakes because they can’t grasp the nuances of what he does every day and understands deeply. He rightly points out that without his specialist oversight, AI agents would fail in ridiculous and spectacular ways, yet happily and vaguely adds the throwaway statement at the end, “replacing other industries, sure”, with the exact same blitheness and lack of personal understanding with which ‘Ace’ proclaims all programmers cooked.

    • Tartas1995@discuss.tchncs.de · 24 days ago

      As a programmer, 100% right.

      But AI shit can ruin the economic viability of certain jobs, and it will make the quality of the work much, much worse. E.g. AI-generated Easter bunny bags will ruin the Easter bunny bag industry for artists, and we will get worse Easter bunny bags :( ofc there will still be work for artists, but a lot less, because people don’t appreciate artistic skill enough. :/

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 1 month ago

      I find this is a really common trope where people appreciate the complexity of the domain they work in, but assume every other domain is trivial by comparison.

      • HiddenLayer555@lemmy.ml · 1 month ago

        There’s a saying in Mandarin that translates to something like: Being in different professions is like being on opposite sides of a mountain. It basically means you can never fully understand a given profession unless you’re actually doing it.

  • HiddenLayer555@lemmy.ml · 1 month ago

    LLMs can’t even stay on topic when specifically asked to solve one problem.

    This happens to me all the damn time:

    I paste a class that references some other classes which I’ve already tested and confirmed to be working. My problem is in one specific method that doesn’t directly call any of the other classes. I tell the LLM specifically which method is not working, and I tell it that I’ve tested all the other methods and that they work as intended (complete with comments documenting what they’re supposed to do). I then ask the LLM to focus only on the method I specified, and it still goes on about “have you implemented all the other classes this class references? Here’s my shitty implementation of those classes instead.”

    So then I paste all the classes that the one I’m asking about depends on, reiterate that all of them have been tested and are working, and tell the LLM again which method has the problem, and it still decides that my problem must be in the other classes and starts “fixing” them, which 9 times out of 10 just means rearranging the code I already wrote and fucking up the organisation I had designed.
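
    To make the scenario concrete, here’s a hypothetical sketch of the kind of setup I mean, written in Rust with invented names: the broken method is self-contained string handling, yet the LLM fixates on the already-tested dependencies anyway.

    ```rust
    // Dependencies that are already tested and confirmed working.
    // (The LLM keeps offering to reimplement these anyway.)
    struct Parser {}
    struct Cache {}

    struct Report {
        parser: Parser, // known good
        cache: Cache,   // known good
    }

    impl Report {
        // The one method that's actually broken: self-contained string
        // handling that never touches Parser or Cache.
        fn format_title(&self, raw: &str) -> String {
            raw.trim().to_uppercase() // the bug lives somewhere in here
        }
    }

    fn main() {
        let report = Report {
            parser: Parser {},
            cache: Cache {},
        };
        println!("{}", report.format_title("  quarterly results  "));
    }
    ```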

    It’s somewhat useful for searching for well-known example code using natural language, e.g. “How do I open a network socket using Rust”, or if your problem is really simple. Maybe it’s just the specific LLM I use, but in my experience it can’t actually problem-solve better than humans.
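
    For what it’s worth, here’s roughly what a correct answer to that socket prompt looks like; a minimal sketch using the standard library’s TcpStream, with a placeholder host and request:

    ```rust
    use std::io::{Read, Write};
    use std::net::TcpStream;

    fn main() -> std::io::Result<()> {
        // Open a TCP connection (placeholder host and port).
        let mut stream = TcpStream::connect("example.com:80")?;

        // Send a minimal HTTP/1.0 request over the socket.
        stream.write_all(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")?;

        // Read the entire response and print it.
        let mut response = String::new();
        stream.read_to_string(&mut response)?;
        println!("{response}");
        Ok(())
    }
    ```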

    • zalgotext@sh.itjust.works · 1 month ago

      Yeah, I find LLMs most useful for basically reading the docs for me and providing their own sample/pseudocode. If one goes off the rails, I have to guide it back myself using natural language. Even then, though, it’s still just a tool that gets me going in the right direction, or helps me consider alternative solutions buried in the docs that I might have skimmed over. Rarely does it produce code that I can actually use in my project.