• Blackmist@feddit.uk · 54 points · 3 months ago

    And the system doesn’t know either.

    For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

    • xantoxis@lemmy.world · 35 points · 3 months ago

      Accurate.

      No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.

      • Blackmist@feddit.uk · 18 points · 3 months ago

        The worst for me was a fairly simple programming question. The class it used didn’t exist.

        “You are correct, that class was removed in OLD version. Try this updated code instead.”

        It gave another made-up class name.

        Then it repeated the same routine, just with a newer version number.

        It knows what answers smell like, and what excuses smell like too. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.

        • nilloc@discuss.tchncs.de · 5 points · 3 months ago

          So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

          From what I’ve seen you’ll need an iron stomach.