• justdoitlater@lemmy.world · 7 months ago

    I think it's a bit more complex than that: you are right, but only in the beginning… after the AI is trained you don't need the cheap labor anymore. Which imho makes it even worse.

    • queermunist she/her@lemmy.ml · edited · 7 months ago

      Marketing hype.

      No amount of training can ever eliminate the need for human curation. This is not AI; it's a jumped-up pattern-recognition engine. False positives and false negatives are inevitable without a consciousness to evaluate them. Hallucinations are an intractable problem that cannot be solved, regardless of training, and so all these AIs can ever be is a tool for human workers.

      It’ll take something totally different and new.

      • jsomae@lemmy.ml · 7 months ago

        LLMs may fabricate things now and then, but so do humans. I am not convinced the problem is intractable.

        • queermunist she/her@lemmy.ml · 7 months ago

          You have no reason to believe the problem can be solved.

          It’s almost religious. You just have faith in technology you don’t understand.

          Keep praying to your machine spirits, maybe the Omnissiah will deliver the answer!

          • jsomae@lemmy.ml · 7 months ago

            I have no reason to believe the problem can't be solved, except insofar as it hasn't been solved yet (but LLMs only recently took off). So without a good reason to believe it's intractable, I'm at worst 50/50 on whether it can be solved. Faith in the machine spirit would be if I had an unreasonably high expectation that LLMs can be made not to hallucinate, like 100%.

            My expectation is around 70% that it’s solvable.

            • queermunist she/her@lemmy.ml · 7 months ago

              You have no reason to think it can be solved. You’re just blindly putting your faith in something you don’t understand and making up percentages to make yourself sound less like a religious nut.

              • jsomae@lemmy.ml · 7 months ago

                If I have no reason to believe X and no reason not to believe X, then the probability of X would be 50%, no?

                • queermunist she/her@lemmy.ml · 7 months ago

                  By this logic, the probability of every stupid thing is 50%.

                  You have no reason to believe magic is real, but you have no reason to not believe magic is real. So, is there a 50% probability that magic is real? Evidently you think so, because the magic science mans are going to magic up a solution to the problems faced by these chatbots.

                  • jsomae@lemmy.ml · edited · 7 months ago

                    Absolutely not true. The probabilities of stupid things are very low; that’s because they are stupid. If we expected such things to be probable, we probably wouldn’t call them stupid.

                    I have plenty of evidence to believe magic isn’t real. Don’t mistake “no evidence (and we haven’t checked)” for “no evidence (but we’ve checked)”. I’ve lived my whole life and haven’t seen magic, and I have a very predictive model for the universe which has no term for ‘magic’.

                    LLMs are new, and have made sweeping, landmark improvements every year since GPT2. Therefore I have reason to believe (not 100%!) that we are still in the goldrush phase and new landmark improvements will continue to be made in the field for some time. I haven’t really seen an argument that hallucination is an intractable problem, and while it’s true that all LLMs have hallucinated so far, GPT4 hallucinates much less than GPT3, and GPT3 hallucinates a lot less than GPT2.

                    But realistically speaking, even if I were unknowledgeable and unqualified to say anything with confidence about LLMs, I could still say this: for any statement X about LLMs that an unknowledgeable person cannot perceive as stupid, the probability that person should assign to X being true is 50%. We know this because the negation of that statement, call it ¬X, would be equally opaque to an unknowledgeable person. Given that X and ¬X are mutually exclusive and exhaustive, and we have no reason to favor one over the other, both have probability 50%. A minimal sketch of that step is below.
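
                    Spelled out as a minimal bit of probability algebra (just the principle of indifference applied to a two-outcome statement X; nothing LLM-specific is assumed):

                    ```latex
                    % Sketch of the indifference argument for an uninformed observer.
                    % X and \lnot X are mutually exclusive and exhaustive, so:
                    \[ P(X) + P(\lnot X) = 1 \]
                    % With no information favoring either side, symmetry demands:
                    \[ P(X) = P(\lnot X) \]
                    % Substituting the second equation into the first gives 2P(X) = 1, hence:
                    \[ P(X) = \tfrac{1}{2} \]
                    ```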

      • justdoitlater@lemmy.world · 7 months ago

        I understand what you are saying, but I don't agree. Look at the examples we already have: I use ChatGPT at work to code; it has limitations, but it works without any human curation. Check Midjourney as well: it has great accuracy, and if you ask for a picture of dogs it will create one without any human intervention. Yes, it took a long time and human effort to train them, but in the end that effort is no longer needed for the majority of cases. The hallucinations and inaccurate results you mention do happen, yes, but they are becoming fringe cases, less and less frequent. It's true that it's not the miracle tool that marketing says it is, that's marketing, but it's much more dangerous than it looks and will definitely substitute a lot of workers. It already does.

        • queermunist she/her@lemmy.ml · edited · 7 months ago

          Have you stopped coding? I assume not! ChatGPT is a tool that can be used by human workers, replacing human workers is beyond it.

          And sure, you can generate bland and derivative images with stable diffusion stuff, but it can’t replace anyone. At best it just opens up the creation of art to a wider group of people, essentially de-skilling the profession. That’s a serious problem! That’s not actually substitution.

          De-skilling is definitely worth talking about, though. When someone who doesn’t really understand coding or art can generate what they need, all of that skill built up over the years by professionals will become less valuable. That’s just like how I weld car parts despite not having the skill to actually weld most things. I’m not a skilled welder, yet I can replace a skilled welder with the right tools and robots.