• jsomae@lemmy.ml · 7 months ago

    If I have no reason to believe X and no reason not to believe X, then the probability of X would be 50%, no?

    • queermunist she/her@lemmy.ml · 7 months ago

      By this logic, the probability of every stupid thing is 50%

      You have no reason to believe magic is real, but you have no reason to not believe magic is real. So, is there a 50% probability that magic is real? Evidently you think so, because the magic science mans are going to magic up a solution to the problems faced by these chatbots.

      • jsomae@lemmy.ml · 7 months ago (edited)

        Absolutely not true. The probabilities of stupid things are very low; that’s because they are stupid. If we expected such things to be probable, we probably wouldn’t call them stupid.

        I have plenty of evidence to believe magic isn’t real. Don’t mistake “no evidence (and we haven’t checked)” for “no evidence (but we’ve checked)”. I’ve lived my whole life and haven’t seen magic, and I have a very predictive model for the universe which has no term for ‘magic’.

        LLMs are new, and have made sweeping, landmark improvements every year since GPT2. Therefore I have reason to believe (not 100%!) that we are still in the goldrush phase and new landmark improvements will continue to be made in the field for some time. I haven’t really seen an argument that hallucination is an intractable problem, and while it’s true that all LLMs have hallucinated so far, GPT4 hallucinates much less than GPT3, and GPT3 hallucinates a lot less than GPT2.

        But realistically speaking, even if I were unknowledgeable and unqualified to say anything with confidence about LLMs, I could still say this: for any statement X about LLMs that doesn’t strike an unknowledgeable person as obviously stupid, that person should assign X a probability of 50%. We know this because its negation, call it ¬X, would be equally opaque to an unknowledgeable person. Since X and ¬X are mutually exclusive and exhaustive, and such a person has no reason to favor one over the other, both get probability 50%.

        • queermunist she/her@lemmy.ml · 7 months ago

          This technology isn’t actually that new; it’s been around for almost a decade. What’s new is the amount of processing power they can throw at the databases and the level of data collection, but you’re just buying into marketing hype. It’s classic tech industry stuff to over-promise and under-deliver to pump up valuations and sales.

          • jsomae@lemmy.ml · 7 months ago (edited)

            Ok, but by that same perspective, you could say convolutional neural networks have been around since the 80s. It wasn’t until Geoffrey Hinton put them back on the map in 2012ish that anyone cared. GPT2 is when I started paying attention to LLMs, and that’s 5 years old or so.

            Even a decade is new in this sense: Laplace’s law of succession alone indicates there’s still roughly a 10% chance we’ll solve the problem in the next year.
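            To make that concrete: the rule of succession says that after s successes in n trials, the estimated probability of success on the next trial is (s + 1) / (n + 2). A quick sketch (Python; treating each year as one trial with zero successes so far is my framing, not an established model):

```python
def laplace(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability of
    success on the next trial, after `successes` in `trials`."""
    return (successes + 1) / (trials + 2)

# A decade of "hallucination not yet solved" = 0 successes in 10 trials:
print(laplace(0, 10))  # 1/12, about 0.083 -- the rough 10% ballpark
```

            With no trials at all, the rule gives 1/2, which is where the 50% prior in the earlier comment comes from.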

            • queermunist she/her@lemmy.ml · 7 months ago

              Laplace’s law of succession only applies if we know an experiment can result in either success or failure. We don’t know that. That’s just adding new assumptions for your religion. For all we know, this can never result in success and it’s a dead end.

              • jsomae@lemmy.ml · 7 months ago

                I have to hard disagree here. Laplace’s law of succession does not require that assumption. It’s easy to see why intuitively: if it turns out the probability is 0 (or 1), then the predicted probability from Laplace’s law of succession converges to 0 (or 1) as more results come in.
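                A minimal sketch of that convergence (Python): if the true probability of success is 0, every trial fails, and the Laplace estimate shrinks toward 0 on its own, with no extra assumption needed.

```python
def laplace(successes: int, trials: int) -> float:
    """Rule of succession: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# True probability 0 means 0 successes forever; the estimate is 1/(n+2):
for n in (1, 10, 100, 1000):
    print(n, laplace(0, n))
# The estimate falls toward 0 as n grows, even though it never reaches it.
```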

                  • jsomae@lemmy.ml · 7 months ago

                    It may help to distinguish between the “true” probability of an event and the observer’s internal probability for that event. If the observer’s probability is 0 or 1 then you’re right, it can never change. This is why your prior should never be 0 or 1 for anything.
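                    A one-function illustration of why an internal probability of 0 or 1 can never move (Python; the specific likelihood numbers are made up for the example):

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(H | E) from Bayes' theorem."""
    p_evidence = (prior * p_evidence_if_true
                  + (1 - prior) * p_evidence_if_false)
    return prior * p_evidence_if_true / p_evidence

# Evidence 20x more likely if the hypothesis is true:
print(bayes_update(0.5, 0.8, 0.04))  # moves well above 0.5
print(bayes_update(0.0, 0.8, 0.04))  # stays exactly 0.0
print(bayes_update(1.0, 0.8, 0.04))  # stays exactly 1.0
```

                    The 0 and 1 priors are fixed points of the update: the prior multiplies the likelihood, so no amount of evidence can move them. That’s the formal reason never to hold a prior of exactly 0 or 1.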