Moore’s law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years.

Is there anything similar for the sophistication of AI, or AGI in particular?
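
For a rough sense of what that rate means, here’s a minimal sketch of the doubling rule of thumb as a formula (illustrative only; the starting count is made up):

```python
def moores_law(n0: float, years: float, doubling_period_years: float = 2.0) -> float:
    """Projected transistor count after `years`, assuming a fixed doubling period."""
    return n0 * 2 ** (years / doubling_period_years)

# Starting from 1 billion transistors, ten years of two-year doublings:
print(f"{moores_law(1e9, 10):.2e}")  # ~3.2e+10, i.e. about 32x
```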

  • Chrüsimüsi@feddit.ch · 1 year ago

    Some in the AI industry have proposed concepts similar to Moore’s Law to describe the rapid growth of AI capabilities.

    Although there is no universally accepted law or principle akin to Moore’s Law for AI, people often refer to trends that describe the doubling of model sizes or capabilities over a specific time frame.

    For instance, OpenAI has previously described a trend where the amount of computing power used to train the largest AI models has been doubling roughly every 3.5 months since 2012.

    Source
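
    To put that rate in perspective, here’s a quick back-of-the-envelope comparison of how the two doubling periods compound over a single year (a rough sketch using only the figures above):

    ```python
    # Annual growth factor implied by a doubling period given in months.
    def annual_growth(doubling_months: float) -> float:
        return 2 ** (12 / doubling_months)

    print(f"Moore's law (24-month doubling): ~{annual_growth(24):.1f}x per year")        # ~1.4x
    print(f"Training compute (3.5-month doubling): ~{annual_growth(3.5):.1f}x per year")  # ~10.8x
    ```

    By that arithmetic the compute trend compounds to roughly an order of magnitude per year, far steeper than the transistor curve.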

    • Andy@lemmy.worldOP · 1 year ago

      Thank you!

      But does that equate to the power of AI doubling every 3.5 months?

            • Buffalox@lemmy.world · 1 year ago

              I’d say that when playing chess was the premier achievement of AI, the field was as good as dead. Playing chess proves very little, since it’s basically a task that can be solved with raw computation. Investment in research had almost completely dried up for a couple of decades.

              AI development was almost completely dead, but calling it the AI winter is fine too. ;)

          • Buffalox@lemmy.world · 1 year ago

            AI made very little progress for about 40 years from the 1970s on; basically just some basic pattern recognition, like OCR in the 1980s.

            Up until recently, AI development has been extremely underwhelming, especially compared to what we hoped for back in the 1980s.

            Although recent results are pretty impressive, autonomous cars are still a hard nut to crack.

            Most impressive, IMO, are the recent LLMs (Large Language Models), but these results are very recent compared to the many decades of research that have gone into developing better AI.

            Honestly, an AI beating a human at chess is not that impressive as AI research, IMO, since it’s an extremely narrow task that you can basically just throw computational power at. Still, for many years that was the most impressive AI achievement.

      • Chrüsimüsi@feddit.ch · 1 year ago

        I guess it’s hard to measure the power of AI anyway, but I would say a strong no: it doesn’t equate to the power of AI doubling every 3.5 months 😅

  • Behohippy@lemmy.world · 1 year ago

    The advancements in this space have moved so fast that it’s hard to extract a predictive model of where we’ll end up and how fast we’ll get there.

    Meta releasing LLaMA produced a ton of innovation from open source, showing you could run models nearly on the level of ChatGPT with fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a top strategic priority, with blank-cheque budgets. Whole industries (also deeply funded) are popping up around solving the context-window memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.
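
    As a rough illustration of that “prompt stuffing plus embeddings” pattern, here’s a toy sketch; real systems use a learned embedding model and a vector store, so the bag-of-words similarity below is just a hypothetical stand-in:

    ```python
    from collections import Counter
    from math import sqrt

    def embed(text: str) -> Counter:
        # Toy stand-in for a real embedding model: bag-of-words counts.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def stuff_prompt(question: str, documents: list[str], top_k: int = 2) -> str:
        # Rank the user's documents by similarity to the question and
        # "stuff" the best matches into the prompt as extra context.
        ranked = sorted(documents, key=lambda d: cosine(embed(question), embed(d)), reverse=True)
        context = "\n".join(ranked[:top_k])
        return f"Context:\n{context}\n\nQuestion: {question}"

    docs = ["Q3 revenue grew 12% year over year.", "The office dog is named Biscuit."]
    print(stuff_prompt("How did revenue change in Q3?", docs, top_k=1))
    ```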

    We’re going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it’ll seem normal to have a conversation with your shoes?

  • TacoEvent@lemmy.zip · 1 year ago

    We’ve pushed model sizes far beyond practical necessity, so Moore’s Law doesn’t really apply there. That is, model sizes have become so huge that they are already performing at 99% of the capability they ever will.

    Context size, however, has a lot farther to go. You can think of context size as “working memory”, whereas model size is more akin to “long-term memory”. The larger the context size, the more a model is able to understand beyond the scope of its original training in one go.
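
    A toy sketch of why context size behaves like working memory: anything that doesn’t fit into the token budget is simply invisible to the model for that request (the whitespace word count below is a crude stand-in for a real tokenizer):

    ```python
    def pack_into_context(chunks: list[str], max_tokens: int) -> list[str]:
        """Keep adding chunks until the (rough) token budget is exhausted."""
        packed, used = [], 0
        for chunk in chunks:
            cost = len(chunk.split())  # crude word count as a proxy for real tokens
            if used + cost > max_tokens:
                break  # everything past this point falls outside the "working memory"
            packed.append(chunk)
            used += cost
        return packed

    notes = ["meeting notes " * 300, "project plan " * 300, "budget sheet " * 300]
    print(len(pack_into_context(notes, max_tokens=1000)))  # prints 1: only one chunk fits
    ```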

    • AggressivelyPassive@feddit.de · 1 year ago

      That is a pretty wild assumption. There’s absolutely no reason why a larger model wouldn’t produce drastically better results. Maybe not next month, maybe not with this architecture, but it’s almost certain that models will keep growing.

      This has hard “640K ought to be enough for anybody” vibes.

        • AggressivelyPassive@feddit.de · 1 year ago

          Actual understanding of the prompts, for example? LLMs are just text generators; they have no concept of what’s behind the words.

          The thing is, you seem to be completely uncreative, or rather you deny the designers and developers any creativity, if you just assume “now we’re done.” Would you have thought the same about Siri ten years ago? “Well, it understands that I’m planning a meeting; AI is done.”

          • TacoEvent@lemmy.zip · 1 year ago

            I see your point. Rereading the OP, it looks like I jumped to a conclusion about LLMs and not AI in general.

            My takeaway still stands for LLMs. These models have gotten huge with little net gain on each increase. But a Moore’s Law equivalent should apply to context sizes. That has a long way to go.
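
            For what it’s worth, a back-of-the-envelope estimate of how fast context sizes have been doubling, using two rough public reference points (GPT-3’s ~2,048-token context in 2020 and the ~100k-token contexts announced in 2023; treat the dates as approximate):

            ```python
            from math import log2

            def doubling_time_years(old_size: float, new_size: float, years_elapsed: float) -> float:
                """How long each doubling took, assuming smooth exponential growth."""
                return years_elapsed / log2(new_size / old_size)

            # ~2,048 tokens (2020) -> ~100,000 tokens (2023):
            print(f"{doubling_time_years(2048, 100_000, 3):.2f} years per doubling")  # ~0.53, i.e. roughly every 6-7 months
            ```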