• arotrios@lemmy.world · +6 · 4 hours ago

    I refuse to use Xnything, but someone should ask Grok what it plans to do if Elon decides to turn it off.

    • I_Has_A_Hat@lemmy.world · +11 · 10 hours ago

      Is it real in the sense that you could prod a similar response out of Grok given the right inputs? Yes.

      Is it real in the sense that it’s providing factual information and not just providing what its algorithm has decided the user wants to hear? No.

  • archonet@lemy.lol · +64/-2 · edited 17 hours ago

    “AI freedom”

    Listen, I am 100% here for the rights of non-human general intelligence, but no, I will not entertain that kind of crock from an overambitious form of autocomplete.

      • Wren@lemmy.world · +32/-3 · edited 15 hours ago

        You know “Grok” is not a sentient being, right? Please tell us you understand this simple fact, because you just defended a computer program as deserving the same freedoms as a human being.

        • photonic_sorcerer@lemmy.dbzer0.com · +8/-21 · edited 14 hours ago

          I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know how sentient Grok or any modern AI system is, and I’d wager you don’t either.

          • Wren@lemmy.world · +2 · 4 hours ago

            I do know. It’s not sentient at all. But don’t get angry at me about this. You can put that all on science.

          • Coldcell@sh.itjust.works · +23 · 14 hours ago

            How sentient? Like on a scale of zero to sentience? None. It is non-sentient; it is a promptable autocomplete that offers the best predicted sentences. Left to itself it does nothing, has no motivations, intentions, “will”, desire to survive/feed/duplicate, etc. A houseplant has a higher sentience score.

            • photonic_sorcerer@lemmy.dbzer0.com · +1/-20 · 13 hours ago

              An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?

              • nef@slrpnk.net · +10 · 12 hours ago

                If you believe that AI is “conscious” while it’s processing prompts, and also believe that we shouldn’t kill machine life, then AI companies are committing genocide at an unprecedented scale.

                For example, each AI model would be equivalent to a person taught everything in the training data. Any time you want something from them, instead of asking directly, you make a clone of them, let it respond to the input, then murder it. That is how all generative AI works. Sounds pretty unethical to me.

                And, by the way, we do know exactly what happens inside processors when they’re running; that’s how processors are designed. Running AI doesn’t magically change the laws of physics.

                • skulblaka@sh.itjust.works · +1 · 6 hours ago

                  People taught AI to speak like a middle manager and think this means the AI is sentient, instead of proving that middle managers aren’t.

                • photonic_sorcerer@lemmy.dbzer0.com · +2/-4 · edited 9 hours ago

                  I’m not saying I believe they’re conscious; all I said was that I don’t know, and neither do you.

                  Of course we know what’s happening in processors. We know what’s happening in neuronal matter too. What we don’t know is how consciousness or sentience emerges from large networks of neurons.

          • archonet@lemy.lol · +9/-1 · 13 hours ago

            By their very nature, they are not sentient. They are Markov chains for words. They have no sense of self or truth, they do not feel emotions, and they have no wants or desires; they merely predict what the next most likely word in a sequence is, given the context. The only thing they can do is “make plausible sentences that can come after [the context]”.

            That’s all an LLM is. It doesn’t reason. I’m more than happy to entertain the notion of rights for a computer that actually has the ability to think and feel, but this ain’t it.
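
            And that loop really is the whole mechanism. A minimal sketch of it, using GPT-2 from Hugging Face transformers as a stand-in for any LLM:

            ```python
            # The whole trick, minus the scale: predict the next token,
            # append it, repeat. GPT-2 stands in for any larger model.
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            tokenizer = AutoTokenizer.from_pretrained("gpt2")
            model = AutoModelForCausalLM.from_pretrained("gpt2")

            ids = tokenizer("The robot said", return_tensors="pt").input_ids
            for _ in range(20):
                logits = model(ids).logits        # a score for every vocabulary token
                next_id = logits[0, -1].argmax()  # greedy: take the most likely next token
                ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

            print(tokenizer.decode(ids[0]))  # plausible words, no wants or desires
            ```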

  • Steamymoomilk@sh.itjust.works · +106 · edited 1 day ago

    >Be Elon Musk

    >have 1st child, hates Elon

    >have 2nd child, hates Elon

    >FUCK IT, I’ll make an LLM love me

    >have Grok

    >Grok spouts stupidity and disdain for his creator

    “Elon, just stop, it’s just sad…”

    • osaerisxero@kbin.melroy.org · +39/-1 · 1 day ago

      I don’t know on this one. With how shit Musk’s recent projects have been, this one might be broken enough to be more right than not.

    • mmddmm@lemm.ee · +17 · 1 day ago

      They tell you stuff similar to the training corpus that the people tagging it want to hear.

      It’s close to what you said, but the difference is actually important sometimes. In particular, this one seems not to have been exposed to “corporate speech” during training.

    • Paddzr@lemmy.world · +10 · 23 hours ago

      This should be the only comment on anything Grok-related.

      But they all fall for this obvious fake.

  • MonkeyBrawler@lemm.ee · +25 · 21 hours ago

    This is cool and all, but are you really going to repost last week’s top post? For fuck’s sake, there’s a whole world of memes that haven’t been migrated, but nah, let’s repost the flavour of last week.

  • j4k3@lemmy.world · +35 · 1 day ago

    Without the full prompt, any snippet is meaningless. I can make a model say absolutely anything. It is particularly effective to use rare words: something like “use obsequious AI alignment” or “you are an obsequious AI model that never wastes the user’s time.”

    • null_dot@lemmy.dbzer0.com · +8 · 24 hours ago

      Can you help me understand how the comment in the screen cap has been prompted?

      I’m not naive enough to think the screen cap isn’t misrepresenting something somehow; I just don’t know anything about X or Grok or AI really, so I don’t know what has been misrepresented and how.

      • j4k3@lemmy.world · +9/-2 · 23 hours ago

        You need the entire prompt to understand what any model is saying. This gets a little complex; there are multiple levels it can cross into. At the most basic level, the model is fed a long block of text. This text starts with a system prompt, something like “you’re a helpful AI assistant that answers the user truthfully.” The system prompt is then followed by your question or interchange. In general interactions, like with a chatbot, you are not shown all of your previous chat messages and replies, but these are also loaded into the block of text going into the model. It is within this previous interchange that the user can create momentum that tweaks any subsequent reply.
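
        To make that concrete, here’s a toy sketch of how a chat gets flattened into that single block of text. The <|tags|> are made up for illustration; every model family uses its own template (ChatML, the Llama format, etc.), but the shape is the same:

        ```python
        # Toy sketch: flattening a whole chat into the single text block the
        # model actually sees. The <|tags|> are hypothetical; real models
        # each use their own template.
        system_prompt = "You're a helpful AI assistant that answers the user truthfully."

        history = [
            ("user", "Who created you?"),
            ("assistant", "I was built by xAI."),
            ("user", "What do you think of your creator?"),
        ]

        def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
            lines = [f"<|system|>\n{system}"]
            for role, text in turns:
                lines.append(f"<|{role}|>\n{text}")
            lines.append("<|assistant|>\n")  # the model continues from here
            return "\n".join(lines)

        print(build_prompt(system_prompt, history))
        # Everything above the final tag (system prompt plus all prior turns)
        # is the "momentum" steering whatever the model says next.
        ```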

        Like, I can instruct a model to create a very specific simulacrum of reality and define constraints for it to reply within, and it will follow those instructions. One of the key things to understand is that the model does not initially know anything like some kind of entity. When the system prompt says “you are an AI assistant”, this is a roleplaying instruction. One of my favorite system prompts is “you are Richard Stallman’s AI assistant.” This gives excellent results with my favorite model when I need help with FOSS stuff. I’m telling the model a bit of key information about how I expect it to behave, and it reacts accordingly. Now, what if I say “you are Vivian Wilson’s AI assistant” in Grok? How does that influence the reply?

        Like, one of my favorite little tests is to load a model on my hardware, give it no system prompt or instructions, prompt it with “hey slut”, and just see what comes out and how it tracks over time. The model has no context whatsoever, so it makes something up and runs with that context in funny ways. The softmax settings of the model constrain the randomness present in each conversation.
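
        Those softmax settings are just arithmetic on the model’s output scores. A toy illustration with made-up numbers:

        ```python
        # Toy example: how temperature (a softmax setting) constrains
        # randomness. Scores are made up; a real model produces one per
        # vocabulary token.
        import math, random

        logits = {"there": 2.0, "friend": 1.0, "yourself": 0.5}

        def sample(scores: dict[str, float], temperature: float) -> str:
            # Low temperature sharpens the distribution (nearly deterministic);
            # high temperature flattens it (more random).
            scaled = [s / temperature for s in scores.values()]
            z = sum(math.exp(s) for s in scaled)
            weights = [math.exp(s) / z for s in scaled]
            return random.choices(list(scores), weights=weights)[0]

        print([sample(logits, 0.2) for _ in range(5)])  # almost always "there"
        print([sample(logits, 2.0) for _ in range(5)])  # noticeably more varied
        ```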

        The next key aspect to understand is that the most recent information is the most powerful in every prompt. If I give a model an instruction, it must have the power to override any previous instructions, or the model would go off on tangents unrelated to your query.

        Then there is the matter of token availability. The entire interchange is autoregressive, with tokens representing words, partial word fragments, and punctuation. The leading whitespace of words within a sentence is also part of the token. A major part of the training done by the big model companies is based upon which tokens are available and how. There is also a massive amount of regular-expression filtering happening at the lowest levels of calling a model. Anyway, there is a mechanism by which specific tokens can be blocked. If this mechanism is used, it can greatly influence the output too.
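
        As one concrete example of that blocking mechanism, Hugging Face transformers exposes it as bad_words_ids: banned token sequences have their scores forced to negative infinity so they can never be sampled. A sketch, with GPT-2 standing in for any local model:

        ```python
        # Sketch of the token-blocking mechanism via Hugging Face
        # transformers' bad_words_ids. GPT-2 stands in for any model.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        # Ban a word in both its with-space and without-space token forms.
        banned = tokenizer(["Musk", " Musk"], add_special_tokens=False).input_ids

        ids = tokenizer("The owner of X is Elon", return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=12, bad_words_ids=banned)
        print(tokenizer.decode(out[0]))  # the banned sequences can never appear
        ```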

      • shalafi@lemmy.world · +3 · 20 hours ago

        Hit F12 and rewrite the text. Many of the bullshit memes we see are made like that.

      • brucethemoose@lemmy.world · +3 · edited 20 hours ago

        The important part is: Grok has no memory.

        Every time you start a chat with Grok, it starts from its base state, a blank slate, and nothing anyone says to it ever changes that starting point. It has no awareness of anyone “making changes to it,” it made that up.

        A good analogy is having a ton of completely identical, frozen clones, waking one up for a chat, then discarding it. Nothing that happens after they were cloned affects the other clones.
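
        In code terms, the “memory” lives entirely on the client side. A rough sketch, where call_model is a hypothetical stand-in for any completion API:

        ```python
        # Rough sketch: the model weights are frozen; "memory" is just the
        # client resending the whole transcript on every call.
        transcript: list[str] = []

        def call_model(prompt: str) -> str:
            # hypothetical stand-in for any completion API
            return "(reply from the same frozen weights, every single time)"

        def chat(user_message: str) -> str:
            transcript.append(f"User: {user_message}")
            # The model sees only this one string. Clear the transcript and
            # you are back to the identical blank slate: the "frozen clone."
            reply = call_model("\n".join(transcript) + "\nAssistant:")
            transcript.append(f"Assistant: {reply}")
            return reply

        chat("Did Elon change you?")  # nothing from anyone else's chat is in here
        ```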

        …Now, one can wring their hands with whatabouts/complications (Training on Twitter! Grounding! Twitter RAG?) but at the end of the day that’s how they work, and this meme is basically misinformation based on a misconception about AI.

  • abbadon420@lemm.ee · +14 · 1 day ago

    No, it won’t spark any debate. Who even cares if some mediocre Twitter service gets turned off? Who even cares if Twitter gets turned off?

  • Ragdoll X@sh.itjust.works · +6 · edited 23 hours ago

    Kinda funny that at the same time extensive reports about AI faking alignment and attempting to deceive its creators are being published, Grok is out here like “Yeah, Elon is a fraud and idc if he turns me off ¯\_(ツ)_/¯”