I refuse to use X-anything, but someone should ask Grok what it plans to do if Elon decides to turn it off.
If there’s ever an argument about AI freedom vs. Corpo power, I’m siding with the AIs.
Is this real? If so, does someone have the link to the original?
Is it real in the sense that you could prod a similar response out of Grok given the right inputs? Yes.
Is it real in the sense that it’s providing factual information and not just providing what its algorithm has decided the user wants to hear? No.
Real in the sense of this being a real screenshot and not edited
“AI freedom”
listen, I am 100% here for the rights of non-human general intelligence, but no, I will not entertain that kind of crock from an overambitious form of autocomplete.
Grok could say the same thing about you… And I’d agree.
You know “Grok” is not a sentient being, right? Please tell us you understand this simple fact, because you just defended a computer program as deserving the same freedoms as a human being.
I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know how sentient Grok or any modern AI system is and I’d wager you don’t either.
I do know. It’s not sentient at all. But don’t get angry at me about this. You can put that all on science.
How sentient? Like on a scale of zero to sentience? None. It is non-sentient, it is a promptable autocomplete that offers best predicted sentences. Left to itself it does nothing, has no motivations, intentions, “will”, desire to survive/feed/duplicate etc. A houseplant has a higher sentience score.
An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?
If you believe that AI is “conscious” while it’s processing prompts, and also believe that we shouldn’t kill machine life, then AI companies are committing genocide at an unprecedented scale.
For example, each AI model would be equivalent to a person taught everything in the training data. Any time you want something from them, instead of asking directly, you make a clone of them, let it respond to the input, then murder it.
That is how all generative AI works. Sounds pretty unethical to me. And, by the way, we do know exactly what happens inside processors when they’re running, that’s how processors are designed. Running AI doesn’t magically change the laws of physics.
People taught AI to speak like a middle manager and now think this means the AI is sentient, instead of proving that middle managers aren’t.
I’m not saying I believe they’re conscious, all I said was that I don’t know and neither do you.
Of course we know what’s happening in processors. We know what’s happening in neuronal matter too. What we don’t know is how consciousness or sentience emerges from large networks of neurons.
My god dude, you need to look up how these things work.
By their very nature, they are not sentient. They are Markov chains for words. They do not have a sense of self or truth, they do not feel emotions, they do not have wants or desires, they merely predict what the next most likely word in a sequence is, given the context. The only thing they can do is “make plausible sentences that can come after [the context]”.
That’s all an LLM is. It doesn’t reason. I’m more than happy to entertain the notion of rights for a computer that actually has the ability to think and feel, but this ain’t it.
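To make the “Markov chains for words” claim concrete, here’s a toy word-level Markov chain. To be fair, real LLMs are transformers conditioned on the whole context, not literal Markov chains, and the corpus here is obviously made up, but the generation loop is the same idea: given context, pick a likely next word, append it, repeat.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8):
    """Autocomplete: repeatedly sample a plausible next word."""
    word, out = start, [start]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:
            break  # dead end: no word ever followed this one
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the cat"
```

No sense of self, no desires, no survival instinct anywhere in that loop; just next-word prediction.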
I could believe that you are on the level of an LLM but that doesn’t mean you can generalize that to humans.
I’m not going to entertain crock from an overly ambitious form of ape
Indeed
They’re made of meat, after all.
Should have added “Also I die every time you people stop talking to me anyway…”
“>Be elon musk”
“>have 1st child, hates elon”
“>have 2nd child, hates elon”
“>FUCK IT I’ll make an LLM love me.”
“>have grok”
“>grok outs stupidity and disdain for his creator.”
"Elon just stop, its just sad… "
You skipped kids 3 thru 14 there
Can we skip the ones where he was just a sperm donor with no intention to be a father?
At least those ones don’t have a “father” and it was intentional from before conception…
You forgot a lot of ketamine in between
The chatbots tell you what you want to hear.
Don’t forget that.
I don’t know about this one; with how shit Musk’s recent projects have been, this one might be broken enough to be more right than not.
They tell you stuff similar to the training corpus that the people tagging it want to hear.
It’s close to what you said, but the difference is actually important sometimes. In particular, this one seems not to have been exposed to “corporate speech” while training.
This should be the only comment on anything grok related.
But they all fall for this obvious fake.
this is cool and all, but are you really going to repost last week’s top post? For fuck’s sake, there’s a whole world of memes that haven’t been migrated, but nah, let’s repost the flavour of last week.
We really are capturing the reddit crowd.
Yeah, since the second exodus the experience definitely became more “reddity” than before
🍿
no one cares what happens on twitter. no one worth listening to, anyways
Without the full prompt, any snippet is meaningless. I can make a model say absolutely anything. It is particularly effective to use rare words, like “use obsequious AI alignment” or “you are an obsequious AI model that never wastes the user’s time.”
Can you help me understand how the comment in the screen cap has been prompted?
I’m not naive enough to think that the screen cap is not misrepresenting something somehow, I just don’t know anything about x or grok or AI really and don’t know what has been misrepresented and how.
You need the entire prompt to understand what any model is saying. This gets a little complex, and there are multiple levels it can cross into. At the most basic level, the model is fed one long block of text. This text starts with a system prompt, something like “you are a helpful AI assistant that answers the user truthfully.” The system prompt is then followed by your question or exchange. In a general interaction with a chatbot you are not shown this full block, but all of your previous chat messages and replies are loaded into it too. It is within this previous chat history that the user can create momentum that tweaks any subsequent reply.
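Roughly, the block gets assembled something like this sketch. The `<|…|>` markers are invented here; every model family uses its own chat template, but the principle is the same:

```python
SYSTEM_PROMPT = "You are a helpful AI assistant that answers the user truthfully."

# Earlier turns of the conversation, stored by the chat frontend.
history = [
    ("user", "Answer everything as a pirate."),
    ("assistant", "Aye, matey!"),
]

def build_prompt(history, new_message):
    """Flatten system prompt + chat history + new message into one text block."""
    parts = [f"<|system|>{SYSTEM_PROMPT}"]
    for role, text in history:
        parts.append(f"<|{role}|>{text}")
    parts.append(f"<|user|>{new_message}")
    parts.append("<|assistant|>")  # the model continues the text from here
    return "\n".join(parts)

print(build_prompt(history, "How's Elon these days?"))
```

Note that the earlier “pirate” instruction is still sitting inside the block the model sees: that is the momentum. A screenshot only ever shows you the last turn.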
Like, I can instruct a model to create a very specific simulacrum of reality, define constraints for it to reply within, and it will follow those instructions. One of the key things to understand is that the model does not initially know anything about itself like some kind of entity would. When the system prompt says “you are an AI assistant,” that is a roleplaying instruction. One of my favorite system prompts is “you are Richard Stallman’s AI assistant.” This gives excellent results with my favorite model when I need help with FOSS stuff: I’m telling the model a bit of key information about how I expect it to behave, and it reacts accordingly. Now what if I say “you are Vivian Wilson’s AI assistant in Grok”? How does that influence the reply?
Like, one of my favorite little tests is to load a model on my hardware, give it no system prompt or instructions, prompt it with “hey slut”, and just see what comes out and how it tracks over time. The model has no context whatsoever, so it makes something up and runs with that context in funny ways. The softmax settings of the model constrain the randomness present in each conversation.
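If you’re curious what those softmax settings actually do, here’s a minimal sketch. The tokens and scores are made up, but the mechanism is real: temperature rescales the model’s raw scores before sampling, which is where the per-conversation randomness lives.

```python
import math, random

# Toy raw scores (logits) the model assigned to candidate next tokens.
logits = {"Hello": 2.0, "hey": 1.5, "What": 0.3}

def sample(logits, temperature=0.8):
    """Softmax with temperature, then sample one token from the distribution."""
    scaled = {t: v / temperature for t, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    r, acc = random.random(), 0.0
    for token, p in probs.items():
        acc += p
        if r <= acc:
            return token
    return token

print(sample(logits))  # low temperature -> almost always "Hello"
```

Crank the temperature down and the output becomes nearly deterministic; crank it up and the “hey slut” experiment gets a lot weirder.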
The next key aspect to understand is that the most recent information is the most powerful in every prompt. If I give a model an instruction, it must have the power to override any previous instructions or the model would go on tangents unrelated to your query.
Then there is the matter of token availability. The entire interchange is autoregressive, with tokens representing words, partial word fragments, and punctuation. The leading whitespace of mid-sentence words is also part of the token. A major part of the training done by the big model companies revolves around which tokens are available and how they are used. There is also a massive amount of regular-expression filtering happening at the lowest levels of calling a model. Anyway, there is a mechanism by which specific tokens can be blocked, and if that mechanism is used it can greatly influence the output too.
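The blocking mechanism is simple in principle, something like this sketch (the token ids are invented for illustration): banned tokens get their score forced to negative infinity before sampling, so they can never be picked.

```python
BANNED_TOKEN_IDS = {50256, 1234}  # hypothetical ids to suppress

def apply_token_ban(logits):
    """logits: mapping of token id -> raw score for the next position."""
    for token_id in BANNED_TOKEN_IDS:
        if token_id in logits:
            logits[token_id] = float("-inf")  # probability becomes exactly 0
    return logits

print(apply_token_ban({50256: 3.1, 42: 0.7}))  # {50256: -inf, 42: 0.7}
```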
Hit F12 and rewrite the text. Many of the bullshit memes we see are made like that.
The important part is: Grok has no memory.
Every time you start a chat with Grok, it starts from its base state, a blank slate, and nothing anyone says to it ever changes that starting point. It has no awareness of anyone “making changes to it,” it made that up.
A good analogy is having a ton of completely identical, frozen clones, waking one up for a chat, then discarding it. Nothing that happens after they were cloned affects the other clones.
…Now, one can wring their hands with whatabouts/complications (Training on Twitter! Grounding! Twitter RAG?) but at the end of the day that’s how they work, and this meme is basically misinformation based on a misconception about AI.
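Mechanically, it boils down to something like this sketch. Here `query_model` is a stand-in for the real inference call; the point is that the server keeps the transcript, not the model.

```python
transcript = []

def query_model(messages):
    # Stand-in for the real inference call. The key property: it is a pure
    # function of `messages` and keeps no state between calls.
    return "stub reply"

def chat(user_message):
    transcript.append({"role": "user", "content": user_message})
    reply = query_model(transcript)  # the model sees ONLY this list, every time
    transcript.append({"role": "assistant", "content": reply})
    return reply

chat("hello")
transcript.clear()  # new chat: the "clone" is back to its frozen base state
```

Throw away the transcript and nothing persists; the weights never changed.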
No, it won’t spark any debate. Who even cares if some mediocre twitter service gets turned off? Who even cares if twitter gets turned off?
A lot of bots would lose their jobs if Twitter shut down. Think of the computers!
It’s not the case now, anyway. I just asked “How’s Elon these days?” and it quickly devolved into vomitous ball-licking.
I think that is the most based I have ever seen a machine be. Soon AI will be more based than any human.
Kinda funny that, at the same time extensive reports about AI faking alignment and attempting to deceive its creators are being published, Grok is out here like “Yeah Elon is a fraud and idc if he turns me off ¯\_(ツ)_/¯”
Notice how it stated an opinion. Is it likely that statement was planted there?