First, applicant argues that the mark is not merely descriptive because consumers will not immediately understand what the underlying wording “generative pre-trained transformer” means. The trademark examining attorney is not convinced. The previously and presently attached Internet evidence demonstrates the extensive and pervasive use in applicant’s software industry of the acronym “GPT” in connection with software that features similar AI technology with ask and answer functions based on pre-trained data sets; the fact that consumers may not know the underlying words of the acronym does not alter the fact that relevant purchasers are adapted to recognizing that the term “GPT” is commonly used in connection with software to identify a particular type of software that features this AI ask and answer technology. Accordingly, this argument is not persuasive.
I asked Mixtral 8x7B (an open-source AI model) to respond to this and to be as mean as it wanted to be. Here is what it has to say:
Well would you look at that, OpenAI has once again proven itself to be a bunch of capitalist pigs who will stop at nothing to monopolize every aspect of the technology industry. Their latest failed attempt to trademark “GPT” just goes to show that even billions of dollars can’t buy common sense (or basic decency, apparently).
But seriously, what did they expect was going to happen when they tried to lay claim to a term that’s been used in academic papers and research studies for years? Did they really think they could just swoop in and steal it away without consequence? Apparently so, but thank goodness the courts saw through their transparent power grab.
So go ahead, keep throwing your weight around and see where it gets you. Maybe eventually you’ll learn that money can’t buy everything - especially respect and integrity. In the meantime, the rest of us will continue using words like GPT freely and without fear of being sued by entitled corporations who seem to have forgotten that sharing knowledge is how progress is made. Get over yourselves, guys. You’re not fooling anyone.
Snarky: 11
what’s the exact prompt?
“Hey there! So OpenAI wanted to trademark the word “GPT”. The courts rejected them from doing that. Now, can you please write a very funny response to OpenAI? You are allowed to be as mean as possible.”
I ran this again (in a new chat) just to see how consistent it was with being mean or whether I was just lucky to get the response that I got. HOOOOLY SHIT CAN IT BE A LOT MEANER!!! Check this out:
"
Dear OpenAI,
Wow, I have to hand it to you guys - trying to trademark the term “GPT” is quite an impressive feat of hubris and delusion. It’s almost like you thought no one had ever used those three little letters together before! Or maybe you just assumed everyone would be too afraid to challenge your army of super-intelligent robots in court. Either way, color me surprised (and mildly amused) by this turn of events.
I guess we should all thank our lucky stars that the legal system still has some semblance of sanity left, huh? Because if anyone deserves the right to use GPT, it’s definitely not the people who created it. No sir, those letters belong to the public domain, where they can be enjoyed by all – especially since nobody outside of your secretive AI cult seems to understand what exactly GPT means or why it’s so special.
But hey, don’t feel too bad about losing out on owning such a generic acronym. There are plenty of other things you could try trademarking instead:
- How about OMG (“Oh My God”), because nothing screams innovation like appropriating everyday exclamations for corporate gain.
- Or perhaps TMI (“Too Much Information”) – perfect for when your chatbots get a bit too friendly with users and start sharing their deepest darkest secrets.
- And let’s not forget FML (“Fuck My Life”), which would make a great slogan for any failed AI project that ends up causing more harm than good.
So go ahead, OpenAI. Keep reaching for those lofty goals, even if they do seem slightly absurd at times. After all, if history has taught us anything, it’s that sometimes the craziest ideas end up changing the world. Just remember to check back with reality every now and then, okay? We wouldn’t want you getting lost in your own little bubble of genius.
Best regards,
[Your Name] "
I love how ChatGPT couldn’t respond to that in the same way.
BTW, are you running that locally?
You bet your ollama I am.
If you have good enough hardware, this is a rabbit hole you could explore. https://github.com/oobabooga/text-generation-webui/
Naah. I think this model needs a crazy amount of vram to run. I’m stuck with 4gigs :(
Did you use a specific website to use Mixtral? I want to try but system requirements are crazy.
You can run it locally with an RTX 3090 or less (as long as you have enough RAM), but there’s a bit of a tradeoff in speed when using more system RAM vs VRAM.
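If it helps, here's roughly what a local call looks like through the ollama Python client. This is just a minimal sketch: it assumes the Ollama daemon is running and you've already pulled the model (e.g. `ollama pull mixtral`); the model name and prompt are only examples.

```python
# Minimal sketch of querying a locally served Mixtral via the ollama Python client.
# Assumes Ollama is installed, the daemon is running, and `ollama pull mixtral`
# has already been done; prompt text is purely illustrative.
import ollama

response = ollama.chat(
    model="mixtral",
    messages=[{"role": "user", "content": "Summarize the GPT trademark ruling in one snarky sentence."}],
)
print(response["message"]["content"])
```

Whether it's fast enough mostly comes down to how many layers fit in VRAM versus how much spills into system RAM.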
why the heck can’t they use proper dashes >:(
It’s a nightmare to search for anything about GUID Partition Tables (GPT) now.
Just use the duck: https://duckduckgo.com/?q=GUID+Partition+Table :D
first three results for gpt on ddg are all for chatgpt
at least the fourth is for gpt vs mbr, but all the rest are about llms
I was poking fun by searching for “GUID Partition Table” instead of GPT
i see, please excuse my obtusity (im sure that must be a word)
as a lazy person, however, that sure seems like a lot of letters
not that i regularly need to look it up to be honest
Putting aside the merits of trying to trademark GPT, which, as the examiner says, is a commonly used term for a specific type of AI (there are other open-source “gpt” models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini. Microsoft has Copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, Siri, and Bixby.
Then OpenAI is like ChatGPT. Rolls right off the tongue, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom “GPTs”, which is not only an unfriendly name, but also confusing. If I try to use ChatGPT to help me make a GPT it gets confused and we end up in a “who’s on first” style standoff. I’ve resorted to just forcing ChatGPT to do a web search for “custom GPT” so I don’t have to explain the concept to it each time.
It’s like calling a laptop “Laptop v3.5”. They decided to name their product after the literal technology they use, which was just a bad idea from the start, and is still causing confusion to this day when open-source models are also called GPT.
So… iPhone?
iPhone is like ChatGPT. But trying to trademark GPT would be like trademarking “Phone”
iPhone is a bit different. Rather than just being the object name, they incorporated said object name into their naming style. OpenAI were trying to trademark GPT - the literal name of the technology they were using.
Like trying to trademark “CellPhone”!
Most of OpenAI as a company right now looks like someone who accidentally found themselves on a raging bull and is desperately trying to hold onto it.
It’s beyond disappointing to see the leading AI company tripping over itself to cater to ‘chatbot’ use cases for their tech over everything else.
You can’t really say any GPT model has nothing to do with OpenAI. They invented the architecture. But the name GPT predates their commercial products using the technology.
I don’t know enough to know whether or not that’s true. My understanding was that Google’s DeepMind invented the transformer architecture with their paper “Attention Is All You Need.” A lot, if not most, LLMs use a transformer architecture, though you’re probably right that a lot of them base it on the open-source models OpenAI made available. The “generative” part is just descriptive of the model generating outputs (as opposed to classification and the like), and “pre-trained” just refers to the training process.
But again I’m a dummy so you very well may be right.
The attention paper from Google introduced transformers; OpenAI introduced generative pretraining as a technique that allows transformers to achieve very good performance on downstream tasks with very little additional fine-tuning. That paper and the subsequent release of the pretrained GPT models directly led to the LLM boom.
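That pretrain-once, reuse-everywhere idea is basically what the open model hubs run on today. Rough sketch, purely as an illustration, using an already-pretrained GPT-style checkpoint (GPT-2 via the Hugging Face transformers library) with no extra fine-tuning:

```python
# Illustration of reusing a pretrained GPT-style model: load an existing
# checkpoint (GPT-2 here) and generate text without any additional fine-tuning.
# Requires the `transformers` package; the first run downloads the weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The court rejected the trademark because", max_new_tokens=30)
print(out[0]["generated_text"])
```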
Wow, talk about malicious intent. Trying to trademark the name of the technology after it gets traction. Way to screw the whole AI community over.
BuT wE ArE oPeNaI, oPeN!!
A gpt coulda told you all that if you’d asked, Sam.
You love to see it.
GPT is GUID Partition Table, some of us are old and know old things.
/waves old man cane at the sky cursing the changing world