

Well, it is sending it off your device to the AI’s API. Luckily it won’t have any identifying information, such as cookies, screen size, OS, IP, etc.
The problem seems to be with the word “luckily”.
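To make the objection concrete, here is a minimal sketch of what such a request could look like, assuming a hypothetical summarization endpoint (api.example.com is not a real API). Even with cookies, screen size, OS, and IP stripped out, the article text itself still leaves the device:

    import json
    import urllib.request

    page_text = open("article.txt", encoding="utf-8").read()

    req = urllib.request.Request(
        "https://api.example.com/v1/summarize",  # hypothetical endpoint
        data=json.dumps({"text": page_text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},  # no cookies, no device info
    )
    # The full page content is the payload; "no id information" only
    # means the metadata is gone, not the text you were reading.
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["summary"])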
This feels like Windows Recall…
I would rather have questions that cannot be answered than answers that cannot be questioned.
Richard Feynman
You didn’t misread. It says something along the lines of: generating the summary locally takes a long time, and it could be faster to read the article and summarize it yourself.
Then there’s the inconvenience of having a small LLM installed locally: being small means it’s not very capable, yet “small” still isn’t really small in disk space or memory… So what could the future bring us?
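For comparison, a minimal sketch of that local setup in Python, assuming the Hugging Face transformers library and the small distilled model sshleifer/distilbart-cnn-12-6 (both are my assumptions, not what any particular browser ships):

    from transformers import pipeline

    # First run downloads roughly a gigabyte of weights:
    # "small" is not really small.
    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

    article = open("article.txt", encoding="utf-8").read()

    # CPU inference on a full article can easily take longer than
    # skimming it yourself; truncate to keep the demo tractable.
    result = summarizer(article[:3000], max_length=130, min_length=30,
                        do_sample=False)
    print(result[0]["summary_text"])

Nothing leaves the device here, but you pay for that in download size, memory, and generation time, which is exactly the trade-off being described.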
Exactly! The convenience of a big LLM that is fast and more accurate, at the relatively small cost of not being hosted locally. It’s a slippery slope, and as LLMs evolve (in both capability and size), I think we know where it all ends.