• 0 Posts
  • 4 Comments
Joined 3 months ago
Cake day: January 29th, 2025

  • You didn’t misread. It says something along the lines of: generating the summary locally takes so long that it could be faster to read the article and summarize it yourself.

    Then there’s the inconvenience of having a small LLM installed locally: being small means it’s not very capable, yet “small” still isn’t really small in terms of disk space and memory (as the sketch below shows)… So what could the future bring us?

    Exactly! The convenience of a big LLM that is fast and more accurate, at the relatively small cost of it not being hosted locally. It’s a slippery slope, and as LLMs evolve (both in capability and size), I think we know where it all ends.
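
    To make the “small but not really small” point concrete, here’s a minimal sketch of local summarization with a distilled model via the Hugging Face transformers library. The model choice and the article.txt path are illustrative assumptions, not anything from the article being discussed.

    ```python
    # Rough sketch: summarizing an article locally with a "small" model.
    # Assumes the transformers library (and a backend like PyTorch) is
    # installed; the model and file path below are illustrative.
    from transformers import pipeline

    # distilbart-cnn-12-6 is a distilled summarization model; even this
    # "small" model is roughly a gigabyte once downloaded.
    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

    with open("article.txt") as f:
        article = f.read()

    # Small models also have short context windows, so a long article has
    # to be truncated or chunked; a crude character cut is used here.
    result = summarizer(article[:3000], max_length=130, min_length=30, do_sample=False)
    print(result[0]["summary_text"])
    ```

    Even this setup captures the tradeoff: a download of around a gigabyte and slow CPU inference for a mediocre summary, versus a single call to a big hosted model.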