• NVIDIA released a demo version of a chatbot that runs locally on your PC, giving it access to your files and documents.
• The chatbot, called Chat with RTX, can answer queries and create summaries based on personal data fed into it.
• It supports various file formats and can integrate YouTube videos for contextual queries, making it useful for data research and analysis.
There were CUDA cores before RTX. I can run LLMs on my CPU just fine.
This statement is so wrong. I have Ollama with the llama2 model running decently on a GTX 970. Is it super fast? No. Is it usable? Yes, absolutely.
There are a number of local LLMs that run on any modern CPU. No GPU needed at all, let alone an RTX card.