NVIDIA ChatRTX is a recently released demo that lets you easily build a customized LLM running locally on your own machine, provided it runs Windows and has a compatible NVIDIA GPU (a 30- or 40-series card, or an earlier card with 8GB+ of VRAM). The key features of ChatRTX: it's free, it runs locally on your own machine, it can use a variety of AI models, it's easy to set up, and, most importantly, you can feed it your own training data in txt, pdf, or other formats.
What Is ChatRTX?
ChatRTX is a demo app that lets you personalize a GPT large language model (LLM) connected to your own content—docs, notes, images, or other data. Leveraging retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration, you can query a custom chatbot to quickly get contextually relevant answers. And because it all runs locally on your Windows RTX PC or workstation, you’ll get fast and secure results.
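The retrieve-then-generate pattern behind RAG can be sketched in a few lines. This is a toy illustration only: the document store, bag-of-words "embeddings," and function names below are all assumptions for demonstration, not ChatRTX's actual implementation, which uses neural embeddings and TensorRT-LLM under the hood.

```python
# Minimal RAG sketch: retrieve the most relevant document for a query,
# then prepend it as context to the prompt handed to the LLM.
# Everything here is illustrative; a real system uses a neural embedding model.
import math
from collections import Counter

# Hypothetical local "library" of indexed files.
documents = {
    "gpu.txt": "TensorRT-LLM accelerates large language model inference on RTX GPUs.",
    "rag.txt": "Retrieval-augmented generation grounds model answers in your own files.",
}

def embed(text):
    # Toy bag-of-words vector standing in for a learned embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query; return the top k names.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(documents[d])), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Retrieved snippets become grounding context for the chatbot's answer.
    context = "\n".join(documents[d] for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does retrieval-augmented generation work?"))
```

Because retrieval runs over your own files before generation, the model's answers stay grounded in your local content, which is the core of what ChatRTX does on-device.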
Chat With Your Files
ChatRTX supports various file formats, including txt, pdf, doc/docx, jpg, png, gif, and xml. Simply point the application at the folder containing your files and it’ll load them into the library in a matter of seconds.
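The "point it at a folder" workflow amounts to recursively collecting files with supported extensions. A minimal sketch of that step, assuming the extension list above; the function name is hypothetical and not part of ChatRTX:

```python
# Gather all supported files under a folder, recursively.
# SUPPORTED mirrors the formats listed above; collect_files is illustrative.
from pathlib import Path

SUPPORTED = {".txt", ".pdf", ".doc", ".docx", ".jpg", ".png", ".gif", ".xml"}

def collect_files(folder):
    """Return sorted paths of all supported files under `folder`."""
    return sorted(
        p for p in Path(folder).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED
    )
```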
Talk to ChatRTX
ChatRTX features an automatic speech recognition system that uses AI to process spoken language and provide text responses with support for multiple languages. Simply click the microphone icon and talk to ChatRTX to get started.
Photo and Image Search
Let ChatRTX do the work—sort through your photo albums with a simple text or voice search, keeping everything private and hassle-free.
Key Links
GitHub Project ChatRTX is Based On
Of course, ChatRTX isn't the only option for easily setting up a local AI chat server; alternatives such as LM Studio and Jan provide similar functionality without the NVIDIA GPU or Windows OS requirements. You can learn more about NVIDIA ChatRTX in the video below.