Enhancing Interaction on PC With Nvidia’s Innovative Chatbot

Brian

Key Takeaways

  • Nvidia Chat with RTX is an AI chatbot that runs locally on your PC, using TensorRT-LLM and RAG for customized responses.
  • Installing Chat with RTX requires, at minimum, an RTX GPU, 16GB of RAM, 100GB of storage, and Windows 11.
  • Use Chat with RTX to set up files for RAG, ask questions, analyze YouTube videos, and ensure data security.

Nvidia has launched Chat with RTX, an AI chatbot that operates on your PC and offers features similar to ChatGPT and more! All you need is an Nvidia RTX GPU, and you’re all set to start using Nvidia’s new AI chatbot.

What Is Nvidia Chat with RTX?

Nvidia Chat with RTX is AI software that lets you run a large language model (LLM) locally on your computer. So, instead of going online to use an AI chatbot like ChatGPT, you can use Chat with RTX offline whenever you want.

Chat with RTX uses TensorRT-LLM, RTX acceleration, and a quantized Mistral 7B LLM to provide fast performance and quality responses on par with other online AI chatbots. It also supports retrieval-augmented generation (RAG), which lets the chatbot read through your files and tailor its answers to the data you provide, giving you a more personal experience.
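The retrieve-then-generate loop behind RAG can be sketched in a few lines. This is only an illustration: Chat with RTX uses embedding-based search and a TensorRT-accelerated Mistral model, whereas here simple keyword overlap stands in for retrieval and a stub stands in for the LLM, and all names are made up for the example.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# 1) find the documents most relevant to the query, 2) answer using them.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for the LLM: a real system would prompt the model with the context."""
    return f"Answering '{query}' using {len(context)} retrieved document(s)."

docs = [
    "Meeting with the design team on Tuesday at 10am.",
    "Grocery list: milk, eggs, bread.",
    "Project deadline is Friday.",
]
context = retrieve("When is the design meeting?", docs)
print(generate("When is the design meeting?", context))
```

The key idea is that the model never needs to have been trained on your files; relevant snippets are fetched at question time and injected into the prompt.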

If you want to try out Nvidia Chat with RTX, here’s how to download, install, and configure it on your computer.

How to Download and Install Chat with RTX

Chat with RTX official web page

Nvidia has made running an LLM locally on your computer much easier. To run Chat with RTX, you only need to download and install the app, just as you would with any other software. However, Chat with RTX does have some minimum specification requirements to install and use properly.

  • RTX 30-Series or 40-Series GPU
  • 16GB RAM
  • 100GB of free storage space
  • Windows 11

If your PC meets the minimum system requirements, you can go ahead and install the app.

  • Step 1: Download the Chat with RTX ZIP file.
  • Step 2: Extract the ZIP file, either by right-clicking it and opening it with a file archive tool like 7-Zip, or by double-clicking the file and selecting Extract All.
  • Step 3: Open the extracted folder and double-click setup.exe. Follow the onscreen instructions and check all the boxes during the custom installation process. After hitting Next, the installer will download and install the LLM and all its dependencies.
    Installation process of Chat with RTX

The Chat with RTX installation will take some time to finish as it downloads and installs a large amount of data. After the installation process, hit Close, and you’re done. Now, it’s time for you to try out the app.
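If you prefer to script the extraction in Step 2, Python's standard zipfile module can do it. The file and folder names below are placeholders, and a tiny demo archive is built first only so the snippet runs standalone; in practice, point `archive` at the ZIP you downloaded from Nvidia.

```python
# Script the "Extract All" step with Python's standard zipfile module.
import zipfile
from pathlib import Path

archive = Path("demo_chat_with_rtx.zip")  # swap in the real download's file name
target = Path("extracted")

# Build a stand-in archive (skip this step when using the real download).
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("setup.exe", b"placeholder installer bytes")

# Extract everything -- the equivalent of "Extract All" in Explorer.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)

print(sorted(p.name for p in target.iterdir()))
```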

How to Use Nvidia Chat with RTX

Although you can use Chat with RTX like a regular online AI chatbot, I strongly suggest you check out its RAG functionality, which lets you customize its output based on the files you give it access to.

Step 1: Create RAG Folder

To start using RAG on Chat with RTX, create a new folder to store the files you want the AI to analyze.

After creation, place your data files into the folder. The data you store can cover many topics and file types, such as documents, PDFs, text, and videos. However, you may want to limit the number of files you place in this folder so as not to affect performance. More data to search through means Chat with RTX will take longer to return responses for specific queries (but this is also hardware-dependent).
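Since response time grows with how much data Chat with RTX has to search, it can help to audit the folder before pointing the app at it. A small sketch, assuming a folder named rag_data; the sample files are created only so the snippet runs on its own.

```python
# Summarize what's in a RAG data folder: file types and total size.
from collections import Counter
from pathlib import Path

data_dir = Path("rag_data")
data_dir.mkdir(exist_ok=True)
# Sample files so the snippet runs standalone; use your real documents instead.
(data_dir / "schedule.pdf").write_bytes(b"%PDF-1.4 placeholder")
(data_dir / "notes.txt").write_text("Project deadline is Friday.")

files = [p for p in data_dir.iterdir() if p.is_file()]
counts = Counter(p.suffix for p in files)          # files per extension
total_mb = sum(p.stat().st_size for p in files) / 1e6
print(dict(counts), f"{total_mb:.2f} MB total")
```

If the folder is ballooning, pruning files you no longer need to query is the easiest way to keep responses snappy.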

Create data folder for RAG

Now that your database is ready, you can set up Chat with RTX and start using it to answer your questions and queries.

Step 2: Set Up Environment

Open Chat with RTX. It should look like the image below.

Chat with RTX web interface

Under Dataset, make sure that the Folder Path option is selected. Now click on the edit icon below (the pen icon) and select the folder containing all the files you want Chat with RTX to read. You can also change the AI model if other options are available (at the time of writing, only Mistral 7B is available).

You are now ready to use Chat with RTX.

Step 3: Ask Chat with RTX Your Questions

There are several ways to query Chat with RTX. The first is to use it like a regular AI chatbot. I asked Chat with RTX about the benefits of using a local LLM and was satisfied with its answer. It wasn’t enormously in-depth, but it was accurate enough.

Using Chat with RTX like a regular chatbot

But since Chat with RTX is capable of RAG, you can also use it as a personal AI assistant.

Asking Chat with RTX personal questions

Above, I’ve used Chat with RTX to ask about my schedule. The data came from a PDF file containing my schedule, calendar, events, work, and so on. In this case, Chat with RTX pulled the correct entries from that file; until there are integrations with other apps, you’ll have to keep your data files and calendar dates updated for features like this to work properly.

There are many ways you can use Chat with RTX’s RAG to your advantage. For example, you can use it to read through legal papers and give a summary, generate code relevant to the program you’re developing, get bulleted highlights about a video you’re too busy to watch, and so much more!

Step 4: Bonus Feature

In addition to your local data folder, you can use Chat with RTX to analyze YouTube videos. To do so, under Dataset, change the Folder Path to YouTube URL.

Set data path for YouTube

Copy the YouTube URL you want to analyze and paste it below the drop-down menu. Then ask away!
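Chat with RTX takes the URL as you copied it, but if you want to tidy a link (for example, strip playlist or tracking parameters) before pasting, a small helper can pull out the video ID. The function is purely illustrative and handles the two common YouTube link shapes.

```python
# Extract the video ID from the two common YouTube URL formats.
from urllib.parse import urlparse, parse_qs

def video_id(url: str) -> str:
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":                # short links: youtu.be/<id>
        return parsed.path.lstrip("/")
    return parse_qs(parsed.query).get("v", [""])[0]  # watch links: ?v=<id>

print(video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
print(video_id("https://youtu.be/dQw4w9WgXcQ"))                 # dQw4w9WgXcQ
```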

Using Chat with RTX to summarize a YouTube video

Chat with RTX’s YouTube video analysis was pretty good and delivered accurate information, so it could be handy for research, quick analysis, and more.

Is Nvidia’s Chat with RTX Any Good?

ChatGPT already provides RAG functionality, and some local AI chatbots have significantly lower system requirements. So, is Nvidia Chat with RTX even worth using?

The answer is yes! Chat with RTX is worth using despite the competition.

One of the biggest selling points of Nvidia Chat with RTX is its ability to use RAG without sending your files to a third-party server. Customizing GPTs through online services can expose your data. But since Chat with RTX runs locally without an internet connection, using RAG on Chat with RTX keeps your sensitive data safe and accessible only on your PC.

Compared to other local AI chatbots running Mistral 7B, Chat with RTX performs better and faster. Although a big part of the performance boost comes from using higher-end GPUs, Nvidia’s TensorRT-LLM and RTX acceleration make running Mistral 7B faster on Chat with RTX than other ways of running a chat-optimized LLM.

It is worth noting that the Chat with RTX version we are currently using is a demo. Later releases of Chat with RTX will likely become more optimized and deliver performance boosts.

What if I Don’t Have an RTX 30 or 40 Series GPU?

Chat with RTX is an easy, fast, and secure way to run an LLM locally without the need for an internet connection. If you’re interested in running an LLM locally but don’t have an RTX 30 or 40 Series GPU, you can try other ways of running an LLM locally. Two of the most popular are GPT4ALL and Text Gen WebUI. Try GPT4ALL if you want a plug-and-play experience for locally running an LLM. But if you’re a bit more technically inclined, running LLMs through Text Gen WebUI will provide better fine-tuning and flexibility.

  • Title: Enhancing Interaction on PC With Nvidia’s Innovative Chatbot
  • Author: Brian
  • Created at: 2024-08-18 10:00:20
  • Updated at: 2024-08-19 10:00:20
  • Link: https://tech-savvy.techidaily.com/enhancing-interaction-on-pc-with-nvidias-innovative-chatbot/
  • License: This work is licensed under CC BY-NC-SA 4.0.