Nvidia is adding multilingual support to its NeMo Retriever. The tool can now pull relevant information from datasets into AI prompts, even when that information is spread across several languages.
Nvidia gives its NeMo Retriever a multilingual upgrade. NeMo Retriever is part of Nvidia’s AI stack and helps organizations that want to use Retrieval-Augmented Generation (RAG). With RAG, a prompt to an LLM is automatically supplemented with relevant background information. For example, if you ask for the status of an order, the LLM does not simply make up an answer: behind the scenes, the prompt is enriched with information from the order database, so the model can find the answer there.
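To illustrate that pattern, here is a minimal sketch of a generic RAG flow, not Nvidia’s implementation: the `retrieve_order_info` and `call_llm` functions are hypothetical placeholders for a database or vector-store lookup and an LLM API call.

```python
# Minimal sketch of the RAG pattern: fetch relevant records first,
# then hand them to the LLM as context alongside the user's question.
# retrieve_order_info and call_llm are hypothetical placeholders.

def retrieve_order_info(question: str) -> str:
    # In a real system this would query the order database or a vector store.
    return "Order 1042: shipped May 1, expected delivery May 4."

def call_llm(prompt: str) -> str:
    # Placeholder for an actual LLM API call.
    return f"(model answer based on a prompt of {len(prompt)} characters)"

def answer(question: str) -> str:
    context = retrieve_order_info(question)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(answer("What is the status of my order?"))
```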
Multilingual
RAG is an efficient way to apply LLMs to current business data without having to retrain the model itself. NeMo Retriever now ensures that relevant information is added to prompts regardless of the language in which that information is available. The retriever translates the information into vectors (numerical embeddings), so the AI model can work with it efficiently.
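The sketch below makes the vector idea concrete: documents in different languages and a query are mapped into the same vector space, and retrieval becomes a nearest-neighbour search by cosine similarity. The `embed` function here is a stand-in so the example runs on its own; in practice it would call a multilingual embedding service such as a NeMo Retriever microservice.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder for a multilingual embedding model. We derive a deterministic
    # pseudo-random unit vector so the example runs standalone.
    seed = int.from_bytes(hashlib.sha256(text.encode("utf-8")).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(8)
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b)  # both vectors are unit-length, so the dot product is the cosine

documents = [
    "Bestellung 1042 wurde am 1. Mai versandt.",  # German: order shipped May 1
    "La facture de mars est encore ouverte.",     # French: March invoice still open
    "The Q2 roadmap was approved last week.",     # English
]
doc_vectors = [embed(d) for d in documents]

query = "When was order 1042 shipped?"
q = embed(query)

# Pick the document whose vector lies closest to the query vector. With a real
# multilingual model the German order line would score highest; the placeholder
# embedding above only demonstrates the mechanics, not the semantics.
best = max(range(len(documents)), key=lambda i: cosine(q, doc_vectors[i]))
print("Closest document:", documents[best])
```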
Nvidia argues that the tool’s multilingualism matters for large enterprises, where relevant information is rarely stored exclusively in English. The multilingual NeMo Retriever is available as a microservice through the Nvidia API catalog.
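For readers who want to experiment, the snippet below sketches how an embedding microservice from the Nvidia API catalog is typically called through its OpenAI-compatible endpoint. The exact model name and the `input_type` parameter are assumptions here; check them against the catalog entry you actually use.

```python
# Hedged sketch: calling a NeMo Retriever embedding microservice from the
# Nvidia API catalog via its OpenAI-compatible endpoint. The model name and
# the input_type parameter are assumptions, not confirmed by the article.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],  # API key obtained from the Nvidia API catalog
)

response = client.embeddings.create(
    model="nvidia/llama-3.2-nv-embedqa-1b-v2",        # assumed multilingual embedding model
    input=["Wat is de status van bestelling 1042?"],  # Dutch query, same vector space as English docs
    encoding_format="float",
    extra_body={"input_type": "query"},               # assumed NIM-specific parameter
)

vector = response.data[0].embedding
print(f"Embedding dimension: {len(vector)}")
```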