There is a wide variety of LLMs available today, for example OpenAI (not Azure), Gemini Pro, Cohere, and Claude. Managing and monitoring these LLMs is crucial to using them effectively. Data Drift monitoring in DataRobot MLOps detects changes in user prompts and their responses and notifies you when users are interacting with the application differently from what the AI builder initially expected. Sidecar models prevent jailbreaks, replace Personally Identifiable Information, or evaluate LLM responses using the global models in the model registry or models you have created yourself. The Data Export functionality shows what users wanted to know at each point in time and which data should be included in the RAG system. Custom Metrics track the KPIs you base your decisions on, for example token costs, toxicity, and hallucination.
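As a concrete illustration, the sketch below shows how a token-cost value could be computed from a prompt/response pair and then reported as a Custom Metric. This is a minimal example rather than DataRobot's own implementation; the model name and per-token prices are hypothetical placeholders.

```python
# Minimal sketch of computing a token-cost value for a Custom Metric.
# The model name and prices below are illustrative only.
import tiktoken

PRICE_PER_1K_PROMPT_TOKENS = 0.0005      # hypothetical price in USD
PRICE_PER_1K_COMPLETION_TOKENS = 0.0015  # hypothetical price in USD

def token_cost(prompt: str, response: str, model: str = "gpt-3.5-turbo") -> float:
    """Estimate the cost of one prompt/response pair in USD."""
    encoding = tiktoken.encoding_for_model(model)
    prompt_tokens = len(encoding.encode(prompt))
    completion_tokens = len(encoding.encode(response))
    return (prompt_tokens / 1000) * PRICE_PER_1K_PROMPT_TOKENS + \
           (completion_tokens / 1000) * PRICE_PER_1K_COMPLETION_TOKENS

print(token_cost("What is MLOps?", "MLOps is the practice of operationalizing ML models."))
```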

 

In addition, the Playground in DataRobot lets you compare RAG systems built on the LLMs you would like to try once those models are deployed to DataRobot MLOps, so you can identify the best LLM to accelerate your business. Comparing a variety of LLMs is a key element in making a RAG system successful.
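Once the candidate LLMs are deployed, you can also compare them programmatically by sending the same prompt to each deployment. The sketch below is an assumption-laden example: the endpoint URL pattern, headers, and column names follow the standard DataRobot prediction API conventions, and the host, keys, and deployment IDs are placeholders you would replace with your own values.

```python
# Sketch: send the same prompt to two deployed LLMs and compare the answers.
# URL pattern, headers, and input column name are assumptions based on the
# standard DataRobot prediction API; adjust them to your own deployments.
import requests

API_TOKEN = "<your-api-token>"
DATAROBOT_KEY = "<your-datarobot-key>"
PREDICTION_SERVER = "https://example.datarobot.com"  # placeholder host

def ask_deployment(deployment_id: str, prompt: str) -> dict:
    url = f"{PREDICTION_SERVER}/predApi/v1.0/deployments/{deployment_id}/predictions"
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "DataRobot-Key": DATAROBOT_KEY,
        "Content-Type": "application/json",
    }
    # One row per prompt; the input column name depends on how the model was built.
    resp = requests.post(url, headers=headers, json=[{"promptText": prompt}])
    resp.raise_for_status()
    return resp.json()

prompt = "Summarize our refund policy in two sentences."
for deployment_id in ["<openai-deployment-id>", "<claude-deployment-id>"]:
    print(deployment_id, ask_deployment(deployment_id, prompt))
```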

 

Here DataRobot provides a collection of LLM custom inference model templates, which let you deploy and quickly start using your own LLM alongside the batteries-included LLMs such as Azure OpenAI, Google, and AWS. Currently, DataRobot has templates for OpenAI (not Azure), Gemini Pro, Cohere, and Claude. To use a template, follow the instructions.
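To give a sense of what such a template contains, here is a minimal custom.py sketch wrapping the OpenAI API. The load_model and score hook names follow the DataRobot custom model (DRUM) convention, but the input and output column names and the model name are illustrative assumptions; check the template's README for the exact names it expects.

```python
# custom.py -- minimal sketch of a custom inference model wrapping OpenAI.
# Hook names follow the DataRobot custom model (DRUM) convention; column names
# and the model name are illustrative, so check the template's README.
import os
import pandas as pd
from openai import OpenAI

def load_model(code_dir: str):
    # The API key is supplied through an environment variable at runtime.
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def score(data: pd.DataFrame, model, **kwargs) -> pd.DataFrame:
    completions = []
    for prompt in data["promptText"]:
        response = model.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        completions.append(response.choices[0].message.content)
    return pd.DataFrame({"resultText": completions})
```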
