This AI Accelerator demonstrates how to implement a Streamlit application based on the Google Gemini LLM and host it on the DataRobot platform. Users of this AI Accelerator are expected to be familiar with the custom model deployment process and custom metrics creation in DataRobot, as well as with Google Vertex AI.
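
For orientation, the sketch below shows the general shape of such an app: a minimal Streamlit front end that forwards a user prompt to a DataRobot deployment's prediction API. The environment variable names, prediction URL shape, and the `promptText` column are assumptions to adapt to your own deployment.

```python
# Minimal sketch: a Streamlit UI that sends a prompt to a DataRobot deployment.
# The URL shape, environment variables, and the "promptText" column name are
# assumptions; adjust them to match your prediction server and custom model schema.
import os
import requests
import streamlit as st

API_TOKEN = os.environ["DATAROBOT_API_TOKEN"]          # your DataRobot API token
DATAROBOT_KEY = os.environ.get("DATAROBOT_KEY", "")    # required on the managed AI Platform
DEPLOYMENT_ID = os.environ["DEPLOYMENT_ID"]            # the Gemini custom model deployment
PRED_SERVER = os.environ["PREDICTION_SERVER"]          # e.g. https://example.dr-prediction.com

st.title("Gemini chat (hosted on DataRobot)")
prompt = st.text_area("Ask a question")

if st.button("Submit") and prompt:
    resp = requests.post(
        f"{PRED_SERVER}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "DataRobot-Key": DATAROBOT_KEY,
            "Content-Type": "application/json",
        },
        json=[{"promptText": prompt}],  # column name must match the custom model's schema
        timeout=120,
    )
    resp.raise_for_status()
    # Assumes the standard predApi JSON response layout with one row per record.
    st.write(resp.json()["data"][0]["prediction"])
```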

Read more...

There is a wide variety of LLMs, for example OpenAI (not Azure), Gemini Pro, Cohere, and Claude. Managing and monitoring these LLMs is crucial to using them effectively. Data Drift monitoring in DataRobot MLOps detects changes in user prompts and model responses and alerts you when users interact with the model differently than the AI builder originally expected. Sidecar models can block jailbreak attempts, replace personally identifiable information, and evaluate LLM responses using global models from the model registry or models you have created. The Data Export functionality lets you review what users wanted to know at any point in time and identify data that should be added to your RAG system. Custom Metrics track the KPIs that drive your decisions, such as token costs, toxicity, and hallucination.
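
To make the custom metrics idea concrete, the sketch below reports a per-request token cost to a deployment-level custom metric. The deployment and metric IDs, the pricing constant, and the `fromJSON` endpoint path are assumptions; check the DataRobot API documentation for your release.

```python
# Sketch: report a token-cost value to a DataRobot custom metric.
# DEPLOYMENT_ID, METRIC_ID, the price per 1K tokens, and the endpoint path
# are illustrative assumptions, not a definitive implementation.
import os
from datetime import datetime, timezone

import requests

DATAROBOT_ENDPOINT = os.environ.get("DATAROBOT_ENDPOINT", "https://app.datarobot.com/api/v2")
API_TOKEN = os.environ["DATAROBOT_API_TOKEN"]
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"   # hypothetical placeholder
METRIC_ID = "YOUR_CUSTOM_METRIC_ID"    # hypothetical placeholder
PRICE_PER_1K_TOKENS = 0.002            # assumed pricing; adjust to your provider

def report_token_cost(prompt_tokens: int, completion_tokens: int) -> None:
    """Submit a single token-cost data point for the current timestamp."""
    cost = (prompt_tokens + completion_tokens) / 1000 * PRICE_PER_1K_TOKENS
    payload = {
        "buckets": [
            {"timestamp": datetime.now(timezone.utc).isoformat(), "value": cost}
        ]
    }
    resp = requests.post(
        f"{DATAROBOT_ENDPOINT}/deployments/{DEPLOYMENT_ID}/customMetrics/{METRIC_ID}/fromJSON/",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()

report_token_cost(prompt_tokens=350, completion_tokens=120)
```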

Read more...

There is a wide variety of open source models. For example, there has been a lot of interest in LLaMA and its variations such as Alpaca and Vicuna, as well as Falcon, Mistral, and others. Hosting these models is a challenge because they require expensive GPUs, so customers often want to compare cloud providers to find the hosting option that best meets their needs. In this example, we work with Google Cloud Platform.
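
As a rough sketch of what "hosting" means in practice, the snippet below loads one of these open source models on a GPU-backed instance with Hugging Face Transformers and generates a response. The model ID and generation settings are illustrative assumptions; any similarly sized model from the Hub can be substituted.

```python
# Sketch: load an open source LLM on a GPU instance and generate a response.
# The model ID and generation parameters are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed example model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # halve memory use on the GPU
    device_map="auto",           # place layers on available GPUs automatically
)

prompt = "Explain what a vector database is in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```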

Read more...

The use case provided in this notebook takes the latest update of an RSS feed from CNN, downloads the article as text, embeds the text into a vector database, uses Google Bison to summarize the text, and presents the summary of the article in a Streamlit app.
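
A compressed sketch of that pipeline is shown below, covering the feed, the article download, and the summarization step; the feed URL, project settings, and text-bison model version are assumptions, and the accelerator's notebook covers the full flow including the vector database and Streamlit app.

```python
# Sketch: pull the latest CNN RSS entry, extract the article text, and
# summarize it with Vertex AI. The feed URL and model version are assumptions.
import feedparser
import requests
import vertexai
from bs4 import BeautifulSoup
from vertexai.language_models import TextGenerationModel

FEED_URL = "http://rss.cnn.com/rss/cnn_topstories.rss"   # assumed feed URL
vertexai.init(project="your-gcp-project", location="us-central1")  # placeholder values

# 1. Grab the most recent entry from the feed.
entry = feedparser.parse(FEED_URL).entries[0]

# 2. Download the article and strip it down to plain text.
html = requests.get(entry.link, timeout=30).text
article_text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

# 3. Summarize with a Vertex AI text model (PaLM 2 / Bison family).
model = TextGenerationModel.from_pretrained("text-bison@002")
summary = model.predict(
    f"Summarize the following news article in three sentences:\n\n{article_text[:8000]}",
    max_output_tokens=256,
)
print(entry.title)
print(summary.text)
```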

Read more...

Retrieval Augmented Generation (RAG) has become an industry standard method for interfacing with large language models by making them 'context aware'. However, there are a number of situations where a text generation problem is not solved by interacting with a large vector database containing many documents. These problems require context, but the context is not known before query time and is often unrelated to existing vector stores. Usually, they involve questions about single documents, where the desirable behavior is to allow the document to be specified at runtime.
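
One way to picture the runtime-document pattern is sketched below: the user supplies a document at query time, it is chunked and embedded on the fly, and only the most relevant chunks are passed to the LLM as context. The embedding model, the fixed-size chunking, and the `contract.txt` file name are illustrative assumptions.

```python
# Sketch: build a throwaway, per-request index over a single document supplied
# at query time, instead of querying a persistent vector store.
# The embedding model and naive fixed-size chunking are illustrative choices.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def top_chunks(document: str, question: str, chunk_size: int = 800, k: int = 3) -> list[str]:
    """Chunk the runtime document, embed everything, and return the k best chunks."""
    chunks = [document[i : i + chunk_size] for i in range(0, len(document), chunk_size)]
    chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)
    query_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = np.dot(chunk_vecs, query_vec)            # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]

# The selected chunks become the context of the prompt sent to the LLM of your choice.
# "contract.txt" is a hypothetical user-supplied document.
context = "\n\n".join(top_chunks(open("contract.txt").read(), "What is the termination clause?"))
```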

Read more...

Take a deep dive into using zero-shot text classification for error analysis of machine learning models.
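
To make the idea concrete, the small sketch below tags misclassified rows with candidate error themes using an off-the-shelf zero-shot classifier; the model, the candidate labels, and the dataframe columns are assumptions rather than the accelerator's exact setup.

```python
# Sketch: label misclassified records with likely error causes using
# zero-shot text classification. Column names and candidate labels are
# illustrative assumptions.
import pandas as pd
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical dataframe of misclassified records with the original input text.
errors = pd.DataFrame({"text": ["shipment arrived broken", "charged twice for one order"]})

candidate_labels = ["product quality", "billing issue", "delivery delay", "other"]

def tag_error(text: str) -> str:
    result = classifier(text, candidate_labels)
    return result["labels"][0]  # highest scoring label

errors["error_theme"] = errors["text"].apply(tag_error)
print(errors)
```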

Read more...

This accelerator aims to illustrate how businesses can use DataRobot to effectively and holistically monitor generative AI solutions, using the metrics relevant to them.

Read more...

This accelerator shows how users can quickly and seamlessly enable LLMOps and observability in their existing generative AI solutions without the need for code refactoring.

Read more...

In this accelerator, we illustrate how to use Generative AI models to cater to Level 1 requests, allowing support teams to focus on more pressing, high-visibility requests. By learning from historical communications, Generative AI agents can maintain the same standard of support communication that customers are used to.
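
As a rough sketch of the idea, the snippet below drafts a reply to a new Level 1 ticket by feeding a few historical ticket/response pairs to an LLM as in-context examples; the OpenAI client, model name, and example tickets are assumptions, not the accelerator's exact implementation.

```python
# Sketch: draft a Level 1 support reply with an LLM, using historical
# ticket/response pairs as few-shot examples. Model name and data are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    ("I forgot my password.", "You can reset it from the sign-in page via 'Forgot password'."),
    ("How do I update my billing address?", "Go to Account > Billing and edit the address field."),
]

def draft_reply(new_ticket: str) -> str:
    messages = [{"role": "system",
                 "content": "You are a support agent. Match the tone of the example replies."}]
    for ticket, reply in history:
        messages.append({"role": "user", "content": ticket})
        messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": new_ticket})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(draft_reply("The mobile app keeps logging me out."))
```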

Read more...

This accelerator aims to provide instructions on how to build this type of system using DataRobot's generative AI solution framework. The accelerator shows how you can build a pipeline to create a knowledge base with only trusted research papers, and build a conversational agent that can answer questions from medical professionals.

Read more...

Use Generative AI and Prompt Engineering to consume cluster insights and create cluster labels for DataRobot clusters.
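
A minimal sketch of this pattern follows: per-cluster insights (here, a short description of each cluster's distinguishing features) are folded into a prompt that asks an LLM for a concise, human-readable label. The insight strings and the OpenAI model are assumptions; in the accelerator the insights come from a DataRobot clustering model.

```python
# Sketch: turn per-cluster insights into short labels with an LLM.
# The insight dictionary and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

cluster_insights = {
    0: "high purchase frequency, low average basket size, urban zip codes",
    1: "seasonal purchases, high return rate, coupon-driven",
}

def label_cluster(insight: str) -> str:
    prompt = (
        "Propose a concise (max 4 words) marketing-friendly name for a customer "
        f"segment with these characteristics: {insight}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

labels = {cid: label_cluster(text) for cid, text in cluster_insights.items()}
print(labels)
```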

Read more...

Use Predictive AI models in tandem with Generative AI models to overcome the limitations of guardrails when automating the summarization and segmentation of sentiment text.

Read more...

Integrate LLM-based agents like ChatGPT with DataRobot prediction explanations to quickly implement effective customer communication in AI-based workflows.
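
To illustrate the integration, the sketch below takes prediction explanations (as they might be returned for one record by a DataRobot deployment) and asks an LLM to phrase them as a customer-facing message; the explanation structure shown and the model name are assumptions.

```python
# Sketch: turn DataRobot prediction explanations into a customer-friendly
# message with an LLM. The explanation structure and model name are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# Explanations, in spirit, for one scored record (feature, impact strength, value).
explanations = [
    {"feature": "days_since_last_payment", "strength": 0.42, "featureValue": 45},
    {"feature": "number_of_support_calls", "strength": 0.31, "featureValue": 6},
]
prediction = "high churn risk"

facts = "; ".join(
    f"{e['feature']} = {e['featureValue']} (impact {e['strength']:+.2f})" for e in explanations
)
prompt = (
    "Write a short, polite email to a customer explaining that we noticed "
    f"signs of dissatisfaction ({prediction}) driven by: {facts}. "
    "Offer to help, and avoid technical jargon."
)
message = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(message.choices[0].message.content)
```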

Read more...

About AI Accelerators
Discover code-first, modular building blocks for efficient model development and deployment that provide a template for kick-starting a project with DataRobot.

Check out GitHub to learn how to get started.