Overview

Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface.

OpenAI

To use OpenAI's models, you have to set the OPENAI_API_KEY environment variable. You can obtain the OpenAI API key from the OpenAI Platform.

Once you have obtained the key, you can use it like this:

import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'  # replace 'xxx' with your actual key

app = App()
app.add("https://en.wikipedia.org/wiki/OpenAI")  # ingest a data source
app.query("What is OpenAI?")  # query against the ingested data

If you are looking to configure the different parameters of the LLM, you can do so by loading the app using a yaml config file.
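
For example, a sketch of such a config file (the model name and parameter values here are illustrative, not required defaults):

llm:
  provider: openai
  config:
    model: gpt-3.5-turbo
    temperature: 0.5
    max_tokens: 1000

You can then load the app with app = App.from_config(config_path="config.yaml").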

Function Calling

To enable function calling in your application using Embedchain and OpenAI, you need to pass functions into the OpenAILlm class as an array of functions. Here are several ways you can achieve that:

Examples:
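
As a minimal sketch, here is one approach that passes a plain Python function. The multiply helper is hypothetical, and the functions parameter name follows the description above; check the Embedchain reference for the current signature. Depending on your version, other formats (such as OpenAI function schemas) may also be accepted.

import os
from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ['OPENAI_API_KEY'] = 'xxx'

# Hypothetical helper to expose to the model
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Pass the function(s) to OpenAILlm as an array (parameter name assumed)
llm = OpenAILlm(functions=[multiply])
app = App(llm=llm)

app.query("What is 125 multiplied by 15?")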

Google AI

To use Google AI models, you have to set the GOOGLE_API_KEY environment variable. You can obtain the Google API key from Google MakerSuite.
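
A minimal sketch of the setup (the google provider name and gemini-pro model are assumptions; check the Embedchain reference for supported values):

import os
from embedchain import App

os.environ['GOOGLE_API_KEY'] = 'xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: google
  config:
    model: gemini-pro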

Azure OpenAI

To use Azure OpenAI models, you have to set some Azure OpenAI related environment variables, as shown in the code block below:
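
A sketch of the setup (the variable names below follow the common Azure OpenAI convention and are assumptions; substitute your own resource values):

import os
from embedchain import App

os.environ['OPENAI_API_TYPE'] = 'azure'
os.environ['AZURE_OPENAI_ENDPOINT'] = 'https://<your-resource>.openai.azure.com/'
os.environ['AZURE_OPENAI_API_KEY'] = 'xxx'
os.environ['OPENAI_API_VERSION'] = '<api-version>'

app = App.from_config(config_path="config.yaml")

Where config.yaml names your deployment (the azure_openai provider name and values are illustrative):

llm:
  provider: azure_openai
  config:
    model: gpt-35-turbo
    deployment_name: <your-deployment-name>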

You can find the list of models and deployment names on the Azure OpenAI Platform.

Anthropic

To use Anthropic's models, set the ANTHROPIC_API_KEY environment variable, which you can find on their Account Settings page.
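
A minimal sketch (the anthropic provider name and model value are assumptions):

import os
from embedchain import App

os.environ['ANTHROPIC_API_KEY'] = 'xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: anthropic
  config:
    model: claude-instant-1
    temperature: 0.5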

Cohere

Install related dependencies using the following command:

pip install --upgrade 'embedchain[cohere]'

Set the COHERE_API_KEY environment variable, which you can find on their Account settings page.

Once you have the API key, you are all set to use it with Embedchain.
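
A minimal sketch (the cohere provider name and parameter values are illustrative):

import os
from embedchain import App

os.environ['COHERE_API_KEY'] = 'xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: cohere
  config:
    temperature: 0.5
    max_tokens: 1000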

Together

Install related dependencies using the following command:

pip install --upgrade 'embedchain[together]'

Set the TOGETHER_API_KEY environment variable, which you can find on their Account settings page.

Once you have the API key, you are all set to use it with Embedchain.
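
A minimal sketch (the together provider name and model value are illustrative):

import os
from embedchain import App

os.environ['TOGETHER_API_KEY'] = 'xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: together
  config:
    model: mistralai/Mixtral-8x7B-Instruct-v0.1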

Ollama

Set up Ollama by following the instructions at https://github.com/jmorganca/ollama.
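
Once the Ollama server is running locally, a sketch of the config (the ollama provider name, llama2 model, and default base_url are assumptions):

llm:
  provider: ollama
  config:
    model: llama2
    temperature: 0.5
    base_url: http://localhost:11434

Load the app with app = App.from_config(config_path="config.yaml").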

vLLM

Set up vLLM by following the instructions given in their docs.
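
A sketch of the config once vLLM is installed (the vllm provider name and model value are illustrative):

llm:
  provider: vllm
  config:
    model: meta-llama/Llama-2-7b-hf
    temperature: 0.5

Load the app with app = App.from_config(config_path="config.yaml").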

GPT4ALL

Install related dependencies using the following command:

pip install --upgrade 'embedchain[opensource]'

GPT4All is a free-to-use, locally running, privacy-aware chatbot that requires no GPU or internet connection. You can use it with Embedchain using the following code:
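
A minimal sketch (the gpt4all provider name and model file are assumptions; GPT4All typically downloads the model file locally on first use):

from embedchain import App

app = App.from_config(config_path="config.yaml")
app.add("https://en.wikipedia.org/wiki/OpenAI")
app.query("What is OpenAI?")

Where config.yaml contains:

llm:
  provider: gpt4all
  config:
    model: orca-mini-3b-gguf2-q4_0.gguf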

JinaChat

First, set the JINACHAT_API_KEY environment variable, which you can obtain from their platform.

Once you have the key, load the app using the config yaml file:
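
A minimal sketch (the jina provider name and parameter values are assumptions):

import os
from embedchain import App

os.environ['JINACHAT_API_KEY'] = 'xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: jina
  config:
    temperature: 0.5
    max_tokens: 1000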

Hugging Face

Install related dependencies using the following command:

pip install --upgrade 'embedchain[huggingface-hub]'

First, set the HUGGINGFACE_ACCESS_TOKEN environment variable, which you can obtain from their platform.

Once you have the token, load the app using the config yaml file:
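
A minimal sketch (the huggingface provider name and flan-t5 model are illustrative):

import os
from embedchain import App

os.environ['HUGGINGFACE_ACCESS_TOKEN'] = 'hf_xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: huggingface
  config:
    model: google/flan-t5-xxl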

Custom Endpoints

You can also use Hugging Face Inference Endpoints to access custom endpoints. First, set the HUGGINGFACE_ACCESS_TOKEN as above.

Then, load the app using the config yaml file:
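
A sketch of the basic config, pointing the huggingface provider at your endpoint URL:

llm:
  provider: huggingface
  config:
    endpoint: <YOUR_ENDPOINT_URL_HERE>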

If your endpoint requires additional parameters, you can pass them in the model_kwargs field:

llm:
  provider: huggingface
  config:
    endpoint: <YOUR_ENDPOINT_URL_HERE>
    model_kwargs:
      max_new_tokens: 100
      temperature: 0.5

Currently, only text-generation and text2text-generation tasks are supported [ref].

See LangChain's Hugging Face endpoint documentation for more information.

Llama2

Llama2 is integrated through Replicate. Set the REPLICATE_API_TOKEN environment variable, which you can obtain from their platform.

Once you have the token, load the app using the config yaml file:
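
A minimal sketch (the llama2 provider name and Replicate model path are illustrative; Replicate model identifiers include a version hash, shown here as a placeholder):

import os
from embedchain import App

os.environ['REPLICATE_API_TOKEN'] = 'xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: llama2
  config:
    model: a16z-infra/llama13b-v2-chat:<version>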

Vertex AI

Set up Google Cloud Platform application credentials by following the instructions on GCP. Once setup is done, use the following code to create an app with Vertex AI as the provider:
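
A minimal sketch (the vertexai provider name and chat-bison model are assumptions):

from embedchain import App

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: vertexai
  config:
    model: chat-bison
    temperature: 0.5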

Mistral AI

Obtain the Mistral AI API key from their console.
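
A minimal sketch (the MISTRAL_API_KEY variable name, mistralai provider name, and mistral-tiny model are assumptions):

import os
from embedchain import App

os.environ['MISTRAL_API_KEY'] = 'xxx'

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: mistralai
  config:
    model: mistral-tiny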

AWS Bedrock

Setup

  • Before using the AWS Bedrock LLM, make sure you have the appropriate model access in the Bedrock Console.
  • You will also need to authenticate the boto3 client using one of the methods described in the AWS documentation.
  • You can optionally export an AWS_REGION environment variable.

Usage
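
A minimal sketch, assuming boto3 picks up your credentials from the environment (the aws_bedrock provider name and Titan model are illustrative):

import os
from embedchain import App

os.environ['AWS_REGION'] = 'us-west-2'  # optional, as noted above

app = App.from_config(config_path="config.yaml")

Where config.yaml contains:

llm:
  provider: aws_bedrock
  config:
    model: amazon.titan-text-express-v1
    model_kwargs:
      temperature: 0.5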


The model arguments differ for each provider. Please refer to the AWS Bedrock documentation to find the appropriate arguments for your model.


If you can't find the specific LLM you need, no need to fret. We're continuously expanding our support for additional LLMs, and you can help us prioritize by opening an issue on our GitHub or simply reaching out to us on our Slack or Discord community.