Overview
Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface. Supported providers:
- OpenAI
- Google AI
- Azure OpenAI
- Anthropic
- Cohere
- Together
- Ollama
- vLLM
- GPT4All
- JinaChat
- Hugging Face
- Llama2
- Vertex AI
- Mistral AI
- AWS Bedrock
OpenAI
To use OpenAI LLM models, you have to set the OPENAI_API_KEY environment variable. You can obtain the OpenAI API key from the OpenAI Platform.
Once you have obtained the key, you can use it like this:
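A minimal sketch of what that looks like; the model name and config values are only illustrative:

```python
import os
from embedchain import App

# Set the key obtained from the OpenAI Platform.
os.environ["OPENAI_API_KEY"] = "sk-xxx"

# Create an app that uses OpenAI as the LLM provider.
app = App.from_config(config={
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-3.5-turbo",
            "temperature": 0.5,
        },
    }
})

app.add("https://www.forbes.com/profile/elon-musk")
print(app.query("What is the net worth of Elon Musk?"))
```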
Function Calling
To enable function calling in your application using Embedchain and OpenAI, you need to pass functions into the OpenAILlm class as an array of functions. Here are several ways in which you can achieve that:
Examples:
- Using Pydantic models
- Using an OpenAI JSON schema
- Using actual Python functions (a sketch of this approach is shown below)
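As a sketch of the last approach, assuming OpenAILlm accepts the functions list directly and App accepts a pre-built llm instance (both argument names follow the description above and are not verified against a specific release):

```python
from embedchain import App
from embedchain.llm.openai import OpenAILlm

# A plain Python function that the model may decide to call; its
# signature and docstring are used to build the function schema.
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Pass the functions as an array to the OpenAI LLM wrapper (argument name assumed).
llm = OpenAILlm(functions=[multiply])
app = App(llm=llm)

app.query("What is 4 multiplied by 7?")
```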
Google AI
To use a Google AI model, you have to set the GOOGLE_API_KEY environment variable. You can obtain the Google API key from the Google Maker Suite.
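For instance, a sketch using Gemini; the provider key "google" and the model name are assumptions to adapt to your setup:

```python
import os
from embedchain import App

# Key obtained from the Google Maker Suite.
os.environ["GOOGLE_API_KEY"] = "xxx"

app = App.from_config(config={
    "llm": {
        "provider": "google",
        "config": {
            "model": "gemini-pro",
            "temperature": 0.5,
        },
    }
})
```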
Azure OpenAI
To use the Azure OpenAI model, you have to set some of the Azure OpenAI-related environment variables, as given in the code block below:
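A rough sketch of the setup; the exact environment variable names, the provider key "azure_openai", and config fields such as deployment_name depend on your SDK version and deployment, so treat them as assumptions:

```python
import os
from embedchain import App

# Azure OpenAI environment variables (names assumed; adjust to your SDK version).
os.environ["AZURE_OPENAI_API_KEY"] = "xxx"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2024-02-01"

app = App.from_config(config={
    "llm": {
        "provider": "azure_openai",
        "config": {
            "deployment_name": "<your-deployment-name>",
            "temperature": 0.5,
        },
    }
})
```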
Anthropic
To use Anthropic's model, set the ANTHROPIC_API_KEY environment variable, which you can find on their Account Settings page.
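For example, a minimal config sketch (the model name is illustrative):

```python
import os
from embedchain import App

os.environ["ANTHROPIC_API_KEY"] = "xxx"

app = App.from_config(config={
    "llm": {
        "provider": "anthropic",
        "config": {
            "model": "claude-instant-1",
            "temperature": 0.5,
        },
    }
})
```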
Cohere
Install the related dependencies for Cohere, then set the COHERE_API_KEY environment variable, which you can find on their Account settings page.
Once you have the API key, you are all set to use it with Embedchain.
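A sketch of what usage might look like; the extra name in the install hint and the model name are assumptions:

```python
import os
from embedchain import App

# Install the dependency first, e.g.: pip install --upgrade "embedchain[cohere]"  (extra name assumed)
os.environ["COHERE_API_KEY"] = "xxx"

app = App.from_config(config={
    "llm": {
        "provider": "cohere",
        "config": {
            "model": "command",
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    }
})
```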
Together
Install the related dependencies for Together, then set the TOGETHER_API_KEY environment variable, which you can find on their Account settings page.
Once you have the API key, you are all set to use it with Embedchain.
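Usage follows the same pattern; the model identifier below is just an example of a Together-hosted model, and the extra name in the install hint is assumed:

```python
import os
from embedchain import App

# Install the dependency first, e.g.: pip install --upgrade "embedchain[together]"  (extra name assumed)
os.environ["TOGETHER_API_KEY"] = "xxx"

app = App.from_config(config={
    "llm": {
        "provider": "together",
        "config": {
            "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
            "temperature": 0.5,
        },
    }
})
```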
Ollama
Set up Ollama by following the instructions at https://github.com/jmorganca/ollama.
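Once the Ollama server is running locally, a config sketch might look like this; the base_url field and model name are assumptions, and the model must be pulled first (e.g. `ollama pull llama2`):

```python
from embedchain import App

# Assumes an Ollama server is running locally on the default port
# and the referenced model has already been pulled.
app = App.from_config(config={
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama2",
            "temperature": 0.5,
            "base_url": "http://localhost:11434",
        },
    }
})
```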
vLLM
Set up vLLM by following the instructions given in their docs.
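With vLLM installed, a configuration sketch (the provider key "vllm" and the model name are assumptions):

```python
from embedchain import App

# Assumes vLLM and its dependencies are installed locally.
app = App.from_config(config={
    "llm": {
        "provider": "vllm",
        "config": {
            "model": "mosaicml/mpt-7b",
            "temperature": 0.5,
        },
    }
})
```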
GPT4All
Install the related dependencies for GPT4All; a configuration sketch is shown below.
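A local, offline-friendly sketch; the extra name in the install hint and the model filename are assumptions:

```python
from embedchain import App

# Install the dependency first, e.g.: pip install --upgrade "embedchain[opensource]"  (extra name assumed)
app = App.from_config(config={
    "llm": {
        "provider": "gpt4all",
        "config": {
            "model": "orca-mini-3b-gguf2-q4_0.gguf",
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    }
})
```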
JinaChat
First, set the JINACHAT_API_KEY environment variable, which you can obtain from their platform.
Once you have the key, load the app using the config yaml file:
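Shown here as an inline config dict, which can equally live in a yaml file passed via config_path; the provider key "jina" is an assumption:

```python
import os
from embedchain import App

os.environ["JINACHAT_API_KEY"] = "xxx"

app = App.from_config(config={
    "llm": {
        "provider": "jina",
        "config": {
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    }
})
```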
Hugging Face
Install the related dependencies for Hugging Face, then set the HUGGINGFACE_ACCESS_TOKEN environment variable, which you can obtain from their platform.
Once you have the token, load the app using the config yaml file:
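For example (the provider key, the extra name in the install hint, and the model are assumptions):

```python
import os
from embedchain import App

# Install the dependency first, e.g.: pip install --upgrade "embedchain[huggingface-hub]"  (extra name assumed)
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_xxx"

app = App.from_config(config={
    "llm": {
        "provider": "huggingface",
        "config": {
            "model": "google/flan-t5-xxl",
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    }
})
```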
Custom Endpoints
You can also use Hugging Face Inference Endpoints to access custom endpoints. First, set the HUGGINGFACE_ACCESS_TOKEN as above.
Then, load the app using the config yaml file. Additional model parameters can be passed through the model_kwargs field.
Currently, only the text-generation and text2text-generation tasks are supported [ref].
See langchain's Hugging Face endpoint for more information.
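A sketch of pointing the Hugging Face provider at a custom Inference Endpoint; the endpoint field name follows the description above, and the URL is a placeholder:

```python
import os
from embedchain import App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_xxx"

app = App.from_config(config={
    "llm": {
        "provider": "huggingface",
        "config": {
            # URL of your dedicated Inference Endpoint (placeholder).
            "endpoint": "https://<your-endpoint>.endpoints.huggingface.cloud",
            # Extra generation parameters passed through model_kwargs.
            "model_kwargs": {
                "max_new_tokens": 100,
                "temperature": 0.5,
            },
        },
    }
})
```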
Llama2
Llama2 is integrated through Replicate. Set the REPLICATE_API_TOKEN environment variable, which you can obtain from their platform.
Once you have the token, load the app using the config yaml file:
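For example, with a Replicate-hosted Llama2 model; the model identifier is a placeholder for the full owner/name:version string from Replicate:

```python
import os
from embedchain import App

os.environ["REPLICATE_API_TOKEN"] = "xxx"

app = App.from_config(config={
    "llm": {
        "provider": "llama2",
        "config": {
            # Replace with the full Replicate model identifier, e.g. "<owner>/<model>:<version>".
            "model": "<owner>/<llama2-model>:<version>",
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    }
})
```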
Vertex AI
Set up Google Cloud Platform application credentials by following the instructions on GCP. Once setup is done, use the following code to create an app using VertexAI as the provider:
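A sketch, assuming application default credentials are already configured and "vertexai" is the provider key; the model name is illustrative:

```python
from embedchain import App

# Relies on GCP application default credentials set up as described above.
app = App.from_config(config={
    "llm": {
        "provider": "vertexai",
        "config": {
            "model": "chat-bison",
            "temperature": 0.5,
        },
    }
})
```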
Mistral AI
Obtain the Mistral AI API key from their console.
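A config sketch; the environment variable name, the provider key, and the model name are assumptions:

```python
import os
from embedchain import App

os.environ["MISTRAL_API_KEY"] = "xxx"

app = App.from_config(config={
    "llm": {
        "provider": "mistralai",
        "config": {
            "model": "mistral-tiny",
            "temperature": 0.5,
        },
    }
})
```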
AWS Bedrock
Setup
- Before using the AWS Bedrock LLM, make sure you have the appropriate model access from the Bedrock Console.
- You will also need to authenticate the boto3 client using one of the methods described in the AWS documentation.
- You can optionally export an AWS_REGION.
Usage
The model arguments are different for each provider. Please refer to the AWS Bedrock Documentation to find the appropriate arguments for your model.
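For illustration, a sketch with an Amazon Titan model; the model id and the keys inside model_kwargs vary by provider, so check the Bedrock documentation before copying them:

```python
import os
from embedchain import App

# Optionally pin the region (see Setup above); credentials come from your boto3 auth setup.
os.environ["AWS_REGION"] = "us-west-2"

app = App.from_config(config={
    "llm": {
        "provider": "aws_bedrock",
        "config": {
            "model": "amazon.titan-text-express-v1",
            # The arguments below are Titan-specific; other Bedrock providers use different keys.
            "model_kwargs": {
                "temperature": 0.5,
                "topP": 1,
            },
        },
    }
})
```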
If you can't find the specific LLM you need, no need to fret. We're continuously expanding our support for additional LLMs, and you can help us prioritize by opening an issue on our GitHub or simply reaching out to us on our Slack or Discord community.