Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface.
To enable function calling in your application using Embedchain and OpenAI, pass your functions to the OpenAILlm class as a list. There are several ways to achieve that:
Examples:
Using a Pydantic model:

import os

from pydantic import BaseModel, Field, field_validator

from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"


class QA(BaseModel):
    """
    A question and answer pair.
    """

    question: str = Field(..., description="The question.", example="What is a mountain?")
    answer: str = Field(..., description="The answer.", example="A mountain is a hill.")
    person_who_is_asking: str = Field(
        ..., description="The person who is asking the question.", example="John"
    )

    @field_validator("question")
    def question_must_end_with_a_question_mark(cls, v):
        """
        Validate that the question ends with a question mark.
        """
        if not v.endswith("?"):
            raise ValueError("question must end with a question mark")
        return v

    @field_validator("answer")
    def answer_must_end_with_a_period(cls, v):
        """
        Validate that the answer ends with a period.
        """
        if not v.endswith("."):
            raise ValueError("answer must end with a period")
        return v


llm = OpenAILlm(config=None, functions=[QA])
app = App(llm=llm)

result = app.query("Hey I am Sid. What is a mountain? A mountain is a hill.")
print(result)
Using a raw JSON schema:

import os

from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"

json_schema = {
    "name": "get_qa",
    "description": "A question and answer pair and the user who is asking the question.",
    "parameters": {
        "type": "object",
        "properties": {
            "question": {"type": "string", "description": "The question."},
            "answer": {"type": "string", "description": "The answer."},
            "person_who_is_asking": {
                "type": "string",
                "description": "The person who is asking the question.",
            },
        },
        "required": ["question", "answer", "person_who_is_asking"],
    },
}

llm = OpenAILlm(config=None, functions=[json_schema])
app = App(llm=llm)

result = app.query("Hey I am Sid. What is a mountain? A mountain is a hill.")
print(result)
Using a plain Python function:

import os

import requests

from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"


def find_info_of_pokemon(pokemon: str):
    """
    Find the information of the given pokemon.

    Args:
        pokemon: The pokemon.
    """
    req = requests.get(f"https://pokeapi.co/api/v2/pokemon/{pokemon}")
    if req.status_code == 404:
        raise ValueError("pokemon not found")
    return req.json()


llm = OpenAILlm(config=None, functions=[find_info_of_pokemon])
app = App(llm=llm)

result = app.query("Tell me more about the pokemon pikachu.")
print(result)
To use a Google AI model, you have to set the GOOGLE_API_KEY environment variable. You can obtain the Google API key from Google MakerSuite.
import os

from embedchain import App

os.environ["GOOGLE_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")
app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("What is the net worth of Elon Musk?")
if app.llm.config.stream:  # if stream is enabled, response is a generator
    for chunk in response:
        print(chunk)
else:
    print(response)
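The config.yaml loaded above holds the LLM settings. A minimal sketch of what it might contain for the Google AI provider (the model name and sampling options below are illustrative, not prescriptive):

llm:
  provider: google
  config:
    model: gemini-pro  # illustrative; use any model the provider supports
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false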
Install related dependencies using the following command:
pip install --upgrade 'embedchain[opensource]'
GPT4All is a free-to-use, locally running, privacy-aware chatbot that requires no GPU or internet connection. You can use it with Embedchain using the following code:
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
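For reference, a minimal config.yaml sketch for GPT4All might look like the following; the model filename is an example, and any locally available GPT4All model should work:

llm:
  provider: gpt4all
  config:
    model: orca-mini-3b-gguf2-q4_0.gguf  # example model file; substitute your own
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false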
Set up Google Cloud Platform application credentials by following the instructions on GCP. Once setup is done, use the following code to create an app using VertexAI as the provider:
from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
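A possible config.yaml sketch for the VertexAI provider, assuming access to a chat model on Vertex AI (the model name is an example):

llm:
  provider: vertexai
  config:
    model: chat-bison  # example; use the Vertex AI model you have access to
    temperature: 0.5
    top_p: 0.5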
os.environ["MISTRAL_API_KEY"] = "xxx"app = App.from_config(config_path="config.yaml")app.add("https://www.forbes.com/profile/elon-musk")response = app.query("what is the net worth of Elon Musk?")# As of January 16, 2024, Elon Musk's net worth is $225.4 billion.response = app.chat("which companies does elon own?")# Elon Musk owns Tesla, SpaceX, Boring Company, Twitter, and X.response = app.chat("what question did I ask you already?")# You have asked me several times already which companies Elon Musk owns, specifically Tesla, SpaceX, Boring Company, Twitter, and X.
The model arguments are different for each provider. Please refer to the AWS Bedrock Documentation to find the appropriate arguments for your model.
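As an illustration, a config.yaml for AWS Bedrock might pass model-specific options through a model_kwargs block; the model id and argument names below (including the camel-cased topP used by Titan models) are examples under that assumption, not a definitive list:

llm:
  provider: aws_bedrock
  config:
    model: amazon.titan-text-express-v1  # example model id
    model_kwargs:
      temperature: 0.5
      topP: 1  # Titan-style argument name; other models may differ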
If you can't find the specific LLM you need, no need to fret. We're continuously expanding our support for additional LLMs, and you can help us prioritize by opening an issue on our GitHub or simply reaching out to us on our Slack or Discord community.