# Installing LLM Engine
## Configuration

Configuration is done with an environment variables file. Copy `.env.example` to create your `.env` file.
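For example, from the repository root:

```sh
cp .env.example .env
```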
## Running locally

- Start by copying `.env.example` to `.env`.
- Install `mongodb` (Ubuntu 24 server).
- Run MongoDB with `mongod`.

  💡 Note: Mac users who have used Homebrew to install MongoDB should use the command `brew services start mongodb-community` to run the MongoDB service instead.

- Install `node.js` and set it to the version specified in the `package.json` file (consider using nvm).
- Install yarn.
- Install all dependencies with `yarn install`.
- Run `yarn run dev` to serve the API locally.
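As a rough end-to-end sketch on an Ubuntu 24 server, the steps above might translate to the commands below. This assumes nvm is already installed and that MongoDB comes from MongoDB's official `mongodb-org` package; substitute the Node.js version from `package.json` for the placeholder.

```sh
# Copy the example environment file and edit values as needed
cp .env.example .env

# Start MongoDB (on macOS with Homebrew: brew services start mongodb-community)
mongod &

# Install and select the Node.js version specified in package.json
nvm install <node-version>
nvm use <node-version>

# Install yarn, then the project dependencies
npm install --global yarn
yarn install

# Serve the API locally
yarn run dev
```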
## LLM Model selection
LLM Engine supports a range of LLM platforms.
### OpenAI
- Configure `DEFAULT_OPENAI_API_KEY` and `DEFAULT_OPENAI_BASE_URL` in your `.env` file.
- When creating a Conversation with an Agent, specify `llmPlatform` to be `openai` and `llmModel` to be an available OpenAI model.

Note that this will work for any OpenAI-compatible LLM provider.
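As an illustration, the relevant `.env` entries might look like the following (placeholder key; the base URL shown is OpenAI's public API endpoint):

```
DEFAULT_OPENAI_API_KEY=sk-...
DEFAULT_OPENAI_BASE_URL=https://api.openai.com/v1
```

The model selection fields on the Conversation would then be along these lines (`gpt-4o` is just one example of an available OpenAI model; the surrounding request shape depends on how you create the Conversation):

```json
{
  "llmPlatform": "openai",
  "llmModel": "gpt-4o"
}
```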
### AWS Bedrock (including Claude)
- Configure `BEDROCK_API_KEY` and `BEDROCK_BASE_URL` in your `.env` file.
- When creating a Conversation with an Agent, specify `llmPlatform` to be `bedrock` and `llmModel` to be an available Bedrock model.
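For example (placeholder values, since the key and endpoint depend on your Bedrock setup):

```
BEDROCK_API_KEY=...
BEDROCK_BASE_URL=...
```

And the Conversation fields, using one example of a Bedrock Claude model ID (any model available in your Bedrock account works):

```json
{
  "llmPlatform": "bedrock",
  "llmModel": "anthropic.claude-3-5-sonnet-20240620-v1:0"
}
```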
### Open Source Models via vLLM
Open source models are available through vLLM running locally or on one of two hosted serverless providers:
- Runpod - See detailed instructions for setup in our runpod guide.
- Modal - See detailed instructions for setup in our modal guide.
- Local vLLM - Follow their setup guide.
- Configure `VLLM_API_KEY` and `VLLM_BASE_URL` in your `.env` file.
- When creating a Conversation with an Agent, specify `llmPlatform` to be `vllm` and `llmModel` to be an available open source model supported by vLLM.
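For instance, a local vLLM server exposes an OpenAI-compatible endpoint on port 8000 by default, so the `.env` entries might be (placeholder key):

```
VLLM_API_KEY=...
VLLM_BASE_URL=http://localhost:8000/v1
```

With Conversation fields along these lines (the model name is just one example of an open source model that vLLM can serve):

```json
{
  "llmPlatform": "vllm",
  "llmModel": "meta-llama/Llama-3.1-8B-Instruct"
}
```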
### Open Source Models via Ollama
Open source models are also available through Ollama running locally.
- Install Ollama locally.
- Configure `OLLAMA_BASE_URL` in your `.env` file.
- When creating a Conversation with an Agent, specify `llmPlatform` to be `ollama` and `llmModel` to be an available open source model supported by Ollama.
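For example, with Ollama running locally on its default port (11434), pull a model first (`llama3.1` is just one example tag):

```sh
ollama pull llama3.1
```

Then point LLM Engine at it:

```
OLLAMA_BASE_URL=http://localhost:11434
```

```json
{
  "llmPlatform": "ollama",
  "llmModel": "llama3.1"
}
```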
## Optional: Retrieval Augmented Generation

If you would like to make use of Retrieval Augmented Generation (RAG), see our rag guide.
## Optional: Nextspace integration
If you would like to use LLM Engine with the Nextspace client, see our nextspace guide.
## Optional: Zoom integration
If you would like to use LLM Engine with Zoom, see our zoom guide.