Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.

Supported Models

| Model | Maximum Context Length | Maximum Output Length | Input Price | Output Price |
| --- | --- | --- | --- | --- |
| -- | 128K | -- | -- | -- |
| -- | 128K | -- | -- | -- |
| -- | 128K | -- | -- | -- |
| -- | 16K | -- | -- | -- |

Using Ollama in LobeChat

Ollama is a powerful framework for running large language models (LLMs) locally, supporting a variety of models including Llama 2, Mistral, and more. LobeChat integrates with Ollama, so you can easily use Ollama's locally hosted language models in your LobeChat conversations.

This document will guide you on how to use Ollama in LobeChat:

Using Ollama on macOS

Local Installation of Ollama

Download Ollama for macOS, unzip it, and install the application.
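
Alternatively, if you use Homebrew, you can install the Ollama CLI and server from the terminal instead. This is a sketch assuming the `ollama` formula; `brew services` runs the server in the background:

```bash
# Install the Ollama CLI and server via Homebrew (formula name assumed)
brew install ollama
# Run the Ollama server as a background service
brew services start ollama
```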

Configure Ollama for Cross-Origin Access

Because Ollama's default configuration restricts access to local requests only, the environment variable OLLAMA_ORIGINS must be set to allow cross-origin access. Use launchctl to set the environment variable:

```bash
launchctl setenv OLLAMA_ORIGINS "*"
```

After setting up, restart the Ollama application.
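
To confirm the restarted service is reachable, you can query Ollama's local HTTP API (assuming the default port 11434):

```bash
# Returns a small JSON payload with the running Ollama version
curl http://localhost:11434/api/version
```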

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Chat with llama3 in LobeChat

Using Ollama on Windows

Local Installation of Ollama

Download Ollama for Windows and install it.

Configure Ollama for Cross-Origin Access

Since Ollama's default configuration restricts access to local requests only, the environment variable OLLAMA_ORIGINS must be set to allow cross-origin access.

On Windows, Ollama inherits your user and system environment variables.

  1. First, quit the running Ollama program by clicking its icon in the Windows taskbar.
  2. Edit the system environment variables from the Control Panel.
  3. Edit or create the OLLAMA_ORIGINS environment variable for your user account, setting its value to * (a command-line alternative is shown after this list).
  4. Click OK/Apply to save, then restart the system.
  5. Run Ollama again.
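
If you prefer the command line, you can set the same user-level variable with setx (a sketch; setx only affects newly started processes, so quit and relaunch Ollama afterwards):

```powershell
# Persist OLLAMA_ORIGINS for the current user account
setx OLLAMA_ORIGINS "*"
```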

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Using Ollama on Linux

Local Installation of Ollama

Install using the following command:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Alternatively, you can refer to the Linux manual installation guide.

Configure Ollama for Cross-Origin Access

Because Ollama's default configuration restricts access to local requests only, the environment variables OLLAMA_ORIGINS (for cross-origin access) and OLLAMA_HOST (to listen on all interfaces) must be set. If Ollama runs as a systemd service, use systemctl to set the environment variables:

  1. Edit the systemd service by calling sudo systemctl edit ollama.service:

```bash
sudo systemctl edit ollama.service
```

  2. Add an Environment line under [Service] for each environment variable:

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
```

  3. Save and exit.
  4. Reload systemd and restart Ollama:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
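
To verify that the overrides took effect, you can inspect the service's environment and status (a quick check; systemctl show prints the Environment= entries from the unit):

```bash
# Show the environment variables applied to the ollama unit
systemctl show ollama --property=Environment
# Confirm the service restarted cleanly
sudo systemctl status ollama
```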

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Deploying Ollama using Docker

Pulling Ollama Image

If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:

```bash
docker pull ollama/ollama
```

Configure Ollama for Cross-Origin Access

Since Ollama's default configuration restricts access to local requests only, the environment variable OLLAMA_ORIGINS must be set to allow cross-origin access.

If Ollama runs as a Docker container, you can add the environment variable to the docker run command.

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
```
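
Note that the --gpus=all flag requires an NVIDIA GPU with the NVIDIA Container Toolkit installed; on a CPU-only machine you can simply drop it:

```bash
# CPU-only variant: same volume, port, and CORS settings, no GPU passthrough
docker run -d -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
```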

Conversing with Local Large Models in LobeChat

Now, you can start conversing with the local LLM in LobeChat.

Installing Ollama Models

Ollama supports a wide range of models; you can browse the Ollama Library and choose the model that best fits your needs.

Installation in LobeChat

In LobeChat, we have enabled some common large language models by default, such as llama3, Gemma, and Mistral. When you select one of these models for a conversation, LobeChat will prompt you to download it.

LobeChat guides you through installing an Ollama model

Once downloaded, you can start conversing.

Pulling Models to Local with Ollama

Alternatively, you can install models by executing the following command in the terminal, using llama3 as an example:

```bash
ollama pull llama3
```
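
Once the pull completes, you can confirm the model is installed and give it a quick test directly from the terminal:

```bash
# List locally installed models
ollama list
# Start an interactive chat with the model to verify it works
ollama run llama3
```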

Custom Configuration

You can find Ollama's configuration options in Settings -> Language Models, where you can configure Ollama's proxy address, custom model names, and other options.
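
For example, if Ollama runs on another machine on your network, you can point the proxy address at that host and check that it is reachable (192.168.0.100 is a hypothetical address; the remote Ollama must also be started with OLLAMA_HOST=0.0.0.0 so it listens on all interfaces):

```bash
# List the models available on the remote Ollama instance (hypothetical host)
curl http://192.168.0.100:11434/api/tags
```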

Ollama Provider Settings

Visit Integrating with Ollama to learn how to deploy LobeChat to meet integration needs with Ollama.

Related Providers

OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Its models deliver strong performance and cost-effectiveness and are widely used in research, business, and innovative applications.
Anthropic
Claude
@Anthropic
7 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
12 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
Google
Gemini
@Google
13 models
Google's Gemini series, developed by Google DeepMind, represents its most advanced and versatile AI models. Built for multimodality, they support seamless understanding and processing of text, code, images, audio, and video, and run in environments ranging from data centers to mobile devices, greatly expanding the efficiency and applicability of AI models.
DeepSeek
@DeepSeek
1 model
DeepSeek is a company focused on AI technology research and application, with its latest model DeepSeek-V2.5 integrating general dialogue and code processing capabilities, achieving significant improvements in human preference alignment, writing tasks, and instruction following.