docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-w
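A typical complete invocation of this image, sketched from the upstream Open WebUI docs (image tag, volume name, and container name follow their suggested defaults; verify against the current README before use):

```shell
# Run Open WebUI detached on port 3000, persisting app data in a named volume.
# --add-host lets the container reach services on the Docker host (e.g. a
# local Ollama server) via host.docker.internal.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

After the container starts, the UI is served at http://localhost:3000.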
Open WebUI Features
- 🖥️ Intuitive interface: Our chat interface draws inspiration from ChatGPT to ensure a user-friendly experience.
- 📱 Responsive design: Enjoy a seamless experience on both desktop and mobile devices.
- ⚡ Swift responsiveness: Enjoy fast and responsive performance.
- 🚀 Effortless setup: Install seamlessly using Docker or Kubernetes (kubectl, kustomize, or helm) for a hassle-free experience.
- 💻 Code syntax highlighting: Enjoy enhanced code readability with our syntax highlighting feature.
- ✒️🔢 Full Markdown and LaTeX support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.
- 📚 Local RAG integration: Dive into the future of chat interactions with groundbreaking retrieval-augmented generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience: load documents directly into the chat or add files to your document library, then access them effortlessly using the # command in the prompt. This feature is in alpha, so occasional issues may occur while we actively refine it to ensure optimal performance and reliability.
- 🌐 Web browsing capability: Seamlessly integrate websites into your chat experience using the # command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.
- 📜 Prompt preset support: Instantly access preset prompts using the / command in the chat input. Effortlessly load predefined conversation starters and speed up your interactions. Easily import prompts through the Open WebUI Community integration.
- 👍👎 RLHF annotation: Empower your messages by rating them thumbs up or thumbs down, facilitating the creation of datasets for reinforcement learning from human feedback (RLHF). Utilize your messages to train or fine-tune models, all while ensuring the confidentiality of locally saved data.
- 🏷️ Conversation tagging: Effortlessly categorize and locate specific chats for quick reference and streamlined data collection.
- 📥🗑️ Download/delete models: Easily download or remove models directly from the web UI.
- ⬆️ GGUF file model creation: Effortlessly create Ollama models by uploading GGUF files directly from the web UI. The streamlined process lets you upload GGUF files from your machine or download them from Hugging Face.
- 🤖 Multiple model support: Seamlessly switch between different chat models for diverse interactions.
- 🔄 Multimodal support: Seamlessly engage with models that support multimodal interactions, including images (e.g., LLaVA).
- 🧩 Modelfile builder: Easily create Ollama modelfiles via the web UI. Create and add characters/agents, customize chat elements, and import modelfiles through the Open WebUI Community integration.
- ⚙️ Many-models conversations: Effortlessly engage with multiple models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.
- 💬 Collaborative chat: Harness the collective intelligence of multiple models by seamlessly orchestrating group conversations. Use the @ command to specify a model, enabling dynamic and diverse dialogues within your chat interface.
- 🔄 Regeneration history access: Easily revisit and explore your entire regeneration history.
- 📜 Chat history: Effortlessly access and manage your conversation history.
- 📤📥 Import/export chat history: Seamlessly move your chat data into and out of the platform.
- 🗣️ Voice input support: Engage with your model through voice interactions and enjoy the convenience of talking to it directly. You can also have voice input sent automatically after 3 seconds of silence for a streamlined experience.
- ⚙️ Fine-tuned control with advanced parameters: Gain deeper control by adjusting parameters such as temperature and defining system prompts to tailor conversations to your specific preferences and needs.
- 🎨🤖 Image generation integration: Seamlessly incorporate image generation using the AUTOMATIC1111 API (local) and DALL-E, enriching your chat experience with dynamic visual content.
- 🤝 OpenAI API integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the API base URL to link with LMStudio, Mistral, OpenRouter, and more.
- ✨ Multiple OpenAI-compatible API support: Seamlessly integrate and customize various OpenAI-compatible APIs, enhancing the versatility of your chat interactions.
- 🔗 External Ollama server connection: Seamlessly link to an external Ollama server hosted at a different address by configuring an environment variable.
- 🔀 Multiple Ollama instance load balancing: Effortlessly distribute chat requests across multiple Ollama instances for enhanced performance and reliability.
- 👥 Multi-user management: Easily oversee and administer users through our intuitive admin panel, streamlining the user management process.
- 🔐 Role-based access control (RBAC): Ensure secure access with restricted permissions; only authorized individuals can access your Ollama instance, and model creation/pulling rights are reserved exclusively for administrators.
- 🔒 Backend reverse proxy support: Bolster security through direct communication between the Open WebUI backend and Ollama. This key feature eliminates the need to expose Ollama over the LAN: requests made from the web UI to the "/ollama/api" route are seamlessly redirected to Ollama by the backend, enhancing overall system security.
- 🌐🌍 Multilingual support: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding the languages we support; we are actively seeking contributors!
- 🌟 Continuous updates: We are committed to improving Open WebUI through regular updates and new features.
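As a sketch of the external Ollama connection and reverse-proxy features above: the environment variable name is taken from the Open WebUI docs (commonly `OLLAMA_BASE_URL`; older releases used `OLLAMA_API_BASE_URL`, so check your version), and `example-ollama-host` is a hypothetical placeholder for your own server address.

```shell
# Point Open WebUI at an Ollama server hosted at a different address by
# overriding the base URL (variable name per the Open WebUI docs; adjust
# for your release).
docker run -d \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://example-ollama-host:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

# The backend then proxies Ollama: requests to the "/ollama/api" route are
# forwarded to the configured server, so Ollama itself never needs to be
# exposed on the LAN. For example, listing local models through the proxy
# (authentication may be required depending on your configuration):
curl http://localhost:3000/ollama/api/tags
```

The same mechanism underlies load balancing: pointing the backend at multiple Ollama instances lets it distribute chat requests among them.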