# How to use Dockerized AnythingLLM

Use the Dockerized version of AnythingLLM for a much faster and more complete startup of AnythingLLM.

### Minimum Requirements

> [!TIP]
> Running AnythingLLM on AWS/GCP/Azure?
> You should aim for at least 2GB of RAM. Disk storage is proportional to however much data
> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.

- `docker` installed on your machine
- `yarn` and `node` on your machine
- access to an LLM running locally or remotely

\*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)

\*AnythingLLM by default embeds text on-instance privately. [Learn More](../server/storage/models/README.md)
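A quick way to confirm those prerequisites are installed (exact versions will vary; any recent release should work):

```shell
# Verify the prerequisites are on your PATH
docker --version
node --version
yarn --version
```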
## Recommended way to run dockerized AnythingLLM!

> [!IMPORTANT]
> If you are running another service on localhost, like Chroma, LocalAI, or LMStudio,
> you will need to use http://host.docker.internal:xxxx to access the service from within
> the docker container running AnythingLLM, as `localhost:xxxx` will not resolve to the host system.
>
> **Requires** Docker v18.03+ on Win/Mac and 20.10+ on Linux/Ubuntu for host.docker.internal to resolve!
>
> _Linux_: add `--add-host=host.docker.internal:host-gateway` to the docker run command for this to resolve.
>
> e.g.: a Chroma host URL running on localhost:8000 on the host machine needs to be http://host.docker.internal:8000
> when used in AnythingLLM.
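On Linux that flag slots into the run command like this; a trimmed sketch (the storage mounts from the full commands below are omitted) using the Chroma-on-port-8000 example:

```shell
# Linux only: make host.docker.internal resolve to the host gateway
docker run -d -p 3001:3001 \
  --add-host=host.docker.internal:host-gateway \
  mintplexlabs/anythingllm

# Then, inside AnythingLLM, use http://host.docker.internal:8000 as the Chroma URL
```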
> [!TIP]
> It is best to mount the container's storage volume to a folder on your host machine
> so that you can pull in future updates without deleting your existing data!

Pull in the latest image from Docker Hub. It supports both `amd64` and `arm64` CPU architectures.

```shell
docker pull mintplexlabs/anythingllm
```
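To confirm which architecture was pulled for your machine, you can ask Docker directly:

```shell
# Prints amd64 or arm64 depending on the variant Docker selected
docker image inspect mintplexlabs/anythingllm --format '{{.Architecture}}'
```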
<table>
<tr>
<th colspan="2">Mount the storage locally and run AnythingLLM in Docker</th>
</tr>
<tr>
<td>
Linux/macOS
</td>
<td>

```shell
export STORAGE_LOCATION=$HOME/anythingllm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
--cap-add SYS_ADMIN \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm
```

</td>
</tr>
<tr>
<td>
Windows
</td>
<td>

```powershell
# Run this in a PowerShell terminal
$env:STORAGE_LOCATION="$HOME\Documents\anythingllm"; `
If(!(Test-Path $env:STORAGE_LOCATION)) {New-Item $env:STORAGE_LOCATION -ItemType Directory}; `
If(!(Test-Path "$env:STORAGE_LOCATION\.env")) {New-Item "$env:STORAGE_LOCATION\.env" -ItemType File}; `
docker run -d -p 3001:3001 `
--cap-add SYS_ADMIN `
-v "$env:STORAGE_LOCATION`:/app/server/storage" `
-v "$env:STORAGE_LOCATION\.env:/app/server/.env" `
-e STORAGE_DIR="/app/server/storage" `
mintplexlabs/anythingllm;
```

</td>
</tr>
<tr>
<td>Docker Compose</td>
<td>

```yaml
version: '3.8'
services:
  anythingllm:
    image: mintplexlabs/anythingllm
    container_name: anythingllm
    ports:
      - "3001:3001"
    cap_add:
      - SYS_ADMIN
    environment:
      # Adjust for your environment
      - STORAGE_DIR=/app/server/storage
      - JWT_SECRET="make this a large list of random numbers and letters 20+"
      - LLM_PROVIDER=ollama
      - OLLAMA_BASE_PATH=http://127.0.0.1:11434
      - OLLAMA_MODEL_PREF=llama2
      - OLLAMA_MODEL_TOKEN_LIMIT=4096
      - EMBEDDING_ENGINE=ollama
      - EMBEDDING_BASE_PATH=http://127.0.0.1:11434
      - EMBEDDING_MODEL_PREF=nomic-embed-text:latest
      - EMBEDDING_MODEL_MAX_CHUNK_LENGTH=8192
      - VECTOR_DB=lancedb
      - WHISPER_PROVIDER=local
      - TTS_PROVIDER=native
      - PASSWORDMINCHAR=8
      # Add any other keys here for services or settings
      # you can find in the docker/.env.example file
    volumes:
      - anythingllm_storage:/app/server/storage
    restart: always

volumes:
  anythingllm_storage:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /path/on/local/disk
```

</td>
</tr>
</table>
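To use the compose example, save the YAML above as `docker-compose.yml`, point `device:` at a real folder on your host, and bring the stack up. A minimal sketch (the storage path is a placeholder you must change):

```shell
# The bind-mounted folder must exist before the volume is created
mkdir -p /path/on/local/disk

# From the folder containing docker-compose.yml
docker compose up -d
```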
Go to `http://localhost:3001` and you are now using AnythingLLM! All your data and progress will persist between
container rebuilds or pulls from Docker Hub.
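Before opening the UI, you can confirm the container came up cleanly; a quick sanity check using plain Docker commands (the container ID will differ on your machine):

```shell
# Confirm the container is running and port 3001 is mapped
docker ps --filter "ancestor=mintplexlabs/anythingllm"

# Tail the startup logs; replace <container-id> with the ID from `docker ps`
docker logs -f <container-id>
```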
## How to use the user interface

- To access the full application, visit `http://localhost:3001` in your browser.
## About UID and GID in the ENV

- The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user's UID and GID and what is set in the `.env` file, you may experience permission issues.
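If you do hit permission issues on the mounted storage folder, compare your host IDs against the defaults. A minimal sketch, assuming `UID`/`GID` keys as found in `docker/.env.example` and the `$STORAGE_LOCATION` variable from the run commands above:

```shell
# Print your host user's UID and GID (the container defaults to 1000/1000)
id -u
id -g

# If they differ from 1000, mirror them in the mounted .env file
# (assumes UID/GID keys as in docker/.env.example)
echo "UID='$(id -u)'" >> "$STORAGE_LOCATION/.env"
echo "GID='$(id -g)'" >> "$STORAGE_LOCATION/.env"
```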
## Build locally from source _not recommended for casual use_

- `git clone` this repo and `cd anything-llm` to get to the root directory.
- `touch server/storage/anythingllm.db` to create an empty SQLite DB file.
- `cd docker/`
- `cp .env.example .env` **you must do this before building**
- `docker-compose up -d --build` to build the image - this will take a few moments.

Your docker host will show the image as online once the build process is completed. This will serve the app at `http://localhost:3001`.
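Taken end to end, the steps above look like this (assuming the upstream repository URL; substitute your fork if needed):

```shell
git clone https://github.com/Mintplex-Labs/anything-llm.git
cd anything-llm
touch server/storage/anythingllm.db   # create the empty SQLite DB file
cd docker/
cp .env.example .env                  # you must do this before building
docker-compose up -d --build          # takes a few moments
```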
## Integrations and one-click setups

The integrations below are templates or tooling built by the community to make running the docker experience of AnythingLLM easier.

### Use the Midori AI Subsystem to Manage AnythingLLM

Follow the setup found on the [Midori AI Subsystem Site](https://io.midori-ai.xyz/subsystem/manager/) for your host OS.
After setting that up, install the AnythingLLM docker backend to the Midori AI Subsystem.
Once that is done, you are all set!
## Common questions and fixes

### Cannot connect to service running on localhost!

If you are in docker and cannot connect to a service running on your host machine that is bound to a local interface or loopback address:

- `localhost`
- `127.0.0.1`
- `0.0.0.0`

> [!IMPORTANT]
> On Linux, `http://host.docker.internal:xxxx` does not work without the `--add-host` flag shown above.
> Use `http://172.17.0.1:xxxx` instead to emulate this functionality.

Then in docker you need to replace that localhost part with `host.docker.internal`. For example, if running Ollama on the host machine, bound to http://127.0.0.1:11434, you should put `http://host.docker.internal:11434` into the connection URL in AnythingLLM.
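To check whether the container can actually reach the host-side service, you can probe it from inside the container. A minimal sketch, assuming your container is named `anythingllm` (use `docker ps` to find yours) and that the tools below exist in the image:

```shell
# Check that host.docker.internal resolves inside the container
docker exec anythingllm getent hosts host.docker.internal

# Probe the host-side service (here: Ollama on port 11434), if curl is present in the image
docker exec anythingllm curl -s http://host.docker.internal:11434
```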
### API is not working, cannot login, LLM is "offline"?

You are likely running the docker container on a remote machine like EC2 or some other instance where the reachable URL
is not `http://localhost:3001` and is instead something like `http://193.xx.xx.xx:3001` - in this case all you need to do is add the following to your `frontend/.env.production` before running `docker-compose up -d --build`:

```
# frontend/.env.production
GENERATE_SOURCEMAP=false
VITE_API_BASE="http://<YOUR_REACHABLE_IP_ADDRESS>:3001/api"
```

For example, if the docker instance is available on `192.168.1.222`, your `VITE_API_BASE` would look like `VITE_API_BASE="http://192.168.1.222:3001/api"` in `frontend/.env.production`.
### Having issues with Ollama?

If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434` then visit the README below.

[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)

### Still not working?

[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)