Foreword: With the rapid development of AI, major vendors have released many models, which raises the question: can we run these models locally? The answer is yes! In this article, I will show you how to use Ollama to set up an environment for running large models locally in about three minutes. Ollama supports mainstream models such as Llama 3, Phi-3, Mistral, and Gemma, and it runs on all major operating systems. Whether you are on macOS, Linux, or Windows, you can experience and learn even without a powerful GPU, using only the CPU.
This article walks you through running your own large model with Ollama, and sketches a plan for monetizing it by re-exposing the model through an external API. I hope it gives you some inspiration and a starting point; for a deeper understanding, please refer to the corresponding official documentation. This article is the blogger's original work and was not easy to create; please credit the source when reprinting.
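To make the idea concrete before we begin, here is a minimal sketch of how a locally running Ollama instance can be queried over its built-in REST API, which listens on port 11434 by default. It assumes Ollama is already installed and serving, and that the llama3 model has been pulled; the model name and prompt are placeholders you can swap out.

```python
import json
import urllib.request

# Ollama exposes a local REST API on port 11434 by default.
# This sketch assumes `ollama pull llama3` has already been run.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Why is the sky blue?"))
```

Only the standard library is used here, so the snippet runs without installing extra packages. An externally resold API would, in essence, wrap calls like this behind your own authentication and billing layer.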
[...]