In this article, I’ll show you how to use the Modelfile in Ollama to change how an existing LLM (Llama2) behaves when you interact with it. I’ll also show you how to save your newly customized model to your personal namespace on the Ollama server.
I know it can get a bit confusing with all the different “llamas” flying around. Just remember: Ollama is the tool that lets you download and locally run many different LLMs, while Llama2 is a specific LLM created by Meta, the parent company of Facebook. Apart from this relationship, they aren’t connected in any other way.
If you’ve never heard of Ollama before, I recommend that you check out my article below, where I go into depth on what Ollama is and how to install it on your system.
What is a modelfile?
In Ollama, a modelfile is a configuration file that defines the blueprint for creating and sharing models with Ollama.
The modelfile contains information such as:
- Base Model Reference. All modelfiles must have a model that they use as the basis for any new model.
- Parameters. These specify things such as the temperature, top_k and top_p that should be applied to the new model. We’ll talk more about these later on.
- Template. This specifies the final prompt that will be passed to the LLM.
- System. We can use this command to determine how the system behaves overall.
There are other properties the modelfile can make use of, but we’ll only be using the ones above. There’s a link to the Ollama documentation at the end of the article if you want to find out more about this.
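To make the four properties above concrete, here is a minimal sketch of what a modelfile can look like. The base model name, the parameter values, and the template text are all illustrative choices, not the exact modelfile that ships with Llama2:

```
# Base model reference: every modelfile starts FROM an existing model
FROM llama2

# Sampling parameters applied to the new model (illustrative values)
PARAMETER temperature 0.8
PARAMETER top_k 40
PARAMETER top_p 0.9

# Template that builds the final prompt handed to the LLM
TEMPLATE """{{ .System }}

User: {{ .Prompt }}"""

# System message that shapes the model's overall behaviour
SYSTEM You are a concise assistant that answers in plain English.
```

The `TEMPLATE` block uses Go-template placeholders such as `{{ .System }}` and `{{ .Prompt }}`, which Ollama fills in with the system message and the user’s input at request time.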
The base model
The first thing we need to do is identify an existing model so we can examine its properties and make the changes we want to it. For that, I’m going…
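Assuming you already have Ollama installed and running, you can pull the base model and print the modelfile Ollama holds for it straight from the command line:

```shell
# Download the base model if it isn't already available locally
ollama pull llama2

# Print the existing model's modelfile, including its
# FROM, TEMPLATE, PARAMETER and SYSTEM entries
ollama show llama2 --modelfile
```

The output of `ollama show --modelfile` is a useful starting point: you can copy it into your own file and edit only the parts you want to change.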