Since the release of Mixtral-8x7B by Mistral AI, there has been renewed interest in mixture of experts (MoE) models. This architecture exploits expert sub-networks, only a few of which are selected and activated by a router network during inference.
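To make the idea concrete, here is a minimal, self-contained sketch of such a layer in PyTorch. The hidden size, number of experts, and top-k value are made up for illustration; this is not Mixtral's actual implementation, only a toy version of the routing mechanism:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    # Illustrative sparse MoE layer: a linear router selects the top-k experts for each token.
    def __init__(self, hidden_size=64, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward sub-network.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.GELU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        ])
        # The router scores every expert for every token.
        self.router = nn.Linear(hidden_size, num_experts)

    def forward(self, x):
        # x: (num_tokens, hidden_size)
        scores = self.router(x)                                    # (num_tokens, num_experts)
        weights, indices = torch.topk(scores, self.top_k, dim=-1)  # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts are actually run for each token.
        for token in range(x.size(0)):
            for k in range(self.top_k):
                expert = self.experts[int(indices[token, k])]
                out[token] += weights[token, k] * expert(x[token])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(3, 64)).shape)  # torch.Size([3, 64])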
MoEs are so simple and flexible that it is easy to make a custom MoE. On the Hugging Face Hub, we can now find several trending LLMs that are custom MoEs, such as mlabonne/phixtral-4x2_8.
However, most of them are not traditional MoEs made from scratch; they simply use a combination of already fine-tuned LLMs as experts. Their creation was made easy with mergekit (LGPL-3.0 license). For instance, the Phixtral LLMs were made with mergekit by combining several Phi-2 models.
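As a rough illustration of what this kind of merge involves, here is a hypothetical sketch: a YAML configuration listing a base model and the fine-tuned models to use as experts, written to disk and passed to mergekit's mergekit-moe script. All model names, prompts, and settings below are placeholder assumptions for illustration, not the actual Phixtral recipe, and this assumes a mergekit installation that provides the mergekit-moe entry point:

import subprocess

# Hypothetical mergekit MoE configuration: a base model that provides the shared
# layers, plus fine-tuned models used as experts. Model names and prompts are placeholders.
moe_config = """\
base_model: mistralai/Mistral-7B-Instruct-v0.2
gate_mode: cheap_embed   # how the router weights are initialized
dtype: float16
experts:
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts:
      - "general chat and instructions"
  - source_model: HuggingFaceH4/zephyr-7b-beta
    positive_prompts:
      - "answering knowledge and reasoning questions"
"""

with open("moe_config.yaml", "w") as f:
    f.write(moe_config)

# mergekit-moe reads the configuration and writes the merged MoE to the output directory.
subprocess.run(["mergekit-moe", "moe_config.yaml", "my-custom-moe"], check=True)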
In this article, we will see how Phixtral was created. We will then apply the same process to create our own mixture of experts, Maixtchup, using several Mistral 7B models.
To quickly understand the high-level architecture of a model, I like to print it. For instance, for mlabonne/phixtral-4x2_8 (MIT license):
from transformers import AutoModelForCausalLM
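# Load the model quantized to 4-bit; trust_remote_code is required because Phixtral uses custom modeling code from the Hub.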
model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/phixtral-4x2_8",
    torch_dtype="auto",
    load_in_4bit=True,
    trust_remote_code=True
)
print(model)
It prints:
PhiForCausalLM(
  (transformer): PhiModel(
    (embd): Embedding(
      (wte): Embedding(51200, 2560)
      (drop): Dropout(p=0.0, inplace=False)
    )
    (h): ModuleList(
      (0-31): 32 x ParallelBlock(
        (ln): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
        (resid_dropout): Dropout(p=0.1, inplace=False)
        (mixer): MHA(
          (rotary_emb): RotaryEmbedding()
          (Wqkv): Linear4bit(in_features=2560, out_features=7680, bias=True)
          (out_proj): Linear4bit(in_features=2560, out_features=2560, bias=True)
          (inner_attn): SelfAttention(
            (drop): Dropout(p=0.0, inplace=False)
          )
          (inner_cross_attn): CrossAttention(
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (moe): MoE(
          (mlp): ModuleList(…
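Each of the 32 transformer blocks thus ends with an MoE module wrapping a list of MLP experts (along with the router that dispatches tokens among them). To check how many experts the model contains and how many are activated per token, we can also print its configuration; a quick sketch (the exact MoE-related field names depend on Phixtral's custom configuration class):

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "mlabonne/phixtral-4x2_8",
    trust_remote_code=True
)
# The MoE-related fields, such as the number of experts and the number of
# experts activated per token, appear in the printed configuration.
print(config)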