llama : support LiquidAI LFM2-MoE hybrid model (#16464)
author    Tarek Dakhran <redacted>
          Tue, 7 Oct 2025 18:03:35 +0000 (20:03 +0200)
committer GitHub <redacted>
          Tue, 7 Oct 2025 18:03:35 +0000 (20:03 +0200)
commit    aeaf8a36f06b5810f5ae4bbefe26edb33925cf5e
tree      95f0abfa91e21ec578bba8c9f38b17c4e46ad8c7
parent    df1b612e29ba97a2e67db339b1e8c7465702b7e8

* llama : support LiquidAI LFM2-MoE hybrid model

Add support for the [LiquidAI/LFM2-8B-A1B](https://huggingface.co/LiquidAI/LFM2-8B-A1B) model.
For more information about the model, please read [the blog post](https://www.liquid.ai/company/news).

[HF PR](https://github.com/huggingface/transformers/pull/41401)
[GGUFs](https://huggingface.co/LiquidAI/LFM2-8B-A1B-GGUF)

* Do not use defaultdict

* Address PR feedback
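
The "do not use defaultdict" point can be illustrated with a minimal, hypothetical sketch (the names and grouping logic here are illustrative, not the actual conversion code): wherever tensors are grouped per layer, a plain `dict` with `setdefault` does the same job as `collections.defaultdict` without silently creating entries on lookup.

```python
# Hypothetical sketch: group expert tensor names by layer index using a plain
# dict instead of collections.defaultdict. Tensor names follow the GGUF-style
# pattern "blk.<layer>.<tensor>"; the helper name is an assumption.
def group_expert_tensors(names):
    groups = {}
    for name in names:
        layer = name.split(".")[1]          # layer index as a string
        groups.setdefault(layer, []).append(name)  # no defaultdict needed
    return groups


print(group_expert_tensors([
    "blk.0.ffn_gate_exps.weight",
    "blk.0.ffn_up_exps.weight",
    "blk.1.ffn_gate_exps.weight",
]))
```

Unlike `defaultdict`, a plain `dict` raises `KeyError` on an unexpected key read, which makes typos in tensor names fail loudly during conversion.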
convert_hf_to_gguf.py
gguf-py/gguf/constants.py
gguf-py/gguf/tensor_mapping.py
src/llama-arch.cpp
src/llama-arch.h
src/llama-model.cpp
src/llama-model.h