model : add JAIS-2 architecture support (#19488)
author    3 a l i <redacted>
          Thu, 19 Feb 2026 12:30:17 +0000 (16:30 +0400)
committer GitHub <redacted>
          Thu, 19 Feb 2026 12:30:17 +0000 (13:30 +0100)
commit    2bf318fd2f690f12ba0ee87ac63157f5b9300886
tree      7ddb2ab36bf4725cf8465e0adc6228c3c1c9fcc5
parent    c78e682245f856ab5cfc2ffc0f8c20e8e12f163f
model : add JAIS-2 architecture support (#19488)

* model: add JAIS-2 architecture support

Add support for the JAIS-2 family of Arabic-English bilingual models
from Inception AI (https://huggingface.co/inceptionai/Jais-2-8B-Chat).

Architecture characteristics:
- LayerNorm (not RMSNorm) with biases
- ReLU² (ReLU squared) activation function
- Separate Q/K/V projections with biases
- Simple MLP without gate projection (up -> act -> down)
- RoPE positional embeddings
- GPT-2 BPE tokenizer
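The feed-forward characteristics above (ReLU² activation, no gate projection) can be sketched as follows. This is an illustrative NumPy sketch of the general pattern, not the llama.cpp graph code; the function names are made up for illustration.

```python
import numpy as np

def relu_squared(x):
    # ReLU^2: square of the ReLU output, the activation JAIS-2 uses
    # in place of the more common SiLU/GELU.
    return np.square(np.maximum(x, 0.0))

def jais2_mlp(x, w_up, b_up, w_down, b_down):
    # Simple two-matrix MLP without a gate projection:
    # up-project -> activation -> down-project (biases included,
    # matching the "with biases" note above).
    return relu_squared(x @ w_up + b_up) @ w_down + b_down
```

Contrast with LLaMA-style FFNs, which compute `act(x @ w_gate) * (x @ w_up)` before the down projection; JAIS-2's block has no gate tensor at all.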

Supported model sizes:
- Jais-2-8B (32 layers, 26 heads, 3328 hidden)
- Jais-2-70B (68 layers, 56 heads, 7168 hidden)

Tested with quantizations: BF16, Q8_0, Q6_K, Q5_K_M, Q5_0, Q4_K_M, Q4_0, Q3_K_M, Q2_K

Note: JAIS-2 requires F32 precision accumulators for numerical stability
and uses standard attention (not flash attention) on CUDA backends.

* fix: run convert_hf_to_gguf_update.py for jais-2 tokenizer hash

* fix: use NEOX RoPE type for JAIS2

* fix: remove Q/K permutation (NEOX RoPE doesn't need it)
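For context on the two fixes above: NEOX-style RoPE rotates dimension pairs (i, i + d/2) rather than adjacent pairs (2i, 2i+1), which is why no Q/K permutation is needed before applying it. A minimal NumPy sketch of the NEOX rotation (illustrative only, not the ggml implementation):

```python
import numpy as np

def rope_neox(x, pos, base=10000.0):
    # NEOX-style RoPE: the vector is split in halves and pair
    # (x[i], x[i + d/2]) is rotated by pos * base^(-i / (d/2)).
    # Because pairing is by halves, no interleaving permutation
    # of Q/K is required beforehand.
    d = x.shape[-1]
    half = d // 2
    inv_freq = base ** (-np.arange(half) / half)
    theta = pos * inv_freq
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate(
        [x1 * np.cos(theta) - x2 * np.sin(theta),
         x1 * np.sin(theta) + x2 * np.cos(theta)], axis=-1)
```

At position 0 the rotation is the identity, and every rotation preserves the norm of each pair.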

* fix: enable flash attention for JAIS2 (fixed by #19115)

* fix: add dedicated JAIS2 pre-tokenizer type and control vector support

- Add LLAMA_VOCAB_PRE_TYPE_JAIS2 with cascading whitespace regex
- Include original regex from tokenizer.json as comment
- Add build_cvec call for control vector support
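To illustrate what a "cascading whitespace" rule in a GPT-2-style pre-tokenizer does: runs of spaces are split so that all but the last space form their own token, and the final space attaches to the following word. The pattern below is a simplified ASCII-only stand-in, NOT the actual JAIS2 regex (which lives in tokenizer.json and src/llama-vocab.cpp):

```python
import re

# Hypothetical simplified GPT-2-style pre-tokenizer pattern.
# The `\s+(?!\S)` alternative is the cascading whitespace rule:
# it greedily eats whitespace but backs off one character when a
# non-space follows, leaving that space to join the next word.
PAT = re.compile(r" ?[A-Za-z]+| ?[0-9]+| ?[^\sA-Za-z0-9]+|\s+(?!\S)|\s+")

def pre_tokenize(text):
    # Split text into pre-tokens before BPE merges are applied.
    return PAT.findall(text)
```

For example, a double space before a word splits into a lone space plus a space-prefixed word, rather than one two-space token.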

* fix: remove set_vocab override (no longer necessary)

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
12 files changed:
convert_hf_to_gguf.py
convert_hf_to_gguf_update.py
gguf-py/gguf/constants.py
src/CMakeLists.txt
src/llama-arch.cpp
src/llama-arch.h
src/llama-graph.cpp
src/llama-model.cpp
src/llama-vocab.cpp
src/llama-vocab.h
src/models/jais2.cpp [new file with mode: 0644]
src/models/models.h