Note: JAIS-2 requires F32 precision accumulators for numerical stability. Standard attention was initially used instead of flash attention on CUDA backends; flash attention has since been enabled (see the fixes below).
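
A minimal sketch of how F32 accumulation is requested for an attention-score tensor in a ggml graph; the helper name and tensor shapes are illustrative, while `ggml_mul_mat_set_prec` and `GGML_PREC_F32` are the relevant ggml API:

```cpp
#include "ggml.h"

// Sketch only: build the K*Q attention-score tensor for one layer and
// request F32 accumulation, which JAIS-2 needs for numerical stability.
// (build_kq_f32 is a hypothetical helper, not llama.cpp's actual code.)
static struct ggml_tensor * build_kq_f32(
        struct ggml_context * ctx,
        struct ggml_tensor  * k,    // keys,    e.g. [n_embd_head, n_kv,     n_head]
        struct ggml_tensor  * q) {  // queries, e.g. [n_embd_head, n_tokens, n_head]
    struct ggml_tensor * kq = ggml_mul_mat(ctx, k, q);

    // without this, some backends accumulate the scores in F16,
    // which is not stable enough for this model
    ggml_mul_mat_set_prec(kq, GGML_PREC_F32);

    return kq;
}
```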
* fix: run convert_hf_to_gguf_update.py to register the jais-2 tokenizer hash (see the pre-tokenizer selection sketch after this list)
* fix: use the NEOX RoPE type for JAIS2 (see the RoPE sketch after this list)
* fix: remove the Q/K permutation (NEOX RoPE doesn't need it)
* fix: enable flash attention for JAIS2 (the blocking issue was fixed by #19115)
* fix: add a dedicated JAIS2 pre-tokenizer type and control vector support
- Add LLAMA_VOCAB_PRE_TYPE_JAIS2 with a cascading whitespace regex (see the regex sketch below)
- Include the original regex from tokenizer.json as a comment
- Add a build_cvec call for control vector support (see the control vector sketch below)
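
For context on the tokenizer hash step: convert_hf_to_gguf_update.py records the tokenizer's hash so that the converter writes "jais-2" into the tokenizer.ggml.pre metadata, which the loader then matches against the new enum value. A sketch of that mapping, with the enum subset and function name hypothetical:

```cpp
#include <string>

// Hypothetical subset of the pre-tokenizer enum; in llama.cpp the real
// values live in the vocab header and are assigned sequentially.
enum llama_vocab_pre_type {
    LLAMA_VOCAB_PRE_TYPE_DEFAULT = 0,
    LLAMA_VOCAB_PRE_TYPE_JAIS2   = 1, // placeholder id
};

// Sketch: map the tokenizer.ggml.pre string written by the converter
// to the dedicated JAIS2 pre-tokenizer type.
static llama_vocab_pre_type vocab_pre_from_name(const std::string & tokenizer_pre) {
    if (tokenizer_pre == "jais-2") {
        return LLAMA_VOCAB_PRE_TYPE_JAIS2;
    }
    return LLAMA_VOCAB_PRE_TYPE_DEFAULT;
}
```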
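
On dropping the Q/K permutation: the NEOX RoPE variant rotates the two halves of each head dimension rather than interleaved element pairs, so weights in the original HF layout work directly. A sketch of applying it through ggml; the hyperparameter values are illustrative, not JAIS-2's actual configuration:

```cpp
#include "ggml.h"

// Sketch: apply NEOX-style RoPE to the query tensor. With this mode there
// is no conversion-time permutation of the Q/K weights.
// (All numeric parameters below are illustrative.)
static struct ggml_tensor * rope_q_neox(
        struct ggml_context * ctx,
        struct ggml_tensor  * q,            // [n_embd_head, n_head, n_tokens]
        struct ggml_tensor  * positions) {  // [n_tokens], GGML_TYPE_I32
    const int   n_rot      = 128;      // rotated dims per head
    const int   n_ctx_orig = 4096;     // original training context
    const float freq_base  = 10000.0f;

    return ggml_rope_ext(ctx, q, positions, /*freq_factors=*/NULL,
            n_rot, GGML_ROPE_TYPE_NEOX, n_ctx_orig,
            freq_base, /*freq_scale=*/1.0f,
            /*ext_factor=*/0.0f, /*attn_factor=*/1.0f,
            /*beta_fast=*/32.0f, /*beta_slow=*/1.0f);
}
```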
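
A sketch of how a dedicated pre-tokenizer type supplies its own BPE split regexes. The pattern below is GPT-2's well-known expression used purely as a stand-in; the actual JAIS-2 cascading whitespace regex comes from the model's tokenizer.json and is not reproduced here:

```cpp
#include <string>
#include <vector>

// Sketch: each pre-tokenizer type selects its own split regexes.
// Stand-in pattern only (GPT-2's); the real JAIS-2 cascading whitespace
// regex is taken from tokenizer.json, and the PR keeps the original
// expression next to the C++ translation as a comment.
static std::vector<std::string> regex_exprs_jais2() {
    return {
        // original regex from tokenizer.json would be quoted here as a comment
        "'s|'t|'re|'ve|'m|'ll|'d| ?\\p{L}+| ?\\p{N}+| ?[^\\s\\p{L}\\p{N}]+|\\s+(?!\\S)|\\s+",
    };
}
```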
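
Finally, a sketch of what the build_cvec call amounts to: after a layer's output is computed, a loaded per-layer control (steering) vector is added to the hidden state. The names and signature here are illustrative, not llama.cpp's exact graph-builder API:

```cpp
#include "ggml.h"

// Sketch: apply a control vector to the hidden state of one layer.
// (apply_control_vector is a hypothetical stand-in for the build_cvec call.)
static struct ggml_tensor * apply_control_vector(
        struct ggml_context * ctx,
        struct ggml_tensor  * cur,     // hidden state, [n_embd, n_tokens]
        struct ggml_tensor  * cvec) {  // per-layer control vector, [n_embd], may be NULL
    if (cvec == NULL) {
        return cur; // no control vector loaded for this layer
    }
    // broadcast add over the token dimension
    return ggml_add(ctx, cur, cvec);
}
```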