commit    ff55414c42522adbeaa1bd9c52c0e9db16942484
tree      b783bd9a92bd7b716f817458d915862c92fcdab0
parent    73955f7d2a3ce1f36d7ecc14495e08957b51d113
author    Piotr Wilkin (ilintar) <redacted>  Fri, 28 Nov 2025 11:02:56 +0000 (12:02 +0100)
committer GitHub <redacted>  Fri, 28 Nov 2025 11:02:56 +0000 (12:02 +0100)
model : Qwen3 Next (#16095)

* Qwen3 Next - cleaned up version

* Whitespace and stuff

* Correct minor errors

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>

* Misc. fixes.

* Clean up code, add missing hybrid qualifier

* Did someone transpose the SOLVE_TRI result matrix? Perhaps...

* Whitespace

* Proper tensors for cb calls

* Use llama-graph.h vertical alignment

* BROKEN: chunking

* Set new tensors as inputs.

* Proper chunk logic

* It's the circle of life...

* More shenanigans for n_seq > 1

* Nail in the coffin?

* Fix Windows build

* Eh, one fails on Windows, the other fails on Mac... just use general capture.

* quant : cleanup

* model : cleanup

* qwen3 : cleanup

* cont : cleanup

* cont : cleanup

* ggml : revert change

* qwen3 : cleanup

* cont : cleanup

* Re-add cmath

* qwen3 : fix typo

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>

* Usual suspects

* fix my bad suggestion

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>
16 files changed:
convert_hf_to_gguf.py
examples/model-conversion/scripts/causal/run-converted-model.sh
examples/model-conversion/scripts/causal/run-org-model.py
ggml/src/ggml-cpu/ops.cpp
gguf-py/gguf/constants.py
gguf-py/gguf/tensor_mapping.py
src/CMakeLists.txt
src/llama-arch.cpp
src/llama-arch.h
src/llama-context.cpp
src/llama-hparams.h
src/llama-model.cpp
src/llama-model.h
src/llama-quant.cpp
src/models/models.h
src/models/qwen3next.cpp [new file with mode: 0644]
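
As a usage sketch only (the model path and output file names below are illustrative, not part of this commit): with the converter changes in convert_hf_to_gguf.py, a locally downloaded Qwen3 Next checkpoint can be converted to GGUF and then run with the standard llama.cpp CLI:

    # convert a local HF checkpoint to GGUF (path and outfile are hypothetical)
    python convert_hf_to_gguf.py /path/to/Qwen3-Next-model --outfile qwen3next-f16.gguf --outtype f16

    # run the converted model
    llama-cli -m qwen3next-f16.gguf -p "Hello"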