model : Qwen3 Next (llama/16095)
author     Piotr Wilkin (ilintar) <redacted>
           Fri, 28 Nov 2025 11:02:56 +0000 (12:02 +0100)
committer  Georgi Gerganov <redacted>
           Thu, 11 Dec 2025 13:32:48 +0000 (15:32 +0200)
commit     8e7df929f6704a9da47082354ad42cbe6a75371d
tree       14ceb702e6801806c8ef49bd8044876669522eea
parent     a8a933fb1fe8b612459a8c340aed81f567d15cfa

* Qwen3 Next - cleaned up version

* Whitespaces and stuff

* Correct minor errors

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>

* Misc. fixes.

* Clean up code, add missing hybrid qualifier

* Did someone transpose the SOLVE_TRI result matrix? Perhaps...

* Whitespace

* Proper tensors for cb calls

* Use llama-graph.h vertical alignment

* BROKEN: chunking

* Set new tensors as inputs.

* Proper chunk logic

* It's the circle of life...

* More shenanigans for n_seq > 1

* Nail in the coffin?

* Fix Windows build

* Eh, one fails on Windows, the other fails on Mac... just use general capture.

* quant : cleanup

* model : cleanup

* qwen3 : cleanup

* cont : cleanup

* cont : cleanup

* ggml : revert change

* qwen3 : cleanup

* cont : cleanup

* Re-add cmath

* qwen3 : fix typo

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>

* Usual suspects

* fix my bad suggestion

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>
src/ggml-cpu/ops.cpp