git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Daniel Bevenius [Sun, 27 Jul 2025 10:10:51 +0000 (12:10 +0200)]
llama : clarify comment about pp and tg graphs [no ci] (#14895)
* llama : clarify comment about pp and tg graphs [no ci]
This commit clarifies the comment in `llama-context.cpp` regarding the
prefill prompt (pp) and token generation (tg) graphs.
The motivation for this is that I've struggled to remember these and had
to look them up more than once, so I thought it would be helpful to add
a comment that makes it clear what these stand for.
* squash! llama : clarify comment about pp and tg graphs [no ci]
Change "pp" to "prompt processing".
Erik Scholz [Sun, 27 Jul 2025 10:04:33 +0000 (12:04 +0200)]
vulkan : add fp16 support for the conv_2d kernel (#14872)
* add f16 to conv_2d testing
* weaken conv2d test error threshold
Jeff Bolz [Sun, 27 Jul 2025 09:05:34 +0000 (04:05 -0500)]
vulkan: skip empty set_rows to avoid invalid API usage (#14860)
Gabriel Larson [Sun, 27 Jul 2025 08:18:37 +0000 (03:18 -0500)]
model : make rope_yarn_log_mul optional for deepseek2 (#14896)
* make rope_yarn_log_mul optional for deepseek2
* default rope_yarn_log_mul = 0.0f
Shunta Saito [Sun, 27 Jul 2025 07:38:44 +0000 (16:38 +0900)]
llama : fix kq_scale for the attention layers of PLaMo2 (#14892)
* Fix dimensions for expand
* Change dimensions to copy states to cache
* Fix the default value for plamo2 conversion
* Fix scale given to build_attn
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Aman Gupta [Sun, 27 Jul 2025 01:36:43 +0000 (09:36 +0800)]
Docs: add instructions for adding backends (#14889)
deepsek [Sat, 26 Jul 2025 22:28:14 +0000 (18:28 -0400)]
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (#14624)
This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a, and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-K are only enabled on CDNA3 for now, as they fail to outperform BLAS in all cases on the other devices.
BLAS is currently only consistently outperformed on CDNA3, due to issues in the AMD-provided BLAS libraries.
This commit also improves the awareness of MMQ of different warp sizes and, as a side effect, improves the performance of all quant formats on GCN GPUs, except q4_0 and q4_1, which regress slightly.
hipudding [Sat, 26 Jul 2025 09:56:18 +0000 (17:56 +0800)]
CANN: Implement GLU ops (#14884)
Implement REGLU, GEGLU, SWIGLU ops according to #14158
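These three ops follow the usual gated-linear-unit definitions (gate activation applied to one half, multiplied elementwise by the other). As a rough scalar sketch in plain Python, not the CANN kernels themselves:

```python
import math

def reglu(a, b):
    # ReLU(a) * b
    return max(a, 0.0) * b

def geglu(a, b):
    # GELU(a) * b, using the common tanh approximation of GELU
    g = 0.5 * a * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (a + 0.044715 * a ** 3)))
    return g * b

def swiglu(a, b):
    # SiLU(a) * b, where SiLU(x) = x * sigmoid(x)
    return a / (1.0 + math.exp(-a)) * b
```

In ggml these operate on tensors split into a gate half and a value half; the scalar forms above only illustrate the per-element math.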
R0CKSTAR [Sat, 26 Jul 2025 02:36:02 +0000 (10:36 +0800)]
musa: fix build warnings (unused variable) (#14869)
Signed-off-by: Xiaodong Ye <redacted>
Aaron Teo [Fri, 25 Jul 2025 17:09:03 +0000 (01:09 +0800)]
ggml-cpu : disable GGML_NNPA by default due to instability (#14880)
* docs: update s390x document for sentencepiece
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit e086c5e3a7ab3463d8e0906efcfa39352db0a48d)
* docs: update huggingface links + reword
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 8410b085ea8c46e22be38266147a1e94757ef108)
* ggml-cpu: disable ggml-nnpa compile flag by default
fixes #14877
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 412f4c7c88894b8f55846b4719c76892a23cfe09)
* docs: update s390x build docs to reflect nnpa disable
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit c1eeae1d0c2edc74ab9fbeff2707b0d357cf0b4d)
---------
Signed-off-by: Aaron Teo <redacted>
Gabe Goodhart [Fri, 25 Jul 2025 16:47:39 +0000 (10:47 -0600)]
metal: SSM_SCAN performance (#14743)
* feat: Add s_off as a parameter in the args struct
This may not be necessary, but it more closely mirrors the CUDA kernel
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state
This is a first attempt at optimizing the metal kernel. The changes here
are:
- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y
computation
When tested with G4 tiny preview, this shows roughly a 3x speedup on
prefill and 15% speedup on decode.
Signed-off-by: Gabe Goodhart <redacted>
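The simd-group summation described above is a two-stage reduction: each simd group reduces its slice of d_state to a partial sum, partials are staged in threadgroup shared memory, and one group reduces the partials. A schematic sketch in plain Python (the real kernel uses Metal's simd_sum; the grouping below only models the structure):

```python
def two_stage_sum(values, simd_width=32):
    # Stage 1: each "simd group" collapses its slice to one partial sum.
    # On the GPU this is a per-group simd_sum whose result is written
    # to threadgroup shared memory.
    partials = [sum(values[i:i + simd_width])
                for i in range(0, len(values), simd_width)]
    # Stage 2: a single simd group reduces the partial sums to the final y.
    return sum(partials)
```

This is why the kernel asserts a relationship between the simd size and the number of simd groups: stage 2 assumes the partials fit in one group.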
* fix: Update logic to correctly do the multi-layer parallel sum
Signed-off-by: Gabe Goodhart <redacted>
* fix: Correctly size the shared memory buffer and assert expected size relationships
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Compute block offsets once rather than once per token
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use local variable for state recursion
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use a secondary simd_sum instead of a for loop
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Add assertion and comment about relationship between simd size and num simd groups
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallelize over d_state for mamba-1
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallel sum in SSM_CONV
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* Revert "feat: Parallel sum in SSM_CONV"
After discussion with @compilade, the amount of parallelism available here is
not worth the cost in complexity or the overhead of the parallel for.
https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357
This reverts commit 16bc059660c1c59e566628201c0ca2c20c9f4bc3.
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Simplify shared memory sizing
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: Georgi Gerganov <redacted>
---------
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
lhez [Fri, 25 Jul 2025 15:12:13 +0000 (08:12 -0700)]
opencl: add fused `rms_norm_mul` (#14841)
* opencl: add fused `rms_norm` + `mul`
* opencl: improve workgroup size for `rms_norm_mul`
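Fusing `rms_norm` with the following `mul` saves a kernel launch and an intermediate buffer: one pass computes the RMS, then normalizes and multiplies in the same loop. A minimal sketch of the math (plain Python, not the OpenCL kernel):

```python
def rms_norm(x, eps=1e-6):
    # Normalize by the root-mean-square of the vector.
    rms = (sum(v * v for v in x) / len(x) + eps) ** 0.5
    return [v / rms for v in x]

def rms_norm_mul(x, w, eps=1e-6):
    # Fused form: the norm and the elementwise multiply share one pass,
    # so the normalized intermediate never has to be materialized.
    rms = (sum(v * v for v in x) / len(x) + eps) ** 0.5
    return [v / rms * wi for v, wi in zip(x, w)]
```

The fused result matches applying `rms_norm` and then multiplying by `w` elementwise.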
wooksong [Fri, 25 Jul 2025 14:25:05 +0000 (23:25 +0900)]
docs : update HOWTO-add-model.md for ModelBase and new model classes (#14874)
This patch updates the example in docs/development/HOWTO-add-model.md to
reflect recent changes after `TextModel` and `MmprojModel` were introduced.
It replaces the outdated `Model` base class with `TextModel` or `MmprojModel`
and updates the registration example accordingly.
Signed-off-by: Wook Song <redacted>
Oliver Simons [Fri, 25 Jul 2025 11:29:57 +0000 (13:29 +0200)]
ggml : remove invalid portPos specifiers from dot files (#14838)
Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):
> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".
I tested locally that it falls back to the default portPos specifier when an
invalid portPos is specified. As a consequence, we can remove the associated
code.
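The validity rule quoted from the graphviz docs can be checked mechanically. A small illustrative helper (hypothetical, not part of ggml) that tests only the compass component of a portPos:

```python
# Valid compass points per the graphviz portPos documentation.
VALID_COMPASS = {"n", "ne", "e", "se", "s", "sw", "w", "nw", "c", "_"}

def has_valid_compass(portpos: str) -> bool:
    # A portPos may be "port", "compass", or "port:compass"; when a
    # compass point is present it is the part after the last colon.
    compass = portpos.rsplit(":", 1)[-1]
    return compass in VALID_COMPASS
```

Under this check, the removed specifiers "g" and "x" are rejected, matching the upstream documentation.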
Georgi Gerganov [Fri, 25 Jul 2025 11:28:06 +0000 (14:28 +0300)]
context : restore preemptive sched reset when LLAMA_SET_ROWS=0 (#14870)
ggml-ci
kiwi [Fri, 25 Jul 2025 11:08:04 +0000 (19:08 +0800)]
mtmd : fix 32-bit narrowing issue in export-lora and mtmd clip (#14503)
* [fix] Fix 32-bit narrowing issue in export-lora and mtmd clip
* Update export-lora.cpp
* Update clip.cpp
* Update export-lora.cpp
* format: use space to replace tab
Chris Rohlf [Fri, 25 Jul 2025 10:17:02 +0000 (06:17 -0400)]
rpc : check for null buffers in get/set/copy tensor endpoints (#14868)
Diego Devesa [Fri, 25 Jul 2025 08:07:26 +0000 (01:07 -0700)]
sched : fix multiple evaluations of the same graph with pipeline parallelism (#14855)
ggml-ci
R0CKSTAR [Thu, 24 Jul 2025 19:05:37 +0000 (03:05 +0800)]
musa: upgrade musa sdk to rc4.2.0 (#14498)
* musa: apply mublas API changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: update musa version to 4.2.0
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore MUSA graph settings in CMakeLists.txt
Signed-off-by: Xiaodong Ye <redacted>
* musa: disable mudnnMemcpyAsync by default
Signed-off-by: Xiaodong Ye <redacted>
* musa: switch back to non-mudnn images
Signed-off-by: Xiaodong Ye <redacted>
* minor changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore rc in docker image tag
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Georgi Gerganov [Thu, 24 Jul 2025 15:30:33 +0000 (18:30 +0300)]
sync : ggml
ggml-ci
Kai Pastor [Tue, 22 Jul 2025 18:13:21 +0000 (20:13 +0200)]
cmake : fix usage issues (ggml/1257)
* CMake config: Create target only once
Fix error on repeated find_package(ggml).
For simplicity, check only for the top-level ggml::ggml.
* CMake config: Add CUDA link libs
* CMake config: Add OpenCL link libs
* CMake config: Use canonical find_dependency
Use set and append to control link lib variables.
Apply more $<LINK_ONLY...>.
* CMake config: Wire OpenMP dependency
Daniel Bevenius [Mon, 21 Jul 2025 13:53:12 +0000 (15:53 +0200)]
ggml-cpu : remove stdlib include from repack.cpp (ggml/1276)
This commit removes the inclusion of `<cstdlib>`.
The motivation for this change is that this source file does not seem to
use any functions from this header and the comment about `qsort` is a
little misleading/confusing.
Georgi Gerganov [Thu, 24 Jul 2025 13:31:48 +0000 (16:31 +0300)]
context : perform output reorder lazily upon access after sync (#14853)
* context : perform output reorder lazily upon access after sync
ggml-ci
* cont : add TODO
Xuan-Son Nguyen [Thu, 24 Jul 2025 11:59:56 +0000 (13:59 +0200)]
chat : fix kimi-k2 chat template (#14852)
Alberto Cabrera Pérez [Thu, 24 Jul 2025 10:09:57 +0000 (11:09 +0100)]
sycl: fixed semantics of block offset calculation (#14814)
yummy [Thu, 24 Jul 2025 09:50:51 +0000 (17:50 +0800)]
llama : fix MiniCPM inference after Granite Four changes (#14850)
MiniCPM models use the llm_build_granite constructor, which was changed
in the Granite Four PR to use hparams.rope_finetuned instead of a
use_rope parameter. MiniCPM models need rope enabled by default.
This fixes inference, turning gibberish output into correct responses.
Pouya [Thu, 24 Jul 2025 09:26:44 +0000 (12:26 +0300)]
docs: add libcurl-dev install hint for Linux distros (#14801)
* docs: add libcurl-dev install hint for Linux distros
Signed-off-by: PouyaGhahramanian <redacted>
* Update docs/build.md
---------
Signed-off-by: PouyaGhahramanian <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Georgi Gerganov [Thu, 24 Jul 2025 07:24:05 +0000 (10:24 +0300)]
metal : fix fusion across different encoders (#14849)
* metal : fix fusion across different encoders
ggml-ci
* cont : add assertion
ggml-ci
Donghyeon Jeong [Thu, 24 Jul 2025 04:50:41 +0000 (13:50 +0900)]
sycl: fix undefined variable in work group size check (#14843)
jacekpoplawski [Wed, 23 Jul 2025 21:23:57 +0000 (23:23 +0200)]
convert : text-only support for GLM-4.1V-9B-Thinking (#14823)
* use language_model part only, ignore visual layers
* fix rope_dim calculation
Johannes Gäßler [Wed, 23 Jul 2025 19:43:25 +0000 (21:43 +0200)]
CUDA: fix overflow in FA, tune performance (#14840)
Johannes Gäßler [Wed, 23 Jul 2025 16:22:30 +0000 (18:22 +0200)]
CUDA: fix compilation with GGML_CUDA_F16 (#14837)
Sigbjørn Skjæret [Wed, 23 Jul 2025 12:27:54 +0000 (14:27 +0200)]
ci : correct label refactor->refactoring (#14832)
Johannes Gäßler [Wed, 23 Jul 2025 10:35:53 +0000 (12:35 +0200)]
CUDA: fix quantized KV cache + multiple sequences (#14822)
* CUDA: fix quantized KV cache + multiple sequences
* Update ggml/src/ggml-cuda/fattn-common.cuh
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 18 Jul 2025 10:36:27 +0000 (13:36 +0300)]
tests : add non-cont K,V FA tests
ggml-ci
l3utterfly [Wed, 23 Jul 2025 08:16:41 +0000 (16:16 +0800)]
memory : handle saving/loading null layers in recurrent memory (#14675)
* Update llama-memory-recurrent.cpp
handle saving/loading null layers in recurrent memory
* fixed styling issues and updated comments
* fix styling issue
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
lixing-star [Wed, 23 Jul 2025 06:39:51 +0000 (14:39 +0800)]
ggml: fix loongarch quantize_row_q8_1 error (#14827)
chen fan [Wed, 23 Jul 2025 03:58:00 +0000 (11:58 +0800)]
CANN: weight format to NZ for Ascend310P3 (#14407)
* weight format to nz for 310p
* remove quant weight format to nz
* clean code
* fix
* make the conditions for converting weights to NZ format consistent
* clean code
Aman Gupta [Wed, 23 Jul 2025 01:25:42 +0000 (09:25 +0800)]
CUDA: add fused rms norm (#14800)
Csaba Kecskemeti [Tue, 22 Jul 2025 16:29:43 +0000 (09:29 -0700)]
ggml : model card yaml tab->2xspace (#14819)
Jeff Bolz [Tue, 22 Jul 2025 15:35:21 +0000 (10:35 -0500)]
vulkan: fix rms_norm_mul to handle broadcasting dim0 (#14817)
Molly Sophia [Tue, 22 Jul 2025 15:01:29 +0000 (23:01 +0800)]
llama : add model type detection for rwkv7 7B&14B (#14816)
Signed-off-by: Molly Sophia <redacted>
Ed Addario [Tue, 22 Jul 2025 12:33:37 +0000 (13:33 +0100)]
imatrix: add option to display importance score statistics for a given imatrix file (#12718)
* Add --show-statistics option
* Add --show-statistics logic
* Add tensor name parsing
* Tidy output format
* Fix typo in title
* Improve tensor influence ranking
* Add better statistics
* Change statistics' sort order
* Add Cosine Similarity
* Add header search path
* Change header search path to private
* Add weighted statistics per layer
* Update report title
* Refactor compute_statistics out of main
* Refactor compute_cossim out of load_imatrix
* Refactor compute_statistics out of load_imatrix
* Move imatrix statistics calculation into its own functions
* Add checks and validations
* Remove unnecessary include directory
* Rename labels
* Add m_stats getter and refactor compute_statistics out of load_imatrix
* Refactor variable names
* Minor cosmetic change
* Retrigger checks (empty commit)
* Rerun checks (empty commit)
* Fix unnecessary type promotion
Co-authored-by: compilade <redacted>
* Reverting change to improve code readability
* Rerun checks (empty commit)
* Rerun checks (empty commit)
* Rerun checks - third time's the Charm 🤞 (empty commit)
* Minor cosmetic change
* Update README
* Fix typo
* Update README
* Rerun checks (empty commit)
* Re-implement changes on top of #9400
* Update README.md
* Update README
* Update README.md
Co-authored-by: compilade <redacted>
* Update README.md
Co-authored-by: compilade <redacted>
* Update README.md
* Remove duplicate option in print_usage()
* Update README.md
* Update README.md
Co-authored-by: compilade <redacted>
* Update README.md
Co-authored-by: compilade <redacted>
* Remove input check
* Remove commented out code
---------
Co-authored-by: compilade <redacted>
stduhpf [Tue, 22 Jul 2025 10:51:03 +0000 (12:51 +0200)]
Mtmd: add a way to select device for vision encoder (#14236)
* Mtmd: add a way to select device for vision encoder
* simplify
* format
* Warn user if manual device selection failed
* initialize backend to nullptr
Sigbjørn Skjæret [Tue, 22 Jul 2025 10:33:10 +0000 (12:33 +0200)]
cuda : implement bf16 cpy ops and enable bf16 cont (#14763)
* implement bf16 cpy ops and enable bf16 cont
* deduplicate copy functions
* deduplicate checks
lhez [Tue, 22 Jul 2025 06:53:30 +0000 (23:53 -0700)]
opencl: remove unreachable `return` (#14806)
Molly Sophia [Tue, 22 Jul 2025 01:24:22 +0000 (09:24 +0800)]
server : allow setting `--reverse-prompt` arg (#14799)
Signed-off-by: Molly Sophia <redacted>
R0CKSTAR [Mon, 21 Jul 2025 23:45:26 +0000 (07:45 +0800)]
cuda: remove linking to cublasLt (#14790)
Signed-off-by: Xiaodong Ye <redacted>
Sigbjørn Skjæret [Mon, 21 Jul 2025 20:55:10 +0000 (22:55 +0200)]
opencl: fix `im2col` when `KW!=KH` (#14803)
rmatif [Mon, 21 Jul 2025 17:03:19 +0000 (19:03 +0200)]
opencl: add conv2d kernel (#14403)
* add conv2d kernel
* fix trailing whitespace
* whitespace fix
* handle f16 input and f16 kernel, more opt
* resolve conflicts
* use enqueue_ndrange_kernel
Romain Biessy [Mon, 21 Jul 2025 16:39:29 +0000 (18:39 +0200)]
sycl: Fix im2col (#14797)
Charles Xu [Mon, 21 Jul 2025 13:49:52 +0000 (15:49 +0200)]
kleidiai: add support for get_rows (#14676)
* kleidiai: add support for get_rows
* apply fixes based on code review
* apply more fixes based on code review
Radoslav Gerganov [Mon, 21 Jul 2025 12:03:49 +0000 (15:03 +0300)]
docs : fix backends table in README.md (#14796)
Jeff Bolz [Mon, 21 Jul 2025 11:35:40 +0000 (06:35 -0500)]
vulkan/cuda: Fix im2col when KW!=KH (#14789)
The tid is decomposed into "ow + ky*OW + kx*OW*KH". Change "ksize" to match.
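The decomposition quoted above can be sketched directly: with tid = ow + ky*OW + kx*OW*KH, the stride of kx is OW*KH, so the loop bound ("ksize") must use KH there, which only coincides with KW when the kernel is square. A plain-Python illustration of the round trip:

```python
def compose_tid(ow, ky, kx, OW, KH):
    # Flat thread id: ow varies fastest, then ky (stride OW),
    # then kx (stride OW*KH).
    return ow + ky * OW + kx * OW * KH

def decompose_tid(tid, OW, KH):
    # Inverse of compose_tid; note KW never appears in the strides.
    ow = tid % OW
    ky = (tid // OW) % KH
    kx = tid // (OW * KH)
    return ow, ky, kx
```

With KW != KH, sizing by OW*KW instead of OW*KH would break this inverse, which is the bug the commit fixes.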
Molly Sophia [Mon, 21 Jul 2025 09:38:36 +0000 (17:38 +0800)]
llama : fix `--reverse-prompt` crashing issue (#14794)
Signed-off-by: Molly Sophia <redacted>
IsaacDynamo [Mon, 21 Jul 2025 07:24:51 +0000 (09:24 +0200)]
server : add parse_special option to /tokenize endpoint (#14783)
Aman Gupta [Sun, 20 Jul 2025 18:13:47 +0000 (02:13 +0800)]
docs : fix link for tools/perplexity in README.md (#14780)
rspOverflow [Sun, 20 Jul 2025 16:55:32 +0000 (23:55 +0700)]
Documentation: Further revisions to the Vulkan section in build.md (#14785)
* Documentation: Revised and further improved the Vulkan instructions for Linux users in build.md.
* Minor: Revise step 2 of the Vulkan instructions for Linux users in build.md
Aman Gupta [Sun, 20 Jul 2025 11:42:34 +0000 (19:42 +0800)]
Clang-format: local files first + fix BinPacking (#14779)
0cc4m [Sat, 19 Jul 2025 20:47:21 +0000 (22:47 +0200)]
Contrib: add 0cc4m as codeowner for Vulkan backend (#14775)
Ervin Áron Tasnádi [Sat, 19 Jul 2025 19:59:08 +0000 (21:59 +0200)]
ggml: adds CONV_2D op and direct GEMM Vulkan implementation (#14316)
* ggml/ggml-vulkan/test-backend-ops: adds CONV_2D for Vulkan
* ggml-vulkan: adds f32 scalar shader to compute 2D convolution directly
with gemm (no need for im2col),
* test-backend-ops: adds test_case_ref to check the validity/performance of ops
against reference implementations having different graphs, adds tests
* Performance fixes: minimized branch divergence, uses collectives to
eliminate redundant calculation, macros removed.
* Kernel shared memory size check
* Updates test-backend-ops to support graphs for performance
measurement.
* Apple/Win32 compile errors fixed
* Subgroup size used to determine tile size -> fixes llvmpipe errors.
* Collectives disabled by default.
* Intel support is disabled as the performance is poor.
* Conv2d enabled for Intel with disabled collectives, disabled for Apple
* test-backend-ops modifications are reverted
* Trailing spaces and missing override fixed.
* Triggering pipeline relaunch.
* Code formatted with .clang-format.
compilade [Sat, 19 Jul 2025 16:51:22 +0000 (12:51 -0400)]
imatrix : use GGUF to store importance matrices (#9400)
* imatrix : allow processing multiple chunks per batch
* perplexity : simplify filling the batch
* imatrix : fix segfault when using a single chunk per batch
* imatrix : use GGUF to store imatrix data
* imatrix : fix conversion problems
* imatrix : use FMA and sort tensor names
* py : add requirements for legacy imatrix convert script
* perplexity : revert changes
* py : include imatrix converter requirements in toplevel requirements
* imatrix : avoid using designated initializers in C++
* imatrix : remove unused n_entries
* imatrix : allow loading mis-ordered tensors
Sums and counts tensors no longer need to be consecutive.
* imatrix : more sanity checks when loading multiple imatrix files
* imatrix : use ggml_format_name instead of std::string concatenation
Co-authored-by: Xuan Son Nguyen <redacted>
* quantize : use unused imatrix chunk_size with LLAMA_TRACE
* common : use GGUF for imatrix output by default
* imatrix : two-way conversion between old format and GGUF
* convert : remove imatrix to gguf python script
* imatrix : use the function name in more error messages
* imatrix : don't use FMA explicitly
This should make comparisons between the formats easier
because this matches the behavior of the previous version.
* imatrix : avoid returning from void function save_imatrix
* imatrix : support 3d tensors with MUL_MAT
* quantize : fix dataset name loading from gguf imatrix
* common : move string_remove_suffix from quantize and imatrix
Co-authored-by: Sigbjørn Skjæret <redacted>
* imatrix : add warning when legacy format is written
* imatrix : warn when writing partial data, to help guess dataset coverage
Also make the legacy format store partial data
by using neutral values for missing data.
This matches what is done at read-time for the new format,
and so should get the same quality in case the old format is still used.
* imatrix : avoid loading model to convert or combine imatrix
* imatrix : avoid using imatrix.dat in README
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Peter0x44 [Sat, 19 Jul 2025 15:58:03 +0000 (16:58 +0100)]
vulkan: Add logging for bf16 features to ggml_vk_print_gpu_info (#13274) (#14707)
0cc4m [Sat, 19 Jul 2025 15:47:53 +0000 (17:47 +0200)]
Vulkan: Fix fprintf format-security warning (#14770)
rspOverflow [Sat, 19 Jul 2025 10:18:36 +0000 (17:18 +0700)]
Documentation: Update build.md's Vulkan section (#14736)
* Documentation: Rewrote and updated the "Without docker" portion of the Vulkan backend build documentation.
* Documentation: Reorganize build.md's Vulkan section.
Georgi Gerganov [Sat, 19 Jul 2025 08:46:12 +0000 (11:46 +0300)]
sync : ggml
Georgi Gerganov [Fri, 18 Jul 2025 17:37:26 +0000 (20:37 +0300)]
metal : fuse add, mul + add tests (#14596)
ggml-ci
Georgi Gerganov [Fri, 18 Jul 2025 17:08:33 +0000 (20:08 +0300)]
graph : fix graph reuse reset of params (#14760)
ggml-ci
Georgi Gerganov [Fri, 18 Jul 2025 14:33:41 +0000 (17:33 +0300)]
parallel : add option for different RNG seeds (#14757)
ggml-ci
Oliver Simons [Fri, 18 Jul 2025 11:35:32 +0000 (13:35 +0200)]
cuda : Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs (#14741)
* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs
Gemma3n uses Matrix-Matrix addition as part of their input processing,
wrongly triggering CUDA_GRAPH disablement on NVGPUs even when batch-size
of 1 is used.
* Exclude `project_per_layer_input` by matching node names
This ensures that all other graphs which don't exhibit this pattern do
not have their behavior changed.
* Revert unnecessary formatting changes
Georgi Gerganov [Fri, 18 Jul 2025 11:31:15 +0000 (14:31 +0300)]
graph : avoid huge warm-up graphs for MoE models (#14753)
* graph : avoid huge warm-up graphs for MoE models
ggml-ci
* cont : bump max nodes to 8x model tensors
Georgi Gerganov [Fri, 18 Jul 2025 08:53:55 +0000 (11:53 +0300)]
model : fix build after merge conflict (#14754)
lgai-exaone [Fri, 18 Jul 2025 08:45:49 +0000 (17:45 +0900)]
model : add EXAONE 4.0 support (#14630)
Aman Gupta [Fri, 18 Jul 2025 06:54:18 +0000 (14:54 +0800)]
CUDA: set_rows + cpy.cu refactor (#14712)
Georgi Gerganov [Fri, 18 Jul 2025 05:29:28 +0000 (08:29 +0300)]
graph : refactor context to not pass gf explicitly (#14629)
ggml-ci
Nexes the Elder [Fri, 18 Jul 2025 04:25:54 +0000 (06:25 +0200)]
graph : Pass the graph placeholder message in debug mode (#14748)
Without that condition, this debug log clutters the screen for every batch processed during prompt processing, or for every token generated, in Kobold.cpp.
Neo Zhang Jianyu [Fri, 18 Jul 2025 02:23:14 +0000 (10:23 +0800)]
use max work group size for device to replace the magic number (#14732)
Piotr Wilkin (ilintar) [Thu, 17 Jul 2025 23:17:16 +0000 (01:17 +0200)]
convert : fix Ernie4.5 MoE without shared experts (#14746)
Wroclaw [Thu, 17 Jul 2025 22:18:16 +0000 (00:18 +0200)]
nix : use optionalAttrs for env mkDerivation attrset argument (#14726)
Piotr Wilkin (ilintar) [Thu, 17 Jul 2025 21:15:32 +0000 (23:15 +0200)]
model: add Ernie 4.5 MoE support (#14658)
* Add Ernie4.5 MoE
* Fix Flake errors.
* Properly encode/decode MoE layer step
* Correct tensor mappings (.weight)
* Pass and read n_ff_exp
* n_ff_shexp calculation and further minor changes
* Rope fixes.
* .gitignore fix
* Add uint32 cast for Linux builds
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
* Further fixes from code review
* Fix trailing whitespace
* Reenable missing experts error
* Code style from code review
Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix non-MoE regression
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Georgi Gerganov [Thu, 17 Jul 2025 17:52:33 +0000 (20:52 +0300)]
kv-cache : fix k-shift for multiple streams (#14742)
ggml-ci
Georgi Gerganov [Thu, 17 Jul 2025 16:08:33 +0000 (19:08 +0300)]
llama : reuse compute graphs (#14482)
* llama : reuse compute graphs
ggml-ci
* llama-bench : add graph reuse parameter
ggml-ci
* cont : remove the parameter and the sched resets
ggml-ci
* graph : rename update() to can_reuse()
ggml-ci
* params : remove is_same()
ggml-ci
* graph : set res->params in llm_graph_context constructor
ggml-ci
* graph : avoid set_max_nodes in llm_graph_result
ggml-ci
* kv-cache : reuse llama_context's graph result instance
ggml-ci
* context : reset the previous graph result upon memory updates
ggml-ci
* batch : llama_ubatch now carries its data instead of pointing to balloc
ggml-ci
* merge : fix build
ggml-ci
* graph : fix can_reuse() checks when flash-attention is disabled
* graph : move llm_graph_result impl in source file + debug env
ggml-ci
Tarek Dakhran [Thu, 17 Jul 2025 07:22:11 +0000 (09:22 +0200)]
llama : fix parallel processing for lfm2 (#14705)
Georgi Gerganov [Thu, 17 Jul 2025 06:49:15 +0000 (09:49 +0300)]
kv-cache : opt mask set input (#14600)
ggml-ci
Georgi Gerganov [Thu, 17 Jul 2025 06:45:54 +0000 (09:45 +0300)]
batch : fix uninitialized has_cpl flag (#14733)
ggml-ci
Sigbjørn Skjæret [Wed, 16 Jul 2025 23:52:08 +0000 (01:52 +0200)]
ci : disable failing vulkan crossbuilds (#14723)
Sigbjørn Skjæret [Wed, 16 Jul 2025 21:17:43 +0000 (23:17 +0200)]
convert : make hf token optional (#14717)
* make hf token optional
* fail if we can't get necessary tokenizer config
Diner Burger [Wed, 16 Jul 2025 19:17:25 +0000 (15:17 -0400)]
llama : fix parameter order for hybrid memory initialization (#14725)
Reese Levine [Wed, 16 Jul 2025 15:18:51 +0000 (08:18 -0700)]
ggml: Add initial WebGPU backend (#14521)
* Minimal setup of webgpu backend with dawn. Just prints out the adapter and segfaults
* Initialize webgpu device
* Making progress on setting up the backend
* Finish more boilerplate/utility functions
* Organize file and work on alloc buffer
* Add webgpu_context to prepare for actually running some shaders
* Work on memset and add shader loading
* Work on memset polyfill
* Implement set_tensor as webgpu WriteBuffer, remove host_buffer stubs since webgpu doesn't support it
* Implement get_tensor and buffer_clear
* Finish rest of setup
* Start work on compute graph
* Basic mat mul working
* Work on emscripten build
* Basic WebGPU backend instructions
* Use EMSCRIPTEN flag
* Work on passing ci, implement 4d tensor multiplication
* Pass thread safety test
* Implement permuting for mul_mat and cpy
* minor cleanups
* Address feedback
* Remove division by type size in cpy op
* Fix formatting and add github action workflows for vulkan and metal (m-series) webgpu backends
* Fix name
* Fix macos dawn prefix path
tempstudio [Wed, 16 Jul 2025 15:02:06 +0000 (10:02 -0500)]
model : support output bias for qwen2 (#14711)
Co-authored-by: qwaqrm <redacted>
Georgi Gerganov [Wed, 16 Jul 2025 13:35:42 +0000 (16:35 +0300)]
llama : add high-throughput mode (#14363)
* kv-cache : prepare K/V buffers for separation
ggml-ci
* batched-bench : fix oob write
ggml-ci
* llama : add "virtual sequences"
ggml-ci
* llama : use "stream" vs "virtual sequence"
ggml-ci
* graph : fix stream splitting when KV cache is not used
ggml-ci
* kv-cache : add multi-stream save/load support
ggml-ci
* llama : add "--attn-streams" flag
ggml-ci
* kv-cache : fix handling when find_slot fails
ggml-ci
* kv-cache : restore find_slot impl
ggml-ci
* kv-cache : add comments
* kv-cache : add bounds checks for sequence id
ggml-ci
* cont : add n_seq_max to batch allocr
ggml-ci
* kv-cache : perform stream copies lazily after llama_synchronize
ggml-ci
* kv-cache : avoid throwing exceptions across the C boundary
ggml-ci
* CUDA: 4D FlashAttention support (#14628)
* CUDA: 4D FlashAttention support
* CUDA: fix WMMA FA kernel
* llama : rename attn_streams -> kv_unified
ggml-ci
* common : rename kv_split -> kv_unified
ggml-ci
---------
Co-authored-by: Johannes Gäßler <redacted>
Aman Gupta [Wed, 16 Jul 2025 12:03:51 +0000 (20:03 +0800)]
Support diffusion models: Add Dream 7B (#14644)
* Support diffusion models: Add Dream 7B
* Move diffusion to examples
* Move stuff to examples. Add patch to not use kv-cache
* Address review comments
* Make sampling fast
* llama: remove diffusion functions
* Add basic timings + cleanup
* More cleanup
* Review comments: better formatting, use LOG instead of std::cerr, re-use batch, use ubatch instead of max_length
* fixup!
* Review: move everything to diffusion-cli for now
Georgi Gerganov [Wed, 16 Jul 2025 11:43:32 +0000 (14:43 +0300)]
ggml : add asserts (#14720)
* ggml : add asserts
ggml-ci
* cont : fix constant type
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
Georgi Gerganov [Wed, 16 Jul 2025 11:04:12 +0000 (14:04 +0300)]
server : pre-calculate EOG logit biases (#14721)
ggml-ci
Shunta Saito [Wed, 16 Jul 2025 10:12:22 +0000 (19:12 +0900)]
llama : fix parallel processing for plamo2 (#14716)
Georgi Gerganov [Wed, 16 Jul 2025 09:13:57 +0000 (12:13 +0300)]
server : fix handling of the ignore_eos flag (#14710)
ggml-ci
Johannes Gäßler [Wed, 16 Jul 2025 07:33:28 +0000 (09:33 +0200)]
scripts: synthetic prompt mode for server-bench.py (#14695)
Sigbjørn Skjæret [Wed, 16 Jul 2025 06:52:04 +0000 (08:52 +0200)]
convert : only check for tokenizer folder if we need it (#14704)
Sigbjørn Skjæret [Wed, 16 Jul 2025 06:51:12 +0000 (08:51 +0200)]
convert : add pre-computed hashes first to prevent order mishaps (#14701)
Min-Hua [Wed, 16 Jul 2025 04:00:42 +0000 (12:00 +0800)]
llama: add LLAMA_API to deprecated llama_kv_self_seq_div (#14708)
Add LLAMA_API to fix the run-time error with llama-cpp-python in a Windows env:
AttributeError: function 'llama_kv_self_seq_div' not found.
Did you mean: 'llama_kv_self_seq_add'?
Although llama_kv_self_seq_div() has been marked deprecated, it is still
necessary to export it to keep llama-cpp-python happy.
Observed software version:
OS: windows
compiler: MSVC
llama-cpp-python: tag: v0.3.12-cu124
llama.cpp: tag: b5833
Signed-off-by: Min-Hua Chen <redacted>
Co-authored-by: Min-Hua Chen <redacted>