git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
2 months ago CUDA: fix compilation with GGML_CUDA_F16 (#14837)
Johannes Gäßler [Wed, 23 Jul 2025 16:22:30 +0000 (18:22 +0200)]
CUDA: fix compilation with GGML_CUDA_F16 (#14837)

2 months ago ci : correct label refactor->refactoring (#14832)
Sigbjørn Skjæret [Wed, 23 Jul 2025 12:27:54 +0000 (14:27 +0200)]
ci : correct label refactor->refactoring (#14832)

2 months ago CUDA: fix quantized KV cache + multiple sequences (#14822)
Johannes Gäßler [Wed, 23 Jul 2025 10:35:53 +0000 (12:35 +0200)]
CUDA: fix quantized KV cache + multiple sequences (#14822)

* CUDA: fix quantized KV cache + multiple sequences

* Update ggml/src/ggml-cuda/fattn-common.cuh

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
2 months ago tests : add non-cont K,V FA tests
Georgi Gerganov [Fri, 18 Jul 2025 10:36:27 +0000 (13:36 +0300)]
tests : add non-cont K,V FA tests

ggml-ci

2 months ago memory : handle saving/loading null layers in recurrent memory (#14675)
l3utterfly [Wed, 23 Jul 2025 08:16:41 +0000 (16:16 +0800)]
memory : handle saving/loading null layers in recurrent memory (#14675)

* Update llama-memory-recurrent.cpp

handle saving/loading null layers in recurrent memory

* fixed styling issues and updated comments

* fix styling issue

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago ggml: fix loongarch quantize_row_q8_1 error (#14827)
lixing-star [Wed, 23 Jul 2025 06:39:51 +0000 (14:39 +0800)]
ggml: fix loongarch quantize_row_q8_1 error (#14827)

2 months ago CANN: weight format to NZ for Ascend310P3 (#14407)
chen fan [Wed, 23 Jul 2025 03:58:00 +0000 (11:58 +0800)]
CANN: weight format to NZ for Ascend310P3 (#14407)

* weight format to nz for 310p

* remove quant weight format to nz

* clean code

* fix

* make the conditions for converting weights to NZ format consistent

* clean code

2 months ago CUDA: add fused rms norm (#14800)
Aman Gupta [Wed, 23 Jul 2025 01:25:42 +0000 (09:25 +0800)]
CUDA: add fused rms norm (#14800)

2 months ago ggml : model card yaml tab->2xspace (#14819)
Csaba Kecskemeti [Tue, 22 Jul 2025 16:29:43 +0000 (09:29 -0700)]
ggml : model card yaml tab->2xspace (#14819)

2 months ago vulkan: fix rms_norm_mul to handle broadcasting dim0 (#14817)
Jeff Bolz [Tue, 22 Jul 2025 15:35:21 +0000 (10:35 -0500)]
vulkan: fix rms_norm_mul to handle broadcasting dim0 (#14817)

2 months ago llama : add model type detection for rwkv7 7B&14B (#14816)
Molly Sophia [Tue, 22 Jul 2025 15:01:29 +0000 (23:01 +0800)]
llama : add model type detection for rwkv7 7B&14B (#14816)

Signed-off-by: Molly Sophia <redacted>
2 months ago imatrix: add option to display importance score statistics for a given imatrix file...
Ed Addario [Tue, 22 Jul 2025 12:33:37 +0000 (13:33 +0100)]
imatrix: add option to display importance score statistics for a given imatrix file (#12718)

* Add --show-statistics option

* Add --show-statistics logic

* Add tensor name parsing

* Tidy output format

* Fix typo in title

* Improve tensor influence ranking

* Add better statistics

* Change statistics' sort order

* Add Cosine Similarity

* Add header search path

* Change header search path to private

* Add weighted statistics per layer

* Update report title

* Refactor compute_statistics out of main

* Refactor compute_cossim out of load_imatrix

* Refactor compute_statistics out of load_imatrix

* Move imatrix statistics calculation into its own functions

* Add checks and validations

* Remove unnecessary include directory

* Rename labels

* Add m_stats getter and refactor compute_statistics out of load_imatrix

* Refactor variable names

* Minor cosmetic change

* Retrigger checks (empty commit)

* Rerun checks (empty commit)

* Fix unnecessary type promotion

Co-authored-by: compilade <redacted>
* Reverting change to improve code readability

* Rerun checks (empty commit)

* Rerun checks (empty commit)

* Rerun checks - third time's the Charm 🤞 (empty commit)

* Minor cosmetic change

* Update README

* Fix typo

* Update README

* Rerun checks (empty commit)

* Re-implement changes on top of #9400

* Update README.md

* Update README

* Update README.md

Co-authored-by: compilade <redacted>
* Update README.md

Co-authored-by: compilade <redacted>
* Update README.md

* Remove duplicate option in print_usage()

* Update README.md

* Update README.md

Co-authored-by: compilade <redacted>
* Update README.md

Co-authored-by: compilade <redacted>
* Remove input check

* Remove commented out code

---------

Co-authored-by: compilade <redacted>
2 months ago Mtmd: add a way to select device for vision encoder (#14236)
stduhpf [Tue, 22 Jul 2025 10:51:03 +0000 (12:51 +0200)]
Mtmd: add a way to select device for vision encoder (#14236)

* Mtmd: add a way to select device for vision encoder

* simplify

* format

* Warn user if manual device selection failed

* initialize backend to nullptr

2 months ago cuda : implement bf16 cpy ops and enable bf16 cont (#14763)
Sigbjørn Skjæret [Tue, 22 Jul 2025 10:33:10 +0000 (12:33 +0200)]
cuda : implement bf16 cpy ops and enable bf16 cont (#14763)

* implement bf16 cpy ops and enable bf16 cont

* deduplicate copy functions

* deduplicate checks

2 months ago opencl: remove unreachable `return` (#14806)
lhez [Tue, 22 Jul 2025 06:53:30 +0000 (23:53 -0700)]
opencl: remove unreachable `return` (#14806)

2 months ago server : allow setting `--reverse-prompt` arg (#14799)
Molly Sophia [Tue, 22 Jul 2025 01:24:22 +0000 (09:24 +0800)]
server : allow setting `--reverse-prompt` arg (#14799)

Signed-off-by: Molly Sophia <redacted>
2 months ago cuda: remove linking to cublasLt (#14790)
R0CKSTAR [Mon, 21 Jul 2025 23:45:26 +0000 (07:45 +0800)]
cuda: remove linking to cublasLt (#14790)

Signed-off-by: Xiaodong Ye <redacted>
2 months ago opencl: fix `im2col` when `KW!=KH` (#14803)
Sigbjørn Skjæret [Mon, 21 Jul 2025 20:55:10 +0000 (22:55 +0200)]
opencl: fix `im2col` when `KW!=KH` (#14803)

2 months ago opencl: add conv2d kernel (#14403)
rmatif [Mon, 21 Jul 2025 17:03:19 +0000 (19:03 +0200)]
opencl: add conv2d kernel (#14403)

* add conv2d kernel

* fix trailing whitespace

* whitespace fix

* handle f16 input and f16 kernel, more opt

* resolve conflicts

* use enqueue_ndrange_kernel

2 months ago sycl: Fix im2col (#14797)
Romain Biessy [Mon, 21 Jul 2025 16:39:29 +0000 (18:39 +0200)]
sycl: Fix im2col (#14797)

2 months ago kleidiai: add support for get_rows (#14676)
Charles Xu [Mon, 21 Jul 2025 13:49:52 +0000 (15:49 +0200)]
kleidiai: add support for get_rows (#14676)

* kleidiai: add support for get_rows

* apply fixes based on code review

* apply more fixes based on code review

2 months ago docs : fix backends table in README.md (#14796)
Radoslav Gerganov [Mon, 21 Jul 2025 12:03:49 +0000 (15:03 +0300)]
docs : fix backends table in README.md (#14796)

2 months ago vulkan/cuda: Fix im2col when KW!=KH (#14789)
Jeff Bolz [Mon, 21 Jul 2025 11:35:40 +0000 (06:35 -0500)]
vulkan/cuda: Fix im2col when KW!=KH (#14789)

The tid is decomposed into "ow + ky*OW + kx*OW*KH". Change "ksize" to match.
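The decomposition above can be checked with a short sketch (plain Python, not the actual kernel code; sizes and names here are illustrative):

```python
# Sketch of the im2col thread-index decomposition described above,
# assuming tid enumerates (kx, ky, ow) with ow varying fastest:
#   tid = ow + ky*OW + kx*OW*KH
# so the total index space ("ksize") must be OW*KH*KW, which differs
# from OW*KW*KW whenever KW != KH.

def decompose(tid, OW, KH):
    # invert tid = ow + ky*OW + kx*OW*KH
    ow = tid % OW
    ky = (tid // OW) % KH
    kx = tid // (OW * KH)
    return ow, ky, kx

def recompose(ow, ky, kx, OW, KH):
    return ow + ky * OW + kx * OW * KH

OW, KH, KW = 5, 3, 2      # example sizes with KW != KH
ksize = OW * KH * KW      # "ksize" consistent with the decomposition
for tid in range(ksize):
    ow, ky, kx = decompose(tid, OW, KH)
    assert 0 <= ow < OW and 0 <= ky < KH and 0 <= kx < KW
    assert recompose(ow, ky, kx, OW, KH) == tid
```

With a mismatched `ksize` (e.g. `OW*KW*KW`), part of the kernel grid would map to out-of-range `kx` values, which is the bug this commit fixes.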

2 months ago llama : fix `--reverse-prompt` crashing issue (#14794)
Molly Sophia [Mon, 21 Jul 2025 09:38:36 +0000 (17:38 +0800)]
llama : fix `--reverse-prompt` crashing issue (#14794)

Signed-off-by: Molly Sophia <redacted>
2 months ago server : add parse_special option to /tokenize endpoint (#14783)
IsaacDynamo [Mon, 21 Jul 2025 07:24:51 +0000 (09:24 +0200)]
server : add parse_special option to /tokenize endpoint (#14783)

2 months ago docs : fix link for tools/perplexity in README.md (#14780)
Aman Gupta [Sun, 20 Jul 2025 18:13:47 +0000 (02:13 +0800)]
docs : fix link for tools/perplexity in README.md (#14780)

2 months ago Documentation: Further revisions to the Vulkan section in build.md (#14785)
rspOverflow [Sun, 20 Jul 2025 16:55:32 +0000 (23:55 +0700)]
Documentation: Further revisions to the Vulkan section in build.md (#14785)

* Documentation: Revised and further improved the Vulkan instructions for Linux users in build.md.

* Minor: Revise step 2 of the Vulkan instructions for Linux users in build.md

2 months ago Clang-format: local files first + fix BinPacking (#14779)
Aman Gupta [Sun, 20 Jul 2025 11:42:34 +0000 (19:42 +0800)]
Clang-format: local files first + fix BinPacking (#14779)

2 months ago Contrib: add 0cc4m as codeowner for Vulkan backend (#14775)
0cc4m [Sat, 19 Jul 2025 20:47:21 +0000 (22:47 +0200)]
Contrib: add 0cc4m as codeowner for Vulkan backend (#14775)

2 months ago ggml: adds CONV_2D op and direct GEMM Vulkan implementation (#14316)
Ervin Áron Tasnádi [Sat, 19 Jul 2025 19:59:08 +0000 (21:59 +0200)]
ggml: adds CONV_2D op and direct GEMM Vulkan implementation (#14316)

* ggml/ggml-vulkan/test-backend-ops: adds CONV_2D for Vulkan

* ggml-vulkan: adds f32 scalar shader to compute 2D convolution directly
with gemm (no need for im2col)

* test-backend-ops: adds test_case_ref to check the validity/performance of ops
against reference implementations having different graphs, adds tests

* Performance fixes: minimized branch divergence, uses collectives to
  eliminate redundant calculation, macros removed.

* Kernel shared memory size check

* Updates test-backend-ops to support graphs for performance
  measurement.

* Apple/Win32 compile errors fixed

* Subgroup size used to determine tile size -> fixes llvmpipe errors.

* Collectives disabled by default.

* Intel support is disabled as the performance is poor.

* Conv2d enabled for Intel with disabled collectives, disabled for Apple

* test-backend-ops modifications are reverted

* Trailing spaces and missing override fixed.

* Triggering pipeline relaunch.

* Code formatted with .clang-format.

2 months ago imatrix : use GGUF to store importance matrices (#9400)
compilade [Sat, 19 Jul 2025 16:51:22 +0000 (12:51 -0400)]
imatrix : use GGUF to store importance matrices (#9400)

* imatrix : allow processing multiple chunks per batch

* perplexity : simplify filling the batch

* imatrix : fix segfault when using a single chunk per batch

* imatrix : use GGUF to store imatrix data

* imatrix : fix conversion problems

* imatrix : use FMA and sort tensor names

* py : add requirements for legacy imatrix convert script

* perplexity : revert changes

* py : include imatrix converter requirements in toplevel requirements

* imatrix : avoid using designated initializers in C++

* imatrix : remove unused n_entries

* imatrix : allow loading mis-ordered tensors

Sums and counts tensors no longer need to be consecutive.

* imatrix : more sanity checks when loading multiple imatrix files

* imatrix : use ggml_format_name instead of std::string concatenation

Co-authored-by: Xuan Son Nguyen <redacted>
* quantize : use unused imatrix chunk_size with LLAMA_TRACE

* common : use GGUF for imatrix output by default

* imatrix : two-way conversion between old format and GGUF

* convert : remove imatrix to gguf python script

* imatrix : use the function name in more error messages

* imatrix : don't use FMA explicitly

This should make comparisons between the formats easier
because this matches the behavior of the previous version.

* imatrix : avoid returning from void function save_imatrix

* imatrix : support 3d tensors with MUL_MAT

* quantize : fix dataset name loading from gguf imatrix

* common : move string_remove_suffix from quantize and imatrix

Co-authored-by: Sigbjørn Skjæret <redacted>
* imatrix : add warning when legacy format is written

* imatrix : warn when writing partial data, to help guess dataset coverage

Also make the legacy format store partial data
by using neutral values for missing data.
This matches what is done at read-time for the new format,
and so should get the same quality in case the old format is still used.

* imatrix : avoid loading model to convert or combine imatrix

* imatrix : avoid using imatrix.dat in README

---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago vulkan: Add logging for bf16 features to ggml_vk_print_gpu_info (#13274) (#14707)
Peter0x44 [Sat, 19 Jul 2025 15:58:03 +0000 (16:58 +0100)]
vulkan: Add logging for bf16 features to ggml_vk_print_gpu_info (#13274) (#14707)

2 months ago Vulkan: Fix fprintf format-security warning (#14770)
0cc4m [Sat, 19 Jul 2025 15:47:53 +0000 (17:47 +0200)]
Vulkan: Fix fprintf format-security warning (#14770)

2 months ago Documentation: Update build.md's Vulkan section (#14736)
rspOverflow [Sat, 19 Jul 2025 10:18:36 +0000 (17:18 +0700)]
Documentation: Update build.md's Vulkan section (#14736)

* Documentation: Rewrote and updated the "Without docker" portion of the Vulkan backend build documentation.

* Documentation: Reorganize build.md's Vulkan section.

2 months ago sync : ggml
Georgi Gerganov [Sat, 19 Jul 2025 08:46:12 +0000 (11:46 +0300)]
sync : ggml

2 months ago metal : fuse add, mul + add tests (#14596)
Georgi Gerganov [Fri, 18 Jul 2025 17:37:26 +0000 (20:37 +0300)]
metal : fuse add, mul + add tests (#14596)

ggml-ci

2 months ago graph : fix graph reuse reset of params (#14760)
Georgi Gerganov [Fri, 18 Jul 2025 17:08:33 +0000 (20:08 +0300)]
graph : fix graph reuse reset of params (#14760)

ggml-ci

2 months ago parallel : add option for different RNG seeds (#14757)
Georgi Gerganov [Fri, 18 Jul 2025 14:33:41 +0000 (17:33 +0300)]
parallel : add option for different RNG seeds (#14757)

ggml-ci

2 months ago cuda : Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs (#14741)
Oliver Simons [Fri, 18 Jul 2025 11:35:32 +0000 (13:35 +0200)]
cuda : Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs (#14741)

* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs

Gemma3n uses matrix-matrix addition as part of its input processing,
wrongly triggering CUDA graph disablement on NVIDIA GPUs even when a
batch size of 1 is used.

* Exclude `project_per_layer_input` by matching node names

This ensures that all other graphs which don't exhibit this pattern do
not have their behavior changed.

* Revert unnecessary formatting changes
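The name-based exclusion described above can be sketched as follows (illustrative Python, not the actual CUDA backend code; the node name "project_per_layer_input" comes from the commit message, everything else is a simplified assumption):

```python
# Sketch of excluding a known-safe node from the CUDA-graph disable
# heuristic. A node here is just (op, name); the real heuristic also
# inspects tensor shapes, which is simplified away.

def disables_cuda_graph(node):
    op, name = node
    if op == "ADD" and "project_per_layer_input" in name:
        # known Gemma3n input-processing pattern: safe for graph capture,
        # so it must NOT disable CUDA graphs
        return False
    # simplified stand-in for "batched matrix-matrix addition" check
    return op == "ADD"

nodes = [("ADD", "project_per_layer_input-0"),  # excluded by name
         ("ADD", "ffn_out-3"),                  # still disables capture
         ("MUL_MAT", "attn_q")]                 # unaffected either way
```

Matching on the node name ensures that graphs which do not contain this exact pattern keep their previous behavior, which is the point of the second bullet above.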

2 months ago graph : avoid huge warm-up graphs for MoE models (#14753)
Georgi Gerganov [Fri, 18 Jul 2025 11:31:15 +0000 (14:31 +0300)]
graph : avoid huge warm-up graphs for MoE models (#14753)

* graph : avoid huge warm-up graphs for MoE models

ggml-ci

* cont : bump max nodes to 8x model tensors

2 months ago model : fix build after merge conflict (#14754)
Georgi Gerganov [Fri, 18 Jul 2025 08:53:55 +0000 (11:53 +0300)]
model : fix build after merge conflict (#14754)

2 months ago model : add EXAONE 4.0 support (#14630)
lgai-exaone [Fri, 18 Jul 2025 08:45:49 +0000 (17:45 +0900)]
model : add EXAONE 4.0 support (#14630)

2 months ago CUDA: set_rows + cpy.cu refactor (#14712)
Aman Gupta [Fri, 18 Jul 2025 06:54:18 +0000 (14:54 +0800)]
CUDA: set_rows + cpy.cu refactor (#14712)

2 months ago graph : refactor context to not pass gf explicitly (#14629)
Georgi Gerganov [Fri, 18 Jul 2025 05:29:28 +0000 (08:29 +0300)]
graph : refactor context to not pass gf explicitly (#14629)

ggml-ci

2 months ago graph : Pass the graph placeholder message in debug mode (#14748)
Nexes the Elder [Fri, 18 Jul 2025 04:25:54 +0000 (06:25 +0200)]
graph : Pass the graph placeholder message in debug mode (#14748)

Without that condition, this debug log clutters the screen for every batch processed during prompt processing, or for every token generated in Kobold.cpp.

2 months ago use max work group size for device to replace the magic number (#14732)
Neo Zhang Jianyu [Fri, 18 Jul 2025 02:23:14 +0000 (10:23 +0800)]
use max work group size for device to replace the magic number (#14732)

2 months ago convert : fix Ernie4.5 MoE without shared experts (#14746)
Piotr Wilkin (ilintar) [Thu, 17 Jul 2025 23:17:16 +0000 (01:17 +0200)]
convert : fix Ernie4.5 MoE without shared experts (#14746)

2 months ago nix : use optionalAttrs for env mkDerivation attrset argument (#14726)
Wroclaw [Thu, 17 Jul 2025 22:18:16 +0000 (00:18 +0200)]
nix : use optionalAttrs for env mkDerivation attrset argument (#14726)

2 months ago model: add Ernie 4.5 MoE support (#14658)
Piotr Wilkin (ilintar) [Thu, 17 Jul 2025 21:15:32 +0000 (23:15 +0200)]
model: add Ernie 4.5 MoE support (#14658)

* Add Ernie4.5 MoE

* Fix Flake errors.

* Properly encode/decode MoE layer step

* Correct tensor mappings (.weight)

* Pass and read n_ff_exp

* n_ff_shexp calculation and further minor changes

* Rope fixes.

* .gitignore fix

* Add uint32 cast for Linux builds

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Further fixes from code review

* Fix trailing whitespace

* Reenable missing experts error

* Code style from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix non-MoE regression

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago kv-cache : fix k-shift for multiple streams (#14742)
Georgi Gerganov [Thu, 17 Jul 2025 17:52:33 +0000 (20:52 +0300)]
kv-cache : fix k-shift for multiple streams (#14742)

ggml-ci

2 months ago llama : reuse compute graphs (#14482)
Georgi Gerganov [Thu, 17 Jul 2025 16:08:33 +0000 (19:08 +0300)]
llama : reuse compute graphs (#14482)

* llama : reuse compute graphs

ggml-ci

* llama-bench : add graph reuse parameter

ggml-ci

* cont : remove the parameter and the sched resets

ggml-ci

* graph : rename update() to can_reuse()

ggml-ci

* params : remove is_same()

ggml-ci

* graph : set res->params in llm_graph_context constructor

ggml-ci

* graph : avoid set_max_nodes in llm_graph_result

ggml-ci

* kv-cache : reuse llama_context's graph result instance

ggml-ci

* context : reset the previous graph result upon memory updates

ggml-ci

* batch : llama_ubatch now carries its data instead of pointing to balloc

ggml-ci

* merge : fix build

ggml-ci

* graph : fix can_reuse() checks when flash-attention is disabled

* graph : move llm_graph_result impl in source file + debug env

ggml-ci

2 months ago llama : fix parallel processing for lfm2 (#14705)
Tarek Dakhran [Thu, 17 Jul 2025 07:22:11 +0000 (09:22 +0200)]
llama : fix parallel processing for lfm2 (#14705)

2 months ago kv-cache : opt mask set input (#14600)
Georgi Gerganov [Thu, 17 Jul 2025 06:49:15 +0000 (09:49 +0300)]
kv-cache : opt mask set input (#14600)

ggml-ci

2 months ago batch : fix uninitialized has_cpl flag (#14733)
Georgi Gerganov [Thu, 17 Jul 2025 06:45:54 +0000 (09:45 +0300)]
batch : fix uninitialized has_cpl flag (#14733)

ggml-ci

2 months ago ci : disable failing vulkan crossbuilds (#14723)
Sigbjørn Skjæret [Wed, 16 Jul 2025 23:52:08 +0000 (01:52 +0200)]
ci : disable failing vulkan crossbuilds (#14723)

2 months ago convert : make hf token optional (#14717)
Sigbjørn Skjæret [Wed, 16 Jul 2025 21:17:43 +0000 (23:17 +0200)]
convert : make hf token optional (#14717)

* make hf token optional

* fail if we can't get necessary tokenizer config

2 months ago llama : fix parameter order for hybrid memory initialization (#14725)
Diner Burger [Wed, 16 Jul 2025 19:17:25 +0000 (15:17 -0400)]
llama : fix parameter order for hybrid memory initialization (#14725)

2 months ago ggml: Add initial WebGPU backend (#14521)
Reese Levine [Wed, 16 Jul 2025 15:18:51 +0000 (08:18 -0700)]
ggml: Add initial WebGPU backend (#14521)

* Minimal setup of webgpu backend with dawn. Just prints out the adapter and segfaults

* Initialize webgpu device

* Making progress on setting up the backend

* Finish more boilerplate/utility functions

* Organize file and work on alloc buffer

* Add webgpu_context to prepare for actually running some shaders

* Work on memset and add shader loading

* Work on memset polyfill

* Implement set_tensor as webgpu WriteBuffer, remove host_buffer stubs since webgpu doesn't support it

* Implement get_tensor and buffer_clear

* Finish rest of setup

* Start work on compute graph

* Basic mat mul working

* Work on emscripten build

* Basic WebGPU backend instructions

* Use EMSCRIPTEN flag

* Work on passing ci, implement 4d tensor multiplication

* Pass thread safety test

* Implement permuting for mul_mat and cpy

* minor cleanups

* Address feedback

* Remove division by type size in cpy op

* Fix formatting and add github action workflows for vulkan and metal (m-series) webgpu backends

* Fix name

* Fix macos dawn prefix path

2 months ago model : support output bias for qwen2 (#14711)
tempstudio [Wed, 16 Jul 2025 15:02:06 +0000 (10:02 -0500)]
model : support output bias for qwen2 (#14711)

Co-authored-by: qwaqrm <redacted>
2 months ago llama : add high-throughput mode (#14363)
Georgi Gerganov [Wed, 16 Jul 2025 13:35:42 +0000 (16:35 +0300)]
llama : add high-throughput mode (#14363)

* kv-cache : prepare K/V buffers for separation

ggml-ci

* batched-bench : fix oob write

ggml-ci

* llama : add "virtual sequences"

ggml-ci

* llama : use "stream" vs "virtual sequence"

ggml-ci

* graph : fix stream splitting when KV cache is not used

ggml-ci

* kv-cache : add multi-stream save/load support

ggml-ci

* llama : add "--attn-streams" flag

ggml-ci

* kv-cache : fix handling when find_slot fails

ggml-ci

* kv-cache : restore find_slot impl

ggml-ci

* kv-cache : add comments

* kv-cache : add bounds checks for sequence id

ggml-ci

* cont : add n_seq_max to batch allocr

ggml-ci

* kv-cache : perform stream copies lazily after llama_synchronize

ggml-ci

* kv-cache : avoid throwing exceptions across the C boundary

ggml-ci

* CUDA: 4D FlashAttention support (#14628)

* CUDA: 4D FlashAttention support

* CUDA: fix WMMA FA kernel

* llama : rename attn_streams -> kv_unified

ggml-ci

* common : rename kv_split -> kv_unified

ggml-ci

---------

Co-authored-by: Johannes Gäßler <redacted>
2 months ago Support diffusion models: Add Dream 7B (#14644)
Aman Gupta [Wed, 16 Jul 2025 12:03:51 +0000 (20:03 +0800)]
Support diffusion models: Add Dream 7B (#14644)

* Support diffusion models: Add Dream 7B

* Move diffusion to examples

* Move stuff to examples. Add patch to not use kv-cache

* Address review comments

* Make sampling fast

* llama: remove diffusion functions

* Add basic timings + cleanup

* More cleanup

* Review comments: better formatting, use LOG instead of std::cerr, re-use batch, use ubatch instead of max_length

* fixup!

* Review: move everything to diffusion-cli for now

2 months ago ggml : add asserts (#14720)
Georgi Gerganov [Wed, 16 Jul 2025 11:43:32 +0000 (14:43 +0300)]
ggml : add asserts (#14720)

* ggml : add asserts

ggml-ci

* cont : fix constant type

Co-authored-by: Diego Devesa <redacted>
---------

Co-authored-by: Diego Devesa <redacted>
2 months ago server : pre-calculate EOG logit biases (#14721)
Georgi Gerganov [Wed, 16 Jul 2025 11:04:12 +0000 (14:04 +0300)]
server : pre-calculate EOG logit biases (#14721)

ggml-ci

2 months ago llama : fix parallel processing for plamo2 (#14716)
Shunta Saito [Wed, 16 Jul 2025 10:12:22 +0000 (19:12 +0900)]
llama : fix parallel processing for plamo2 (#14716)

2 months ago server : fix handling of the ignore_eos flag (#14710)
Georgi Gerganov [Wed, 16 Jul 2025 09:13:57 +0000 (12:13 +0300)]
server : fix handling of the ignore_eos flag (#14710)

ggml-ci

2 months ago scripts: synthetic prompt mode for server-bench.py (#14695)
Johannes Gäßler [Wed, 16 Jul 2025 07:33:28 +0000 (09:33 +0200)]
scripts: synthetic prompt mode for server-bench.py (#14695)

2 months ago convert : only check for tokenizer folder if we need it (#14704)
Sigbjørn Skjæret [Wed, 16 Jul 2025 06:52:04 +0000 (08:52 +0200)]
convert : only check for tokenizer folder if we need it (#14704)

2 months ago convert : add pre-computed hashes first to prevent order mishaps (#14701)
Sigbjørn Skjæret [Wed, 16 Jul 2025 06:51:12 +0000 (08:51 +0200)]
convert : add pre-computed hashes first to prevent order mishaps (#14701)

2 months ago llama: add LLAMA_API to deprecated llama_kv_self_seq_div (#14708)
Min-Hua [Wed, 16 Jul 2025 04:00:42 +0000 (12:00 +0800)]
llama: add LLAMA_API to deprecated llama_kv_self_seq_div (#14708)

Add LLAMA_API to fix the run-time error with llama-cpp-python in a Windows env:
AttributeError: function 'llama_kv_self_seq_div' not found.
Did you mean: 'llama_kv_self_seq_add'?

Although llama_kv_self_seq_div() has been marked deprecated, it is still
necessary to export it to keep llama-cpp-python working.

Observed software version:
OS: windows
compiler: MSVC
llama-cpp-python: tag: v0.3.12-cu124
llama.cpp: tag: b5833

Signed-off-by: Min-Hua Chen <redacted>
Co-authored-by: Min-Hua Chen <redacted>
2 months ago gguf-py : dump bpw per layer and model in markdown mode (#14703)
Ed Addario [Tue, 15 Jul 2025 22:04:42 +0000 (23:04 +0100)]
gguf-py : dump bpw per layer and model in markdown mode (#14703)

2 months ago model : add Kimi-K2 support (#14654)
Gabriel Larson [Tue, 15 Jul 2025 19:54:22 +0000 (14:54 -0500)]
model : add Kimi-K2 support (#14654)

* Kimi-K2 conversion

* add Kimi_K2  pre type

* Kimi-K2

* Kimi-K2 unicode

* Kimi-K2

* LLAMA_MAX_EXPERTS 384

* fix vocab iteration

* regex space fix

* add kimi-k2 to pre_computed_hashes

* Updated with kimi-k2 get_vocab_base_pre hash

* fix whitespaces

* fix flake errors

* remove more unicode.cpp whitespaces

* change set_vocab() flow

* add moonshotai-Kimi-K2.jinja to /models/templates/

* update moonshotai-Kimi-K2.jinja

* add kimi-k2 chat template

* add kimi-k2

* update NotImplementedError

Co-authored-by: Sigbjørn Skjæret <redacted>
* except Exception

Co-authored-by: Sigbjørn Skjæret <redacted>
* LLM_CHAT_TEMPLATE_KIMI_K2 if(add_ass){}

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago vulkan: fix noncontig check for mat_mul_id splitting (#14683)
Jeff Bolz [Tue, 15 Jul 2025 19:51:09 +0000 (14:51 -0500)]
vulkan: fix noncontig check for mat_mul_id splitting (#14683)

* vulkan: fix noncontig check for mat_mul_id splitting

Remove supports_op check for > 4096 (splitting fixes this)

* vulkan: fix batched matmul dequant for Q*_K

2 months ago vulkan: add RTE variants for glu/add/sub/mul/div (#14653)
Jeff Bolz [Tue, 15 Jul 2025 19:32:11 +0000 (14:32 -0500)]
vulkan: add RTE variants for glu/add/sub/mul/div (#14653)

2 months ago model : add PLaMo-2 support (#14560)
Shunta Saito [Tue, 15 Jul 2025 16:11:42 +0000 (01:11 +0900)]
model : add PLaMo-2 support (#14560)

* Add PLaMo-2 model using hybrid memory module

* Fix z shape

* Add cmath to include from llama-vocab.h

* Explicitly dequantize normalization weights before RoPE apply

* Revert unnecessary cast because the problem can be solved by excluding attn_k, attn_q when quantizing

* Use ATTN_K/Q_NORM for k,q weights to prevent quantization

* Remove SSM_BCDT that is not used from anywhere

* Do not duplicate embedding weights for output.weight

* Fix tokenizer encoding problem for multibyte strings

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Use LLM_FFN_SWIGLU instead of splitting ffn_gate and ffn_up

* Remove unnecessary part for Grouped Query Attention

* Fix how to load special token id to gguf

* Remove unused tensor mapping

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Remove llama_vocab_plamo2 class and replace it with llm_tokenizer_plamo2_session to follow the other tokenizer implementations

* Update src/llama-vocab.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
* Fix plamo2 tokenizer session to prevent multiple calls of build()

---------

Co-authored-by: Francis Couture-Harpin <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 months ago cuda: fix build warnings in set-rows.cu (unused variable) (#14687)
R0CKSTAR [Tue, 15 Jul 2025 07:28:53 +0000 (15:28 +0800)]
cuda: fix build warnings in set-rows.cu (unused variable) (#14687)

Signed-off-by: Xiaodong Ye <redacted>
2 months ago sycl: Hotfix for non dnnl codepath (#14677)
Anton Mitkov [Mon, 14 Jul 2025 17:12:42 +0000 (18:12 +0100)]
sycl: Hotfix for non dnnl codepath (#14677)

2 months ago ggml : refactor llamafile_sgemm PPC code (#14673)
shalinib-ibm [Mon, 14 Jul 2025 13:16:42 +0000 (18:46 +0530)]
ggml : refactor llamafile_sgemm PPC code (#14673)

Remove unnecessary templates from class definition and packing functions.
Reduce deeply nested conditionals and if-else switching in the mnpack function.
Replace repetitive code with inline functions in the packing functions.

2 ~ 7% improvement in Q8 Model
15 ~ 50% improvement in Q4 Model

Signed-off-by: Shalini Salomi Bodapati <redacted>
2 months ago llama-context: add ability to get logits (#14672)
Aman Gupta [Mon, 14 Jul 2025 13:01:41 +0000 (21:01 +0800)]
llama-context: add ability to get logits (#14672)

2 months ago scripts: benchmark for HTTP server throughput (#14668)
Johannes Gäßler [Mon, 14 Jul 2025 11:14:30 +0000 (13:14 +0200)]
scripts: benchmark for HTTP server throughput (#14668)

* scripts: benchmark for HTTP server throughput

* fix server connection reset

2 months ago SYCL: use 1D kernel for set_rows (#14618)
Akarshan Biswas [Mon, 14 Jul 2025 09:37:55 +0000 (15:07 +0530)]
SYCL: use 1D kernel for set_rows (#14618)

* SYCL: Use 1D kernel for set_rows

* Remove dangling comment

* Refactor and use ceil_div

2 months ago sycl: Batched mulmat rework for oneDNN dispatch (#14617)
Anton Mitkov [Mon, 14 Jul 2025 09:37:35 +0000 (10:37 +0100)]
sycl: Batched mulmat rework for oneDNN dispatch (#14617)

2 months ago llama : add jinja template for rwkv-world (#14665)
Molly Sophia [Sun, 13 Jul 2025 23:43:43 +0000 (07:43 +0800)]
llama : add jinja template for rwkv-world (#14665)

* llama : add jinja template for rwkv-world

Signed-off-by: Molly Sophia <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago quantize : fix minor logic flaw in --tensor-type (#14572)
Ed Addario [Sun, 13 Jul 2025 16:02:17 +0000 (17:02 +0100)]
quantize : fix minor logic flaw in --tensor-type (#14572)

2 months ago cuda : add set rows for bf16 (#14664)
Sigbjørn Skjæret [Sun, 13 Jul 2025 13:01:24 +0000 (15:01 +0200)]
cuda : add set rows for bf16 (#14664)

2 months ago cuda : add ELU support (#14657)
Yavor Ivanov [Sun, 13 Jul 2025 09:33:16 +0000 (02:33 -0700)]
cuda : add ELU support (#14657)

2 months ago ggml : add build-time message to remind about ggml_set_rows (#14661)
Georgi Gerganov [Sun, 13 Jul 2025 07:36:33 +0000 (10:36 +0300)]
ggml : add build-time message to remind about ggml_set_rows (#14661)

ggml-ci

2 months ago metal : Add missing unary ops Metal support (#14660)
Yavor Ivanov [Sun, 13 Jul 2025 05:38:13 +0000 (22:38 -0700)]
metal : Add missing unary ops Metal support (#14660)

2 months ago cmake : Add CMake presets for Linux and GCC (#14656)
Yavor Ivanov [Sun, 13 Jul 2025 05:12:36 +0000 (22:12 -0700)]
cmake : Add CMake presets for Linux and GCC (#14656)

2 months ago tests : cover lfm2 cases in test_ssm_conv (#14651)
Tarek Dakhran [Sat, 12 Jul 2025 17:10:14 +0000 (19:10 +0200)]
tests : cover lfm2 cases in test_ssm_conv (#14651)

2 months ago docs : add LFM2 to models section (#14650)
Tarek Dakhran [Sat, 12 Jul 2025 17:07:08 +0000 (19:07 +0200)]
docs : add LFM2 to models section (#14650)

* readme : add LFM2 to models section

* fix copy paste...

2 months ago CUDA: add set rows for f32 and f16 (#14551) upstream/0.0.5882
Aman Gupta [Sat, 12 Jul 2025 13:31:38 +0000 (21:31 +0800)]
CUDA: add set rows for f32 and f16 (#14551)

* CUDA: add set rows for f32 and f16

* Review: change kernel params, use strides from host

* Use 1-d kernel

* Review: use int64_t for blockDim.x, rename nb->s for clarity

2 months ago sync : ggml
Georgi Gerganov [Sat, 12 Jul 2025 13:06:12 +0000 (16:06 +0300)]
sync : ggml

2 months ago vulkan : remove unused vars (#0)
Georgi Gerganov [Sat, 12 Jul 2025 09:39:32 +0000 (12:39 +0300)]
vulkan : remove unused vars (#0)

ggml-ci

2 months ago sync : ggml
Georgi Gerganov [Sat, 12 Jul 2025 09:39:27 +0000 (12:39 +0300)]
sync : ggml

ggml-ci

2 months ago vulkan : implement bilinear interpolation (ggml/1291)
Acly [Sat, 12 Jul 2025 09:37:37 +0000 (12:37 +0300)]
vulkan : implement bilinear interpolation (ggml/1291)

ggml-ci

2 months ago vulkan : implement ggml_roll (ggml/1290)
Acly [Sat, 12 Jul 2025 09:32:32 +0000 (12:32 +0300)]
vulkan : implement ggml_roll (ggml/1290)

ggml-ci

2 months ago server : fix pooled embedding output (#14645)
Douglas Hanley [Sat, 12 Jul 2025 10:21:02 +0000 (06:21 -0400)]
server : fix pooled embedding output (#14645)

2 months ago vulkan: support SET_ROWS (#14587)
Jeff Bolz [Sat, 12 Jul 2025 10:12:26 +0000 (05:12 -0500)]
vulkan: support SET_ROWS (#14587)

* vulkan: support SET_ROWS

Add variants of the copy_to_quant shader that do the SET_ROWS operation.
Change these shaders to spread the work across the workgroup.
The memory access pattern is probably not great (one thread per quant block),
but should be fine for now.

* vulkan: optimize set_rows

Larger workgroups for non-quant types.
Set "norepeat" (there is manual repeat logic).
Use fastmod.
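The "one thread per quant block" split mentioned above can be sketched as follows (illustrative Python, not the shader code; block size and row length are example values):

```python
# Sketch: spread a row of n elements across threads, one quant block
# (e.g. 32 elements) per thread, as in the copy_to_quant variants above.

def blocks_per_row(n, block_size):
    # number of threads needed for one row (ceil division)
    return (n + block_size - 1) // block_size

def thread_range(tid, block_size, n):
    # half-open element range [start, end) handled by thread tid
    start = tid * block_size
    return start, min(start + block_size, n)

n, block_size = 100, 32
nthreads = blocks_per_row(n, block_size)
ranges = [thread_range(t, block_size, n) for t in range(nthreads)]
```

Each thread touches one quant block, so neighboring threads access memory a full block apart, which is the "probably not great" access pattern the commit message acknowledges.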

2 months ago vulkan: optimizations for deepseek prompt processing (#14555)
Jeff Bolz [Sat, 12 Jul 2025 09:51:58 +0000 (04:51 -0500)]
vulkan: optimizations for deepseek prompt processing (#14555)

* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader

* vulkan: increase coopmat2 mul_mat_id tile size

* vulkan: optimize mat_mul_id row_ids search to batch loads, and port to coopmat1 path

* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)

2 months ago model : support LiquidAI LFM2 hybrid family (#14620)
Tarek Dakhran [Fri, 11 Jul 2025 18:27:01 +0000 (20:27 +0200)]
model : support LiquidAI LFM2 hybrid family (#14620)

**Important**
LFM2 was [merged](https://github.com/huggingface/transformers/pull/39340) into transformers, but has not yet been released.
To convert into gguf, install transformers from source
```shell
pip install "transformers @ git+https://github.com/huggingface/transformers.git@main"
```