git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
6 weeks ago server : Support multimodal completion and embeddings prompts in JSON format (#15108)
65a [Fri, 22 Aug 2025 08:10:14 +0000 (08:10 +0000)]
server : Support multimodal completion and embeddings prompts in JSON format (#15108)

- Use server_tokens in more places in server and util.cpp
- Convert most functions that used llama_tokens to server_tokens
- Modify input tokenizer to handle JSON objects as subprompts
- Break out MTMD prompt parsing into utility function
- Support JSON objects with multimodal_data arrays for MTMD prompts along with other existing types
- Add capability to model endpoint to indicate if client can send multimodal data
- Add tests.

6 weeks ago readme : model : mtdm : lfm2 improvements (#15476)
Tarek Dakhran [Fri, 22 Aug 2025 07:29:08 +0000 (09:29 +0200)]
readme : model : mtdm : lfm2 improvements (#15476)

* Support untied embeddings

* Increase number of image tokens to 1024

* Add LFM2-VL to readme

* Actually use untied embeddings

6 weeks ago CANN: Optimize RMS_NORM using cache (#15419)
Chenguang Li [Fri, 22 Aug 2025 06:12:07 +0000 (14:12 +0800)]
CANN: Optimize RMS_NORM using cache (#15419)

* [CANN] Optimize RMS_NORM using cache

Signed-off-by: noemotiovon <redacted>
* fix typo

Signed-off-by: noemotiovon <redacted>
* fix review comment

Signed-off-by: noemotiovon <redacted>
* codestyle adjustment

Signed-off-by: noemotiovon <redacted>
---------

Signed-off-by: noemotiovon <redacted>
6 weeks ago sched : fix possible use of wrong ids tensor when offloading moe prompt processing (#15488)
Diego Devesa [Thu, 21 Aug 2025 21:09:32 +0000 (14:09 -0700)]
sched : fix possible use of wrong ids tensor when offloading moe prompt processing (#15488)

6 weeks ago llama : remove deprecated llama_kv_self API (#15472)
Georgi Gerganov [Thu, 21 Aug 2025 16:13:45 +0000 (19:13 +0300)]
llama : remove deprecated llama_kv_self API (#15472)

ggml-ci

6 weeks ago graph : remove build_attn_with_sinks overload (#15469)
Georgi Gerganov [Thu, 21 Aug 2025 15:44:45 +0000 (18:44 +0300)]
graph : remove build_attn_with_sinks overload (#15469)

ggml-ci

6 weeks ago vulkan : support conv_2d_dw with f16 weights (#15392)
Acly [Thu, 21 Aug 2025 15:01:51 +0000 (17:01 +0200)]
vulkan : support conv_2d_dw with f16 weights (#15392)

6 weeks ago vulkan: add exp operation (#15456)
Dong Won Kim [Thu, 21 Aug 2025 15:00:16 +0000 (00:00 +0900)]
vulkan: add exp operation (#15456)

Co-authored-by: aeseulgi <redacted>
6 weeks ago vulkan: Reuse conversion results in prealloc_y (#15410)
Jeff Bolz [Thu, 21 Aug 2025 14:55:00 +0000 (09:55 -0500)]
vulkan: Reuse conversion results in prealloc_y (#15410)

* vulkan: Reuse conversion results in prealloc_y

Cache the pipeline and tensor that were most recently used to fill prealloc_y,
and skip the conversion if the current pipeline/tensor match.
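The reuse logic described above can be sketched as a tiny cache that remembers which pipeline/tensor pair last filled `prealloc_y` and skips the conversion on a match (a minimal sketch with hypothetical names, not the actual Vulkan backend code):

```python
class PreallocYCache:
    """Sketch: skip re-converting into prealloc_y when the same
    pipeline and source tensor were used for the previous fill."""

    def __init__(self):
        self.last_pipeline = None  # pipeline last used to fill prealloc_y
        self.last_tensor = None    # source tensor of that conversion

    def fill(self, pipeline, tensor, convert):
        # If prealloc_y already holds this exact result, reuse it.
        if pipeline is self.last_pipeline and tensor is self.last_tensor:
            return False  # conversion skipped
        convert(pipeline, tensor)
        self.last_pipeline = pipeline
        self.last_tensor = tensor
        return True  # conversion ran
```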

* don't use shared pointer for prealloc_y_last_pipeline_used

6 weeks ago examples : fix some typos in examples/model-conversion/README.md (#15477)
Jie Fu (傅杰) [Thu, 21 Aug 2025 14:53:13 +0000 (22:53 +0800)]
examples : fix some typos in examples/model-conversion/README.md (#15477)

Signed-off-by: Jie Fu <redacted>
6 weeks ago kv-cache : drop the "unified" prefix (#15467)
Georgi Gerganov [Thu, 21 Aug 2025 14:00:33 +0000 (17:00 +0300)]
kv-cache : drop the "unified" prefix (#15467)

* kv-cache : drop the "unified" prefix

ggml-ci

* cont : fix comment [no ci]

6 weeks ago examples : install torch-cpu for model conversion tool/example (#15475)
Jie Fu (傅杰) [Thu, 21 Aug 2025 13:42:34 +0000 (21:42 +0800)]
examples : install torch-cpu for model conversion tool/example (#15475)

Signed-off-by: Jie Fu <redacted>
6 weeks ago ci : enable RVV1.0 native build (#15386)
Ali Tariq [Thu, 21 Aug 2025 12:52:16 +0000 (17:52 +0500)]
ci : enable RVV1.0 native build (#15386)

* Changed the CI file to hw

* Changed the CI file to hw

* Added to sudoers for apt

* Removed the clone command and used checkout

* Added libcurl

* Added gcc-14

* Checking gcc --version

* added gcc-14 symlink

* added CC and C++ variables

* Added the gguf weight

* Changed the weights path

* Added system specification

* Removed white spaces

* ci: Replace Jenkins riscv native build Cloud-V pipeline with GitHub Actions workflow

Removed the legacy .devops/cloud-v-pipeline Jenkins CI configuration and introduced .github/workflows/build-riscv-native.yml for native RISC-V builds using GitHub Actions.

* removed trailing whitespaces

* Added the trigger at PR creation

* Corrected OS name

* Added ccache as setup package

* Added ccache for self-hosted runner

* Added directory for ccache size storage

Co-authored-by: Sigbjørn Skjæret <redacted>
* Changed the build command and added ccache debug log

* Added the base dir for the ccache

* Re-trigger CI

* Cleanup and refactored ccache steps

* Cleanup and refactored ccache steps

---------

Co-authored-by: Akif Ejaz <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago ci : continue file download with wget (#15471)
Georgi Gerganov [Thu, 21 Aug 2025 10:42:55 +0000 (13:42 +0300)]
ci : continue file download with wget (#15471)

ggml-ci

6 weeks ago examples : add model conversion tool/example (#15455)
Daniel Bevenius [Thu, 21 Aug 2025 10:16:54 +0000 (12:16 +0200)]
examples : add model conversion tool/example (#15455)

* examples : add model conversion tool/example

This commit adds an "example/tool" that is intended to help in the
process of converting models to GGUF. Currently it supports normal
causal models and embedding models. The readme contains instructions and
commands to guide through the process.

The motivation for this is to have a structured and repeatable process for
model conversions, and hopefully to improve it over time to make the
process easier and more reliable. We have started to use this for new
model conversions internally and will continue doing so, improving it
as we go along. Perhaps with time this should be placed in a different
directory than the examples directory, but for now it seems like a good
place to keep it while we are still developing it.

* squash! examples : add model conversion tool/example

Remove dependency on scikit-learn in model conversion example.

* squash! examples : add model conversion tool/example

Update transformer dep to use non-dev version. And also import
`AutoModelForCausalLM` instead of `AutoModel` to ensure compatibility
with the latest version.

* squash! examples : add model conversion tool/example

Remove the logits requirements file from the all requirements file.

6 weeks ago ci : fix -Werror=return-type in clip.cpp so ci/run.sh can run without issue (#15221)
Michael Giba [Thu, 21 Aug 2025 10:06:46 +0000 (05:06 -0500)]
ci : fix -Werror=return-type in clip.cpp so ci/run.sh can run without issue (#15221)

* Fix -Werror=return-type so ci/run.sh can run

* Update tools/mtmd/clip.cpp

Co-authored-by: Diego Devesa <redacted>
* Remove false now that we have abort

---------

Co-authored-by: Diego Devesa <redacted>
6 weeks ago ci : add copilot-instructions.md (#15286)
Copilot [Thu, 21 Aug 2025 09:47:52 +0000 (11:47 +0200)]
ci : add copilot-instructions.md (#15286)

* Initial plan

* Initialize copilot instructions exploration

* Add comprehensive .github/copilot-instructions.md file

* Update Python environment and tools directory documentation

- Add instructions for using .venv Python environment
- Include flake8 and pyright linting tools from virtual environment
- Add tools/ as core directory in project layout
- Reference existing configuration files (.flake8, pyrightconfig.json)

* add more python dependencies to .venv

* Update copilot instructions: add backend hardware note and server testing

* Apply suggestions from code review

* Apply suggestions from code review

* Replace clang-format with git clang-format to format only changed code

* Minor formatting improvements: remove extra blank line and add trailing newline

* try installing git-clang-format

* try just clang-format

* Remove --binary flag from git clang-format and add git-clang-format installation to CI

* download 18.x release

* typo--

* remove --binary flag

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago convert : make Mistral community chat templates optional via parameter (#15420)
Julien Denize [Thu, 21 Aug 2025 09:19:50 +0000 (11:19 +0200)]
convert : make Mistral community chat templates optional via parameter (#15420)

* Make Mistral community chat templates optional

* Change the flag arg to disable instead of enable community chat templates

* Improve error message

* Improve help message

* Tone down the logger messages

6 weeks ago common : fix incorrect print of non-ascii characters in the logging (#15466)
Jie Fu (傅杰) [Thu, 21 Aug 2025 08:54:34 +0000 (16:54 +0800)]
common : fix incorrect print of non-ascii characters in the logging (#15466)

Signed-off-by: Jie Fu <redacted>
6 weeks ago ggml : fix condition of im2col on Metal backend (#15460)
Xuan-Son Nguyen [Thu, 21 Aug 2025 05:32:26 +0000 (07:32 +0200)]
ggml : fix condition of im2col on Metal backend (#15460)

6 weeks ago server : fix webui (#15462)
stduhpf [Thu, 21 Aug 2025 05:19:22 +0000 (07:19 +0200)]
server : fix webui (#15462)

* Fix webui crash after streaming

* build webui

6 weeks ago examples : remove references to `make` in examples [no ci] (#15457)
Daniel Bevenius [Thu, 21 Aug 2025 04:12:28 +0000 (06:12 +0200)]
examples : remove references to `make` in examples [no ci] (#15457)

This commit removes references to `make` in the examples, as the build
system has been updated to use CMake directly and using `make` will now
generate an error since Commit 37f10f955f70e0158d50343d0b9a3f92d194daae
("make : remove make in favor of CMake (#15449)").

6 weeks ago musa: add GGML_UNUSED_VARS (#15446)
R0CKSTAR [Thu, 21 Aug 2025 03:06:05 +0000 (11:06 +0800)]
musa: add GGML_UNUSED_VARS (#15446)

Signed-off-by: Xiaodong Ye <redacted>
6 weeks ago sched : copy only the used experts when offloading prompt processing (#15346)
Diego Devesa [Wed, 20 Aug 2025 23:35:28 +0000 (16:35 -0700)]
sched : copy only the used experts when offloading prompt processing (#15346)

6 weeks ago server: fix OpenAI API compatibility for usage statistics in chat streams (#15444)
teo [Wed, 20 Aug 2025 22:10:08 +0000 (07:10 +0900)]
server: fix OpenAI API compatibility for usage statistics in chat streams (#15444)

6 weeks ago CUDA: refactor FA support/selection code (#15454)
Johannes Gäßler [Wed, 20 Aug 2025 21:14:14 +0000 (23:14 +0200)]
CUDA: refactor FA support/selection code (#15454)

6 weeks ago CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)
Johannes Gäßler [Wed, 20 Aug 2025 14:58:49 +0000 (16:58 +0200)]
CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)

6 weeks ago vulkan: shorten pipeline name strings (#15431)
Jeff Bolz [Wed, 20 Aug 2025 14:33:14 +0000 (09:33 -0500)]
vulkan: shorten pipeline name strings (#15431)

These detailed strings were causing increased build time on gcc.

6 weeks ago chat: handle gpt-oss return/end token inconsistency (#15421)
Daniel Bevenius [Wed, 20 Aug 2025 12:26:01 +0000 (14:26 +0200)]
chat: handle gpt-oss return/end token inconsistency (#15421)

This commit addresses an inconsistency during inference by adding a new
member to the `templates_params` struct to indicate whether the chat is
in inference mode. This allows the gpt-oss specific function
`common_chat_params_init_gpt_oss` to check this flag and the
`add_generation_prompt` flag to determine if it should replace the
`<|return|>` token with the `<|end|>` token in the prompt.

The motivation for this change is to ensure that the formatted prompt of
past messages in `common_chat_format_single` matches the output of the
formatted new message. The issue is that the gpt-oss template returns
different end tags: `<|return|>` when `add_generation_prompt` is false,
and `<|end|>` when `add_generation_prompt` is true. This causes the
substring function to start at an incorrect position, resulting in
tokenization starting with 'tart|>' instead of '<|start|>'.

Resolves: https://github.com/ggml-org/llama.cpp/issues/15417
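The substitution described above can be sketched as follows (a minimal illustration with a hypothetical helper name, not the actual llama.cpp API): when formatting past messages in inference mode without a generation prompt, normalize the template's trailing `<|return|>` to `<|end|>` so the end tags are consistent.

```python
def normalize_gpt_oss_prompt(formatted: str,
                             is_inference: bool,
                             add_generation_prompt: bool) -> str:
    """Sketch: make the gpt-oss end tag consistent by replacing a
    trailing <|return|> with <|end|> during inference formatting."""
    end_tag = "<|return|>"
    if is_inference and not add_generation_prompt and formatted.endswith(end_tag):
        return formatted[: -len(end_tag)] + "<|end|>"
    return formatted
```

With consistent end tags, the substring computed in `common_chat_format_single` starts at the right position, so tokenization begins with `<|start|>` as expected.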

6 weeks ago common : fix context shift help message (#15448)
Jie Fu (傅杰) [Wed, 20 Aug 2025 10:33:30 +0000 (18:33 +0800)]
common : fix context shift help message (#15448)

Signed-off-by: Jie Fu <redacted>
6 weeks ago cmake : fix target include directories (#15450)
xiaobing318 [Wed, 20 Aug 2025 10:32:05 +0000 (18:32 +0800)]
cmake : fix target include directories (#15450)

* Update docker.yml

Modify docker.yml so that this workflow no longer runs on a schedule; it can still be started manually when needed.

* feat: Modify the header file include path

1. There's no llava directory in the tools directory.
2. Because the command `target_include_directories(mtmd PUBLIC .)` is used in the `mtmd` CMakeLists.txt file, other targets that link against `mtmd` automatically get the `mtmd` directory as a header search path. Therefore, you can remove `target_include_directories(${TARGET} PRIVATE ../llava)`, or use `target_include_directories(${TARGET} PRIVATE ../mtmd)` to explicitly require the `llama-server` target to use header files from `mtmd`.

* Restore the docker.yml file

6 weeks ago make : remove make in favor of CMake (#15449)
Daniel Bevenius [Wed, 20 Aug 2025 10:31:16 +0000 (12:31 +0200)]
make : remove make in favor of CMake (#15449)

This commit removes the content from the Makefile and updates the
deprecation message to state that `make` has been replaced by CMake.

The message when `make` is invoked will now be the following:
```console
$ make
Makefile:6: *** Build system changed:
 The Makefile build has been replaced by CMake.

 For build instructions see:
 https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md

.  Stop.
```

The motivation for this is that many, if not all, targets now fail to build
after changes to the build system, and `make` has also been deprecated for
some time.

6 weeks ago lookahead : add sample command to readme (#15447)
Georgi Gerganov [Wed, 20 Aug 2025 10:30:46 +0000 (13:30 +0300)]
lookahead : add sample command to readme (#15447)

* lookahead : add sample command to readme

* cont : build-agnostic command

6 weeks ago musa: fix build warnings (#15258)
R0CKSTAR [Wed, 20 Aug 2025 02:17:37 +0000 (10:17 +0800)]
musa: fix build warnings (#15258)

* musa: fix build warnings

Signed-off-by: Xiaodong Ye <redacted>
* fix warning: comparison of integers of different signs: 'const int' and 'unsigned int' [-Wsign-compare]

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
6 weeks ago opencl: mark `argsort` unsupported if cols exceed workgroup limit (#15375)
lhez [Tue, 19 Aug 2025 18:25:51 +0000 (02:25 +0800)]
opencl: mark `argsort` unsupported if cols exceed workgroup limit (#15375)

6 weeks ago model : add gpt-oss type strings (#15424)
Georgi Gerganov [Tue, 19 Aug 2025 16:58:28 +0000 (19:58 +0300)]
model : add gpt-oss type strings (#15424)

6 weeks ago common : Add top-nsigma sampler to help globally (#15428)
Gian-Carlo Pascutto [Tue, 19 Aug 2025 16:58:14 +0000 (18:58 +0200)]
common : Add top-nsigma sampler to help globally (#15428)

Fixes #15423.

6 weeks ago server : disable context shift by default (#15416)
Georgi Gerganov [Tue, 19 Aug 2025 13:46:37 +0000 (16:46 +0300)]
server : disable context shift by default (#15416)

* server : disable context shift by default

ggml-ci

* server : make scope of test parameters local

6 weeks ago CANN: optimize rope operator (#15335)
SHUAI YANG [Tue, 19 Aug 2025 13:28:22 +0000 (21:28 +0800)]
CANN: optimize rope operator (#15335)

* optimize rope ops

* amendment

* delete trailing whitespace

* change the variable name

6 weeks ago musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (#15413)
R0CKSTAR [Tue, 19 Aug 2025 10:33:47 +0000 (18:33 +0800)]
musa: handle __hgt2_mask, available starting from MUSA SDK rc4.3.0 (#15413)

Signed-off-by: Xiaodong Ye <redacted>
6 weeks ago ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (#15385)
Marvin Gießing [Tue, 19 Aug 2025 08:54:31 +0000 (10:54 +0200)]
ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (#15385)

* Added VSX intrinsics for Power9+ systems

Signed-off-by: mgiessing <redacted>
* Manual unrolling for minor perf improvement

Signed-off-by: mgiessing <redacted>
* Update ggml/src/ggml-cpu/arch/powerpc/quants.c

Co-authored-by: Georgi Gerganov <redacted>
---------

Signed-off-by: mgiessing <redacted>
Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago chat : clarify the meaning of reasoning_format (#15408)
Xuan-Son Nguyen [Tue, 19 Aug 2025 08:29:36 +0000 (10:29 +0200)]
chat : clarify the meaning of reasoning_format (#15408)

* chat : clarify the meaning of reasoning_format

* add link to this PR

6 weeks ago server : remove swa_full warning (#15399) upstream/latest
Georgi Gerganov [Tue, 19 Aug 2025 05:45:26 +0000 (08:45 +0300)]
server : remove swa_full warning (#15399)

6 weeks ago batched-bench : use rand tokens (#15398)
Georgi Gerganov [Tue, 19 Aug 2025 05:45:12 +0000 (08:45 +0300)]
batched-bench : use rand tokens (#15398)

6 weeks ago mtmd : clean up clip_n_output_tokens (#15391) upstream/0.0.6199
Xuan-Son Nguyen [Mon, 18 Aug 2025 20:53:52 +0000 (22:53 +0200)]
mtmd : clean up clip_n_output_tokens (#15391)

6 weeks ago codeowners : remove mmv.*
Georgi Gerganov [Mon, 18 Aug 2025 19:02:50 +0000 (22:02 +0300)]
codeowners : remove mmv.*

6 weeks ago sync : ggml
Georgi Gerganov [Mon, 18 Aug 2025 19:02:11 +0000 (22:02 +0300)]
sync : ggml

6 weeks ago scripts : update sync scripts
Georgi Gerganov [Mon, 18 Aug 2025 17:35:47 +0000 (20:35 +0300)]
scripts : update sync scripts

6 weeks ago llama : merge conts and reshapes and remove unnecessary cont (#15380)
Sigbjørn Skjæret [Mon, 18 Aug 2025 17:30:17 +0000 (19:30 +0200)]
llama : merge conts and reshapes and remove unnecessary cont (#15380)

* remove unnecessary conts and merge reshapes

* restore necessary conts

* merge more conts and reshapes

* merge even more conts and reshapes

6 weeks ago readme : update hot topics (#15397)
Georgi Gerganov [Mon, 18 Aug 2025 15:11:44 +0000 (18:11 +0300)]
readme : update hot topics (#15397)

6 weeks ago server : fix incoming tasks not process in order (#15395)
davidef [Mon, 18 Aug 2025 14:51:42 +0000 (16:51 +0200)]
server : fix incoming tasks not process in order (#15395)

6 weeks ago Fix broken build: require updated pip to support --break-system-packages (#15357)
Dobri Danchev [Mon, 18 Aug 2025 10:50:48 +0000 (05:50 -0500)]
Fix broken build: require updated pip to support --break-system-packages (#15357)

* Revert "devops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on Ubuntu 24.04 (#15005)"

This reverts commit e4e915912cfd2ee15c5a4a0074813232134892f6.

* devops: Allow pip to modify externally-managed python environment (system installation)

- Updated pip install commands to include the --break-system-packages
  flag, ensuring compatibility when working with system-managed Python
  environments (PEP 668).

- Note: The --break-system-packages option was introduced in 2023.
  Ensure pip is updated to a recent version before using this flag.

fixes [#15004](https://github.com/danchev/llama.cpp/issues/15004)

6 weeks ago ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379)
compilade [Mon, 18 Aug 2025 07:23:56 +0000 (03:23 -0400)]
ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379)

* ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors

* ggml-quants : avoid division by zero in make_q3_quants

6 weeks ago vulkan: disable spirv-opt for bfloat16 shaders (#15352)
Jeff Bolz [Mon, 18 Aug 2025 05:56:29 +0000 (00:56 -0500)]
vulkan: disable spirv-opt for bfloat16 shaders (#15352)

6 weeks ago server : export max observed n_past value (#15361)
Oleksandr Kuvshynov [Sun, 17 Aug 2025 22:28:58 +0000 (18:28 -0400)]
server : export max observed n_past value (#15361)

Add tracking for high-watermark cache usage and make it available in the /metrics endpoint.

Use-case: tracking the largest needed cache usage under a realistic workload
to better understand memory requirements and be able to adjust
cache size/quantization for the model/cache accordingly.
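The high-watermark tracking described above can be sketched as follows (a minimal illustration; the metric name and helper are hypothetical, not the server's actual identifiers):

```python
class NPastWatermark:
    """Sketch: track the largest n_past value observed across requests
    and expose it in Prometheus text format for a /metrics endpoint."""

    def __init__(self):
        self.n_past_max = 0

    def observe(self, n_past: int) -> None:
        # Keep only the high watermark, never decrease it.
        if n_past > self.n_past_max:
            self.n_past_max = n_past

    def metrics_line(self) -> str:
        # One exposition line, e.g. scraped by Prometheus.
        return f"llamacpp:n_past_max {self.n_past_max}"
```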

6 weeks ago vulkan: Use larger workgroups for mul_mat_vec when M is small (#15355)
Jeff Bolz [Sun, 17 Aug 2025 16:08:57 +0000 (11:08 -0500)]
vulkan: Use larger workgroups for mul_mat_vec when M is small (#15355)

* vulkan: Use larger workgroups for mul_mat_vec when M is small

Also use subgroup instructions for (part of) the reduction when supported.
Without this, the more expensive reductions would eat into the benefits of
the larger workgroups.

* update heuristic for amd/intel

Co-authored-by: 0cc4m <redacted>
---------

Co-authored-by: 0cc4m <redacted>
6 weeks ago vulkan: support sqrt (#15370)
Dong Won Kim [Sun, 17 Aug 2025 14:03:09 +0000 (23:03 +0900)]
vulkan: support sqrt (#15370)

6 weeks ago convert : force patch_embd weights to F16 or F32 to avoid broken GGUFs (#15367)
Sigbjørn Skjæret [Sun, 17 Aug 2025 12:47:42 +0000 (14:47 +0200)]
convert : force patch_embd weights to F16 or F32 to avoid broken GGUFs (#15367)

* force patch_embd weights to f32

* use MmprojModel base tensor_force_quant instead

6 weeks ago ci : fix hang in windows-hip build/release (#15365)
Sigbjørn Skjæret [Sun, 17 Aug 2025 11:30:23 +0000 (13:30 +0200)]
ci : fix hang in windows-hip build/release (#15365)

* fix hang in windows-latest-cmake-hip

* apply fix to release as well

6 weeks ago vulkan: Optimize argsort (#15354)
Jeff Bolz [Sun, 17 Aug 2025 08:41:45 +0000 (03:41 -0500)]
vulkan: Optimize argsort (#15354)

- Launch an appropriate number of invocations (next larger power of two).
32 invocations is common and the barrier is much cheaper there.
- Specialize for "needs bounds checking" vs not.
- Make the code less branchy and [[unroll]] the loops. In the final code,
I see no branches inside the main loop (only predicated stores) when
needs_bounds_check is false.
- Always sort ascending, then apply the ascending vs descending option when
doing the final stores to memory.
- Copy the values into shared memory, which makes them slightly cheaper to access.
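The "next larger power of two" launch count from the first bullet can be computed as a small helper (a sketch of the arithmetic, not the shader dispatch code):

```python
def next_power_of_two(n: int) -> int:
    """Smallest power of two >= n, e.g. the number of invocations to
    launch for sorting a column of n elements."""
    assert n >= 1
    # (n - 1).bit_length() is the number of bits needed for n - 1,
    # so shifting 1 by it yields the next power of two at or above n.
    return 1 << (n - 1).bit_length()
```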

6 weeks ago model : support vision LiquidAI LFM2-VL family (#15347)
Tarek Dakhran [Sat, 16 Aug 2025 21:33:54 +0000 (23:33 +0200)]
model : support vision LiquidAI LFM2-VL family (#15347)

* wip lfm2 vision model

* Fix conv weight

* Implement dynamic resolution

* Fix cuda

* support LFM2-VL-450M

* happy CI

* Remove extra `ggml_conv` and put others into the right place

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago vulkan: fuse adds (#15252)
Jeff Bolz [Sat, 16 Aug 2025 16:48:22 +0000 (11:48 -0500)]
vulkan: fuse adds (#15252)

* vulkan: fuse adds

Fuse adds that have the same shape, which are common in MoE models.
It will currently fuse up to 6 adds, because we assume no more than
8 descriptors per dispatch. But this could be changed.
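The grouping rule described above can be sketched as batching consecutive same-shape adds, up to the fusion limit (a minimal sketch; the limit of 6 assumes at most 8 descriptors per dispatch, as stated above):

```python
def fuse_adds(shapes, max_fused=6):
    """Sketch: group a sequence of add ops (represented by their shapes)
    into fused dispatches of consecutive, identically shaped adds."""
    groups = []
    for shape in shapes:
        # Extend the current group only if the shape matches and the
        # fusion limit has not been reached; otherwise start a new group.
        if groups and groups[-1][0] == shape and len(groups[-1]) < max_fused:
            groups[-1].append(shape)
        else:
            groups.append([shape])
    return groups
```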

* check runtimeDescriptorArray feature

* disable multi_add for Intel due to likely driver bug

6 weeks ago vulkan: Support mul_mat_id with f32 accumulators (#15337)
Jeff Bolz [Sat, 16 Aug 2025 09:18:31 +0000 (04:18 -0500)]
vulkan: Support mul_mat_id with f32 accumulators (#15337)

* vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id

* vulkan: Support mul_mat_id with f32 accumulators, but they are not hooked up

- There's no explicit way to request f32 precision for mul_mat_id, but there
probably should be, and this gets the code in place for that.
- A couple fixes to check_results.
- Remove casts to fp16 in coopmat1 FA shader (found by inspection).

6 weeks ago vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (#15334)
Jeff Bolz [Sat, 16 Aug 2025 08:58:38 +0000 (03:58 -0500)]
vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (#15334)

6 weeks ago OpenCL: add initial FA support (#14987)
rmatif [Sat, 16 Aug 2025 08:05:55 +0000 (10:05 +0200)]
OpenCL: add initial FA support (#14987)

* add F16/F16 fa support

* fix kernel init

* use mad instead of fma

* use inline function

* mark FA with sinks as unsupported for now

* add pragma unroll to loops

7 weeks ago common : fix double bos, use common_chat_templates for add_bos and add_eos (#15326)
Daniel Bevenius [Fri, 15 Aug 2025 17:50:52 +0000 (19:50 +0200)]
common : fix double bos, use common_chat_templates for add_bos and add_eos (#15326)

This commit updates common_chat_templates_apply_jinja to use the
add_bos and add_eos parameters from the chat template instead of
the inputs.

The motivation for this is that if the `add_bos` and `add_eos`
from the input parameters are used, there can be a mismatch between
the model and the chat template, which can prevent the removal of
duplicate BOS/EOS tokens in chat.cpp `apply` from happening, leading
to two BOS tokens being added to the prompt.

7 weeks ago opencl: add initial mxfp4 support via mv (#15270)
lhez [Fri, 15 Aug 2025 16:52:14 +0000 (00:52 +0800)]
opencl: add initial mxfp4 support via mv (#15270)

* opencl: add reference `mul_mv_mxfp4_f32`

* opencl: add reference `mul_mv_id` for mxfp4

* Q4_0 transpose fix for Adreno

---------

Co-authored-by: shawngu-quic <redacted>
7 weeks ago vulkan : fix out-of-bounds access in argmax kernel (#15342)
Georgi Gerganov [Fri, 15 Aug 2025 14:16:36 +0000 (17:16 +0300)]
vulkan : fix out-of-bounds access in argmax kernel (#15342)

ggml-ci

7 weeks ago vulkan : fix compile warnings on macos (#15340)
Georgi Gerganov [Fri, 15 Aug 2025 13:28:28 +0000 (16:28 +0300)]
vulkan : fix compile warnings on macos (#15340)

ggml-ci

7 weeks ago ggml: initial IBM zDNN backend (#14975)
Aaron Teo [Fri, 15 Aug 2025 13:11:22 +0000 (21:11 +0800)]
ggml: initial IBM zDNN backend (#14975)

* ggml-zdnn: initial backend impl

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: temp change z17 to arch15

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix build bugs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tensor->extra logging check

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add layout name mapping, ztensor information

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: separate logging into its own line

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add shape comparison

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add ggml_tensor shape log

Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix incorrect shape logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add output buffer check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: run compute and store into tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more loggers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update set_tensor logging to check only for matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: last working matmul version

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add comments to prevent accidentally deleting lines

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: support op out_prod

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update op out_prod to use tensor->extra

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite the backend implementation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix new impl

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler warnings and bugfixes

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: test ztensor finding in init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: implement at least 1 op to test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: assign tensor->extra to buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add check for view tensors to prevent init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rework init_tensor to create new buffers

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to std vector instead of array

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch buffers back and set to arbitrary number

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update supports_op matmul matrix

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix incorrect ztensor shape, reduce memory padding

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler error missing type

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing data transform call

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tighten memory usage, change string allocation

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias ztensor and data free

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias data transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more debug info for extra buffer transform

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logger to check if mat mul ops go through set_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: activate bias transform in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move weights transform into mulmat

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more safeguards in matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sequencing of transforms

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix transform ztensor vs origtensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: figure out why sigtrap is happening

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move everything back to local declaration

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move bias data to local also

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring back working matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite into mre

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import in header

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to fix sigsegv

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix invalid ztensor buffer release

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logging to debug free buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: remove free_buffer debug info

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add parmblkformat detections

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add nnpa installed detection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add zdnn_init call for static libs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing invalid buffer

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to using deque to fix pointer deref problem

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add weights logging to check

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to use unique ptr

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add tensor to pre_tfm_desc logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add inputs logging

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable op_none initialisation for testing

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing return from init_tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: load ztensors in cgraph exec

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: work on moving output ztensor as well

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable logging and breakpoints for full test

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at manually changing the layout

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at using default nwhc format instead

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable global load ztensor for now

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix erroneous output load tensor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add guards to prevent loading ztensor if transformed

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring load ztensor back to init routine

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix ztensor deallocation abort

stabilise ggml <-> zdnn api

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up matmul selection

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up project structure

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update documentation, prepare for upstream

Signed-off-by: Aaron Teo <redacted>
* chore: add codeowners

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable batched matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing tensor views during matmul

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: deny all view tensors directly

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix pr comments

Signed-off-by: Aaron Teo <redacted>
* docs: update ops docs for zdnn

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: redo test-backend-ops for ops.md

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix typo in build-s390x.md

Signed-off-by: Aaron Teo <redacted>
* codeowners: remove taronaeo for now

Signed-off-by: Aaron Teo <redacted>
* Revert "codeowners: remove taronaeo for now"

This reverts commit 411ea4ed78d08778967bd0bd33a6538cfcbe082f.

* ggml-zdnn: remove unused ggml_zdnn macro

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
7 weeks agoci : fix ios-xcode-build (#15324)
Sigbjørn Skjæret [Fri, 15 Aug 2025 12:02:39 +0000 (14:02 +0200)]
ci : fix ios-xcode-build (#15324)

* fix ios-xcode-build

* use xcode-select with fixed version

* switch to macos-15 to get xcode 16.4

7 weeks agoci : move ccache action to ggml-org fork (#15328)
Diego Devesa [Fri, 15 Aug 2025 10:27:02 +0000 (03:27 -0700)]
ci : move ccache action to ggml-org fork (#15328)

7 weeks agotest-opt: fix backend support check (#15317)
Johannes Gäßler [Fri, 15 Aug 2025 09:23:17 +0000 (11:23 +0200)]
test-opt: fix backend support check (#15317)

* test-opt: fix backend support check

* Update tests/test-opt.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks agoCUDA: fix negative KV_max values in FA (#15321)
Johannes Gäßler [Thu, 14 Aug 2025 21:21:24 +0000 (23:21 +0200)]
CUDA: fix negative KV_max values in FA (#15321)

7 weeks agoeval-callback : stop on first NaN (#15320)
Georgi Gerganov [Thu, 14 Aug 2025 19:10:51 +0000 (22:10 +0300)]
eval-callback : stop on first NaN (#15320)

* eval-callback : stop on first NaN

* cont : log error

7 weeks agochat : include kwargs in template example (#15309)
Diego Devesa [Thu, 14 Aug 2025 17:28:29 +0000 (10:28 -0700)]
chat : include kwargs in template example (#15309)

7 weeks agollama : add 18-layer model type for Gemma 3-270m (#15319)
Daniel Bevenius [Thu, 14 Aug 2025 15:56:26 +0000 (17:56 +0200)]
llama : add 18-layer model type for Gemma 3-270m (#15319)

This commit adds support for the 18-layer model type in the Gemma3
series, which is the size of the Gemma3-270m model.

The motivation for this commit is that this was the only change required
for Gemma3-270m to be converted to GGUF format and used with llama.cpp.

Once the model has been converted and uploaded to Huggingface it can be
used like this:
```console
$ ./build/bin/llama-cli -hf ggml-org/gemma-3-270m-GGUF:Q8_0
```

7 weeks agodevops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on Ubuntu 24.04...
simevo [Thu, 14 Aug 2025 15:45:27 +0000 (17:45 +0200)]
devops : fix compile bug when the BASE_CUDA_DEV_CONTAINER is based on Ubuntu 24.04 (#15005)

fixes #15004

Co-authored-by: Paolo Greppi <redacted>
7 weeks agoHIP: Cleanup hipification header (#15285)
uvos [Thu, 14 Aug 2025 14:23:56 +0000 (16:23 +0200)]
HIP: Cleanup hipification header (#15285)

Add explicit conversion operator to support older versions of ROCm
Switch over to hip_bf16 from the legacy hip_bfloat16
Simplify the RDNA3 define
Lower the switchover to the new hipBLAS API to ROCm 6.5, as this version is used for the ROCm 7.0 previews

---------

Co-authored-by: Johannes Gäßler <redacted>
7 weeks agogpt-oss: implement harmony parsing (#15181) upstream/0.0.6164
Aldehir Rojas [Thu, 14 Aug 2025 14:23:11 +0000 (09:23 -0500)]
gpt-oss: implement harmony parsing (#15181)

* model : add harmony parser for gpt-oss

* gpt-oss : fix grammar trigger from causing empty stack

* gpt-oss: tweak the grammar trigger again

* gpt-oss : add support for recipient in role header

* gpt-oss : fix ungrouped tool calls in grammar

* gpt-oss : loosen function name matching during parse

* gpt-oss : clean up workarounds

* gpt-oss : add template tests

* gpt-oss : simulate thinking and tool call tags

* gpt-oss : undo think tags when reasoning_format is none

* gpt-oss : set special tokens back to user defined

* gpt-oss : update openai-gpt-oss template

* server : filter out harmony thought messages

* gpt-oss : simplify parsing

7 weeks agodocker : Enable GGML_CPU_ALL_VARIANTS for ARM (#15267)
Christian Kastner [Thu, 14 Aug 2025 14:22:58 +0000 (16:22 +0200)]
docker : Enable GGML_CPU_ALL_VARIANTS for ARM (#15267)

7 weeks agoreadme : update hot topics (#15315)
Georgi Gerganov [Thu, 14 Aug 2025 14:16:03 +0000 (17:16 +0300)]
readme : update hot topics (#15315)

7 weeks agovulkan: perf_logger improvements (#15246)
Jeff Bolz [Thu, 14 Aug 2025 13:38:10 +0000 (08:38 -0500)]
vulkan: perf_logger improvements (#15246)

* vulkan: perf_logger improvements

- Account for batch dimension in flops calculation.
- Fix how "_VEC" is detected for mat_mul_id.
- Fix "n" dimension for mat_mul_id (in case of broadcasting).
- Include a->type in name.

* use <=mul_mat_vec_max_cols rather than ==1
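
As a rough illustration of the first fix (a sketch, not the actual perf_logger code; the function name is hypothetical), the FLOP count for a batched matmul scales linearly with the batch dimension:

```python
# Illustrative sketch: FLOP accounting for a batched matrix multiply.
# An (m x k) by (k x n) matmul does ~2*m*n*k FLOPs (one multiply and one
# add per inner-product term); broadcasting over a batch dimension
# multiplies this by the batch size, which the logger previously ignored.
def matmul_flops(m, n, k, batch=1):
    return 2 * m * n * k * batch

print(matmul_flops(4096, 1, 4096))            # mat-vec case (n == 1)
print(matmul_flops(4096, 512, 4096, batch=8)) # batched prompt processing
```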

7 weeks agoserver : add SWA checkpoints (#15293)
Georgi Gerganov [Thu, 14 Aug 2025 11:59:50 +0000 (14:59 +0300)]
server : add SWA checkpoints (#15293)

* server : add SWA checkpoints

ggml-ci

* cont : server clean-up

* server : handle state restore fails

* llama : add extended llama_state_seq_ API

* server : do not make checkpoints if --swa-full

ggml-ci

* llama : remove flags value for NONE

* server : configure number of SWA checkpoints with CLI arg

ggml-ci

* args : fix scope of new argument

7 weeks agosync : ggml
Georgi Gerganov [Thu, 14 Aug 2025 11:19:23 +0000 (14:19 +0300)]
sync : ggml

ggml-ci

7 weeks agoggml: fix ggml_conv_1d_dw bug (ggml/1323)
Jason Ni [Thu, 14 Aug 2025 11:17:51 +0000 (19:17 +0800)]
ggml: fix ggml_conv_1d_dw bug (ggml/1323)

* ggml: fix ggml_conv_1d_dw bug

* Fixed conv1d_dw weight tensor dimension.

7 weeks agotests : remove unused includes (ggml/0)
Georgi Gerganov [Thu, 14 Aug 2025 10:41:03 +0000 (13:41 +0300)]
tests : remove unused includes (ggml/0)

7 weeks agoperplexity : provide a helpful hint for has_cpl case in split_equal error. (#15304)
kallewoof [Thu, 14 Aug 2025 11:03:30 +0000 (20:03 +0900)]
perplexity : provide a helpful hint for has_cpl case in split_equal error. (#15304)

When running llama-perplexity on certain tasks that have coupled sequences, a cryptic error is produced that does not tell you what to do, which is to set the -kvu flag. This adds a hint about that fact.

7 weeks agocuda : fix GGML_CUDA_GRAPHS=OFF (#15300)
Sigbjørn Skjæret [Thu, 14 Aug 2025 10:22:07 +0000 (12:22 +0200)]
cuda : fix GGML_CUDA_GRAPHS=OFF (#15300)

* fix USE_CUDA_GRAPH=OFF

ggml-ci

* check capture status

* completely disable capturing check instead

7 weeks agofinetune: SGD optimizer, more CLI args (#13873)
Jonathan Graehl [Thu, 14 Aug 2025 10:03:57 +0000 (03:03 -0700)]
finetune: SGD optimizer, more CLI args (#13873)

* examples/finetune -opt SGD (stochastic gradient descent) memory opt

add the unit-tested GGML_OPT_OPTIMIZER_SGD to ggml - it avoids allocating
the m, v tensors.

support finetune.cpp arg -opt SGD (or sgd). (default adamw as before)

llama 3.2-1b-F32 result: observed 11gb gpu ram (41 sec/epoch)
when using SGD instead of 19gb (55 sec/epoch) using adamw.
(wikipedia 100 lines finetune)

(
using the same GPU memory, adamw can only reach 512 batch/context
before OOM, attaining:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00

SGD is superior, though it converges more slowly, with a max of 1728
batch/context before OOM (esp. see the better validation perf):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)

note: when finetuning long enough (or w/ enough -lr),
validation accuracy *eventually* drops ('catastrophic forgetting')

The -lr-half (halflife) option is useful for SGD to avoid oscillation or
super slow underdamped learning (it makes setting -lr more forgiving).
The terminal -lr is for now set by -lr-halvings, i.e. if you want at most
1/8 of the initial -lr, set -lr-halvings 3.

note: objective loss not directly comparable between adamw, sgd? -
check perplexity or accuracy or consider relative improvements
for convergence

new finetune args -wd 1e-9 to enable weight decay in sgd or adamw,
and max -epochs N (default 2 as before)

cache (1 - wd*alpha) in 'adamw' opt struct -
no noticeable perf benefit, disabled (still done
for new SGD though)

since opt. memory is pre-allocated, the ggml_opt_get_optimizer_params
would probably be able to change between SGD and AdamW with each epoch
but would need to use adamw for the first (unconfirmed - no cmdline arg
to set such a policy yet)

test-opt checks adamw as before and now sgd (except for a few disabled
tests for sgd only; probably just needs logging values and adding
alternate reference values);  tolerance on the 'regression'
test is broader for sgd (so we don't need many more epochs)

* Vulkan: Implement GGML_OP_OPT_STEP_SGD

* tests: Fix OPT_STEP_SGD test-backend-ops

* SGD op param store weight-decay and not 1-alpha*wd

* minor + cosmetic changes

* fix vulkan sgd

* try CI fix

---------

Co-authored-by: 0cc4m <redacted>
Co-authored-by: Johannes Gäßler <redacted>
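
The memory saving described above comes from AdamW's per-parameter moment tensors. A minimal Python sketch (not the ggml implementation; function names and hyperparameter defaults here are illustrative) of the two update rules and the -lr-halvings cap:

```python
# Plain SGD with weight decay: no optimizer state beyond the weights,
# which is why the m, v tensors need not be allocated.
def sgd_step(w, g, lr=0.01, wd=0.0):
    # w <- w*(1 - lr*wd) - lr*g
    return [wi * (1.0 - lr * wd) - lr * gi for wi, gi in zip(w, g)]

# AdamW: keeps first (m) and second (v) moment estimates per parameter,
# i.e. two extra tensors the size of the weights.
def adamw_step(w, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8, wd=0.0):
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    w = [wi * (1.0 - lr * wd)
         - lr * (mi / (1 - b1 ** t)) / ((vi / (1 - b2 ** t)) ** 0.5 + eps)
         for wi, mi, vi in zip(w, m, v)]
    return w, m, v

def terminal_lr(lr, halvings):
    # -lr-halvings N caps the decayed learning rate at lr / 2**N
    return lr * 0.5 ** halvings

w, g = [1.0, -2.0], [0.5, 0.5]
print(sgd_step(w, g))                                   # state: weights only
print(adamw_step(w, g, [0.0, 0.0], [0.0, 0.0], t=1)[0])  # state: w, m, v
print(terminal_lr(0.01, 3))                             # at most 1/8 of -lr
```

The sketch only illustrates the state-size difference; ggml's actual optimizer kernels operate on tensors in place.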
7 weeks agoperplexity: give more information about constraints on failure (#15303)
kallewoof [Thu, 14 Aug 2025 06:16:32 +0000 (15:16 +0900)]
perplexity: give more information about constraints on failure (#15303)

* perplexity: give more information about constraints on failure

This checks whether -np is insufficient vs context, and provides clues as to how much is needed for each.

* log formatting

* log error and return instead of storing max_seq_exceeded int

* check if s0 is zero for -np check

7 weeks agoHIP: bump requirement to rocm 6.1 (#15296)
uvos [Wed, 13 Aug 2025 18:44:30 +0000 (20:44 +0200)]
HIP: bump requirement to rocm 6.1 (#15296)

7 weeks agofix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)
Bas Nijholt [Wed, 13 Aug 2025 18:21:31 +0000 (11:21 -0700)]
fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)

The flake.nix included references to llama-cpp.cachix.org cache with a comment
claiming it's 'Populated by the CI in ggml-org/llama.cpp', but:

1. No visible CI workflow populates this cache
2. The cache is empty for recent builds (tested b6150, etc.)
3. This misleads users into expecting pre-built binaries that don't exist

This change removes the non-functional cache references entirely, leaving only
the working cuda-maintainers cache that actually provides CUDA dependencies.

Users can still manually add the llama-cpp cache if it becomes functional in the future.

7 weeks agoserver : enable -td and -tbd parameters (#15172)
Sigbjørn Skjæret [Wed, 13 Aug 2025 13:43:00 +0000 (15:43 +0200)]
server : enable -td and -tbd parameters (#15172)

7 weeks agoggml : update `ggml_rope_multi` (#12665)
Judd [Wed, 13 Aug 2025 10:45:15 +0000 (18:45 +0800)]
ggml : update `ggml_rope_multi` (#12665)

* update `rope_multi`:

1. add `ggml_rope_multi_inplace`;
1. use `GGML_MROPE_SECTIONS` instead of 4.

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago common : add --override-tensor-draft, --cpu-moe-draft and --n-cpu-moe-draft paramete...
Copilot [Wed, 13 Aug 2025 10:44:40 +0000 (12:44 +0200)]
 common : add --override-tensor-draft, --cpu-moe-draft and --n-cpu-moe-draft parameters (#15191)

* Checkpoint from VS Code for coding agent session

* Initial plan

* Fix typo in --override-tensor-draft flag implementation

* Add null termination for speculative tensor buffer overrides

* Apply suggestions from code review

* Apply suggestions from code review

* Extract tensor override parsing logic to common function (addresses @slaren's feedback)

* Apply suggestions from code review

* Apply suggestions

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Diego Devesa <redacted>
7 weeks agoserver : filter out harmony thought messages (#15278)
Aldehir Rojas [Wed, 13 Aug 2025 10:28:21 +0000 (05:28 -0500)]
server : filter out harmony thought messages (#15278)

7 weeks agoci : Added CI with RISC-V RVV1.0 Hardware (#14439)
Ali Tariq [Wed, 13 Aug 2025 10:14:44 +0000 (15:14 +0500)]
ci : Added CI with RISC-V RVV1.0 Hardware (#14439)

* Changed the CI file to hw

* Changed the CI file to hw

* Added to sudoers for apt

* Removed the clone command and used checkout

* Added libcurl

* Added gcc-14

* Checking gcc --version

* added gcc-14 symlink

* added CC and C++ variables

* Added the gguf weight

* Changed the weights path

* Added system specification

* Removed white spaces

* ci: Replace Jenkins riscv native build Cloud-V pipeline with GitHub Actions workflow

Removed the legacy .devops/cloud-v-pipeline Jenkins CI configuration and introduced .github/workflows/build-riscv-native.yml for native RISC-V builds using GitHub Actions.

* removed trailing whitespaces

---------

Co-authored-by: Akif Ejaz <redacted>
7 weeks agoci : add more python requirements to copilot-setup-steps (#15289)
Sigbjørn Skjæret [Wed, 13 Aug 2025 09:30:45 +0000 (11:30 +0200)]
ci : add more python requirements to copilot-setup-steps (#15289)

* ci : add flake8 and pyright to copilot-setup-steps.yml

* add tools/server/tests/requirements.txt

7 weeks agoggml : repack block_iq4_nlx8 (#14904)
Georgi Gerganov [Wed, 13 Aug 2025 08:09:39 +0000 (11:09 +0300)]
ggml : repack block_iq4_nlx8 (#14904)

ggml-ci