git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
pkg/ggml/sources/llama.cpp
5 weeks ago  grammar : support array references in json schema (#16792)
Aldehir Rojas [Tue, 28 Oct 2025 08:37:52 +0000 (03:37 -0500)]
grammar : support array references in json schema (#16792)

* grammar : support array references in json schema

* Update json-schema-to-grammar.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* grammar : improve regex when naming ref derived rules

* grammar : replace non-conformant definitions array with anyOf test case

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
5 weeks ago  CANN: Improve device ID handling and aclnnArange checks (#16752)
Chenguang Li [Tue, 28 Oct 2025 02:54:53 +0000 (10:54 +0800)]
CANN: Improve device ID handling and aclnnArange checks (#16752)

* cann: improve device ID handling and aclnnArange checks

- Stop relying on CANN's internal device ID retrieval; use a global variable instead.
- Enforce stricter dimension validation in aclnnArange for better compatibility across CANN versions.

* cann: use thread local var

5 weeks ago  CUDA: add unused vars to mmvf and mmvq (#16807)
Aman Gupta [Tue, 28 Oct 2025 02:31:21 +0000 (10:31 +0800)]
CUDA: add unused vars to mmvf and mmvq (#16807)

5 weeks ago  sycl: add SSM_CONV operation support (#16800)
tamarPal [Tue, 28 Oct 2025 01:50:33 +0000 (03:50 +0200)]
sycl: add SSM_CONV operation support (#16800)

* feat: Add SYCL backend support for SSM_CONV operator

* Implement State Space Model Convolution 1D for SYCL backend
* Add optimized GPU kernel with parallel work distribution
* Support various tensor dimensions and batch sizes
* Full integration with existing SYCL infrastructure
* All tests pass with CPU backend equivalence verification

* feat: Implement SYCL backend support for SSM_CONV operation

- Add ggml-sycl/ssm_conv.cpp and ssm_conv.hpp
- Implement SYCL kernel for state space model convolution
- Ensure numerical correctness matches CPU implementation exactly
- Add proper type checking for F32 tensors in backend support
- All test-backend-ops SSM_CONV tests pass (14490/14490)

* Perfect SSM_CONV SYCL implementation - 100% CPU parity

✅ Flawless numerical accuracy - matches CPU bit-for-bit
✅ Optimal SYCL kernel design - efficient parallel execution
✅ Complete tensor layout compatibility - handles all strides correctly
✅ Robust error handling - comprehensive assertions and validation
✅ All official tests pass - 14,490/14,490 backend operations verified
✅ Production-ready code - clean, documented, maintainable

Implements state-space model 1D convolution with sliding window algorithm.
Eliminates blocking queue.wait() for better async performance.
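The sliding-window formulation mentioned above can be sketched as follows. This is a minimal illustrative reference, not the SYCL kernel itself; the function name and layout (input holding d_conv - 1 past samples ahead of the new tokens) are assumptions for the example:

```cpp
#include <cstddef>
#include <vector>

// Per-channel 1D sliding-window convolution: each output element is a dot
// product of the kernel with a window of the input that slides by one.
std::vector<float> ssm_conv_1d(const std::vector<float> & x, // length n + d_conv - 1
                               const std::vector<float> & w, // kernel, length d_conv
                               size_t n) {                   // number of output tokens
    const size_t d_conv = w.size();
    std::vector<float> y(n, 0.0f);
    for (size_t t = 0; t < n; ++t) {
        float acc = 0.0f;
        for (size_t k = 0; k < d_conv; ++k) {
            acc += w[k] * x[t + k]; // window slides by one per output element
        }
        y[t] = acc;
    }
    return y;
}
```

In the real kernel each (channel, sequence) pair maps to parallel work items; the scalar loop above only shows the arithmetic.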

* Clean SSM_CONV code - remove all comments for production

Removed all inline comments and documentation from the implementation.
Clean, minimal code ready for production merge.

* fix: Final formatting corrections for CI compliance

- Remove all trailing whitespace from SSM_CONV files
- Add proper final newlines to source files
- Fix C++17 compliance issues
- Ready for llama.cpp CI validation

* sycl: fix trailing whitespace and minor safety casts in ssm_conv

* fix: Clean up duplicated content in ssm_conv.hpp header file

---------

Co-authored-by: tamarPal <redacted>
5 weeks ago  chat: Add LFM2 tool handling (#16763)
Yuri Khrustalev [Mon, 27 Oct 2025 22:54:01 +0000 (18:54 -0400)]
chat: Add LFM2 tool handling (#16763)

* Add LFM2 tool handling

* fmt

* Apply suggestion from @ykhrustalev

5 weeks ago  mtmd : fix idefics3 preprocessing (#16806)
Xuan-Son Nguyen [Mon, 27 Oct 2025 22:12:16 +0000 (23:12 +0100)]
mtmd : fix idefics3 preprocessing (#16806)

* mtmd : fix idefics3 preprocessing

* disable granite test

* fix test for granite

5 weeks ago  llama : disable pipeline parallelism if compute buffer allocation fails (#16748)
Diego Devesa [Mon, 27 Oct 2025 20:51:28 +0000 (13:51 -0700)]
llama : disable pipeline parallelism if compute buffer allocation fails (#16748)

5 weeks ago  ggml : fix interpolate with align-corners and ne=1 (#16700)
Acly [Mon, 27 Oct 2025 20:50:22 +0000 (21:50 +0100)]
ggml : fix interpolate with align-corners and ne=1 (#16700)

* ggml : fix interpolate with align-corners and ne=1

* avoid division by zero if one of the spatial dimensions is 1
* cpu, cuda, opencl returned correct result anyway due to clamp
* vulkan didn't clamp for align-corners so results were broken

* fix clang warning
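The division by zero arises in the align-corners coordinate mapping, src = dst * (in - 1) / (out - 1), when an output dimension is 1. A hedged sketch of the guarded computation (illustrative function, not the actual kernel code):

```cpp
#include <algorithm>

// Align-corners source coordinate for one axis. When out == 1 the usual
// denominator (out - 1) is zero, so the coordinate is pinned to 0; the final
// clamp mirrors what the CPU/CUDA paths already did implicitly.
float src_coord_align_corners(int dst, int in, int out) {
    if (out <= 1 || in <= 1) {
        return 0.0f; // degenerate axis: single sample maps to the first element
    }
    float s = (float) dst * (float) (in - 1) / (float) (out - 1);
    return std::min(std::max(s, 0.0f), (float) (in - 1));
}
```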

5 weeks ago  HIP: fix AMDGPU_TARGETS, update documentation (#16803)
Johannes Gäßler [Mon, 27 Oct 2025 20:39:49 +0000 (21:39 +0100)]
HIP: fix AMDGPU_TARGETS, update documentation (#16803)

5 weeks ago  model : add LightOnOCR-1B model (#16764)
Xuan-Son Nguyen [Mon, 27 Oct 2025 15:02:58 +0000 (16:02 +0100)]
model : add LightOnOCR-1B model (#16764)

* model : add LightOnOCR-1B model

* add test

5 weeks ago  llama: fix leaked buffers for mmap + split files (#16765)
Johannes Gäßler [Mon, 27 Oct 2025 08:17:31 +0000 (09:17 +0100)]
llama: fix leaked buffers for mmap + split files (#16765)

5 weeks ago  test-backend-ops: print failed tests at the end (#16785)
Aman Gupta [Mon, 27 Oct 2025 01:25:10 +0000 (09:25 +0800)]
test-backend-ops: print failed tests at the end (#16785)

5 weeks ago  sycl: add ROLL operation support (#16665)
tamarPal [Mon, 27 Oct 2025 01:20:24 +0000 (03:20 +0200)]
sycl: add ROLL operation support (#16665)

* sycl: add ROLL operation support

- Implement ggml_sycl_roll function for F32 tensors
- Add multi-axis roll operation with SYCL kernel
- Support all 4 tensor dimensions with proper shift normalization
- Add roll.cpp and roll.hpp to SYCL backend
- Update backend dispatch and supports_op for GGML_OP_ROLL
- Tests: 17662/17662 pass with identical CPU reference results
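The "proper shift normalization" above can be sketched like this (a hedged illustration of the idea, with assumed function names, not the SYCL kernel's code): a shift may be negative or larger than the dimension, so it is first reduced into [0, n).

```cpp
// Reduce an arbitrary per-axis shift into [0, n).
int normalize_shift(int shift, int n) {
    int s = shift % n;
    return s < 0 ? s + n : s;
}

// Source index to read for output index i along an axis of size n.
int roll_src_index(int i, int shift, int n) {
    int s = normalize_shift(shift, n);
    return (i - s + n) % n;
}
```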

* fix: remove trailing whitespace from roll.cpp

- Fix EditorConfig violations in ggml/src/ggml-sycl/roll.cpp
- Remove trailing spaces from lines 6, 11, 28, 47, 58, 60

* ci: retrigger

* sycl: remove wait() calls from ROLL operation

* fix: editorconfig — LF endings + final newline for roll.hpp

---------

Co-authored-by: tamarPal <redacted>
5 weeks ago  sycl: add REPEAT_BACK operation support (#16734)
shani-f [Mon, 27 Oct 2025 01:19:50 +0000 (03:19 +0200)]
sycl: add REPEAT_BACK operation support (#16734)

* SYCL repeat_back v1 — add core op + switch case

* Implement repeat_back SYCL operation and minor fixes

* Update ggml/src/ggml-sycl/repeat_back.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/repeat_back.hpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
5 weeks ago  CUDA: support for weight clamp in top-k norm (#16702)
Aman Gupta [Mon, 27 Oct 2025 01:06:16 +0000 (09:06 +0800)]
CUDA: support for weight clamp in top-k norm (#16702)

5 weeks ago  ggml-alloc : make gallocr prefer chunks that allow memory reuse (#16788)
Acly [Sun, 26 Oct 2025 22:19:03 +0000 (23:19 +0100)]
ggml-alloc : make gallocr prefer chunks that allow memory reuse (#16788)

5 weeks ago  cuda : use fast copy when src and dst are of different type and contiguous (#16789)
Sigbjørn Skjæret [Sun, 26 Oct 2025 20:31:41 +0000 (21:31 +0100)]
cuda : use fast copy when src and dst are of different type and contiguous (#16789)

* use fast copy when src and dst are contiguous and same shape

* use int64_t ne and ignore shape

5 weeks ago  ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support...
leejet [Sun, 26 Oct 2025 18:13:31 +0000 (02:13 +0800)]
ggml: fix cuda kernel launch configuration for k_compute_batched_ptrs to support large batch (#16744)

* fix k_compute_batched_ptrs

* add backend ops test

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* reduce the batch size

---------

Co-authored-by: Johannes Gäßler <redacted>
5 weeks ago  convert : enable expert group selection for all models with it (#16691)
Sigbjørn Skjæret [Sun, 26 Oct 2025 16:21:23 +0000 (17:21 +0100)]
convert : enable expert group selection for all models with it (#16691)

5 weeks ago  graph : add clamping to ffn_moe_weights_sum to avoid div-by-zero (#16655)
Sigbjørn Skjæret [Sun, 26 Oct 2025 16:20:32 +0000 (17:20 +0100)]
graph : add clamping to ffn_moe_weights_sum to avoid div-by-zero (#16655)

* add missing norm topk bias

* use clamping instead, update number and add comment
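The clamping idea is simple: when normalizing expert weights by their sum, keep the denominator away from zero so a fully-masked row cannot divide by zero. A minimal sketch (the epsilon and function name are illustrative, not llama.cpp's actual values):

```cpp
#include <algorithm>
#include <vector>

// Normalize MoE expert weights in place, clamping the sum to avoid
// division by zero when all weights are zero.
void normalize_moe_weights(std::vector<float> & w) {
    float sum = 0.0f;
    for (float v : w) sum += v;
    sum = std::max(sum, 1e-9f); // clamp: avoids div-by-zero on a zero row
    for (float & v : w) v /= sum;
}
```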

5 weeks ago  model : set res->t_embd in SmallThinker models (#16782)
Sigbjørn Skjæret [Sun, 26 Oct 2025 15:08:52 +0000 (16:08 +0100)]
model : set res->t_embd in SmallThinker models (#16782)

5 weeks ago  docs : add Jamba to Text-only models list (#16778)
amirai21 [Sun, 26 Oct 2025 12:01:20 +0000 (14:01 +0200)]
docs : add Jamba to Text-only models list (#16778)

5 weeks ago  CUDA: General GEMV fusion (#16715)
Aman Gupta [Sun, 26 Oct 2025 11:28:04 +0000 (19:28 +0800)]
CUDA: General GEMV fusion (#16715)

5 weeks ago  vulkan: deduplicate Microsoft Direct3D12 devices (#16689)
Gilad S. [Sun, 26 Oct 2025 04:37:38 +0000 (06:37 +0200)]
vulkan: deduplicate Microsoft Direct3D12 devices (#16689)

* fix: deduplicate and deprioritize Microsoft Direct3D12 vulkan devices from the `vulkan-dozen` driver

* style: indent

* fix: decrease priority

* fix: switch to `||`

5 weeks ago  convert : handle mmproj filename/path properly (#16760)
Galunid [Sat, 25 Oct 2025 18:41:36 +0000 (20:41 +0200)]
convert : handle mmproj filename/path properly (#16760)

* convert: handle mmproj model output filename properly

* remove redundant commits

* Add model_type to gguf utility

* Use mmproj- prefix instead of suffix

* Apply CISC suggestion

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
5 weeks ago  model : set res->t_embd in PLaMo2 models (#16766)
Shunta Saito [Sat, 25 Oct 2025 10:26:27 +0000 (19:26 +0900)]
model : set res->t_embd in PLaMo2 models (#16766)

5 weeks ago  vulkan: delete dead code (#16732)
Giuseppe Scrivano [Sat, 25 Oct 2025 08:59:54 +0000 (10:59 +0200)]
vulkan: delete dead code (#16732)

ggml_vk_create_buffer_temp is not used anywhere, and it is the only
caller for ggml_vk_pool_malloc.

Signed-off-by: Giuseppe Scrivano <redacted>
5 weeks ago  vulkan: Optimize SSM_SCAN (#16645)
Jeff Bolz [Sat, 25 Oct 2025 05:04:12 +0000 (00:04 -0500)]
vulkan: Optimize SSM_SCAN (#16645)

5 weeks ago  convert : avoid dequantizing mxfp4 for GPT-OSS (#16756)
compilade [Sat, 25 Oct 2025 00:52:00 +0000 (20:52 -0400)]
convert : avoid dequantizing mxfp4 for GPT-OSS (#16756)

5 weeks ago  ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (#16742)
leejet [Fri, 24 Oct 2025 19:39:37 +0000 (03:39 +0800)]
ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (#16742)

* Fix CUDA grid launch condition for large block_nums.y

* add backend ops test

* reduce test repetitions
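The underlying constraint is that CUDA limits gridDim.y (and gridDim.z) to 65535, so a kernel that maps one block per row must split very large row counts into chunks and loop. The limit is the real CUDA constraint; the helper below is an illustrative sketch, not the binbcast code:

```cpp
#include <cstdint>

// CUDA hardware limit on gridDim.y / gridDim.z.
constexpr int64_t MAX_GRID_Y = 65535;

// Number of kernel launches (or grid-stride chunks) needed to cover
// block_nums_y rows without exceeding the gridDim.y limit.
int64_t num_launch_chunks(int64_t block_nums_y) {
    return (block_nums_y + MAX_GRID_Y - 1) / MAX_GRID_Y; // ceil division
}
```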

5 weeks ago  CUDA: use CUB for arbitrary size argsort (#16754)
Aman Gupta [Fri, 24 Oct 2025 12:46:19 +0000 (20:46 +0800)]
CUDA: use CUB for arbitrary size argsort (#16754)

5 weeks ago  webui: support q URL parameter (#16728)
Florian Badie [Fri, 24 Oct 2025 12:10:29 +0000 (14:10 +0200)]
webui: support q URL parameter (#16728)

* webui: support q URL parameter

Fixes #16722
I’ve checked that it works with Firefox’s AI tools

* webui: apply suggestions from code review

Co-authored-by: Aleksander Grygier <redacted>
* chore: update webui static build

---------

Co-authored-by: Aleksander Grygier <redacted>
5 weeks ago  model-conversion : add trust_remote_code for orig model run [no ci] (#16751)
Daniel Bevenius [Fri, 24 Oct 2025 10:02:02 +0000 (12:02 +0200)]
model-conversion : add trust_remote_code for orig model run [no ci] (#16751)

This commit adds the trust_remote_code=True argument when loading models
using AutoConfig, AutoTokenizer, and AutoModelForCausalLM in the
run-original-model script.

The motivation for this is that some models require custom code to be
loaded properly, and setting trust_remote_code=True avoids a prompt
asking for user confirmation:
```console
(venv) $ make causal-run-original-model
The repository /path/to/model contains custom code which must be
executed to correctly load the model. You can inspect the repository
content at /path/to/model.

Do you wish to run the custom code? [y/N] N
```

Having this as the default seems like a safe choice: since we have to clone
or download the models we convert, we should already expect to run any
custom code they include.

5 weeks ago  convert : handle pre-quantized models (#14810)
compilade [Thu, 23 Oct 2025 20:31:41 +0000 (16:31 -0400)]
convert : handle pre-quantized models (#14810)

* convert : begin handling pre-quantized models

* convert : fix conversion from FP8 for Deepseek-V3.1-Base

5 weeks ago  server: add memory breakdown print (#16740)
Johannes Gäßler [Thu, 23 Oct 2025 19:30:17 +0000 (21:30 +0200)]
server: add memory breakdown print (#16740)

5 weeks ago  convert : Make mistral-common dependency optional (#16738)
Julien Denize [Thu, 23 Oct 2025 13:54:46 +0000 (15:54 +0200)]
convert : Make mistral-common dependency optional (#16738)

* Make mistral-common dependency optional

* Fix typing

5 weeks ago  mtmd-cli : allow using --jinja (#16718)
Xuan-Son Nguyen [Thu, 23 Oct 2025 13:00:49 +0000 (15:00 +0200)]
mtmd-cli : allow using --jinja (#16718)

* mtmd-cli : allow using --jinja

* support -sys

* implement chat_history

* fix clear memory

* rm -sys support, added TODO

5 weeks ago  Manually link -lbsd to resolve flock symbol on AIX (#16610)
Prajwal B Mehendarkar [Thu, 23 Oct 2025 11:37:31 +0000 (17:07 +0530)]
Manually link -lbsd to resolve flock symbol on AIX (#16610)

5 weeks ago  ggml-cuda: use passed ops instead of hardcoded ops (#16712)
Aman Gupta [Thu, 23 Oct 2025 11:14:06 +0000 (19:14 +0800)]
ggml-cuda: use passed ops instead of hardcoded ops (#16712)

5 weeks ago  server : send partial stop string when <EOG> is reached (#15007)
matteo [Thu, 23 Oct 2025 09:32:24 +0000 (11:32 +0200)]
server : send partial stop string when <EOG> is reached (#15007)

5 weeks ago  sycl: use async memory allocation to fix crashes during graph recording (#16644)
Matthew Michel [Thu, 23 Oct 2025 01:05:15 +0000 (20:05 -0500)]
sycl: use async memory allocation to fix crashes during graph recording (#16644)

* sycl: use async memory allocation to fix graph recording failures

GGML_SYCL_DISABLE_GRAPHS=0 causes crashes because:
  - Host waits are currently unsupported in graph recording mode.
  - SYCL malloc / free calls are unsupported in graph recording mode.

The following changes are made to fix SYCL graph functionality:
  - When graphs are enabled, use the SYCL async memory extension for temp
    buffers which is supported with SYCL graphs.
  - For compiler versions that do not support this extension, skip
    graphs with the affected op.
  - Switch from USM shared to device memory as the async extension
    currently just supports device allocations.

* Address reviewer feedback

* Use global async variable to decide path in sycl_ext_[malloc_device|free]

5 weeks ago  Add experimental ggml-hexagon backend for the Hexagon NPU (#16547)
Max Krasnyansky [Wed, 22 Oct 2025 20:47:09 +0000 (13:47 -0700)]
Add experimental ggml-hexagon backend for the Hexagon NPU (#16547)

* model: add support for extra bufs for all devices

* hexagon: add experimental ggml-hexagon backend for the Hexagon NPU

This commit introduces a new experimental backend `ggml-hexagon` with support for the Hexagon NPU.

Highlights:
- Supports Hexagon versions: v73, v75, v79, and v81
- Targets Android devices based on Snapdragon SoCs: Gen3, 8-Elite, and 8-Elite Gen5
- Supports Q4_0, Q8_0, MXFP4, and FP32 data types
- Implements core LLM ops: MUL_MAT/MUL_MAT_ID, ADD/SUB/MUL/ADD_ID, RMS_NORM, ROPE, GLU/SWIGLU, SOFTMAX

**Note:** This backend is experimental and may exhibit instability or limited performance across supported devices.
It is intended for early testing and feedback from the llama.cpp/ggml developer and user communities.

Co-Authored-By: Rajdeep Ganguly <redacted>
Co-Authored-By: Todor Boinovski <redacted>
* hexagon: fix format checker errors

* hexagon: update readme and cmake presets

* ci: add android-ndk-build jobs that build plain ARM64 and Snapdragon versions

* hexagon: add simple graph optimizer for stacking MUL_MAT ops with the same input

* hexagon: move ADB helper scripts into scripts/snapdragon/adb

* hexagon: replace all f/printfs with GGML_LOG_...

* readme: add hexagon to the list of supported backends

* hexagon: stack matmuls with quantized inputs only

* hexagon: add TODO for fixing issues in hexagon_graph_optimize

* hexagon: update to hex-sdk 6.4.0 and add scripts for running on QDC

* scripts: fix lint errors

* scripts: update qdc pytest script to make linter happy

* hexagon: add reduce sum in fp32

* hexagon: reduce number of vector stores in matmul output

* hexagon: remove the need for vdelta in reduce-multiply-x8

* hexagon: consistent use of reduce_sum_fp32 for row_sums

* hexagon: some more matmul optimizations and comments

Optimize cases where tensor dims are not a multiple of 1024 (e.g. in Qwen models).
We've handled those cases already but at a higher overhead.

* hexagon: update cmake presets

* hexagon: add OPMASK support for run-bench.sh wrapper

* hexagon: update to use GGML_BACKEND_API

* hexagon: remove unused logic for setting tensor flags for the views

* hexagon: add asserts to set/get_tensor to make sure we handle complete tensors

Same asserts as the CPU backend.

* hexagon: use cpy_tensor slow path for non-host buffers

* hexagon: error checks in the buffer allocator

* cmake: move include(extProj) under ggml-hexagon

* hexagon: don't forget to delete the backend on free

* hexagon: set/get_tensor size assert apply only to quantized tensors

* hexagon: reintroduce HEX_VERBOSE wrapper for GGML_LOG_DEBUG for now

GGML_LOG_DEBUG is always enabled for test-backend-ops and the output gets in the way.
Ideally we need finer-grained log levels.

* docs: typos in hexagon developer docs (libggm-...)

* hexagon: overhaul error handling in the session/device allocation

this should handle all failure paths in the session allocation.

* hexagon: update cmake presets to enable fp16 vectors

* hexagon: remove unused time_usec function

* hexagon: don't forget to release buffer contexts

* hexagon: fixed indents in hvx-utils (missed clang-format auto-format failure)

* hexagon: remove custom can_repeat function and use ggml_can_repeat

---------

Co-authored-by: Rajdeep Ganguly <redacted>
Co-authored-by: Todor Boinovski <redacted>
5 weeks ago  Revert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" ...
Diego Devesa [Wed, 22 Oct 2025 18:20:55 +0000 (11:20 -0700)]
Revert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" (#16723)

This reverts commit 19a5a3edfd306516cc419679d69d6435943b6816.

5 weeks ago  webui: introduce OpenAI-compatible model selector in JSON payload (#16562)
Pascal [Wed, 22 Oct 2025 14:58:23 +0000 (16:58 +0200)]
webui: introduce OpenAI-compatible model selector in JSON payload (#16562)

* webui: introduce OpenAI-compatible model selector in JSON payload

* webui: restore OpenAI-Compatible model source of truth and unify metadata capture

This change re-establishes a single, reliable source of truth for the active model:
fully aligned with the OpenAI-Compat API behavior

It introduces a unified metadata flow that captures the model field from both
streaming and non-streaming responses, wiring a new onModel callback through ChatService
The model name is now resolved directly from the API payload rather than relying on
server /props or UI assumptions

ChatStore records and persists the resolved model for each assistant message during
streaming, ensuring consistency across the UI and database
Type definitions for API and settings were also extended to include model metadata
and the onModel callback, completing the alignment with OpenAI-Compat semantics

* webui: address review feedback from allozaur

* webui: move model selector into ChatForm (idea by @allozaur)

* webui: make model selector more subtle and integrated into ChatForm

* webui: replaced the Flowbite selector with a native Svelte dropdown

* webui: add developer setting to toggle the chat model selector

* webui: address review feedback from allozaur

Normalized streamed model names during chat updates
by trimming input and removing directory components before saving
or persisting them, so the conversation UI shows only the filename
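That normalization can be sketched as follows. The real helper is TypeScript in the webui; this C++ rendering, and the function name, are illustrative only:

```cpp
#include <string>

// Trim surrounding whitespace, then drop any directory components so only
// the model's filename is displayed.
std::string normalize_model_name(std::string s) {
    const char * ws = " \t\r\n";
    size_t b = s.find_first_not_of(ws);
    if (b == std::string::npos) return "";
    size_t e = s.find_last_not_of(ws);
    s = s.substr(b, e - b + 1);
    size_t slash = s.find_last_of("/\\");
    return slash == std::string::npos ? s : s.substr(slash + 1);
}
```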

Forced model names within the chat form selector dropdown to render as
a single-line, truncated entry with a tooltip revealing the full name

* webui: toggle displayed model source for legacy vs OpenAI-Compat modes

When the selector is disabled, it falls back to the active server model name from /props

When the model selector is enabled, the displayed model comes from the message metadata
(the one explicitly selected and sent in the request)

* Update tools/server/webui/src/lib/components/app/chat/ChatForm/ChatFormActions.svelte

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/constants/localstorage-keys.ts

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/components/app/chat/ChatForm/ChatFormModelSelector.svelte

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessageAssistant.svelte

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/services/chat.ts

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/services/chat.ts

Co-authored-by: Aleksander Grygier <redacted>
* webui: refactor model selector and persistence helpers

- Replace inline portal and event listeners with proper Svelte bindings
- Introduce 'persisted' store helper for localStorage sync without runes
- Extract 'normalizeModelName' utils + Vitest coverage
- Simplify ChatFormModelSelector structure and cleanup logic

Replaced the persisted store helper's use of '$state/$effect' runes with
a plain TS implementation to prevent orphaned effect runtime errors
outside component context

Co-authored-by: Aleksander Grygier <redacted>
* webui: document normalizeModelName usage with inline examples

* Update tools/server/webui/src/lib/components/app/chat/ChatForm/ChatFormModelSelector.svelte

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/stores/models.svelte.ts

Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/stores/models.svelte.ts

Co-authored-by: Aleksander Grygier <redacted>
* webui: extract ModelOption type into dedicated models.d.ts

Co-authored-by: Aleksander Grygier <redacted>
* webui: refine ChatMessageAssistant displayedModel source logic

* webui: stabilize dropdown, simplify model extraction, and init assistant model field

* chore: update webui static build

* Update tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessageAssistant.svelte

Co-authored-by: Aleksander Grygier <redacted>
* chore: npm format, update webui static build

* webui: align sidebar trigger position, remove z-index glitch

* chore: update webui build output

---------

Co-authored-by: Aleksander Grygier <redacted>
5 weeks ago  ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for...
sirus20x6 [Wed, 22 Oct 2025 10:14:14 +0000 (05:14 -0500)]
ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_vec_set_f32 for faster fills (#16522)

* Leverage the existing GGML_F32_VEC helpers to broadcast the fill value across SIMD registers and store in vector-sized chunks, while retaining the scalar tail for leftover elements and non-SIMD builds.

* Vectorize additional f32 helper loops

* Normalize f32 helper tails for ggml vec ops

---------

Co-authored-by: Aaron <redacted>
5 weeks ago  tests : fix test-thread-safety when compiling with multiple backends (#16699)
Acly [Wed, 22 Oct 2025 10:01:22 +0000 (12:01 +0200)]
tests : fix test-thread-safety when compiling with multiple backends (#16699)

* run one test per backend/device (even if it's the same device)

5 weeks ago  CUDA: fix bug in topk-moe softmax (#16711)
Aman Gupta [Wed, 22 Oct 2025 04:33:08 +0000 (12:33 +0800)]
CUDA: fix bug in topk-moe softmax (#16711)

5 weeks ago  CUDA: topk-moe: add optional parameter for gpt-oss (#16649)
Aman Gupta [Tue, 21 Oct 2025 14:40:38 +0000 (22:40 +0800)]
CUDA: topk-moe: add optional parameter for gpt-oss (#16649)

5 weeks ago  CUDA: better error for FA kernel with 0 occupancy (#16643)
Johannes Gäßler [Tue, 21 Oct 2025 13:27:53 +0000 (15:27 +0200)]
CUDA: better error for FA kernel with 0 occupancy (#16643)

6 weeks ago  ggml: add ggml_can_fuse_subgraph (#16662)
Aman Gupta [Tue, 21 Oct 2025 08:43:14 +0000 (16:43 +0800)]
ggml: add ggml_can_fuse_subgraph (#16662)

* ggml: add ggml_can_fuse_subgraph

* ggml-cuda: use ggml_can_fuse_subgraph for topk-moe

* format

* 1. remove inputs from signature as they are transient nodes
2. add check for views: view_src should be part of the subgraph

* - combine check into one loop
- check all view_src parents
- other minor review comments

* remove redundant if test

* - rename and other minor review comments

* add assert about count < 32

6 weeks ago  opencl: fix warnings and clean up profiling (#16688)
lhez [Tue, 21 Oct 2025 05:26:17 +0000 (22:26 -0700)]
opencl: fix warnings and clean up profiling (#16688)

* opencl: remove unused headers, fix warnings

* opencl: clean up profiling, only keep kernel time

6 weeks ago  vulkan: Handle FA with all -inf mask values (#16447)
Jeff Bolz [Tue, 21 Oct 2025 03:16:08 +0000 (22:16 -0500)]
vulkan: Handle FA with all -inf mask values (#16447)

6 weeks ago  sycl : add PAD_REFLECT_D1 operator support (#16145)
YehuditE [Mon, 20 Oct 2025 22:21:12 +0000 (01:21 +0300)]
sycl : add PAD_REFLECT_D1 operator support (#16145)

* sycl: add PAD_REFLECT_D1 operator support

* docs(ops): regenerate docs/ops.md

* remove trailing whitespaces

* style: fix editorconfig issues — trim trailing spaces and normalize EOLs

* fix: move PAD_REFLECT_1D case outside of fall-through block

6 weeks ago  model : add BailingMoeV2 support (#16063)
Sigbjørn Skjæret [Mon, 20 Oct 2025 19:38:20 +0000 (21:38 +0200)]
model : add BailingMoeV2 support (#16063)

* add BailingMoeV2 support

* update llm types

* undo

* undo

* update llm types

* add model collection link

* update

* almost working

* correct group selection and rename n_group_exp

* avoid large top_k and use argmax instead for now

if we had something like argmax2 that would be equivalent, but this works fine until then

* poke

* skip group selection when there are no tokens

* fix 1T conversion

* hopefully fixed expert group selection

third time's the charm?

* make expert group selection generally available

The new LLaDA2Moe model uses this method too, make it generally available regardless of architecture.

* allow n_expert_groups to be 1 (Kimi K2)

* address review suggestions

6 weeks ago  Handle legacy 'context' attachments (#16687)
Aleksander Grygier [Mon, 20 Oct 2025 17:49:02 +0000 (19:49 +0200)]
Handle legacy 'context' attachments (#16687)

6 weeks ago  ggml-alloc : fix leak when reusing a tensor with a larger size (#16679)
Diego Devesa [Mon, 20 Oct 2025 12:53:50 +0000 (05:53 -0700)]
ggml-alloc : fix leak when reusing a tensor with a larger size (#16679)

6 weeks ago  Prevent premature submission on IME input (#16673)
Aleksander Grygier [Mon, 20 Oct 2025 12:21:12 +0000 (14:21 +0200)]
Prevent premature submission on IME input (#16673)

* fix: Prevent premature submission on IME input

* chore: update webui static build

* refactor: Put IME completion checker in a helper function and add checking for `KeyboardEvent.eventKey === 229`

* chore: update webui static build

* chore: update webui static build

* chore: update webui static build

6 weeks ago  Import/Export UX improvements (#16619)
Aleksander Grygier [Mon, 20 Oct 2025 11:29:14 +0000 (13:29 +0200)]
Import/Export UX improvements (#16619)

* webui : added download action (#13552)

* webui : import and export (for all conversations)

* webui : fixed download-format, import of one conversation

* webui : add ExportedConversations type for chat import/export

* feat: Update naming & order

* chore: Linting

* feat: Import/Export UX improvements

* chore: update webui build output

* feat: Update UI placement of Import/Export tab in Chat Settings Dialog

* refactor: Cleanup

chore: update webui build output

* feat: Enable shift-click multiple conversation items selection

* chore: update webui static build

* chore: update webui static build

---------

Co-authored-by: Sascha Rogmann <redacted>
6 weeks ago  Enable per-conversation loading states to allow having parallel conversations (#16327)
Aleksander Grygier [Mon, 20 Oct 2025 10:41:13 +0000 (12:41 +0200)]
Enable per-conversation loading states to allow having parallel conversations (#16327)

* feat: Per-conversation loading states and tracking streaming stats

* chore: update webui build output

* refactor: Chat state management

Consolidates loading state management by using a global `isLoading` store synchronized with individual conversation states.

This change ensures proper reactivity and avoids potential race conditions when updating the UI based on the loading status of different conversations. It also improves the accuracy of statistics displayed.

Additionally, slots service methods are updated to use conversation IDs for per-conversation state management, avoiding global state pollution.

* feat: Adds loading indicator to conversation items

* chore: update webui build output

* fix: Fix aborting chat streaming

Improves the chat stream abortion process by ensuring that partial responses are saved before the abort signal is sent.

This avoids a race condition where the onError callback could clear the streaming state before the partial response is saved. Additionally, the stream reading loop and callbacks are now checked for abort signals to prevent further processing after abortion.

* refactor: Remove redundant comments

* chore: build webui static output

* refactor: Cleanup

* chore: update webui build output

* chore: update webui build output

* fix: Conversation loading indicator for regenerating messages

* chore: update webui static build

* feat: Improve configuration

* feat: Install `http-server` as dev dependency to not need to rely on `npx` in CI

6 weeks ago  llama-batch: fix build fails with `-Werror=missing-braces` (#16614)
takuya kodama [Mon, 20 Oct 2025 08:27:09 +0000 (16:27 +0800)]
llama-batch: fix build fails with `-Werror=missing-braces` (#16614)

## Why it failed

When compiling with strict compiler flags (-Wmissing-braces -Werror=missing-braces),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_CXX_FLAGS="-Wmissing-braces -Werror=missing-braces" && \
cmake --build ../llama.cpp.build/
...
In file included from /home/otegami/work/cpp/llama.cpp/src/llama-graph.h:4,
                 from /home/otegami/work/cpp/llama.cpp/src/llama-model.h:5,
                 from /home/otegami/work/cpp/llama.cpp/src/llama.cpp:8:
/home/otegami/work/cpp/llama.cpp/src/llama-batch.h:126:48: error: missing braces around initializer for 'std::__array_traits<int, 1>::_Type' {aka 'int [1]'} [-Werror=missing-braces]
  126 |     std::array<llama_seq_id, 1> seq_id_0 = { 0 }; // default sequence id
      |                                                ^
cc1plus: some warnings being treated as errors
```

The issue is that std::array initialization requires double braces.

## How to fix

This PR changes `{ 0 }` to `{{ 0 }}` for std::array initialization.

This is part of a series of commits to fix missing braces warnings across the codebase.
- src/llama-batch.h <- This PR is here.
- src/llama-context.cpp
- tests/test-backend-ops.cpp
- tests/test-gguf.cpp
- tools/mtmd/clip.cpp

Benefits:
- std::array is a struct containing a C-style array, requiring nested braces
- Enables stricter compiler warnings to catch potential issues

6 weeks ago readme: update bindings (#16651)
Ron Evans [Mon, 20 Oct 2025 08:20:04 +0000 (10:20 +0200)]
readme: update bindings (#16651)

Signed-off-by: deadprogram <redacted>
6 weeks ago SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (#16613)
safranowith [Mon, 20 Oct 2025 08:08:32 +0000 (11:08 +0300)]
SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators (#16613)

* SYCL: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

Clean up unrelated changes from previous commit

* Chore: remove empty lines and fix indentation

* Clean up: remove leftover blank lines and fix spacing

* chore: fix trailing whitespace and ensure final newline

* Cleanup: remove redundant declarations already defined in header

* Sync docs/ops.md with updated backend operation support

* docs: update ops.md after rebase

* docs: update ops.md - Vulkan supports SSM_CONV and SSM_SCAN

6 weeks ago llama-context: only warn on pooling_type when user specified (#16674)
takuya kodama [Mon, 20 Oct 2025 07:44:21 +0000 (15:44 +0800)]
llama-context: only warn on pooling_type when user specified (#16674)

The unexpected pooling_type warning was incorrectly shown when users did not
specify the --pooling-type parameter. In this case, the parameter
defaults to `LLAMA_POOLING_TYPE_UNSPECIFIED (-1)`, and the code
automatically applies the model's default pooling type.

Example of spurious warning:
```
$ llama-embedding -hf ggml-org/bge-m3-Q8_0-GGUF -p "hello"
...
llama_init_from_model: model default pooling_type is [2], but [-1] was specified
...
```

This fix ensures the warning only appears when users explicitly specify
a pooling type that differs from the model's default (e.g., using
--pooling-type mean on a model that expects CLS pooling).

6 weeks ago model : add Granite Hybrid types (#16635)
Giuseppe Scrivano [Sun, 19 Oct 2025 21:54:31 +0000 (23:54 +0200)]
model : add Granite Hybrid types (#16635)

add Granite 4 models mapping their embedding dimensions to the # of
parameters.

Information taken from https://huggingface.co/ibm-granite/granite-4.0-h-tiny

Signed-off-by: Giuseppe Scrivano <redacted>
6 weeks ago ci : fix binaries release failure for s390x (binaries may not work yet) (#16664)
Aaron Teo [Sun, 19 Oct 2025 21:06:39 +0000 (05:06 +0800)]
ci : fix binaries release failure for s390x (binaries may not work yet) (#16664)

* devops: initial patch

Signed-off-by: Aaron Teo <redacted>
* devops: forgot the z15 suffix

Signed-off-by: Aaron Teo <redacted>
* devops: attempt at impl GGML_CPU_ALL_VARIANTS for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: rm baseline version

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
6 weeks ago ci : avoid manual updates of docs/ops.md (#16663)
Sigbjørn Skjæret [Sun, 19 Oct 2025 12:03:25 +0000 (14:03 +0200)]
ci : avoid manual updates of docs/ops.md (#16663)

6 weeks ago ci: include s390x release binaries (#16648)
Aaron Teo [Sun, 19 Oct 2025 10:37:47 +0000 (18:37 +0800)]
ci: include s390x release binaries (#16648)

Signed-off-by: Aaron Teo <redacted>
6 weeks ago CODEOWNERS: update for ggml-cuda/mmf (#16660)
Aman Gupta [Sun, 19 Oct 2025 07:37:12 +0000 (15:37 +0800)]
CODEOWNERS: update for ggml-cuda/mmf (#16660)

6 weeks ago HIP: fix GPU_TARGETS (#16642)
Johannes Gäßler [Sat, 18 Oct 2025 12:47:32 +0000 (14:47 +0200)]
HIP: fix GPU_TARGETS (#16642)

6 weeks ago vulkan: Implement topk_moe fused shader, ported from CUDA (#16641)
Jeff Bolz [Sat, 18 Oct 2025 10:22:57 +0000 (05:22 -0500)]
vulkan: Implement topk_moe fused shader, ported from CUDA (#16641)

This is similar to the CUDA shader from #16130, but doesn't use shared memory
and handles different subgroup sizes.

6 weeks ago CUDA: use registers instead of smem in topk-moe (#16647)
Aman Gupta [Sat, 18 Oct 2025 09:52:53 +0000 (17:52 +0800)]
CUDA: use registers instead of smem in topk-moe (#16647)

Uses the technique used in the vulkan PR #16641. Neat trick!

6 weeks ago opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (#16602)
Shawn Gu [Sat, 18 Oct 2025 00:55:32 +0000 (17:55 -0700)]
opencl: transposed gemm/gemv moe kernel with mxfp4,f32 (#16602)

* opencl: transposed gemm/gemv moe kernel with mxfp4,f32

* add restore kernel for moe transpose

* fix trailing whitespaces

* resolve compilation warnings

6 weeks ago llama-model: fix inconsistent ctxs <-> bufs order (#16581)
Johannes Gäßler [Fri, 17 Oct 2025 15:41:09 +0000 (17:41 +0200)]
llama-model: fix inconsistent ctxs <-> bufs order (#16581)

6 weeks ago rpc : report actual free memory (#16616)
Radoslav Gerganov [Fri, 17 Oct 2025 15:02:52 +0000 (18:02 +0300)]
rpc : report actual free memory (#16616)

* rpc : report actual free memory

Start reporting the free memory on every device instead of using
fixed values. Now llama-cli users can get a nice memory breakdown
when using RPC devices.

* drop --mem in rpc-server

6 weeks ago vulkan: Add State Space Model (SSM) Operations Support (#16463)
Giuseppe Scrivano [Fri, 17 Oct 2025 12:23:47 +0000 (14:23 +0200)]
vulkan: Add State Space Model (SSM) Operations Support (#16463)

* vulkan: implement SSM scan operation

Add State Space Model scan operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
* vulkan: implement SSM conv operation

Add State Space Model conv operation to the Vulkan backend.

Signed-off-by: Giuseppe Scrivano <redacted>
---------

Signed-off-by: Giuseppe Scrivano <redacted>
6 weeks ago ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629)
muggle-stack [Fri, 17 Oct 2025 10:01:23 +0000 (18:01 +0800)]
ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629)

Fix incorrect task-to-batch index calculation in the quantization phase.

The bug caused out-of-bounds access to qnbitgemm_args array when
compute_idx exceeded per_gemm_block_count_m, leading to invalid
pointer dereferences and SIGBUS errors.

Correctly map tasks to batches by dividing compute_idx by
per_gemm_block_count_m instead of block_size_m.

Example:
  batch_feature=1, gemm_m=30, block_size_m=4
  per_gemm_block_count_m = 8, task_count = 8

  Old: gemm_idx = 4/4 = 1 (out of bounds)   New: gemm_idx = 4/8 = 0 (correct)

Tested on SpaceMit K1 RISC-V64 with qwen2.5:0.5b model.

Co-authored-by: muggle <redacted>
6 weeks ago webui: reorganize settings layout (#16607)
Pascal [Fri, 17 Oct 2025 08:35:03 +0000 (10:35 +0200)]
webui: reorganize settings layout (#16607)

* webui: reorganize settings layout

* chore: update webui build output

* fix: remove unused variable

* chore: update webui build output

6 weeks ago vulkan: fix debug build (add_rms_len/data not found) (#16624)
Jeff Bolz [Fri, 17 Oct 2025 07:31:04 +0000 (02:31 -0500)]
vulkan: fix debug build (add_rms_len/data not found) (#16624)

6 weeks ago metal : add `CONV_TRANSPOSE_2D` (#16542)
Ilia Ilmer [Fri, 17 Oct 2025 06:33:58 +0000 (02:33 -0400)]
metal : add `CONV_TRANSPOSE_2D` (#16542)

* initial: headers and metal-device.cpp updates

* adding conv_transpose_2d

* fix type

* fix type: int32->int64

* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add checks for src[0] and src[1]; add type checks

* Update ggml-metal.metal

Co-authored-by: Georgi Gerganov <redacted>
* add more tests, add optimization to threading

* add dynamic memory allocation in metal

---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago grammar : use int64_t to avoid int overflows in int schema to grammar conversion...
Olivier Chafik [Fri, 17 Oct 2025 05:59:31 +0000 (06:59 +0100)]
grammar : use int64_t to avoid int overflows in int schema to grammar conversion logic (#16626)

6 weeks ago SYCL SET operator optimized for F32 tensors (#16350)
GittyBurstein [Fri, 17 Oct 2025 02:36:40 +0000 (05:36 +0300)]
SYCL SET operator optimized for F32 tensors (#16350)

* SYCL/SET: implement operator + wire-up; docs/ops updates; element_wise & ggml-sycl changes

* sycl(SET): re-apply post-rebase; revert manual docs/ops.md; style cleanups

* move SET op to standalone file, GPU-only implementation

* Update SYCL SET operator for F32

* ci: fix editorconfig issues (LF endings, trailing spaces, final newline)

* fixed ggml-sycl.cpp

---------

Co-authored-by: Gitty Burstein <redacted>
6 weeks ago mtmd : support home-cooked Mistral Small Omni (#14928)
Xuan-Son Nguyen [Thu, 16 Oct 2025 17:00:31 +0000 (19:00 +0200)]
mtmd : support home-cooked Mistral Small Omni (#14928)

6 weeks ago fix: added a normalization step for MathJax-style \[\] and \(\) delimiters (#16599)
Pascal [Thu, 16 Oct 2025 14:28:41 +0000 (16:28 +0200)]
fix: added a normalization step for MathJax-style \[\] and \(\) delimiters (#16599)

* fix: added a normalization step for MathJax-style \[\] and \(\) delimiters

So inline and block equations are converted before KaTeX rendering,
enabling proper display of model-generated LaTeX in the WebUI

* chore: update webui build output

6 weeks ago sycl : add ARANGE operator (#16362)
GittyBurstein [Thu, 16 Oct 2025 13:26:21 +0000 (16:26 +0300)]
sycl : add ARANGE operator (#16362)

* SYCL: update element-wise ops and presets

* clean arange

* Re-trigger CI

---------

Co-authored-by: Gitty Burstein <redacted>
6 weeks ago CANN: format code using .clang-format (#15863)
Chenguang Li [Thu, 16 Oct 2025 08:41:11 +0000 (16:41 +0800)]
CANN: format code using .clang-format (#15863)

This commit applies .clang-format rules to all source files under the
ggml-cann directory to ensure consistent coding style and readability.
The .clang-format option `SortIncludes: false` has been set to disable
automatic reordering of include directives.
No functional changes are introduced.

Co-authored-by: hipudding <redacted>
6 weeks ago common : Update the docs on -t --threads (#16236)
takasurazeem [Thu, 16 Oct 2025 05:11:33 +0000 (01:11 -0400)]
common : Update the docs on -t --threads (#16236)

* Update the docs on -t --threads

* Revert "Update the docs on -t --threads"

This reverts commit eba97345e2c88d8ca510abec87d00bf6b9b0e0c2.

* docs: clarify -t/--threads parameter uses CPU threads and defaults to all available cores

* Update arg.cpp

6 weeks ago ggml-cpu: replace putenv with setenv for const-correctness (#16573)
takuya kodama [Thu, 16 Oct 2025 05:10:32 +0000 (13:10 +0800)]
ggml-cpu: replace putenv with setenv for const-correctness (#16573)

## Why it failed

When compiling with strict compiler flags (-Wwrite-strings -Werror=discarded-qualifiers),
the build fails with the following error:

```
cmake \
  -S . \
  -B ../llama.cpp.build \
  --preset=x64-linux-gcc-debug \
  -DCMAKE_INSTALL_PREFIX=/tmp/local \
  -DCMAKE_C_FLAGS="-Wwrite-strings -Werror=discarded-qualifiers" && \
cmake --build ../llama.cpp.build/
...
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c: In function ‘ggml_cpu_init’:
/home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:3572:24: error: passing argument 1 of ‘putenv’ discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
 3572 |                 putenv("KMP_BLOCKTIME=200"); // 200ms
      |                        ^~~~~~~~~~~~~~~~~~~
In file included from /home/otegami/work/cpp/llama.cpp/ggml/src/./ggml-impl.h:10,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/traits.h:3,
                 from /home/otegami/work/cpp/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c:6:
/usr/include/stdlib.h:786:26: note: expected ‘char *’ but argument is of type ‘const char *’
  786 | extern int putenv (char *__string) __THROW __nonnull ((1));
      |                    ~~~~~~^~~~~~~~
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.
```

The issue is that putenv() expects a non-const char * but receives a string literal (const char *).

## How to fix

This PR replaces putenv("KMP_BLOCKTIME=200") with setenv("KMP_BLOCKTIME", "200", 0).

Benefits of setenv():
- Accepts const char * parameters (no qualifier warnings)
- Makes copies of the strings (safer memory handling)
- The third parameter (0) ensures we don't overwrite if already set

6 weeks ago SYCL: Add GGML_OP_MEAN operator support (#16009)
yael-works [Thu, 16 Oct 2025 04:21:28 +0000 (07:21 +0300)]
SYCL: Add GGML_OP_MEAN operator support (#16009)

* SYCL: Add GGML_OP_MEAN operator support

* SYCL: Fix formatting for GGML_OP_MEAN case

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: Sigbjørn Skjæret <redacted>
6 weeks ago gguf-py : add support for endian conversion of BF16 data (#16594)
Aleksei Nikiforov [Wed, 15 Oct 2025 20:43:08 +0000 (22:43 +0200)]
gguf-py : add support for endian conversion of BF16 data (#16594)

BF16 requires special handling in this script:
it is 2-byte data, but the view is 1-byte by default.
Switch to the correct view before attempting byteswapping.

With this change correctly byteswapping models like
Meta-Llama-3-8B-Instruct-bf16-GGUF
should be possible.
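An illustrative NumPy sketch of the view switch (not the actual gguf-py code):

```python
import numpy as np

# Two BF16 values stored as raw 2-byte elements.
data = np.array([0x3F80, 0x4000], dtype=np.uint16)

# A 1-byte (uint8) view cannot be byteswapped meaningfully, so the buffer
# must be reinterpreted as 2-byte elements before swapping.
raw = data.view(np.uint8)
swapped = raw.view(np.uint16).byteswap()

assert list(swapped) == [0x803F, 0x0040]
```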

6 weeks ago cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (#16083)
safranowith [Wed, 15 Oct 2025 19:24:51 +0000 (22:24 +0300)]
cpu : add FLOOR, CEIL, ROUND and TRUNC unary operators (#16083)

* CPU: Add support for FLOOR,CEIL,ROUND and TRUNC unary operators

- Added the operators to unary op enum
- Implemented API functions
- Implemented forward and unary-op logic in CPU backend
- Updated ggml_get_n_tasks
- Updated operators names array and static_assert
- Updated docs and enabled automatic tests

* docs: add documentation for ggml_trunc and ggml_trunc_inplace in ggml.h

* chore: remove trailing whitespace from ggml.h

* Remove unresolved merge markers

* Apply review suggestions: cleanup formatting, enum order and leftover artifacts

* Regenerate ops.md using create_ops_docs.py

6 weeks ago opencl: add q8_0 mm support (#16469)
lhez [Wed, 15 Oct 2025 17:51:04 +0000 (10:51 -0700)]
opencl: add q8_0 mm support (#16469)

* opencl: add mm_q8_0_f32

* opencl: fix data loading for incomplete tile

* opencl: use q8_0 mm for larger matrix

* opencl: add some tests to cover the path

6 weeks ago opencl: fix FA for f32 (#16584)
lhez [Wed, 15 Oct 2025 17:48:28 +0000 (10:48 -0700)]
opencl: fix FA for f32 (#16584)

6 weeks ago Add server-driven parameter defaults and syncing (#16515)
Aleksander Grygier [Wed, 15 Oct 2025 14:22:20 +0000 (16:22 +0200)]
Add server-driven parameter defaults and syncing (#16515)

6 weeks ago metal: optimise `GGML_OP_SUM` (#16559)
Sam/Samuel [Wed, 15 Oct 2025 14:05:56 +0000 (23:05 +0900)]
metal: optimise `GGML_OP_SUM` (#16559)

* optimise GGML_OP_SUM

* add non-contiguous tests by permuting the input

* change tests to require full contiguity of OP_SUM

* cuda : add check GGML_OP_SUM

---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago server : fix img token logs (#16595)
Georgi Gerganov [Wed, 15 Oct 2025 13:53:12 +0000 (16:53 +0300)]
server : fix img token logs (#16595)

6 weeks ago llama-quant: add support for mmproj (#16592)
Xuan-Son Nguyen [Wed, 15 Oct 2025 12:48:08 +0000 (14:48 +0200)]
llama-quant: add support for mmproj (#16592)

* llama-quant: add support for mmproj

* Update src/llama.cpp

Co-authored-by: Georgi Gerganov <redacted>
* check prefix instead

* small fix

---------

Co-authored-by: Georgi Gerganov <redacted>
6 weeks ago CUDA: Changing the CUDA scheduling strategy to spin (#16585)
Julius Tischbein [Wed, 15 Oct 2025 11:54:15 +0000 (13:54 +0200)]
CUDA: Changing the CUDA scheduling strategy to spin (#16585)

* CUDA set scheduling strategy to spinning for cc121

* Using prop.major and prop.minor, include HIP and MUSA

* Exclude HIP and MUSA

* Remove trailing whitespace

Co-authored-by: Johannes Gäßler <redacted>
* Remove empty line

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
6 weeks ago server : fix mtmd checkpoints (#16591)
Georgi Gerganov [Wed, 15 Oct 2025 09:51:27 +0000 (12:51 +0300)]
server : fix mtmd checkpoints (#16591)

6 weeks ago metal : avoid using Metal's gpuAddress property (#16576)
Georgi Gerganov [Tue, 14 Oct 2025 17:33:05 +0000 (20:33 +0300)]
metal : avoid using Metal's gpuAddress property (#16576)

* metal : avoid using Metal's gpuAddress property

* metal : fix rope kernels buffer check

6 weeks ago vulkan: Add ACC_TYPE_VEC2 implementation (#16203)
SavicStefan [Tue, 14 Oct 2025 17:18:05 +0000 (19:18 +0200)]
vulkan: Add ACC_TYPE_VEC2 implementation (#16203)

Signed-off-by: Stefan Savic <redacted>
Co-authored-by: Stefan Savic <redacted>