git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log

Ycros [Wed, 7 May 2025 08:23:28 +0000 (18:23 +1000)]
common : Add a warning when we can't match samplers from a string or char. (#13330)

R0CKSTAR [Wed, 7 May 2025 07:48:23 +0000 (15:48 +0800)]
cuda : remove nrows_x in mul_mat_q_process_tile (#13325)

Signed-off-by: Xiaodong Ye <redacted>

Georgi Gerganov [Wed, 7 May 2025 07:28:02 +0000 (10:28 +0300)]
examples : remove infill (#13283)

ggml-ci

piDack [Wed, 7 May 2025 07:23:11 +0000 (15:23 +0800)]
llama : support tie embedding for chatglm models (#13328)

Johannes Gäßler [Tue, 6 May 2025 21:35:51 +0000 (23:35 +0200)]
CUDA: mix virt/real CUDA archs for GGML_NATIVE=OFF (#13135)

Xuan-Son Nguyen [Tue, 6 May 2025 20:40:24 +0000 (22:40 +0200)]
clip : refactor graph builder (#13321)

* mtmd : refactor graph builder

* fix qwen2vl

* clean up siglip cgraph

* pixtral migrated

* move minicpmv to a dedicated build function

* move max_feature_layer to build_llava

* use build_attn for minicpm resampler

* fix windows build

* add comment for batch_size

* also support tinygemma3 test model

* qwen2vl does not use RMS norm

* fix qwen2vl norm (2)

DocShotgun [Tue, 6 May 2025 20:36:24 +0000 (13:36 -0700)]
sampling : make top_n_sigma no-op at <=0 or a single candidate (#13345)

oobabooga [Tue, 6 May 2025 18:24:15 +0000 (15:24 -0300)]
sampling : don't consider -infinity values in top_n_sigma (#13344)

Diego Devesa [Tue, 6 May 2025 18:15:31 +0000 (20:15 +0200)]
cmake : remove arm64 msvc presets (#13342)

Akarshan Biswas [Tue, 6 May 2025 14:57:06 +0000 (20:27 +0530)]
SYCL: Disable reorder optimize by default and stop setting tensor extras when optimize is disabled (#13254)

* SYCL: Do not set tensor extras when reorder optimize is disabled

* SYCL: Disable reorder optimize by default

Xuan-Son Nguyen [Tue, 6 May 2025 12:25:40 +0000 (14:25 +0200)]
llama : fix build_ffn without gate (#13336)

* llama : fix build_ffn without gate

* fix build on windows

* Revert "fix build on windows"

This reverts commit fc420d3c7eef3481d3d2f313fef2757cb33a7c56.

Johannes Gäßler [Tue, 6 May 2025 11:58:51 +0000 (13:58 +0200)]
CUDA: fix bad asserts for partial offload (#13337)

Sigbjørn Skjæret [Tue, 6 May 2025 09:12:06 +0000 (11:12 +0200)]
convert : qwen2/3moe : set yarn metadata if present (#13331)

* set yarn metadata if present

* add comment about enabling YaRN

Co-authored-by: Xuan-Son Nguyen <redacted>
---------

Co-authored-by: Xuan-Son Nguyen <redacted>

Johannes Gäßler [Tue, 6 May 2025 06:36:46 +0000 (08:36 +0200)]
CUDA: fix --split-mode row for MMQ (#13323)

compilade [Tue, 6 May 2025 02:27:31 +0000 (22:27 -0400)]
gguf-py : avoid requiring pyside6 for other scripts (#13036)

- gguf-py : remove gguf-py/gguf/scripts/__init__.py because it's not needed

Implicit namespaces are supported since Python 3.3 (https://peps.python.org/pep-0420/),
and the entrypoints in pyproject.toml can directly refer to the main functions.

Johannes Gäßler [Mon, 5 May 2025 20:32:13 +0000 (22:32 +0200)]
CUDA: fix logic for clearing padding with -ngl 0 (#13320)

oobabooga [Mon, 5 May 2025 20:12:19 +0000 (17:12 -0300)]
sampling : Integrate Top-nσ into main sampling chain (and add it to the server) (#13264)

* sampling: add Top-nσ sampler to `llama-server` and sampler ordering

* revert: sampler ordering

* revert: VS' crappy auto-formatting

* revert: VS' crappy auto-formatting pt.2

* revert: my crappy eyesight...

* sampling: add XTC to Top-nσ sampler chain

* sampling: add Dyna. Temp. to Top-nσ sampler chain

* sampling: actually remove Top-nσ from sampler (oops)

* Integrate top_n_sigma into main sampler chain

* Define COMMON_SAMPLER_TYPE_TOP_N_SIGMA

* Formatting

* Lint

* Exit early in the sampler if nsigma < 0

---------

Co-authored-by: CasualAutopsy <redacted>

igardev [Mon, 5 May 2025 14:03:31 +0000 (17:03 +0300)]
server : Webui - change setText command from parent window to also send the message. (#13309)

* setText command from parent window for llama-vscode now sends the message automatically.

* Upgrade packages versions to fix vulnerabilities with "npm audit fix" command.

* Fix code formatting.

* Add index.html.gz changes.

* Revert "Upgrade packages versions to fix vulnerabilities with "npm audit fix" command."

This reverts commit 67687b7fda8a293724ba92ea30bb151677406bc8.

* easier approach

* add setTimeout

---------

Co-authored-by: igardev <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>

Xuan-Son Nguyen [Mon, 5 May 2025 14:02:55 +0000 (16:02 +0200)]
mtmd : rename llava directory to mtmd (#13311)

* mv llava to mtmd

* change ref everywhere

Xuan-Son Nguyen [Mon, 5 May 2025 10:54:44 +0000 (12:54 +0200)]
clip : fix confused naming ffn_up and ffn_down (#13290)

* clip : fix confused naming ffn_up and ffn_down

* rm ffn_i/o/g naming

* rename n_embd, n_ff

* small fix

* no check n_ff

Sigbjørn Skjæret [Mon, 5 May 2025 10:34:26 +0000 (12:34 +0200)]
convert : bailingmoe : set yarn metadata if present (#13312)

Akarshan Biswas [Mon, 5 May 2025 08:09:10 +0000 (13:39 +0530)]
SYCL: Disable mul_mat kernels for noncontiguous tensor b (#13308)

ggml-ci

Xuan-Son Nguyen [Sun, 4 May 2025 21:43:42 +0000 (23:43 +0200)]
mtmd : add C public API (#13184)

* init

* wip

* working version

* add mtmd::bitmaps

* add test target

* rm redundant define

* test: mtmd_input_chunks_free

* rm outdated comment

* fix merging issue

* explicitly create mtmd::input_chunks

* mtmd_input_chunk_copy

* add clone()

* add const to various places

* add warning about breaking changes

* helper: use mtmd_image_tokens_get_n_pos

Diego Devesa [Sun, 4 May 2025 19:25:43 +0000 (21:25 +0200)]
rpc : use backend registry, support dl backends (#13304)

Aaron Teo [Sun, 4 May 2025 17:49:12 +0000 (01:49 +0800)]
ggml : activate s390x simd for Q3_K (#13301)

Signed-off-by: Aaron Teo <redacted>

Diego Devesa [Sun, 4 May 2025 15:05:20 +0000 (17:05 +0200)]
llava/mtmd : fixes to fully support dl backends (#13303)

Diego Devesa [Sun, 4 May 2025 12:20:49 +0000 (14:20 +0200)]
llama : build windows releases with dl backends (#13220)

Johannes Gäßler [Sun, 4 May 2025 12:16:39 +0000 (14:16 +0200)]
CUDA: fix race condition in MMQ stream-k fixup (#13299)

Johannes Gäßler [Sun, 4 May 2025 11:58:38 +0000 (13:58 +0200)]
CUDA: fix race condition in MMQ ids_dst (#13294)

Jeff Bolz [Sun, 4 May 2025 05:17:16 +0000 (00:17 -0500)]
vulkan: Additional type support for unary, binary, and copy (#13266)

Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.

Johannes Gäßler [Sat, 3 May 2025 22:50:37 +0000 (00:50 +0200)]
imatrix: fix oob writes if src1 is not contiguous (#13286)

Xuan-Son Nguyen [Sat, 3 May 2025 18:07:54 +0000 (20:07 +0200)]
clip : revert the change of BOI/EOI token for GLM-edge (⚠️ breaking change) (#13259)

ymcki [Sat, 3 May 2025 15:39:51 +0000 (23:39 +0800)]
llama : Llama-3_1-Nemotron-Ultra-253B-v1 support (#12843)

Diego Devesa [Fri, 2 May 2025 18:27:13 +0000 (20:27 +0200)]
llama : move end-user examples to tools directory (#13249)

* llama : move end-user examples to tools directory

---------

Co-authored-by: Xuan Son Nguyen <redacted>

Georgi Gerganov [Fri, 2 May 2025 17:54:30 +0000 (20:54 +0300)]
sync : ggml (#13268)

* vulkan : kernels for depthwise 2D convolution (CONV_2D_DW) (ggml/1204)

* vulkan : add kernels for depthwise 2d convolution (OP_CONV_2D_DW)

* review: remove src_x/y < 0 checks; add performance tests

* sync : ggml

ggml-ci

* vulkan : fix lint (#0)

---------

Co-authored-by: Acly <redacted>

Georgi Gerganov [Fri, 2 May 2025 17:54:13 +0000 (20:54 +0300)]
context : fix reorder logic (#13267)

ggml-ci

shalinib-ibm [Fri, 2 May 2025 16:53:12 +0000 (22:23 +0530)]
ggml : Enable MMA for BF16 in llamafile_sgemm (#13148)

This patch upstreams llamafile's CPU matrix multiplication kernels for ppc64le, using MMA builtins for the BF16 data type.

This change yields 9x - 40x gains in total speed S t/s (i.e. all tokens / total time) across various batch sizes, measured with the llama-batched-bench benchmark.

The patch was tested with the Meta-Llama-3-8B and Mistral-7B models (BF16 models generated with llama-quantize from the corresponding FP32 models) on an IBM POWER10 machine.

Signed-off-by: Shalini Salomi Bodapati <redacted>

Jared Van Bortel [Fri, 2 May 2025 15:42:30 +0000 (11:42 -0400)]
llama-model : support Qwen2 embedding models and pooling_mode_lasttoken (#13245)

Jared Van Bortel [Fri, 2 May 2025 15:41:54 +0000 (11:41 -0400)]
convert : use correct context length for nomic-embed-text-v2 (#13216)

Xuan-Son Nguyen [Fri, 2 May 2025 15:17:15 +0000 (17:17 +0200)]
convert : converting mmproj for Qwen2/2.5VL from convert_hf_to_gguf (#13209)

* wip

* qwen2.5vl ok

* vision: fix models missing "text_config"

* add test

* fix test repo name

* fix 32B model

* Revert "fix 32B model"

This reverts commit 651752f1ae25fe8a01c1e57c18cf2eca80b2774e.

* clarify about 32B

* rm qwen surgery script

* update llava/readme

* move V_ENC_EMBD_PATCH handling to Qwen2VLVisionModel

Georgi Gerganov [Fri, 2 May 2025 14:48:36 +0000 (17:48 +0300)]
kv-cache : separate recurrent vs non-recurrent impl (#12799)

* kv-cache : separate recurrent vs non-recurrent impl (wip)

ggml-ci

* kv-cache : init -> constructor + add llama_memory_params

ggml-ci

* kv-cache : fix callback reference

ggml-ci

* context : llama_kv_cache -> llama_memory_i

ggml-ci

* context : move memory creation logic to model

ggml-ci

* llama : remove reference of memory during encode

ggml-ci

* kv-cache : hide padding details in the implementation

ggml-ci

* kv-cache : add ubatch_next()

ggml-ci

* context : simplify sbatch logic

ggml-ci

* kv-cache : hide defrag logic in the implementation

ggml-ci

* context : hide kv cache details in implementation

ggml-ci

* build : fix

ggml-ci

* cont : another fix

ggml-ci

* kv-cache : simplify interface (wip)

ggml-ci

* kv-cache : use separate KV cell structs for unified/recurrent

ggml-ci

* kv-cache : clean-up

ggml-ci

* model : better llama_model::create_model() signature

ggml-ci

* kv-cache : fix recurrent seq_rm()

ggml-ci

* kv-cache : replace `struct callbacks` with `llama_model &`

ggml-ci

* kv-cache : replace `struct graph_params` with `llama_context &`

ggml-ci

* kv-cache : fix offload check

ggml-ci

* context : avoid passing unique_ptr

ggml-ci

* kv-cache : avoid using the backends from the llama_context

ref #13113

ggml-ci

* kv-cache : more consistent debug logs [no ci]

* kv-cache : do not pass the full llama_context for kv graphs

ggml-ci

* kv-cache : remove comment

* kv-cache : ggml_rope_ext_inplace -> ggml_rope_ext

ggml-ci

* kv-cache : fix recurrent multi-user case

ggml-ci

* memory : remove comments [no ci]

Sigbjørn Skjæret [Fri, 2 May 2025 10:44:24 +0000 (12:44 +0200)]
llama : orion rope type is neox (#13261)

Sigbjørn Skjæret [Fri, 2 May 2025 10:40:56 +0000 (12:40 +0200)]
llama : plamo rope type is neox (#13260)

piDack [Fri, 2 May 2025 09:06:09 +0000 (17:06 +0800)]
llama-chat : reset glmedge chat template (#13253)

* reset glmedge chat template

* fix glmedge chat template

Shakil Ahmed [Fri, 2 May 2025 08:20:27 +0000 (14:20 +0600)]
mtmd-cli : fix out_of_range when input image path is empty (#13244)

* fix out_of_range error to keep the chat loop running

* Update examples/llava/mtmd-cli.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* mtmd-cli : load image right away

* add a new line for readability

* rm printf

* Update examples/llava/mtmd-cli.cpp

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update examples/llava/mtmd-cli.cpp

---------

Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>

Georgi Gerganov [Fri, 2 May 2025 06:48:31 +0000 (09:48 +0300)]
server : add cache reuse card link to help (#13230)

* server : add cache reuse card link to help

* args : use short url

Xuan-Son Nguyen [Fri, 2 May 2025 06:45:10 +0000 (08:45 +0200)]
convert : explicitly disable trust_remote_code for AutoConfig (#13246)

bandoti [Thu, 1 May 2025 22:06:39 +0000 (19:06 -0300)]
ci: fix cross-compile sync issues (#12804)

Justin Santa Barbara [Thu, 1 May 2025 21:32:11 +0000 (17:32 -0400)]
rpc : avoid uninitialized memory in serialize_tensor (#13210)

Zero out the name and padding buffers.

Jesse Gross [Thu, 1 May 2025 20:46:10 +0000 (13:46 -0700)]
ggml: Don't assert fail when tensor data changes (#13222)

The following scenario will cause an assertion failure in the graph
allocator:
 - Build and allocate a graph containing a tensor with a non-NULL data
   pointer
 - Build and allocate a new graph where that data is NULL

Result:
ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed

This happens during revalidation because we think that memory should
have been previously allocated based on the current graph but in
reality the previous graph was different. In this situation, we
should do a full reallocation pass.
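
A rough sketch of the failing sequence, using the public ggml-alloc API
(the context setup and the build_graph helper are illustrative
assumptions, not the actual code from the report):

    // first graph: one tensor already carries a non-NULL data pointer,
    // so the allocator records it as externally allocated
    ggml_gallocr_t galloc = ggml_gallocr_new(ggml_backend_cpu_buffer_type());

    struct ggml_cgraph * g1 = build_graph(ctx); // hypothetical helper
    ggml_gallocr_alloc_graph(galloc, g1);       // ok

    // second graph: the same tensor, but its data is now NULL
    struct ggml_cgraph * g2 = build_graph(ctx);
    ggml_gallocr_alloc_graph(galloc, g2);       // previously tripped
                                                // GGML_ASSERT(talloc->buffer_id >= 0);
                                                // now triggers a full reallocation pass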

Diego Devesa [Thu, 1 May 2025 19:48:08 +0000 (21:48 +0200)]
build : fix build info on windows (#13239)

* build : fix build info on windows

* fix cuda host compiler msg

Loïc Carrère [Thu, 1 May 2025 19:32:21 +0000 (21:32 +0200)]
clip : (minicpmv) Re-enable upscaling of images smaller than the CLIP image size (#13237)

matteo [Thu, 1 May 2025 19:16:38 +0000 (21:16 +0200)]
llama-chat : update GLM4 chat template (#13238)

* update GLM4 chat template

* Update chat template

Co-authored-by: Xuan-Son Nguyen <redacted>
---------

Co-authored-by: Xuan-Son Nguyen <redacted>

Jeff Bolz [Thu, 1 May 2025 18:49:39 +0000 (13:49 -0500)]
vulkan: Add bfloat16 support (#12554)

* vulkan: Add bfloat16 support

This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16.
The extension is required for coopmat multiply support, but matrix-vector
multiply trivially promotes bf16 to fp32 (see the bit-level sketch after
this entry) and doesn't require the extension.
The copy/get_rows shaders also don't require the extension.

It's probably possible to fall back to non-coopmat and promote to fp32 when
the extension isn't supported, but this change doesn't do that.

The coopmat support also requires a glslc that supports the extension, which
currently requires a custom build.

* vulkan: Support bf16 tensors without the bf16 extension or coopmat support

Compile a variant of the scalar mul_mm shader that will promote the bf16
values to float, and use that when either the bf16 extension or the coopmat
extension isn't available.

* vulkan: bfloat16 fixes (really works without bfloat16 support now)

* vulkan: fix spirv-val failure and reenable -O
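
For reference, the bit-level idea behind the "trivial" bf16 -> fp32
promotion mentioned above (the shaders do this in GLSL; this C++ sketch
only illustrates the principle): bfloat16 is the upper 16 bits of an
IEEE-754 float, so widening is a 16-bit shift.

    #include <cstdint>
    #include <cstring>

    // widen a bfloat16 (stored as uint16_t) to float; the low 16
    // mantissa bits of the resulting fp32 value are simply zero
    static float bf16_to_f32(uint16_t h) {
        uint32_t bits = (uint32_t) h << 16;
        float f;
        std::memcpy(&f, &bits, sizeof(f));
        return f;
    }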

Jeff Bolz [Thu, 1 May 2025 18:19:31 +0000 (13:19 -0500)]
vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (#13191)

* vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader

Johannes Gäßler [Thu, 1 May 2025 18:18:56 +0000 (20:18 +0200)]
test: non-cont. b in test-backend-ops -o MUL_MAT (#13187)

Georgi Gerganov [Thu, 1 May 2025 14:07:13 +0000 (17:07 +0300)]
sync : ggml

ggml-ci

Daniel Bevenius [Thu, 1 May 2025 08:05:24 +0000 (10:05 +0200)]
whisper : add check that target name exists (whisper/3103)

This commit adds a check to make sure that the target exists before
trying to add compile options to ignore warnings when using MSVC.

The motivation for this is that the build currently breaks depending on
the CMake options provided. With this fix it should be possible to build
even if the targets are not actually available.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3090#issuecomment-2842760104

Daniel Bevenius [Tue, 29 Apr 2025 13:47:55 +0000 (15:47 +0200)]
ggml : suppress Windows compiler warnings (whisper/3075)

* whisper: suppress Windows compiler warnings

This commit disables compiler warnings on Windows when using MSVC.

The motivation for these changes is that some compilers, for example
MSVC on Windows, generate warnings for these conversions, and there
are quite a few of them. This makes it harder to spot new warnings
that may be introduced, and it can also be difficult for users/embedders
of ggml to separate these warnings from their own.

* squash! whisper: suppress Windows compiler warnings

Move ggml related warnings into ggml. This commit also fixes the
indentation and adds a missing whitespace to the if statement.

Xuan-Son Nguyen [Thu, 1 May 2025 15:05:42 +0000 (17:05 +0200)]
mtmd : add **vision** support for Mistral Small 3.1 (#13231)

* convert ok

* load ok, missing patch merger

* ah sheet it works

* update llava/readme

* add test

* fix test

Xuan-Son Nguyen [Thu, 1 May 2025 08:23:25 +0000 (10:23 +0200)]
arg : remove CURLINFO_EFFECTIVE_METHOD (#13228)

Jared Van Bortel [Thu, 1 May 2025 07:09:41 +0000 (03:09 -0400)]
llama-model : fix the reported size class for nomic-embed-text-v2-moe (#13223)

Georgi Gerganov [Thu, 1 May 2025 06:59:02 +0000 (09:59 +0300)]
sync : ggml

Diego Devesa [Wed, 30 Apr 2025 13:20:40 +0000 (15:20 +0200)]
ggml : fix ggml_gallocr_ptr type (ggml/1205)

Georgi Gerganov [Thu, 24 Apr 2025 15:59:06 +0000 (18:59 +0300)]
cuda : fix unused variable compile warning (whisper/0)

ggml-ci

Johannes Gäßler [Wed, 30 Apr 2025 21:12:59 +0000 (23:12 +0200)]
CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199)

Xuan-Son Nguyen [Wed, 30 Apr 2025 20:29:15 +0000 (22:29 +0200)]
arg : -hf do not fail if url mismatch (#13219)

* arg : -hf do not fail if url mismatch

* do not return if cannot parse metadata json

ddh0 [Wed, 30 Apr 2025 20:28:43 +0000 (15:28 -0500)]
fix typo: `n_ctx_pre_seq` -> `n_ctx_per_seq` (#13221)

Xuan-Son Nguyen [Wed, 30 Apr 2025 14:56:24 +0000 (16:56 +0200)]
convert : improve model arch handling (#13122)

* convert : improve model arch handling

* use AutoConfig

* rm trust_remote_code

* Update convert_hf_to_gguf.py

* fix self.block_count for vision

* fix NomicBertModel

Tatsuya Tanaka [Wed, 30 Apr 2025 13:25:20 +0000 (22:25 +0900)]
llava : remove duplicate include (#13207)

Olivier Chafik [Wed, 30 Apr 2025 12:52:35 +0000 (13:52 +0100)]
common : add -jf / --json-schema-file flag (#12011)

Jeff Bolz [Wed, 30 Apr 2025 12:38:37 +0000 (07:38 -0500)]
vulkan: use uint array index to avoid glslang bug (#13193)

shalinib-ibm [Wed, 30 Apr 2025 11:17:08 +0000 (16:47 +0530)]
ggml : fix ppc64le build (#13176)

The build fails with a compilation error on PowerPC;
this patch fixes it.

Tested with unit tests run via
 cmake --build <build_dir> && cd <build_dir> && make test

Signed-off-by: Shalini Salomi Bodapati <redacted>

Xuan-Son Nguyen [Wed, 30 Apr 2025 11:06:15 +0000 (13:06 +0200)]
convert : correct typo image_mean --> image_std (#13208)

Aaron Teo [Wed, 30 Apr 2025 09:47:35 +0000 (17:47 +0800)]
feat(ggml-cpu): enable z17 compile (#13182)

z17 compilation requires GCC 15.1.0 or later

Signed-off-by: Aaron Teo <redacted>

Xuan-Son Nguyen [Wed, 30 Apr 2025 08:46:32 +0000 (10:46 +0200)]
arg : allow using -hf offline (#13202)

* arg : allow using -hf offline

* add more comments in code [no ci]

Xuan-Son Nguyen [Wed, 30 Apr 2025 08:44:07 +0000 (10:44 +0200)]
docker : do not build tests (#13204)

* docker : do not build tests

* include "ggml-cpu.h"

xiaofei [Wed, 30 Apr 2025 06:29:22 +0000 (14:29 +0800)]
rpc : fix cache directory initialization (#13188)

Signed-off-by: xiaofei <redacted>

Johannes Gäßler [Tue, 29 Apr 2025 21:32:04 +0000 (23:32 +0200)]
scripts: n_depth for compare-llama-bench [no ci] (#13201)

matteo [Tue, 29 Apr 2025 18:33:10 +0000 (20:33 +0200)]
server : Prefilling assistant message in openai compatible API (#13174)

* Prefilling assistant message in openai compatible API

* fixed indentation

* fixed code convention

* simplify method usage

* no more than one assistant message at end of messages

* merge checks into prefill code

* Update examples/server/utils.hpp

---------

Co-authored-by: matteo <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>

Georgi Gerganov [Tue, 29 Apr 2025 17:22:57 +0000 (20:22 +0300)]
sampling : when top-k <= 0 -> noop (#13173)

ggml-ci

Alberto Cabrera Pérez [Tue, 29 Apr 2025 15:24:36 +0000 (16:24 +0100)]
llama-bench: fixed size of fields to correctly map to values (#13183)

Johannes Gäßler [Tue, 29 Apr 2025 14:00:27 +0000 (16:00 +0200)]
CUDA: fix non-cont. inputs for batched mat mul (#13155)

Sigbjørn Skjæret [Tue, 29 Apr 2025 11:25:53 +0000 (13:25 +0200)]
llama : llm_type order by size (#13177)

Xuan-Son Nguyen [Tue, 29 Apr 2025 09:47:04 +0000 (11:47 +0200)]
mtmd : add qwen2vl and qwen2.5vl (#13141)

* llava : add clip_n_output_tokens, deprecate clip_n_patches

* mtmd : add qwen2vl and qwen2.5vl

* decode_embd_batch::set_position_...

* working version

* deprecate llama-qwen2vl-cli

* correct order W, H of clip_embd_nbytes_by_img

* edit existing line in hot topics

Sigbjørn Skjæret [Tue, 29 Apr 2025 09:00:31 +0000 (11:00 +0200)]
llama : set qwen3 model type sizes (#13175)

Xuan-Son Nguyen [Tue, 29 Apr 2025 06:45:49 +0000 (08:45 +0200)]
llama-graph : fix text position for mrope (#13159)

* llama-graph : fix text position for mrope

* fix typo

* explicitly set 4th dim in the loop

AT [Mon, 28 Apr 2025 19:52:15 +0000 (15:52 -0400)]
model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466)

* Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture

- Adds MoE-based embedding model supporting multilingual embeddings.
- Selects architecture variant based on hyperparameter detection (MoE layers).
- Removes unnecessary subclass initialization checks for clarity.

https://www.nomic.ai/blog/posts/nomic-embed-text-v2

Co-authored-by: Jared Van Bortel <redacted>
* fix tokenizer

* don't rename this tensor

---------

Co-authored-by: Jared Van Bortel <redacted>

Xuan-Son Nguyen [Mon, 28 Apr 2025 19:23:19 +0000 (21:23 +0200)]
clip : fix model size display (#13153)

Ville Vesilehto [Mon, 28 Apr 2025 18:00:20 +0000 (21:00 +0300)]
fix(rpc): Improve input validation and error handling (#13069)

* fix(rpc): Improve input validation and error handling

The `rpc-server` was vulnerable to Denial of Service attacks via
several RPC commands (`SET_TENSOR`, `GRAPH_COMPUTE`, etc.). Malformed
messages could trigger failed assertions (e.g., invalid `ggml_type`)
or out-of-bounds reads/writes leading to `GGML_ABORT` calls,
crashing the server process.

This PR introduces robust input validation and replaces `abort()`
calls with graceful error handling:

- **Type Validation:** `deserialize_tensor` now checks if the
  `tensor->type` is within the valid `GGML_TYPE_COUNT` range
  *before* calling `ggml_new_tensor_4d`. Returns `nullptr` on
  invalid type.
- **Bounds Checks:** Replaced `GGML_ABORT` in `set_tensor`,
  `set_tensor_hash`, and `get_tensor` handlers with error
  logging and returning `false` when data/offset parameters
  are out of buffer bounds.
- **Size Checks:** Added safe arithmetic checks (for overflow) in
  `graph_compute` when calculating required message sizes based
  on client-provided `n_nodes` and `n_tensors`. Returns early
  if the reported sizes conflict with the actual message size or
  would lead to overflow.
- **Error Propagation:**
    - `create_node` now checks for `nullptr` return values from
      `deserialize_tensor` and its recursive calls, propagating
      `nullptr` upwards on failure. Uses `find` instead of `at`
      for safer map access.
    - `copy_tensor` now checks for `nullptr` from `deserialize_tensor`
      and sets the response status to failure if deserialization
      or bounds checks fail.
    - `graph_compute` now checks for `nullptr` return from
      `create_node` and returns failure status correctly. The final
      return value now reflects the actual computation status.

These changes improve the RPC server's resilience
against malformed client requests, preventing crashes and ensuring
errors are handled more gracefully.
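
A hedged sketch of the two central checks described above; rpc_tensor,
the function shapes, and the error handling are simplified stand-ins for
the real ggml-rpc code:

    // type validation: reject out-of-range types before
    // ggml_new_tensor_4d can assert
    static ggml_tensor * deserialize_tensor(ggml_context * ctx, const rpc_tensor * t) {
        if (t->type >= GGML_TYPE_COUNT) {
            return nullptr; // caller reports failure instead of aborting
        }
        return ggml_new_tensor_4d(ctx, (ggml_type) t->type,
                                  t->ne[0], t->ne[1], t->ne[2], t->ne[3]);
    }

    // bounds check: validate offset/size before touching buffer memory
    static bool set_tensor_data(ggml_tensor * tensor, uint64_t offset,
                                const void * data, size_t size) {
        const size_t total = ggml_nbytes(tensor);
        if (offset > total || size > total - offset) {
            return false; // out of bounds: fail gracefully, no GGML_ABORT
        }
        memcpy((char *) tensor->data + offset, data, size);
        return true;
    }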

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): address pr comments

removed comments and unnecessary returns

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): ambiguous nullptr from create_node

rpc_server::create_node could previously return nullptr if the input ID
was 0 (valid) or if an internal error (deserialization, recursion
failure) occurred (invalid). This ambiguity made error handling
difficult for the caller (`graph_compute`).

This commit clarifies the meaning of nullptr:
- `graph_compute` now checks if the input 'id' was non-zero when
  `create_node` returns nullptr, correctly identifying failures
  versus intentional null links.
- `create_node` avoids recursive calls for zero IDs and propagates
  nullptr unambiguously on failure during recursion.
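
The caller-side disambiguation, sketched with simplified names (the real
logic lives in graph_compute):

    // id == 0 encodes an intentional null link (e.g. an unused src slot);
    // nullptr for a non-zero id means deserialization actually failed
    ggml_tensor * node = create_node(id, ctx, tensor_ptrs, tensor_map);
    if (node == nullptr && id != 0) {
        return false; // propagate failure to the RPC response
    }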

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): initial zero check in create_node

The caller (`graph_compute`) already checks `id != 0` when handling
a `nullptr` return from `create_node`, correctly distinguishing
intentional null links from actual errors. This makes the initial
`if (id == 0)` check redundant.

Also removes the log message, added earlier in this branch, that was
emitted when a tensor ID is not found in the provided map.

Signed-off-by: Ville Vesilehto <redacted>
* fix(rpc): Handle get_alloc_size failure in server

Check the return value of `server.get_alloc_size` in the RPC server
loop. If the call fails, return early to close the connection.

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): input size validation in graph_compute

Removes detailed, step-by-step size calculations and overflow
checks in favor of simpler direct comparisons, assuming 64-bit
overflow is unlikely.

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): remove extra status code setting

Removes the explicit setting of `response.result = GGML_STATUS_FAILED`
when `create_node` returns `nullptr` within `graph_compute`.
Primary signal is the `false` return value in case of failure.

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): remove redundant check for tensor->type

The check broke CI on ubuntu-cpu-make. The tensor type is a uint32_t,
so the check is not needed.

Signed-off-by: Ville Vesilehto <redacted>
---------

Signed-off-by: Ville Vesilehto <redacted>

Vishal Agarwal [Mon, 28 Apr 2025 14:50:39 +0000 (20:20 +0530)]
llama-bench: add `-d` depth arg (#13096)

* add depth param

* update llama-bench README and add depth param

* llama-bench: default params for depth arg for faster execution

* Update examples/llama-bench/README.md

Co-authored-by: Johannes Gäßler <redacted>
* fix buffer print ub

* use user provided args

* remove extra whitespaces

---------

Co-authored-by: Johannes Gäßler <redacted>

Xuan-Son Nguyen [Mon, 28 Apr 2025 14:12:56 +0000 (16:12 +0200)]
mtmd : fix glm-edge redundant token count (#13139)

* mtmd : fix glm-edge redundant token count

* fix chat template

* temporarily disable GLMEdge test chat tmpl

pockers21 [Mon, 28 Apr 2025 13:45:40 +0000 (06:45 -0700)]
context : do not clear output buffer on reserve (#13152)

Co-authored-by: pockers21 <redacted>

Xuan-Son Nguyen [Mon, 28 Apr 2025 12:20:56 +0000 (14:20 +0200)]
llama : (mrope) allow using normal 1D position for text token (#13138)

* llama : (mrope) use normal position for text token

* rm n_pos_per_embd from llm_graph_input_attn_temp

Xuan-Son Nguyen [Mon, 28 Apr 2025 10:18:59 +0000 (12:18 +0200)]
clip : refactor set input for cgraph + fix qwen2.5vl input (#13136)

* clip : refactor set input for cgraph

* more strict assert

* minicpmv : use clip_n_mmproj_embd instead of copying the same code everywhere

* split qwen2 and qwen2.5 code blocks

* minor style fix

Akarshan Biswas [Mon, 28 Apr 2025 09:33:25 +0000 (15:03 +0530)]
SYCL: Add all missing unary kernels (#13074)

* SYCL: Add all missing unary kernels

ggml-ci

* decouple kernel launch range from data size using a strided loop (see the sketch after this list)

* use ciel_div helper for num_blocks
ggml-ci

* clean auto imported header files
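
The strided-loop pattern referenced above, as a plain C++ sketch (the
real kernels are SYCL; global_id and global_range stand in for the
work-item index and total launch range):

    // each work-item handles elements id, id + range, id + 2*range, ...
    // so the launch range can be capped independently of n
    static void relu_strided(const float * x, float * dst, int64_t n,
                             int64_t global_id, int64_t global_range) {
        for (int64_t i = global_id; i < n; i += global_range) {
            dst[i] = x[i] > 0.0f ? x[i] : 0.0f;
        }
    }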

Georgi Gerganov [Mon, 28 Apr 2025 09:10:18 +0000 (12:10 +0300)]
readme : update hot topics (#13150)

Georgi Gerganov [Mon, 28 Apr 2025 08:57:19 +0000 (11:57 +0300)]
common : fix noreturn compile warning (#13151)

ggml-ci

Xuan-Son Nguyen [Mon, 28 Apr 2025 08:11:58 +0000 (10:11 +0200)]
llama-chat : fix typo GML --> GLM (#13143)

R0CKSTAR [Mon, 28 Apr 2025 07:33:28 +0000 (15:33 +0800)]
musa: fix typo in cc control (#13144)

Signed-off-by: Xiaodong Ye <redacted>