git.djapps.eu Git - pkg/ggml/sources/ggml/log
7 weeks ago  metal : fix loop bound in ggml_mem_ranges (llama/16412)
Georgi Gerganov [Fri, 3 Oct 2025 16:18:56 +0000 (19:18 +0300)]
metal : fix loop bound in ggml_mem_ranges (llama/16412)

7 weeks ago  ggml : fix graph reallocation with multiple chunks (llama/16396)
Acly [Fri, 3 Oct 2025 11:49:08 +0000 (13:49 +0200)]
ggml : fix graph reallocation with multiple chunks (llama/16396)

reallocation is needed if a single chunk grows in size,
even if total allocation size stays the same or is lower

7 weeks ago  vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (llama/16354)
Jeff Bolz [Fri, 3 Oct 2025 10:50:46 +0000 (05:50 -0500)]
vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE (llama/16354)

* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE

Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.

For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.

Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.

With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.

* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull

7 weeks ago  vulkan: Fix FA coopmat1 invalid array indexing (llama/16365)
Jeff Bolz [Fri, 3 Oct 2025 09:52:46 +0000 (04:52 -0500)]
vulkan: Fix FA coopmat1 invalid array indexing (llama/16365)

When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.

7 weeks ago  vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (llama/16316)
Jeff Bolz [Fri, 3 Oct 2025 08:33:08 +0000 (03:33 -0500)]
vulkan: in flash attention, bounds check against nem1 (don't rely on GGML_KQ_MASK_PAD) (llama/16316)

7 weeks ago  ggml webgpu: add support for soft_max, optimize rms_norm (llama/16357)
Reese Levine [Thu, 2 Oct 2025 18:00:31 +0000 (11:00 -0700)]
ggml webgpu: add support for soft_max, optimize rms_norm (llama/16357)

* Add inplace softmax

* Move rms_norm to split row approach

* Update debug for supports_op

* clean up debug statements

* Update tests/test-backend-ops.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago  model : Apertus model implementation (llama/15852)
Piotr Wilkin (ilintar) [Thu, 2 Oct 2025 17:43:22 +0000 (19:43 +0200)]
model : Apertus model implementation (llama/15852)

* First attempt

* No permute during convert (fixes qk tensors), proper norm application.

* RoPE = NeoX

* Coherence!

* Migrate xielu params from tensors to hyperparameters

* Simple CUDA kernel

* Revert stupid LLM refactorings

* Chat template support

* configchecker / flake8 errors

* Reorder unary.cu

* I do conclude that LLMs are, in fact, stupid.

* Fix after merge

* Final newline

* Make xIELU an UNARY_OP

* Final newline

* Correctly account for parameter shift

* Argh.

* Update ggml/src/ggml-cpu/unary-ops.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Refactor: remove unused methods, inline and factorize softplus, add const modifiers

* Revert CUDA changes, implement xIELU as a separate OP

* Pesky newline

* Add float2half / half2float for F16 inputs/outputs

* CUDA variants, attempt 2

* Actually, attempt 3

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <redacted>
* Missing convert header

* Proper formula and reference for xIELU in the comments.

* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <redacted>
* Add tensor mappings for Apertus to global list instead

* Fix lazy on scalars

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <redacted>
* Add comment about the constraints on positive/negative alpha

* Change `softplus` to `ggml_softplus`

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Johannes Gäßler <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
7 weeks ago  musa: update compile flags (llama/16265)
R0CKSTAR [Thu, 2 Oct 2025 13:29:56 +0000 (21:29 +0800)]
musa: update compile flags (llama/16265)

Signed-off-by: Xiaodong Ye <redacted>
7 weeks ago  HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)
uvos [Wed, 1 Oct 2025 21:09:25 +0000 (23:09 +0200)]
HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (llama/16221)

* HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0

rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA

* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn

7 weeks ago  vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)
Eve [Wed, 1 Oct 2025 07:56:36 +0000 (07:56 +0000)]
vulkan: make ggml_vk_default_dispatcher support older vulkan headers (llama/16345)

* make ggml_vk_default_dispatcher support older vulkan headers

* simplify with `using`

7 weeks ago  opencl: support pad_ext (llama/15888)
lhez [Tue, 30 Sep 2025 17:45:45 +0000 (10:45 -0700)]
opencl: support pad_ext (llama/15888)

7 weeks ago  ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)
Reese Levine [Tue, 30 Sep 2025 16:57:51 +0000 (09:57 -0700)]
ggml webgpu: support for rope,div,sub,glu,scale,cont operators (llama/16187)

* Work on rope

* Simplify inplace operation generation and combine mul/add generation

* Work on rope variants

* implement neox rope

* rope complete

* Add sub,div,glu operators

* implement scale op

* Update cpy shader to handle cont/more types

* formatting

* Update test vars printing for rope,rms_norm

* Avoid ROPE hardcoded constants

* Add TODO to change ROPE constants to enum

Co-authored-by: Georgi Gerganov <redacted>
* fix TODO comment

---------

Co-authored-by: Georgi Gerganov <redacted>
7 weeks ago  opencl: support ne3 in get_rows (llama/15866)
lhez [Tue, 30 Sep 2025 16:55:13 +0000 (09:55 -0700)]
opencl: support ne3 in get_rows (llama/15866)

2 months ago  ggml : bump version to 0.9.4 (#1363)  (tags: upstream/0.9.4, v0.9.4)
Georgi Gerganov [Tue, 30 Sep 2025 10:42:39 +0000 (13:42 +0300)]
ggml : bump version to 0.9.4 (#1363)

2 months ago  sync : whisper.cpp [no ci]
Georgi Gerganov [Tue, 30 Sep 2025 10:39:06 +0000 (13:39 +0300)]
sync : whisper.cpp [no ci]

2 months ago  sync : llama.cpp
Georgi Gerganov [Tue, 30 Sep 2025 08:18:45 +0000 (11:18 +0300)]
sync : llama.cpp

2 months ago  cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (llama/16328)
anavp-nvidia [Tue, 30 Sep 2025 08:13:22 +0000 (08:13 +0000)]
cuda : Enable CUDA Graph usage for Nemotron Nano v2 (NemotronH) (llama/16328)

* Fix Nemotron Nano v2 9B not executing as CUDA Graph on NVIDIA GPUs

* fix to ensure test-backend-ops check passes

2 months ago  metal : dynamic simdgroups for MV kernels (llama/16340)
Georgi Gerganov [Tue, 30 Sep 2025 08:03:23 +0000 (11:03 +0300)]
metal : dynamic simdgroups for MV kernels (llama/16340)

* metal : dynamic simdgroups for MV kernels

* cont : minor

2 months ago  kleidiai : fix work size and threads sync for fp16 (llama/16246)
Charles Xu [Tue, 30 Sep 2025 07:07:20 +0000 (09:07 +0200)]
kleidiai : fix work size and threads sync for fp16 (llama/16246)

2 months ago  tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences (llama/16295)
Jeff Bolz [Tue, 30 Sep 2025 00:26:34 +0000 (19:26 -0500)]
tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences (llama/16295)

* tests: override test_set_rows::max_nmse_err to allow for occasional rounding differences

* apply similar error bounds to test_cpy

2 months ago  sync : llama.cpp
Georgi Gerganov [Mon, 29 Sep 2025 14:53:28 +0000 (17:53 +0300)]
sync : llama.cpp

2 months ago  ggml: riscv: add riscv spacemit backend (llama/15288)
alex-spacemit [Mon, 29 Sep 2025 14:50:44 +0000 (22:50 +0800)]
ggml: riscv: add riscv spacemit backend (llama/15288)

* ggml: add spacemit backend

Change-Id: I249bdc043485d815a9c351867137bc1e27cc2e23

* add new line at end of file

Change-Id: I889ed1c85fb45e62350ecde0c06f70450cadfbe2

* add riscv zba extension limit

Change-Id: I321eb200f859751727afe5cae13074dfce2bb0ce

* fixed for review comments, file renamed and format

Change-Id: Ia20b6ec24a36638e62e0fe07cf100916a7cce3ce

* fixed for code format, after clang-format

Change-Id: I5dc33a0412da3d3f2d77075d8939185d3009eca2

* use _Float16 instead of __fp16

Change-Id: I039fb02bb95270e641bc4442204e658735859d43

* add ci for riscv64-spacemit-ime-native

Change-Id: I711c1033061df1a289ea77891b2997599dfe8279

* update debian-13-riscv64-spacemit-ime-native ci label

Change-Id: Ifb2b891e2fca57b5da604fce2ac255f27731179a

* remove license comment for spacemit ime

Change-Id: If0dc3ca30a958631ccca0a28b62e0b825f9fb0c3

* upgrade binutils for gcc ime

Change-Id: Ibf2fa74c1064408974cb5b45f044d40987e5fb45

* add spacemit ime cross jobs

Change-Id: I80d74909941d41cb9cd09e51d8baf01c985cbfc6

* remove native compile for riscv64-spacemit-ime

Change-Id: I01920afafdc73fa7424014fd648d243f8ec9e25e

* ci : add caching for spacemit ime cross toolchain

Change-Id: Ic54a192019a2fd982bbd58225ce3bbc38f4053de

* ci: bug fixed for cache path and env

Change-Id: I28c42e10b6fff053bb6580926ca2353448cb042a

* Update .github/workflows/build-linux-cross.yml for cache path

Co-authored-by: Sigbjørn Skjæret <redacted>
* fix syntax error in build-linux-cross.yml

Co-authored-by: Sigbjørn Skjæret <redacted>
---------

Co-authored-by: cailinxi <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago  ggml-backend : add root cause in error message if loading backend library fails (llama/16172)
Rafal Lewczuk [Mon, 29 Sep 2025 11:17:09 +0000 (13:17 +0200)]
ggml-backend : add root cause in error message if loading backend library fails (llama/16172)

This PR adds additional information to the error message when loading a backend library via ld_load_library() fails. This helps spot why the backend library did not load (missing library, missing dependency, unresolved symbol, etc.).

2 months ago  sync : whisper.cpp (#1359)
Georgi Gerganov [Mon, 29 Sep 2025 13:49:11 +0000 (16:49 +0300)]
sync : whisper.cpp (#1359)

* ggml : Fix MKL detection by quoting BLAS_INCLUDE_DIRS (whisper/3426)

* sync : whisper.cpp

2 months ago  ci : print results [no ci] (#1358)
Georgi Gerganov [Mon, 29 Sep 2025 13:20:52 +0000 (16:20 +0300)]
ci : print results [no ci] (#1358)

2 months ago  ci : add self-hosted workflows (#1357)
Georgi Gerganov [Mon, 29 Sep 2025 12:15:13 +0000 (15:15 +0300)]
ci : add self-hosted workflows (#1357)

* ci : add self-hosted workflows

* ci : sync env vars with llama.cpp

* cont : always install python deps

* cont : cancel ongoing runs

* cont : exclude test-backend-ops from debug build

2 months ago  cmake : remove metal flag (llama/0)
Georgi Gerganov [Mon, 29 Sep 2025 09:33:38 +0000 (12:33 +0300)]
cmake : remove metal flag (llama/0)

2 months ago  sync : llama.cpp
Georgi Gerganov [Mon, 29 Sep 2025 09:31:53 +0000 (12:31 +0300)]
sync : llama.cpp

2 months ago  ggml : check cuda and metal argsort limits and add test (llama/16323)
Sigbjørn Skjæret [Mon, 29 Sep 2025 09:09:00 +0000 (11:09 +0200)]
ggml : check cuda and metal argsort limits and add test (llama/16323)

* check cuda argsort limits and add test

* add metal check

2 months ago  ggml : fix dependencies for ggml_set_rows (llama/16318)
Georgi Gerganov [Mon, 29 Sep 2025 05:41:28 +0000 (08:41 +0300)]
ggml : fix dependencies for ggml_set_rows (llama/16318)

2 months ago  vulkan: Fix validation failure in quantized flash attention (llama/16292)
Jeff Bolz [Mon, 29 Sep 2025 04:50:37 +0000 (23:50 -0500)]
vulkan: Fix validation failure in quantized flash attention (llama/16292)

2 months ago  ggml : fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32 (llama/16307)
Sigbjørn Skjæret [Sun, 28 Sep 2025 21:15:03 +0000 (23:15 +0200)]
ggml : fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32 (llama/16307)

* fix GGML_F32_VEC_FMA argument order in ggml_vec_mad1_f32

* add test that fails on simd

2 months ago  vulkan: 64-bit im2col (llama/16135)
Jeff Bolz [Sun, 28 Sep 2025 06:38:37 +0000 (01:38 -0500)]
vulkan: 64-bit im2col (llama/16135)

* vulkan: 64-bit im2col

Add variants of the im2col shaders that use buffer_device_address/buffer_reference,
and use 64-bit address calculations. This is needed for large convolutions used in
stable-diffusion.cpp.

* fix validation error for large im2col

2 months ago  metal : extend mat-mat multiplication support (llama/16225)
Georgi Gerganov [Sun, 28 Sep 2025 06:34:44 +0000 (09:34 +0300)]
metal : extend mat-mat multiplication support (llama/16225)

* metal : support mul_mm with src1->type == GGML_TYPE_F16

* metal : support mul_mm_id with src1->type == GGML_TYPE_F16

[no ci]

* metal : mul_mm support ne00 % 32 != 0

* metal : support mul_mm_id with ne00 % 32 != 0

* cont : remove unnecessary unrolls

* cont : simplify data loading

* metal : optimize mul_mm when output bounds checks are not needed

2 months ago  metal : fuse non-sequential nodes (llama/16102)
Georgi Gerganov [Sun, 28 Sep 2025 06:34:05 +0000 (09:34 +0300)]
metal : fuse non-sequential nodes (llama/16102)

* metal : fuse non-sequential nodes

* cont : add comment

* cont : simplify bounds checks

2 months ago  vulkan: handle mat_mul with A matrix > 4GB (llama/16176)
Jeff Bolz [Sun, 28 Sep 2025 01:36:34 +0000 (20:36 -0500)]
vulkan: handle mat_mul with A matrix > 4GB (llama/16176)

* vulkan: handle mat_mul with A matrix > 4GB

This change splits mat_mul operations with a huge A matrix into chunks in the M
dimension. This works well for stable-diffusion use cases where the im2col
matrix has a very large M.

Fix the order of setting the stride in mul_mm_cm2 - setting the dimension
clobbers the stride, so the stride should be set afterwards.

* build fixes

2 months ago  vulkan: support arbitrary KV dimension in flash attention (llama/16160)
Jeff Bolz [Sat, 27 Sep 2025 20:43:39 +0000 (16:43 -0400)]
vulkan: support arbitrary KV dimension in flash attention (llama/16160)

The "Clamp" spec constant is already based on whether KV is a multiple of Bc,
so use that to control whether bounds checking is performed. Add bounds checking
to the scalar and coopmat1 paths. Coopmat2 didn't need any changes (the K/V
tensors are already optionally clamped, nothing else needed to be changed).

2 months ago  vulkan : make the vulkan.hpp dynamic dispatcher instance private (llama/16224)
Acly [Sat, 27 Sep 2025 20:41:03 +0000 (22:41 +0200)]
vulkan : make the vulkan.hpp dynamic dispatcher instance private (llama/16224)

* don't use VULKAN_HPP_DEFAULT_DISPATCH_LOADER_DYNAMIC_STORAGE, which can cause conflicts if the application or other libraries do the same

2 months ago  CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32 (llama/16277)
Aman Gupta [Sat, 27 Sep 2025 16:49:32 +0000 (00:49 +0800)]
CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32 (llama/16277)

* CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32

This commit adds mul_mat_id support for ncols_dst >= 16. It does this by
packing ncols_dst tiles into the blockDim.y.

My tests on an RTX 3090 show that this is faster than the cuBLAS fallback
for f16 up to bs=64, and for f32 up to bs=32.

* Review: refactor if statement

2 months ago  CUDA: refactor and deduplicate vector FA kernels (llama/16208)
Johannes Gäßler [Sat, 27 Sep 2025 16:45:07 +0000 (18:45 +0200)]
CUDA: refactor and deduplicate vector FA kernels (llama/16208)

* CUDA: refactor and deduplicate vector FA kernels

2 months ago  vulkan: throw system error instead of SIGABRT during init on older devices (llama/16156)
Dmytro Minochkin [Sat, 27 Sep 2025 16:26:46 +0000 (19:26 +0300)]
vulkan: throw system error instead of SIGABRT during init on older devices (llama/16156)

* Throw system error on old Vulkan driver rather than SIGABRT

* Optionally handle any potential error in vulkan init

2 months ago  vulkan: support GET_ROWS for k-quants (llama/16235)
Jeff Bolz [Sat, 27 Sep 2025 10:36:11 +0000 (06:36 -0400)]
vulkan: support GET_ROWS for k-quants (llama/16235)

The dequantize functions are copy/pasted from mul_mm_funcs.comp with very few
changes - add a_offset and divide iqs by 2. It's probably possible to call
these functions from mul_mm_funcs and avoid the duplication, but I didn't go
that far in this change.

2 months ago  devops: add s390x & ppc64le CI (llama/15925)
Aaron Teo [Fri, 26 Sep 2025 18:03:33 +0000 (02:03 +0800)]
devops: add s390x & ppc64le CI (llama/15925)

* devops: move s390x and ppc64le ci build

we have access to ubuntu-24.04-s390x and ppc64le images now

Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le for now since they have compiler errors

Signed-off-by: Aaron Teo <redacted>
* devops: stop warnings as errors

Signed-off-by: Aaron Teo <redacted>
* devops: switch to non-macro flag

Signed-off-by: Aaron Teo <redacted>
* devops: going the llama macro route

Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian gguf test models

Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le to test s390x, check test build

Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.inp files for big-endian tests

Signed-off-by: Aaron Teo <redacted>
* devops: dup .gguf.out files for big-endian too

Signed-off-by: Aaron Teo <redacted>
* devops: add python setup and endian byteswap

Signed-off-by: Aaron Teo <redacted>
* devops: poor thing does not have s390x python3

Signed-off-by: Aaron Teo <redacted>
* devops: add missing rust compiler for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: try rust actions runner

Signed-off-by: Aaron Teo <redacted>
* Revert "devops: try rust actions runner"

This reverts commit 3f8db04356033d6c1d7eccc75ca396bc5298250c.

Signed-off-by: Aaron Teo <redacted>
* devops: try a different path for rust

Signed-off-by: Aaron Teo <redacted>
* devops: dump home directory and user info

Signed-off-by: Aaron Teo <redacted>
* devops: install gguf-py only

Signed-off-by: Aaron Teo <redacted>
* devops: missed relative path

Signed-off-by: Aaron Teo <redacted>
* devops: remove big-endian files since local swapping is working

Signed-off-by: Aaron Teo <redacted>
* devops: revert test-tokenizer-0 cmakelists

Signed-off-by: Aaron Teo <redacted>
* Fix unicode flags conversion from and to uint16_t

Bitfields are allocated in different order on s390x

Signed-off-by: Aaron Teo <redacted>
* Simplify byteswap command

Signed-off-by: Aaron Teo <redacted>
* Add byteswapping and git-lfs for test-tokenizers-ggml-vocabs

Signed-off-by: Aaron Teo <redacted>
* Fix endianness detection in vocab loader

Signed-off-by: Aaron Teo <redacted>
* Disable test-thread-safety on s390x

In this test a model is downloaded,
then immediately loaded to check if more downloads are needed,
and then used for the test.

There is no clean way to separate all those steps
to add byteswapping between them, so just skip this test.

Signed-off-by: Aaron Teo <redacted>
* Fix q8_0 test in test-quantize-fns

vec_signed uses an unexpected rounding mode.
Explicitly use a different rounding function.

Signed-off-by: Aaron Teo <redacted>
* devops: add big-endian stories260K

Signed-off-by: Aaron Teo <redacted>
* devops: add s390x test-eval-callback

Signed-off-by: Aaron Teo <redacted>
* devops: fix test does not exist

Signed-off-by: Aaron Teo <redacted>
* devops: fix model not found llama-eval-callback

Signed-off-by: Aaron Teo <redacted>
* Fix q3_K dot product error in test-quantize-fns on s390x

Array q8bytes had only 4 elements allocated, but 8 elements were accessed.
This led to out-of-bounds writes, later out-of-bounds reads of the
overwritten values, and an incorrect result.

Signed-off-by: Aaron Teo <redacted>
* devops: re-enable ppc64le for testing

Signed-off-by: Aaron Teo <redacted>
* devops: activate test-thread-safety for s390x

Signed-off-by: Aaron Teo <redacted>
* devops: disable ppc64le tests

for some reason it keeps failing the test-thread-safety tests and I do not
have a machine that can replicate the failures.

Signed-off-by: Aaron Teo <redacted>
* devops: LLAMA_FATAL_WARNINGS=ON

Signed-off-by: Aaron Teo <redacted>
* Correct repository URL for s390x for test-thread-safety model

Signed-off-by: Aaron Teo <redacted>
* Fix fs_get_cache_directory

Ensure it works even if both XDG_CACHE_HOME and HOME are unset.
This might happen in containers.

Signed-off-by: Aaron Teo <redacted>
* Re-enable CI for ppc64le

Signed-off-by: Aaron Teo <redacted>
* Fortify ggml_rope_impl

Only memcpy data from sections argument if it's non-NULL.

Signed-off-by: Aaron Teo <redacted>
* Add TODO in struct unicode_cpt_flags to reimplement it in endian-independent way

* Update URL for big-endian model

* Update .github/workflows/build.yml

Co-authored-by: Sigbjørn Skjæret <redacted>
* Update remaining mentions of BE models to ggml-org/models repo

---------

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
2 months ago  metal : report OOM errors (llama/16274)
Georgi Gerganov [Fri, 26 Sep 2025 11:14:28 +0000 (14:14 +0300)]
metal : report OOM errors (llama/16274)

2 months ago  common : use cpp-httplib as a cURL alternative for downloads (llama/16185)
Adrien Gallouët [Fri, 26 Sep 2025 11:12:19 +0000 (13:12 +0200)]
common : use cpp-httplib as a cURL alternative for downloads (llama/16185)

* vendor : update httplib

Signed-off-by: Adrien Gallouët <redacted>
* common : use cpp-httplib as a cURL alternative for downloads

The existing cURL implementation is intentionally left untouched to
prevent any regressions and to allow for safe, side-by-side testing by
toggling the `LLAMA_CURL` CMake option.

Signed-off-by: Adrien Gallouët <redacted>
* ggml : Bump to Windows 10

Signed-off-by: Adrien Gallouët <redacted>
---------

Signed-off-by: Adrien Gallouët <redacted>
2 months ago  ggml-cpu: implement MXFP4 SIMD for s390x (llama/16193)
Aaron Teo [Fri, 26 Sep 2025 10:27:25 +0000 (18:27 +0800)]
ggml-cpu: implement MXFP4 SIMD for s390x (llama/16193)

* ggml-cpu: impl mxfp4 s390x

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing s = sumf

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix incorrect kval_mxfp4 type

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rework mxfp4

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: missing delta calc

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: fix typo for vec_splats

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: expand to 2 blocks per loop

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: add unroll to boost perf

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: back to 1 block per loop to test perf

Signed-off-by: Aaron Teo <redacted>
* Revert "ggml-cpu: back to 1 block per loop to test perf"

This reverts commit 1fe55724e2dc295701101bf838bdd4a512237492.

Signed-off-by: Aaron Teo <redacted>
* ggml-cpu: rm unroll from single block

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
2 months ago  musa: fix build warnings (llama/15611)
R0CKSTAR [Fri, 26 Sep 2025 00:56:10 +0000 (08:56 +0800)]
musa: fix build warnings (llama/15611)

Signed-off-by: Xiaodong Ye <redacted>
2 months ago  CUDA: add a fused top-K MoE kernel (llama/16130)
Aman Gupta [Thu, 25 Sep 2025 14:35:05 +0000 (22:35 +0800)]
CUDA: add a fused top-K MoE kernel (llama/16130)

* CUDA: add a fused top-K MoE kernel

This kernel does the following:
1. softmax over the logits per token [n_experts, n_tokens]
2. argmax reduce over the top-k (n_experts_used) logits
3. write weights + ids to global memory

It is intended as a fusion of the softmax->top-k->get_rows pipeline for MoE models

* Refactor into ggml_cuda_should_use_topk_moe

* Review: Use better coalescing pattern, use WARP_SIZE, store logits into registers before

* Review: format + micro-optimizations

* Fix bug: fix tie breakers

* Add optional norm + clean-up code

* Use smem for final write

* Add bounds check

* Use better memory pattern for writeback

2 months ago  ggml : fix loongarch lsx compilation error (llama/15864)
junchao-zhao [Thu, 25 Sep 2025 09:22:55 +0000 (17:22 +0800)]
ggml : fix loongarch lsx compilation error (llama/15864)

2 months ago  ggml : remove -dev suffix from release version (#1355)
Daniel Bevenius [Fri, 26 Sep 2025 15:34:42 +0000 (17:34 +0200)]
ggml : remove -dev suffix from release version (#1355)

This commit removes the `-dev` suffix from the version string in
CMakeLists.txt and the release script. The version will now be
formatted simply as `MAJOR.MINOR.PATCH`.

2 months ago  pkg-config: include the new GGML_VERSION as a version (#1348)
Christoph Reiter [Thu, 25 Sep 2025 16:59:38 +0000 (18:59 +0200)]
pkg-config: include the new GGML_VERSION as a version (#1348)

Instead of hardcoding 0.0.0

If the 0.9.2-dev version is supposed to mean "less than" 0.9.2 instead of equal,
then the .pc version format would need to be changed to 0.9.2dev, since pkgconf
uses RPM version comparison. But let's keep it simple for now.

The version in CMake was added in #1336

2 months ago  examples : fix typo mismatch in gpt (#1349)
hebangwen [Thu, 25 Sep 2025 15:39:30 +0000 (23:39 +0800)]
examples : fix typo mismatch in gpt (#1349)

2 months ago  ggml : bump version to 0.9.3 (#1353)  (tag: v0.9.3)
Daniel Bevenius [Thu, 25 Sep 2025 12:39:05 +0000 (14:39 +0200)]
ggml : bump version to 0.9.3 (#1353)

2 months ago  scripts : refactor release script into prepare and finalize stages (#1352)
Daniel Bevenius [Thu, 25 Sep 2025 10:38:50 +0000 (12:38 +0200)]
scripts : refactor release script into prepare and finalize stages (#1352)

This commit splits the release process into two distinct stages:

* prepare_release: This stage handles updating the version and
  creating a new branch with the version change. That branch should then
  be used to open a PR for review. Once the PR has been merged, the
  finalize_release stage can be run.

* finalize_release: This stage must be run on master, and master must
  have the version bump commit (this is checked). This stage handles
  tagging, and also creates a new branch for updating to the next
  development version. The tag should then be pushed to the remote,
  which will trigger the release process on GitHub. The branch should be
  used to open a new PR for the development version update.

2 months ago  scripts : fix next dev version calculation [no ci] (#1351)
Daniel Bevenius [Thu, 25 Sep 2025 09:43:51 +0000 (11:43 +0200)]
scripts : fix next dev version calculation [no ci] (#1351)

This commit updates the release script to correctly calculate the next
development version after a release.

The motivation is that the script currently increments the version again
for the next development cycle, but the version has already been set to
the updated value, so it should not be incremented twice.

2 months ago  sync : llama.cpp
Georgi Gerganov [Thu, 25 Sep 2025 08:40:17 +0000 (11:40 +0300)]
sync : llama.cpp

2 months ago  metal : fuse NORM + MUL + ADD, support non-multiples of 4 (llama/16220)
Georgi Gerganov [Thu, 25 Sep 2025 08:30:16 +0000 (11:30 +0300)]
metal : fuse NORM + MUL + ADD, support non-multiples of 4 (llama/16220)

* metal : fuse NORM + MUL + ADD

* metal : support norms of non-multiple of 4

* cont : fix comment [no ci]

2 months ago  metal : relax reorder conditions (llama/16216)
Georgi Gerganov [Thu, 25 Sep 2025 08:29:42 +0000 (11:29 +0300)]
metal : relax reorder conditions (llama/16216)

2 months ago  metal : restore im2col perf (llama/16219)
Georgi Gerganov [Thu, 25 Sep 2025 08:29:08 +0000 (11:29 +0300)]
metal : restore im2col perf (llama/16219)

2 months ago  sync : llama.cpp
Georgi Gerganov [Thu, 25 Sep 2025 08:20:29 +0000 (11:20 +0300)]
sync : llama.cpp

2 months ago  rpc : use ggml logging facilities
Radoslav Gerganov [Thu, 25 Sep 2025 07:20:02 +0000 (10:20 +0300)]
rpc : use ggml logging facilities

Use RPC_DEBUG environment variable to enable debug messages.
Add helper macro LOG_DBG() which does an early
check of the env var before calling GGML_LOG_DEBUG().
Make sure we log a debug message for every server function.

2 months ago  ci: run the x64 and arm ci on the github machines instead (llama/16183)
Eve [Thu, 25 Sep 2025 05:06:06 +0000 (05:06 +0000)]
ci: run the x64 and arm ci on the github machines instead (llama/16183)

* run the x64 ci on regular machines

* set up the same thing for arm

fix test-quantize-perf just like #12306

* try to disable sve

* add another sve run

2 months ago  llama: print memory breakdown on exit (llama/15860)
Johannes Gäßler [Wed, 24 Sep 2025 14:53:48 +0000 (16:53 +0200)]
llama: print memory breakdown on exit (llama/15860)

* llama: print memory breakdown on exit

2 months ago  ggml : split graph allocations according to backend max buffer size (llama/15815)
Acly [Wed, 24 Sep 2025 14:17:49 +0000 (16:17 +0200)]
ggml : split graph allocations according to backend max buffer size (llama/15815)

* ggml : make gallocr respect the backend's max buffer size

* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers
* vulkan: report the actual max allocation size in buffer type interface

* fix missing newline, apple-clang warning

* track size of individual chunks in ggml_dyn_tallocr and raise max chunks.
revert to using suballocation_block_size as max chunk size for vulkan.

* track (chunk, offset) pairs instead of "global" offsets through gallocr.

* simpler, don't need loops to map between local/global offsets
* touches more code

* fix dyn_tallocr_max_size and initialization

* fix memory leak when buffers are reused due to same buffer type appearing multiple times

* make vbuffer allocation follow the same logic as backend_buffer did before

* continue to use leftover unallocated space of previous chunks after a new one has been created

* treat free blocks of each chunk as separate list
* they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size

* refactor: move adding new free block and new chunk into separate functions

* allocate chunks individually with a separate free-blocks list for each one

* needs a bit more memory/allocations/indirections, but code is simpler

* fix warnings (missing static) & debug checks

2 months ago  ggml-cpu: Respect cpumask settings (llama/16164)
Xiangyan Sun [Tue, 23 Sep 2025 08:58:12 +0000 (01:58 -0700)]
ggml-cpu: Respect cpumask settings (llama/16164)

2 months ago  ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (llama/15928)
Sigbjørn Skjæret [Tue, 23 Sep 2025 08:25:20 +0000 (10:25 +0200)]
ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (llama/15928)

* fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl

* change initialization to true

2 months ago  zdnn: refactor codebase + add docs (llama/16178)
Aaron Teo [Tue, 23 Sep 2025 06:53:05 +0000 (14:53 +0800)]
zdnn: refactor codebase + add docs (llama/16178)

* zdnn: initial matmul refactor

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rm static from funcs

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update ggml-zdnn.h

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: change header files to hpp

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to common.hpp

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move mulmat forward around

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rm inline from utils

Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup

Signed-off-by: Aaron Teo <redacted>
* docs: add zDNN docs

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
2 months ago ggml-cpu : fix typo in gemm comments [no ci] (llama/16189)
Daniel Bevenius [Tue, 23 Sep 2025 03:59:03 +0000 (05:59 +0200)]
ggml-cpu : fix typo in gemm comments [no ci] (llama/16189)

2 months ago ggml : implement set_rows with i32 index (llama/16159)
Sigbjørn Skjæret [Mon, 22 Sep 2025 17:13:00 +0000 (19:13 +0200)]
ggml : implement set_rows with i32 index (llama/16159)

* implement set_rows with i32 index

* template fix

* test quantized path

warnings--

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* forgotten name change

* deduplicate cuda/sycl and test-fix

* indent++

* vulkan: support set_rows with i32 index type (llama/16162)

* disable i32 index for webgpu for now

---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Jeff Bolz <redacted>
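The idea of templating the row scatter over the index type, so one kernel body serves both i64 and the new i32 indices, can be sketched as follows (hypothetical shape; the real kernels live in the CPU/CUDA/SYCL/Vulkan backends):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// set_rows-style scatter: row r of src is written to row idx[r] of dst.
// Templating over idx_t lets the same body handle int32_t and int64_t indices.
template <typename idx_t>
void set_rows(float * dst, const float * src, const idx_t * idx,
              int64_t n_rows, int64_t n_cols) {
    for (int64_t r = 0; r < n_rows; ++r) {
        std::copy(src + r * n_cols, src + (r + 1) * n_cols,
                  dst + (int64_t) idx[r] * n_cols);
    }
}
```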
2 months ago ggml : extend ggml_can_fuse to work with non-sequential nodes (llama/16123)
Georgi Gerganov [Mon, 22 Sep 2025 08:12:37 +0000 (11:12 +0300)]
ggml : extend ggml_can_fuse to work with non-sequential nodes (llama/16123)

* ggml : extend ggml_can_fuse to work with non-sequential nodes in the graph

* cont : fix wrong bounds check condition

* cont : remove unnecessary overload

2 months ago ggml : add ggml_op_is_empty (llama/16122)
Georgi Gerganov [Mon, 22 Sep 2025 08:12:09 +0000 (11:12 +0300)]
ggml : add ggml_op_is_empty (llama/16122)

* ggml : add ggml_op_is_empty

* ggml : move to ggml-impl.h

2 months ago Vulkan: add conv_transpose_2d operation (llama/16022)
Shin-myoung-serp [Mon, 22 Sep 2025 08:04:01 +0000 (17:04 +0900)]
Vulkan: add conv_transpose_2d operation (llama/16022)

* Vulkan: add conv_transpose_2d operation

* Vulkan: fix typo in conv_transpose_2d shader(s0mp, s0L, s1mp, s1L)

* Vulkan: fix incorrect indentation in conv_transpose_2d shader

* Vulkan: add a check for the push constants size limit and reuse conv2d_mm.comp for the conv_transpose_2d operation

* Vulkan: revert the order of the index calculation and bound check in conv_2d shader

* Vulkan: explicitly check the push constants limit in supports_op() for the conv_transpose_2d operation.

* Vulkan: remove unnecessary lower bound checks for H/W_idx in the conv_2d shader.
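The supports_op() guard can be pictured like this; the struct layout and the helper name are stand-ins, not the actual conv_transpose_2d push-constant block. Vulkan only guarantees maxPushConstantsSize >= 128 bytes, hence the explicit check:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical push-constant block for a conv_transpose_2d shader:
// strides, paddings, and tensor extents packed as 4-byte fields (64 bytes).
struct conv_transpose_2d_push_consts {
    int32_t  dims[12];
    uint32_t misc[4];
};

// supports_op-style guard: reject the op if the shader's push-constant
// block would exceed the device's maxPushConstantsSize limit.
static bool push_consts_fit(size_t max_push_constants_size) {
    return sizeof(conv_transpose_2d_push_consts) <= max_push_constants_size;
}
```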

2 months ago vulkan: add RTE variants of exp shader (llama/16165)
Jeff Bolz [Mon, 22 Sep 2025 05:37:17 +0000 (00:37 -0500)]
vulkan: add RTE variants of exp shader (llama/16165)

This fixes some failures on Turing where "round to zero" rounds to the max f16
value but the CPU reference value is infinite.
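Why the rounding mode matters near the f16 maximum can be shown with a simplified float32 -> float16 conversion (positive inputs in the f16 normal range only; a sketch, not the shader code): under round-to-nearest-even the mantissa increment can carry into the exponent and produce +inf, matching the CPU reference, while truncation clamps at 65504.

```cpp
#include <cstdint>
#include <cstring>

// Convert a positive normal float to f16 bits, with selectable rounding:
// rte=true rounds to nearest even, rte=false truncates (round to zero).
static uint16_t f32_to_f16(float f, bool rte) {
    uint32_t x;
    std::memcpy(&x, &f, sizeof(x));
    int32_t  exp  = (int32_t)((x >> 23) & 0xFF) - 127 + 15;  // rebias exponent
    uint32_t mant = x & 0x7FFFFFu;
    if (exp >= 31) return 0x7C00;                            // overflow: +inf
    uint16_t h = (uint16_t)((exp << 10) | (mant >> 13));
    if (rte) {
        uint32_t rem = mant & 0x1FFFu;                       // dropped bits
        if (rem > 0x1000u || (rem == 0x1000u && (h & 1))) {
            h++;  // may carry 0x7BFF (65504) into 0x7C00 (+inf)
        }
    }
    return h;
}
```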

2 months ago vulkan: vec dot matrix multiplication fix (llama/16151)
Ruben Ortlam [Mon, 22 Sep 2025 05:22:43 +0000 (07:22 +0200)]
vulkan: vec dot matrix multiplication fix (llama/16151)

* vulkan: fix matrix multiplication index calculation for odd m/n and odd k in combination with batching

* add odd m/n + odd k test with batching

2 months ago opencl: fix concat crash on win arm64 with Adreno (llama/15944)
lhez [Sun, 21 Sep 2025 23:42:10 +0000 (16:42 -0700)]
opencl: fix concat crash on win arm64 with Adreno (llama/15944)

2 months ago opencl: initial `q8_0` mv support (llama/15732)
lhez [Sun, 21 Sep 2025 21:48:44 +0000 (14:48 -0700)]
opencl: initial `q8_0` mv support (llama/15732)

2 months ago vulkan: optimize UMA buffer operations and fix driver hangs (llama/16059)
Giuseppe Scrivano [Sun, 21 Sep 2025 06:31:55 +0000 (08:31 +0200)]
vulkan: optimize UMA buffer operations and fix driver hangs (llama/16059)

* vulkan: optimize UMA buffer operations and fix driver hangs

The previous implementation was blocking the GPU for extended periods,
causing the i915 driver to reset the context due to the hangcheck
protection.

[32628.443070] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:1:85dffffb, in llama-server [194114]
[32628.443091] i915 0000:00:02.0: [drm] llama-server[194114] context reset due to GPU hang

* vulkan: implement deferred_memset on UMA

---------

Signed-off-by: Giuseppe Scrivano <redacted>
2 months ago vulkan: fix validation error about VK_PIPELINE_CREATE_CAPTURE_STATISTICS_BIT_KHR...
Jeff Bolz [Sun, 21 Sep 2025 06:23:37 +0000 (01:23 -0500)]
vulkan: fix validation error about VK_PIPELINE_CREATE_CAPTURE_STATISTICS_BIT_KHR (llama/16086)

2 months ago ggml : prepare for development of 0.9.2-dev
Georgi Gerganov [Sat, 20 Sep 2025 13:44:23 +0000 (16:44 +0300)]
ggml : prepare for development of 0.9.2-dev

2 months ago ggml : bump version to 0.9.1
Georgi Gerganov [Sat, 20 Sep 2025 13:44:23 +0000 (16:44 +0300)]
ggml : bump version to 0.9.1

2 months ago scripts : fix sed usage to work on Mac (#1345)
Georgi Gerganov [Sat, 20 Sep 2025 13:44:08 +0000 (16:44 +0300)]
scripts : fix sed usage to work on Mac (#1345)

2 months ago tests : adjust to new timestep_embedding operator
Georgi Gerganov [Sat, 20 Sep 2025 10:16:10 +0000 (13:16 +0300)]
tests : adjust to new timestep_embedding operator

2 months ago sync : llama.cpp
Georgi Gerganov [Sat, 20 Sep 2025 10:09:41 +0000 (13:09 +0300)]
sync : llama.cpp

2 months ago vulkan: use vec dot for matrix matrix multiplications (llama/16056)
Ruben Ortlam [Sat, 20 Sep 2025 08:42:56 +0000 (10:42 +0200)]
vulkan: use vec dot for matrix matrix multiplications (llama/16056)

* vulkan: Change the mul_mm shared memory and register caching system to use vec2 instead of scalars, to enable using dot2 instructions

* use fma instead of dot to fix Nvidia and Apple performance issues
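The dot-versus-fma contrast can be illustrated in plain C++ (a stand-in for the shader code; names are hypothetical): both compute the same vec2 accumulation, but the second form maps directly onto fused multiply-add units.

```cpp
#include <cmath>

struct vec2 { float x, y; };

// dot2-style accumulation: compute the 2-element dot product, then add.
static float acc_dot2(vec2 a, vec2 b, float acc) {
    return acc + (a.x * b.x + a.y * b.y);
}

// fma chain: two fused multiply-adds, one per vec2 component.
static float acc_fma(vec2 a, vec2 b, float acc) {
    acc = std::fma(a.x, b.x, acc);
    acc = std::fma(a.y, b.y, acc);
    return acc;
}
```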

2 months ago ggml : refactor forward_dup for cpu backend (llama/16062)
Xuan-Son Nguyen [Fri, 19 Sep 2025 04:31:56 +0000 (11:31 +0700)]
ggml : refactor forward_dup for cpu backend (llama/16062)

* ggml : refactor forward_dup for cpu backend

* clean up a bit

* add quant/dequant perf test

2 months ago ggml-amx : fix ggml_amx_init() on generic Linux (llama/16049)
Adrien Gallouët [Thu, 18 Sep 2025 21:07:26 +0000 (23:07 +0200)]
ggml-amx : fix ggml_amx_init() on generic Linux (llama/16049)

Generalize Linux check to `__linux__` to support non-glibc systems (like musl).
Also, return `false` on unknown/untested OS.

Without this commit, the code compiles (with warnings) but fails:

    register_backend: registered backend CPU (1 devices)
    register_device: registered device CPU (Intel(R) Xeon(R) Platinum 8488C)
    build: 6487 (51c4cac6) with x86_64-linux-musl-gcc (GCC) 15.1.0 for x86_64-linux-musl (debug)
    system info: n_threads = 8, n_threads_batch = 8, total_threads = 16
    ....
    print_info: n_ctx_orig_yarn  = 262144
    print_info: rope_finetuned   = unknown
    print_info: model type       = 4B
    Illegal instruction (core dumped)

Signed-off-by: Adrien Gallouët <redacted>
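The guard pattern can be sketched as follows; the function name is illustrative, not the actual ggml-amx symbol. The key points are testing `__linux__` (defined by GCC and Clang on glibc and musl alike) rather than a glibc-specific macro, and failing closed on unknown platforms so the AMX path is skipped instead of crashing:

```cpp
// Portable OS guard for the AMX init path.
static bool amx_os_supported(void) {
#if defined(__linux__)
    return true;   // Linux (glibc or musl): can go on to request AMX permission
#else
    return false;  // unknown/untested OS: report AMX as unavailable
#endif
}
```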
2 months ago cmake : fix static linking for OpenMP on Unix-like systems (llama/16031)
Adrien Gallouët [Thu, 18 Sep 2025 21:07:18 +0000 (23:07 +0200)]
cmake : fix static linking for OpenMP on Unix-like systems (llama/16031)

When compiling with GGML_STATIC=ON, the build process would produce a
binary that was still dynamically linked to OpenMP. This defeats the
purpose of a static build:

    $ cmake -B build \
            -DBUILD_SHARED_LIBS=OFF \
            -DLLAMA_CURL=OFF \
            -DGGML_CCACHE=OFF \
            -DGGML_NATIVE=OFF \
            -DGGML_STATIC=ON

    $ ldd llama-server
            linux-vdso.so.1 (0x0000e1a434e3b000)
            libgomp.so.1 => /lib/aarch64-linux-gnu/libgomp.so.1 (0x0000e1a4345a0000)
            libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000e1a434300000)
            libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000e1a434240000)
            libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000e1a434200000)
            libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000e1a434030000)
            /lib/ld-linux-aarch64.so.1 (0x0000e1a434df0000)

This commit resolves the issue by modifying `CMAKE_FIND_LIBRARY_SUFFIXES`
to prioritize `.a` files, forcing CMake to link the static version of
the library.

Signed-off-by: Adrien Gallouët <redacted>
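The fix amounts to a fragment along these lines (illustrative; the actual change is in ggml's CMake scripts):

```cmake
if (GGML_STATIC)
    # Prefer static archives when resolving dependencies, so that
    # find_package(OpenMP) picks up libgomp.a rather than libgomp.so.
    set(CMAKE_FIND_LIBRARY_SUFFIXES .a ${CMAKE_FIND_LIBRARY_SUFFIXES})
endif()
```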
2 months ago opencl: optimize mxfp4 kernels (llama/16037)
Shawn Gu [Thu, 18 Sep 2025 19:03:34 +0000 (12:03 -0700)]
opencl: optimize mxfp4 kernels (llama/16037)

- flatten mxfp4 and packed fp4->fp16 bit-wise convert function (replace lut)
- MoE kernel optimizations

---------

Co-authored-by: Li He <redacted>
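A bit-wise e2m1 (mxfp4 element) -> fp16 conversion of the kind that can replace a lookup table might look like this, assuming the OCP MX nibble layout (bit 3 sign, bits 2-1 exponent, bit 0 mantissa); illustrative, not the OpenCL kernel's exact code:

```cpp
#include <cstdint>

// Convert a 4-bit e2m1 value to fp16 bits: rebias the 2-bit exponent
// (bias 1 -> bias 15) into the fp16 exponent field and shift the 1-bit
// mantissa into the top of the fp16 mantissa. The only subnormal is 0.5.
static uint16_t fp4_e2m1_to_f16(uint8_t v) {
    uint16_t s = (uint16_t)(v & 0x8) << 12;   // sign to bit 15
    uint8_t  e = (v >> 1) & 0x3;
    uint8_t  m = v & 0x1;
    if (e == 0) {
        return m ? (uint16_t)(s | (14u << 10))  // subnormal 0.5 = 2^-1
                 : s;                           // +/- 0
    }
    return (uint16_t)(s | ((uint16_t)(e + 14) << 10) | ((uint16_t)m << 9));
}
```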
2 months ago rename optimize_graph to graph_optimize (llama/16082)
Jeff Bolz [Thu, 18 Sep 2025 18:46:17 +0000 (13:46 -0500)]
rename optimize_graph to graph_optimize (llama/16082)

2 months ago CUDA: Optimize PAD_REFLECT_1D (llama/15957)
Bowen Han [Thu, 18 Sep 2025 18:26:03 +0000 (11:26 -0700)]
CUDA: Optimize PAD_REFLECT_1D (llama/15957)

* CUDA: Optimize PAD_REFLECT_1D
feat: add more test cases for PAD_REFLECT_1D

* use fast_div to improve performance

* Apply suggestion from JohannesGaessler

Co-authored-by: Johannes Gäßler <redacted>
* Apply suggestion from JohannesGaessler

Co-authored-by: Johannes Gäßler <redacted>
* optimize

* use a concise expression to further speedup the cuda kernel

---------

Co-authored-by: Johannes Gäßler <redacted>
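A fast_div-style helper replaces the hardware divide in the kernel's index math with a multiply and shift using a precomputed magic number. Below is a hedged sketch using Lemire's round-up method (exact for any 32-bit numerator and divisor d >= 2; d == 1 needs no division), not ggml's actual CUDA helper; it requires a 128-bit integer type, available on GCC/Clang:

```cpp
#include <cstdint>

// Division by a fixed 32-bit divisor via a precomputed 64-bit multiplier:
// m = floor(2^64 / d) + 1, then n / d == high 64 bits of m * n.
struct fast_div {
    uint64_t m;
    explicit fast_div(uint32_t d) : m(UINT64_MAX / d + 1) {}
    uint32_t divide(uint32_t n) const {
        return (uint32_t)(((unsigned __int128) m * n) >> 64);
    }
};
```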
2 months ago CUDA: fix compilation on CC 6.0 (llama/16091)
Johannes Gäßler [Thu, 18 Sep 2025 17:28:32 +0000 (19:28 +0200)]
CUDA: fix compilation on CC 6.0 (llama/16091)

2 months ago metal : use function constants for mul_mv_ext kernels (llama/16074)
Georgi Gerganov [Thu, 18 Sep 2025 13:28:41 +0000 (16:28 +0300)]
metal : use function constants for mul_mv_ext kernels (llama/16074)

* metal : use function constants for mul_mv_ext kernels

ggml-ci

* metal : remove NW template argument

ggml-ci

* metal : adjust constants

ggml-ci

2 months ago cuda : add missing F32<->I32 entries in ggml_cuda_cpy_fn (llama/16060)
Sigbjørn Skjæret [Thu, 18 Sep 2025 11:28:22 +0000 (13:28 +0200)]
cuda : add missing F32<->I32 entries in ggml_cuda_cpy_fn (llama/16060)

2 months ago metal : improve F32, F16 and BF16 mat-vec multiplication (llama/16057)
Georgi Gerganov [Thu, 18 Sep 2025 09:33:45 +0000 (12:33 +0300)]
metal : improve F32, F16 and BF16 mat-vec multiplication (llama/16057)

* metal : improve F32, F16 and BF16 mat-vec multiplication

ggml-ci

* metal : make the NSG a function constant in mul_mv kernels

ggml-ci

2 months ago metal : avoid calling free for non-owned buffer (llama/16067)
Jhen-Jie Hong [Thu, 18 Sep 2025 07:06:48 +0000 (15:06 +0800)]
metal : avoid calling free for non-owned buffer (llama/16067)

2 months ago metal : handle nil cv during pipeline creation (llama/16065)
Georgi Gerganov [Thu, 18 Sep 2025 07:03:24 +0000 (10:03 +0300)]
metal : handle nil cv during pipeline creation (llama/16065)

ggml-ci

2 months ago CANN: Remove print (llama/16044)
Chenguang Li [Thu, 18 Sep 2025 01:26:33 +0000 (09:26 +0800)]
CANN: Remove print (llama/16044)

Signed-off-by: noemotiovon <redacted>
2 months ago GGML WebGPU: Support for ADD, MUL, RMS_NORM, GET_ROWS operators (llama/16018)
Reese Levine [Wed, 17 Sep 2025 20:09:40 +0000 (13:09 -0700)]
GGML WebGPU: Support for ADD, MUL, RMS_NORM, GET_ROWS operators (llama/16018)

* Add parameter buffer pool, batching of submissions, refactor command building/submission

* Add header for linux builds

* Free staged parameter buffers at once

* Format with clang-format

* Fix thread-safe implementation

* Use device implicit synchronization

* Update workflow to use custom release

* Remove testing branch workflow

* some f32 tests passing

* Disable set_rows until it's implemented

* f32 add all tests passing

* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments

* Add templated addition, clean up code

* Get addition and multiplication working

* Implement rms_norm

* Add get_rows implementation

* Add new get_rows files

* Refactor use of wg size entry

* Fix compilation

* Try manually unrolled q4_0 quant

* Revert "Try manually unrolled q4_0 quant"

This reverts commit 77f8b96515f7e640ae4b0e44f066321fbc4a6166.

* Move to constant max wg size

* Check for tensor size in supports_op

* Vectorize f32 and change default workgroup size

* Move f32 get_rows from < 4 to % 4 != 0

* fix linter errors

* Add in-place tests

---------

Co-authored-by: Neha Abbas <redacted>
2 months ago metal : refactor + optimize v2 (llama/15995)
Georgi Gerganov [Sat, 20 Sep 2025 10:09:06 +0000 (13:09 +0300)]
metal : refactor + optimize v2 (llama/15995)

2 months ago sync : llama.cpp
Georgi Gerganov [Sat, 20 Sep 2025 10:07:59 +0000 (13:07 +0300)]
sync : llama.cpp