git.djapps.eu Git - pkg/ggml/sources/ggml/log
7 weeks ago
Johannes Gäßler [Tue, 6 May 2025 21:35:51 +0000 (23:35 +0200)]
CUDA: mix virt/real CUDA archs for GGML_NATIVE=OFF (llama/13135)

7 weeks ago
Akarshan Biswas [Tue, 6 May 2025 14:57:06 +0000 (20:27 +0530)]
SYCL: Disable reorder optimize by default and stop setting tensor extras when optimize is disabled (llama/13254)

* SYCL: Do not set tensor extras when reorder optimize is disabled

* SYCL: Disable reorder optimize by default

7 weeks ago
Johannes Gäßler [Tue, 6 May 2025 11:58:51 +0000 (13:58 +0200)]
CUDA: fix bad asserts for partial offload (llama/13337)

7 weeks ago
Johannes Gäßler [Tue, 6 May 2025 06:36:46 +0000 (08:36 +0200)]
CUDA: fix --split-mode row for MMQ (llama/13323)

7 weeks ago
Johannes Gäßler [Mon, 5 May 2025 20:32:13 +0000 (22:32 +0200)]
CUDA: fix logic for clearing padding with -ngl 0 (llama/13320)

7 weeks ago
Akarshan Biswas [Mon, 5 May 2025 08:09:10 +0000 (13:39 +0530)]
SYCL: Disable mul_mat kernels for noncontiguous tensor b (llama/13308)

ggml-ci

7 weeks ago
Diego Devesa [Sun, 4 May 2025 19:25:43 +0000 (21:25 +0200)]
rpc : use backend registry, support dl backends (llama/13304)

7 weeks ago
Aaron Teo [Sun, 4 May 2025 17:49:12 +0000 (01:49 +0800)]
ggml : activate s390x simd for Q3_K (llama/13301)

Signed-off-by: Aaron Teo <redacted>
7 weeks ago
Johannes Gäßler [Sun, 4 May 2025 12:16:39 +0000 (14:16 +0200)]
CUDA: fix race condition in MMQ stream-k fixup (llama/13299)

7 weeks ago
Johannes Gäßler [Sun, 4 May 2025 11:58:38 +0000 (13:58 +0200)]
CUDA: fix race condition in MMQ ids_dst (llama/13294)

7 weeks ago
Jeff Bolz [Sun, 4 May 2025 05:17:16 +0000 (00:17 -0500)]
vulkan: Additional type support for unary, binary, and copy (llama/13266)

Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.

7 weeks ago
Georgi Gerganov [Wed, 7 May 2025 13:24:51 +0000 (16:24 +0300)]
sync : whisper.cpp

ggml-ci

7 weeks ago
Daniel Bevenius [Mon, 5 May 2025 11:09:35 +0000 (13:09 +0200)]
whisper: remove MSVC warnings pragmas (whisper/3090)

* ggml : remove MSVC warnings pragmas

This commit removes the MSVC-specific pragmas as these are now handled
in CMakeLists.txt.

* whisper : remove MSVC warning pragmas

This commit removes the MSVC-specific pragmas. These are now handled in
the CMakeLists.txt file.

7 weeks ago
Jared Tweed [Fri, 2 May 2025 09:41:35 +0000 (02:41 -0700)]
cmake : removed stdc++fs (whisper/3097)

* removed stdc++fs

* kept line, but removed stdc++fs

8 weeks ago (tag: upstream/0.0.2015)
Georgi Gerganov [Fri, 2 May 2025 17:57:23 +0000 (20:57 +0300)]
sync : llama.cpp

ggml-ci

8 weeks ago
Georgi Gerganov [Fri, 2 May 2025 17:57:07 +0000 (20:57 +0300)]
vulkan : fix lint (llama/0)

8 weeks ago
shalinib-ibm [Fri, 2 May 2025 16:53:12 +0000 (22:23 +0530)]
ggml : Enable MMA for BF16 in llamafile_sgemm (llama/13148)

This patch upstreams llamafile's CPU matrix multiplication kernels for ppc64le, using MMA builtins for the BF16 data type.

This change results in 9x - 40x gains in total speed (S t/s, i.e. all
tokens / total time) across various batch sizes, tested using the llama-batched-bench benchmark.

The patch is tested with Meta-Llama-3-8B
and Mistral-7B models (BF16 models generated by running llama-quantize on the corresponding FP32 models) on an IBM POWER10 machine.

Signed-off-by: Shalini Salomi Bodapati <redacted>
8 weeks ago
Justin Santa Barbara [Thu, 1 May 2025 21:32:11 +0000 (17:32 -0400)]
rpc : avoid uninitialized memory in serialize_tensor (llama/13210)

Zero out the name and padding buffers.

8 weeks ago
Jesse Gross [Thu, 1 May 2025 20:46:10 +0000 (13:46 -0700)]
ggml: Don't assert fail when tensor data changes (llama/13222)

The following scenario will cause an assertion failure in the graph
allocator:
 - Build and allocate a graph containing a tensor with a non-NULL data
   pointer
 - Build and allocate a new graph where that data is NULL

Result:
ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed

This happens during revalidation: based on the current graph, the
allocator assumes that the memory was previously allocated, but in
reality the previous graph was different. In this situation, we
should do a full reallocation pass.
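
A minimal sketch of the failing sequence, assuming a CPU buffer type and illustrative tensor sizes (this is not the actual repro from the PR):

    #include "ggml.h"
    #include "ggml-alloc.h"
    #include "ggml-backend.h"
    #include "ggml-cpu.h"

    int main(void) {
        struct ggml_init_params params = {
            /*.mem_size   =*/ ggml_tensor_overhead() * 8 + ggml_graph_overhead(),
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ true, // metadata only; tensor data starts out NULL
        };
        struct ggml_context * ctx = ggml_init(params);

        struct ggml_tensor * a  = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 16);
        struct ggml_tensor * b  = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 16);
        struct ggml_cgraph * gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, ggml_add(ctx, a, b));

        ggml_gallocr_t galloc = ggml_gallocr_new(ggml_backend_cpu_buffer_type());
        ggml_gallocr_alloc_graph(galloc, gf); // first pass: data pointers become non-NULL

        // If a second graph is now built in which a previously-allocated tensor
        // has data == NULL again, revalidation used to hit
        //   GGML_ASSERT(talloc->buffer_id >= 0)
        // instead of falling back to a full reallocation pass.

        ggml_gallocr_free(galloc);
        ggml_free(ctx);
        return 0;
    }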

8 weeks ago
Diego Devesa [Thu, 1 May 2025 19:48:08 +0000 (21:48 +0200)]
build : fix build info on windows (llama/13239)

* build : fix build info on windows

* fix cuda host compiler msg

8 weeks ago
Jeff Bolz [Thu, 1 May 2025 18:49:39 +0000 (13:49 -0500)]
vulkan: Add bfloat16 support (llama/12554)

* vulkan: Add bfloat16 support

This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16.
The extension is required for coopmat multiply support, but matrix-vector
multiply trivially promotes bf16 to fp32 and doesn't require the extension.
The copy/get_rows shaders also don't require the extension.

It's probably possible to fall back to non-coopmat and promote to fp32 when
the extension isn't supported, but this change doesn't do that.

The coopmat support also requires a glslc that supports the extension, which
currently requires a custom build.

* vulkan: Support bf16 tensors without the bf16 extension or coopmat support

Compile a variant of the scalar mul_mm shader that will promote the bf16
values to float, and use that when either the bf16 extension or the coopmat
extensions aren't available.

* vulkan: bfloat16 fixes (really works without bfloat16 support now)

* vulkan: fix spirv-val failure and reenable -O

8 weeks ago
Jeff Bolz [Thu, 1 May 2025 18:19:31 +0000 (13:19 -0500)]
vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (llama/13191)

* vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader

8 weeks ago
Johannes Gäßler [Thu, 1 May 2025 18:18:56 +0000 (20:18 +0200)]
test: non-cont. b in test-backend-ops -o MUL_MAT (llama/13187)

8 weeks ago
Acly [Fri, 2 May 2025 16:02:34 +0000 (18:02 +0200)]
vulkan : kernels for depthwise 2D convolution (CONV_2D_DW) (#1204)

* vulkan : add kernels for depthwise 2d convolution (OP_CONV_2D_DW)

* review: remove src_x/y < 0 checks; add performance tests

8 weeks ago
Georgi Gerganov [Thu, 1 May 2025 13:58:14 +0000 (16:58 +0300)]
sync : whisper.cpp

ggml-ci

8 weeks ago
Daniel Bevenius [Thu, 1 May 2025 08:05:24 +0000 (10:05 +0200)]
whisper : add check that target name exists (whisper/3103)

This commit adds a check to make sure that the target exists before
trying to add compile options to ignore warnings when using MSVC.

The motivation for this is that the build is currently broken depending
on the cmake options provided. With this fix it should be possible to
build even if the targets are not actually available.

Refs: https://github.com/ggml-org/whisper.cpp/pull/3090#issuecomment-2842760104

8 weeks ago
Daniel Bevenius [Tue, 29 Apr 2025 13:47:55 +0000 (15:47 +0200)]
ggml : suppress Windows compiler warnings (whisper/3075)

* whisper: suppress Windows compiler warnings

This commit disables compiler warnings on Windows when using MSVC.

The motivation for these changes is that some compilers, for example
Windows MSVC, generate warnings for these conversions, and
there are quite a few of them. This makes it a little difficult to
spot new warnings that may be introduced, and can also be difficult
for users/embedders of ggml, where these warnings are hard to separate
from their own warnings.

* squash! whisper: suppress Windows compiler warnings

Move ggml related warnings into ggml. This commit also fixes the
indentation and adds a missing whitespace to the if statement.

8 weeks ago
Georgi Gerganov [Thu, 1 May 2025 07:01:20 +0000 (10:01 +0300)]
sync : llama.cpp

ggml-ci

8 weeks ago
Johannes Gäßler [Wed, 30 Apr 2025 21:12:59 +0000 (23:12 +0200)]
CUDA: batched+noncont MMQ, refactor bs>1 MoE code (llama/13199)

8 weeks ago
Jeff Bolz [Wed, 30 Apr 2025 12:38:37 +0000 (07:38 -0500)]
vulkan: use uint array index to avoid glslang bug (llama/13193)

8 weeks ago
shalinib-ibm [Wed, 30 Apr 2025 11:17:08 +0000 (16:47 +0530)]
ggml : fix ppc64le build (llama/13176)

The build fails with a compilation error on PowerPC.
This patch fixes it.

Tested with unit tests run via
 --build <build_dir> && cd <build_dir> && make test

Signed-off-by: Shalini Salomi Bodapati <redacted>
8 weeks ago
Aaron Teo [Wed, 30 Apr 2025 09:47:35 +0000 (17:47 +0800)]
feat(ggml-cpu): enable z17 compile (llama/13182)

z17 compilation requires GCC 15.1.0 or newer

Signed-off-by: Aaron Teo <redacted>
8 weeks ago
Johannes Gäßler [Tue, 29 Apr 2025 14:00:27 +0000 (16:00 +0200)]
CUDA: fix non-cont. inputs for batched mat mul (llama/13155)

8 weeks ago
Ville Vesilehto [Mon, 28 Apr 2025 18:00:20 +0000 (21:00 +0300)]
fix(rpc): Improve input validation and error handling (llama/13069)

* fix(rpc): Improve input validation and error handling

The `rpc-server` was vulnerable to Denial of Service attacks via
several RPC commands (`SET_TENSOR`, `GRAPH_COMPUTE`, etc.). Malformed
messages could trigger failed assertions (e.g., invalid `ggml_type`)
or out-of-bounds reads/writes leading to `GGML_ABORT` calls,
crashing the server process.

This PR introduces robust input validation and replaces `abort()`
calls with graceful error handling:

- **Type Validation:** `deserialize_tensor` now checks if the
  `tensor->type` is within the valid `GGML_TYPE_COUNT` range
  *before* calling `ggml_new_tensor_4d`. Returns `nullptr` on
  invalid type (see the sketch below).
- **Bounds Checks:** Replaced `GGML_ABORT` in `set_tensor`,
  `set_tensor_hash`, and `get_tensor` handlers with error
  logging and returning `false` when data/offset parameters
  are out of buffer bounds.
- **Size Checks:** Added safe arithmetic checks (for overflow) in
  `graph_compute` when calculating required message sizes based
  on client-provided `n_nodes` and `n_tensors`. Returns early
  if the reported sizes conflict with the actual message size or
  would lead to overflow.
- **Error Propagation:**
    - `create_node` now checks for `nullptr` return values from
      `deserialize_tensor` and its recursive calls, propagating
      `nullptr` upwards on failure. Uses `find` instead of `at`
      for safer map access.
    - `copy_tensor` now checks for `nullptr` from `deserialize_tensor`
      and sets the response status to failure if deserialization
      or bounds checks fail.
    - `graph_compute` now checks for `nullptr` return from
      `create_node` and returns failure status correctly. The final
      return value now reflects the actual computation status.

These changes improve the RPC server's resilience
against malformed client requests, preventing crashes and ensuring
errors are handled more gracefully.
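
As a rough sketch of the type validation described above (the wire struct and function shape below are simplified stand-ins, not the actual ggml-rpc.cpp code):

    #include "ggml.h"
    #include <cstdint>

    // simplified stand-in for the serialized tensor header on the wire
    struct rpc_tensor_view {
        uint32_t type;
        uint64_t ne[4];
    };

    static ggml_tensor * deserialize_tensor(ggml_context * ctx, const rpc_tensor_view * t) {
        // validate the client-provided type *before* ggml_new_tensor_4d,
        // so a malformed message cannot trigger an assertion or GGML_ABORT
        if (t->type >= GGML_TYPE_COUNT) {
            return nullptr;
        }
        return ggml_new_tensor_4d(ctx, (enum ggml_type) t->type,
                                  (int64_t) t->ne[0], (int64_t) t->ne[1],
                                  (int64_t) t->ne[2], (int64_t) t->ne[3]);
    }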

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): address pr comments

removed comments and unnecessary returns

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): ambiguous nullptr from create_node

rpc_server::create_node could previously return nullptr if the input ID
was 0 (valid) or if an internal error (deserialization, recursion
failure) occurred (invalid). This ambiguity made error handling
difficult for the caller (`graph_compute`).

This commit clarifies the meaning of nullptr:
- `graph_compute` now checks if the input 'id' was non-zero when
  `create_node` returns nullptr, correctly identifying failures
  versus intentional null links.
- `create_node` avoids recursive calls for zero IDs and propagates
  nullptr unambiguously on failure during recursion.

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): initial zero check in create_node

The caller (`graph_compute`) already checks `id != 0` when handling
a `nullptr` return from `create_node`, correctly distinguishing
intentional null links from actual errors. This makes the initial
`if (id == 0)` check redundant.

Also removes the log message, added earlier in this branch, that was
emitted when a tensor ID was not found in the provided map.

Signed-off-by: Ville Vesilehto <redacted>
* fix(rpc): Handle get_alloc_size failure in server

Check the return value of `server.get_alloc_size` in the RPC server
loop. If the call fails, return early to close the connection.

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): input size validation in graph_compute

Removes detailed, step-by-step size calculations and overflow
checks in favor of simpler direct comparisons, assuming 64-bit
overflow is unlikely.

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): remove extra status code setting

Removes the explicit setting of `response.result = GGML_STATUS_FAILED`
when `create_node` returns `nullptr` within `graph_compute`.
The primary failure signal is the `false` return value.

Signed-off-by: Ville Vesilehto <redacted>
* refactor(rpc): remove redundant check for tensor->type

The check breaks CI on ubuntu-cpu-make. The tensor type is a uint32_t,
so the check is not needed.

Signed-off-by: Ville Vesilehto <redacted>
---------

Signed-off-by: Ville Vesilehto <redacted>
8 weeks ago
Akarshan Biswas [Mon, 28 Apr 2025 09:33:25 +0000 (15:03 +0530)]
SYCL: Add all missing unary kernels (llama/13074)

* SYCL: Add all missing unary kernels

ggml-ci

* decouple kernel launch range from data size using strided loop

* use ceil_div helper for num_blocks
ggml-ci

* clean auto imported header files

8 weeks ago
R0CKSTAR [Mon, 28 Apr 2025 07:33:28 +0000 (15:33 +0800)]
musa: fix typo in cc control (llama/13144)

Signed-off-by: Xiaodong Ye <redacted>
8 weeks ago
Johannes Gäßler [Mon, 28 Apr 2025 07:29:26 +0000 (09:29 +0200)]
CUDA: fix q_nope_absorbed prec for DS 2 Lite f16 (llama/13137)

8 weeks ago
R0CKSTAR [Sun, 27 Apr 2025 11:22:49 +0000 (19:22 +0800)]
musa: fix build warning (llama/13129)

Signed-off-by: Xiaodong Ye <redacted>
8 weeks ago
SXX [Sat, 26 Apr 2025 14:05:31 +0000 (22:05 +0800)]
ggml: move fp16/bf16 conversion optimizations to CPU backend + export conversion APIs (llama/13107)

* ggml: dynamic x86_64 feature detection for FP32 <-> FP16/BF16 conversion

* move fp converter to ggml-cpu

* Switch ggml_compute_forward_get_rows_f16/bf16 to new ggml_cpu_fp16/bf16_to_fp32

8 weeks ago
Xuan-Son Nguyen [Fri, 25 Apr 2025 12:31:42 +0000 (14:31 +0200)]
clip : fix pixtral on some GPU backends (llama/13097)

* clip : fix pixtral on some GPU backends

* refactor inp_raw set

* rm outdated comment

* fix dynamic size

* add TODO

8 weeks ago
Neo Zhang Jianyu [Fri, 25 Apr 2025 09:37:51 +0000 (17:37 +0800)]
change the reorder tensor from init to execute OP (llama/13003)

8 weeks ago
Radoslav Gerganov [Fri, 25 Apr 2025 07:08:08 +0000 (10:08 +0300)]
rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943)

RPC_CMD_SET_TENSOR always returns an empty response, and we send this
command 4 times per token. We can improve text generation (TG) speed if
we don't wait for this empty response.

The performance impact of this change depends on the network latency.
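
A minimal sketch of the idea with stand-in transport helpers (the real ggml-rpc code uses its own socket layer and message framing):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // stand-in helpers; always "succeed" so the sketch is self-contained
    static bool send_data(int fd, const void * buf, size_t n) { (void) fd; (void) buf; (void) n; return true; }
    static bool recv_data(int fd, void * buf, size_t n)       { (void) fd; (void) buf; (void) n; return true; }

    // before: every SET_TENSOR blocked one round trip on an always-empty reply
    static bool set_tensor_wait(int fd, const std::vector<uint8_t> & msg) {
        if (!send_data(fd, msg.data(), msg.size())) { return false; }
        uint8_t dummy = 0;
        return recv_data(fd, &dummy, 0); // wait for the empty response
    }

    // after: just send; errors surface on the next command that expects a reply
    static bool set_tensor_no_wait(int fd, const std::vector<uint8_t> & msg) {
        return send_data(fd, msg.data(), msg.size());
    }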

8 weeks ago
Diego Devesa [Wed, 30 Apr 2025 13:20:40 +0000 (15:20 +0200)]
ggml : fix ggml_gallocr_ptr type (#1205)

8 weeks ago
Georgi Gerganov [Wed, 30 Apr 2025 05:05:22 +0000 (08:05 +0300)]
media : rm logos (#1203)

* media : add

* media : experiment with cards

* media : add common.sh

* cards : fix frame and shadow

* cards : adjustments

* cards : add qwen3

* media : rm

2 months ago
Georgi Gerganov [Fri, 25 Apr 2025 12:59:04 +0000 (15:59 +0300)]
sync : whisper.cpp

ggml-ci

2 months ago
Georgi Gerganov [Thu, 24 Apr 2025 15:59:06 +0000 (18:59 +0300)]
cuda : fix unused variable compile warning (whisper/0)

ggml-ci

2 months ago
Georgi Gerganov [Thu, 24 Apr 2025 15:41:17 +0000 (18:41 +0300)]
opencl : remove obsolete files (skip) (#1200)

2 months ago (tag: upstream/0.0.1982)
Georgi Gerganov [Thu, 24 Apr 2025 14:48:02 +0000 (17:48 +0300)]
sync : llama.cpp

ggml-ci

2 months ago
Georgi Gerganov [Thu, 24 Apr 2025 14:47:31 +0000 (17:47 +0300)]
metal : add memory pool for temp allocs (llama/12850)

2 months ago
lhez [Thu, 24 Apr 2025 14:46:49 +0000 (17:46 +0300)]
opencl: split ggml-opencl.cl into multiple files and cleanup (llama/12886)

---------

Co-authored-by: Shangqing Gu <redacted>
2 months ago
Georgi Gerganov [Thu, 24 Apr 2025 14:22:27 +0000 (17:22 +0300)]
ggml : fix trailing whitespaces (llama/0)

2 months ago
Johannes Gäßler [Thu, 24 Apr 2025 13:57:10 +0000 (15:57 +0200)]
CUDA: use switch statements in constexpr functions (llama/13095)

2 months ago
Georgi Gerganov [Thu, 24 Apr 2025 07:38:30 +0000 (10:38 +0300)]
metal : fix floating-point range of attention scores in FA kernels (llama/13090)

ggml-ci

2 months ago
Eve [Thu, 24 Apr 2025 07:18:33 +0000 (07:18 +0000)]
vulkan: matmul gcn tuning (llama/13016)

* tune matmul for gcn

* this one is more power efficient

* Update src/ggml-vulkan/ggml-vulkan.cpp

Co-authored-by: 0cc4m <redacted>
* disable this tune for the proprietary driver

---------

Co-authored-by: 0cc4m <redacted>
2 months ago
Johannes Gäßler [Tue, 22 Apr 2025 19:27:40 +0000 (21:27 +0200)]
CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (llama/13014)

* CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID

* fix logic for RoPE support, CUDA graphs

2 months ago
Diego Devesa [Mon, 21 Apr 2025 16:13:51 +0000 (18:13 +0200)]
ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (llama/12871)

* ggml : add SSE 4.2 variant for CPUs without AVX

* ggml : add x64 base ABI variant

2 months ago
Akarshan Biswas [Mon, 21 Apr 2025 13:43:30 +0000 (19:13 +0530)]
SYCL: Add non-contiguous support in ROPE (llama/12993)

ggml-ci

2 months ago
Jeff Bolz [Sun, 20 Apr 2025 08:50:02 +0000 (03:50 -0500)]
vulkan: support noncontiguous rms_norm (llama/13031)

2 months ago
Jeffrey Morgan [Sun, 20 Apr 2025 05:28:40 +0000 (22:28 -0700)]
metal: add neg operator (llama/13029)

2 months ago
Akarshan Biswas [Fri, 18 Apr 2025 13:57:56 +0000 (19:27 +0530)]
SYCL: Refactor and enable FP16 in binary broadcast OPs (llama/12975)

* SYCL: refactor move to a separate file

* Fix binbcast

* Remove duplicates

* fix include formatting

* fix typo

2 months ago
Radoslav Gerganov [Fri, 18 Apr 2025 07:13:42 +0000 (10:13 +0300)]
rpc : add RPC_CMD_HELLO (llama/12955)

Add RPC_CMD_HELLO for getting the version of the protocol implemented
by the server, following the semantic versioning rules at https://semver.org.

Hopefully this brings a better user experience when we make breaking
changes at the protocol level, and avoids issues like #12465
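
A sketch of the compatibility rule this enables; the constants and helper are illustrative assumptions, not the actual ggml-rpc protocol values:

    #include <cstdint>

    // hypothetical client-side protocol version
    constexpr uint8_t PROTO_MAJOR = 1;
    constexpr uint8_t PROTO_MINOR = 0;

    // per semver, a MAJOR mismatch means breaking changes, so refuse early
    // instead of failing with confusing errors mid-session
    static bool server_compatible(uint8_t srv_major, uint8_t srv_minor) {
        return srv_major == PROTO_MAJOR && srv_minor >= PROTO_MINOR;
    }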

2 months ago
Georgi Gerganov [Thu, 17 Apr 2025 15:16:36 +0000 (18:16 +0300)]
graph : make FA compatible with MLA + add initial Metal kernels (llama/12953)

* graph : make mla compatible with FA

* metal : add exp FA kernels for DeepSeek models

ggml-ci

* llama : minor naming updates

ggml-ci

* ggml : disable FA for DS head sizes

* tests : add FA tests for MLA shapes

ggml-ci

2 months ago
Alan Gray [Thu, 17 Apr 2025 13:19:42 +0000 (14:19 +0100)]
ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (llama/12970)

2 months ago
hipudding [Thu, 17 Apr 2025 12:34:16 +0000 (20:34 +0800)]
CANN: Add support for async operator submission (llama/12864)

Submit operators using asynchronous threads to improve performance.

Use the environment variable GGML_CANN_ASYNC_MODE to control whether
asynchronous submission is enabled. It is disabled by default.

Testing shows a 10%–20% performance improvement in scenarios with
small parameter sizes, especially in quantized models.
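
A minimal sketch of such an env-gated dispatch, using plain std::thread as a stand-in (the actual CANN backend's threading model differs):

    #include <cstdlib>
    #include <functional>
    #include <thread>

    static void submit_op(const std::function<void()> & op) {
        // enabled only when GGML_CANN_ASYNC_MODE is set; disabled by default
        static const bool async = std::getenv("GGML_CANN_ASYNC_MODE") != nullptr;
        if (async) {
            std::thread(op).detach(); // submit on a worker thread
        } else {
            op();                     // default: run inline
        }
    }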

2 months ago
kimminsu [Wed, 16 Apr 2025 21:25:57 +0000 (06:25 +0900)]
opencl: fix incorrect local_size index in profiling log (llama/12868)

2 months ago
Jeff Bolz [Wed, 16 Apr 2025 18:37:25 +0000 (13:37 -0500)]
vulkan: enable coopmat2 FA gqa and split_k optimizations more often (llama/12931)

The grouped query attention optimization doesn't require a power-of-two
ratio; the only thing relying on it was the modulo operation written as a
bitwise &.

split_k need not depend on gqa_ratio - enable it any time there's only one
workgroup in the X dimension. The shader gets the split index from the x coord,
and multiple workgroups in the X dimension (pre-split) indicates a larger
FA operation that wouldn't need splitting.
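
The power-of-two requirement came from the usual bitmask identity, which only holds for power-of-two divisors:

    #include <cassert>

    int main() {
        for (unsigned i = 0; i < 64; ++i) {
            // i % n == (i & (n - 1)) only when n is a power of two;
            // with the modulo spelled out, any gqa ratio works
            assert(i % 8 == (i & 7u));
        }
        return 0;
    }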

2 months ago
Chenguang Li [Wed, 16 Apr 2025 08:21:05 +0000 (16:21 +0800)]
CANN: Add 310P operator support check (llama/12962)

2 months ago
Georgi Gerganov [Tue, 15 Apr 2025 11:45:05 +0000 (14:45 +0300)]
metal : add FA-vec kernels for head size 96 (llama/12952)

ggml-ci

2 months ago
hipudding [Tue, 15 Apr 2025 11:08:55 +0000 (19:08 +0800)]
CANN: Add x86 build ci (llama/12950)

* CANN: Add x86 build ci

* CANN: fix code format

2 months ago
David Huang [Tue, 15 Apr 2025 09:20:38 +0000 (17:20 +0800)]
CUDA/HIP: Share the same unified memory allocation logic. (llama/12934)

Replace compile-time `GGML_HIP_UMA` with environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY`. This unifies the usage on NVIDIA and AMD GPUs, and allows a single binary to be shared between integrated and dedicated GPUs.
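
A rough sketch of such a runtime toggle, assuming the variable is read with getenv (the exact parsing in ggml-cuda may differ):

    #include <cstdlib>
    #include <cstring>

    static bool unified_memory_enabled() {
        // one binary, decided at run time instead of at compile time (GGML_HIP_UMA)
        const char * env = std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY");
        return env != nullptr && std::strcmp(env, "1") == 0;
    }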

2 months ago
Akarshan Biswas [Tue, 15 Apr 2025 08:37:42 +0000 (14:07 +0530)]
SYCL: Add ROPE vision kernel (llama/12887)

* SYCL: Add ROPE vision kernel

* Add comment about rope mode

2 months ago
Srihari-mcw [Tue, 15 Apr 2025 06:22:36 +0000 (11:52 +0530)]
ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (llama/12829)

* Add AVX512 implementation of GEMM - q4kx8

* Update changes to remove unnecessary whitespaces

2 months ago
Chenguang Li [Tue, 15 Apr 2025 02:09:35 +0000 (10:09 +0800)]
CANN: Opt ROPE optimization (llama/12865)

* [CANN]Opt ROPE optimization

* [CANN]Codestyle adjustment

* [CANN]Fix the ROPE precision issue

* [CANN]codestyle fix

* [CANN]add rope unsupport case

Signed-off-by: noemotiovon <redacted>
2 months ago
Xinpeng Dou [Tue, 15 Apr 2025 02:04:24 +0000 (10:04 +0800)]
CANN: Optimize CANN buffer pool memory management (llama/12875)

Multiple optional memory pools are provided for CANN, including VMM,
priority queue-based, and traditional memory pools; the selection order
is sketched below.

1. When the memory pool is available and GGML_CANN_DISABLE_VMM_POOL
   is not defined, the VMM pool is selected by default.
2. Otherwise, if GGML_CANN_ENABLE_BUF_PRIO_POOL is defined,
   the priority queue-based memory pool is used.
3. If neither condition is met, the default memory pool is used.
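
A sketch of that selection order; the pool names and availability flag are illustrative, not the actual CANN backend code:

    enum class cann_pool_kind { VMM, PRIO_QUEUE, LEGACY };

    static cann_pool_kind select_pool(bool vmm_available) {
    #if !defined(GGML_CANN_DISABLE_VMM_POOL)
        if (vmm_available) {
            return cann_pool_kind::VMM;    // 1. VMM pool by default
        }
    #endif
    #if defined(GGML_CANN_ENABLE_BUF_PRIO_POOL)
        return cann_pool_kind::PRIO_QUEUE; // 2. priority-queue pool if enabled
    #endif
        return cann_pool_kind::LEGACY;     // 3. traditional pool otherwise
    }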

2 months ago
Akarshan Biswas [Mon, 14 Apr 2025 12:23:53 +0000 (17:53 +0530)]
SYCL: Fix im2col (llama/12910)

* SYCL: Fix im2col

* restore local workgroup size adjustments for large inputs

* restore format

2 months ago
Radoslav Gerganov [Mon, 14 Apr 2025 10:59:34 +0000 (13:59 +0300)]
rpc : use ggml_context_ptr (llama/12938)

2 months ago
Georgi Gerganov [Thu, 24 Apr 2025 14:40:08 +0000 (17:40 +0300)]
scripts : update sync-llama-am.sh

2 months ago
Leonard Mosescu [Sat, 19 Apr 2025 05:36:38 +0000 (22:36 -0700)]
tests : Fix a few small Windows / MSVC build issues (#1193)

* Fix a few small Windows / MSVC build issues

* Fix MSVC build and keep C++17 conformance

* Update tests/test-pad-reflect-1d.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Incorporate review feedback

---------

Co-authored-by: Georgi Gerganov <redacted>
2 months ago
Acly [Thu, 17 Apr 2025 12:16:45 +0000 (14:16 +0200)]
ggml : Depthwise 2D convolution (#1152)

* ggml-cpu : kernels for faster depthwise 2D convolution

* fix compile: remove static after moving to ops.cpp

* add dilation for depthwise_conv_2d

* review: rename to ggml_conv_2d_dw_direct, remove redundant struct keywords, pass by ref, whitespace

* review: rename depthwise_conv_2d -> conv_2d_dw everywhere

2 months ago
Georgi Gerganov [Mon, 14 Apr 2025 06:26:45 +0000 (09:26 +0300)]
sync : llama.cpp

ggml-ci

2 months ago
SXX [Mon, 14 Apr 2025 05:47:55 +0000 (13:47 +0800)]
ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register (llama/12773)

* ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register

* simplifies the codebase by removing redundant functions

2 months ago
Alan Gray [Sun, 13 Apr 2025 21:12:21 +0000 (22:12 +0100)]
ggml: disable CUDA graphs for unsupported DUP and CONT node types (llama/12891)

Fixes #12798

2 months ago
Jeff Bolz [Sat, 12 Apr 2025 08:44:48 +0000 (03:44 -0500)]
vulkan: use aligned loads for flash attention mask (llama/12853)

Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.

2 months ago
Ewan Crawford [Fri, 11 Apr 2025 13:32:14 +0000 (15:32 +0200)]
sycl: Support sycl_ext_oneapi_limited_graph (llama/12873)

The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support update.

2 months ago
Akarshan Biswas [Fri, 11 Apr 2025 08:03:50 +0000 (13:33 +0530)]
SYCL: Add fp16 type support to unary op kernels (llama/12788)

* SYCL: Add fp16 support to some elementwise OP kernels

* remove comment

ggml-ci

* Use static_cast directly

* remove not needed cast from tanh

* Use static cast and remove unneeded castings

* Adjust device_support_op for unary OPs

* Use cast_data and typed_data struct to deduplicate casting code

2 months ago
Aaron Teo [Fri, 11 Apr 2025 05:20:07 +0000 (13:20 +0800)]
ggml: fix compilation error s390x (llama/12848)

* ggml: fixes #12846 compilation error

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: add documentation for code change

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: refactor to type-cast and update documentation

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: update documentation to provide full issue link

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
---------

Co-authored-by: Aleksei Nikiforov <redacted>
2 months ago
Georgi Gerganov [Thu, 10 Apr 2025 21:04:25 +0000 (00:04 +0300)]
tests : fix init order (llama/0)

ggml-ci

2 months ago
cmdr2 [Fri, 11 Apr 2025 06:44:19 +0000 (12:14 +0530)]
cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal when running test-backend-ops with only the CPU backend (#1190)

2 months ago
Georgi Gerganov [Thu, 10 Apr 2025 21:02:24 +0000 (00:02 +0300)]
sync : fix (skip) (#0)

ggml-ci

2 months ago
Georgi Gerganov [Thu, 10 Apr 2025 20:46:46 +0000 (23:46 +0300)]
sync : llama.cpp

ggml-ci

2 months ago
Chenguang Li [Thu, 10 Apr 2025 00:51:52 +0000 (08:51 +0800)]
CANN: Support more ops (llama/12841)

* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D

* [CANN]Support COUNT_EQUAL && STEP && SGN

* [CANN]codestyle adjustment

* [CANN]codestyle adjustment

---------

Signed-off-by: noemotiovon <redacted>
2 months ago
Prajwal B Mehendarkar [Wed, 9 Apr 2025 23:18:01 +0000 (04:48 +0530)]
Fixes #12823 (llama/12830)

* Including limits file on AIX

* Fixes #12823

2 months ago
Piotr Kubaj [Wed, 9 Apr 2025 23:00:34 +0000 (23:00 +0000)]
ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)

error: unknown type name '_Bool'

2 months ago
Piotr Kubaj [Wed, 9 Apr 2025 23:00:25 +0000 (23:00 +0000)]
ggml-impl.h: fix build on POWER9 (llama/12855)

error: ISO C++17 does not allow 'register' storage class specifier

2 months ago
Chenguang Li [Wed, 9 Apr 2025 06:04:14 +0000 (14:04 +0800)]
CANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)

* [CANN] Support ELU and CONV_TRANSPOSE_1D

* [CANN]Modification review comments

* [CANN]Modification review comments

* [CANN]name adjustment

* [CANN]remove lambda used in template

* [CANN]Use std::func instead of template

* [CANN]Modify the code according to the review comments

---------

Signed-off-by: noemotiovon <redacted>
2 months ago
Jeff Bolz [Wed, 9 Apr 2025 05:25:08 +0000 (00:25 -0500)]
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)

q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.

2 months ago
Jeff Bolz [Wed, 9 Apr 2025 05:12:57 +0000 (00:12 -0500)]
vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.

2 months ago
Sigbjørn Skjæret [Tue, 8 Apr 2025 21:21:31 +0000 (23:21 +0200)]
cuda : add f32 to bf16 copy op (llama/12806)

This allows BF16 KV-cache on CUDA.

2 months ago
Georgi Gerganov [Tue, 8 Apr 2025 16:54:51 +0000 (19:54 +0300)]
llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)

* ggml : FA supports F32 V

* graph : cast KV to F16 when the KV cache is not used

ggml-ci

* server : add test that exercises embeddings with FA enabled

ggml-ci

2 months ago
cmdr2 [Thu, 10 Apr 2025 12:23:08 +0000 (17:53 +0530)]
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (#1187)

fix #1186