Christoph Reiter [Thu, 25 Sep 2025 16:59:38 +0000 (18:59 +0200)]
pkg-config: include the new GGML_VERSION as a version (#1348)
Instead of hardcoding 0.0.0
If the 0.9.2-dev version is supposed to mean "less than" 0.9.2 instead of equal,
then the .pc version format would need to be changed to 0.9.2dev, since pkgconf
uses RPM version comparison. But let's keep it simple for now.
Daniel Bevenius [Thu, 25 Sep 2025 10:38:50 +0000 (12:38 +0200)]
scripts : refactor release script into prepare and finalize stages (#1352)
This commit splits the release process into two distinct stages:
* prepare_release: This stage handles updating the version and
creating a new branch with the version change. The branch should then
be used to open a PR for review. Once the PR has been merged, the
finalize_release stage can be run.
* finalize_release: This stage must be run on master, and master must
contain the version bump commit (this is checked). This stage handles
tagging and also creates a new branch for updating to the next
development version. The tag should then be pushed to the remote,
which will trigger the release process on GitHub. The branch should be
used to open a new PR for the development version update.
Daniel Bevenius [Thu, 25 Sep 2025 09:43:51 +0000 (11:43 +0200)]
scripts : fix next dev version calculation [no ci] (#1351)
This commit updates the release script to correctly calculate the next
development version after a release.
The motivation for this is that the script currently increments the
version again for the next development cycle, even though the version
has already been set to the updated value, so it should not be
incremented a second time.
Use RPC_DEBUG environment variable to enable debug messages.
Add helper macro LOG_DBG() which does an early
check of the env var before calling GGML_LOG_DEBUG().
Make sure we log a debug message for every server function.
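A minimal sketch of the early-out pattern described above; the helper name and the fprintf fallback are illustrative assumptions, not the actual ggml-rpc code, which forwards to GGML_LOG_DEBUG():

```cpp
#include <cstdio>
#include <cstdlib>

// Cache the RPC_DEBUG lookup so the environment is queried only once.
static bool rpc_debug_enabled() {
    static const bool enabled = std::getenv("RPC_DEBUG") != nullptr;
    return enabled;
}

// Early-out before any formatting work: when RPC_DEBUG is unset, the
// logging call (GGML_LOG_DEBUG in the real code) is never reached and
// the format arguments are never evaluated.
#define LOG_DBG(...) \
    do { if (rpc_debug_enabled()) fprintf(stderr, __VA_ARGS__); } while (0)
```

Because the check happens inside the macro, every server function can call LOG_DBG unconditionally without paying a logging cost in the common case.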
ggml : split graph allocations according to backend max buffer size (llama/15815)
* ggml : make gallocr respect the backend's max buffer size
* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers
* vulkan: report the actual max allocation size in buffer type interface
* fix missing newline, apple-clang warning
* track size of individual chunks in ggml_dyn_tallocr and raise max chunks.
revert to use suballocation_block_size as max chunk size for vulkan.
* track (chunk, offset) pairs instead of "global" offsets through gallocr.
* simpler, don't need loops to map between local/global offsets
* touches more code
* fix dyn_tallocr_max_size and initialization
* fix memory leak when buffers are reused due to same buffer type appearing multiple times
* make vbuffer allocation follow the same logic as backend_buffer did before
* continue to use leftover unallocated space of previous chunks after a new one has been created
* treat free blocks of each chunk as separate list
* they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size
* refactor: move adding new free block and new chunk into separate functions
* allocate chunks individually with a separate free-blocks list for each one
* needs a bit more memory/allocations/indirections, but code is simpler
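The (chunk, offset) addressing from the list above can be sketched as follows. This is a simplified model under assumed names, not the actual gallocr code, which also keeps a separate free-blocks list per chunk and lets the last chunk grow beyond the max size:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Simplified chunked allocator: each chunk is capped at the backend's
// max buffer size, and allocations are addressed by (chunk, offset)
// pairs instead of a single global offset.
struct ChunkedAlloc {
    size_t max_size;                 // backend's max buffer size
    std::vector<size_t> chunk_used;  // bytes used per chunk

    // Returns (chunk index, offset inside that chunk) for a request.
    std::pair<size_t, size_t> alloc(size_t size) {
        // Continue to use leftover space of earlier chunks first.
        for (size_t i = 0; i < chunk_used.size(); ++i) {
            if (chunk_used[i] + size <= max_size) {
                size_t off = chunk_used[i];
                chunk_used[i] += size;
                return {i, off};
            }
        }
        // Otherwise start a new chunk; the real allocator additionally
        // allows the final chunk to exceed max_size.
        chunk_used.push_back(size);
        return {chunk_used.size() - 1, 0};
    }
};
```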
cmake : fix static linking for OpenMP on Unix-like systems (llama/16031)
When compiling with GGML_STATIC=ON, the build process would produce a
binary that was still dynamically linked against OpenMP, which defeats
the purpose of a static build.
This commit resolves the issue by modifying `CMAKE_FIND_LIBRARY_SUFFIXES`
to prioritize `.a` files, forcing CMake to link the static version of
the library.
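A sketch of the mechanism the commit describes, not its exact diff; the guard variable and placement are assumptions:

```cmake
# Prefer static archives when resolving find_library() so that
# find_package(OpenMP) picks up libgomp.a / libomp.a instead of the
# shared library.
if (GGML_STATIC)
    set(CMAKE_FIND_LIBRARY_SUFFIXES ".a" ${CMAKE_FIND_LIBRARY_SUFFIXES})
endif()
find_package(OpenMP)
```

CMAKE_FIND_LIBRARY_SUFFIXES is an ordered list, so prepending ".a" makes CMake try the static archive first while still falling back to shared libraries for anything that has no static variant.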
Chenguang Li [Wed, 17 Sep 2025 06:33:08 +0000 (14:33 +0800)]
CANN: Optimize ggml_cann_set_device (llama/15935)
* CANN: Fix ggml_cann_set_device to avoid redundant device switches
- Added a check to skip aclrtSetDevice if the current device is already set.
- Prevents unnecessary context switches while keeping thread/device consistency.
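The guard could look roughly like this; the thread-local cache and the stub runtime call are illustrative assumptions, not the CANN sources:

```cpp
// Stand-in for aclrtSetDevice() so the sketch is self-contained; it
// counts how many real device switches would have happened.
static int g_switch_count = 0;
static int stub_aclrtSetDevice(int device) { g_switch_count++; return 0; }

void cann_set_device(int device) {
    // Cache the active device per thread; skip the runtime call when
    // the requested device is already current, avoiding a redundant
    // context switch while keeping thread/device consistency.
    static thread_local int current_device = -1;
    if (current_device == device) {
        return;
    }
    stub_aclrtSetDevice(device);
    current_device = device;
}
```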
Daniel Bevenius [Tue, 16 Sep 2025 13:25:57 +0000 (15:25 +0200)]
ggml : fix padding in timestep embedding kernels (llama/15932)
* ggml : remove adding extra dim timestep embedding
This commit updates the ggml_timestep_embedding function to no longer
add an extra dimension when the specified dimension is odd.
The motivation for this change is that the extra dimension added for
odd sizes was unnecessary: the kernels did not expect it, and it
resulted in uninitialized memory in the second-to-last dimension.
* ggml-cuda : fix padding in timestep embedding kernel
This commit removes the zeroing out of the last dimension now that we
are not adding the extra padding dimension.
* ggml-metal : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-opencl : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-sycl : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-vulkan : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-cpu : fix padding in timestep embedding function
This commit removes the zeroing out of the last dimension now that we
are not adding the extra padding dimension.
CUDA: fix im2col_3d to respect non-contiguous inputs (views) (llama/15956)
* fix im2col_3d to respect non-contiguous inputs (views)
The CUDA 3D im2col kernel computed source addresses assuming compact layout (products of dims), ignoring nb[] strides.
This patch switches im2col_3d source indexing to use true strides derived from src1->nb[] (in elements), mirroring the approach used in the 2D CUDA im2col path. Destination indexing is unchanged.
* use ggml_element_size() for src strides
Co-authored-by: Johannes Gäßler <redacted>
---------
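The stride derivation described above can be sketched like this; TensorView and element_index are illustrative stand-ins for ggml's tensor struct and the kernel's indexing, and the division assumes nb[0] equals the element size times a whole stride, as it does for ggml views:

```cpp
#include <cstddef>
#include <cstdint>

// Minimal model of a (possibly non-contiguous) tensor view: instead of
// assuming a compact layout (products of dims), per-dimension strides
// are derived in elements from the byte strides nb[].
struct TensorView {
    int64_t ne[4];   // dimensions
    size_t  nb[4];   // byte strides (may describe a view)
    size_t  elsize;  // bytes per element, i.e. ggml_element_size()
};

// Linear element index of (i0, i1, i2, i3) honoring the view's strides,
// mirroring the approach of the 2D CUDA im2col path.
static size_t element_index(const TensorView & t,
                            int64_t i0, int64_t i1, int64_t i2, int64_t i3) {
    const size_t s0 = t.nb[0] / t.elsize;
    const size_t s1 = t.nb[1] / t.elsize;
    const size_t s2 = t.nb[2] / t.elsize;
    const size_t s3 = t.nb[3] / t.elsize;
    return i0 * s0 + i1 * s1 + i2 * s2 + i3 * s3;
}
```

For a contiguous float tensor the strides reduce to products of dims, but for a view (e.g. a row stride larger than the row length) the two computations diverge, which is exactly the bug the commit fixes.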
Jeff Bolz [Sat, 13 Sep 2025 15:23:30 +0000 (16:23 +0100)]
vulkan: initialize vulkan-hpp to allow using extension function pointers (llama/15705)
Use this to query register count for shader compiles on NVIDIA. Currently
this is only for performance debug, but it could eventually be used in some
heuristics like split_k.
Daniel Bevenius [Thu, 11 Sep 2025 13:39:12 +0000 (15:39 +0200)]
ggml-cpu : add check for ARM MATMUL_INT8/i8mm support (llama/15922)
This commit adds a check for GGML_MACHINE_SUPPORTS_i8mm when enabling
MATMUL_INT8 features, ensuring that i8mm intrinsics are only used when
the target hardware actually supports them.
The motivation for this is to fix ggml CI build failures where the
feature detection correctly identifies that i8mm is not supported,
adding the +noi8mm flag, but MATMUL_INT8 preprocessor definitions are
still enabled, causing the compiler to attempt to use vmmlaq_s32
intrinsics without i8mm support.
CANN: Disable acl_graph for prefill stage (llama/15933)
Since the prefill length is not fixed, graphs constructed for the
prefill stage cannot be reused. For this reason, ACL graph
execution is disabled by default during prefill.
Daniel Bevenius [Wed, 10 Sep 2025 15:31:40 +0000 (17:31 +0200)]
ggml-cpu : fix padding in ggml_timestep_embedding (llama/15917)
This commit fixes the zero padding for odd dimensions in
ggml_compute_forward_timestep_embedding_f32.
The motivation for this is that currently if an odd dimension is used,
the padding check incorrectly uses the dimension value for indexing.
For example, with dim=15:
- Elements 0-6 are set to cosine values
- Elements 7-13 are set to sine values
- Element 14 is left uninitialized (contains garbage)
- Element 15 is correctly set to zero
This fix changes embed_data[dim] to embed_data[2 * half] so that
element 14, which is both the first unused element and the last
element, is properly set to zero.
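A self-contained sketch of the fixed layout; the function name and the exact frequency formula are illustrative, not ggml's implementation:

```cpp
#include <cmath>
#include <vector>

// For an odd dim, the first half holds cosines, the second half sines,
// and the final slot at index 2 * half (not dim) is zeroed, so no
// element is left uninitialized.
static std::vector<float> timestep_embedding(float t, int dim,
                                             int max_period = 10000) {
    const int half = dim / 2;
    std::vector<float> e(dim, -1.0f);  // poison so a missed write shows
    for (int i = 0; i < half; ++i) {
        const float freq =
            std::exp(-std::log((float) max_period) * i / (float) half);
        e[i]        = std::cos(t * freq);
        e[i + half] = std::sin(t * freq);
    }
    if (dim % 2 != 0) {
        e[2 * half] = 0.0f;  // the fix: was e[dim], one past the data
    }
    return e;
}
```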
Chenguang Li [Wed, 10 Sep 2025 10:42:00 +0000 (18:42 +0800)]
CANN: Add ROPE sin/cos cache for reuse (llama/15912)
* CANN: Add ROPE sin/cos cache for reuse
Introduce sin/cos caching mechanism in ROPE to avoid redundant
computation across layers. The cache is built on the first layer
per device and reused by subsequent layers if parameters match.
- Added sin_cache / cos_cache pointers and position_length tracking
- Introduced cache validity flags and properties:
(ext_factor, theta_scale, freq_scale, attn_factor, is_neox)
- Accelerates ROPE by eliminating repeated sin/cos generation
This change reduces overhead in multi-layer scenarios while
preserving correctness by verifying parameter consistency.
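The cache-validity check described above can be sketched as follows; the key fields mirror the parameters listed in the commit, but the struct names and the reuse logic are simplified assumptions about the CANN code:

```cpp
#include <cstdint>

// Parameters that must match for cached sin/cos tables to be reusable.
struct RopeCacheKey {
    float   ext_factor, theta_scale, freq_scale, attn_factor;
    bool    is_neox;
    int64_t position_length;
    bool operator==(const RopeCacheKey & o) const {
        return ext_factor == o.ext_factor && theta_scale == o.theta_scale &&
               freq_scale == o.freq_scale && attn_factor == o.attn_factor &&
               is_neox == o.is_neox && position_length == o.position_length;
    }
};

struct RopeSinCosCache {
    bool         valid = false;
    RopeCacheKey key{};
    int          build_count = 0;  // how often sin/cos were recomputed

    // Built on the first layer, reused by later layers if the key matches.
    void ensure(const RopeCacheKey & k) {
        if (valid && key == k) return;  // reuse across layers
        // ... recompute sin/cos tables here ...
        key   = k;
        valid = true;
        build_count++;
    }
};
```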
Chenguang Li [Wed, 10 Sep 2025 07:29:12 +0000 (15:29 +0800)]
CANN: implement LRU cache for ACL graphs (llama/15814)
* CANN: implement LRU cache for ACL graphs in CANN backend
- Introduce ggml_cann_graph_lru_cache to store multiple ggml_cann_graph objects.
- Graphs are loaded on demand and evicted using LRU policy when capacity is exceeded.
- Updated push, move_to_front, and clear methods to manage cached graphs efficiently.
- Ensures reuse of graphs, reducing graph reconstruction overhead in CANN backend.
* fix typo
* The LRU cache capacity can be configured via an environment variable
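The push / move_to_front / evict behavior can be sketched with a toy cache; the real ggml_cann_graph_lru_cache stores graph objects rather than the plain integer keys assumed here:

```cpp
#include <cstddef>
#include <list>

// Toy LRU cache: front of the list = most recently used.
struct LruCache {
    size_t capacity;
    std::list<int> items;

    // Returns true on a hit (and moves the key to the front),
    // false on a miss (inserting the key, evicting the LRU entry
    // if capacity is exceeded).
    bool get_or_insert(int key) {
        for (auto it = items.begin(); it != items.end(); ++it) {
            if (*it == key) {
                items.splice(items.begin(), items, it);  // move_to_front
                return true;
            }
        }
        items.push_front(key);   // load on demand
        if (items.size() > capacity) {
            items.pop_back();    // evict least recently used
        }
        return false;
    }
};
```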
Jeff Bolz [Mon, 8 Sep 2025 18:10:07 +0000 (13:10 -0500)]
vulkan: sort graph to allow more parallel execution (llama/15850)
* vulkan: sort graph to allow more parallel execution
Add a backend proc to allow the backend to modify the graph. The
vulkan implementation looks at which nodes depend on each other
and greedily reorders them to group together nodes that don't
depend on each other. It only reorders the nodes, doesn't change
the contents of any of them.
With #15489, this reduces the number of synchronizations needed.
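The greedy reorder described above amounts to emitting "waves" of mutually independent nodes while preserving dependency order. This sketch works on plain index lists rather than ggml graph nodes, assumes an acyclic graph, and is not the actual Vulkan pass:

```cpp
#include <vector>

// deps[i] lists the node indices that node i depends on. The result is
// a permutation of 0..n-1 in which every node follows all of its
// dependencies, with independent nodes grouped next to each other.
static std::vector<int>
sort_for_parallelism(const std::vector<std::vector<int>> & deps) {
    const int n = (int) deps.size();
    std::vector<bool> emitted(n, false);
    std::vector<int> order;
    while ((int) order.size() < n) {
        // Collect every node whose dependencies are already emitted:
        // these form one wave of mutually independent nodes.
        std::vector<int> wave;
        for (int i = 0; i < n; ++i) {
            if (emitted[i]) continue;
            bool ready = true;
            for (int d : deps[i]) ready = ready && emitted[d];
            if (ready) wave.push_back(i);
        }
        for (int i : wave) {
            emitted[i] = true;
            order.push_back(i);
        }
    }
    return order;
}
```

Only the order changes, never the node contents, matching the commit's description; grouping a wave together is what lets the backend skip synchronization barriers between its members.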