git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Jeff Bolz [Wed, 19 Nov 2025 16:25:50 +0000 (10:25 -0600)]
vulkan: support larger argsort (#17313)
* vulkan: support larger argsort
This is an extension of the original bitonic sorting shader that puts the
temporary values in global memory. When more than 1024 threads are needed,
it runs multiple workgroups and synchronizes through a pipeline barrier.
To improve the memory access pattern, a copy of the float value is kept with
the index value. I've applied the same change to the original shared memory
version of the shader, which is still used when ncols <= 1024.
* Reduce the number of shader variants. Use smaller workgroups when doing a single pass, for a modest perf boost
* reduce loop overhead
* run multiple cols per invocation, to reduce barrier overhead
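A rough CPU-side sketch of the bitonic argsort pattern described above (the actual change is a Vulkan compute shader; this standalone C++ version is illustrative only, and the function name is invented). Keeping a copy of the float value next to each index mirrors the memory-access improvement mentioned in the commit:

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// Illustrative CPU sketch of the bitonic argsort used by the shader.
// Each element carries a copy of its float value next to its index, so the
// compare step never has to chase the index back into the source array.
std::vector<int> bitonic_argsort(const std::vector<float> & vals) {
    size_t n = 1;
    while (n < vals.size()) n *= 2; // pad to a power of two, as bitonic networks require

    std::vector<std::pair<float, int>> a(n, {std::numeric_limits<float>::infinity(), -1});
    for (size_t i = 0; i < vals.size(); ++i) a[i] = {vals[i], int(i)};

    // classic bitonic sorting network; on the GPU each (k, j) stage becomes a
    // barrier (or a separate dispatch once the data exceeds one workgroup)
    for (size_t k = 2; k <= n; k *= 2) {
        for (size_t j = k / 2; j > 0; j /= 2) {
            for (size_t i = 0; i < n; ++i) {
                size_t l = i ^ j;
                if (l > i) {
                    bool asc = (i & k) == 0;
                    if ((asc && a[i].first > a[l].first) || (!asc && a[i].first < a[l].first)) {
                        std::swap(a[i], a[l]);
                    }
                }
            }
        }
    }

    std::vector<int> idx;
    for (auto & p : a) if (p.second >= 0) idx.push_back(p.second);
    return idx;
}
```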
Jeff Bolz [Wed, 19 Nov 2025 15:50:43 +0000 (09:50 -0600)]
vulkan: Add copy_transpose shader (#17371)
Aleksander Grygier [Wed, 19 Nov 2025 13:39:50 +0000 (14:39 +0100)]
webui: Add a "Continue" Action for Assistant Message (#16971)
* feat: Add "Continue" action for assistant messages
* feat: Continuation logic & prompt improvements
* chore: update webui build output
* feat: Improve logic for continuing the assistant message
* chore: update webui build output
* chore: Linting
* chore: update webui build output
* fix: Remove synthetic prompt logic, use the prefill feature by sending the conversation payload ending with assistant message
* chore: update webui build output
* feat: Enable "Continue" button based on config & non-reasoning model type
* chore: update webui build output
* chore: Update packages with `npm audit fix`
* fix: Remove redundant error
* chore: update webui build output
* chore: Update `.gitignore`
* fix: Add missing change
* feat: Add auto-resizing for Edit Assistant/User Message textareas
* chore: update webui build output
Sigbjørn Skjæret [Wed, 19 Nov 2025 10:52:38 +0000 (11:52 +0100)]
convert : use self.block_count everywhere instead of reading hparams (#17359)
Aman Gupta [Wed, 19 Nov 2025 10:25:05 +0000 (18:25 +0800)]
cuda: fix rope fusion for gemma3 (#17378)
Piotr Wilkin (ilintar) [Wed, 19 Nov 2025 09:36:33 +0000 (10:36 +0100)]
Fix too relaxed check on CUDA "fast copy" (can_be_transposed) condition (#17332)
* Fix too relaxed check on CUDA "fast copy" (can_be_transposed) condition
* Argh.
* Making CISC happy ;)
* Integrate CONT tests
* Use loopy loop
* Skip new tests for (B)F16 for now.
Ruben Ortlam [Wed, 19 Nov 2025 07:46:26 +0000 (08:46 +0100)]
vulkan: force full subgroups for flash attention to fix intel subgroup crash (#17356)
Jeremy Rand [Wed, 19 Nov 2025 06:19:00 +0000 (06:19 +0000)]
ggml-cpu: Don't pass -mpowerpc64 when -mcpu already implies it (#17308)
Xuan-Son Nguyen [Tue, 18 Nov 2025 18:11:53 +0000 (19:11 +0100)]
chat: fix int overflow, prevent size calculation in float/double (#17357)
* chat: fix int overflow, prevent size calculation in float/double
* Update common/chat.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
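A hypothetical sketch of the failure mode fixed above (the names here are invented, not the actual chat.cpp code): computing a size in 32-bit int can overflow, and routing it through float/double silently loses precision for large values. Doing the arithmetic in a wide unsigned type with an explicit overflow check avoids both:

```cpp
#include <cassert>
#include <cstdint>

// Compute n_items * item_size in 64-bit unsigned arithmetic, with an explicit
// overflow check, instead of 32-bit int (overflow) or double (precision loss
// above 2^53). Returns false if the product would not fit in uint64_t.
bool safe_total_size(uint64_t n_items, uint64_t item_size, uint64_t * out) {
    if (item_size != 0 && n_items > UINT64_MAX / item_size) {
        return false; // would overflow
    }
    *out = n_items * item_size;
    return true;
}
```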
Haiyue Wang [Tue, 18 Nov 2025 17:58:22 +0000 (01:58 +0800)]
vocab : call reserve() for building plamo-2-translate suffix (#17343)
Tested 'Q4_K_M' quantization on https://huggingface.co/pfnet/plamo-2-translate
The 'suffix_to_score' size is 193510; without reserving the memory up front,
holding the values takes 19 allocations, with a final capacity of 262144.
Signed-off-by: Haiyue Wang <redacted>
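The effect of the reserve() change can be demonstrated in isolation with a small sketch (this helper is illustrative, not the vocab code): counting capacity changes while growing a std::vector shows the repeated reallocations that a single up-front reserve() eliminates:

```cpp
#include <cstddef>
#include <vector>

// Count how many times the vector reallocates (capacity changes) while
// pushing n elements, with and without an up-front reserve(n).
size_t count_reallocs(size_t n, bool reserve_first) {
    std::vector<int> v;
    if (reserve_first) v.reserve(n);
    size_t reallocs = 0;
    size_t cap = v.capacity();
    for (size_t i = 0; i < n; ++i) {
        v.push_back(int(i));
        if (v.capacity() != cap) { ++reallocs; cap = v.capacity(); }
    }
    return reallocs;
}
```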
hksdpc255 [Tue, 18 Nov 2025 17:54:15 +0000 (04:54 +1100)]
common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932)
* Add files via upload
* fix unit test
* fix crashes for --reasoning-format=none
* Patch buggy official MiniMax-M2 chat template
* add upstream minja fix: https://github.com/ochafik/minja/pull/7
* Fix <think> token not generated
* add test copied from https://github.com/ggml-org/llama.cpp/pull/16946
* cleanup
* Hopefully fix the compilation error on CI
* Delete chat template patching since it’s fixed by upstream Minja
* Remove unneeded MiniMax-M2 template patch
https://github.com/ochafik/minja/pull/7#issuecomment-3480356100
* Add proper handling of optional parameters with test
merged tests from: https://github.com/ggml-org/llama.cpp/pull/16946/commits/23d4bb75c485c12ac89f81c424dc03c87a640e8c
* Fix making all tool parameters optional
* Move xml tool parser to separate file
* cleanup & add tests for GLM4.5
* add streaming tests & enhancement & cleanups
Add streaming test for both GLM 4.5 and minimax-m2.
Cleanup for preserved_tokens.
Cleanup for grammar rule name.
Enhance the parser's stability.
* cleanup & add support for Kimi-K2 Qwen3-Coder Apriel-1.5 Xiaomi-MiMo
* apply suggestions from reviewers
* fix a misuse for data.grammar_lazy
* fix grammar when a tool has no arguments
* Fix `no triggers set for lazy grammar!` for GLM4.5/4.6. Insert additional stops for Kimi-K2
* update chat.cpp
* fix grammar for GLM 4.5/4.6
* Try fix Jinja template for GLM
* Try fix GLM-4.6.jinja
* Update common/chat-parser-xml-toolcall.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-chat.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* improve chat template for GLM, rename Kimi-K2 template to Kimi-K2-Thinking
* Improve Kimi-K2 chat template
* Fix unit test
* Fix "Invalid tool call arguments passed" in a rare case.
In a rare case, the model may emit a raw string that begins with a valid JSON string. This commit adds unit tests to cover that scenario and fixes the regression introduced during the Kimi-K2 adaptation.
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
jiahao su [Tue, 18 Nov 2025 17:10:23 +0000 (01:10 +0800)]
ci : change the openEuler-310p image to fix release (#17361)
Georgi Gerganov [Tue, 18 Nov 2025 14:44:53 +0000 (16:44 +0200)]
gitignore : be more specific about ignored stuff (#17354)
Chenguang Li [Tue, 18 Nov 2025 08:41:52 +0000 (16:41 +0800)]
CANN: fix acl_tensor_ptr usage in ASCEND_310P ROPE (#17347)
* cann: fix acl_tensor_ptr usage in ASCEND_310P ROPE implementation
Fix compilation errors in the ASCEND_310P-specific ROPE operation code
by adding .get() calls when passing acl_tensor_ptr smart pointers to
functions expecting raw aclTensor* pointers.
This fixes the code that was missed in the previous refactoring commit
(8981848) which changed ggml_cann_create_tensor() return type from
aclTensor* to acl_tensor_ptr.
* cann: format code
o7si [Tue, 18 Nov 2025 08:10:47 +0000 (16:10 +0800)]
fix: resolve undefined variable 'svr' compilation error (#17348)
jiahao su [Tue, 18 Nov 2025 08:08:55 +0000 (16:08 +0800)]
CANN: Add openEuler-cann in build and release (#17192)
Update openEuler version
Remove variable ASCEND_SOC_TYPE
Modify the chip type
Fix case in zip filename
Change "device" to "chip_type"
Modify the value of chip_type
Jeff Bolz [Tue, 18 Nov 2025 06:41:24 +0000 (00:41 -0600)]
vulkan: support noncontig i32 copy (#17328)
Xuan-Son Nguyen [Mon, 17 Nov 2025 21:05:44 +0000 (22:05 +0100)]
server: split HTTP into its own interface (#17216)
* server: split HTTP into its own interface
* move server-http and httplib to its own file
* add the remaining endpoints
* fix exception/error handling
* renaming
* missing header
* fix missing windows header
* fix error responses from http layer
* fix slot save/restore handler
* fix case where only one stream chunk is returned
* add NOMINMAX
* do not call sink.write on empty data
* use safe_json_to_str for SSE
* clean up
* add some comments
* improve usage of next()
* bring back the "server is listening on" message
* more generic handler
* add req.headers
* move the chat template print to init()
* add req.path
* cont : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
Ruben Ortlam [Mon, 17 Nov 2025 20:37:49 +0000 (21:37 +0100)]
vulkan: add log RTE support to fix Nvidia CI (#17320)
* vulkan: add log RTE support to fix Nvidia CI
* actually use the rte shader
Adrien Gallouët [Mon, 17 Nov 2025 20:37:29 +0000 (21:37 +0100)]
cmake : fix ARM feature verification (#17170)
* cmake : fix ARM feature verification
Use check_cxx_source_compiles to prevent conflicts with
the existing GGML_NATIVE detection code.
Signed-off-by: Adrien Gallouët <redacted>
* cmake : unset __ARM_FEATURE when feature is disabled
Signed-off-by: Adrien Gallouët <redacted>
* cmake : fix scope, this is really a macro
Signed-off-by: Adrien Gallouët <redacted>
* arm_neon.h is useless
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Adrien Gallouët [Mon, 17 Nov 2025 11:12:00 +0000 (12:12 +0100)]
ggml : add missing AVX512 feature checks (#17270)
_mm512_cvtepu8_epi16 requires __AVX512BW__
_mm512_srli_epi16 requires __AVX512BW__
__builtin_ia32_inserti32x8 requires __AVX512DQ__
Signed-off-by: Adrien Gallouët <redacted>
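The guard pattern implied by the fix above can be sketched as follows (the function and fallback are illustrative, not the actual ggml code): each AVX512 intrinsic family has its own feature macro, so code using BW/DQ intrinsics must check those macros specifically rather than relying on __AVX512F__ alone, with a scalar fallback keeping the build working when the feature is absent:

```cpp
#include <cstdint>
#if defined(__AVX512BW__)
#include <immintrin.h>
#endif

// Zero-extend a byte to 16 bits, using _mm512_cvtepu8_epi16 only when the
// compiler actually advertises AVX512BW (the intrinsic requires __AVX512BW__,
// not just __AVX512F__). Otherwise fall back to scalar code.
uint16_t widen_u8(uint8_t x) {
#if defined(__AVX512BW__)
    __m512i v = _mm512_cvtepu8_epi16(_mm256_set1_epi8((char) x));
    return (uint16_t) _mm_extract_epi16(_mm512_castsi512_si128(v), 0);
#else
    return (uint16_t) x; // scalar fallback
#endif
}
```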
Georgi Gerganov [Mon, 17 Nov 2025 09:52:00 +0000 (11:52 +0200)]
metal : support I32 -> I32 copy (#17317)
Georgi Gerganov [Mon, 17 Nov 2025 09:51:48 +0000 (11:51 +0200)]
metal : faster argsort (#17315)
* metal : faster argsort
* cont : keep data in registers
Georgi Gerganov [Mon, 17 Nov 2025 09:51:13 +0000 (11:51 +0200)]
metal : add cumsum (#17305)
hipudding [Mon, 17 Nov 2025 00:43:59 +0000 (08:43 +0800)]
CANN: Use smart pointers to manage ACL objects (#17238)
* CANN: Use smart pointers to manage ACL objects
Previously, ACL objects were managed via manual destruction, which
led to multiple memory-leak issues during runtime. This patch replaces
manual memory management with smart pointers so that ACL objects
are properly released and ownership is clearly defined.
Note that the ownership of an ACL object belongs to the function
that creates it. Other internal functions should operate on these ACL
objects using raw pointers to avoid unintended ownership transfers.
Additionally, since aclTensorList automatically frees its contained
aclTensor objects, any aclTensor added to a tensor list must release
ownership to avoid double free operations.
This PR also removes the asynchronous task submission mechanism.
Due to changes in recent CANN versions, tiling time has significantly
decreased. Even with a dual-thread submission model, the dispatch
overhead still falls on the critical path, making async submission
less beneficial. Moreover, aclGraph support provides a much better
path to reducing operator dispatch latency.
* CANN: resolve review comments
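The ownership rules described above can be sketched generically (the real code wraps CANN's aclTensor and aclTensorList; this stand-in API is invented for illustration): the creating function owns the object via a smart pointer, helpers take raw pointers obtained with .get(), and handing a tensor to a list that frees its members requires release() to avoid a double free:

```cpp
#include <memory>
#include <vector>

// Stand-in for an ACL-style C handle plus its destroy function.
struct acl_like_tensor { int id; };

static int g_destroyed = 0;
void destroy_tensor(acl_like_tensor * t) { ++g_destroyed; delete t; }

struct tensor_deleter {
    void operator()(acl_like_tensor * t) const { destroy_tensor(t); }
};
using tensor_ptr = std::unique_ptr<acl_like_tensor, tensor_deleter>;

// The creating function owns the object and returns a smart pointer.
tensor_ptr create_tensor(int id) { return tensor_ptr(new acl_like_tensor{id}); }

// Helper functions operate on raw pointers and never take ownership.
int read_id(const acl_like_tensor * t) { return t->id; }

// Like aclTensorList, this list frees its members on destruction, so a
// tensor added to it must be release()d from its smart pointer first.
struct tensor_list {
    std::vector<acl_like_tensor *> items;
    ~tensor_list() { for (auto * t : items) destroy_tensor(t); }
};
```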
Pavels Zaicenkovs [Sun, 16 Nov 2025 21:50:09 +0000 (22:50 +0100)]
vulkan: add LOG operation support for F32 and F16 (#17183)
* vulkan: add LOG operation support for F32 and F16
Part of #14909.
* vulkan: Fix LOG operation types
* docs: Update operation support documentation for Vulkan LOG operation
* vulkan: fix log_f16 shader
* docs: restore missing LOG test cases and regenerate ops.md
Ruben Ortlam [Sun, 16 Nov 2025 18:38:17 +0000 (19:38 +0100)]
vulkan: fix MMQ quantize_y condition (#17301)
Eve [Sun, 16 Nov 2025 18:09:17 +0000 (18:09 +0000)]
ci : revert #16249 (#17303)
* Delete .github/workflows/build-amd.yml
* Update build.yml
Georgi Gerganov [Sun, 16 Nov 2025 07:50:26 +0000 (09:50 +0200)]
metal : remove obsolete asserts (#17295)
Georgi Gerganov [Sun, 16 Nov 2025 07:23:37 +0000 (09:23 +0200)]
server : handle context overflow during decode (#17267)
* server : handle context overflow during decode
* server : minor refactor
lhez [Sun, 16 Nov 2025 01:40:14 +0000 (17:40 -0800)]
opencl: fix rms_norm_mul (#17250)
* opencl: use subgroup reduce for reduction in rms_norm_mul
* opencl: add comment about workgroup size
shaofeiqi [Sun, 16 Nov 2025 01:33:10 +0000 (17:33 -0800)]
opencl: add kernel to handle mat mul in attention to improve encoding speed (#17181)
* Add mul_mm_f16_f32_kq_kqv kernel
* Add ggml_cl_mul_mat_kq_kqv_adreno func
* fix whitespace
* remove unused variable
* remove redundant
* refactor and clean up
* remove trailing whitespace
shani-f [Sat, 15 Nov 2025 23:52:42 +0000 (01:52 +0200)]
sycl : unify unary kernels with a generic implementation and enable wide operator support (#17213)
* SYCL: add generic unary op implementation for multiple ops (ABS/SGN/…); unify non-contiguous access
* SYCL: update documentation and sycl.csv to reflect new unary op support
* update ops.md after syncing SYCL.csv changes
* Fix SYCL.csv merge conflict
* Update ops.md after fixing SYCL.csv conflicts
* Fix SYCL.csv tail after merge conflict and regenerate ops.md
* Fix line endings and final newline in SYCL.csv
* Remove TOPK_MOE entries from SYCL.csv as requested
* Update ops.md after removing TOPK_MOE from SYCL.csv
* Regenerated SYCL.csv and synced ops.md with upstream
* Update ops.md using create_ops_docs.py
Aleksander Grygier [Sat, 15 Nov 2025 21:41:41 +0000 (22:41 +0100)]
webui: Fix clickability around chat processing statistics UI (#17278)
* fix: Better pointer events handling in chat processing info elements
* chore: update webui build output
Pascal [Sat, 15 Nov 2025 20:09:32 +0000 (21:09 +0100)]
webui: add OAI-Compat Harmony tool-call streaming visualization and persistence in chat UI (#16618)
* webui: add OAI-Compat Harmony tool-call live streaming visualization and persistence in chat UI
- Purely visual and diagnostic change, no effect on model context, prompt
construction, or inference behavior
- Captured assistant tool call payloads during streaming and non-streaming
completions, and persisted them in chat state and storage for downstream use
- Exposed parsed tool call labels beneath the assistant's model info line
with graceful fallback when parsing fails
- Added tool call badges beneath assistant responses that expose JSON tooltips
and copy their payloads when clicked, matching the existing model badge styling
- Added a user-facing setting to toggle tool call visibility to the Developer
settings section directly under the model selector option
* webui: remove scroll listener causing unnecessary layout updates (model selector)
* Update tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessageAssistant.svelte
Co-authored-by: Aleksander Grygier <redacted>
* Update tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessageAssistant.svelte
Co-authored-by: Aleksander Grygier <redacted>
* chore: npm run format & update webui build output
* chore: update webui build output
---------
Co-authored-by: Aleksander Grygier <redacted>
Sigbjørn Skjæret [Sat, 15 Nov 2025 19:58:59 +0000 (20:58 +0100)]
convert : remove unnecessary chat template patching (#17289)
Jeff Bolz [Sat, 15 Nov 2025 18:54:23 +0000 (12:54 -0600)]
vulkan: Fuse mul_mat_id+add_id+mul and mul_mat+add+add. (#17287)
These both show up in gpt-oss. Also, cleanup the mul_mat_vec fusion code a bit.
Ruben Ortlam [Sat, 15 Nov 2025 14:18:58 +0000 (15:18 +0100)]
vulkan: Replace 16-bit unpack8 calls to work around legacy Windows AMD driver bug (#17285)
Sigbjørn Skjæret [Sat, 15 Nov 2025 13:12:39 +0000 (14:12 +0100)]
convert : use all parts in safetensors index (#17286)
Sigbjørn Skjæret [Sat, 15 Nov 2025 13:06:24 +0000 (14:06 +0100)]
convert : set expert gating func in base class (#17279)
Ankur Verma [Sat, 15 Nov 2025 11:41:16 +0000 (03:41 -0800)]
mtmd-cli: Avoid logging to stdout for model loading messages in mtmd-cli (#17277)
Giuseppe Scrivano [Sat, 15 Nov 2025 11:00:29 +0000 (12:00 +0100)]
vulkan: implement ABS and NEG (#17245)
* docs: update Vulkan ops
* vulkan: add NEG op
* vulkan: add ABS op
---------
Signed-off-by: Giuseppe Scrivano <redacted>
Jeff Bolz [Sat, 15 Nov 2025 10:56:15 +0000 (04:56 -0600)]
vulkan: Use ggml_vk_tensor_subbuffer in mul_mat_vec(id) paths (#17244)
* vulkan: Use ggml_vk_tensor_subbuffer in mul_mat_vec(id) paths
* set allow_misalign
Jeff Bolz [Sat, 15 Nov 2025 09:37:25 +0000 (03:37 -0600)]
vulkan: skip all-negative-inf blocks in FA (#17186)
Jeff Bolz [Sat, 15 Nov 2025 08:06:41 +0000 (02:06 -0600)]
vulkan: change graph_compute to be async and enable get_tensor_async (#17158)
* vulkan: change graph_compute to be async and enable get_tensor_async
This allows some additional CPU/GPU overlap for large pp workloads. Also seems
to help a bit for token gen, maybe getting rid of a small bubble between
graph_compute and get_tensor.
Async set and copy functions seem to be very rarely used, so I didn't enable
them because I didn't have a good way to test them.
The async commands need to be ordered against each other, so put them all on
the compute queue. The non-async commands still use the transfer queue.
The fence for graph_compute/get_tensor_async is submitted and waited on in
ggml_vk_synchronize.
* fix thread safety errors
* teardown context cleanly
* Handle async read to non-pinned dst
Xuan-Son Nguyen [Fri, 14 Nov 2025 14:56:19 +0000 (15:56 +0100)]
mtmd: add mtmd_log_set (#17268)
Bartowski [Fri, 14 Nov 2025 12:54:10 +0000 (07:54 -0500)]
model : add AfmoeForCausalLM support (#16477)
* Add AFMOE model support
* Update to vocab
* Add model sizing
* Undo Rope change for ARCEE model
* Address review comments
* Update modeling code is_sliding -> use_rope, replace hard-coded logic
* Fix AFMOE tokenizer
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update AFMoE tokenizer class identification to be more unique
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Marek Hradil jr. [Fri, 14 Nov 2025 12:35:26 +0000 (13:35 +0100)]
fix : Dangling pointer for non-empty trigger words in lazy grammar construction (#17048)
* fix : Dangling pointer for non-empty trigger words in llama_sampler_init_grammar_impl (#17047)
* Replace 'static' workaround, with keeping variable in scope for longer
* Create std::array directly and pass into llama_grammar_init_impl
* Add back the trigger pattern
* Add missing array include
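The bug class fixed above can be sketched as follows (names are invented, not the sampler code): collecting c_str() pointers from temporary strings leaves them dangling as soon as each temporary is destroyed, so the fix is to keep the owning strings in a scope that outlives every use of the raw pointers:

```cpp
#include <array>
#include <string>
#include <vector>

// BUGGY pattern the commit removes (shown as a comment only):
//     words.push_back((prefix + w).c_str()); // temporary string dies at ';'
//
// FIX: the owning strings live in a container the caller keeps alive for as
// long as the raw pointers are used.
std::vector<const char *> make_trigger_words(const std::array<std::string, 2> & owners) {
    std::vector<const char *> out;
    for (const auto & s : owners) {
        out.push_back(s.c_str()); // safe: 'owners' outlives 'out' at the call site
    }
    return out;
}
```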
Georgi Gerganov [Fri, 14 Nov 2025 12:03:45 +0000 (14:03 +0200)]
server : fix "can batch with" bug (#17263)
Georgi Gerganov [Fri, 14 Nov 2025 07:36:06 +0000 (09:36 +0200)]
metal : support argsort for ne00 > 1024 (#17247)
* metal : refactor argsort
* cont : sort chunks
* cont : merge sorted buckets
* cont : cleanup
Georgi Gerganov [Fri, 14 Nov 2025 07:13:34 +0000 (09:13 +0200)]
metal : make the FA extra sizes consistent (#17143)
ixgbe [Fri, 14 Nov 2025 07:12:56 +0000 (15:12 +0800)]
readme : add RVV,ZVFH,ZFH,ZICBOP support for RISC-V (#17259)
Signed-off-by: Wang Yang <redacted>
Aleksander Grygier [Fri, 14 Nov 2025 00:19:08 +0000 (01:19 +0100)]
Better UX for handling multiple attachments in WebUI (#17246)
Alberto Cabrera Pérez [Thu, 13 Nov 2025 20:53:00 +0000 (20:53 +0000)]
ggml-cpu: handle 3d tensors in repack mat_mul (#17241)
* ggml-cpu: handle 3d tensors in repack mul_mat
* Removed unnecessary branch, removed need for <algorithm>
* Fixed dst_ptr pointer in chunk + clang_format
* GGML_ASSERT to check wdata within bounds
* Accidental ggml.h inclusion
* Improved GGML_ASSERT on wdata boundaries
* Address performance regression in Qwen and llama.cpp due to chunking
Xuan-Son Nguyen [Thu, 13 Nov 2025 19:53:47 +0000 (20:53 +0100)]
server: fixing naming conflict res_error (#17243)
Piotr Wilkin (ilintar) [Thu, 13 Nov 2025 18:54:47 +0000 (19:54 +0100)]
ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063)
* Add ops needed for new hybrid models: SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM
* Update ggml/include/ggml.h
Co-authored-by: Georgi Gerganov <redacted>
* Update tests/test-backend-ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Code review
* Whitespace
* Update tests/test-backend-ops.cpp
Co-authored-by: Diego Devesa <redacted>
* This is actually sigmoid, duh.
* Add CONST, remove TRI_KEEP, other changes from review
* Update tests/test-backend-ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml.c
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml.c
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-cuda/unary.cu
Co-authored-by: Aman Gupta <redacted>
* Remove extra script
* Update ggml/src/ggml.c
Co-authored-by: Diego Devesa <redacted>
* Update tests/test-backend-ops.cpp
Co-authored-by: Diego Devesa <redacted>
* moving changes from laptop [no ci]
* pre-rebase
* Update tests/test-backend-ops.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update tests/test-backend-ops.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Refactor tests
* ggml : cleanup
* cont : fix ggml_fill srcs
* tests : add note
* ggml : add ggml_fill_inplace
* ggml : add asserts
* ggml : fix ggml_fill constant cast
* cont : ggml_tri minor
* Use TENSOR_LOCALS
* Fix regression from #14596, regenerate
* Don't make commits at night...
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Diego Devesa <redacted>
Co-authored-by: Aman Gupta <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Ruben Ortlam [Thu, 13 Nov 2025 13:51:21 +0000 (14:51 +0100)]
vulkan: remove shell call from vulkan-shaders-gen tool, revert file check (#17219)
* vulkan: remove shell call from vulkan-shaders-gen tool
* use string vector for command execution
* Fix condition
* use string, remove const_cast
* Fix dependency file quotation on Windows
---------
Co-authored-by: Jeff Bolz <redacted>
Diego Devesa [Thu, 13 Nov 2025 12:14:02 +0000 (04:14 -0800)]
sched : fix reserve ignoring user tensor assignments (#17232)
ixgbe [Thu, 13 Nov 2025 12:13:32 +0000 (20:13 +0800)]
ggml-cpu : add RISC-V vector intrinsic support for silu and cvar operations (#17227)
Signed-off-by: Wang Yang <redacted>
bagheera [Thu, 13 Nov 2025 11:32:44 +0000 (05:32 -0600)]
metal: accelerated conv2d (#17175)
* metal: accelerated conv2d
* cont : cleanup
---------
Co-authored-by: bghira <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Thu, 13 Nov 2025 10:59:37 +0000 (12:59 +0200)]
Revert "ggml-cpu: handle 3d tensors in repack mat_mul (#17030)" (#17233)
This reverts commit 1c398dc9eca9c366ce98deb0e6f3538e444ebc8a.
Diego Devesa [Thu, 13 Nov 2025 08:59:05 +0000 (00:59 -0800)]
ggml-cpu : use template for argsort (#17222)
TecJesh [Thu, 13 Nov 2025 01:39:51 +0000 (09:39 +0800)]
CANN: Add cross_entropy_loss op support (#16886)
* update L2_NORM op support
* update L2_NORM op support
* remove extra whitespace
* cann: update cross_entropy_loss op support
* remove trailing whitespaces
* rebase the latest code in the main repository and remove the l2_norm operator that already exists in another pull request.
* undo the l2_norm operator deletion
Aman Gupta [Thu, 13 Nov 2025 00:50:01 +0000 (08:50 +0800)]
CUDA: fuse rope + set_rows (#16884)
* CUDA: add fused rope
* move k forward_expand up
* create helper function instead of re-using params
* make assert statement more in line with comment
* rope_norm: coalesced writes to global mem
Neo Zhang Jianyu [Thu, 13 Nov 2025 00:42:23 +0000 (08:42 +0800)]
update SYCL support OPs (#17208)
Co-authored-by: Zhang Jianyu <redacted>
o7si [Wed, 12 Nov 2025 22:41:02 +0000 (06:41 +0800)]
vocab : correct bounds check for UGM XCDA array access (#17215)
Johannes Gäßler [Wed, 12 Nov 2025 22:13:55 +0000 (23:13 +0100)]
CUDA: static assert to prevent misuse of memcpy_1 (#17198)
Mike Abbott [Wed, 12 Nov 2025 19:33:55 +0000 (12:33 -0700)]
docker : preserve .so symlinks for docker container builds (#17214)
Georgi Gerganov [Wed, 12 Nov 2025 18:43:38 +0000 (20:43 +0200)]
ggml : use std::sort in ggml_argsort CPU implementation (#17211)
* ggml : use std::sort in ggml_argsort CPU implementation
* cont : add missing header
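A minimal sketch in the spirit of the change above (the real ggml version operates on tensor rows; this free function is illustrative): sorting an index vector with a comparator that dereferences into the value array lets std::sort's introsort replace a hand-rolled sorting network:

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// Argsort: return the indices that would sort 'data' ascending.
std::vector<int32_t> argsort(const std::vector<float> & data) {
    std::vector<int32_t> idx(data.size());
    std::iota(idx.begin(), idx.end(), 0);        // 0, 1, 2, ...
    std::sort(idx.begin(), idx.end(), [&](int32_t a, int32_t b) {
        return data[a] < data[b];                // compare through the values
    });
    return idx;
}
```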
Aleksander Grygier [Wed, 12 Nov 2025 18:01:48 +0000 (19:01 +0100)]
Update packages + upgrade Storybook to v10 (#17201)
* chore: Update packages + upgrade Storybook to v10
* fix: Increase timeout for UI tests
Xuan-Son Nguyen [Wed, 12 Nov 2025 17:50:52 +0000 (18:50 +0100)]
server: (refactor) implement generator-based API for task results (#17174)
* server: (refactor) implement generator-based API for task results
* improve
* moving some code
* fix "Response ended prematurely"
* add sink.done before return false
* rm redundant check
* rm unused var
* rename generator --> reader
Xuan-Son Nguyen [Wed, 12 Nov 2025 13:56:02 +0000 (14:56 +0100)]
ci: add check vendor job (#17179)
* ci: add check vendor job
* use dev version of miniaudio
* move to dedicated workflow, only run on related files changed
Xuan-Son Nguyen [Wed, 12 Nov 2025 13:17:24 +0000 (14:17 +0100)]
server: move res_error/res_ok to static function (#17167)
Alberto Cabrera Pérez [Wed, 12 Nov 2025 12:52:19 +0000 (12:52 +0000)]
ggml-cpu: handle 3d tensors in repack mat_mul (#17030)
* ggml-cpu: handle 3d tensors in repack mul_mat
* Removed unnecessary branch, removed need for <algorithm>
* Fixed dst_ptr pointer in chunk + clang_format
* GGML_ASSERT to check wdata within bounds
* Accidental ggml.h inclusion
* Improved GGML_ASSERT on wdata boundaries
Adrien Gallouët [Wed, 12 Nov 2025 12:48:30 +0000 (13:48 +0100)]
cmake : cleanup (#17199)
Adrien Gallouët [Wed, 12 Nov 2025 11:32:50 +0000 (12:32 +0100)]
cmake : move OpenSSL linking to vendor/cpp-httplib (#17177)
* cmake : move OpenSSL linking to vendor/cpp-httplib
Signed-off-by: Adrien Gallouët <redacted>
* bring back httplib 0.27.0
* add -DLLAMA_HTTPLIB
* update cmake config for visionos
---------
Signed-off-by: Adrien Gallouët <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
TecJesh [Wed, 12 Nov 2025 07:11:42 +0000 (15:11 +0800)]
CANN: Add L2_NORM op support (#16856)
* update L2_NORM op support
* update L2_NORM op support
* remove extra whitespace
Neo Zhang Jianyu [Wed, 12 Nov 2025 06:44:29 +0000 (14:44 +0800)]
[SYCL]fix ci crash about SSM_CONV (#17169)
* fix ci crash
* Update ggml-sycl.cpp
* Update ggml/src/ggml-sycl/ggml-sycl.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
---------
Co-authored-by: Zhang Jianyu <redacted>
Co-authored-by: Sigbjørn Skjæret <redacted>
Raul Torres [Wed, 12 Nov 2025 06:37:52 +0000 (06:37 +0000)]
CANN: GGML_CANN_ACL_GRAPH works only when USE_ACL_GRAPH is enabled (#16861)
The documentation should state that `GGML_CANN_ACL_GRAPH` is only effective if `USE_ACL_GRAPH` was enabled at compilation time.
Max Krasnyansky [Tue, 11 Nov 2025 23:25:04 +0000 (15:25 -0800)]
hexagon: various Op fixes (#17135)
* hexagon: explicitly check for ops with zero nrows
llm_graph_context::build_inp_out_ids() can generate tensors with zero nrows.
Somehow other backends seem to handle this without obvious explicit checks.
In the hexagon case we need to check explicitly and skip them.
* hexagon: introduce fastdiv, fix test-backend-ops for ADD/SUB/MUL
Co-authored-by: chraac <redacted>
* hexagon: use fastdiv in ADD_ID
* hexagon: use ggml_op_is_empty and ggml_is_empty to check for NOPs
---------
Co-authored-by: chraac <redacted>
Eve [Tue, 11 Nov 2025 18:53:30 +0000 (18:53 +0000)]
disable rms norm mul rope for chips with no fp16 rte (#17134)
sudhiarm [Tue, 11 Nov 2025 15:58:05 +0000 (15:58 +0000)]
ci: add Arm-hosted Graviton4 runner (#17021)
* ci: add Arm-hosted Graviton4 runner
* ci: add missing dependencies for graviton4 build
* ci: enable LFS checkout on graviton4
* ci: move git-lfs install to dependencies in Graviton4 workflow
Xuan-Son Nguyen [Tue, 11 Nov 2025 12:32:58 +0000 (13:32 +0100)]
vendor: split httplib to cpp/h files (#17150)
* vendor: split httplib to cpp/h files
* move defines
* include httplib if curl is not used
* add TODO
* fix build ios
* fix build visionos instead
ixgbe [Tue, 11 Nov 2025 11:41:51 +0000 (19:41 +0800)]
ggml-cpu : add RISC-V RVV (Zvfh) optimization for FP16 to FP32 conversion (#17161)
Signed-off-by: Wang Yang <redacted>
duduta [Tue, 11 Nov 2025 11:33:24 +0000 (13:33 +0200)]
ggml-cpu: templateify ggml_compute_forward_rope_f32 and _f16 (#16805)
* extract rotate_pairs logic from ggml_compute_forward_rope_f32
* templateify ggml_compute_forward_rope_f32 and _f16
* abort when rope type not supported, remove GLM from test-rope
* add imrope branch to switch
* add rope tests for perf
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Charles Xu [Tue, 11 Nov 2025 11:20:31 +0000 (12:20 +0100)]
kleidiai: add optimized per-channel kernels for Q8_0 (#16993)
Mike Abbott [Tue, 11 Nov 2025 11:19:50 +0000 (04:19 -0700)]
cmake : add version to all shared object files (#17091)
When compiling llama.cpp in Yocto, it fails QA checks because the generated .so files aren't versioned. This applies a version to all generated .so files, allowing the package to build without errors.
Nicolas B. Pierron [Tue, 11 Nov 2025 10:53:59 +0000 (11:53 +0100)]
Install rpc-server when GGML_RPC is ON. (#17149)
levkropp [Tue, 11 Nov 2025 08:38:30 +0000 (03:38 -0500)]
convert : register UMT5Model architecture for T5 conversion (#17160)
Register UMT5Model as a supported architecture variant for T5 model conversion.
This allows the conversion to work for models downloaded with AutoModel.
lhez [Mon, 10 Nov 2025 23:00:13 +0000 (15:00 -0800)]
opencl: add fastdiv and use it in set_rows, ported from cuda (#17090)
* opencl: add fastdiv for mm q8_0
* opencl: use uint4 for fastdiv vals
* opencl: use fastdiv for set_rows
* opencl: do not use fastdiv for q8_0 mm
Sigbjørn Skjæret [Mon, 10 Nov 2025 21:55:30 +0000 (22:55 +0100)]
models : move build_inp_out_ids outside loop (#17151)
* move build_inp_out_ids outside loop
* realign
Max Krasnyansky [Mon, 10 Nov 2025 20:44:49 +0000 (12:44 -0800)]
cpu: skip NOPs to avoid barriers (#17133)
* cpu: skip NOPs to avoid barriers
* cpu: use ggml_op_is_empty
Georgi Gerganov [Mon, 10 Nov 2025 19:33:35 +0000 (21:33 +0200)]
metal : cap threadgroups size of set_rows (#17146)
Adrien Gallouët [Mon, 10 Nov 2025 19:03:36 +0000 (20:03 +0100)]
ggml-cpu : inspect -march and -mcpu to find the CPU (#16333)
Signed-off-by: Adrien Gallouët <redacted>
Ruben Ortlam [Mon, 10 Nov 2025 15:59:26 +0000 (16:59 +0100)]
vulkan: check glslc executable string (#17144)
Ruben Ortlam [Mon, 10 Nov 2025 15:59:10 +0000 (16:59 +0100)]
vulkan: fix validation issue introduced by #16868 (#17145)
Gabe Goodhart [Mon, 10 Nov 2025 15:14:23 +0000 (08:14 -0700)]
memory: Hybrid context shift (#17009)
* feat(memory): Only fail partial erasure of recurrent tail
The recurrent state is always assumed to be the state as of the last update
from the final token in the sequence. When doing a partial erasure, if the
range does not include the final token, the erasure can be considered a
success since any memory used for the sequence prior to the final token
(which is no memory) has been successfully removed.
There is one potential case that this doesn't address which is the pruning
of cache to remove sensitive data from the context. This wouldn't work for
attention cache partial removal (in the middle) either since the KV state
is linearly-dependent and states in later sequence positions would still be
based on the state from the sensitive data, even if that data is no longer
cached, so I don't think this is relevant, but it is worth noting that the
semantics of this change for a partial erasure in the middle of the cache
are essentially "my context is already compressed" and not "all trace of
the removed tokens has been removed."
https://github.com/ggml-org/llama.cpp/issues/16768
Branch: HybridContextShift-16768
Signed-off-by: Gabe Goodhart <redacted>
* fix(main): Check the output of seq_rm for prefix matching
This prefix matching is explicitly attempting to remove the tokens at the
end of the sequence that don't match. This is the operation that can't be
performed on a recurrent cache due to the state being updated in place, so
if this removal fails, we need to clear the whole cache.
https://github.com/ggml-org/llama.cpp/issues/16768
Branch: HybridContextShift-16768
Signed-off-by: Gabe Goodhart <redacted>
* fix(memory): Fix condition for partial erasure failure if p0 > pos
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: compilade <redacted>
* style: Fix extra parens
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
* fix(main.cpp): Set n_matching_session_tokens to 0 on cache clear
https://github.com/ggml-org/llama.cpp/issues/16768
Branch: HybridContextShift-16768
Signed-off-by: Gabe Goodhart <redacted>
---------
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: compilade <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Mon, 10 Nov 2025 13:38:42 +0000 (15:38 +0200)]
metal : enable tensor API for A19 (#17087)
fj-y-saito [Mon, 10 Nov 2025 13:12:59 +0000 (22:12 +0900)]
arm64: add i8mm route with SVE ggml_vec_dot_q4_K_q8_K and ggml_vec_dot_q6_K_… (#15277)
* add i8mm route with SVE ggml_vec_dot_q4_K_q8_K and ggml_vec_dot_q6_K_q8_K
* Surround SVE function with compiler directive
* fix compile switch
* fix coding style
* ggml : fix indent
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Mon, 10 Nov 2025 10:59:29 +0000 (12:59 +0200)]
batched-bench : add "separate text gen" mode (#17103)