git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Akarshan Biswas [Tue, 15 Apr 2025 08:37:42 +0000 (14:07 +0530)]
SYCL: Add ROPE vision kernel (#12887)

* SYCL: Add ROPE vision kernel

* Add comment about rope mode

Juk Armstrong [Tue, 15 Apr 2025 06:49:57 +0000 (07:49 +0100)]
llama : DeepSeek V2/V3 MLA implementation (#12801)

* Merged using squash to remove all noise commit messages

* Force flash attention off for `LLM_ARCH_DEEPSEEK2` - embedding too large

* Removed 3 conts (2x RoPE and 1x RMS-norm)

* Changed to use `<cmath>` instead of `<math.h>`

* Reverted removal of the 3 conts

* Used `reshape` in `llm_graph_context::build_attn_mha()`

* Use `k_pe = ggml_reshape`

* Removed the 3 conts again

* Removed the 3D views of `wk_b` and `wv_b`, and just save them as 3D in GGUF

* Removed MQA optimisation from `build_attn_mha()` as no gains now

* Simplified `is_mla` branch in `llm_build_deepseek2()`

* Removed `build_attn_mla` and added `nullptr` to all `build_attn` calls

* Fixed call to `build_attn` in `llm_build_t5_enc`

Srihari-mcw [Tue, 15 Apr 2025 06:22:36 +0000 (11:52 +0530)]
ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (#12829)

* Add AVX512 implementation of GEMM - q4kx8

* Update changes to remove unnecessary whitespaces

Chenguang Li [Tue, 15 Apr 2025 02:09:35 +0000 (10:09 +0800)]
CANN: Opt ROPE optimization (#12865)

* [CANN]Opt ROPE optimization

* [CANN]Codestyle adjustment

* [CANN]Fix the ROPE precision issue

* [CANN]codestyle fix

* [CANN]add ROPE unsupported case

Signed-off-by: noemotiovon <redacted>

Xinpeng Dou [Tue, 15 Apr 2025 02:04:24 +0000 (10:04 +0800)]
CANN: Optimize CANN buffer pool memory management (#12875)

Multiple optional memory pools are provided for CANN, including VMM,
priority queue-based, and traditional memory pools.
1. When the memory pool is available and GGML_CANN_DISABLE_VMM_POOL
   is not defined, the VMM pool is selected by default.
2. Otherwise, if GGML_CANN_ENABLE_BUF_PRIO_POOL is defined,
   the priority queue-based memory pool is used.
3. If neither condition is met, the default memory pool is used.

Russyyds [Mon, 14 Apr 2025 17:18:20 +0000 (01:18 +0800)]
Add performance print for gemma3 in example (#12929)

Akarshan Biswas [Mon, 14 Apr 2025 12:23:53 +0000 (17:53 +0530)]
SYCL: Fix im2col (#12910)

* SYCL: Fix im2col

* restore local workgroup size adjustments for large inputs

* restore format

Radoslav Gerganov [Mon, 14 Apr 2025 10:59:34 +0000 (13:59 +0300)]
rpc : use ggml_context_ptr (#12938)

Neo Zhang Jianyu [Mon, 14 Apr 2025 10:19:07 +0000 (18:19 +0800)]
disable curl lib check; this action was missed by commit bd3f59f81289b920bcc597a208c14f55e39ed37e (#12761) (#12937)

Georgi Gerganov [Mon, 14 Apr 2025 05:52:10 +0000 (08:52 +0300)]
sync : ggml

ggml-ci

cmdr2 [Fri, 11 Apr 2025 06:44:19 +0000 (12:14 +0530)]
cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal error when running test-backend-ops with only the CPU backend (ggml/1190)

SXX [Mon, 14 Apr 2025 05:47:55 +0000 (13:47 +0800)]
ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register (#12773)

* ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register

* simplifies the codebase by removing redundant functions

Alan Gray [Sun, 13 Apr 2025 21:12:21 +0000 (22:12 +0100)]
ggml: disable CUDA graphs for unsupported DUP and CONT node types (#12891)

Fixes #12798

Ed Addario [Sun, 13 Apr 2025 18:29:28 +0000 (19:29 +0100)]
quantize: Handle user-defined quantization levels for additional tensors (#12511)

* Add llama_model_quantize_params parameters

* Add new quantize parameters parsing and validation

* Update usage

* Add new parameters defaults

* Add new quantization parameters logic

* Minor refactoring as per the contributors' coding guidelines

* Update descriptions to match existing style

* Implement general --tensor-type instead of tensor-specific command option

* Fix implied type bug

* Restore missing #includes

* Add regex capability for tensor selection

* Refactor function name and update ALLOWED_TENSOR_TYPE

* Add missing #include

* Handle edge case when tensor name is cls.output

* Minor logging improvement

Prajwal B Mehendarkar [Sat, 12 Apr 2025 15:33:39 +0000 (21:03 +0530)]
common : Define cache directory on AIX (#12915)

Jeff Bolz [Sat, 12 Apr 2025 08:44:48 +0000 (03:44 -0500)]
vulkan: use aligned loads for flash attention mask (#12853)

Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.

Matt Clayton [Sat, 12 Apr 2025 05:29:03 +0000 (01:29 -0400)]
llava: Fix cpu-only clip image encoding segfault (#12907)

* llava: Fix cpu-only clip image encoding

* clip : no smart ptr for ggml_backend_t

* Fix for backend_ptr push_back

---------

Co-authored-by: Xuan Son Nguyen <redacted>

Georgi Gerganov [Fri, 11 Apr 2025 20:37:41 +0000 (23:37 +0300)]
server : add VSCode's Github Copilot Chat support (#12896)

* server : add VSCode's Github Copilot Chat support

* cont : update handler name

yuri@FreeBSD [Fri, 11 Apr 2025 20:04:14 +0000 (13:04 -0700)]
rpc : Set cache directory in rpc-server.cpp on FreeBSD (#12903)

Olivier Chafik [Fri, 11 Apr 2025 19:47:52 +0000 (12:47 -0700)]
`tool-call`: fix non-tool-calling grammar crashes w/ Qwen / Hermes 2 templates (#12900)

* `tool-call`: don't call common_chat_params_init_hermes_2_pro when there aren't tools (or when there's a schema)

* test all chat formats w/o tools

yuri@FreeBSD [Fri, 11 Apr 2025 19:45:44 +0000 (12:45 -0700)]
common : Define cache directory on FreeBSD (#12892)

Ewan Crawford [Fri, 11 Apr 2025 13:32:14 +0000 (15:32 +0200)]
sycl: Support sycl_ext_oneapi_limited_graph (#12873)

The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support update.

tastelikefeet [Fri, 11 Apr 2025 12:01:56 +0000 (20:01 +0800)]
contrib: support modelscope community (#12664)

* support download from modelscope

* support login

* remove comments

* add arguments

* fix code

* fix win32

* test passed

* fix readme

* revert readme

* change to MODEL_ENDPOINT

* revert tail line

* fix readme

* refactor model endpoint

* remove blank line

* fix header

* fix as comments

* update comment

* update readme

---------

Co-authored-by: tastelikefeet <redacted>

Yuxuan Zhang [Fri, 11 Apr 2025 10:10:10 +0000 (18:10 +0800)]
llama-model : add Glm4Model implementation for GLM-4-0414 (#12867)

* GLM-4-0414

* use original one

* Using with tensor map

* fix bug

* change order

* change order

* format with flake8

Xuan-Son Nguyen [Fri, 11 Apr 2025 10:09:39 +0000 (12:09 +0200)]
clip : use smart pointer (⚠️ breaking change) (#12869)

* clip : use smart pointers

* fix warmup

* add forward declaration

* missing include

* fix include (2)

* composite

* simplify batch ptr

* fix conflict

Akarshan Biswas [Fri, 11 Apr 2025 08:03:50 +0000 (13:33 +0530)]
SYCL: Add fp16 type support to unary op kernels (#12788)

* SYCL: Add fp16 support to some elementwise OP kernels

* remove comment

ggml-ci

* Use static_cast directly

* remove not needed cast from tanh

* Use static cast and remove unneeded castings

* Adjust device_support_op for unary OPs

* Use cast_data and typed_data struct to deduplicate casting code

Daniel Han [Fri, 11 Apr 2025 07:49:09 +0000 (00:49 -0700)]
convert : Llama4 RoPE fix (#12889)

R0CKSTAR [Fri, 11 Apr 2025 07:26:17 +0000 (15:26 +0800)]
ci : Replace freediskspace to free_disk_space in docker.yml (#12861)

Signed-off-by: Xiaodong Ye <redacted>

Daniel Bevenius [Fri, 11 Apr 2025 07:24:34 +0000 (09:24 +0200)]
xcf : add check for visionos build version (#12854)

This commit adds a check for the visionos build version used with vtool
in build-xcframework.sh. The script now checks the Xcode version and
determines whether to use "xros" or "visionos" for the build version.

This commit also uses xcrun for the vtool so that the version of vtool
in xcode command line tools is used instead of the one in the system
path.

Refs: https://github.com/ggml-org/whisper.cpp/pull/2994#issuecomment-2773292223

Xuan-Son Nguyen [Fri, 11 Apr 2025 07:23:37 +0000 (09:23 +0200)]
convert : proper tensor name mapping for llama4 (#12870)

* Llama-4 mapping

* remove hacky renaming

---------

Co-authored-by: Daniel Han <redacted>

Xuan-Son Nguyen [Fri, 11 Apr 2025 06:49:50 +0000 (08:49 +0200)]
llama : correct rms norm for llama 4 (#12882)

Aaron Teo [Fri, 11 Apr 2025 05:20:07 +0000 (13:20 +0800)]
ggml: fix compilation error s390x (#12848)

* ggml: fixes #12846 compilation error

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: add documentation for code change

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: refactor to type-cast and update documentation

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: update documentation to provide full issue link

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
---------

Co-authored-by: Aleksei Nikiforov <redacted>

Georgi Gerganov [Thu, 10 Apr 2025 21:08:23 +0000 (00:08 +0300)]
sync : ggml

Georgi Gerganov [Thu, 10 Apr 2025 21:04:25 +0000 (00:04 +0300)]
tests : fix init order (#0)

ggml-ci

Georgi Gerganov [Thu, 10 Apr 2025 20:59:16 +0000 (23:59 +0300)]
sync : ggml

ggml-ci

cmdr2 [Thu, 10 Apr 2025 12:23:08 +0000 (17:53 +0530)]
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)

fix #1186

Diego Devesa [Wed, 9 Apr 2025 10:32:13 +0000 (12:32 +0200)]
ggml : add bilinear upscale support (ggml/1185)

Diego Devesa [Wed, 9 Apr 2025 10:31:34 +0000 (12:31 +0200)]
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)

* ggml : add more generic ggml_custom op

* ggml : remove deprecated custom ops

Georgi Gerganov [Thu, 10 Apr 2025 20:59:01 +0000 (23:59 +0300)]
scripts : fix sync-ggml-am.sh

Xuan-Son Nguyen [Thu, 10 Apr 2025 20:57:16 +0000 (22:57 +0200)]
llava : introduce libmtmd (#12849)

* wip llava2

* migrated gemma3 to llava2

* add timings

* correct pre/postfix

* fix missing include

* fix compilation unused var warn

* update llava2_tokenize

* change name llava2 --> mtmd

* improve api

* refine helpers

* Update examples/llava/mtmd.cpp

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>

Xuan-Son Nguyen [Thu, 10 Apr 2025 15:24:44 +0000 (17:24 +0200)]
convert : ability to lazy-load safetensors remotely without downloading to disk (#12820)

* gguf util : add SafetensorRemote

* fix style

* convert: add --remote option

* convert : allow using lazy remote tensors

It's a bit slow for now since everything is blocking and single-threaded.

* correct metadata.name

* small style fix

* support HF_TOKEN

* convert : use writeable buffer for remote lazy tensors

* convert : fix flake8 lint regarding lambda assignment

* multithreaded download

* multithread: print debug

* fix style

* Revert "multithreaded download"

This reverts commit 42fc895ace385edc972ad819c76c704aeea61791.

* bring back _get_request_headers

---------

Co-authored-by: Francis Couture-Harpin <redacted>

Chenguang Li [Thu, 10 Apr 2025 00:51:52 +0000 (08:51 +0800)]
CANN: Support more ops (#12841)

* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D

* [CANN]Support COUNT_EQUAL && STEP && SGN

* [CANN]codestyle adjustment

* [CANN]codestyle adjustment

---------

Signed-off-by: noemotiovon <redacted>

Prajwal B Mehendarkar [Wed, 9 Apr 2025 23:18:01 +0000 (04:48 +0530)]
Fixes #12823 (#12830)

* Including limits file on AIX

* Fixes #12823

Rudi Servo [Wed, 9 Apr 2025 23:17:12 +0000 (23:17 +0000)]
docker : added all CPU to GPU images (#12749)

Piotr Kubaj [Wed, 9 Apr 2025 23:00:34 +0000 (23:00 +0000)]
ggml-cpu-impl.h: do not redefine bool on POWER9 (#12856)

error: unknown type name '_Bool'

Piotr Kubaj [Wed, 9 Apr 2025 23:00:25 +0000 (23:00 +0000)]
ggml-impl.h: fix build on POWER9 (#12855)

error: ISO C++17 does not allow 'register' storage class specifier

Bo Zheng [Wed, 9 Apr 2025 09:47:36 +0000 (17:47 +0800)]
llama : Support Qwen3 and Qwen3MoE (#12828)

* add qwen3 & qwen3moe support.

* fix

---------

Co-authored-by: bozheng-hit <redacted>

R0CKSTAR [Wed, 9 Apr 2025 09:22:30 +0000 (17:22 +0800)]
musa: enable freediskspace for docker image build (#12839)

Signed-off-by: Xiaodong Ye <redacted>

Romain Biessy [Wed, 9 Apr 2025 09:22:04 +0000 (11:22 +0200)]
sycl: update documentation to use -no-cnv (#12845)

Plamen Minev [Wed, 9 Apr 2025 08:11:11 +0000 (11:11 +0300)]
ci: detach common from the library (#12827)

* fix: detach common from the library

* fix: building chat test template

Xuan-Son Nguyen [Wed, 9 Apr 2025 08:09:53 +0000 (10:09 +0200)]
clip : do not print ftype (#12832)

Georgi Gerganov [Wed, 9 Apr 2025 07:54:42 +0000 (10:54 +0300)]
readme : add rpc backend (#12842)

Chenguang Li [Wed, 9 Apr 2025 06:04:14 +0000 (14:04 +0800)]
CANN: Support Opt CONV_TRANSPOSE_1D and ELU (#12786)

* [CANN] Support ELU and CONV_TRANSPOSE_1D

* [CANN]Modification review comments

* [CANN]Modification review comments

* [CANN]name adjustment

* [CANN]remove lambda used in template

* [CANN]Use std::func instead of template

* [CANN]Modify the code according to the review comments

---------

Signed-off-by: noemotiovon <redacted>

Jeff Bolz [Wed, 9 Apr 2025 05:25:08 +0000 (00:25 -0500)]
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (#12833)

q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.

Jeff Bolz [Wed, 9 Apr 2025 05:12:57 +0000 (00:12 -0500)]
vulkan: Use fp16 for the flash attention P*V multiplication (#12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.

Sigbjørn Skjæret [Tue, 8 Apr 2025 21:21:31 +0000 (23:21 +0200)]
cuda : add f32 to bf16 copy op (#12806)

This allows BF16 KV-cache on CUDA.

Matt Clayton [Tue, 8 Apr 2025 20:01:58 +0000 (16:01 -0400)]
llava: improve clip_ctx destructor to not memleak load_image_size (#12834)

Georgi Gerganov [Tue, 8 Apr 2025 16:54:51 +0000 (19:54 +0300)]
llama : fix FA when KV cache is not used (i.e. embeddings) (#12825)

* ggml : FA supports F32 V

* graph : cast KV to F16 when the KV cache is not used

ggml-ci

* server : add test that exercises embeddings with FA enabled

ggml-ci

Xuan-Son Nguyen [Tue, 8 Apr 2025 16:37:06 +0000 (18:37 +0200)]
server : fix thread.join() on exit (#12831)

dm4 [Tue, 8 Apr 2025 13:49:13 +0000 (21:49 +0800)]
llava: add more helper functions to check projector types in clip context (#12824)

Signed-off-by: dm4 <redacted>

Prajwal B Mehendarkar [Tue, 8 Apr 2025 12:30:59 +0000 (18:00 +0530)]
arg : Including limits file on AIX (#12822)

characharm [Tue, 8 Apr 2025 09:14:59 +0000 (14:14 +0500)]
server : webui : Improve Chat Input with Auto-Sizing Textarea (#12785)

* Update ChatScreen.tsx

* useAutosizeTextarea.ts

useAutosizeTextarea to encapsulate the logic.

* Implement responsive auto-sizing chat textarea

Replaces the manual textarea resizing with an automatic height adjustment based on content.

- `useChatTextarea` hook to manage textarea state and auto-sizing logic via refs, preserving the optimization
- Textarea now grows vertically up to a maximum height (`lg:max-h-48`) on large screens (lg breakpoint and up).
- Disables auto-sizing and enables manual vertical resizing (`resize-vertical`) on smaller screens for better mobile usability.
- Aligns the "Send" button to the bottom of the textarea (`items-end`) for consistent positioning during resize.

* update compressed index.html.gz after npm run build
* refactor: replace OptimizedTextareaValue with AutosizeTextareaApi in VSCode context hook

* chore: normalize line endings to LF
refactor: AutosizeTextareaApi -> chatTextareaApi

* refactor: Rename interface to PascalCase

---------

Co-authored-by: Xuan Son Nguyen <redacted>

Neo Zhang Jianyu [Tue, 8 Apr 2025 07:03:21 +0000 (15:03 +0800)]
Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (#12812)

* Revert "sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_s…"

This reverts commit 518a01480eb3a7c80a4951b430db9dee55428310.

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* rm tail space

compilade [Tue, 8 Apr 2025 07:03:07 +0000 (03:03 -0400)]
gguf-py : support lazy tensor splitting (#12809)

* gguf-py : support lazy tensor splitting

Splitting usually involves returning tuples of tensors,
which need to be handled properly to avoid early eager evaluation.

* gguf-py : fix flake8 lint
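A minimal sketch of why the split results must stay lazy, as the commit message explains. This is illustrative only — not gguf-py's actual lazy-tensor machinery, and all names here are hypothetical: each element of the returned tuple defers realization of the parent until its data is actually needed.

```python
class LazyTensor:
    """Toy stand-in for a lazily-evaluated tensor (hypothetical, for illustration)."""

    def __init__(self, func):
        self._func = func      # deferred computation
        self._data = None

    def realize(self):
        if self._data is None:
            self._data = self._func()
        return self._data

    def split(self, n):
        # Returning plain data here would force eager evaluation immediately.
        # Instead, each part is itself a LazyTensor closing over the parent,
        # so realization is deferred until a part is actually used.
        def part(i):
            def f():
                data = self.realize()
                k = len(data) // n
                return data[i * k:(i + 1) * k]
            return LazyTensor(f)
        return tuple(part(i) for i in range(n))

t = LazyTensor(lambda: list(range(8)))
a, b = t.split(2)          # returns a tuple; nothing is evaluated yet
assert t._data is None     # parent is still unrealized
print(a.realize())         # -> [0, 1, 2, 3]
```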

Xuan-Son Nguyen [Mon, 7 Apr 2025 21:06:44 +0000 (23:06 +0200)]
llama : Support llama 4 text-only (#12791)

* llama4 conversion

* initial support, no chat template

* clean up a bit

* fix tokenizer conversion

* correct hparams

* try this

* fix shexp

* ffn_inp_normed

* chat template

* clean up model conversion

* add_bos

* add scale_before_ffn

* fix order

* weight_before_ffn

* llm_graph_input_attn_temp

* add chunk attn mask

* build_inp_attn_scale()

* add comment about ggml_repeat

* clarify comments

* fix build

lhez [Mon, 7 Apr 2025 20:22:54 +0000 (13:22 -0700)]
opencl: better identify Adreno GPU (#12760)

stduhpf [Mon, 7 Apr 2025 15:47:08 +0000 (17:47 +0200)]
hellaswag: display estimated score confidence interval (#12797)

Georgi Gerganov [Mon, 7 Apr 2025 10:18:07 +0000 (13:18 +0300)]
cuda : fix HIP and MUSA BF16 (#0)

ggml-ci

Georgi Gerganov [Mon, 7 Apr 2025 09:32:39 +0000 (12:32 +0300)]
sync : ggml

ggml-ci

Georgi Gerganov [Mon, 7 Apr 2025 09:25:15 +0000 (12:25 +0300)]
ggml : simplify Arm fp16 CPU logic (ggml/1177)

* ggml : simplify Arm fp16 CPU logic

ggml-ci

* cont : bring back CUDA/MUSA checks

ggml-ci

Sigbjørn Skjæret [Fri, 4 Apr 2025 19:05:12 +0000 (21:05 +0200)]
CUDA: don't convert BF16 weights to FP32 (ggml/1174)

* add bf16 support

* use convert_from_bf16_cuda instead of convert_unary_cuda for f32

* revert 7ec5085

* move functionality into convert_unary with constexpr

cmdr2 [Wed, 2 Apr 2025 12:16:16 +0000 (17:46 +0530)]
cpu: move all the operators into a separate c++ file (except mul_mat) (ggml/1167)

* cpu: refactor SIMD mappings and vectorized op functions into separate files

* Fix warning for ggml_float to float

* Fix warnings

* cpu: move all the operations (except mul_mat) to a separate c++ file

* fix whitespace

* Update ggml/src/ggml-cpu/vec.h

Co-authored-by: Diego Devesa <redacted>
* Fix PR comments - use GGML_UNUSED, use cassert in ops.cpp

* Reverse the order of import for ops.h and vec.h, to match what was present in ggml-cpu.c previously

---------

Co-authored-by: Diego Devesa <redacted>

zhouwg [Mon, 7 Apr 2025 15:22:57 +0000 (23:22 +0800)]
sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (#12734)

Xuan-Son Nguyen [Mon, 7 Apr 2025 12:37:28 +0000 (14:37 +0200)]
ci : no curl on ggml-ci (#12796)

Xuan-Son Nguyen [Mon, 7 Apr 2025 11:35:19 +0000 (13:35 +0200)]
cmake : enable curl by default (#12761)

* cmake : enable curl by default

* no curl if no examples

* fix build

* fix build-linux-cross

* add windows-setup-curl

* fix

* shell

* fix path

* fix windows-latest-cmake*

* run: include_directories

* LLAMA_RUN_EXTRA_LIBS

* sycl: no llama_curl

* no test-arg-parser on windows

* clarification

* try riscv64 / arm64

* windows: include libcurl inside release binary

* add msg

* fix mac / ios / android build

* will this fix xcode?

* try clearing the cache

* add bunch of licenses

* revert clear cache

* fix xcode

* fix xcode (2)

* fix typo

zhouwg [Mon, 7 Apr 2025 11:34:14 +0000 (19:34 +0800)]
CANN: fix typo in ggml-cann (#12733)

hipudding [Mon, 7 Apr 2025 09:10:36 +0000 (17:10 +0800)]
CANN: Refactor to reduce duplicate code (#12731)

* CANN: Refactor to reduce duplicate code

* CANN: fix review comment

R0CKSTAR [Sun, 6 Apr 2025 13:23:54 +0000 (21:23 +0800)]
musa: fix compilation warnings in mp_22/31 (#12780)

Signed-off-by: Xiaodong Ye <redacted>

Jeff Bolz [Sun, 6 Apr 2025 09:03:47 +0000 (04:03 -0500)]
vulkan: fix NaN issue in flash attention shader (#12776)

Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.

Jeff Bolz [Sun, 6 Apr 2025 08:47:13 +0000 (03:47 -0500)]
vulkan: Use unclamped loads for flash attention mask (#12720)

nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.

0cc4m [Sat, 5 Apr 2025 16:04:03 +0000 (18:04 +0200)]
Vulkan: Tune Vulkan mmq int dot shader for performance (#12767)

Sergey Fedorov [Sat, 5 Apr 2025 15:46:00 +0000 (23:46 +0800)]
common : fix includes in arg.cpp and gemma3-cli.cpp (#12766)

* arg.cpp: add a missing include

* gemma3-cli.cpp: fix cinttypes include

Xuan-Son Nguyen [Sat, 5 Apr 2025 15:17:40 +0000 (17:17 +0200)]
clip : refactor clip_init, add tests (#12757)

* refactor clip_init

* fix loading file

* fix style

* test ok

* better test with report

* add missing headers

* clarify

* add KEY_MM_PATCH_MERGE_TYPE

* remove bool has_* pattern

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* Update examples/llava/clip.cpp

Co-authored-by: Georgi Gerganov <redacted>
* use ggml_soft_max_ext

* refactor logging system

* add minicpm-v-o 2.6 for testing

* use nullptr everywhere

* fix Yi-VL model

---------

Co-authored-by: Georgi Gerganov <redacted>

エシュナヴァリシア [Sat, 5 Apr 2025 13:31:42 +0000 (21:31 +0800)]
common: custom hf endpoint support (#12769)

* common: custom hf endpoint support

Add support for custom huggingface endpoints via HF_ENDPOINT environment variable

You can now specify a custom huggingface endpoint using the HF_ENDPOINT environment variable when using the --hf-repo flag, which works similarly to huggingface-cli's endpoint configuration.

Example usage:
HF_ENDPOINT=https://hf-mirror.com/ ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"

The trailing slash in the URL is optional:
HF_ENDPOINT=https://hf-mirror.com ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"
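The optional trailing slash can be handled with a one-line normalization. A hypothetical Python sketch (llama.cpp's actual implementation is in C++ and the helper name here is illustrative):

```python
import os

def model_endpoint() -> str:
    # Read the endpoint from HF_ENDPOINT, defaulting to the official hub;
    # normalize so the result always ends with exactly one "/".
    base = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
    return base if base.endswith("/") else base + "/"
```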

* Update common/arg.cpp

readability Improvement

Co-authored-by: Xuan-Son Nguyen <redacted>
* Apply suggestions from code review

---------

Co-authored-by: ベアトリーチェ <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>

Olivier Chafik [Fri, 4 Apr 2025 20:16:39 +0000 (13:16 -0700)]
sync: minja (#12739)

* sync: minja

https://github.com/google/minja/pull/57

* fix json include

Georgi Gerganov [Fri, 4 Apr 2025 18:48:10 +0000 (21:48 +0300)]
kv-cache : simplify + fix warning for recurrent models (#12756)

ggml-ci

bandoti [Fri, 4 Apr 2025 17:05:12 +0000 (14:05 -0300)]
ci: add Linux cross-compile build (#12428)

Nauful Shaikh [Fri, 4 Apr 2025 14:09:52 +0000 (09:09 -0500)]
server : webui : Upgrade daisyui, tailwindcss. (#12735)

* Upgrade daisyui, tailwindcss.

* Switch to all themes.

* Revert a change.

* Update formatting.

* Install packages before npm build.

* Revert "Install packages before npm build."

This reverts commit 336c5147e614e60993162794ba9d9d4629a916f8.

* Add index.html.gz

* run build

---------

Co-authored-by: Xuan Son Nguyen <redacted>

nick huang [Fri, 4 Apr 2025 14:09:12 +0000 (22:09 +0800)]
gguf-split : --merge now respects --dry-run option (#12681)

* gguf-split now respects dry-run option

* removing trailing space

Nicolò Scipione [Fri, 4 Apr 2025 14:00:46 +0000 (16:00 +0200)]
sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (#12625)

Ronny Brendel [Fri, 4 Apr 2025 13:12:40 +0000 (15:12 +0200)]
cmake: fix ggml-shaders-gen compiler paths containing spaces (#12747)

fixes error for compiler paths with spaces

Daniel Bevenius [Fri, 4 Apr 2025 08:24:12 +0000 (10:24 +0200)]
docs : add XCFramework section to README.md [no ci] (#12746)

This commit adds a new section to the README.md file, detailing the
usage of the XCFramework.

The motivation for this is that it might not be immediately clear to
users how to use the XCFramework in their projects and hopefully this
will help.

Jeff Bolz [Fri, 4 Apr 2025 05:54:35 +0000 (00:54 -0500)]
vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (#12630)

There seems to be a bubble waking up from waitForFences, which costs a few
percent performance and also increases variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete; we
waitForFences on the almost_ready fence and then spin (with _mm_pause) waiting
for the final fence to be signaled.

Jeff Bolz [Fri, 4 Apr 2025 05:53:20 +0000 (00:53 -0500)]
vulkan: set cmake minimum and project name in vulkan-shaders (#12744)

lhez [Fri, 4 Apr 2025 05:18:17 +0000 (22:18 -0700)]
opencl: update doc for OpenCL (#12702)

* opencl: add OpenCL to build.md

* opencl: remove fixed issue/TODO

* opencl: add link to OPENCL.md

* opencl: update doc - refine tools requirement for Windows 11 arm64

Gaurav Garg [Thu, 3 Apr 2025 16:20:29 +0000 (21:50 +0530)]
CUDA: Prefer vector flash decoding kernel for Gemma models (#12738)

* Prefer vector flash decoding kernel for Gemma models

Vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in gen-phase throughput for Gemma models.

* Update ggml/src/ggml-cuda/fattn.cu

Co-authored-by: Johannes Gäßler <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>

yumeyao [Thu, 3 Apr 2025 15:32:54 +0000 (23:32 +0800)]
vocab : use string_view::find() to avoid unnecessary lookups beyond the fragment range (#12706)

Jeff Bolz [Thu, 3 Apr 2025 15:08:26 +0000 (10:08 -0500)]
vulkan: Fix missing cmake logic for dot product extension (#12721)

Atharva Dubey [Thu, 3 Apr 2025 12:12:39 +0000 (13:12 +0100)]
ci : add env variable in ggml-ci and document the same in SYCL.md (#12736)

R0CKSTAR [Thu, 3 Apr 2025 11:51:35 +0000 (19:51 +0800)]
sync : minja (inclusionAI/Ling) and update tests (#12699)

Signed-off-by: Xiaodong Ye <redacted>