git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Ed Addario [Sun, 13 Apr 2025 18:29:28 +0000 (19:29 +0100)]
quantize: Handle user-defined quantization levels for additional tensors (#12511)
* Add llama_model_quantize_params parameters
* Add new quantize parameters parsing and validation
* Update usage
* Add new parameters defaults
* Add new quantization parameters logic
* Minor refactoring as per the contributors' coding guidelines
* Update descriptions to match existing style
* Implement general --tensor-type instead of tensor-specific command option
* Fix implied type bug
* Restore missing #includes
* Add regex capability for tensor selection
* Refactor function name and update ALLOWED_TENSOR_TYPE
* Add missing #include
* Handle edge case when tensor name is cls.output
* Minor logging improvement
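[Editor's note] A minimal C++ sketch of the regex-based per-tensor type override described above. The names (tensor_type_override, find_override) are hypothetical; this is not the actual llama.cpp implementation, only the selection idea:

    // Hypothetical sketch: map tensor names to a user-requested quantization
    // type via regex patterns, the way a --tensor-type option might.
    #include <iostream>
    #include <optional>
    #include <regex>
    #include <string>
    #include <vector>

    struct tensor_type_override {
        std::regex  pattern; // e.g. "ffn_down" matches blk.*.ffn_down.weight
        std::string type;    // requested type name, e.g. "q6_k"
    };

    // First matching pattern wins; unmatched tensors keep the default type.
    static std::optional<std::string> find_override(
            const std::vector<tensor_type_override> & overrides,
            const std::string & tensor_name) {
        for (const auto & o : overrides) {
            if (std::regex_search(tensor_name, o.pattern)) {
                return o.type;
            }
        }
        return std::nullopt;
    }

    int main() {
        const std::vector<tensor_type_override> overrides = {
            { std::regex("ffn_down"), "q6_k" },
            { std::regex("attn_.*"),  "q8_0" },
        };
        for (const std::string name :
                { "blk.0.ffn_down.weight", "blk.0.attn_q.weight", "output.weight" }) {
            const auto t = find_override(overrides, name);
            std::cout << name << " -> " << (t ? *t : "default") << "\n";
        }
    }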
Prajwal B Mehendarkar [Sat, 12 Apr 2025 15:33:39 +0000 (21:03 +0530)]
common : Define cache directory on AIX (#12915)
Jeff Bolz [Sat, 12 Apr 2025 08:44:48 +0000 (03:44 -0500)]
vulkan: use aligned loads for flash attention mask (#12853)
Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.
Matt Clayton [Sat, 12 Apr 2025 05:29:03 +0000 (01:29 -0400)]
llava: Fix cpu-only clip image encoding segfault (#12907)
* llava: Fix cpu-only clip image encoding
* clip : no smart ptr for ggml_backend_t
* Fix for backend_ptr push_back
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Georgi Gerganov [Fri, 11 Apr 2025 20:37:41 +0000 (23:37 +0300)]
server : add VSCode's Github Copilot Chat support (#12896)
* server : add VSCode's Github Copilot Chat support
* cont : update handler name
yuri@FreeBSD [Fri, 11 Apr 2025 20:04:14 +0000 (13:04 -0700)]
rpc : Set cache directory in rpc-server.cpp on FreeBSD (#12903)
Olivier Chafik [Fri, 11 Apr 2025 19:47:52 +0000 (12:47 -0700)]
`tool-call`: fix non-tool-calling grammar crashes w/ Qwen / Hermes 2 templates (#12900)
* `tool-call`: don't call common_chat_params_init_hermes_2_pro when there aren't tools (or when there's a schema)
* test all chat formats w/o tools
yuri@FreeBSD [Fri, 11 Apr 2025 19:45:44 +0000 (12:45 -0700)]
common : Define cache directory on FreeBSD (#12892)
Ewan Crawford [Fri, 11 Apr 2025 13:32:14 +0000 (15:32 +0200)]
sycl: Support sycl_ext_oneapi_limited_graph (#12873)
The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support graph update.
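[Editor's note] A minimal sketch of the aspect check this commit describes, assuming the aspects defined by the sycl_ext_oneapi_graph extension; not the actual ggml-sycl code:

    // Sketch: accept devices exposing either the full SYCL-Graph aspect or the
    // limited variant, which supports record/replay but not graph update.
    #include <sycl/sycl.hpp>

    bool device_supports_graphs(const sycl::device & dev, bool need_update) {
        if (dev.has(sycl::aspect::ext_oneapi_graph)) {
            return true;  // full support, including update
        }
        if (!need_update && dev.has(sycl::aspect::ext_oneapi_limited_graph)) {
            return true;  // limited: no update, still usable for replay
        }
        return false;
    }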
tastelikefeet [Fri, 11 Apr 2025 12:01:56 +0000 (20:01 +0800)]
contrib: support modelscope community (#12664)
* support download from modelscope
* support login
* remove comments
* add arguments
* fix code
* fix win32
* test passed
* fix readme
* revert readme
* change to MODEL_ENDPOINT
* revert tail line
* fix readme
* refactor model endpoint
* remove blank line
* fix header
* fix as comments
* update comment
* update readme
---------
Co-authored-by: tastelikefeet <redacted>
Yuxuan Zhang [Fri, 11 Apr 2025 10:10:10 +0000 (18:10 +0800)]
llama-model : add Glm4Model implementation for GLM-4-0414 (#12867)
* GLM-4-0414
* use original one
* Using with tensor map
* fix bug
* change order
* change order
* format with flake8
Xuan-Son Nguyen [Fri, 11 Apr 2025 10:09:39 +0000 (12:09 +0200)]
clip : use smart pointer (⚠️ breaking change) (#12869)
* clip : use smart pointers
* fix warmup
* add forward declaration
* missing include
* fix include (2)
* composite
* simplify batch ptr
* fix conflict
Akarshan Biswas [Fri, 11 Apr 2025 08:03:50 +0000 (13:33 +0530)]
SYCL: Add fp16 type support to unary op kernels (#12788)
* SYCL: Add fp16 support to some elementwise OP kernels
* remove comment
ggml-ci
* Use static_cast directly
* remove not needed cast from tanh
* Use static cast and remove unneeded castings
* Adjust device_support_op for unary OPs
* Use cast_data and typed_data struct to deduplicate casting code
Daniel Han [Fri, 11 Apr 2025 07:49:09 +0000 (00:49 -0700)]
convert : Llama4 RoPE fix (#12889)
R0CKSTAR [Fri, 11 Apr 2025 07:26:17 +0000 (15:26 +0800)]
ci : Replace freediskspace to free_disk_space in docker.yml (#12861)
Signed-off-by: Xiaodong Ye <redacted>
Daniel Bevenius [Fri, 11 Apr 2025 07:24:34 +0000 (09:24 +0200)]
xcf : add check for visionos build version (#12854)
This commit adds a check for the visionos build version used with vtool
in build-xcframework.sh. The script now checks the Xcode version and
determines whether to use "xros" or "visionos" for the build version.
This commit also uses xcrun for the vtool so that the version of vtool
in xcode command line tools is used instead of the one in the system
path.
Refs: https://github.com/ggml-org/whisper.cpp/pull/2994#issuecomment-2773292223
Xuan-Son Nguyen [Fri, 11 Apr 2025 07:23:37 +0000 (09:23 +0200)]
convert : proper tensor name mapping for llama4 (#12870)
* Llama-4 mapping
* remove hacky renaming
---------
Co-authored-by: Daniel Han <redacted>
Xuan-Son Nguyen [Fri, 11 Apr 2025 06:49:50 +0000 (08:49 +0200)]
llama : correct rms norm for llama 4 (#12882)
Aaron Teo [Fri, 11 Apr 2025 05:20:07 +0000 (13:20 +0800)]
ggml: fix compilation error s390x (#12848)
* ggml: fixes #12846 compilation error
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: add documentation for code change
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: refactor to type-cast and update documentation
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: update documentation to provide full issue link
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
---------
Co-authored-by: Aleksei Nikiforov <redacted>
Georgi Gerganov [Thu, 10 Apr 2025 21:08:23 +0000 (00:08 +0300)]
sync : ggml
Georgi Gerganov [Thu, 10 Apr 2025 21:04:25 +0000 (00:04 +0300)]
tests : fix init order (#0)
ggml-ci
Georgi Gerganov [Thu, 10 Apr 2025 20:59:16 +0000 (23:59 +0300)]
sync : ggml
ggml-ci
cmdr2 [Thu, 10 Apr 2025 12:23:08 +0000 (17:53 +0530)]
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
fix #1186
Diego Devesa [Wed, 9 Apr 2025 10:32:13 +0000 (12:32 +0200)]
ggml : add bilinear upscale support (ggml/1185)
Diego Devesa [Wed, 9 Apr 2025 10:31:34 +0000 (12:31 +0200)]
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
* ggml : add more generic ggml_custom op
* ggml : remove deprecated custom ops
Georgi Gerganov [Thu, 10 Apr 2025 20:59:01 +0000 (23:59 +0300)]
scripts : fix sync-ggml-am.sh
Xuan-Son Nguyen [Thu, 10 Apr 2025 20:57:16 +0000 (22:57 +0200)]
llava : introduce libmtmd (#12849)
* wip llava2
* migrated gemma3 to llava2
* add timings
* correct pre/postfix
* fix missing include
* fix compilation unused var warn
* update llava2_tokenize
* change name llava2 --> mtmd
* improve api
* refine helpers
* Update examples/llava/mtmd.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Xuan-Son Nguyen [Thu, 10 Apr 2025 15:24:44 +0000 (17:24 +0200)]
convert : ability to lazy-load safetensors remotely without downloading to disk (#12820)
* gguf util : add SafetensorRemote
* fix style
* convert: add --remote option
* convert : allow using lazy remote tensors
It's a bit slow for now since everything is blocking and single-threaded.
* correct metadata.name
* small style fix
* support HF_TOKEN
* convert : use writeable buffer for remote lazy tensors
* convert : fix flake8 lint regarding lambda assignment
* multithreaded download
* multithread: print debug
* fix style
* Revert "multithreaded download"
This reverts commit 42fc895ace385edc972ad819c76c704aeea61791.
* bring back _get_request_headers
---------
Co-authored-by: Francis Couture-Harpin <redacted>
Chenguang Li [Thu, 10 Apr 2025 00:51:52 +0000 (08:51 +0800)]
CANN: Support more ops (#12841)
* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D
* [CANN]Support COUNT_EQUAL && STEP && SGN
* [CANN]codestyle adjustment
* [CANN]codestyle adjustment
---------
Signed-off-by: noemotiovon <redacted>
Prajwal B Mehendarkar [Wed, 9 Apr 2025 23:18:01 +0000 (04:48 +0530)]
Fixes #12823 (#12830)
* Including limits file on AIX
* Fixes #12823
Rudi Servo [Wed, 9 Apr 2025 23:17:12 +0000 (23:17 +0000)]
docker : added all CPU to GPU images (#12749)
Piotr Kubaj [Wed, 9 Apr 2025 23:00:34 +0000 (23:00 +0000)]
ggml-cpu-impl.h: do not redefine bool on POWER9 (#12856)
error: unknown type name '_Bool'
Piotr Kubaj [Wed, 9 Apr 2025 23:00:25 +0000 (23:00 +0000)]
ggml-impl.h: fix build on POWER9 (#12855)
error: ISO C++17 does not allow 'register' storage class specifier
Bo Zheng [Wed, 9 Apr 2025 09:47:36 +0000 (17:47 +0800)]
llama : Support Qwen3 and Qwen3MoE (#12828)
* add qwen3 & qwen3moe support.
* fix
---------
Co-authored-by: bozheng-hit <redacted>
R0CKSTAR [Wed, 9 Apr 2025 09:22:30 +0000 (17:22 +0800)]
musa: enable freediskspace for docker image build (#12839)
Signed-off-by: Xiaodong Ye <redacted>
Romain Biessy [Wed, 9 Apr 2025 09:22:04 +0000 (11:22 +0200)]
sycl: update documentation to use -no-cnv (#12845)
Plamen Minev [Wed, 9 Apr 2025 08:11:11 +0000 (11:11 +0300)]
ci: detach common from the library (#12827)
* fix: detach common from the library
* fix: building chat test template
Xuan-Son Nguyen [Wed, 9 Apr 2025 08:09:53 +0000 (10:09 +0200)]
clip : do not print ftype (#12832)
Georgi Gerganov [Wed, 9 Apr 2025 07:54:42 +0000 (10:54 +0300)]
readme : add rpc backend (#12842)
Chenguang Li [Wed, 9 Apr 2025 06:04:14 +0000 (14:04 +0800)]
CANN: Support Opt CONV_TRANSPOSE_1D and ELU (#12786)
* [CANN] Support ELU and CONV_TRANSPOSE_1D
* [CANN]Modification review comments
* [CANN]Modification review comments
* [CANN]name adjustment
* [CANN]remove lambda used in template
* [CANN]Use std::func instead of template
* [CANN]Modify the code according to the review comments
---------
Signed-off-by: noemotiovon <redacted>
Jeff Bolz [Wed, 9 Apr 2025 05:25:08 +0000 (00:25 -0500)]
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (#12833)
q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.
This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.
The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.
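[Editor's note] The change itself is GLSL; below is a rough scalar C++ analogue of the hoisting idea, with an illustrative block layout that is not the real q4_k/q5_k format:

    // Rough analogue (not the actual shader): decode the per-block scale once
    // per outer (block) iteration instead of re-decoding it on every inner
    // iteration; the shader additionally stages the scales in shared memory.
    #include <cstdint>

    constexpr int QK = 32; // values per block (illustrative)

    struct blk { uint8_t packed_scale; int8_t q[QK]; };

    static inline float decode_scale(uint8_t s) { return float(s & 0x3F) * 0.25f; }

    float dot(const blk * b, const float * y, int n_blocks) {
        float acc = 0.0f;
        for (int ib = 0; ib < n_blocks; ++ib) {
            const float d = decode_scale(b[ib].packed_scale); // hoisted
            for (int i = 0; i < QK; ++i) {                    // unrolled in the shader
                acc += d * float(b[ib].q[i]) * y[ib*QK + i];
            }
        }
        return acc;
    }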
Jeff Bolz [Wed, 9 Apr 2025 05:12:57 +0000 (00:12 -0500)]
vulkan: Use fp16 for the flash attention P*V multiplication (#12783)
This is consistent with the ggml-cuda behavior and the mul_mat fallback.
Sigbjørn Skjæret [Tue, 8 Apr 2025 21:21:31 +0000 (23:21 +0200)]
cuda : add f32 to bf16 copy op (#12806)
This allows BF16 KV-cache on CUDA.
Matt Clayton [Tue, 8 Apr 2025 20:01:58 +0000 (16:01 -0400)]
llava: improve clip_ctx destructor to not memleak load_image_size (#12834)
Georgi Gerganov [Tue, 8 Apr 2025 16:54:51 +0000 (19:54 +0300)]
llama : fix FA when KV cache is not used (i.e. embeddings) (#12825)
* ggml : FA supports F32 V
* graph : cast KV to F16 when the KV cache is not used
ggml-ci
* server : add test that exercises embeddings with FA enabled
ggml-ci
Xuan-Son Nguyen [Tue, 8 Apr 2025 16:37:06 +0000 (18:37 +0200)]
server : fix thread.join() on exit (#12831)
dm4 [Tue, 8 Apr 2025 13:49:13 +0000 (21:49 +0800)]
llava: add more helper functions to check projector types in clip context (#12824)
Signed-off-by: dm4 <redacted>
Prajwal B Mehendarkar [Tue, 8 Apr 2025 12:30:59 +0000 (18:00 +0530)]
arg : Including limits file on AIX (#12822)
characharm [Tue, 8 Apr 2025 09:14:59 +0000 (14:14 +0500)]
server : webui : Improve Chat Input with Auto-Sizing Textarea (#12785)
* Update ChatScreen.tsx
* useAutosizeTextarea.ts: add a useAutosizeTextarea hook to encapsulate the logic
* Implement responsive auto-sizing chat textarea
Replaces the manual textarea resizing with an automatic height adjustment based on content.
- `useChatTextarea` hook to manage textarea state and auto-sizing logic via refs, preserving the optimization
- Textarea now grows vertically up to a maximum height (`lg:max-h-48`) on large screens (lg breakpoint and up).
- Disables auto-sizing and enables manual vertical resizing (`resize-vertical`) on smaller screens for better mobile usability.
- Aligns the "Send" button to the bottom of the textarea (`items-end`) for consistent positioning during resize.
* update compressed index.html.gz after npm run build
* refactor: replace OptimizedTextareaValue with AutosizeTextareaApi in VSCode context hook
* chore: normalize line endings to LF
refactor: AutosizeTextareaApi -> chatTextareaApi
* refactor: Rename interface to PascalCase
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Neo Zhang Jianyu [Tue, 8 Apr 2025 07:03:21 +0000 (15:03 +0800)]
Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (#12812)
* Revert "sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_s…"
This reverts commit 518a01480eb3a7c80a4951b430db9dee55428310.
* Update ggml/src/ggml-sycl/ggml-sycl.cpp
* Update ggml/src/ggml-sycl/ggml-sycl.cpp
* rm tail space
compilade [Tue, 8 Apr 2025 07:03:07 +0000 (03:03 -0400)]
gguf-py : support lazy tensor splitting (#12809)
* gguf-py : support lazy tensor splitting
Splitting usually involves returning tuples of tensors,
which need to be handled properly to avoid early eager evaluation.
* gguf-py : fix flake8 lint
Xuan-Son Nguyen [Mon, 7 Apr 2025 21:06:44 +0000 (23:06 +0200)]
llama : Support llama 4 text-only (#12791)
* llama4 conversion
* initial support, no chat template
* clean up a bit
* fix tokenizer conversion
* correct hparams
* try this
* fix shexp
* ffn_inp_normed
* chat template
* clean up model conversion
* add_bos
* add scale_before_ffn
* fix order
* weight_before_ffn
* llm_graph_input_attn_temp
* add chunk attn mask
* build_inp_attn_scale()
* add comment about ggml_repeat
* clarify comments
* fix build
lhez [Mon, 7 Apr 2025 20:22:54 +0000 (13:22 -0700)]
opencl: better identify Adreno GPU (#12760)
stduhpf [Mon, 7 Apr 2025 15:47:08 +0000 (17:47 +0200)]
hellaswag: display estimated score confidence interval (#12797)
Georgi Gerganov [Mon, 7 Apr 2025 10:18:07 +0000 (13:18 +0300)]
cuda : fix HIP and MUSA BF16 (#0)
ggml-ci
Georgi Gerganov [Mon, 7 Apr 2025 09:32:39 +0000 (12:32 +0300)]
sync : ggml
ggml-ci
Georgi Gerganov [Mon, 7 Apr 2025 09:25:15 +0000 (12:25 +0300)]
ggml : simplify Arm fp16 CPU logic (ggml/1177)
* ggml : simplify Arm fp16 CPU logic
ggml-ci
* cont : bring back CUDA/MUSA checks
ggml-ci
Sigbjørn Skjæret [Fri, 4 Apr 2025 19:05:12 +0000 (21:05 +0200)]
CUDA: don't convert BF16 weights to FP32 (ggml/1174)
* add bf16 support
* use convert_from_bf16_cuda instead of convert_unary_cuda for f32
* revert 7ec5085
* move functionality into convert_unary with constexpr
cmdr2 [Wed, 2 Apr 2025 12:16:16 +0000 (17:46 +0530)]
cpu: move all the operators into a separate c++ file (except mul_mat) (ggml/1167)
* cpu: refactor SIMD mappings and vectorized op functions into separate files
* Fix warning for ggml_float to float
* Fix warnings
* cpu: move all the operations (except mul_mat) to a separate c++ file
* fix whitespace
* Update ggml/src/ggml-cpu/vec.h
Co-authored-by: Diego Devesa <redacted>
* Fix PR comments - use GGML_UNUSED, use cassert in ops.cpp
* Reverse the order of import for ops.h and vec.h, to match what was present in ggml-cpu.c previously
---------
Co-authored-by: Diego Devesa <redacted>
zhouwg [Mon, 7 Apr 2025 15:22:57 +0000 (23:22 +0800)]
sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (#12734)
Xuan-Son Nguyen [Mon, 7 Apr 2025 12:37:28 +0000 (14:37 +0200)]
ci : no curl on ggml-ci (#12796)
Xuan-Son Nguyen [Mon, 7 Apr 2025 11:35:19 +0000 (13:35 +0200)]
cmake : enable curl by default (#12761)
* cmake : enable curl by default
* no curl if no examples
* fix build
* fix build-linux-cross
* add windows-setup-curl
* fix
* shell
* fix path
* fix windows-latest-cmake*
* run: include_directories
* LLAMA_RUN_EXTRA_LIBS
* sycl: no llama_curl
* no test-arg-parser on windows
* clarification
* try riscv64 / arm64
* windows: include libcurl inside release binary
* add msg
* fix mac / ios / android build
* will this fix xcode?
* try clearing the cache
* add bunch of licenses
* revert clear cache
* fix xcode
* fix xcode (2)
* fix typo
zhouwg [Mon, 7 Apr 2025 11:34:14 +0000 (19:34 +0800)]
CANN: fix typo in ggml-cann (#12733)
hipudding [Mon, 7 Apr 2025 09:10:36 +0000 (17:10 +0800)]
CANN: Refactor to reduce duplicate code (#12731)
* CANN: Refactor to reduce duplicate code
* CANN: fix review comment
R0CKSTAR [Sun, 6 Apr 2025 13:23:54 +0000 (21:23 +0800)]
musa: fix compilation warnings in mp_22/31 (#12780)
Signed-off-by: Xiaodong Ye <redacted>
Jeff Bolz [Sun, 6 Apr 2025 09:03:47 +0000 (04:03 -0500)]
vulkan: fix NaN issue in flash attention shader (#12776)
Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
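[Editor's note] A small C++ illustration of why -inf is a bad initial value for the running maximum: in a fully masked row every score is -inf, so score - max becomes -inf - (-inf) = NaN and poisons the softmax, while -FLT_MAX/2 keeps the subtraction finite:

    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main() {
        const float score = -INFINITY;  // fully masked attention entry

        float m_bad = -INFINITY;        // old initial value
        m_bad = fmaxf(m_bad, score);    // still -inf
        printf("%f\n", expf(score - m_bad));   // -inf - (-inf) = NaN -> nan

        float m_good = -FLT_MAX/2;      // new initial value
        m_good = fmaxf(m_good, score);  // stays at -FLT_MAX/2
        printf("%f\n", expf(score - m_good));  // exp(-inf) -> 0, as desired
    }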
Jeff Bolz [Sun, 6 Apr 2025 08:47:13 +0000 (03:47 -0500)]
vulkan: Use unclamped loads for flash attention mask (#12720)
nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.
0cc4m [Sat, 5 Apr 2025 16:04:03 +0000 (18:04 +0200)]
Vulkan: Tune Vulkan mmq int dot shader for performance (#12767)
Sergey Fedorov [Sat, 5 Apr 2025 15:46:00 +0000 (23:46 +0800)]
common : fix includes in arg.cpp and gemma3-cli.cpp (#12766)
* arg.cpp: add a missing include
* gemma3-cli.cpp: fix cinttypes include
Xuan-Son Nguyen [Sat, 5 Apr 2025 15:17:40 +0000 (17:17 +0200)]
clip : refactor clip_init, add tests (#12757)
* refactor clip_init
* fix loading file
* fix style
* test ok
* better test with report
* add missing headers
* clarify
* add KEY_MM_PATCH_MERGE_TYPE
* remove bool has_* pattern
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* Update examples/llava/clip.cpp
Co-authored-by: Georgi Gerganov <redacted>
* use ggml_soft_max_ext
* refactor logging system
* add minicpm-v-o 2.6 for testing
* use nullptr everywhere
* fix Yi-VL model
---------
Co-authored-by: Georgi Gerganov <redacted>
エシュナヴァリシア [Sat, 5 Apr 2025 13:31:42 +0000 (21:31 +0800)]
common: custom hf endpoint support (#12769)
* common: custom hf endpoint support
Add support for custom huggingface endpoints via HF_ENDPOINT environment variable
You can now specify a custom huggingface endpoint using the HF_ENDPOINT environment variable when using the --hf-repo flag, which works similarly to huggingface-cli's endpoint configuration.
Example usage:
HF_ENDPOINT=https://hf-mirror.com/ ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"
The trailing slash in the URL is optional:
HF_ENDPOINT=https://hf-mirror.com ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"
* Update common/arg.cpp
readability improvement
Co-authored-by: Xuan-Son Nguyen <redacted>
* Apply suggestions from code review
---------
Co-authored-by: ベアトリーチェ <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Olivier Chafik [Fri, 4 Apr 2025 20:16:39 +0000 (13:16 -0700)]
sync: minja (#12739)
* sync: minja
https://github.com/google/minja/pull/57
* fix json include
Georgi Gerganov [Fri, 4 Apr 2025 18:48:10 +0000 (21:48 +0300)]
kv-cache : simplify + fix warning for recurrent models (#12756)
ggml-ci
bandoti [Fri, 4 Apr 2025 17:05:12 +0000 (14:05 -0300)]
ci: add Linux cross-compile build (#12428)
Nauful Shaikh [Fri, 4 Apr 2025 14:09:52 +0000 (09:09 -0500)]
server : webui : Upgrade daisyui, tailwindcss. (#12735)
* Upgrade daisyui, tailwindcss.
* Switch to all themes.
* Revert a change.
* Update formatting.
* Install packages before npm build.
* Revert "Install packages before npm build."
This reverts commit 336c5147e614e60993162794ba9d9d4629a916f8.
* Add index.html.gz
* run build
---------
Co-authored-by: Xuan Son Nguyen <redacted>
nick huang [Fri, 4 Apr 2025 14:09:12 +0000 (22:09 +0800)]
gguf-split : --merge now respects --dry-run option (#12681)
* gguf-split now respects dry-run option
* removing trailing space
Nicolò Scipione [Fri, 4 Apr 2025 14:00:46 +0000 (16:00 +0200)]
sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (#12625)
Ronny Brendel [Fri, 4 Apr 2025 13:12:40 +0000 (15:12 +0200)]
cmake: fix ggml-shaders-gen compiler paths containing spaces (#12747)
fixes error for compiler paths with spaces
Daniel Bevenius [Fri, 4 Apr 2025 08:24:12 +0000 (10:24 +0200)]
docs : add XCFramework section to README.md [no ci] (#12746)
This commit adds a new section to the README.md file, detailing the
usage of the XCFramework.
The motivation for this is that it might not be immediately clear to
users how to use the XCFramework in their projects and hopefully this
will help.
Jeff Bolz [Fri, 4 Apr 2025 05:54:35 +0000 (00:54 -0500)]
vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (#12630)
There seems to be a bubble when waking up from waitForFences, which costs a few
percent of performance and also increases variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete; we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
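[Editor's note] A hedged host-side sketch of the hybrid wait (fence creation and the ~80% insertion point omitted; not the actual ggml-vulkan code):

    #include <vulkan/vulkan.h>
    #include <immintrin.h>  // _mm_pause (x86; other targets would use their own pause/yield)
    #include <cstdint>

    void wait_hybrid(VkDevice device, VkFence almost_ready, VkFence final_fence) {
        // Sleep-wait until the graph is roughly 80% complete...
        vkWaitForFences(device, 1, &almost_ready, VK_TRUE, UINT64_MAX);
        // ...then busy-wait the short remainder to avoid the wake-up bubble.
        while (vkGetFenceStatus(device, final_fence) == VK_NOT_READY) {
            _mm_pause();
        }
    }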
Jeff Bolz [Fri, 4 Apr 2025 05:53:20 +0000 (00:53 -0500)]
vulkan: set cmake minimum and project name in vulkan-shaders (#12744)
lhez [Fri, 4 Apr 2025 05:18:17 +0000 (22:18 -0700)]
opencl: update doc for OpenCL (#12702)
* opencl: add OpenCL to build.md
* opencl: remove fixed issue/TODO
* opencl: add link to OPENCL.md
* opencl: update doc - refine tools requirement for Windows 11 arm64
Gaurav Garg [Thu, 3 Apr 2025 16:20:29 +0000 (21:50 +0530)]
CUDA: Prefer vector flash decoding kernel for Gemma models (#12738)
* Prefer vector flash decoding kernel for Gemma models
Vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in generation-phase throughput for Gemma models.
* Update ggml/src/ggml-cuda/fattn.cu
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
yumeyao [Thu, 3 Apr 2025 15:32:54 +0000 (23:32 +0800)]
vocab : use string_view::find() to avoid unnecessary looking up beyond the fragment range (#12706)
Jeff Bolz [Thu, 3 Apr 2025 15:08:26 +0000 (10:08 -0500)]
vulkan: Fix missing cmake logic for dot product extension (#12721)
Atharva Dubey [Thu, 3 Apr 2025 12:12:39 +0000 (13:12 +0100)]
ci : add env variable in ggml-ci and document the same in SYCL.md (#12736)
R0CKSTAR [Thu, 3 Apr 2025 11:51:35 +0000 (19:51 +0800)]
sync : minja (inclusionAI/Ling) and update tests (#12699)
Signed-off-by: Xiaodong Ye <redacted>
a3sh [Thu, 3 Apr 2025 07:32:55 +0000 (15:32 +0800)]
fix MUSA compiler warning (#12704)
* fix MUSA compiler warning
* replace (void) with GGML_UNUSED
Chenguang Li [Thu, 3 Apr 2025 07:18:08 +0000 (15:18 +0800)]
CANN: Support operator SIN COS ARGMAX (#12709)
* [CANN]support sin cos argmax
Signed-off-by: noemotiovon <redacted>
* [CANN]codestyle adjustment
Signed-off-by: noemotiovon <redacted>
* [CANN]Remove redundant code
Signed-off-by: noemotiovon <redacted>
---------
Signed-off-by: noemotiovon <redacted>
Co-authored-by: noemotiovon <redacted>
Alan Gray [Thu, 3 Apr 2025 01:31:15 +0000 (02:31 +0100)]
Simplify and improve CUDA graphs through use of indirect copy pointers (#9017)
* CUDA: Simplify and improve CUDA graphs through use of indirect copy pointers
Previously there was complexity in the CUDA graphs implementation due to
frequently changing parameters to copy kernels associated with K and V
cache pointers. This patch simplifies by using indirection to keep those
parameters stable, avoiding the need for frequent graph updates.
Fixes #12152
* Addressed comments
* fix HIP builds
* properly sync to stream
* removed ggml_cuda_cpy_fn_ptrs
* move stream sync before free
* guard to only use indirection with graphs
* style fixes
* check for errors
---------
Co-authored-by: slaren <redacted>
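[Editor's note] A hedged CUDA C++ sketch of the indirection idea (hypothetical names, heavily simplified): the graph-captured kernel reads its destination through a device-resident slot, so a per-token pointer change updates only the slot's contents, not the kernel's launch parameters:

    #include <cuda_runtime.h>

    // The slot lives in device memory; the captured kernel dereferences it, so
    // its parameters never change and the graph needs no update.
    __global__ void copy_through_slot(const float * src, float ** dst_slot, int n) {
        const int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            (*dst_slot)[i] = src[i];
        }
    }

    // Point the slot at a new destination (e.g. the current KV cache position).
    void set_dst(float ** dev_slot, float * new_dst, cudaStream_t stream) {
        cudaMemcpyAsync(dev_slot, &new_dst, sizeof(new_dst),
                        cudaMemcpyHostToDevice, stream);
    }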
hipudding [Thu, 3 Apr 2025 00:49:51 +0000 (08:49 +0800)]
CANN: Fix failed test cases (#12708)
* CANN: Fix memory waste in aclnn_tensor
* CANN: fix backend ops fail
* CANN: fix acl_tensor memory alloc.
* CANN: format
* CANN: remove trailing whitespace
lhez [Thu, 3 Apr 2025 00:01:42 +0000 (17:01 -0700)]
opencl: use `max_alloc_size` in backend ctx instead of querying again (#12705)
Jeff Bolz [Wed, 2 Apr 2025 19:25:08 +0000 (14:25 -0500)]
vulkan: Implement split_k for coopmat2 flash attention. (#12627)
When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
bandoti [Wed, 2 Apr 2025 17:56:26 +0000 (14:56 -0300)]
cmake: remove caching from vulkan coopmat checks (#12719)
Jeff Bolz [Wed, 2 Apr 2025 17:40:32 +0000 (12:40 -0500)]
vulkan: Implement grouped query attention in the coopmat2 FA shader (#12559)
When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:
dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))
previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.
This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
0cc4m [Wed, 2 Apr 2025 17:12:30 +0000 (19:12 +0200)]
Vulkan: Fix mmq int dot float cache size (#12722)
Georgi Gerganov [Wed, 2 Apr 2025 13:38:54 +0000 (16:38 +0300)]
model : print tensor size during load (#12711)
* model : print tensor size during load
* cont : fix units MB -> MiB
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
Diego Devesa [Wed, 2 Apr 2025 12:52:01 +0000 (14:52 +0200)]
llama : add option to override model tensor buffers (#11397)
* llama : add option to override tensor buffers
* ggml : fix possible underflow in ggml_nbytes
Georgi Gerganov [Wed, 2 Apr 2025 11:32:59 +0000 (14:32 +0300)]
llama : refactor kv cache guard (#12695)
* llama : refactor kv cache guard
ggml-ci
* cont : fix comment [no ci]
* llama : fix kv_cache restore logic
ggml-ci
* context : simplify kv cache updates
ggml-ci
* cont : better name [no ci]
* llama : fix llama_decode return code when could not find KV slot
ggml-ci
* context : change log err -> warn [no ci]
* kv-cache : add comment + warning
Sigbjørn Skjæret [Wed, 2 Apr 2025 09:21:48 +0000 (11:21 +0200)]
vocab : BailingMoE : change possessive quantifiers to greedy (#12677)