git.djapps.eu Git - pkg/ggml/sources/ggml/log
Xuan-Son Nguyen [Fri, 25 Apr 2025 12:31:42 +0000 (14:31 +0200)]
clip : fix pixtral on some GPU backends (llama/13097)
* clip : fix pixtral on some GPU backends
* refactor inp_raw set
* rm outdated comment
* fix dynamic size
* add TODO
Neo Zhang Jianyu [Fri, 25 Apr 2025 09:37:51 +0000 (17:37 +0800)]
change the reorder tensor from init to execute OP (llama/13003)
Radoslav Gerganov [Fri, 25 Apr 2025 07:08:08 +0000 (10:08 +0300)]
rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943)
RPC_CMD_SET_TENSOR always returns an empty response, and we send it 4
times per token. We can improve text-generation (TG) speed if we don't
wait for this empty response.
The performance impact of this change depends on the network latency.
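A minimal sketch of the idea, assuming hypothetical send_msg()/recv_empty_reply() helpers over a connected socket (declarations only, not the actual ggml-rpc source): a command whose reply is known to be empty can be sent without a blocking read, saving one network round-trip per call.

    #include <cstdint>
    #include <vector>

    enum rpc_cmd : uint8_t { RPC_CMD_SET_TENSOR = 6 };  // value is illustrative
    bool send_msg(int sockfd, rpc_cmd cmd, const std::vector<uint8_t> & body); // assumed helper
    bool recv_empty_reply(int sockfd);                                         // assumed helper

    bool rpc_set_tensor(int sockfd, const std::vector<uint8_t> & payload) {
        // before: return send_msg(...) && recv_empty_reply(sockfd);
        // after: skip the read entirely; the reply carries no information
        return send_msg(sockfd, RPC_CMD_SET_TENSOR, payload);
    }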
Diego Devesa [Wed, 30 Apr 2025 13:20:40 +0000 (15:20 +0200)]
ggml : fix ggml_gallocr_ptr type (#1205)
Georgi Gerganov [Wed, 30 Apr 2025 05:05:22 +0000 (08:05 +0300)]
media : rm logos (#1203)
* media : add
* media : experiment with cards
* media : add common.sh
* cards : fix frame and shadow
* cards : adjustments
* cards : add qwen3
* media : rm
Georgi Gerganov [Fri, 25 Apr 2025 12:59:04 +0000 (15:59 +0300)]
sync : whisper.cpp
ggml-ci
Georgi Gerganov [Thu, 24 Apr 2025 15:59:06 +0000 (18:59 +0300)]
cuda : fix unused variable compile warning (whisper/0)
ggml-ci
Georgi Gerganov [Thu, 24 Apr 2025 15:41:17 +0000 (18:41 +0300)]
opencl : remove obsolete files (skip) (#1200)
Georgi Gerganov [Thu, 24 Apr 2025 14:48:02 +0000 (17:48 +0300)]
sync : llama.cpp
ggml-ci
Georgi Gerganov [Thu, 24 Apr 2025 14:47:31 +0000 (17:47 +0300)]
metal : add memory pool for temp allocs (llama/12850)
lhez [Thu, 24 Apr 2025 14:46:49 +0000 (17:46 +0300)]
opencl: split ggml-opencl.cl into multiple files and cleanup (llama/12886)
---------
Co-authored-by: Shangqing Gu <redacted>
Georgi Gerganov [Thu, 24 Apr 2025 14:22:27 +0000 (17:22 +0300)]
ggml : fix trailing whitespaces (llama/0)
Johannes Gäßler [Thu, 24 Apr 2025 13:57:10 +0000 (15:57 +0200)]
CUDA: use switch statements in constexpr functions (llama/13095)
Georgi Gerganov [Thu, 24 Apr 2025 07:38:30 +0000 (10:38 +0300)]
metal : fix floating-point range of attention scores in FA kernels (llama/13090)
ggml-ci
Eve [Thu, 24 Apr 2025 07:18:33 +0000 (07:18 +0000)]
vulkan: matmul gcn tuning (llama/13016)
* tune matmul for gcn
* this one is more power efficient
* Update src/ggml-vulkan/ggml-vulkan.cpp
Co-authored-by: 0cc4m <redacted>
* disable this tune for the proprietary driver
---------
Co-authored-by: 0cc4m <redacted>
Johannes Gäßler [Tue, 22 Apr 2025 19:27:40 +0000 (21:27 +0200)]
CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (llama/13014)
* CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID
* fix logic for RoPE support, CUDA graphs
Diego Devesa [Mon, 21 Apr 2025 16:13:51 +0000 (18:13 +0200)]
ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (llama/12871)
* ggml : add SSE 4.2 variant for CPUs without AVX
* ggml : add x64 base ABI variant
Akarshan Biswas [Mon, 21 Apr 2025 13:43:30 +0000 (19:13 +0530)]
SYCL: Add non-contiguous support in ROPE (llama/12993)
ggml-ci
Jeff Bolz [Sun, 20 Apr 2025 08:50:02 +0000 (03:50 -0500)]
vulkan: support noncontiguous rms_norm (llama/13031)
Jeffrey Morgan [Sun, 20 Apr 2025 05:28:40 +0000 (22:28 -0700)]
metal: add neg operator (llama/13029)
Akarshan Biswas [Fri, 18 Apr 2025 13:57:56 +0000 (19:27 +0530)]
SYCL: Refactor and enable FP16 in binary broadcast OPs (llama/12975)
* SYCL: refactor move to a separate file
* Fix binbcast
* Remove duplicates
* fix include formatting
* fix typo
Radoslav Gerganov [Fri, 18 Apr 2025 07:13:42 +0000 (10:13 +0300)]
rpc : add RPC_CMD_HELLO (llama/12955)
Add RPC_CMD_HELLO for getting the version of the protocol implemented by
the server, following the semantic versioning rules at https://semver.org.
Hopefully this brings a better user experience when we make breaking
changes at the protocol level and avoids issues like #12465.
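A rough sketch of the semver compatibility rule this enables, with an assumed message layout (the real wire format is defined by the commit, not reproduced here):

    #include <cstdint>

    struct rpc_hello { uint8_t major, minor, patch; }; // layout is an assumption

    // Per https://semver.org, breaking protocol changes bump the major version,
    // so client and server are compatible iff the major versions match.
    bool rpc_version_compatible(const rpc_hello & server, const rpc_hello & client) {
        return server.major == client.major;
    }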
Georgi Gerganov [Thu, 17 Apr 2025 15:16:36 +0000 (18:16 +0300)]
graph : make FA compatible with MLA + add initial Metal kernels (llama/12953)
* graph : make mla compatible with FA
* metal : add exp FA kernels for DeepSeek models
ggml-ci
* llama : minor naming updates
ggml-ci
* ggml : disable FA for DS head sizes
* tests : add FA tests for MLA shapes
ggml-ci
Alan Gray [Thu, 17 Apr 2025 13:19:42 +0000 (14:19 +0100)]
ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (llama/12970)
hipudding [Thu, 17 Apr 2025 12:34:16 +0000 (20:34 +0800)]
CANN: Add support for async operator submission (llama/12864)
Submit operators using asynchronous threads to improve performance.
Use the environment variable GGML_CANN_ASYNC_MODE to control whether
asynchronous submission is enabled. It is disabled by default.
Testing shows a 10%–20% performance improvement in scenarios with
small parameter sizes, especially in quantized models.
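As an illustration only (real CANN submission goes through ACL streams, not std::thread), the environment-variable gate could look like:

    #include <cstdlib>
    #include <functional>
    #include <thread>

    void submit_op(const std::function<void()> & op) {
        // GGML_CANN_ASYNC_MODE is the documented switch; disabled by default
        static const bool async = std::getenv("GGML_CANN_ASYNC_MODE") != nullptr;
        if (async) {
            std::thread(op).detach(); // async path: return to the caller immediately
        } else {
            op();                     // default path: submit synchronously
        }
    }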
kimminsu [Wed, 16 Apr 2025 21:25:57 +0000 (06:25 +0900)]
opencl: fix incorrect local_size index in profiling log (llama/12868)
Jeff Bolz [Wed, 16 Apr 2025 18:37:25 +0000 (13:37 -0500)]
vulkan: enable coopmat2 FA gqa and split_k optimizations more often (llama/12931)
The grouped query attention optimization doesn't require a power-of-two ratio;
the only thing relying on it was the modulo operation written as a bitwise &.
split_k need not depend on gqa_ratio - enable it any time there's only one
workgroup in the X dimension. The shader gets the split index from the x coord,
and multiple workgroups in the X dimension (pre-split) indicates a larger
FA operation that wouldn't need splitting.
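The power-of-two point in a nutshell (a generic sketch, not the shader code): `i & (r - 1)` equals `i % r` only when r is a power of two, so writing the plain modulo lifts the restriction.

    #include <cstdint>

    uint32_t gqa_head(uint32_t q_head, uint32_t gqa_ratio) {
        // return q_head & (gqa_ratio - 1); // old: correct only for 2^n ratios
        return q_head % gqa_ratio;          // new: correct for any ratio
    }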
Chenguang Li [Wed, 16 Apr 2025 08:21:05 +0000 (16:21 +0800)]
CANN: Add 310P operator support check (llama/12962)
Georgi Gerganov [Tue, 15 Apr 2025 11:45:05 +0000 (14:45 +0300)]
metal : add FA-vec kernels for head size 96 (llama/12952)
ggml-ci
hipudding [Tue, 15 Apr 2025 11:08:55 +0000 (19:08 +0800)]
CANN: Add x86 build ci (llama/12950)
* CANN: Add x86 build ci
* CANN: fix code format
David Huang [Tue, 15 Apr 2025 09:20:38 +0000 (17:20 +0800)]
CUDA/HIP: Share the same unified memory allocation logic. (llama/12934)
Replace compile-time `GGML_HIP_UMA` with environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY`. This unifies the usage on NVIDIA and AMD GPUs, and allows a single binary to be shared between integrated and dedicated GPUs.
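A minimal sketch of the runtime switch (cudaMalloc and cudaMallocManaged are real CUDA APIs; where ggml-cuda consults the variable is simplified here):

    #include <cstdlib>
    #include <cuda_runtime.h>

    cudaError_t alloc_device_buffer(void ** ptr, size_t size) {
        if (std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY") != nullptr) {
            return cudaMallocManaged(ptr, size); // unified memory, shared with the host
        }
        return cudaMalloc(ptr, size);            // default: dedicated device memory
    }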
Akarshan Biswas [Tue, 15 Apr 2025 08:37:42 +0000 (14:07 +0530)]
SYCL: Add ROPE vision kernel (llama/12887)
* SYCL: Add ROPE vision kernel
* Add comment about rope mode
Srihari-mcw [Tue, 15 Apr 2025 06:22:36 +0000 (11:52 +0530)]
ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (llama/12829)
* Add AVX512 implementation of GEMM - q4kx8
* Update changes to remove unnecessary whitespaces
Chenguang Li [Tue, 15 Apr 2025 02:09:35 +0000 (10:09 +0800)]
CANN: Opt ROPE optimization (llama/12865)
* [CANN]Opt ROPE optimization
* [CANN]Codestyle adjustment
* [CANN]Fix the ROPE precision issue
* [CANN]codestyle fix
* [CANN]add rope unsupported case
Signed-off-by: noemotiovon <redacted>
Xinpeng Dou [Tue, 15 Apr 2025 02:04:24 +0000 (10:04 +0800)]
CANN: Optimize CANN buffer pool memory management (llama/12875)
Multiple optional memory pools are provided for CANN, including VMM,
priority queue-based, and traditional memory pools.
1. When the memory pool is available and GGML_CANN_DISABLE_VMM_POOL
   is not defined, the VMM pool is selected by default.
2. Otherwise, if GGML_CANN_ENABLE_BUF_PRIO_POOL is defined,
   the priority queue-based memory pool is used.
3. If neither condition is met, the default memory pool is used.
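The selection order above, as a compile-time sketch (pool types and the probe are hypothetical; the macros are the ones named in the message):

    #include <memory>

    struct cann_pool { virtual ~cann_pool() = default; };
    struct vmm_pool    : cann_pool {};
    struct prio_pool   : cann_pool {};
    struct legacy_pool : cann_pool {};
    bool vmm_available() { return true; } // assumed probe, stubbed for the sketch

    std::unique_ptr<cann_pool> make_pool() {
    #ifndef GGML_CANN_DISABLE_VMM_POOL
        if (vmm_available()) return std::make_unique<vmm_pool>(); // 1. VMM pool
    #endif
    #ifdef GGML_CANN_ENABLE_BUF_PRIO_POOL
        return std::make_unique<prio_pool>();                     // 2. priority queue pool
    #endif
        return std::make_unique<legacy_pool>();                   // 3. traditional pool
    }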
Akarshan Biswas [Mon, 14 Apr 2025 12:23:53 +0000 (17:53 +0530)]
SYCL: Fix im2col (llama/12910)
* SYCL: Fix im2col
* restore local workgroup size adjustments for large inputs
* restore format
Radoslav Gerganov [Mon, 14 Apr 2025 10:59:34 +0000 (13:59 +0300)]
rpc : use ggml_context_ptr (llama/12938)
Georgi Gerganov [Thu, 24 Apr 2025 14:40:08 +0000 (17:40 +0300)]
scripts : update sync-llama-am.sh
Leonard Mosescu [Sat, 19 Apr 2025 05:36:38 +0000 (22:36 -0700)]
tests : Fix a few small Windows / MSVC build issues (#1193)
* Fix a few small Windows / MSVC build issues
* Fix MSVC build and keep C++17 conformance
* Update tests/test-pad-reflect-1d.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Incorporate review feedback
---------
Co-authored-by: Georgi Gerganov <redacted>
Acly [Thu, 17 Apr 2025 12:16:45 +0000 (14:16 +0200)]
ggml : Depthwise 2D convolution (#1152)
* ggml-cpu : kernels for faster depthwise 2D convolution
* fix compile: remove static after moving to ops.cpp
* add dilation for depthwise_conv_2d
* review: rename to ggml_conv_2d_dw_direct, remove redundant struct keywords, pass by ref, whitespace
* review: rename depthwise_conv_2d -> conv_2d_dw everywhere
Georgi Gerganov [Mon, 14 Apr 2025 06:26:45 +0000 (09:26 +0300)]
sync : llama.cpp
ggml-ci
SXX [Mon, 14 Apr 2025 05:47:55 +0000 (13:47 +0800)]
ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register (llama/12773)
* ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register
* simplifies the codebase by removing redundant functions
Alan Gray [Sun, 13 Apr 2025 21:12:21 +0000 (22:12 +0100)]
ggml: disable CUDA graphs for unsupported DUP and CONT node types (llama/12891)
Fixes #12798
Jeff Bolz [Sat, 12 Apr 2025 08:44:48 +0000 (03:44 -0500)]
vulkan: use aligned loads for flash attention mask (llama/12853)
Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.
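The underlying round-up idiom, as a generic sketch (the actual change lives in the Vulkan FA shader):

    #include <cstdint>

    // valid when alignment is a power of two
    constexpr uint32_t align_up(uint32_t stride, uint32_t alignment) {
        return (stride + alignment - 1) & ~(alignment - 1);
    }
    static_assert(align_up(13, 8) == 16, "13 rounds up to 16 at 8-element alignment");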
Ewan Crawford [Fri, 11 Apr 2025 13:32:14 +0000 (15:32 +0200)]
sycl: Support sycl_ext_oneapi_limited_graph (llama/12873)
The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support graph update.
Akarshan Biswas [Fri, 11 Apr 2025 08:03:50 +0000 (13:33 +0530)]
SYCL: Add fp16 type support to unary op kernels (llama/12788)
* SYCL: Add fp16 support to some elementwise OP kernels
* remove comment
ggml-ci
* Use static_cast directly
* remove not needed cast from tanh
* Use static cast and remove unneeded castings
* Adjust device_support_op for unary OPs
* Use cast_data and typed_data struct to deduplicate casting code
Aaron Teo [Fri, 11 Apr 2025 05:20:07 +0000 (13:20 +0800)]
ggml: fix compilation error s390x (llama/12848)
* ggml: fixes #12846 compilation error
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: add documentation for code change
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: refactor to type-cast and update documentation
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
* ggml: update documentation to provide full issue link
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Aleksei Nikiforov <redacted>
---------
Co-authored-by: Aleksei Nikiforov <redacted>
Georgi Gerganov [Thu, 10 Apr 2025 21:04:25 +0000 (00:04 +0300)]
tests : fix init order (llama/0)
ggml-ci
cmdr2 [Fri, 11 Apr 2025 06:44:19 +0000 (12:14 +0530)]
cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal error when running test-backend-ops with only the CPU backend (#1190)
Georgi Gerganov [Thu, 10 Apr 2025 21:02:24 +0000 (00:02 +0300)]
sync : fix (skip) (#0)
ggml-ci
Georgi Gerganov [Thu, 10 Apr 2025 20:46:46 +0000 (23:46 +0300)]
sync : llama.cpp
ggml-ci
Chenguang Li [Thu, 10 Apr 2025 00:51:52 +0000 (08:51 +0800)]
CANN: Support more ops (llama/12841)
* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D
* [CANN]Support COUNT_EQUAL && STEP && SGN
* [CANN]codestyle adjustment
* [CANN]codestyle adjustment
---------
Signed-off-by: noemotiovon <redacted>
Prajwal B Mehendarkar [Wed, 9 Apr 2025 23:18:01 +0000 (04:48 +0530)]
Fixes #12823 (llama/12830)
* Including limits file on AIX
* Fixes #12823
Piotr Kubaj [Wed, 9 Apr 2025 23:00:34 +0000 (23:00 +0000)]
ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)
error: unknown type name '_Bool'
Piotr Kubaj [Wed, 9 Apr 2025 23:00:25 +0000 (23:00 +0000)]
ggml-impl.h: fix build on POWER9 (llama/12855)
error: ISO C++17 does not allow 'register' storage class specifier
Chenguang Li [Wed, 9 Apr 2025 06:04:14 +0000 (14:04 +0800)]
CANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)
* [CANN] Support ELU and CONV_TRANSPOSE_1D
* [CANN]Modification review comments
* [CANN]Modification review comments
* [CANN]name adjustment
* [CANN]remove lambda used in template
* [CANN]Use std::function instead of template
* [CANN]Modify the code according to the review comments
---------
Signed-off-by: noemotiovon <redacted>
Jeff Bolz [Wed, 9 Apr 2025 05:25:08 +0000 (00:25 -0500)]
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)
q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.
This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.
The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.
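The staging pattern in generic form (a CUDA-flavored sketch of "load once into shared memory, reuse across the inner loop", not the GLSL change itself):

    #include <cstdint>
    #include <cuda_runtime.h>

    __global__ void mmq_sketch(const uint8_t * scales_global, int nblocks) {
        __shared__ uint8_t scales[16];
        for (int b = 0; b < nblocks; ++b) {
            // load the 16B of scale data once per outer block, instead of
            // re-reading it from global memory on every inner-loop iteration
            if (threadIdx.x < 16) {
                scales[threadIdx.x] = scales_global[b * 16 + threadIdx.x];
            }
            __syncthreads();
            // ... unrolled inner loop decodes from `scales` (shared), not global ...
            __syncthreads();
        }
    }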
Jeff Bolz [Wed, 9 Apr 2025 05:12:57 +0000 (00:12 -0500)]
vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)
This is consistent with the ggml-cuda behavior and the mul_mat fallback.
Sigbjørn Skjæret [Tue, 8 Apr 2025 21:21:31 +0000 (23:21 +0200)]
cuda : add f32 to bf16 copy op (llama/12806)
This allows BF16 KV-cache on CUDA.
Georgi Gerganov [Tue, 8 Apr 2025 16:54:51 +0000 (19:54 +0300)]
llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)
* ggml : FA supports F32 V
* graph : cast KV to F16 when the KV cache is not used
ggml-ci
* server : add test that exercises embeddings with FA enabled
ggml-ci
cmdr2 [Thu, 10 Apr 2025 12:23:08 +0000 (17:53 +0530)]
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (#1187)
fix #1186
Diego Devesa [Wed, 9 Apr 2025 10:32:13 +0000 (12:32 +0200)]
ggml : add bilinear upscale support (#1185)
Diego Devesa [Wed, 9 Apr 2025 10:31:34 +0000 (12:31 +0200)]
ggml : add more generic custom op, remove deprecated custom ops (#1183)
* ggml : add more generic ggml_custom op
* ggml : remove deprecated custom ops
Georgi Gerganov [Tue, 8 Apr 2025 08:23:57 +0000 (11:23 +0300)]
sync : llama.cpp
ggml-ci
Neo Zhang Jianyu [Tue, 8 Apr 2025 07:03:21 +0000 (15:03 +0800)]
Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (llama/12812)
* Revert "sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_s…"
This reverts commit 518a01480eb3a7c80a4951b430db9dee55428310.
* Update ggml/src/ggml-sycl/ggml-sycl.cpp
* Update ggml/src/ggml-sycl/ggml-sycl.cpp
* rm tail space
lhez [Mon, 7 Apr 2025 20:22:54 +0000 (13:22 -0700)]
opencl: better identify Adreno GPU (llama/12760)
Georgi Gerganov [Mon, 7 Apr 2025 10:18:07 +0000 (13:18 +0300)]
cuda : fix HIP and MUSA BF16 (llama/0)
ggml-ci
zhouwg [Mon, 7 Apr 2025 15:22:57 +0000 (23:22 +0800)]
sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama/12734)
zhouwg [Mon, 7 Apr 2025 11:34:14 +0000 (19:34 +0800)]
CANN: fix typo in ggml-cann (llama/12733)
hipudding [Mon, 7 Apr 2025 09:10:36 +0000 (17:10 +0800)]
CANN: Refactor to reduce duplicate code (llama/12731)
* CANN: Refactor to reduce duplicate code
* CANN: fix review comment
R0CKSTAR [Sun, 6 Apr 2025 13:23:54 +0000 (21:23 +0800)]
musa: fix compilation warnings in mp_22/31 (llama/12780)
Signed-off-by: Xiaodong Ye <redacted>
Jeff Bolz [Sun, 6 Apr 2025 09:03:47 +0000 (04:03 -0500)]
vulkan: fix NaN issue in flash attention shader (llama/12776)
Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
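Why a finite sentinel matters, in a scalar sketch: if every score in a row is masked to -inf, a -inf initial maximum makes the later exp(x - max) evaluate exp(-inf - (-inf)) = NaN, while -FLT_MAX/2 stays finite and keeps exp() well-defined.

    #include <algorithm>
    #include <cfloat>

    float row_max(const float * x, int n) {
        float m = -FLT_MAX / 2; // finite, yet below any realistic score
        for (int i = 0; i < n; ++i) {
            m = std::max(m, x[i]);
        }
        return m; // never -inf, so exp(x[i] - m) cannot become NaN via inf - inf
    }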
Jeff Bolz [Sun, 6 Apr 2025 08:47:13 +0000 (03:47 -0500)]
vulkan: Use unclamped loads for flash attention mask (llama/12720)
nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.
0cc4m [Sat, 5 Apr 2025 16:04:03 +0000 (18:04 +0200)]
Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767)
Nicolò Scipione [Fri, 4 Apr 2025 14:00:46 +0000 (16:00 +0200)]
sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (llama/12625)
Ronny Brendel [Fri, 4 Apr 2025 13:12:40 +0000 (15:12 +0200)]
cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)
fixes error for compiler paths with spaces
Jeff Bolz [Fri, 4 Apr 2025 05:54:35 +0000 (00:54 -0500)]
vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)
There seems to be a bubble waking up from waitForFences, which costs a few
percent performance and also increased variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
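A condensed sketch of the hybrid wait (vkWaitForFences and vkGetFenceStatus are real Vulkan calls; placing the almost_ready fence at ~80% of the graph is the commit's heuristic):

    #include <cstdint>
    #include <immintrin.h>      // _mm_pause
    #include <vulkan/vulkan.h>

    void hybrid_wait(VkDevice dev, VkFence almost_ready, VkFence final_fence) {
        // sleep-wait until most of the graph has executed ...
        vkWaitForFences(dev, 1, &almost_ready, VK_TRUE, UINT64_MAX);
        // ... then spin the short remainder to avoid the wake-up bubble
        while (vkGetFenceStatus(dev, final_fence) == VK_NOT_READY) {
            _mm_pause();
        }
    }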
Jeff Bolz [Fri, 4 Apr 2025 05:53:20 +0000 (00:53 -0500)]
vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744)
Gaurav Garg [Thu, 3 Apr 2025 16:20:29 +0000 (21:50 +0530)]
CUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)
* Prefer vector flash decoding kernel for Gemma models
Vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in gen-phase throughput for Gemma models.
* Update ggml/src/ggml-cuda/fattn.cu
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Jeff Bolz [Thu, 3 Apr 2025 15:08:26 +0000 (10:08 -0500)]
vulkan: Fix missing cmake logic for dot product extension (llama/12721)
a3sh [Thu, 3 Apr 2025 07:32:55 +0000 (15:32 +0800)]
fix MUSA compiler warning (llama/12704)
* fix MUSA compiler warning
* replace (void) with GGML_UNUSED
Chenguang Li [Thu, 3 Apr 2025 07:18:08 +0000 (15:18 +0800)]
CANN: Support operator SIN COS ARGMAX (llama/12709)
* [CANN]support sin cos argmax
Signed-off-by: noemotiovon <redacted>
* [CANN]codestyle adjustment
Signed-off-by: noemotiovon <redacted>
* [CANN]Remove redundant code
Signed-off-by: noemotiovon <redacted>
---------
Signed-off-by: noemotiovon <redacted>
Co-authored-by: noemotiovon <redacted>
Alan Gray [Thu, 3 Apr 2025 01:31:15 +0000 (02:31 +0100)]
Simplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)
* CUDA: Simplify and improve CUDA graphs through use of indirect copy pointers
Previously there was complexity in the CUDA graphs implementation due
frequently changing parameters to copy kernels associated with K and V
cache pointers. This patch simplifies by using indirection to avoid
such parameters frequently changing, avoiding the need for frequent
graph updates.
Fixes #12152
* Addressed comments
* fix HIP builds
* properly sync to stream
* removed ggml_cuda_cpy_fn_ptrs
* move stream sync before free
* guard to only use indirection with graphs
* style fixes
* check for errors
---------
Co-authored-by: slaren <redacted>
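The indirection in miniature (a CUDA sketch of the concept, not the patch itself): the kernel reads its destination from a fixed device symbol, so the captured graph's launch parameters never change when the KV pointer does.

    #include <cuda_runtime.h>

    __device__ char * kv_dst_ptr; // fixed address baked into the captured graph

    __global__ void copy_kernel(const char * src, size_t n) {
        char * dst = kv_dst_ptr; // indirection: fetch the current target
        size_t i = blockIdx.x * (size_t) blockDim.x + threadIdx.x;
        if (i < n) {
            dst[i] = src[i];
        }
    }

    void set_kv_dst(char * new_dst) {
        // update the pointee between launches; no graph update needed
        cudaMemcpyToSymbol(kv_dst_ptr, &new_dst, sizeof(new_dst));
    }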
hipudding [Thu, 3 Apr 2025 00:49:51 +0000 (08:49 +0800)]
CANN: Fix failed test cases (llama/12708)
* CANN: Fix memory waste in aclnn_tensor
* CANN: fix backend ops fail
* CANN: fix acl_tensor memory alloc.
* CANN: format
* CANN: remove trailing whitespace
lhez [Thu, 3 Apr 2025 00:01:42 +0000 (17:01 -0700)]
opencl: use `max_alloc_size` in backend ctx instead of querying again (llama/12705)
Jeff Bolz [Wed, 2 Apr 2025 19:25:08 +0000 (14:25 -0500)]
vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)
When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
bandoti [Wed, 2 Apr 2025 17:56:26 +0000 (14:56 -0300)]
cmake: remove caching from vulkan coopmat checks (llama/12719)
Jeff Bolz [Wed, 2 Apr 2025 17:40:32 +0000 (12:40 -0500)]
vulkan: Implement grouped query attention in the coopmat2 FA shader (llama/12559)
When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:
dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))
previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.
This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
0cc4m [Wed, 2 Apr 2025 17:12:30 +0000 (19:12 +0200)]
Vulkan: Fix mmq int dot float cache size (llama/12722)
Diego Devesa [Wed, 2 Apr 2025 12:52:01 +0000 (14:52 +0200)]
llama : add option to override model tensor buffers (llama/11397)
* llama : add option to override tensor buffers
* ggml : fix possible underflow in ggml_nbytes
Georgi Gerganov [Mon, 7 Apr 2025 09:25:15 +0000 (12:25 +0300)]
ggml : simplify Arm fp16 CPU logic (#1177)
* ggml : simplify Arm fp16 CPU logic
ggml-ci
* cont : bring back CUDA/MUSA checks
ggml-ci
Sigbjørn Skjæret [Fri, 4 Apr 2025 19:05:12 +0000 (21:05 +0200)]
CUDA: don't convert BF16 weights to FP32 (#1174)
* add bf16 support
* use convert_from_bf16_cuda instead of convert_unary_cuda for f32
* revert 7ec5085
* move functionality into convert_unary with constexpr
Georgi Gerganov [Thu, 3 Apr 2025 07:31:13 +0000 (10:31 +0300)]
sync : whisper.cpp
ggml-ci
cmdr2 [Wed, 2 Apr 2025 12:16:16 +0000 (17:46 +0530)]
cpu: move all the operators into a separate c++ file (except mul_mat) (#1167)
* cpu: refactor SIMD mappings and vectorized op functions into separate files
* Fix warning for ggml_float to float
* Fix warnings
* cpu: move all the operations (except mul_mat) to a separate c++ file
* fix whitespace
* Update src/ggml-cpu/vec.h
Co-authored-by: Diego Devesa <redacted>
* Fix PR comments - use GGML_UNUSED, use cassert in ops.cpp
* Reverse the order of import for ops.h and vec.h, to match what was present in ggml-cpu.c previously
---------
Co-authored-by: Diego Devesa <redacted>
Georgi Gerganov [Wed, 2 Apr 2025 12:05:04 +0000 (15:05 +0300)]
sync : llama.cpp
ggml-ci
Chenguang Li [Wed, 2 Apr 2025 07:22:13 +0000 (15:22 +0800)]
get_rows and dup optimization (llama/12671)
* [CANN]get_rows and dup optimization.
Co-authored-by: hipudding <redacted>
Signed-off-by: noemotiovon <redacted>
* [CANN]GET_ROWS and CPY/DUP optimization
Co-authored-by: hipudding <redacted>
Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <redacted>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <redacted>
---------
Signed-off-by: noemotiovon <redacted>
Co-authored-by: noemotiovon <redacted>
Co-authored-by: hipudding <redacted>
Junil Kim [Tue, 1 Apr 2025 16:54:34 +0000 (01:54 +0900)]
opencl : fix memory allocation size (llama/12649)
issue:
https://github.com/CodeLinaro/llama.cpp/pull/17#issuecomment-2760611283
This patch fixes the memory allocation size so that it does not
exceed the maximum allocation size of the OpenCL device.
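A sketch of the cap (clGetDeviceInfo and CL_DEVICE_MAX_MEM_ALLOC_SIZE are real OpenCL APIs; where the clamp is applied in ggml-opencl is simplified here):

    #include <CL/cl.h>
    #include <algorithm>

    size_t clamp_alloc_size(cl_device_id dev, size_t requested) {
        cl_ulong max_alloc = 0;
        clGetDeviceInfo(dev, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                        sizeof(max_alloc), &max_alloc, nullptr);
        return std::min(requested, (size_t) max_alloc);
    }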
Georgi Gerganov [Tue, 1 Apr 2025 11:57:19 +0000 (14:57 +0300)]
metal : use F32 prec in FA kernels (llama/12688)
* metal : use F32 prec in FA kernels
ggml-ci
* cont : fix FA vec kernel
ggml-ci
R0CKSTAR [Tue, 1 Apr 2025 11:12:53 +0000 (19:12 +0800)]
Fix clang warning in gguf_check_reserved_keys (llama/12686)
* Fix clang warning in gguf_check_reserved_keys
Signed-off-by: Xiaodong Ye <redacted>
* Fix typo
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Wagner Bruna [Tue, 1 Apr 2025 09:38:07 +0000 (06:38 -0300)]
vulkan: fix build when glslc doesn't support coopmat (llama/12683)