git.djapps.eu Git - pkg/ggml/sources/ggml/log
pkg/ggml/sources/ggml
6 months ago vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (llama...
Jeff Bolz [Thu, 5 Dec 2024 19:15:05 +0000 (13:15 -0600)]
vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (llama/10206)

6 months ago sync : whisper.cpp
Georgi Gerganov [Sun, 8 Dec 2024 20:56:43 +0000 (22:56 +0200)]
sync : whisper.cpp

ggml-ci

6 months ago scripts : update whisper sync
Georgi Gerganov [Sun, 8 Dec 2024 20:55:16 +0000 (22:55 +0200)]
scripts : update whisper sync

6 months ago sync : llama.cpp
Georgi Gerganov [Thu, 5 Dec 2024 11:28:22 +0000 (13:28 +0200)]
sync : llama.cpp

ggml-ci

6 months ago ggml : add predefined list of CPU backend variants to build (llama/10626)
Diego Devesa [Wed, 4 Dec 2024 13:45:40 +0000 (14:45 +0100)]
ggml : add predefined list of CPU backend variants to build (llama/10626)

* ggml : add predefined list of CPU backend variants to build

* update CPU dockerfiles

6 months ago ggml-cpu : fix HWCAP2_I8MM value (llama/10646)
Diego Devesa [Wed, 4 Dec 2024 13:40:44 +0000 (14:40 +0100)]
ggml-cpu : fix HWCAP2_I8MM value (llama/10646)

6 months ago vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (llama/10642)
Jeff Bolz [Wed, 4 Dec 2024 07:28:59 +0000 (01:28 -0600)]
vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (llama/10642)

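The trick replaces integer division by a fixed divisor with a multiply and a shift. A minimal host-side sketch of the idea (the FastDiv helper and the 768 divisor are illustrative, not the shader code):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// "Fast divide": precompute m = ceil(2^32 / d) once, then (n * m) >> 32
// replaces n / d on the hot path. The magic is exact only over a bounded
// range of n, so the loop below sanity-checks the range we care about.
struct FastDiv {
    uint32_t d;
    uint64_t m; // magic multiplier
    explicit FastDiv(uint32_t d_) : d(d_), m((((uint64_t) 1 << 32) + d_ - 1) / d_) {}
    uint32_t div(uint32_t n) const { return (uint32_t) (((uint64_t) n * m) >> 32); }
};

int main() {
    FastDiv fd(768); // e.g. a row stride; assumed value for illustration
    for (uint32_t n = 0; n < (1u << 20); ++n) {
        assert(fd.div(n) == n / fd.d);
    }
    printf("(n * m) >> 32 matches n / %u over the tested range\n", fd.d);
}
```
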
6 months ago SYCL : Move to compile time oneMKL interface backend selection for NVIDIA backend...
Nicolò Scipione [Wed, 4 Dec 2024 01:29:20 +0000 (02:29 +0100)]
SYCL : Move to compile time oneMKL interface backend selection for NVIDIA backend (llama/10584)

* [SYCL] Move to Compile Time backend selection on oneMKL Interface for NVIDIA backend

Move to compile-time backend selection to avoid latency at run time.
Apply it to all oneMKL gemm calls, and only for the NVIDIA backend.

Signed-off-by: nscipione <redacted>
* Formatting

* Address PR comments to increase readability

---------

Signed-off-by: nscipione <redacted>
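
The gain comes from resolving the backend at build time instead of branching on it per call. A tiny sketch of the pattern, using GGML_SYCL_NVIDIA as an assumed stand-in flag (the real option names live in the SYCL backend's CMake):

```cpp
#include <cstdio>

// Compile-time backend selection: the #if is resolved by the preprocessor,
// so the per-call run-time branch the commit describes disappears entirely.
#if defined(GGML_SYCL_NVIDIA) // assumed build flag, for illustration only
static const char * gemm_backend() { return "oneMKL interface -> NVIDIA path"; }
#else
static const char * gemm_backend() { return "generic oneMKL path"; }
#endif

int main() { printf("gemm backend: %s\n", gemm_backend()); }
```
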
6 months ago Avoid using __fp16 on ARM with old nvcc (llama/10616)
Frankie Robertson [Wed, 4 Dec 2024 00:41:37 +0000 (02:41 +0200)]
Avoid using __fp16 on ARM with old nvcc (llama/10616)

6 months ago vulkan: optimize and reenable split_k (llama/10637)
Jeff Bolz [Tue, 3 Dec 2024 19:29:54 +0000 (13:29 -0600)]
vulkan: optimize and reenable split_k (llama/10637)

Use vector loads when possible in mul_mat_split_k_reduce. Use split_k
when there aren't enough workgroups to fill the shaders.

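For context, split_k partitions the K (reduction) dimension across workgroups, each writing a partial result that a reduce pass then sums; it pays off when the output is too small to fill the GPU otherwise. A scalar CPU sketch of the idea (names hypothetical):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Each "workgroup" handles one chunk of K and writes a partial dot product;
// a final pass (the split_k_reduce step) sums the partials.
float dot_split_k(const std::vector<float> & a, const std::vector<float> & b, int split_k) {
    const int K = (int) a.size();
    const int chunk = (K + split_k - 1) / split_k;
    std::vector<float> partial(split_k, 0.0f);
    for (int s = 0; s < split_k; ++s) {
        for (int k = s * chunk; k < std::min(K, (s + 1) * chunk); ++k) {
            partial[s] += a[k] * b[k];
        }
    }
    float acc = 0.0f;
    for (float p : partial) acc += p; // the reduce pass
    return acc;
}

int main() {
    std::vector<float> a(512, 1.0f), b(512, 2.0f);
    printf("%.1f\n", dot_split_k(a, b, 4)); // 1024.0
}
```
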
6 months ago ggml: add `GGML_SET` Metal kernel + i32 CPU kernel (#1037)
PAB [Wed, 4 Dec 2024 08:19:30 +0000 (09:19 +0100)]
ggml: add `GGML_SET` Metal kernel + i32 CPU kernel (#1037)

* implemented cpu kernel

* add i32 test cases in test-backend-ops

* typedef `ggml_metal_kargs_set`

* implemented `kernel_set`

* memcpy

6 months ago ggml : add `GGML_PAD_REFLECT_1D` operation (#1034)
PAB [Tue, 3 Dec 2024 19:20:04 +0000 (20:20 +0100)]
ggml : add `GGML_PAD_REFLECT_1D` operation (#1034)

* ggml_pad_reflect_1d defined in header

* implemented on CPU

* called the forward pass

* impl Metal kernel

* added Metal kernel

* added OP_PAD_REFLECT_1D in test-backend-ops.cpp

* add test-pad-reflect-1d test case

* test cases support multiple backends

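As a reference for what the new op computes, a CPU sketch of 1D reflect padding; the boundary convention (mirroring without repeating the edge sample, valid while the pads are smaller than the row length) is an assumption for illustration:

```cpp
#include <cstdio>
#include <vector>

// pad_reflect_1d([1,2,3,4], p0=2, p1=1) -> [3,2,1,2,3,4,3]
std::vector<float> pad_reflect_1d(const std::vector<float> & x, int p0, int p1) {
    const int n = (int) x.size();
    std::vector<float> out;
    for (int i = -p0; i < n + p1; ++i) {
        int j = i;
        if (j < 0)  j = -j;              // reflect at the left edge
        if (j >= n) j = 2 * (n - 1) - j; // reflect at the right edge
        out.push_back(x[j]);
    }
    return out;
}

int main() {
    for (float v : pad_reflect_1d({1, 2, 3, 4}, 2, 1)) printf("%.0f ", v);
    printf("\n");
}
```
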
6 months ago ci : update requirements.txt
Georgi Gerganov [Tue, 3 Dec 2024 18:59:19 +0000 (20:59 +0200)]
ci : update requirements.txt

ggml-ci

6 months ago ci : remove opencl workflow
Georgi Gerganov [Tue, 3 Dec 2024 18:31:24 +0000 (20:31 +0200)]
ci : remove opencl workflow

6 months ago ci : fix pip env
Georgi Gerganov [Tue, 3 Dec 2024 18:29:58 +0000 (20:29 +0200)]
ci : fix pip env

ggml-ci

6 months ago files : remove make artifacts
Georgi Gerganov [Tue, 3 Dec 2024 18:29:32 +0000 (20:29 +0200)]
files : remove make artifacts

6 months ago common : fix compile warning
Georgi Gerganov [Tue, 3 Dec 2024 18:25:37 +0000 (20:25 +0200)]
common : fix compile warning

ggml-ci

6 months ago authors : update
Georgi Gerganov [Tue, 3 Dec 2024 18:24:45 +0000 (20:24 +0200)]
authors : update

6 months ago sync : llama.cpp
Georgi Gerganov [Tue, 3 Dec 2024 18:23:20 +0000 (20:23 +0200)]
sync : llama.cpp

ggml-ci

6 months ago ggml : move AMX to the CPU backend (llama/10570)
Diego Devesa [Tue, 3 Dec 2024 18:22:12 +0000 (20:22 +0200)]
ggml : move AMX to the CPU backend (llama/10570)

ggml : automatic selection of best CPU backend (llama/10606)

6 months ago scripts : remove amx from sync
Georgi Gerganov [Tue, 3 Dec 2024 18:18:36 +0000 (20:18 +0200)]
scripts : remove amx from sync

6 months ago metal : small-batch mat-mul kernels (llama/10581)
Georgi Gerganov [Tue, 3 Dec 2024 09:52:33 +0000 (11:52 +0200)]
metal : small-batch mat-mul kernels (llama/10581)

* metal : small-batch mat-mul kernels

ggml-ci

* metal : add rest of types

ggml-ci

* metal : final adjustments

ggml-ci

* metal : add comments

ggml-ci

6 months ago SYCL: Fix and switch to GGML_LOG system instead of fprintf (llama/10579)
Akarshan Biswas [Mon, 2 Dec 2024 07:04:11 +0000 (12:34 +0530)]
SYCL: Fix and switch to GGML_LOG system instead of fprintf (llama/10579)

* Switched to GGML_LOG

* Fix missing semicolon

6 months ago ggml-cpu: replace AArch64 NEON assembly with intrinsics in ggml_gemv_q4_0_4x4_q8_0...
Adrien Gallouët [Sat, 30 Nov 2024 17:13:18 +0000 (18:13 +0100)]
ggml-cpu: replace AArch64 NEON assembly with intrinsics in ggml_gemv_q4_0_4x4_q8_0() (llama/10567)

Signed-off-by: Adrien Gallouët <redacted>
6 months ago vulkan: Dynamic subgroup size support for Q6_K mat_vec (llama/10536)
Eve [Sat, 30 Nov 2024 07:00:02 +0000 (07:00 +0000)]
vulkan: Dynamic subgroup size support for Q6_K mat_vec (llama/10536)

* subgroup 64 version with subgroup add. 15% faster

scalable version

tested for subgroup sizes 16-128

* check for subgroup multiple of 16 and greater than 16

* subgroup sizes are always a power of 2 (https://github.com/KhronosGroup/GLSL/issues/45)

* force 16 sequential threads per block

* make 16 subgroup size a constant

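A hypothetical host-side gate condensing the checks listed above (not the backend's actual dispatch code):

```cpp
#include <cstdint>
#include <cstdio>

// Subgroup sizes are powers of two per the spec; the path additionally wants
// a size that is a multiple of 16 and strictly greater than 16.
static bool use_subgroup_add(uint32_t subgroup_size) {
    const bool pow2 = (subgroup_size & (subgroup_size - 1)) == 0;
    return pow2 && subgroup_size > 16 && subgroup_size % 16 == 0;
}

int main() {
    for (uint32_t s : {16u, 32u, 64u, 128u}) {
        printf("subgroup %3u -> %s\n", s, use_subgroup_add(s) ? "subgroup add" : "fallback");
    }
}
```
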
6 months ago ggml : fix I8MM Q4_1 scaling factor conversion (llama/10562)
Georgi Gerganov [Fri, 29 Nov 2024 14:25:39 +0000 (16:25 +0200)]
ggml : fix I8MM Q4_1 scaling factor conversion (llama/10562)

ggml-ci

6 months ago ggml-cpu: fix typo in gemv/gemm iq4_nl_4_4 (llama/10580)
Shupei Fan [Fri, 29 Nov 2024 13:49:02 +0000 (21:49 +0800)]
ggml-cpu: fix typo in gemv/gemm iq4_nl_4_4 (llama/10580)

6 months ago sycl : offload of get_rows set to 0 (llama/10432)
Alberto Cabrera Pérez [Fri, 29 Nov 2024 12:38:45 +0000 (12:38 +0000)]
sycl : offload of get_rows set to 0 (llama/10432)

6 months ago sycl : Reroute permuted mul_mats through oneMKL (llama/10408)
Alberto Cabrera Pérez [Fri, 29 Nov 2024 09:49:43 +0000 (09:49 +0000)]
sycl : Reroute permuted mul_mats through oneMKL (llama/10408)

This PR fixes the failing MUL_MAT tests for the sycl backend.

6 months ago CANN: RoPE operator optimization (llama/10563)
Chenguang Li [Fri, 29 Nov 2024 06:46:55 +0000 (14:46 +0800)]
CANN: RoPE operator optimization (llama/10563)

* [cann] RoPE operator optimization

* [CANN] Code Formatting

---------

Co-authored-by: noemotiovon <redacted>
6 months ago vulkan: get the first command buffer submitted sooner (llama/10499)
Jeff Bolz [Fri, 29 Nov 2024 06:18:02 +0000 (00:18 -0600)]
vulkan: get the first command buffer submitted sooner (llama/10499)

This is an incremental improvement over #9118 to get work to the GPU a bit
sooner. The first part is to start with a smaller number of nodes before
the first submit, and ramp it up to the current 100 nodes/submit. The
second part is to reduce the dryrun overhead for all the nodes that just
need to request descriptor space.

With these changes I get around 1-2% speedup on RTX 4070 combined with my
old Haswell-era CPU.

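A sketch of the ramped-submission idea; the initial batch size and growth factor are assumed values (only the final 100 nodes/submit comes from the commit text):

```cpp
#include <cstdio>

// Submit the first few graph nodes quickly so the GPU starts working sooner,
// then grow the batch toward the steady-state size.
int main() {
    const int total_nodes = 1000;
    int batch = 10; // small first submit (assumed)
    int submitted = 0;
    while (submitted < total_nodes) {
        int n = batch;
        if (submitted + n > total_nodes) n = total_nodes - submitted;
        printf("submit nodes [%d, %d)\n", submitted, submitted + n);
        submitted += n;
        batch *= 2;                   // ramp up...
        if (batch > 100) batch = 100; // ...to 100 nodes/submit
    }
}
```
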
6 months ago ggml : remove redundant copyright notice + update authors
Georgi Gerganov [Thu, 28 Nov 2024 18:46:40 +0000 (20:46 +0200)]
ggml : remove redundant copyright notice + update authors

6 months ago ggml : fix row condition for i8mm kernels (llama/10561)
Georgi Gerganov [Thu, 28 Nov 2024 12:56:37 +0000 (14:56 +0200)]
ggml : fix row condition for i8mm kernels (llama/10561)

ggml-ci

6 months ago cmake : fix ARM feature detection (llama/10543)
Georgi Gerganov [Thu, 28 Nov 2024 12:56:23 +0000 (14:56 +0200)]
cmake : fix ARM feature detection (llama/10543)

ggml-ci

6 months ago ggml-cpu: support IQ4_NL_4_4 by runtime repack (llama/10541)
Shupei Fan [Thu, 28 Nov 2024 12:52:03 +0000 (20:52 +0800)]
ggml-cpu: support IQ4_NL_4_4 by runtime repack (llama/10541)

* ggml-cpu: support IQ4_NL_4_4 by runtime repack

* ggml-cpu: add __ARM_FEATURE_DOTPROD guard

6 months ago kompute : improve backend to pass test_backend_ops (llama/10542)
Sergio López [Thu, 28 Nov 2024 11:51:38 +0000 (12:51 +0100)]
kompute : improve backend to pass test_backend_ops (llama/10542)

* kompute: op_unary: reject unsupported parameters

Signed-off-by: Sergio Lopez <redacted>
* kompute: softmax: implement ALiBi support

Signed-off-by: Sergio Lopez <redacted>
* kompute: rope: implement neox and phi3 support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_q4_k permuted support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_[q4_0|q4_1|q8_0] permuted support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_f16 permuted support

Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_q6_k permuted support

Signed-off-by: Sergio Lopez <redacted>
---------

Signed-off-by: Sergio Lopez <redacted>
6 months ago CANN: Fix SOC_TYPE compile bug (llama/10519)
leo-pony [Thu, 28 Nov 2024 07:25:24 +0000 (15:25 +0800)]
CANN: Fix SOC_TYPE compile bug (llama/10519)

* CANN: Fix the build failure on Ascend310P in two cases:
1) SOC_TYPE specified manually
2) Some unusual compile environments

* Update the CANN backend news content: support F16 and F32 data type models for the Ascend 310P NPU.

* Fix CANN compile failure: the assert in the Ascend kernel function isn't supported on some CANN versions.

6 months ago CANN: ROPE operator optimization (llama/10540)
Chenguang Li [Thu, 28 Nov 2024 06:24:46 +0000 (14:24 +0800)]
CANN: ROPE operator optimization (llama/10540)

* [cann] ROPE operator optimization

Co-authored-by: noemotiovon <redacted>
6 months ago Add some minimal optimizations for CDNA (llama/10498)
uvos [Wed, 27 Nov 2024 16:10:08 +0000 (17:10 +0100)]
Add some minimal optimizations for CDNA (llama/10498)

* Add some minimal optimizations for CDNA

* ggml_cuda: set launch bounds also for GCN as it helps there too

6 months ago metal : fix group_norm support condition (llama/0)
Georgi Gerganov [Wed, 27 Nov 2024 09:22:14 +0000 (11:22 +0200)]
metal : fix group_norm support condition (llama/0)

6 months ago vulkan: define all quant data structures in types.comp (llama/10440)
Jeff Bolz [Wed, 27 Nov 2024 07:32:54 +0000 (01:32 -0600)]
vulkan: define all quant data structures in types.comp (llama/10440)

6 months ago vulkan: Handle GPUs with less shared memory (llama/10468)
Jeff Bolz [Wed, 27 Nov 2024 07:30:27 +0000 (01:30 -0600)]
vulkan: Handle GPUs with less shared memory (llama/10468)

There have been reports of failure to compile on systems with <= 32KB
of shared memory (e.g. #10037). This change makes the large tile size
fall back to a smaller size if necessary, and makes mul_mat_id fall
back to CPU if there's only 16KB of shared memory.

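The policy reads roughly as below; the 16KB/32KB thresholds come from the commit text, while the function shape is illustrative:

```cpp
#include <cstddef>
#include <cstdio>

enum class MulMatPath { LargeTile, SmallTile, Cpu };

// Fall back to a smaller tile on limited shared memory, and push mul_mat_id
// to the CPU when only 16KB is available.
static MulMatPath pick_path(size_t shared_mem_bytes, bool is_mul_mat_id) {
    if (is_mul_mat_id && shared_mem_bytes <= 16 * 1024) return MulMatPath::Cpu;
    if (shared_mem_bytes <= 32 * 1024) return MulMatPath::SmallTile;
    return MulMatPath::LargeTile;
}

int main() {
    printf("%d %d %d\n",
        (int) pick_path(16 * 1024, true),   // Cpu
        (int) pick_path(32 * 1024, false),  // SmallTile
        (int) pick_path(48 * 1024, false)); // LargeTile
}
```
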
6 months ago vulkan: further optimize q5_k mul_mat_vec (llama/10479)
Jeff Bolz [Wed, 27 Nov 2024 07:21:59 +0000 (01:21 -0600)]
vulkan: further optimize q5_k mul_mat_vec (llama/10479)

6 months ago vulkan: skip integer div/mod in get_offsets for batch_idx==0 (llama/10506)
Jeff Bolz [Wed, 27 Nov 2024 07:08:54 +0000 (01:08 -0600)]
vulkan: skip integer div/mod in get_offsets for batch_idx==0 (llama/10506)

6 months ago vulkan: optimize Q2_K and Q3_K mul_mat_vec (llama/10459)
Jeff Bolz [Wed, 27 Nov 2024 07:00:50 +0000 (01:00 -0600)]
vulkan: optimize Q2_K and Q3_K mul_mat_vec (llama/10459)

6 months ago mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (llama/10516)
R0CKSTAR [Tue, 26 Nov 2024 16:00:41 +0000 (00:00 +0800)]
mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (llama/10516)

Signed-off-by: Xiaodong Ye <redacted>
6 months ago vulkan: fix group_norm (llama/10496)
Jeff Bolz [Tue, 26 Nov 2024 15:45:05 +0000 (09:45 -0600)]
vulkan: fix group_norm (llama/10496)

Fix bad calculation of the end of the range. Add a backend test that
covers the bad case (taken from stable diffusion).

Fixes https://github.com/leejet/stable-diffusion.cpp/issues/439.

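A hypothetical reconstruction of the bug class (not the shader source): the end of each workgroup's range has to be clamped to the element count.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const int n = 1000, per_wg = 256;
    for (int wg = 0; wg < (n + per_wg - 1) / per_wg; ++wg) {
        const int start = wg * per_wg;
        const int end   = std::min(start + per_wg, n); // the clamp is the fix
        printf("wg %d -> [%d, %d)\n", wg, start, end);
    }
}
```
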
6 months ago cmake : enable warnings in llama (llama/10474)
Georgi Gerganov [Tue, 26 Nov 2024 12:18:08 +0000 (14:18 +0200)]
cmake : enable warnings in llama (llama/10474)

* cmake : enable warnings in llama

ggml-ci

* cmake : add llama_get_flags and respect LLAMA_FATAL_WARNINGS

* cmake : get_flags -> ggml_get_flags

* speculative-simple : fix warnings

* cmake : reuse ggml_get_flags

ggml-ci

* speculative-simple : fix compile warning

ggml-ci

6 months ago ggml-cpu: cmake add arm64 cpu feature check for macos (llama/10487)
Charles Xu [Tue, 26 Nov 2024 11:37:05 +0000 (12:37 +0100)]
ggml-cpu: cmake add arm64 cpu feature check for macos (llama/10487)

* ggml-cpu: cmake add arm64 cpu feature check for macos

* use vmmlaq_s32 for compile option i8mm check

6 months ago CANN: Improve the Inferencing Performance for Ascend NPU Device (llama/10454)
Shanshan Shen [Tue, 26 Nov 2024 10:08:37 +0000 (18:08 +0800)]
CANN: Improve the Inferencing Performance for Ascend NPU Device (llama/10454)

* improve inferencing performance for ascend npu.

Co-authored-by: Frank Mai <redacted>
* some modification after review

* some modifications after review

* restore some modifications

* restore some modifications

---------

Co-authored-by: shanshan shen <redacted>
Co-authored-by: Frank Mai <redacted>
6 months ago CANN: RoPE and CONCAT operator optimization (llama/10488)
Chenguang Li [Tue, 26 Nov 2024 09:31:05 +0000 (17:31 +0800)]
CANN: RoPE and CONCAT operator optimization (llama/10488)

Co-authored-by: noemotiovon <redacted>
6 months ago vulkan: Fix a vulkan-shaders-gen argument parsing error (llama/10484)
Junil Kim [Tue, 26 Nov 2024 01:47:20 +0000 (10:47 +0900)]
vulkan: Fix a vulkan-shaders-gen argument parsing error (llama/10484)

vulkan-shaders-gen was not parsing the --no-clean argument correctly: the
previous code only handled arguments that take a value, and --no-clean takes
none, so it was skipped. This commit adds correct parsing of arguments that
don't have values.

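A minimal sketch of the fix's shape (not the actual vulkan-shaders-gen parser): value-less flags such as --no-clean need their own branch so they don't consume the next argument.

```cpp
#include <cstdio>
#include <map>
#include <set>
#include <string>

int main(int argc, char ** argv) {
    const std::set<std::string> flags = {"--no-clean"}; // value-less flags
    std::set<std::string> seen;
    std::map<std::string, std::string> opts;
    for (int i = 1; i < argc; ++i) {
        const std::string arg = argv[i];
        if (flags.count(arg)) {
            seen.insert(arg);      // no value to consume
        } else if (i + 1 < argc) {
            opts[arg] = argv[++i]; // option that takes a value
        } else {
            fprintf(stderr, "missing value for %s\n", arg.c_str());
            return 1;
        }
    }
    printf("no-clean: %d, options: %zu\n", (int) seen.count("--no-clean"), opts.size());
}
```
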
6 months ago metal : enable mat-vec kernels for bs <= 4 (llama/10491)
Georgi Gerganov [Mon, 25 Nov 2024 19:49:31 +0000 (21:49 +0200)]
metal : enable mat-vec kernels for bs <= 4 (llama/10491)

6 months ago llama : accept a list of devices to use to offload a model (llama/10497)
Diego Devesa [Mon, 25 Nov 2024 18:30:06 +0000 (19:30 +0100)]
llama : accept a list of devices to use to offload a model (llama/10497)

* llama : accept a list of devices to use to offload a model

* accept `--dev none` to completely disable offloading

* fix dev list with dl backends

* rename env parameter to LLAMA_ARG_DEVICE for consistency

6 months ago ggml : add support for dynamic loading of backends (llama/10469)
Diego Devesa [Mon, 25 Nov 2024 14:13:39 +0000 (15:13 +0100)]
ggml : add support for dynamic loading of backends (llama/10469)

* ggml : add support for dynamic loading of backends

---------

Co-authored-by: Georgi Gerganov <redacted>
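
Dynamic loading generally boils down to dlopen plus a symbol lookup. A POSIX sketch with an assumed library name and entry-point symbol (the real ggml loader and its registration API may differ):

```cpp
#include <cstdio>
#include <dlfcn.h>

typedef void * (*backend_init_fn)(void);

int main() {
    // Both the library name and the symbol are assumptions for illustration.
    void * handle = dlopen("./libggml-cpu.so", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }
    auto init = (backend_init_fn) dlsym(handle, "ggml_backend_init");
    if (!init) { fprintf(stderr, "dlsym: %s\n", dlerror()); dlclose(handle); return 1; }
    void * backend = init();
    printf("backend handle: %p\n", backend);
    dlclose(handle);
}
```
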
6 months ago tests : fix compile warning
Georgi Gerganov [Mon, 25 Nov 2024 13:17:32 +0000 (15:17 +0200)]
tests : fix compile warning

6 months ago metal : minor code formatting
Georgi Gerganov [Mon, 25 Nov 2024 13:08:04 +0000 (15:08 +0200)]
metal : minor code formatting

6 months ago ggml : do not use ARM features not included in the build (llama/10457)
Diego Devesa [Sat, 23 Nov 2024 13:41:12 +0000 (14:41 +0100)]
ggml : do not use ARM features not included in the build (llama/10457)

6 months ago CANN: Support Ascend310P to accelerate F32 and F16 Model (llama/10216)
leo-pony [Fri, 22 Nov 2024 06:07:20 +0000 (14:07 +0800)]
CANN: Support Ascend310P to accelerate F32 and F16 Model (llama/10216)

* CANN Support Ascend310P to accelerate F32 and F16 Model

* Add compile option soc type macro ASCEND_310P to ggml-cann lib

* Remove unused code

* Remove the hard-coded ascend soc_type compile option in CMakeLists.txt

6 months ago cuda : optimize argmax (llama/10441)
Diego Devesa [Thu, 21 Nov 2024 17:18:50 +0000 (18:18 +0100)]
cuda : optimize argmax (llama/10441)

* cuda : optimize argmax

* remove unused parameter

ggml-ci

* fixup : use full warps

ggml-ci

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* fix ub

* ggml : check ne00 <= INT32_MAX in argmax and argsort

---------

Co-authored-by: Johannes Gäßler <redacted>
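
A scalar CPU reference for the op, including the ne00 <= INT32_MAX guard the commit mentions (the CUDA kernel replaces this loop with warp-level reductions):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

static int32_t argmax_f32(const float * x, int64_t n) {
    assert(n > 0 && n <= INT32_MAX); // the range check added in the commit
    int32_t best = 0;
    for (int64_t i = 1; i < n; ++i) {
        if (x[i] > x[best]) best = (int32_t) i;
    }
    return best;
}

int main() {
    std::vector<float> x = {0.1f, 2.5f, -1.0f, 2.4f};
    printf("argmax = %d\n", argmax_f32(x.data(), (int64_t) x.size())); // 1
}
```
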
6 months ago vulkan: predicate max operation in soft_max shaders/soft_max (llama/10437)
Jeff Bolz [Wed, 20 Nov 2024 19:47:36 +0000 (13:47 -0600)]
vulkan: predicate max operation in soft_max shaders/soft_max (llama/10437)

Fixes #10434

6 months ago vulkan: copy iq4_nl LUT into shared memory (llama/10409)
Jeff Bolz [Wed, 20 Nov 2024 07:40:18 +0000 (01:40 -0600)]
vulkan: copy iq4_nl LUT into shared memory (llama/10409)

6 months ago vulkan: further optimize mul_mat_vec using larger loads (llama/10387)
Jeff Bolz [Wed, 20 Nov 2024 07:11:00 +0000 (01:11 -0600)]
vulkan: further optimize mul_mat_vec using larger loads (llama/10387)

* vulkan: Use pipeline_robustness to disable robustness in mul_mat_vec.

Add some early returns for nonexistent rows in mul_mat_vec shaders. These
can only be hit when dispatching a 2D grid of workgroups. Fix the logic
for the 2D grid of workgroups to round up.

Enable the pipeline robustness extension if it's available, and use it to
disable robustness for these pipelines. The instructions to do the bounds
checking contend for the same ALU resources as the bit twiddling dequant
instructions.

* vulkan: Add GLSL structure aliases for quant types to allow larger loads

In Vulkan it's not possible to cast pointer types, so instead you have to
declare an aliased binding for the memory with a different type. This
commit adds aliases for the quant formats using 16b ints, and in a few
places where the struct size is a multiple of 4 also using 32b ints.
Currently only q4_k's aliases are used, but others will be used in
subsequent commits.

* vulkan: use larger loads in q5_k and q6_k shaders.

Similar to the optimization I did in q4_k recently, this vectorizes some loads
and reduces the number of bit twiddling instructions.

* vulkan: use larger K step per iteration in mul_mat_vec.

Add vec4 dequantization functions, and use them to do K=8 per iteration in
mul_mat_vec. This uses 16b loads for the quant values and 128b loads for B
which helps reduce the load on the memory system.

The K_PER_ITER==2 logic is still there, just for F16/F32, and really only
because they support unaligned sizes.

Tweak the num_iters/unrolling logic to be simpler and catch a couple missed
unrolling opportunities.

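A CPU analogue of the K-per-iteration change (purely illustrative; the real code vectorizes quantized blocks in GLSL): process eight reduction elements per loop trip so wider loads can be issued, with a scalar tail for unaligned sizes.

```cpp
#include <cstdio>
#include <vector>

static float dot_k8(const float * a, const float * b, int K) {
    float acc = 0.0f;
    int k = 0;
    for (; k + 8 <= K; k += 8) {           // K_PER_ITER == 8 main loop
        for (int j = 0; j < 8; ++j) acc += a[k + j] * b[k + j];
    }
    for (; k < K; ++k) acc += a[k] * b[k]; // tail for unaligned K
    return acc;
}

int main() {
    std::vector<float> a(1000, 0.5f), b(1000, 2.0f);
    printf("%.1f\n", dot_k8(a.data(), b.data(), 1000)); // 1000.0
}
```
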
6 months ago add cmake rvv support (llama/10411)
haopeng [Tue, 19 Nov 2024 20:10:31 +0000 (04:10 +0800)]
add cmake rvv support (llama/10411)

6 months ago CUDA: remove unnecessary warp reduce in FA (#1032)
mahorozte [Tue, 3 Dec 2024 13:11:43 +0000 (21:11 +0800)]
CUDA: remove unnecessary warp reduce in FA (#1032)

* kqmax_new_j is the same in every thread within a warp after the operation at line 199, so this reduction can be omitted

* same problem in vec32

---------

Co-authored-by: ZhaoXiaoYu <redacted>
6 months ago feat: add `GGML_UNARY_OP_ARGMAX` Metal kernel (#1019)
PAB [Mon, 2 Dec 2024 18:27:24 +0000 (19:27 +0100)]
feat: add `GGML_UNARY_OP_ARGMAX` Metal kernel (#1019)

* implemented argmax kernel

* tpig -> tgpig

* change to strides

* contiguous assertions

* kernel working and tested

* argmax simd parallel implementation

* added 2 new tests for argmax in test-backend-ops

* cosmit

* added 3 tests cases for perf eval

* add test_argmax in make_test_cases_perf

* Update test-backend-ops.cpp

Co-authored-by: Diego Devesa <redacted>
---------

Co-authored-by: Diego Devesa <redacted>
7 months ago metal : add `GGML_OP_CONV_TRANSPOSE_1D` kernels (#1026)
PAB [Thu, 28 Nov 2024 08:25:06 +0000 (09:25 +0100)]
metal : add `GGML_OP_CONV_TRANSPOSE_1D` kernels (#1026)

* wip

* wip implementation f32

* kernel conv transpose 1d f32 working

* initial commit

7 months ago examples: link to HuggingFace mirror of MNIST data (#1030)
Johannes Gäßler [Wed, 27 Nov 2024 18:42:36 +0000 (19:42 +0100)]
examples: link to HuggingFace mirror of MNIST data (#1030)

7 months ago Fix build docs for hip (#1029)
Tristan Druyen [Tue, 26 Nov 2024 16:28:31 +0000 (17:28 +0100)]
Fix build docs for hip (#1029)

7 months ago Do not include arm_neon.h when compiling CUDA code (#1028)
Frankie Robertson [Tue, 26 Nov 2024 13:50:26 +0000 (15:50 +0200)]
Do not include arm_neon.h when compiling CUDA code (#1028)

7 months ago Create .gitmodules for the kompute backend (#1024)
M Refi D.A [Wed, 20 Nov 2024 22:39:37 +0000 (06:39 +0800)]
Create .gitmodules for the kompute backend (#1024)

7 months ago sync : whisper.cpp
Georgi Gerganov [Wed, 20 Nov 2024 19:01:53 +0000 (21:01 +0200)]
sync : whisper.cpp

7 months ago ggml/sched : do not skip views in pre-assignments
slaren [Wed, 20 Nov 2024 12:25:08 +0000 (13:25 +0100)]
ggml/sched : do not skip views in pre-assignments

7 months ago ggml-opt: fix data corruption (#1022)
Johannes Gäßler [Wed, 20 Nov 2024 13:56:04 +0000 (14:56 +0100)]
ggml-opt: fix data corruption (#1022)

7 months ago sync : llama.cpp
Georgi Gerganov [Tue, 19 Nov 2024 16:53:16 +0000 (18:53 +0200)]
sync : llama.cpp

7 months ago Add required ggml-base and backend libs to cmake pkg (llama/10407)
bandoti [Tue, 19 Nov 2024 16:10:30 +0000 (12:10 -0400)]
Add required ggml-base and backend libs to cmake pkg (llama/10407)

7 months ago sync : llama.cpp
Georgi Gerganov [Tue, 19 Nov 2024 13:55:56 +0000 (15:55 +0200)]
sync : llama.cpp

7 months ago cuda : fix CUDA_FLAGS not being applied (llama/10403)
Diego Devesa [Tue, 19 Nov 2024 13:29:38 +0000 (14:29 +0100)]
cuda : fix CUDA_FLAGS not being applied (llama/10403)

7 months ago sync : llama.cpp
Georgi Gerganov [Tue, 19 Nov 2024 11:46:53 +0000 (13:46 +0200)]
sync : llama.cpp

7 months ago sycl : Add option to set the SYCL architecture for all targets (llama/10266)
Romain Biessy [Tue, 19 Nov 2024 08:02:23 +0000 (09:02 +0100)]
sycl : Add option to set the SYCL architecture for all targets (llama/10266)

* Add option to set the SYCL architecture for all targets
* Convert GGML_SYCL_HIP_TARGET to the more generic GGML_SYCL_ARCH option
* Document that setting GGML_SYCL_ARCH can improve the performance

7 months ago vulkan: Optimize soft_max (llama/10301)
Jeff Bolz [Tue, 19 Nov 2024 07:25:17 +0000 (01:25 -0600)]
vulkan: Optimize soft_max (llama/10301)

* vulkan: Optimize soft_max

Large soft_max could already saturate memory, but small/medium sizes were
pretty slow. The bulk of the gains for them comes from using a smaller
workgroup size, and making the workgroup size match the subgroup size also
makes the barriers much cheaper.

Cache some values in locals to avoid refetching/recomputing. And stamp
out a few "template instantiations" so smaller cases will fully unroll.

Add a missing early return for OOB rows. This happens when there are more
than 512 rows and the dispatch is 512 x H.

* vulkan: Further soft_max optimizations

Restore the workgroup size of 512 case, use it for >1024.

Use unrollable loops for more iteration counts.

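For reference, the computation being tuned is the standard numerically stable row softmax, with the row max cached in a local as the commit describes (CPU sketch, not the shader):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// One "workgroup" per row; rows beyond the row count would return early.
static void soft_max_row(float * x, int n) {
    float mx = x[0];
    for (int i = 1; i < n; ++i) mx = std::max(mx, x[i]); // cached row max
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) { x[i] = std::exp(x[i] - mx); sum += x[i]; }
    for (int i = 0; i < n; ++i) x[i] /= sum;
}

int main() {
    std::vector<float> row = {1.0f, 2.0f, 3.0f};
    soft_max_row(row.data(), (int) row.size());
    printf("%.3f %.3f %.3f\n", row[0], row[1], row[2]); // ~0.090 0.245 0.665
}
```
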
7 months ago sycl: Revert MUL_MAT_OP support changes (llama/10385)
Alberto Cabrera Pérez [Tue, 19 Nov 2024 00:50:04 +0000 (00:50 +0000)]
sycl: Revert MUL_MAT_OP support changes (llama/10385)

7 months ago cuda : only use native when supported by cmake (llama/10389)
Diego Devesa [Mon, 18 Nov 2024 17:43:40 +0000 (18:43 +0100)]
cuda : only use native when supported by cmake (llama/10389)

7 months ago vulkan: remove use of null initializer (llama/10372)
Jeff Bolz [Mon, 18 Nov 2024 14:28:42 +0000 (08:28 -0600)]
vulkan: remove use of null initializer (llama/10372)

Seems like this isn't working for vulkan-over-metal when the array is sized
by a spec constant. Maybe a spirv-cross limitation?

7 months ago metal : fix offset integer overflows in im2col (#1015)
Plamen Minev [Mon, 18 Nov 2024 13:02:27 +0000 (15:02 +0200)]
metal : fix offset integer overflows in im2col (#1015)

-- While running StableDiffusion.cpp locally with Metal, some offsets overflow and result in incorrect calculations
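
An illustration of the overflow class (hypothetical sizes): an offset computed in 32-bit arithmetic wraps for large images unless it is widened to 64-bit before multiplying.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const uint32_t OH = 70000, OW = 70000; // large spatial dims (assumed)
    uint32_t bad  = OH * OW;               // wraps modulo 2^32
    uint64_t good = (uint64_t) OH * OW;    // widened before multiplying
    printf("32-bit: %u, 64-bit: %llu\n", bad, (unsigned long long) good);
}
```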

7 months ago sync : llama.cpp
Georgi Gerganov [Mon, 18 Nov 2024 12:12:30 +0000 (14:12 +0200)]
sync : llama.cpp

ggml-ci

7 months ago Vulkan: Fix device info output format specifiers (llama/10366)
0cc4m [Mon, 18 Nov 2024 10:02:43 +0000 (11:02 +0100)]
Vulkan: Fix device info output format specifiers (llama/10366)

* Vulkan: Fix device info output format specifiers

* Vulkan: Use zu printf specifier for size_t instead of ld

7 months ago metal : add `GGML_UNARY_OP_ELU` kernel (#1018)
PAB [Mon, 18 Nov 2024 09:02:49 +0000 (10:02 +0100)]
metal : add `GGML_UNARY_OP_ELU` kernel (#1018)

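The kernel implements the usual ELU activation; a one-line CPU reference (assuming the unary op uses alpha = 1, as is standard):

```cpp
#include <cmath>
#include <cstdio>

// elu(x) = x for x > 0, exp(x) - 1 otherwise (alpha = 1 assumed)
static float elu(float x) { return x > 0.0f ? x : std::expm1(x); }

int main() { printf("%.3f %.3f\n", elu(1.5f), elu(-1.5f)); } // 1.500 -0.777
```
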
7 months ago sync : llama.cpp
Georgi Gerganov [Mon, 18 Nov 2024 08:14:04 +0000 (10:14 +0200)]
sync : llama.cpp

ggml-ci

7 months ago CUDA: fix MMV kernel being used for FP16 src1 (llama/10357)
Johannes Gäßler [Sun, 17 Nov 2024 22:20:42 +0000 (23:20 +0100)]
CUDA: fix MMV kernel being used for FP16 src1 (llama/10357)

7 months ago sync : llama.cpp
Georgi Gerganov [Sun, 17 Nov 2024 12:02:10 +0000 (14:02 +0200)]
sync : llama.cpp

ggml-ci

7 months ago CMake: fix typo in comment [no ci] (llama/10360)
Johannes Gäßler [Sun, 17 Nov 2024 11:59:38 +0000 (12:59 +0100)]
CMake: fix typo in comment [no ci] (llama/10360)

7 months ago llama : only use default buffer types for the KV cache (llama/10358)
Diego Devesa [Sun, 17 Nov 2024 11:25:45 +0000 (12:25 +0100)]
llama : only use default buffer types for the KV cache (llama/10358)

7 months ago metal : refactor kernel args into structs (llama/10238)
Georgi Gerganov [Sun, 17 Nov 2024 09:23:01 +0000 (11:23 +0200)]
metal : refactor kernel args into structs (llama/10238)

* metal : add kernel arg structs (wip)

* metal : fattn args

ggml-ci

* metal : cont + avoid potential int overflow [no ci]

* metal : mul mat struct (wip)

* cont : mul mat vec

* cont : pass by reference

* cont : args is first argument

* cont : use char ptr

* cont : shmem style

* cont : thread counters style

* cont : mul mm id

ggml-ci

* cont : int safety + register optimizations

ggml-ci

* metal : GGML_OP_CONCAT

ggml-ci

* metal : GGML_OP_ADD, GGML_OP_SUB, GGML_OP_MUL, GGML_OP_DIV

* metal : GGML_OP_REPEAT

* metal : GGML_OP_CPY

* metal : GGML_OP_RMS_NORM

* metal : GGML_OP_NORM

* metal : add TODOs for rest of ops

* ggml : add ggml-metal-impl.h

ggml-ci

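The pattern is to replace long scalar argument lists with one POD struct per kernel (cf. the ggml_metal_kargs_set typedef earlier in this log). A sketch with hypothetical fields, not the real layout:

```cpp
#include <cstdint>
#include <cstdio>

// The host fills one plain struct and passes it to the Metal kernel as a
// single argument instead of many loose scalars. Field names illustrative.
struct ggml_metal_kargs_example {
    int32_t  ne00, ne01, ne02; // source dims
    uint64_t nb00, nb01, nb02; // source strides in bytes
    int32_t  ne0,  ne1;        // destination dims
};

int main() {
    ggml_metal_kargs_example args = {32, 64, 1, 4, 128, 8192, 32, 64};
    printf("src dims: %d x %d x %d\n", args.ne00, args.ne01, args.ne02);
}
```
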
7 months ago ggml : fix undefined reference to 'getcpu' (llama/10354)
FirstTimeEZ [Sun, 17 Nov 2024 08:39:22 +0000 (21:39 +1300)]
ggml : fix undefined reference to 'getcpu' (llama/10354)

https://github.com/ggerganov/llama.cpp/issues/10352

7 months ago CUDA: remove DMMV, consolidate F16 mult mat vec (llama/10318)
Johannes Gäßler [Sun, 17 Nov 2024 08:09:55 +0000 (09:09 +0100)]
CUDA: remove DMMV, consolidate F16 mult mat vec (llama/10318)

7 months ago CMake: default to -arch=native for CUDA build (llama/10320)
Johannes Gäßler [Sun, 17 Nov 2024 08:06:34 +0000 (09:06 +0100)]
CMake: default to -arch=native for CUDA build (llama/10320)

7 months ago ggml : fix possible buffer use after free in sched reserve (llama/9930)
Diego Devesa [Sun, 17 Nov 2024 06:31:17 +0000 (07:31 +0100)]
ggml : fix possible buffer use after free in sched reserve (llama/9930)

7 months ago ggml : inttypes.h -> cinttypes (llama/0)
Georgi Gerganov [Sat, 16 Nov 2024 21:40:39 +0000 (23:40 +0200)]
ggml : inttypes.h -> cinttypes (llama/0)

ggml-ci

7 months ago ggml : adapt AMX to tensor->grad removal (llama/0)
Georgi Gerganov [Sat, 16 Nov 2024 19:38:01 +0000 (21:38 +0200)]
ggml : adapt AMX to tensor->grad removal (llama/0)

ggml-ci