git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
7 months agoFix HIP flag inconsistency & build docs (#10524)
Tristan Druyen [Tue, 26 Nov 2024 18:27:28 +0000 (19:27 +0100)]
Fix HIP flag inconsistency & build docs (#10524)

* Fix inconsistency of HIP flags in cmake & make

* Fix docs regarding GGML_HIP

7 months agomtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)
R0CKSTAR [Tue, 26 Nov 2024 16:00:41 +0000 (00:00 +0800)]
mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make (#10516)

Signed-off-by: Xiaodong Ye <redacted>
7 months agovulkan: fix group_norm (#10496)
Jeff Bolz [Tue, 26 Nov 2024 15:45:05 +0000 (09:45 -0600)]
vulkan: fix group_norm (#10496)

Fix bad calculation of the end of the range. Add a backend test that
covers the bad case (taken from stable diffusion).

Fixes https://github.com/leejet/stable-diffusion.cpp/issues/439.
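
A minimal C++ sketch of the bug class being fixed (hypothetical names, not the
actual shader code): when work is split across groups, the end of each group's
range must be clamped to the total element count.

    #include <algorithm>
    #include <cstdint>

    void process_group(int group, int n_groups, int64_t ne) {
        const int64_t per_group = (ne + n_groups - 1) / n_groups; // round up
        const int64_t start     = group * per_group;
        const int64_t end       = std::min(start + per_group, ne); // without this
                                                                   // clamp, the last
                                                                   // group runs past ne
        for (int64_t i = start; i < end; ++i) {
            // ... normalize element i ...
        }
    }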

7 months agoserver : replace behave with pytest (#10416)
Xuan Son Nguyen [Tue, 26 Nov 2024 15:20:18 +0000 (16:20 +0100)]
server : replace behave with pytest (#10416)

* server : replace behave with pytest

* fix test on windows

* misc

* add more tests

* more tests

* styling

* log less, fix embd test

* added all sequential tests

* fix coding style

* fix save slot test

* add parallel completion test

* fix parallel test

* remove feature files

* update test docs

* no cache_prompt for some tests

* add test_cache_vs_nocache_prompt

7 months agorestore the condition to build & update package when merging (#10507)
Neo Zhang Jianyu [Tue, 26 Nov 2024 13:43:47 +0000 (21:43 +0800)]
restore the condition to build & update package when merging (#10507)

Co-authored-by: arthw <redacted>
7 months agocmake : enable warnings in llama (#10474)
Georgi Gerganov [Tue, 26 Nov 2024 12:18:08 +0000 (14:18 +0200)]
cmake : enable warnings in llama (#10474)

* cmake : enable warnings in llama

ggml-ci

* cmake : add llama_get_flags and respect LLAMA_FATAL_WARNINGS

* cmake : get_flags -> ggml_get_flags

* speculative-simple : fix warnings

* cmake : reuse ggml_get_flags

ggml-ci

* speculative-simple : fix compile warning

ggml-ci

7 months agoci : publish the docker images created during scheduled runs (#10515)
Diego Devesa [Tue, 26 Nov 2024 12:05:20 +0000 (13:05 +0100)]
ci : publish the docker images created during scheduled runs (#10515)

7 months agoci : add ubuntu cuda build, build with one arch on windows (#10456)
Diego Devesa [Tue, 26 Nov 2024 12:05:07 +0000 (13:05 +0100)]
ci : add ubuntu cuda build, build with one arch on windows (#10456)

7 months agoggml-cpu: cmake add arm64 cpu feature check for macos (#10487)
Charles Xu [Tue, 26 Nov 2024 11:37:05 +0000 (12:37 +0100)]
ggml-cpu: cmake add arm64 cpu feature check for macos (#10487)

* ggml-cpu: cmake add arm64 cpu feature check for macos

* use vmmlaq_s32 for compile option i8mm check

7 months agoserver : fix parallel speculative decoding (#10513)
Georgi Gerganov [Tue, 26 Nov 2024 11:36:40 +0000 (13:36 +0200)]
server : fix parallel speculative decoding (#10513)

ggml-ci

7 months agospeculative : simplify the implementation (#10504)
Georgi Gerganov [Tue, 26 Nov 2024 10:29:38 +0000 (12:29 +0200)]
speculative : simplify the implementation (#10504)

ggml-ci

7 months agoCANN: Improve the Inference Performance for Ascend NPU Devices (#10454)
Shanshan Shen [Tue, 26 Nov 2024 10:08:37 +0000 (18:08 +0800)]
CANN: Improve the Inference Performance for Ascend NPU Devices (#10454)

* improve inferencing performance for ascend npu.

Co-authored-by: Frank Mai <redacted>
* some modification after review

* some modifications after review

* restore some modifications

* restore some modifications

---------

Co-authored-by: shanshan shen <redacted>
Co-authored-by: Frank Mai <redacted>
7 months agoCANN: RoPE and CONCAT operator optimization (#10488)
Chenguang Li [Tue, 26 Nov 2024 09:31:05 +0000 (17:31 +0800)]
CANN: RoPE and CONCAT operator optimization (#10488)

Co-authored-by: noemotiovon <redacted>
7 months agovulkan: Fix a vulkan-shaders-gen argument parsing error (#10484)
Junil Kim [Tue, 26 Nov 2024 01:47:20 +0000 (10:47 +0900)]
vulkan: Fix a vulkan-shaders-gen argument parsing error (#10484)

vulkan-shaders-gen was not parsing the --no-clean argument correctly: the
previous code only parsed arguments that take a value, and --no-clean takes
none, so the flag was skipped. This commit adds correct parsing of arguments
that don't have values.
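
A hedged sketch of the parsing fix described above (simplified, hypothetical
structure; not the actual vulkan-shaders-gen code): flags that take no value
must be matched before the next token is consumed as a value.

    #include <map>
    #include <set>
    #include <string>

    void parse_args(int argc, char ** argv,
                    std::map<std::string, std::string> & opts,
                    std::set<std::string> & flags) {
        for (int i = 1; i < argc; ++i) {
            std::string arg = argv[i];
            if (arg == "--no-clean") {
                flags.insert(arg);     // valueless flag: do not consume argv[i+1]
            } else if (i + 1 < argc) {
                opts[arg] = argv[++i]; // option that takes a value
            }
        }
    }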

7 months agoIntroduce llama-run (#10291)
Eric Curtin [Mon, 25 Nov 2024 21:56:24 +0000 (16:56 -0500)]
Introduce llama-run (#10291)

It's like simple-chat, but it uses smart pointers to avoid manual memory
cleanup, so there are fewer memory leaks in the code. It also avoids printing
multiple dots, splits the code into smaller functions, and uses no exception
handling.

Signed-off-by: Eric Curtin <redacted>
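
A small illustration of the smart-pointer approach mentioned above (a sketch,
assuming the llama.h C API; the deleter and alias names are hypothetical):

    #include <memory>
    #include "llama.h"

    // unique_ptr with custom deleters frees the model/context automatically
    // on every exit path, so no manual cleanup calls are needed
    struct model_deleter { void operator()(llama_model * m)   { llama_free_model(m); } };
    struct ctx_deleter   { void operator()(llama_context * c) { llama_free(c); } };

    using model_ptr = std::unique_ptr<llama_model,   model_deleter>;
    using ctx_ptr   = std::unique_ptr<llama_context, ctx_deleter>;
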
7 months agoci : build docker images only once daily (#10503)
Diego Devesa [Mon, 25 Nov 2024 21:05:39 +0000 (22:05 +0100)]
ci : build docker images only once daily (#10503)

7 months agoserver : add more information about error (#10455)
Georgi Gerganov [Mon, 25 Nov 2024 20:28:27 +0000 (22:28 +0200)]
server : add more information about error (#10455)

7 months agoserver : enable cache_prompt by default (#10501)
Georgi Gerganov [Mon, 25 Nov 2024 19:50:07 +0000 (21:50 +0200)]
server : enable cache_prompt by default (#10501)

ggml-ci

7 months agometal : enable mat-vec kernels for bs <= 4 (#10491)
Georgi Gerganov [Mon, 25 Nov 2024 19:49:31 +0000 (21:49 +0200)]
metal : enable mat-vec kernels for bs <= 4 (#10491)

7 months agoRename Olmo1124 to Olmo2 (#10500)
Shane A [Mon, 25 Nov 2024 18:36:09 +0000 (10:36 -0800)]
Rename Olmo1124 to Olmo2 (#10500)

7 months agollama : accept a list of devices to use to offload a model (#10497)
Diego Devesa [Mon, 25 Nov 2024 18:30:06 +0000 (19:30 +0100)]
llama : accept a list of devices to use to offload a model (#10497)

* llama : accept a list of devices to use to offload a model

* accept `--dev none` to completely disable offloading

* fix dev list with dl backends

* rename env parameter to LLAMA_ARG_DEVICE for consistency
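
A sketch of how such a --dev argument might be split (parsing only; the
function name is hypothetical and the real option handling presumably lives
in common/):

    #include <sstream>
    #include <string>
    #include <vector>

    std::vector<std::string> parse_dev_arg(const std::string & arg) {
        if (arg == "none") {
            return {}; // empty list: disable offloading entirely
        }
        std::vector<std::string> devs;
        std::stringstream ss(arg);
        for (std::string d; std::getline(ss, d, ','); ) {
            devs.push_back(d); // e.g. "cuda0,cuda1" -> {"cuda0", "cuda1"}
        }
        return devs;
    }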

7 months agoGithub: update issue templates [no ci] (#10489)
Johannes Gäßler [Mon, 25 Nov 2024 18:18:37 +0000 (19:18 +0100)]
Github: update issue templates [no ci] (#10489)

7 months agoAdd download chat feature to server chat (#10481)
brucepro [Mon, 25 Nov 2024 16:11:55 +0000 (08:11 -0800)]
Add download chat feature to server chat (#10481)

* Add download chat feature to server chat

Add a download feature next to the delete chat feature in the server vue chat interface.

* code style

---------

Co-authored-by: Xuan Son Nguyen <redacted>
7 months agoserver : add speculative decoding support (#10455)
Georgi Gerganov [Mon, 25 Nov 2024 14:31:38 +0000 (16:31 +0200)]
server : add speculative decoding support (#10455)

* server : add speculative decoding support

ggml-ci

* server : add helper function slot.can_speculate()

ggml-ci

7 months agoggml : add support for dynamic loading of backends (#10469)
Diego Devesa [Mon, 25 Nov 2024 14:13:39 +0000 (15:13 +0100)]
ggml : add support for dynamic loading of backends (#10469)

* ggml : add support for dynamic loading of backends

---------

Co-authored-by: Georgi Gerganov <redacted>
7 months agotests : fix compile warning
Georgi Gerganov [Mon, 25 Nov 2024 13:17:32 +0000 (15:17 +0200)]
tests : fix compile warning

7 months agometal : minor code formatting
Georgi Gerganov [Mon, 25 Nov 2024 13:08:04 +0000 (15:08 +0200)]
metal : minor code formatting

7 months ago[SYCL] Fix building Win package for oneAPI 2025.0 update (#10483)
Neo Zhang Jianyu [Mon, 25 Nov 2024 09:31:10 +0000 (17:31 +0800)]
[SYCL] Fix building Win package for oneAPI 2025.0 update (#10483)

* fix build package for 2025.0

* debug

* debug

* fix

* rm debug

---------

Co-authored-by: arthw <redacted>
7 months agospeculative : refactor and add a simpler example (#10362)
Georgi Gerganov [Mon, 25 Nov 2024 07:58:41 +0000 (09:58 +0200)]
speculative : refactor and add a simpler example (#10362)

* speculative : refactor and add a simpler example

ggml-ci

* speculative : clean-up and add comments and TODOs [no ci]

* speculative : manage context in common_speculative

ggml-ci

* speculative : simplify

ggml-ci

* speculative : simplify (cont)

ggml-ci

* speculative : add --draft-min CLI arg

* speculative : minor fixup

* make : build fixes

* speculative : do not redraft previous drafts

ggml-ci

* speculative : fix the draft sampling

ggml-ci

* speculative : fix compile warning

* common : refactor args

ggml-ci

* common : change defaults [no ci]

* common : final touches

ggml-ci

7 months agoflake.lock: Update (#10470)
Georgi Gerganov [Sun, 24 Nov 2024 16:03:25 +0000 (18:03 +0200)]
flake.lock: Update (#10470)

Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/5e4fbfb6b3de1aa2872b76d49fafc942626e2add?narHash=sha256-OZiZ3m8SCMfh3B6bfGC/Bm4x3qc1m2SVEAlkV6iY7Yg%3D' (2024-11-15)
  → 'github:NixOS/nixpkgs/23e89b7da85c3640bbc2173fe04f4bd114342367?narHash=sha256-y/MEyuJ5oBWrWAic/14LaIr/u5E0wRVzyYsouYY3W6w%3D' (2024-11-19)

Co-authored-by: github-actions[bot] <redacted>
7 months agollama : fix op mul check with command-r-plus (#10476)
Diego Devesa [Sun, 24 Nov 2024 15:10:26 +0000 (16:10 +0100)]
llama : fix op mul check with command-r-plus (#10476)

7 months agoconvert : XLMRoberta Type Vocab Size (#10458)
Gabe Goodhart [Sun, 24 Nov 2024 09:02:34 +0000 (02:02 -0700)]
convert : XLMRoberta Type Vocab Size (#10458)

This matches the key used in common BERT-based embedding models, and it may
have a value other than 1.

Branch: XLMRobertaTypeVocabSize

Signed-off-by: Gabe Goodhart <redacted>
7 months agofix gguf-py: Conversion error when multiple licenses are configured (#9807)
momonga [Sun, 24 Nov 2024 00:09:22 +0000 (09:09 +0900)]
fix gguf-py: Conversion error when multiple licenses are configured (#9807)

* fix general.license list to str

* fix join license list

---------

Co-authored-by: momonga <redacted>
7 months agoggml : do not use ARM features not included in the build (#10457)
Diego Devesa [Sat, 23 Nov 2024 13:41:12 +0000 (14:41 +0100)]
ggml : do not use ARM features not included in the build (#10457)

7 months agoci: Update oneAPI runtime dll packaging (#10428)
蕭澧邦 [Fri, 22 Nov 2024 09:44:08 +0000 (17:44 +0800)]
ci: Update oneAPI runtime dll packaging (#10428)

These are the minimum runtime dll dependencies for oneAPI 2025.0

7 months agoGitHub: ask for more info in issue templates (#10426)
Johannes Gäßler [Fri, 22 Nov 2024 07:32:40 +0000 (08:32 +0100)]
GitHub: ask for more info in issue templates (#10426)

* GitHub: ask for more info in issues [no ci]

* refactor issue templates to be component-specific

* more understandable issue description

* add dropdown for llama.cpp module

7 months agoCANN: Support Ascend310P to accelerate F32 and F16 Model (#10216)
leo-pony [Fri, 22 Nov 2024 06:07:20 +0000 (14:07 +0800)]
CANN: Support Ascend310P to accelerate F32 and F16 Model (#10216)

* CANN Support Ascend310P to accelerate F32 and F16 Model

* Add compile option soc type macro ASCEND_310P to ggml-cann lib

* Remove unused code

* Remove the ascend soc_type hard code compile option in CMakelist.txt

7 months agocuda : optimize argmax (#10441)
Diego Devesa [Thu, 21 Nov 2024 17:18:50 +0000 (18:18 +0100)]
cuda : optimize argmax (#10441)

* cuda : optimize argmax

* remove unused parameter

ggml-ci

* fixup : use full warps

ggml-ci

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <redacted>
* fix ub

* ggml : check ne00 <= INT32_MAX in argmax and argsort

---------

Co-authored-by: Johannes Gäßler <redacted>
7 months agollama : handle KV shift for recurrent models (#10402)
Georgi Gerganov [Thu, 21 Nov 2024 08:22:47 +0000 (10:22 +0200)]
llama : handle KV shift for recurrent models (#10402)

ggml-ci

7 months agosync : ggml
Georgi Gerganov [Thu, 21 Nov 2024 07:22:11 +0000 (09:22 +0200)]
sync : ggml

7 months agoggml/sched : do not skip views in pre-assignments
slaren [Wed, 20 Nov 2024 12:25:08 +0000 (13:25 +0100)]
ggml/sched : do not skip views in pre-assignments

7 months agoggml-opt: fix data corruption (ggml/1022)
Johannes Gäßler [Wed, 20 Nov 2024 13:56:04 +0000 (14:56 +0100)]
ggml-opt: fix data corruption (ggml/1022)

7 months agovulkan: predicate max operation in soft_max shaders (#10437)
Jeff Bolz [Wed, 20 Nov 2024 19:47:36 +0000 (13:47 -0600)]
vulkan: predicate max operation in soft_max shaders (#10437)

Fixes #10434

7 months agocmake: add link dependencies to cmake find pkg (#10433)
bandoti [Wed, 20 Nov 2024 16:22:19 +0000 (12:22 -0400)]
cmake: add link dependencies to cmake find pkg (#10433)

* cmake pkg: find accelerate, openmp, memkind libs

* cmake pkg: find BLAS libs

* try BLAS_LIBRARIES instead

* Add BLAS link opts

* Add more link deps. and set GGML_ vars

7 months agollama : add .clang-format file (#10415)
Diego Devesa [Wed, 20 Nov 2024 11:57:53 +0000 (12:57 +0100)]
llama : add .clang-format file (#10415)

7 months agovulkan: copy iq4_nl LUT into shared memory (#10409)
Jeff Bolz [Wed, 20 Nov 2024 07:40:18 +0000 (01:40 -0600)]
vulkan: copy iq4_nl LUT into shared memory (#10409)

7 months agovulkan: further optimize mul_mat_vec using larger loads (#10387)
Jeff Bolz [Wed, 20 Nov 2024 07:11:00 +0000 (01:11 -0600)]
vulkan: further optimize mul_mat_vec using larger loads (#10387)

* vulkan: Use pipeline_robustness to disable robustness in mul_mat_vec.

Add some early returns for nonexistent rows in mul_mat_vec shaders. These
can only be hit when dispatching a 2D grid of workgroups. Fix the logic
for the 2D grid of workgroups to round up.

Enable the pipeline robustness extension if it's available, and use it to
disable robustness for these pipelines. The instructions to do the bounds
checking contend for the same ALU resources as the bit twiddling dequant
instructions.

* vulkan: Add GLSL structure aliases for quant types to allow larger loads

In Vulkan it's not possible to cast pointer types, so instead you have to
declare an aliased binding for the memory with a different type. This
commit adds aliases for the quant formats using 16b ints, and in a few
places where the struct size is a multiple of 4 also using 32b ints.
Currently only q4_k's aliases are used, but others will be used in
subsequent commits.

* vulkan: use larger loads in q5_k and q6_k shaders.

Similar to the optimization I did in q4_k recently, this vectorizes some loads
and reduces the number of bit twiddling instructions.

* vulkan: use larger K step per iteration in mul_mat_vec.

Add vec4 dequantization functions, and use them to do K=8 per iteration in
mul_mat_vec. This uses 16b loads for the quant values and 128b loads for B
which helps reduce the load on the memory system.

The K_PER_ITER==2 logic is still there, just for F16/F32, and really only
because they support unaligned sizes.

Tweak the num_iters/unrolling logic to be simpler and catch a couple missed
unrolling opportunities.
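
Regarding the round-up of the 2D workgroup grid in the first bullet above, a
minimal host-side sketch (hypothetical names, not the actual dispatch code):

    #include <cstdint>

    static uint32_t ceil_div(uint32_t a, uint32_t b) {
        return (a + b - 1) / b; // round up so no rows are left uncovered
    }

    // dispatch ceil_div(n_rows, rows_per_wg) groups in the row dimension;
    // because the last group may be partial, the shader needs the early
    // return described above:
    //     if (row >= n_rows) return;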

7 months agoupdate rel to 4040 (#10395)
Neo Zhang Jianyu [Wed, 20 Nov 2024 05:54:25 +0000 (13:54 +0800)]
update rel to 4040 (#10395)

Co-authored-by: arthw <redacted>
7 months agoFix missing file renames in Makefile due to changes in commit ae8de6d50a (#10413)
Anthony Van de Gejuchte [Tue, 19 Nov 2024 22:18:17 +0000 (23:18 +0100)]
Fix missing file renames in Makefile due to changes in commit ae8de6d50a (#10413)

7 months agoadd cmake rvv support (#10411)
haopeng [Tue, 19 Nov 2024 20:10:31 +0000 (04:10 +0800)]
add cmake rvv support (#10411)

7 months agosync : ggml
Georgi Gerganov [Tue, 19 Nov 2024 17:15:50 +0000 (19:15 +0200)]
sync : ggml

7 months agometal : fix offset integer overflows in im2col (ggml/1015)
Plamen Minev [Mon, 18 Nov 2024 13:02:27 +0000 (15:02 +0200)]
metal : fix offset integer overflows in im2col (ggml/1015)

-- While running StableDiffusion.cpp locally with Metal, some offsets overflow, resulting in incorrect calculations
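
A sketch of the overflow pattern involved (illustrative, not the actual
kernel): computing an offset as a product of 32-bit ints can wrap for large
tensors, so one operand is widened to 64 bits before the multiply.

    #include <cstdint>

    int64_t safe_offset(int32_t row, int32_t stride, int32_t col) {
        // (int64_t) row * stride promotes the multiply to 64 bits;
        // the buggy form, row * stride + col, overflows in 32 bits first
        return (int64_t) row * stride + col;
    }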

7 months agometal : add `GGML_UNARY_OP_ELU` kernel (ggml/1018)
PAB [Mon, 18 Nov 2024 09:02:49 +0000 (10:02 +0100)]
metal : add `GGML_UNARY_OP_ELU` kernel (ggml/1018)

7 months agocmake: force MSVC compiler charset to utf-8 (#9989)
蕭澧邦 [Tue, 19 Nov 2024 17:42:00 +0000 (01:42 +0800)]
cmake: force MSVC compiler charset to utf-8 (#9989)

7 months agoAdd required ggml-base and backend libs to cmake pkg (#10407)
bandoti [Tue, 19 Nov 2024 16:10:30 +0000 (12:10 -0400)]
Add required ggml-base and backend libs to cmake pkg (#10407)

7 months agocuda : fix CUDA_FLAGS not being applied (#10403)
Diego Devesa [Tue, 19 Nov 2024 13:29:38 +0000 (14:29 +0100)]
cuda : fix CUDA_FLAGS not being applied (#10403)

7 months agollama : add check for KV cache shifts (#10401)
Georgi Gerganov [Tue, 19 Nov 2024 11:29:26 +0000 (13:29 +0200)]
llama : add check for KV cache shifts (#10401)

ggml-ci

7 months agollama : add OLMo November 2024 support (#10394)
Shane A [Tue, 19 Nov 2024 09:04:08 +0000 (01:04 -0800)]
llama : add OLMo November 2024 support (#10394)

* Add OLMo November 2024 constants

* Add OLMo November 2024 converter

* Add loading of OLMo November 2024 tensors and hyperparameters

* Add building of OLMo November 2024 model

7 months agosycl : Add option to set the SYCL architecture for all targets (#10266)
Romain Biessy [Tue, 19 Nov 2024 08:02:23 +0000 (09:02 +0100)]
sycl : Add option to set the SYCL architecture for all targets (#10266)

* Add option to set the SYCL architecture for all targets
* Convert GGML_SYCL_HIP_TARGET to the more generic GGML_SYCL_ARCH option
* Document that setting GGML_SYCL_ARCH can improve the performance

7 months agovulkan: Optimize soft_max (#10301)
Jeff Bolz [Tue, 19 Nov 2024 07:25:17 +0000 (01:25 -0600)]
vulkan: Optimize soft_max (#10301)

* vulkan: Optimize soft_max

Large soft_max could already saturate memory, but small/medium sizes were
pretty slow. The bulk of the gains for them comes from using a smaller
workgroup size, and making the workgroup size match the subgroup size also
makes the barriers much cheaper.

Cache some values in locals to avoid refetching/recomputing. And stamp
out a few "template instantiations" so smaller cases will fully unroll.

Add a missing early return for OOB rows. This happens when there are more
than 512 rows and the dispatch is 512 x H.

* vulkan: Further soft_max optimizations

Restore the workgroup size of 512 case, use it for >1024.

Use unrollable loops for more iteration counts.
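
On the "template instantiations" point above, a hedged C++ analogy of the
technique (the shaders themselves are GLSL; names here are hypothetical):
making the iteration count a compile-time constant lets small cases unroll
fully.

    #include <algorithm>

    template <int N>
    float row_max_fixed(const float * x) {
        float m = x[0];
        for (int i = 1; i < N; ++i) {   // N is known at compile time,
            m = std::max(m, x[i]);      // so the compiler can fully unroll
        }
        return m;
    }

    // stamp out a few common sizes and pick one at dispatch time:
    //     row_max_fixed<32>, row_max_fixed<64>, ...
    // falling back to a runtime-N loop for everything else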

7 months agosycl: Revert MUL_MAT_OP support changes (#10385)
Alberto Cabrera Pérez [Tue, 19 Nov 2024 00:50:04 +0000 (00:50 +0000)]
sycl: Revert MUL_MAT_OP support changes (#10385)

7 months agocuda : only use native when supported by cmake (#10389)
Diego Devesa [Mon, 18 Nov 2024 17:43:40 +0000 (18:43 +0100)]
cuda : only use native when supported by cmake (#10389)

7 months agoSkip searching root path for cross-compile builds (#10383)
bandoti [Mon, 18 Nov 2024 15:23:58 +0000 (11:23 -0400)]
Skip searching root path for cross-compile builds (#10383)

7 months agovulkan: remove use of null initializer (#10372)
Jeff Bolz [Mon, 18 Nov 2024 14:28:42 +0000 (08:28 -0600)]
vulkan: remove use of null initializer (#10372)

Seems like this isn't working for vulkan-over-metal when the array is sized
by a spec constant. Maybe a spirv-cross limitation?

7 months agoflake.lock: Update (#10346)
Georgi Gerganov [Mon, 18 Nov 2024 14:08:20 +0000 (16:08 +0200)]
flake.lock: Update (#10346)

Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/4aa36568d413aca0ea84a1684d2d46f55dbabad7?narHash=sha256-Zwl8YgTVJTEum%2BL%2B0zVAWvXAGbWAuXHax3KzuejaDyo%3D' (2024-11-05)
  → 'github:NixOS/nixpkgs/5e4fbfb6b3de1aa2872b76d49fafc942626e2add?narHash=sha256-OZiZ3m8SCMfh3B6bfGC/Bm4x3qc1m2SVEAlkV6iY7Yg%3D' (2024-11-15)

Co-authored-by: github-actions[bot] <redacted>
7 months agoVulkan: Fix device info output format specifiers (#10366)
0cc4m [Mon, 18 Nov 2024 10:02:43 +0000 (11:02 +0100)]
Vulkan: Fix device info output format specifiers (#10366)

* Vulkan: Fix device info output format specifiers

* Vulkan: Use zu printf specifier for size_t instead of ld

7 months agodocker: use GGML_NATIVE=OFF (#10368)
Johannes Gäßler [Sun, 17 Nov 2024 23:21:53 +0000 (00:21 +0100)]
docker: use GGML_NATIVE=OFF (#10368)

7 months agoCUDA: fix MMV kernel being used for FP16 src1 (#10357)
Johannes Gäßler [Sun, 17 Nov 2024 22:20:42 +0000 (23:20 +0100)]
CUDA: fix MMV kernel being used for FP16 src1 (#10357)

7 months agoCMake: fix typo in comment [no ci] (#10360)
Johannes Gäßler [Sun, 17 Nov 2024 11:59:38 +0000 (12:59 +0100)]
CMake: fix typo in comment [no ci] (#10360)

7 months agollama : only use default buffer types for the KV cache (#10358)
Diego Devesa [Sun, 17 Nov 2024 11:25:45 +0000 (12:25 +0100)]
llama : only use default buffer types for the KV cache (#10358)

7 months agogitignore : ignore local run scripts [no ci]
Georgi Gerganov [Sun, 17 Nov 2024 11:12:22 +0000 (13:12 +0200)]
gitignore : ignore local run scripts [no ci]

7 months agometal : refactor kernel args into structs (#10238)
Georgi Gerganov [Sun, 17 Nov 2024 09:23:01 +0000 (11:23 +0200)]
metal : refactor kernel args into structs (#10238)

* metal : add kernel arg structs (wip)

* metal : fattn args

ggml-ci

* metal : cont + avoid potential int overflow [no ci]

* metal : mul mat struct (wip)

* cont : mul mat vec

* cont : pass by reference

* cont : args is first argument

* cont : use char ptr

* cont : shmem style

* cont : thread counters style

* cont : mul mm id

ggml-ci

* cont : int safety + register optimizations

ggml-ci

* metal : GGML_OP_CONCAT

ggml-ci

* metal : GGML_OP_ADD, GGML_OP_SUB, GGML_OP_MUL, GGML_OP_DIV

* metal : GGML_OP_REPEAT

* metal : GGML_OP_CPY

* metal : GGML_OP_RMS_NORM

* metal : GGML_OP_NORM

* metal : add TODOs for rest of ops

* ggml : add ggml-metal-impl.h

ggml-ci
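
The idea of the refactor, sketched as a plain C++ struct (field names are
hypothetical, following the ggml naming scheme): many scalar kernel arguments
are packed into one struct passed as a single buffer, instead of being bound
one by one.

    #include <cstdint>

    struct kargs_rms_norm {
        int64_t  ne00;  // row length
        int64_t  ne01;  // number of rows
        uint64_t nb01;  // row stride in bytes
        float    eps;   // epsilon for the normalization
    };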

7 months agoggml : fix undefined reference to 'getcpu' (#10354)
FirstTimeEZ [Sun, 17 Nov 2024 08:39:22 +0000 (21:39 +1300)]
ggml : fix undefined reference to 'getcpu' (#10354)

https://github.com/ggerganov/llama.cpp/issues/10352

7 months agoCUDA: remove DMMV, consolidate F16 mult mat vec (#10318)
Johannes Gäßler [Sun, 17 Nov 2024 08:09:55 +0000 (09:09 +0100)]
CUDA: remove DMMV, consolidate F16 mult mat vec (#10318)

7 months agoCMake: default to -arch=native for CUDA build (#10320)
Johannes Gäßler [Sun, 17 Nov 2024 08:06:34 +0000 (09:06 +0100)]
CMake: default to -arch=native for CUDA build (#10320)

7 months agoggml : fix possible buffer use after free in sched reserve (#9930)
Diego Devesa [Sun, 17 Nov 2024 06:31:17 +0000 (07:31 +0100)]
ggml : fix possible buffer use after free in sched reserve (#9930)

7 months agoggml : inttypes.h -> cinttypes (#0)
Georgi Gerganov [Sat, 16 Nov 2024 21:40:39 +0000 (23:40 +0200)]
ggml : inttypes.h -> cinttypes (#0)

ggml-ci

7 months agoggml : adapt AMX to tensor->grad removal (#0)
Georgi Gerganov [Sat, 16 Nov 2024 19:38:01 +0000 (21:38 +0200)]
ggml : adapt AMX to tensor->grad removal (#0)

ggml-ci

7 months agomake : add ggml-opt (#0)
Georgi Gerganov [Sat, 16 Nov 2024 19:35:31 +0000 (21:35 +0200)]
make : add ggml-opt (#0)

ggml-ci

7 months agotests : remove test-grad0
Georgi Gerganov [Sat, 16 Nov 2024 19:34:03 +0000 (21:34 +0200)]
tests : remove test-grad0

7 months agoggml : fix compile warnings (#0)
Georgi Gerganov [Sat, 16 Nov 2024 19:32:41 +0000 (21:32 +0200)]
ggml : fix compile warnings (#0)

ggml-ci

7 months agoggml: new optimization interface (ggml/988)
Johannes Gäßler [Sat, 16 Nov 2024 20:17:59 +0000 (22:17 +0200)]
ggml: new optimization interface (ggml/988)

7 months agoscripts : update sync
Georgi Gerganov [Sat, 16 Nov 2024 20:16:04 +0000 (22:16 +0200)]
scripts : update sync

7 months agodocs : vulkan build instructions to use git bash mingw64 (#10303)
FirstTimeEZ [Sat, 16 Nov 2024 23:29:18 +0000 (12:29 +1300)]
docs : vulkan build instructions to use git bash mingw64 (#10303)

7 months agollama/ex: remove --logdir argument (#10339)
Johannes Gäßler [Sat, 16 Nov 2024 22:00:41 +0000 (23:00 +0100)]
llama/ex: remove --logdir argument (#10339)

7 months agollamafile : fix include path (#0)
Georgi Gerganov [Sat, 16 Nov 2024 15:58:56 +0000 (17:58 +0200)]
llamafile : fix include path (#0)

ggml-ci

7 months agomake : auto-determine dependencies (#0)
Georgi Gerganov [Sat, 16 Nov 2024 15:58:32 +0000 (17:58 +0200)]
make : auto-determine dependencies (#0)

7 months agoserver: (web UI) Add samplers sequence customization (#10255)
MaggotHATE [Sat, 16 Nov 2024 13:26:54 +0000 (18:26 +0500)]
server: (web UI) Add samplers sequence customization (#10255)

* Samplers sequence: simplified and input field.

* Removed unused function

* Modify and use `settings-modal-short-input`

* rename "name" --> "label"

---------

Co-authored-by: Xuan Son Nguyen <redacted>
7 months agoscripts : fix missing key in compare-llama-bench.py (#10332)
Georgi Gerganov [Sat, 16 Nov 2024 08:32:50 +0000 (10:32 +0200)]
scripts : fix missing key in compare-llama-bench.py (#10332)

7 months agovulkan: Optimize some mat-vec mul quant shaders (#10296)
Jeff Bolz [Sat, 16 Nov 2024 06:26:57 +0000 (00:26 -0600)]
vulkan: Optimize some mat-vec mul quant shaders (#10296)

Compute two result elements per workgroup (for Q{4,5}_{0,1}). This reuses
the B loads across the rows and also reuses some addressing calculations.
This required manually partially unrolling the loop, since the compiler
is less willing to unroll outer loops.

Add bounds-checking on the last iteration of the loop. I think this was at
least partly broken before.

Optimize the Q4_K shader to vectorize most loads and reduce the number of
bit twiddling instructions.
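
A scalar C++ sketch of the "two result elements" idea above (illustrative
only; the real code is a GLSL shader operating on quantized blocks): each
load of b[i] is reused for two output rows.

    void mat_vec_x2(const float * a, const float * b, float * y,
                    int n_rows, int n_cols) {
        // odd trailing row omitted for brevity
        for (int r = 0; r + 1 < n_rows; r += 2) {
            float s0 = 0.0f, s1 = 0.0f;
            for (int i = 0; i < n_cols; ++i) {
                const float bi = b[i]; // one load of B, used by both rows
                s0 += a[(r + 0) * n_cols + i] * bi;
                s1 += a[(r + 1) * n_cols + i] * bi;
            }
            y[r + 0] = s0;
            y[r + 1] = s1;
        }
    }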

7 months agovulkan : add cmake preset debug/release (#10306)
FirstTimeEZ [Sat, 16 Nov 2024 01:59:33 +0000 (14:59 +1300)]
vulkan : add cmake preset debug/release (#10306)

7 months agoggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)
Dan Johansson [Sat, 16 Nov 2024 00:53:37 +0000 (01:53 +0100)]
ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)

7 months agollama : save number of parameters and the size in llama_model (#10286)
FirstTimeEZ [Sat, 16 Nov 2024 00:42:13 +0000 (13:42 +1300)]
llama : save number of parameters and the size in llama_model (#10286)

fixes #10285

7 months agoMake updates to fix issues with clang-cl builds while using AVX512 flags (#10314)
Srihari-mcw [Fri, 15 Nov 2024 21:27:00 +0000 (02:57 +0530)]
Make updates to fix issues with clang-cl builds while using AVX512 flags (#10314)

7 months agoscripts: update compare-llama-bench.py (#10319)
Johannes Gäßler [Fri, 15 Nov 2024 20:19:03 +0000 (21:19 +0100)]
scripts: update compare-llama-bench.py (#10319)

7 months agoggml : fix some build issues
slaren [Fri, 15 Nov 2024 19:20:54 +0000 (20:20 +0100)]
ggml : fix some build issues

7 months agocmake : fix ppc64 check (whisper/0)
Georgi Gerganov [Fri, 15 Nov 2024 13:35:22 +0000 (15:35 +0200)]
cmake : fix ppc64 check (whisper/0)

ggml-ci

7 months agoggml : vulkan logs (whisper/2547)
thewh1teagle [Fri, 15 Nov 2024 13:33:53 +0000 (15:33 +0200)]
ggml : vulkan logs (whisper/2547)

7 months agosync : ggml
Georgi Gerganov [Fri, 15 Nov 2024 13:31:16 +0000 (15:31 +0200)]
sync : ggml

7 months agoAVX BF16 and single scale quant optimizations (#10212)
Eve [Fri, 15 Nov 2024 11:47:58 +0000 (11:47 +0000)]
AVX BF16 and single scale quant optimizations (#10212)

* use 128-bit loads (I've tried 256->128 to death and it's slower)

* double accumulator

* avx bf16 vec dot

* +3% q4_0 inference

* +7% tg +5% pp compared to master

* slower f16c version, kept for reference

* 256b version, also slow. I tried :)

* revert f16

* faster with madd

* split to functions

* Q8_0 and IQ4_NL, 5-7% faster

* fix potential overflow (performance reduced)

* 16 bit add for q4_0 only

* merge
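
A hedged sketch of the "double accumulator" bullet above (plain AVX2/FMA on
f32 rather than the quantized BF16 path; assumes n is a multiple of 16 and
compilation with -mavx2 -mfma): two independent FMA chains hide the
fused-multiply-add latency.

    #include <immintrin.h>

    float dot_two_acc(const float * a, const float * b, int n) {
        __m256 acc0 = _mm256_setzero_ps();
        __m256 acc1 = _mm256_setzero_ps();
        for (int i = 0; i < n; i += 16) {
            acc0 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i),
                                   _mm256_loadu_ps(b + i), acc0);
            acc1 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i + 8),
                                   _mm256_loadu_ps(b + i + 8), acc1);
        }
        // horizontal sum of the two accumulators
        __m256 acc = _mm256_add_ps(acc0, acc1);
        __m128 s   = _mm_add_ps(_mm256_castps256_ps128(acc),
                                _mm256_extractf128_ps(acc, 1));
        s = _mm_hadd_ps(s, s);
        s = _mm_hadd_ps(s, s);
        return _mm_cvtss_f32(s);
    }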