git.djapps.eu Git - pkg/ggml/sources/ggml/log
Georgi Gerganov [Fri, 15 Nov 2024 19:46:11 +0000 (21:46 +0200)]
sync : llama.cpp
ggml-ci
slaren [Fri, 15 Nov 2024 19:20:54 +0000 (20:20 +0100)]
ggml : fix some build issues
Georgi Gerganov [Fri, 15 Nov 2024 19:43:41 +0000 (21:43 +0200)]
sync : leftovers (#0)
ggml-ci
Georgi Gerganov [Fri, 15 Nov 2024 19:38:15 +0000 (21:38 +0200)]
test-dup : minor fix
ggml-ci
Georgi Gerganov [Fri, 15 Nov 2024 19:35:51 +0000 (21:35 +0200)]
cmake : restore CMakeLists.txt (llama/10256)
ggml-ci
Eve [Fri, 15 Nov 2024 11:47:58 +0000 (11:47 +0000)]
AVX BF16 and single scale quant optimizations (llama/10212)
* use 128-bit loads (I've tried 256->128 to death and it's slower)
* double accumulator
* avx bf16 vec dot (a scalar reference sketch follows below)
* +3% q4_0 inference
* +7% tg +5% pp compared to master
* slower f16c version, kept for reference
* 256-bit version, also slow. I tried :)
* revert f16
* faster with madd
* split to functions
* Q8_0 and IQ4_NL, 5-7% faster
* fix potential overflow (performance reduced)
* 16 bit add for q4_0 only
* merge
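As context for the "avx bf16 vec dot" step above, here is a minimal scalar sketch of what a bf16 dot product computes: bf16 is just the upper 16 bits of an IEEE-754 float, so conversion is a shift. The storage type and function names are illustrative, not the ggml API; the AVX kernels in the commit produce the same result with wider loads and accumulators.

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// Illustrative bf16 storage: the upper 16 bits of an IEEE-754 float.
using bf16_t = uint16_t;

static float bf16_to_f32(bf16_t h) {
    uint32_t bits = (uint32_t)h << 16;   // place the bf16 bits in the high half
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// Scalar reference dot product; the SIMD versions process many elements
// per instruction but must match this result.
static float bf16_dot(const bf16_t * x, const bf16_t * y, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        acc += bf16_to_f32(x[i]) * bf16_to_f32(y[i]);
    }
    return acc;
}
```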
Romain Biessy [Fri, 15 Nov 2024 03:09:12 +0000 (04:09 +0100)]
sycl: Use syclcompat::dp4a (llama/10267)
* sycl: Use syclcompat::dp4a
* Using the syclcompat version allows the compiler to optimize the operation with a native function (a scalar sketch of the dp4a semantics follows below)
* Update news section
* Update CI Windows oneAPI version to 2025.0
* Reword doc
* Call syclcompat::dp4a inside dpct::dp4a
This reverts commit 90cb61d692d61360b46954a1c7f780bd2e569b73.
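For readers unfamiliar with dp4a: it is a dot product of four packed 8-bit lanes plus a 32-bit accumulator. A minimal scalar sketch of the signed semantics, assuming int8 lanes packed into 32-bit words; this is illustrative, not the syclcompat signature.

```cpp
#include <cstdint>

// Scalar semantics of a signed dp4a: treat each 32-bit word as four packed
// int8 lanes, multiply lane-wise, and add the sum to a 32-bit accumulator.
static int32_t dp4a_ref(uint32_t a, uint32_t b, int32_t c) {
    int32_t acc = c;
    for (int i = 0; i < 4; ++i) {
        int8_t ai = (int8_t)((a >> (8 * i)) & 0xff);
        int8_t bi = (int8_t)((b >> (8 * i)) & 0xff);
        acc += (int32_t)ai * (int32_t)bi;
    }
    return acc;
}
```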
Charles Xu [Fri, 15 Nov 2024 00:28:50 +0000 (01:28 +0100)]
backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (llama/9921)
* backend-cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels
---------
Co-authored-by: Diego Devesa <redacted>
Diego Devesa [Thu, 14 Nov 2024 17:04:35 +0000 (18:04 +0100)]
ggml : build backends as libraries (llama/10256)
* ggml : build backends as libraries
---------
Signed-off-by: Xiaodong Ye <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: R0CKSTAR <redacted>
Georgi Gerganov [Fri, 15 Nov 2024 19:21:38 +0000 (21:21 +0200)]
scripts : update sync llama.cpp
Georgi Gerganov [Fri, 15 Nov 2024 13:26:50 +0000 (15:26 +0200)]
sync : whisper.cpp
Georgi Gerganov [Fri, 15 Nov 2024 07:04:34 +0000 (09:04 +0200)]
cmake : fix ppc64 check (whisper/0)
thewh1teagle [Wed, 13 Nov 2024 19:47:15 +0000 (21:47 +0200)]
ggml : vulkan logs (whisper/2547)
Georgi Gerganov [Wed, 13 Nov 2024 16:13:34 +0000 (18:13 +0200)]
sync : llama.cpp
ggml-ci
Alberto Cabrera Pérez [Wed, 13 Nov 2024 09:40:57 +0000 (09:40 +0000)]
sycl : Fixes to broken builds and test-backend-ops (llama/10257)
* Fixes broken build for the SYCL CUDA backend caused by non-explicit gemm call in outprod (merged in with RWKV6 in "Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acceleration" #10133)
* Marks permuted MUL_MAT as unsupported to be able to run test-backend-ops
* Fixes asserts in norm to fix debug builds.
Jeff Bolz [Wed, 13 Nov 2024 06:58:57 +0000 (00:58 -0600)]
vulkan: Optimize contiguous copies (llama/10254)
* tests: Fix memory bandwidth calculation for perf tests
Add a flops calculation for flash attention.
Add one GGML_OP_CPY perf test.
* vulkan: Optimize contiguous copies
Add a variant of the copy shader for when the tensors are contiguous. Avoid
the complex addressing calculations, and do four elements per invocation
to hide some other overhead.
Apply similar changes to the scale shader, since scale is always contiguous.
Add a "progress bar" for shader compiles.
Jeff Bolz [Mon, 11 Nov 2024 17:13:51 +0000 (11:13 -0600)]
vulkan: Throttle the number of shader compiles during the build step. (llama/10222)
Fixes #9582
Spawning too many concurrent copies of glslc leads to "Failed to create pipes"
errors on Linux. This change applies the same throttling we use for
multithreaded pipeline creation.
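A minimal sketch of the throttling idea: bound the number of in-flight compile jobs with a mutex and condition variable. The names and the limit are illustrative, not the actual ggml-vulkan code.

```cpp
#include <condition_variable>
#include <mutex>

static std::mutex              g_mutex;
static std::condition_variable g_cv;
static int                     g_active     = 0;
static const int               g_max_active = 16; // illustrative limit

// Run `job` only when fewer than g_max_active jobs are in flight.
template <typename Fn>
static void run_throttled(Fn job) {
    {
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] { return g_active < g_max_active; });
        ++g_active;
    }
    job(); // e.g. spawn one glslc process and wait for it
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        --g_active;
    }
    g_cv.notify_one();
}
```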
Georgi Gerganov [Mon, 11 Nov 2024 06:39:13 +0000 (08:39 +0200)]
metal : more precise Q*K in FA vec kernel (llama/10247)
Jeff Bolz [Sun, 10 Nov 2024 11:37:56 +0000 (05:37 -0600)]
vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (llama/10226)
Georgi Gerganov [Sat, 9 Nov 2024 09:53:13 +0000 (11:53 +0200)]
metal : reorder write loop in mul mat kernel + style (llama/10231)
* metal : reorder write loop
* metal : int -> short, style
ggml-ci
Georgi Gerganov [Sat, 9 Nov 2024 09:53:02 +0000 (11:53 +0200)]
metal : fix build and some more comments (llama/10229)
Georgi Gerganov [Sat, 9 Nov 2024 09:52:45 +0000 (11:52 +0200)]
metal : fix F32 accumulation in FA vec kernel (llama/10232)
Georgi Gerganov [Sat, 9 Nov 2024 09:21:49 +0000 (11:21 +0200)]
metal : hide debug messages from normal log
SXX [Sat, 9 Nov 2024 07:35:46 +0000 (15:35 +0800)]
ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213)
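The title is self-describing: when `ne` (the element count) is small, an integer division used to derive a chunk size (`dne`) can come out as zero and later be divided by. A hedged illustration of the kind of guard involved; the exact expression in the CUDA kernel may differ.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical chunking: if ne < nchunks the plain division yields 0, and a
// later "per-chunk" division by dne would fault. Clamping to at least 1
// avoids the zero divisor for small tensors.
static int64_t chunk_size(int64_t ne, int64_t nchunks) {
    return std::max<int64_t>(ne / nchunks, 1);
}
```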
amritahs-ibm [Sat, 9 Nov 2024 07:17:50 +0000 (12:47 +0530)]
ggml : optimize llamafile cpu matrix multiplication for ppc64le (llama/10156)
This change upstreams llamafile's cpu matrix
multiplication kernels for ppc64le using MMA
builtins for FP32 datatype.
This change results in a consistent 90%
improvement in input processing time, and 20%
to 80% improvement in output processing time,
across various batch sizes.
The patch is tested with Meta-Llama-3-8B,
Mistral-7B, Llama-2-7B-chat-hf models on an
IBM POWER10 machine.
Signed-off-by: Amrita H S <redacted>
Georgi Gerganov [Fri, 8 Nov 2024 19:59:46 +0000 (21:59 +0200)]
metal : opt-in compile flag for BF16 (llama/10218)
* metal : opt-in compile flag for BF16
ggml-ci
* ci : use BF16
ggml-ci
* swift : switch back to v12
* metal : has_float -> use_float
ggml-ci
* metal : fix BF16 check in MSL
ggml-ci
Georgi Gerganov [Fri, 8 Nov 2024 16:37:41 +0000 (18:37 +0200)]
metal : improve clarity (minor) (llama/10171)
Georgi Gerganov [Fri, 8 Nov 2024 11:47:22 +0000 (13:47 +0200)]
metal : optimize FA kernels (llama/10171)
* ggml : add ggml_flash_attn_ext_get_prec
* metal : use F16 precision in FA kernels
ggml-ci
* metal : minor clean-up
* metal : compile-guard bf16 FA kernels
ggml-ci
* build : remove obsolete compile flag [no ci]
* metal : prevent int overflows [no ci]
* cuda : disable BF16 FA
ggml-ci
* metal : fix BF16 requirement for FA kernels
ggml-ci
* make : clean-up [no ci]
Georgi Gerganov [Thu, 7 Nov 2024 21:00:50 +0000 (23:00 +0200)]
sync : llama.cpp
ggml-ci
Diego Devesa [Thu, 7 Nov 2024 17:16:08 +0000 (18:16 +0100)]
ggml : add ggml-cpu.h to the public headers (llama/10204)
snadampal [Thu, 7 Nov 2024 08:02:08 +0000 (02:02 -0600)]
fix q4_0_8_8 format for corrupted tokens issue (llama/10198)
Co-authored-by: EC2 Default User <redacted>
Zhiyuan Li [Thu, 7 Nov 2024 07:19:10 +0000 (18:19 +1100)]
Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acceleration (llama/10133)
* rwkv6: rename to wkv6
* rwkv6: support avx2 avx512 armv8 armv9
* rwkv6: update cuda file name
* rwkv6: rename params
* wkv on sycl
* sycl: add some ops
* sycl: Enhance OP support judgment
* wkv6: drop armv9 and transfer to GGML style
ggml-ci
* sync : ggml
* update the function to use appropriate types
* fix define error
* Update ggml/src/ggml-cpu.c
* add appropriate asserts
* move element-wise functions outside
* put the declaration outside the loop
* rewrite to be more inline with the common pattern for distributing threads
* use recommended way GGML_TENSOR_LOCALS
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Diego Devesa <redacted>
Co-authored-by: Plamen Minev <redacted>
Co-authored-by: Yuri Khrustalev <redacted>
Co-authored-by: Meng, Hengyu <redacted>
Georgi Gerganov [Wed, 6 Nov 2024 17:53:51 +0000 (19:53 +0200)]
metal : add BF16 support (llama/8439)
* ggml : add initial BF16 support
ggml-ci
* metal : add mul_mat_id BF16 support
ggml-ci
* metal : check for bfloat support on the Metal device
ggml-ci
* metal : better var names [no ci]
* metal : do not build bfloat kernels when not supported
ggml-ci
* metal : try to fix BF16 support check
ggml-ci
* metal : this should correctly check bfloat support
Diego Devesa [Wed, 6 Nov 2024 11:10:07 +0000 (12:10 +0100)]
metal : fix from ptr buffer name (llama/10189)
Georgi Gerganov [Wed, 6 Nov 2024 09:20:10 +0000 (11:20 +0200)]
ggml : adjust is_first_call init value (llama/10193)
ggml-ci
Georgi Gerganov [Wed, 6 Nov 2024 08:24:23 +0000 (10:24 +0200)]
metal : add quantized FA support (llama/10149)
* metal : add quantized FA (vec) support
ggml-ci
* metal : add quantized FA (non-vec) support
* metal : fix support check
ggml-ci
* metal : clean-up
* metal : clean-up (cont)
* metal : fix shared memory calc + reduce smem + comments
* metal : float-correctness
* metal : minor [no ci]
Diego Devesa [Mon, 4 Nov 2024 22:17:01 +0000 (23:17 +0100)]
ggml : fix arch check in bf16_to_fp32 (llama/10164)
Eve [Mon, 4 Nov 2024 22:06:31 +0000 (22:06 +0000)]
Q6_K AVX improvements (llama/10118)
* q6_k instruction reordering attempt
* better subtract method
* should be theoretically faster
small improvement with shuffle lut, likely because all loads are already done at that stage
* optimize bit fiddling
* handle -32 offset separately. bsums exists for a reason! (see the sketch below)
* use shift
* Update ggml-quants.c
* have to update CI macOS version to 13 as 12 doesn't work now. 13 is still x86
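The "-32 offset" bullet refers to folding the constant quant offset out of the inner loop: Q6_K quants are stored biased by +32, so the dot product can run on the raw quants and be corrected once using the precomputed sums of the other operand (the `bsums`). A scalar sketch of the identity, not the actual AVX kernel:

```cpp
#include <cstdint>
#include <cstddef>

// For quants stored as (q + 32):  sum_i (q_i - 32) * y_i
//                               = sum_i q_i * y_i  -  32 * sum_i y_i
// so the -32 correction needs only the precomputed sum of y (a "bsum"),
// not a per-element subtraction inside the hot loop.
static int32_t dot_with_offset(const uint8_t * q, const int8_t * y, size_t n, int32_t y_sum) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; ++i) {
        acc += (int32_t)q[i] * (int32_t)y[i];  // raw biased quants
    }
    return acc - 32 * y_sum;                   // apply the offset once
}
```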
Georgi Gerganov [Mon, 4 Nov 2024 19:08:57 +0000 (21:08 +0200)]
sync : llama.cpp
ggml-ci
Diego Devesa [Mon, 4 Nov 2024 19:06:58 +0000 (20:06 +0100)]
ggml : fix gelu tables initialization (llama/10172)
Diego Devesa [Mon, 4 Nov 2024 16:34:08 +0000 (17:34 +0100)]
ggml : fix q4xx mat mul, increase ggml_aligned_malloc alignment (llama/10167)
snadampal [Mon, 4 Nov 2024 15:08:33 +0000 (09:08 -0600)]
fix build break on arm64 linux (llama/10166)
This fixes the build break from the recent changes
to move the CPU backend to separate files
https://github.com/ggerganov/llama.cpp/pull/10144
Diego Devesa [Mon, 4 Nov 2024 12:10:23 +0000 (13:10 +0100)]
cuda : clear error after changing peer access (llama/10153)
Georgi Gerganov [Mon, 4 Nov 2024 11:49:34 +0000 (13:49 +0200)]
metal : simplify f16 and f32 dequant kernels (llama/0)
Georgi Gerganov [Mon, 4 Nov 2024 11:43:32 +0000 (13:43 +0200)]
metal : move dequantize templates to beginning of MSL source (llama/0)
leo-pony [Mon, 4 Nov 2024 11:08:22 +0000 (19:08 +0800)]
CANN: adjust backend registry refactor. (llama/10158)
remove buffer->iface.get_name used in CANN, as it was removed in the backend registry refactor PR.
slaren [Mon, 4 Nov 2024 11:43:54 +0000 (12:43 +0100)]
update tests and examples
Georgi Gerganov [Mon, 4 Nov 2024 08:50:56 +0000 (10:50 +0200)]
sync : llama.cpp
Diego Devesa [Sun, 3 Nov 2024 18:34:08 +0000 (19:34 +0100)]
ggml : move CPU backend to a separate file (llama/10144)
Georgi Gerganov [Sun, 3 Nov 2024 13:18:40 +0000 (15:18 +0200)]
metal : minor fixup in FA kernel (llama/10143)
* metal : minor fixup in FA kernel
ggml-ci
* metal : use the unrolled loop variable
* metal : remove unused var
Diego Devesa [Fri, 1 Nov 2024 22:50:59 +0000 (23:50 +0100)]
llama : add simple-chat example (llama/10124)
* llama : add simple-chat example
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Diego Devesa [Fri, 1 Nov 2024 22:48:26 +0000 (23:48 +0100)]
llama : use smart pointers for ggml resources (llama/10117)
Shupei Fan [Fri, 1 Nov 2024 18:33:14 +0000 (02:33 +0800)]
vulkan : improve ggml_vk_create_buffer error handling (llama/9898)
Georgi Gerganov [Fri, 1 Nov 2024 10:58:45 +0000 (12:58 +0200)]
ggml : remove ggml_scratch (llama/10121)
ggml-ci
Zhenwei Jin [Fri, 1 Nov 2024 03:09:59 +0000 (11:09 +0800)]
build: fix build error in Windows env with OneAPI setup (llama/10107)
Diego Devesa [Thu, 31 Oct 2024 21:54:23 +0000 (22:54 +0100)]
llama : fix buffer checks for mamba and rwk (llama/10111)
* llama : fix buffer checks for mamba and rwk
* llama : fix missing worst case flag during reserve
* cuda : fix supports_op for norm
* disable sched SET_CAUSE
Diego Devesa [Thu, 31 Oct 2024 10:40:59 +0000 (11:40 +0100)]
ggml : check tensor name lengths in gguf files (llama/10100)
Sergio López [Thu, 31 Oct 2024 09:09:52 +0000 (10:09 +0100)]
kompute: add mul_mat_q4_k shader (llama/10097)
This is a more or less direct translation from the Metal implementation
to GLSL.
Signed-off-by: Sergio Lopez <redacted>
Sergio López [Wed, 30 Oct 2024 16:01:52 +0000 (17:01 +0100)]
kompute: add backend registry / device interfaces (llama/10045)
Get in line with the other backends by supporting the newer
backend/device registry interfaces.
Signed-off-by: Sergio Lopez <redacted>
Diego Devesa [Wed, 30 Oct 2024 13:51:21 +0000 (14:51 +0100)]
ggml : fix memory leaks when loading invalid gguf files (llama/10094)
* ggml : fix gguf string leak when reading kv pairs fails
* ggml : avoid crashing with GGML_ABORT when the KV has an invalid type
* ggml : avoid crashing on failed memory allocations when loading a gguf file
xctan [Wed, 30 Oct 2024 07:00:40 +0000 (15:00 +0800)]
ggml : add Q4_0_8_8 RISC-V GEMV and GEMM kernels (llama/10029)
* ggml : RISC-V vector gemv for q4_0_8x8
* ggml : Added WIP rvv q4_0_8x8 gemm
* ggml : Added initial implementation of rvv gemm
* ggml : optimize gemm to avoid register spillover
* ggml : Fix GCC rvv load alignment issue
* ggml : Format gemm rvv code
* ggml : Fix a typo in RVV q4_0_8_8 GEMM
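Several entries in this log (the aarch64 online flow, this RISC-V port, and the earlier q4_0_8_8 fix) revolve around the Q4_0 format: blocks of 32 4-bit quants sharing one scale, dequantized as d * (q - 8). A scalar reference sketch, assuming the usual low/high-nibble packing; the actual block layout in ggml may pack differently.

```cpp
#include <cstdint>

// Scalar Q4_0 reference: a block of 32 weights shares one scale d; each
// weight is a 4-bit value q in [0,15] dequantized as d * (q - 8).
// Packing here (low nibble = element i, high nibble = element i+16) is the
// common convention; the SIMD GEMV/GEMM kernels only rearrange this layout.
static void dequant_q4_0_block(const uint8_t qs[16], float d, float out[32]) {
    for (int i = 0; i < 16; ++i) {
        int lo = (qs[i] & 0x0f) - 8;
        int hi = (qs[i] >> 4)   - 8;
        out[i]      = d * (float)lo;
        out[i + 16] = d * (float)hi;
    }
}
```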
Diego Devesa [Wed, 30 Oct 2024 01:01:23 +0000 (02:01 +0100)]
llama : refactor model loader with backend registry (llama/10026)
Changyeon Kim [Tue, 29 Oct 2024 08:52:56 +0000 (17:52 +0900)]
ggml: Add POOL2D OP for GPU acceleration to the Vulkan backend in the MobileVLM model. (llama/9763)
* ggml: Add POOL2D OP for GPU ACC to the Vulkan.
- The MobileVLM model now supports inference acceleration on the GPU by utilizing the Vulkan backend.
- A GGML_OP_POOL_2D shader has been added. (Pooling)
- The encoding performance of the CLIP model improved from 2.8s on the CPU to 0.7s on the GPU.
Signed-off-by: Changyeon Kim <redacted>
* [fix] Correct the incorrect order of the parameters.
fix casting to int.
Signed-off-by: Changyeon Kim <redacted>
---------
Signed-off-by: Changyeon Kim <redacted>
R0CKSTAR [Mon, 28 Oct 2024 09:02:48 +0000 (17:02 +0800)]
musa: workaround for Guilty Lockup in cleaning src0 (llama/10042)
Signed-off-by: Xiaodong Ye <redacted>
Georgi Gerganov [Mon, 4 Nov 2024 08:37:38 +0000 (10:37 +0200)]
scripts : update sync
Yuri Khrustalev [Sat, 2 Nov 2024 09:09:12 +0000 (05:09 -0400)]
cmake : make it possible to link ggml as an external lib (#1003)
Plamen Minev [Fri, 1 Nov 2024 14:55:10 +0000 (16:55 +0200)]
metal : fix minor string leaks (#1004)
Georgi Gerganov [Fri, 1 Nov 2024 08:23:32 +0000 (10:23 +0200)]
sync : whisper.cpp
Georgi Gerganov [Fri, 1 Nov 2024 08:23:05 +0000 (10:23 +0200)]
ggml : alloc ggml_contexts on the heap (whisper/2525)
Georgi Gerganov [Sat, 26 Oct 2024 06:44:39 +0000 (09:44 +0300)]
ggml : remove sync artifacts
ggml-ci
Ma Mingfei [Sat, 26 Oct 2024 06:43:40 +0000 (09:43 +0300)]
ggml : add AMX backend (llama/8998)
Georgi Gerganov [Sat, 26 Oct 2024 06:40:53 +0000 (09:40 +0300)]
sync : llama.cpp
Georgi Gerganov [Fri, 25 Oct 2024 19:26:15 +0000 (22:26 +0300)]
metal : support permuted matrix multiplications (llama/10033)
* metal : support permuted matrix multiplications
ggml-ci
* cont : use nb01 directly for row steps
ggml-ci
* cont : add comments [no ci]
* metal : minor refactor
* metal : minor
Johannes Gäßler [Thu, 24 Oct 2024 12:40:23 +0000 (14:40 +0200)]
CUDA: fix insufficient buffer clearing for MMQ (llama/10032)
Johannes Gäßler [Thu, 24 Oct 2024 09:09:36 +0000 (11:09 +0200)]
CUDA: fix MMQ for non-contiguous src0, add tests (llama/10021)
* CUDA: fix MMQ for non-contiguous src0, add tests
* revise test code
Georgi Gerganov [Sat, 26 Oct 2024 06:39:48 +0000 (09:39 +0300)]
scripts : fix sync scripts (amx)
bssrdf [Wed, 23 Oct 2024 18:34:00 +0000 (14:34 -0400)]
increase cuda_cpy block size (#996)
Co-authored-by: bssrdf <redacted>
Georgi Gerganov [Wed, 23 Oct 2024 14:27:08 +0000 (17:27 +0300)]
sync : llama.cpp
Jun Hee Yoo [Wed, 23 Oct 2024 10:33:45 +0000 (19:33 +0900)]
metal : add POOL2D and fix IM2COL (llama/9943)
* add pool_2d
Signed-off-by: Junhee Yoo <redacted>
* fix im2col and add unittest for N>=1024
Signed-off-by: Junhee Yoo <redacted>
* add tests for N % 1024 != 0
Signed-off-by: Junhee Yoo <redacted>
* remove trailing whitespaces
Signed-off-by: Junhee Yoo <redacted>
* apply suggestions
Signed-off-by: Junhee Yoo <redacted>
* apply more optimization
- original IM2COL kernel + _ext with MIN()
Signed-off-by: Junhee Yoo <redacted>
* apply review: change kernel name of pool_2d
Signed-off-by: Junhee Yoo <redacted>
* apply review
Signed-off-by: Junhee Yoo <redacted>
* fix more formatting and enhance readability
Signed-off-by: Junhee Yoo <redacted>
---------
Signed-off-by: Junhee Yoo <redacted>
leo-pony [Tue, 22 Oct 2024 08:16:01 +0000 (16:16 +0800)]
Adapt to dynamically loadable backends mechanism (llama/9970)
* [CANN] Adapt to dynamically loadable backends mechanism
* Fix bug: inference results are garbled in debug builds for LM models whose type is in the Q4_0 class
* Handle the review comments of this pull request
Georgi Gerganov [Mon, 21 Oct 2024 13:20:46 +0000 (16:20 +0300)]
ggml : add asserts for type conversion in fattn kernels (llama/9971)
ggml-ci
Radoslav Gerganov [Mon, 21 Oct 2024 10:35:40 +0000 (13:35 +0300)]
rpc : pack only RPC structs (llama/9959)
Neo Zhang Jianyu [Mon, 21 Oct 2024 06:26:09 +0000 (14:26 +0800)]
fix mul_mat_vec_q and *_vec_q error (llama/9939)
Co-authored-by: arthw <redacted>
Radoslav Gerganov [Fri, 18 Oct 2024 11:33:58 +0000 (14:33 +0300)]
rpc : backend refactoring (llama/9912)
* rpc : refactor backend
Use structs for RPC request/response messages
* rpc : refactor server
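"Pack only RPC structs" and "use structs for RPC request/response messages" refer to giving each RPC message a fixed, packed wire layout so it can be sent as raw bytes. A minimal sketch of the pattern with hypothetical message fields, not the actual ggml-rpc protocol:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical request/response pair: packing removes padding so the
// in-memory layout is exactly the wire layout; only message structs need it.
#pragma pack(push, 1)
struct rpc_msg_buffer_get_base_req {
    uint64_t remote_ptr;   // handle of the remote buffer
};
struct rpc_msg_buffer_get_base_rsp {
    uint64_t base_ptr;     // base address reported back by the server
};
#pragma pack(pop)

// Serializing a message is then just a byte copy of the struct.
static std::vector<uint8_t> serialize(const rpc_msg_buffer_get_base_req & req) {
    std::vector<uint8_t> buf(sizeof(req));
    std::memcpy(buf.data(), &req, sizeof(req));
    return buf;
}
```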
Ouadie EL FAROUKI [Fri, 18 Oct 2024 05:46:16 +0000 (06:46 +0100)]
Add SYCL Backend registry, device and Event Interfaces (llama/9705)
* implemented missing SYCL event APIs
* sycl : Added device and backend reg interfaces
* Restructured ggml-sycl.cpp
Ma Mingfei [Fri, 18 Oct 2024 05:34:36 +0000 (13:34 +0800)]
add amx kernel for gemm (llama/8998)
add intel amx isa detection
add vnni kernel for gemv cases
add vnni and amx kernel support for block_q8_0
code cleanup
fix packing B issue
enable openmp
fine tune amx kernel
switch to aten parallel pattern
add error message for nested parallelism
code cleanup
add f16 support in ggml-amx
add amx kernels for QK_K quant formats: Q4_K, Q5_K, Q6_K and IQ4_XS
update CMakeList
update README
fix some compilation warning
fix compiler warning when amx is not enabled
minor change
ggml-ci
move ggml_amx_init from ggml.c to ggml-amx/mmq.cpp
ggml-ci
update CMakeLists with -mamx-tile, -mamx-int8 and -mamx-bf16
ggml-ci
add amx as a ggml-backend
update header file, the old path for immintrin.h has changed to ggml-cpu-impl.h
minor change
update CMakeLists.txt
minor change
apply weight prepacking in set_tensor method in ggml-backend
fix compile error
ggml-ci
minor change
ggml-ci
update CMakeLists.txt
ggml-ci
add march dependency
minor change
ggml-ci
change ggml_backend_buffer_is_host to return false for amx backend
ggml-ci
fix supports_op
use device reg for AMX backend
ggml-ci
minor change
ggml-ci
minor change
fix rebase
set .buffer_from_host_ptr to be false for AMX backend
Diego Devesa [Thu, 17 Oct 2024 00:46:58 +0000 (02:46 +0200)]
vulkan : add backend registry / device interfaces (llama/9721)
* vulkan : add backend registry / device interfaces
* llama : print devices used on model load
Gilad S [Wed, 16 Oct 2024 23:34:22 +0000 (02:34 +0300)]
fix: allocating CPU buffer with size `0` (llama/9917)
Gilad S [Wed, 16 Oct 2024 22:36:51 +0000 (01:36 +0300)]
fix: use `vm_allocate` to allocate CPU backend buffer on macOS (llama/9875)
* fix: use `vm_allocate` to allocate CPU backend buffer on macOS
* fix: switch to `posix_memalign` to keep existing `free()` usages work
* feat: move `GGML_ALIGNED_MALLOC` to `ggml-backend-impl.h`, add support for `vm_allocate` on macOS
* style: formatting
* fix: move const outside of `#ifndef`
* style: formatting
* fix: unused var
* fix: transform `GGML_ALIGNED_MALLOC` and `GGML_ALIGNED_FREE` into functions and add them to `ggml-impl.h`
* fix: unused var
* fix: page align to `GGUF_DEFAULT_ALIGNMENT`
* fix: page align to `TENSOR_ALIGNMENT`
* fix: convert `TENSOR_ALIGNMENT` to a macro
* fix: increase page size to `32` on iOS
* fix: iOS page size
* fix: `hbw_posix_memalign` alignment
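The chain of fixes above turns the aligned-allocation macros into functions and settles on tensor/page alignment. A simplified sketch of such helpers using posix_memalign, with the macOS vm_allocate path omitted; the constant value and names are illustrative.

```cpp
#include <cstddef>
#include <stdlib.h>

// Illustrative tensor alignment; the real value lives in ggml's headers.
static const size_t TENSOR_ALIGNMENT = 32;

static void * aligned_malloc(size_t size) {
    if (size == 0) {
        return nullptr;           // see also the "size 0" CPU buffer fix above
    }
    void * ptr = nullptr;
    // posix_memalign requires a power-of-two alignment that is a multiple of
    // sizeof(void *); it returns 0 on success.
    if (posix_memalign(&ptr, TENSOR_ALIGNMENT, size) != 0) {
        return nullptr;
    }
    return ptr;
}

static void aligned_free(void * ptr) {
    free(ptr);                    // memory from posix_memalign is freed with free()
}
```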
Johannes Gäßler [Fri, 18 Oct 2024 07:24:44 +0000 (09:24 +0200)]
CUDA: fix 1D im2col, add tests (#993)
Daniel Bevenius [Wed, 16 Oct 2024 18:10:01 +0000 (20:10 +0200)]
ggml : remove redundant set of contexts used field (#978)
This commit removes the setting of the `used` field of the contexts in
the global state (g_state) in `ggml_init`.
The motivation for this change is that I believe that this additional
initialization might not be required after the changes in Commit
45fc4fed0b9fb5b1af4a8525cbebb95e11208732 ("sync : latest changes from
whisper.cpp"), which changed the initialization of the contexts field
from `{ 0 }` to `{ { 0 } }`:
```console
g_state = (struct ggml_state) {
- /*.contexts =*/ { 0 },
+ /*.contexts =*/ { { 0 } },
};
```
My understanding is that the `{0}` initialization might not have
zero-initialized all the nested fields in every array element because of
compiler differences, and might have been the reason for having the
explicit setting of the `used` fields to false.
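A minimal illustration of the two initializer forms discussed above, with a toy struct standing in for g_state; whether `{ 0 }` fully zero-initializes the nested members on every compiler is exactly the question the commit message hedges on.

```cpp
struct toy_context {
    bool used;
    // ... other fields ...
};

struct toy_state {
    toy_context contexts[4];
};

// Form used before the referenced change: one level of braces.
static toy_state g_state_a = { 0 };

// Form used after the change: explicit braces for the nested array element.
static toy_state g_state_b = { { 0 } };
```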
Georgi Gerganov [Wed, 16 Oct 2024 08:39:35 +0000 (11:39 +0300)]
tests : update type traits call (#0)
ggml-ci
Georgi Gerganov [Wed, 16 Oct 2024 08:28:53 +0000 (11:28 +0300)]
sync : llama.cpp
leo-pony [Wed, 16 Oct 2024 00:51:46 +0000 (08:51 +0800)]
Fix cann compilation error (llama/9891)
Fix CANN compilation error after merging llama.cpp support for dynamically loadable backends.
agray3 [Mon, 14 Oct 2024 00:49:08 +0000 (01:49 +0100)]
Vectorize load instructions in dmmv f16 CUDA kernel (llama/9816)
* Vectorize load instructions in dmmv f16 CUDA kernel
Replaces scalar with vector load instructions, which substantially
improves performance on NVIDIA HBM GPUs, e.g. gives a 1.27X overall
speedup for Meta-Llama-3-8B-Instruct-F16 BS1 inference evaluation on
H100 SXM 80GB HBM3. On GDDR GPUs, there is a slight (1.01X) speedup.
* addressed comment
* Update ggml/src/ggml-cuda/dmmv.cu
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Diego Devesa [Fri, 11 Oct 2024 13:34:45 +0000 (15:34 +0200)]
ggml : move more prints to the ggml log system (llama/9839)
* ggml : move more prints to the ggml log system
* show BLAS OpenMP warnings in all builds using debug print
Diego Devesa [Thu, 10 Oct 2024 18:14:55 +0000 (20:14 +0200)]
rpc : add backend registry / device interfaces (llama/9812)
* rpc : add backend registry / device interfaces
* llama : add llama_supports_rpc API
* ggml_backend_rpc_start_rpc_server -> ggml_backend_rpc_start_server
R0CKSTAR [Thu, 10 Oct 2024 18:10:37 +0000 (02:10 +0800)]
musa: add docker image support (llama/9685)
* mtgpu: add docker image support
Signed-off-by: Xiaodong Ye <redacted>
* mtgpu: enable docker workflow
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Diego Devesa [Tue, 8 Oct 2024 12:21:43 +0000 (14:21 +0200)]
ggml : fix BLAS with unsupported types (llama/9775)
* ggml : do not use BLAS with types without to_float
* ggml : return pointer from ggml_internal_get_type_traits to avoid unnecessary copies
* ggml : rename ggml_internal_get_type_traits -> ggml_get_type_traits
it's not really internal if everybody uses it
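A sketch of the support check described above: consult the type traits and decline BLAS for types that cannot be converted to float. ggml_get_type_traits returns a pointer after this change; the exact field name and call-site shape here are assumptions.

```cpp
#include "ggml.h"   // assumed to declare ggml_get_type_traits() and ggml_type

// Only offload a matmul to BLAS if the source type can be converted to f32
// (BLAS works on float buffers, so types without a to_float hook are skipped).
static bool blas_supports_type(enum ggml_type type) {
    const struct ggml_type_traits * traits = ggml_get_type_traits(type);
    return type == GGML_TYPE_F32 || (traits != nullptr && traits->to_float != nullptr);
}
```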
Diego Devesa [Mon, 7 Oct 2024 19:55:08 +0000 (21:55 +0200)]
ggml : add backend registry / device interfaces to BLAS backend (llama/9752)
* ggml : add backend registry / device interfaces to BLAS backend
* fix mmap usage when using host buffers