ggml : GPU-accelerated token generation (#1412)
author     Johannes Gäßler <redacted>
           Sat, 13 May 2023 13:38:36 +0000 (15:38 +0200)
committer  GitHub <redacted>
           Sat, 13 May 2023 13:38:36 +0000 (16:38 +0300)
commit     905d87b70aa189623d500a28602d7a3a755a4769
tree       11f0d435ecb7555734b14b7a8994e88772bf8190
parent     f954edda935a70a14cf0cc45ecc7fe7d60cf3e4b

* CUDA kernel for q4_0 dequantization + matrix-vector multiplication
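In spirit, the fused kernel avoids ever materializing the dequantized weight matrix: each thread unpacks q4_0 blocks on the fly and accumulates the dot product with the input vector directly. A minimal sketch, assuming the ggml q4_0 layout of the time (blocks of 32 weights, one fp32 scale d, quants packed two per byte and centered at 8); the kernel name, signature, and element ordering are illustrative, not the exact code from this commit:

    #include <cuda_runtime.h>
    #include <cstdint>

    #define QK4_0 32

    // Assumed q4_0 block layout: one fp32 scale, 32 4-bit quants (two per byte).
    typedef struct {
        float   d;              // scale
        uint8_t qs[QK4_0 / 2];  // packed 4-bit quants
    } block_q4_0;

    // One thread per output row: dequantize each block on the fly and
    // accumulate the dot product with the input vector y.
    static __global__ void dequantize_mul_mat_vec_q4_0(
            const block_q4_0 * x, const float * y, float * dst,
            const int nrows, const int ncols) {
        const int row = blockIdx.x*blockDim.x + threadIdx.x;
        if (row >= nrows) {
            return;
        }

        const int nblocks = ncols / QK4_0;

        float sum = 0.0f;
        for (int ib = 0; ib < nblocks; ++ib) {
            const block_q4_0 * b = &x[row*nblocks + ib];
            for (int j = 0; j < QK4_0/2; ++j) {
                // two quants per byte, centered at 8 (element order is illustrative)
                const int q0 = (b->qs[j] & 0x0F) - 8;
                const int q1 = (b->qs[j] >>   4) - 8;
                sum += b->d*q0 * y[ib*QK4_0 + 2*j + 0];
                sum += b->d*q1 * y[ib*QK4_0 + 2*j + 1];
            }
        }

        dst[row] = sum;
    }

The kernel in the commit additionally splits each row across multiple cooperating threads and combines their partial sums, which is presumably where the missing __syncthreads() mentioned below came in.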

* Added q4_1 support via a template
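The template approach can be pictured as follows (the callback name and signature are assumptions for illustration, not necessarily the commit's exact interface): the matrix-vector loop is written once and parameterized on a small per-format dequantize callback, and q4_1 differs from q4_0 only in adding a minimum m on top of the scale d:

    #include <cstdint>

    #define QK4_1 32

    // Assumed q4_1 block layout: scale d, minimum m, 32 packed 4-bit quants.
    typedef struct {
        float   d;              // scale
        float   m;              // minimum
        uint8_t qs[QK4_1 / 2];  // packed 4-bit quants
    } block_q4_1;

    // Hypothetical callback type: dequantize two values from quant block ib,
    // at intra-block byte index iqs.
    typedef void (*dequantize_kernel_t)(const void * vx, int ib, int iqs,
                                        float & v0, float & v1);

    static __device__ void dequantize_q4_1(const void * vx, int ib, int iqs,
                                           float & v0, float & v1) {
        const block_q4_1 * x = (const block_q4_1 *) vx;
        const float d = x[ib].d;
        const float m = x[ib].m;
        const uint8_t q = x[ib].qs[iqs];
        v0 = (q & 0x0F)*d + m;  // q4_1: v = q*d + m, no centering at 8
        v1 = (q >>   4)*d + m;
    }

    // The matrix-vector body is written once; only the callback changes, e.g.
    // instantiated as dequantize_mul_mat_vec<QK4_1, dequantize_q4_1><<<...>>>(...).
    template <int qk, dequantize_kernel_t dequantize>
    static __global__ void dequantize_mul_mat_vec(const void * vx, const float * y,
                                                  float * dst, int nrows, int ncols);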

* Added missing __syncthreads();
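For context on why the barrier matters: when several threads per row accumulate partial dot products and combine them through shared memory, every write must be visible before any thread reads it back. An illustrative stand-alone kernel (names assumed) showing the pattern:

    #include <cuda_runtime.h>

    // Each thread contributes one partial sum; the barriers guarantee that all
    // shared-memory writes are visible before they are read. Dropping either
    // __syncthreads() lets the tree reduction read stale values.
    static __global__ void reduce_partials(const float * partials, float * out) {
        const int tid = threadIdx.x;

        __shared__ float tmp[256];  // assumes blockDim.x == 256
        tmp[tid] = partials[blockIdx.x*blockDim.x + tid];
        __syncthreads();            // all partials must land before the reduction

        for (int s = blockDim.x/2; s > 0; s >>= 1) {
            if (tid < s) {
                tmp[tid] += tmp[tid + s];
            }
            __syncthreads();        // each reduction step must fully complete
        }

        if (tid == 0) {
            out[blockIdx.x] = tmp[0];
        }
    }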

* Renamed --gpu_layers -> --gpu-layers
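(After the rename, the option is spelled with a hyphen like the other flags, e.g. --gpu-layers 32 to keep 32 of the model's layers resident on the GPU; the exact syntax of the surrounding command is omitted here.)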

* Shorter dequantize_mul_mat_vec line

* q5_0 dequantize_mul_mat_vec kernel
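q5_0 is slightly trickier than the 4-bit formats: the fifth bit of each quant lives in a separate per-block qh bitfield and has to be spliced back in before centering at 16. A sketch under the same assumed callback interface (the exact bit placement is illustrative):

    #include <cuda_fp16.h>
    #include <cstdint>
    #include <cstring>

    #define QK5_0 32

    // Assumed q5_0 block layout: fp16 scale, the fifth bit of each of the 32
    // quants packed into qh, and the low four bits packed two per byte in qs.
    typedef struct {
        half    d;              // scale
        uint8_t qh[4];          // fifth bit of each quant
        uint8_t qs[QK5_0 / 2];  // low four bits, two quants per byte
    } block_q5_0;

    static __device__ void dequantize_q5_0(const void * vx, int ib, int iqs,
                                           float & v0, float & v1) {
        const block_q5_0 * x = (const block_q5_0 *) vx;
        const float d = __half2float(x[ib].d);

        uint32_t qh;
        memcpy(&qh, x[ib].qh, sizeof(qh));

        // splice the separately stored fifth bits back in (placement illustrative),
        // then center at 16 since the quants are 5-bit
        const int xh_0 = ((qh >> (iqs +  0)) << 4) & 0x10;
        const int xh_1 = ((qh >> (iqs + 12))     ) & 0x10;

        const int q0 = ((x[ib].qs[iqs] & 0x0F) | xh_0) - 16;
        const int q1 = ((x[ib].qs[iqs] >>   4) | xh_1) - 16;

        v0 = q0*d;
        v1 = q1*d;
    }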

* More readable dequantize_mul_mat_vec logic

* dequantize_mul_mat_vec kernels for q5_1, q8_0, f16
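With the templated kernel in place, each additional format reduces to another small callback: q8_0 stores plain signed bytes plus a scale, and f16 needs only a half-to-float conversion (block layouts assumed, indexing illustrative):

    #include <cuda_fp16.h>
    #include <cstdint>

    #define QK8_0 32

    // Assumed q8_0 block layout: fp32 scale plus 32 signed 8-bit quants.
    typedef struct {
        float  d;           // scale
        int8_t qs[QK8_0];   // quants
    } block_q8_0;

    static __device__ void dequantize_q8_0(const void * vx, int ib, int iqs,
                                           float & v0, float & v1) {
        const block_q8_0 * x = (const block_q8_0 *) vx;
        const float d = x[ib].d;
        v0 = x[ib].qs[iqs + 0]*d;   // q8_0: v = q*d, no unpacking needed
        v1 = x[ib].qs[iqs + 1]*d;
    }

    // f16 "dequantization" is just a half -> float conversion:
    static __device__ void convert_f16(const void * vx, int ib, int iqs,
                                       float & v0, float & v1) {
        const half * x = (const half *) vx;
        v0 = __half2float(x[ib + iqs + 0]);
        v1 = __half2float(x[ib + iqs + 1]);
    }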

* llama : offload "output" tensor to GPU too + coding style fixes
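Offloading the output projection follows the same recipe as the repeating layers: copy the tensor's weights into device memory once at load time, so the per-token mat-vec kernels never wait on host-to-device transfers. A hypothetical helper in that spirit, not the commit's actual loader code:

    #include <cuda_runtime.h>
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    // Copy one tensor's weights to the GPU once, at model load time.
    // Returns the device pointer that the mat-vec kernels will read from.
    static void * offload_tensor_to_gpu(const void * host_data, size_t nbytes) {
        void * dev_data = nullptr;

        if (cudaMalloc(&dev_data, nbytes) != cudaSuccess) {
            fprintf(stderr, "failed to allocate %zu bytes on the GPU\n", nbytes);
            exit(EXIT_FAILURE);
        }
        if (cudaMemcpy(dev_data, host_data, nbytes, cudaMemcpyHostToDevice) != cudaSuccess) {
            fprintf(stderr, "failed to copy tensor to the GPU\n");
            exit(EXIT_FAILURE);
        }

        return dev_data;
    }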

---------

Co-authored-by: Georgi Gerganov <redacted>
examples/common.cpp
examples/common.h
ggml-cuda.cu
ggml-cuda.h
ggml.c
ggml.h
llama.cpp
llama.h