git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
2 years agocmake : fix llama.h location when built outside of root directory (#3179)
Andrei [Fri, 15 Sep 2023 08:07:40 +0000 (04:07 -0400)]
cmake : fix llama.h location when built outside of root directory (#3179)

2 years agoci : Cloud-V for RISC-V builds (#3160)
Ali Tariq [Fri, 15 Sep 2023 08:06:56 +0000 (13:06 +0500)]
ci : Cloud-V for RISC-V builds (#3160)

* Added Cloud-V File

* Replaced Makefile with original one

---------

Co-authored-by: moiz.hussain <redacted>
2 years agollama : remove mtest (#3177)
Roland [Fri, 15 Sep 2023 07:28:45 +0000 (03:28 -0400)]
llama : remove mtest (#3177)

* Remove mtest

* remove from common/common.h and examples/main/main.cpp

2 years agollama : make quantize example up to 2.7x faster (#3115)
Cebtenzzre [Fri, 15 Sep 2023 01:09:53 +0000 (21:09 -0400)]
llama : make quantize example up to 2.7x faster (#3115)

2 years agoflake : allow $out/include to already exist (#3175)
jneem [Thu, 14 Sep 2023 18:54:47 +0000 (13:54 -0500)]
flake : allow $out/include to already exist (#3175)

2 years agocmake : compile ggml-rocm with -fpic when building shared library (#3158)
Andrei [Thu, 14 Sep 2023 17:38:16 +0000 (13:38 -0400)]
cmake : compile ggml-rocm with -fpic when building shared library (#3158)

2 years agoflake : include llama.h in nix output (#3159)
Asbjørn Olling [Thu, 14 Sep 2023 17:25:00 +0000 (19:25 +0200)]
flake : include llama.h in nix output (#3159)

2 years agomake : fix clang++ detection, move some definitions to CPPFLAGS (#3155)
Cebtenzzre [Thu, 14 Sep 2023 17:22:47 +0000 (13:22 -0400)]
make : fix clang++ detection, move some definitions to CPPFLAGS (#3155)

* make : fix clang++ detection

* make : fix compiler definitions outside of CPPFLAGS

2 years agoCI: add FreeBSD & simplify CUDA windows (#3053)
Alon [Thu, 14 Sep 2023 17:21:25 +0000 (20:21 +0300)]
CI: add FreeBSD & simplify CUDA windows (#3053)

* add freebsd to ci

* bump actions/checkout to v3
* bump cuda 12.1.0 -> 12.2.0
* bump Jimver/cuda-toolkit version

* unify and simplify "Copy and pack Cuda runtime"
* install only necessary cuda sub packages

2 years agofalcon : use stated vocab size (#2914)
akawrykow [Thu, 14 Sep 2023 17:19:42 +0000 (10:19 -0700)]
falcon : use stated vocab size (#2914)

2 years agocmake : add relocatable Llama package (#2960)
bandoti [Thu, 14 Sep 2023 17:04:40 +0000 (14:04 -0300)]
cmake : add relocatable Llama package (#2960)

* Keep static libs and headers with install

* Add logic to generate Config package

* Use proper build info

* Add llama as import library

* Prefix target with package name

* Add example project using CMake package

* Update README

* Update README

* Remove trailing whitespace

2 years agodocker : add gpu image CI builds (#3103)
dylan [Thu, 14 Sep 2023 16:47:00 +0000 (09:47 -0700)]
docker : add gpu image CI builds (#3103)

Enables the GPU-enabled container images to be built and pushed
alongside the CPU containers.

Co-authored-by: canardleteer <redacted>
2 years agogguf-py : support identity operation in TensorNameMap (#3095)
Kerfuffle [Thu, 14 Sep 2023 16:32:26 +0000 (10:32 -0600)]
gguf-py : support identity operation in TensorNameMap (#3095)

Make try_suffixes keyword param optional.

2 years agofeature : support Baichuan serial models (#3009)
jameswu2014 [Thu, 14 Sep 2023 16:32:10 +0000 (00:32 +0800)]
feature : support Baichuan serial models (#3009)

2 years agospeculative : add heuristic algorithm (#3006)
Leng Yue [Thu, 14 Sep 2023 16:14:44 +0000 (09:14 -0700)]
speculative : add heuristic algorithm (#3006)

* Add heuristic algo for speculative

* Constrain minimum n_draft to 2

* speculative : improve heuristic impl

* speculative : be more rewarding upon guessing max drafted tokens

* speculative : fix typos

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agowhisper : tokenizer fix + re-enable tokenizer test for LLaMa (#3096)
goerch [Wed, 13 Sep 2023 13:19:44 +0000 (15:19 +0200)]
whisper : tokenizer fix + re-enable tokenizer test for LLaMa (#3096)

* Fix for #2721

* Reenable tokenizer test for LLaMa

* Add `console.cpp` dependency

* Fix dependency to `common`

* Fixing wrong fix.

* Make console usage platform specific

Work on compiler warnings.

* Adapting makefile

* Remove trailing whitespace

* Adapting the other parts of the makefile

* Fix typo.

2 years agocmake : add a compiler flag check for FP16 format (#3086)
Tristan Ross [Wed, 13 Sep 2023 13:08:52 +0000 (06:08 -0700)]
cmake : add a compiler flag check for FP16 format (#3086)

2 years agoCUDA: mul_mat_q RDNA2 tunings (#2910)
Johannes Gäßler [Wed, 13 Sep 2023 09:20:24 +0000 (11:20 +0200)]
CUDA: mul_mat_q RDNA2 tunings (#2910)

* CUDA: mul_mat_q RDNA2 tunings

* Update ggml-cuda.cu

Co-authored-by: Henri Vasserman <redacted>
---------

Co-authored-by: Henri Vasserman <redacted>
2 years agospeculative: add --n-gpu-layers-draft option (#3063)
FK [Wed, 13 Sep 2023 06:50:46 +0000 (08:50 +0200)]
speculative: add --n-gpu-layers-draft option (#3063)

2 years agoarm64 support for windows (#3007)
Eric Sommerlade [Wed, 13 Sep 2023 01:54:20 +0000 (02:54 +0100)]
arm64 support for windows (#3007)

Co-authored-by: Cebtenzzre <redacted>
2 years agoCUDA: fix LoRAs (#3130)
Johannes Gäßler [Tue, 12 Sep 2023 22:15:33 +0000 (00:15 +0200)]
CUDA: fix LoRAs (#3130)

2 years agoCUDA: fix mul_mat_q not used for output tensor (#3127)
Johannes Gäßler [Mon, 11 Sep 2023 20:58:41 +0000 (22:58 +0200)]
CUDA: fix mul_mat_q not used for output tensor (#3127)

2 years agoCUDA: lower GPU latency + fix Windows performance (#3110)
Johannes Gäßler [Mon, 11 Sep 2023 17:55:51 +0000 (19:55 +0200)]
CUDA: lower GPU latency + fix Windows performance (#3110)

2 years agocmake : support build for iOS/tvOS (#3116)
Jhen-Jie Hong [Mon, 11 Sep 2023 11:49:06 +0000 (19:49 +0800)]
cmake : support build for iOS/tvOS (#3116)

* cmake : support build for iOS/tvOS

* ci : add iOS/tvOS build into macOS-latest-cmake

* ci : split ios/tvos jobs

2 years agoCUDA: add device number to error messages (#3112)
Johannes Gäßler [Mon, 11 Sep 2023 11:00:24 +0000 (13:00 +0200)]
CUDA: add device number to error messages (#3112)

2 years agometal : PP speedup (#3084)
Kawrakow [Mon, 11 Sep 2023 07:30:11 +0000 (09:30 +0200)]
metal : PP speedup (#3084)

* Minor speed gains for all quantization types

* metal: faster kernel_scale via float4

* Various other speedups for "small" kernels

* metal: faster soft_max via float4

* metal: faster diagonal infinity

Although, to me it looks like one should simply
fuse scale + diagonal infinity + soft_max on the
KQ tensor.

* Another faster f16 x f32 matrix multiply kernel

* Reverting the diag infinity change

It does work for PP, but somehow it fails for TG.
Need to look more into it.

* metal: add back faster diagonal infinity

This time more carefully

* metal : minor (readability)

---------

Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years agoconvert: remove most of the n_mult usage in convert.py (#3098)
Erik Scholz [Sun, 10 Sep 2023 15:06:53 +0000 (17:06 +0200)]
convert: remove most of the n_mult usage in convert.py (#3098)

2 years agometal : support for Swift (#3078)
kchro3 [Sat, 9 Sep 2023 09:12:10 +0000 (02:12 -0700)]
metal : support for Swift (#3078)

* Metal support for Swift

* update

* add a toggle for arm/arm64

* set minimum versions for all platforms

* update to use newLibraryWithURL

* bump version

Co-authored-by: Jhen-Jie Hong <redacted>
---------

Co-authored-by: Jhen-Jie Hong <redacted>
2 years agometal : support build for iOS/tvOS (#3089)
Jhen-Jie Hong [Sat, 9 Sep 2023 08:46:04 +0000 (16:46 +0800)]
metal : support build for iOS/tvOS (#3089)

2 years agoflake : add train-text-from-scratch to flake.nix (#3042)
takov751 [Fri, 8 Sep 2023 16:06:26 +0000 (17:06 +0100)]
flake : add train-text-from-scratch to flake.nix (#3042)

2 years agoreadme : fix typo (#3043)
Ikko Eltociear Ashimine [Fri, 8 Sep 2023 16:04:32 +0000 (01:04 +0900)]
readme : fix typo (#3043)

* readme : fix typo

acceleation -> acceleration

* Update README.md

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agometal : Q3_K speedup (#2995)
Kawrakow [Fri, 8 Sep 2023 16:01:04 +0000 (18:01 +0200)]
metal : Q3_K speedup (#2995)

* Slightly faster Q3_K and Q5_K on metal

* Another Q3_K speedup on metal

Combined with previous commit, we are now +9.6% for TG.
PP is not affected as this happens via the matrix multiplication
templates.

* Slowly progressing on Q3_K on metal

We are now 13% faster than master

* Another small improvement for Q3_K on metal

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years agoexamples : make n_ctx warning work again (#3066)
Cebtenzzre [Fri, 8 Sep 2023 15:43:35 +0000 (11:43 -0400)]
examples : make n_ctx warning work again (#3066)

This was broken by commit e36ecdcc ("build : on Mac OS enable Metal by
default (#2901)").

2 years agoreadme : update hot topics
Georgi Gerganov [Fri, 8 Sep 2023 15:18:04 +0000 (18:18 +0300)]
readme : update hot topics

2 years agosync : ggml (CUDA GLM RoPE + POSIX) (#3082)
Georgi Gerganov [Fri, 8 Sep 2023 14:58:07 +0000 (17:58 +0300)]
sync : ggml (CUDA GLM RoPE + POSIX) (#3082)

ggml-ci

2 years agobuild : do not use _GNU_SOURCE gratuitously (#2035)
Przemysław Pawełczyk [Fri, 8 Sep 2023 12:09:21 +0000 (14:09 +0200)]
build : do not use _GNU_SOURCE gratuitously (#2035)

* Do not use _GNU_SOURCE gratuitously.

What is needed to build llama.cpp and examples is availability of
stuff defined in The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/) known also as
Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions,
plus some stuff from BSD that is not specified in POSIX.1.

Well, that was true until NUMA support was added recently,
so enable GNU libc extensions for Linux builds to cover that.

Not having feature test macros in the source code gives greater flexibility
to those wanting to reuse it in a 3rd-party app, as they can build it with
the FTMs set by the Makefile here, or with other FTMs depending on their needs.

It builds without issues in Alpine (musl libc), Ubuntu (glibc), MSYS2.

* make : enable Darwin extensions for macOS to expose RLIMIT_MEMLOCK

* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK

* make : use BSD-specific FTMs to enable alloca on BSDs

* make : fix OpenBSD build by exposing newer POSIX definitions

* cmake : follow recent FTM improvements from Makefile

2 years agodocker : add git to full-cuda.Dockerfile main-cuda.Dockerfile (#3044)
hongbo.mo [Fri, 8 Sep 2023 10:57:55 +0000 (18:57 +0800)]
docker : add git to full-cuda.Dockerfile main-cuda.Dockerfile (#3044)

2 years agoUpdate deprecated GGML TheBloke links to GGUF (#3079)
Yui [Fri, 8 Sep 2023 10:32:55 +0000 (12:32 +0200)]
Update deprecated GGML TheBloke links to GGUF (#3079)

2 years agoggml-alloc : correctly check mmap return value for errors (#3075)
slaren [Fri, 8 Sep 2023 02:04:56 +0000 (04:04 +0200)]
ggml-alloc : correctly check mmap return value for errors (#3075)

2 years agoenable CPU HBM (#2603)
Kunshang Ji [Fri, 8 Sep 2023 01:46:56 +0000 (09:46 +0800)]
enable CPU HBM (#2603)

* add cpu hbm support

* add memalign 0 byte check

* Update ggml.c

* Update llama.cpp

* ggml : allow ggml_init with 0 size

* retrigger ci

* fix code style

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agoconvert : fix F32 ftype not being saved (#3048)
Cebtenzzre [Thu, 7 Sep 2023 18:27:42 +0000 (14:27 -0400)]
convert : fix F32 ftype not being saved (#3048)

2 years agofix some warnings from gcc and clang-tidy (#3038)
Cebtenzzre [Thu, 7 Sep 2023 17:22:29 +0000 (13:22 -0400)]
fix some warnings from gcc and clang-tidy (#3038)

Co-authored-by: xaedes <redacted>
2 years agomake : improve test target (#3031)
Cebtenzzre [Thu, 7 Sep 2023 14:15:01 +0000 (10:15 -0400)]
make : improve test target (#3031)

2 years agomake : fix CPPFLAGS (#3035)
Cebtenzzre [Thu, 7 Sep 2023 14:13:50 +0000 (10:13 -0400)]
make : fix CPPFLAGS (#3035)

2 years agollama-bench : use two tokens in the warmup run for prompt evals (#3059)
slaren [Thu, 7 Sep 2023 13:52:34 +0000 (15:52 +0200)]
llama-bench : use two tokens in the warmup run for prompt evals (#3059)

2 years agometal : parallel RoPE on Metal (#3024)
Kawrakow [Thu, 7 Sep 2023 13:45:01 +0000 (15:45 +0200)]
metal : parallel RoPE on Metal (#3024)

* Parallel RoPE on metal

* PR suggestion

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years agometal : correct fix of kernel_norm (#3060)
Kawrakow [Thu, 7 Sep 2023 13:42:42 +0000 (15:42 +0200)]
metal : correct fix of kernel_norm (#3060)

Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years agometal : fix kernel_norm (fixes Falcon on Metal) (#3057)
Georgi Gerganov [Thu, 7 Sep 2023 12:49:09 +0000 (15:49 +0300)]
metal : fix kernel_norm (fixes Falcon on Metal) (#3057)

* metal : fix kernel_norm

ggml-ci

* metal : put warning in kernel_norm to not combine the loops

* metal : restore original F16 mat-vec multiplication

It works after the norm fixes

* common : don't do warm-up with more than n_batch tokens (close #3058)

ggml-ci

* metal : minor

2 years agoggml : posixify madvise and pagesize (#3037)
Przemysław Pawełczyk [Thu, 7 Sep 2023 08:15:06 +0000 (10:15 +0200)]
ggml : posixify madvise and pagesize (#3037)

* llama : use posix_madvise() instead of madvise() derived from BSD

sed -i 's,\<madvise\>,posix_&,g;s,\<MADV_,POSIX_&,g' llama.cpp

* ggml : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD

sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml.c

* metal : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD

sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml-metal.m

2 years agok-quants : fix zero-weight guard in Q6_K (ref #3040)
Georgi Gerganov [Wed, 6 Sep 2023 09:40:57 +0000 (12:40 +0300)]
k-quants : fix zero-weight guard in Q6_K (ref #3040)

2 years agoconvert-llama-ggml-to-gguf: Try to handle files older than GGJTv3 (#3023)
Kerfuffle [Wed, 6 Sep 2023 08:49:11 +0000 (02:49 -0600)]
convert-llama-ggml-to-gguf: Try to handle files older than GGJTv3 (#3023)

* convert-llama-ggmlv3-to-gguf: Try to handle files older than GGJTv3

* Better error messages for files that cannot be converted

* Add file type to GGUF output

* Rename to convert-llama-ggml-to-gguf.py

* Include original file type information in description

* Improve some informational output

2 years agobuild : add LLAMA_METAL_NDEBUG flag (#3033)
Cebtenzzre [Tue, 5 Sep 2023 22:21:10 +0000 (18:21 -0400)]
build : add LLAMA_METAL_NDEBUG flag (#3033)

2 years agomake : use new flag variables for recent changes (#3019)
Cebtenzzre [Tue, 5 Sep 2023 19:12:00 +0000 (15:12 -0400)]
make : use new flag variables for recent changes (#3019)

2 years agoexamples : replace fprintf to stdout with printf (#3017)
Cebtenzzre [Tue, 5 Sep 2023 19:10:27 +0000 (15:10 -0400)]
examples : replace fprintf to stdout with printf (#3017)

2 years agoconvert: fix convert.py not working with int filename_stem (#3028)
Erik Scholz [Tue, 5 Sep 2023 17:41:00 +0000 (19:41 +0200)]
convert: fix convert.py not working with int filename_stem (#3028)

* fix implicit int to string conversion
* convert : remove an obsolete pyright comment

---------

Co-authored-by: Cebtenzzre <redacted>
2 years agoGuard against all weights in a super-block being zero (#3010)
Kawrakow [Tue, 5 Sep 2023 07:55:33 +0000 (09:55 +0200)]
Guard against all weights in a super-block being zero (#3010)

* Guard against all weights in a super-block being zero

* Also guard against extremely small weights

Closes #2982

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years agollama : update logic for number of threads when using BLAS
Georgi Gerganov [Tue, 5 Sep 2023 07:46:39 +0000 (10:46 +0300)]
llama : update logic for number of threads when using BLAS

2 years agospeculative : add grammar support (#2991)
Georgi Gerganov [Tue, 5 Sep 2023 05:46:17 +0000 (08:46 +0300)]
speculative : add grammar support (#2991)

* speculative : add grammar support

* grammars : add json_arr.gbnf

* grammar : add comments to new grammar file

* grammar : remove one nested level

* common : warm-up with 2 tokens - seems to work better

* speculative : print draft token pieces

* speculative : reuse grammar parser + better logs and comments

* speculative : avoid grammar_mem

* make : fix speculative build

2 years agopy : minor
Georgi Gerganov [Mon, 4 Sep 2023 19:50:50 +0000 (22:50 +0300)]
py : minor

2 years agobuild : on Mac OS enable Metal by default (#2901)
Georgi Gerganov [Mon, 4 Sep 2023 19:26:24 +0000 (22:26 +0300)]
build : on Mac OS enable Metal by default (#2901)

* build : on Mac OS enable Metal by default

* make : try to fix build on Linux

* make : move targets back to the top

* make : fix target clean

* llama : enable GPU inference by default with Metal

* llama : fix vocab_only logic when GPU is enabled

* common : better `n_gpu_layers` assignment

* readme : update Metal instructions

* make : fix merge conflict remnants

* gitignore : metal

2 years agoggml-opencl : store GPU buffer in ggml_tensor::extra (#2994)
slaren [Mon, 4 Sep 2023 12:59:52 +0000 (14:59 +0200)]
ggml-opencl : store GPU buffer in ggml_tensor::extra (#2994)

2 years agollama-bench : make cpp file non-executable (#2999)
Cebtenzzre [Mon, 4 Sep 2023 10:40:18 +0000 (06:40 -0400)]
llama-bench : make cpp file non-executable (#2999)

2 years agomake : add speculative example (#3003)
Leng Yue [Mon, 4 Sep 2023 10:39:57 +0000 (03:39 -0700)]
make : add speculative example (#3003)

2 years agoserver : add a subtle loading animation to the edit box (#2466)
Aarni Koskela [Mon, 4 Sep 2023 08:28:55 +0000 (10:28 +0200)]
server : add a subtle loading animation to the edit box (#2466)

* editorconfig: add override for the server HTML (which is already 2-space indented)

* server: add a subtle loading animation to the edit box

2 years ago2x faster (rms) norm cuda kernels (3.7% e2e improvement) (#2985)
Jiahao Li [Mon, 4 Sep 2023 06:53:30 +0000 (14:53 +0800)]
2x faster (rms) norm cuda kernels (3.7% e2e improvement) (#2985)

* 2x faster (rms) norm cuda kernels

* Fix code style

2 years agoggml-alloc : use virtual memory for measurement (#2973)
slaren [Sun, 3 Sep 2023 18:34:09 +0000 (20:34 +0200)]
ggml-alloc : use virtual memory for measurement (#2973)

* ggml-alloc : use virtual memory for measurement

* compatibility fixes for MAP_ANONYMOUS

* fallback to fixed address for systems without virtual memory

2 years agospeculative : PoC for speeding-up inference via speculative sampling (#2926)
Georgi Gerganov [Sun, 3 Sep 2023 12:12:08 +0000 (15:12 +0300)]
speculative : PoC for speeding-up inference via speculative sampling (#2926)

* speculative : initial example

* speculative : print encoding speed

* speculative : add --draft CLI arg

2 years agoperplexity : fix ETA by warming up the model with an empty run
Georgi Gerganov [Sun, 3 Sep 2023 10:42:56 +0000 (13:42 +0300)]
perplexity : fix ETA by warming up the model with an empty run

2 years agogguf(python): Fix special vocab handling when id < 0 (#2984)
Kerfuffle [Sun, 3 Sep 2023 10:38:43 +0000 (04:38 -0600)]
gguf(python): Fix special vocab handling when id < 0 (#2984)

2 years agometal : restore 363f0bf and fix reduce in F16_F32 kernels (#2986)
Georgi Gerganov [Sun, 3 Sep 2023 10:23:33 +0000 (13:23 +0300)]
metal : restore 363f0bf and fix reduce in F16_F32 kernels (#2986)

2 years agocov : disable comment in PRs (#2989)
Alon [Sun, 3 Sep 2023 10:19:01 +0000 (13:19 +0300)]
cov : disable comment in PRs (#2989)

2 years agollama : fix bpe tokenize from byte (#2889)
opparco [Sun, 3 Sep 2023 10:18:09 +0000 (19:18 +0900)]
llama : fix bpe tokenize from byte (#2889)

2 years agometal : revert 6af0bab until we fix it
Georgi Gerganov [Sun, 3 Sep 2023 09:40:56 +0000 (12:40 +0300)]
metal : revert 6af0bab until we fix it

This restores the generated text to be the same as before #2959

2 years agocov : add Code Coverage and codecov.io integration (#2928)
Alon [Sun, 3 Sep 2023 08:48:49 +0000 (11:48 +0300)]
cov : add Code Coverage and codecov.io integration (#2928)

* update .gitignore

* makefile: add coverage support (lcov, gcovr)

* add code-coverage workflow

* update code coverage workflow

* run on ubuntu 20.04

* use gcc-8

* check why the job hangs

* add env vars

* add LLAMA_CODE_COVERAGE=1 again

* - add CODECOV_TOKEN
- add missing make lcov-report

* install lcov

* update make file -pb flag

* remove unused GGML_NITER from workflows

* wrap coverage output files in COV_TARGETS

2 years agoopencl : fix a bug in ggml_cl_pool_malloc() for ggml_cl_mul_mat_f32() (#2955)
Wentai Zhang [Sun, 3 Sep 2023 08:46:44 +0000 (16:46 +0800)]
opencl : fix a bug in ggml_cl_pool_malloc() for ggml_cl_mul_mat_f32() (#2955)

Co-authored-by: Wentai Zhang <redacted>
2 years agometal : more optimizations (#2959)
Kawrakow [Sun, 3 Sep 2023 08:06:22 +0000 (11:06 +0300)]
metal : more optimizations (#2959)

* Very minor speedup via simd-group synchronization in f16 x f32

* Another very minor speedup on metal

* Quite significant PP speedup on metal

* Another attempt

* Minor

* Massive improvement for TG for fp16

* ~4-5% improvement for Q8_0 TG on metal

---------

Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
2 years agoswift : add support for k-quants (#2983)
kchro3 [Sun, 3 Sep 2023 06:21:05 +0000 (23:21 -0700)]
swift : add support for k-quants (#2983)

2 years agoconvert.py : BPE fixes (#2938)
Kerfuffle [Sun, 3 Sep 2023 05:52:13 +0000 (23:52 -0600)]
convert.py : BPE fixes (#2938)

* convert.py: BPE fixes?

* Remove unnecessary conditional in addl token error handling

2 years agodocs : add `catai` to `README.md` (#2967)
Ido S [Sun, 3 Sep 2023 05:50:51 +0000 (08:50 +0300)]
docs : add `catai` to `README.md` (#2967)

2 years agoexamples : fix gpt-neox (#2943)
momonga [Sun, 3 Sep 2023 05:36:28 +0000 (14:36 +0900)]
examples : fix gpt-neox (#2943)

Co-authored-by: mmnga <redacted>
2 years agoswift : add missing c file to Package.swift (#2978)
kchro3 [Sun, 3 Sep 2023 05:27:25 +0000 (22:27 -0700)]
swift : add missing c file to Package.swift (#2978)

2 years agomake : support overriding CFLAGS/CXXFLAGS/CPPFLAGS/LDFLAGS (#2886)
Cebtenzzre [Sun, 3 Sep 2023 05:26:59 +0000 (01:26 -0400)]
make : support overriding CFLAGS/CXXFLAGS/CPPFLAGS/LDFLAGS (#2886)

* make : remove unused -DGGML_BIG_ENDIAN

* make : put preprocessor stuff in CPPFLAGS

* make : pass Raspberry Pi arch flags to g++ as well

* make : support overriding CFLAGS/CXXFLAGS/CPPFLAGS/LDFLAGS

* make : fix inverted conditional

2 years agologging: Fix creating empty file even when disabled (#2966)
Kerfuffle [Sat, 2 Sep 2023 17:53:55 +0000 (11:53 -0600)]
logging: Fix creating empty file even when disabled (#2966)

* logging: Fix creating empty file even when disabled

* Minor formatting fix

Co-authored-by: staviq <redacted>
---------

Co-authored-by: staviq <redacted>
2 years agoreadme : update clblast instructions (#2903)
bandoti [Sat, 2 Sep 2023 12:53:18 +0000 (09:53 -0300)]
readme : update clblast instructions (#2903)

* Update Windows CLBlast instructions

* Update Windows CLBlast instructions

* Remove trailing whitespace

2 years agometal : show all Metal device instances in the system (#2952)
Karsten Weiss [Sat, 2 Sep 2023 12:29:09 +0000 (14:29 +0200)]
metal : show all Metal device instances in the system (#2952)

* ggml_metal_init: Show all Metal device instances in the system

Also show the default Metal device that was picked.

* Update ggml-metal.m

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agok-quants : fix build on armv7 (android only) (#2920)
Jhen-Jie Hong [Sat, 2 Sep 2023 12:23:45 +0000 (20:23 +0800)]
k-quants : fix build on armv7 (android only) (#2920)

* k-quants : fix build on armv7

* ggml : cleanup unused arm32 specific impl

* k-quants : avoid some unused vzero / mzero define

* ggml-alloc : use 4g for MEASURE_MAX_SIZE in 32-bit arm

2 years agoserver : avoid antiprompt in probabilities of final response (#2849)
Jhen-Jie Hong [Sat, 2 Sep 2023 00:31:46 +0000 (08:31 +0800)]
server : avoid antiprompt in probabilities of final response (#2849)

2 years agocuda : vsubss4 for older versions of ROCm/clang (#2942)
Engininja2 [Fri, 1 Sep 2023 21:33:19 +0000 (15:33 -0600)]
cuda : vsubss4 for older versions of ROCm/clang (#2942)

2 years agoreadme : quick start command fix (#2908)
ZHAOKAI WANG [Fri, 1 Sep 2023 14:06:44 +0000 (22:06 +0800)]
readme : quick start command fix (#2908)

* quick start command fix

* quick start win command fix

2 years agoAllow quantize to only copy tensors, some other improvements (#2931)
Kerfuffle [Fri, 1 Sep 2023 14:02:48 +0000 (08:02 -0600)]
Allow quantize to only copy tensors, some other improvements (#2931)

* Allow the quantize tool to only copy tensors, to support repackaging models.

* Slightly better logic when requantizing.

* Change help message to go to `stdout`.

2 years agollama2c : rename function
Georgi Gerganov [Fri, 1 Sep 2023 14:00:40 +0000 (17:00 +0300)]
llama2c : rename function

2 years agomake : use unaligned vector moves on MinGW (#2945)
Cebtenzzre [Fri, 1 Sep 2023 13:53:14 +0000 (09:53 -0400)]
make : use unaligned vector moves on MinGW (#2945)

Fixes #2922

2 years agominor : add const qualifiers (#2853)
m3ndax [Fri, 1 Sep 2023 13:47:27 +0000 (15:47 +0200)]
minor : add const qualifiers (#2853)

* made the methods const

# Conflicts:
# examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp

* made method const

* Update convert-llama2c-to-ggml.cpp

removed write_raw and write_u32

* llama2c : remove misleading const

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years agodocs : add java-llama.cpp to README.md (#2935)
Konstantin Herud [Fri, 1 Sep 2023 13:36:14 +0000 (15:36 +0200)]
docs : add java-llama.cpp to README.md (#2935)

2 years agobuild : fix most gcc and clang warnings (#2861)
Cebtenzzre [Fri, 1 Sep 2023 13:34:50 +0000 (09:34 -0400)]
build : fix most gcc and clang warnings (#2861)

* fix most gcc and clang warnings

* baby-llama : remove commented opt_params_adam

* fix some MinGW warnings

* fix more MinGW warnings

2 years agoexamples : add C grammar (#2357)
Ben Siraphob [Fri, 1 Sep 2023 13:32:14 +0000 (09:32 -0400)]
examples : add C grammar (#2357)

2 years agoggml : add RISC-V vector intrinsics support (#2929)
Tameem [Fri, 1 Sep 2023 13:27:40 +0000 (18:27 +0500)]
ggml : add RISC-V vector intrinsics support (#2929)

* added support for RISCV CFLAGS & native compile + cross compile options

* Add RISC-V Vector Intrinsics Support

Added RVV intrinsics for following
   ggml_vec_dot_q4_0_q8_0
   ggml_vec_dot_q4_1_q8_1
   ggml_vec_dot_q5_0_q8_0
   ggml_vec_dot_q5_1_q8_1
   ggml_vec_dot_q8_0_q8_0

Co-authored-by: Sharafat <redacted>
Signed-off-by: Ahmad Tameem <redacted>
---------

Signed-off-by: Ahmad Tameem <redacted>
Co-authored-by: moiz.hussain <redacted>
Co-authored-by: Sharafat <redacted>
2 years agometal : slight speed-up for add and mul kernels (#2917)
Georgi Gerganov [Fri, 1 Sep 2023 10:42:41 +0000 (13:42 +0300)]
metal : slight speed-up for add and mul kernels (#2917)

2 years agologs : fix mingw-like builds (fixes #2898) (#2911)
staviq [Fri, 1 Sep 2023 09:07:06 +0000 (11:07 +0200)]
logs : fix mingw-like builds (fixes #2898) (#2911)

* fix mingw-like builds

* formatting

* make LOG_COMPAT easier to override and extend

* simplify win detection

* fix for #2940

2 years agollama2c : fix segfault and alloc-dealloc-mismatch (#2913)
Cebtenzzre [Fri, 1 Sep 2023 09:03:49 +0000 (05:03 -0400)]
llama2c : fix segfault and alloc-dealloc-mismatch (#2913)

* llama2c : fix segfault if vocab is not found

* llama2c : fix mismatch between new[] and delete

* llama2c : fix basename on Windows

* llama2c : use a destructor to prevent memory leaks