git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
14 months ago convert.py : add consolidated.safetensors for mixtral 8x22b (#6587)
slaren [Wed, 10 Apr 2024 13:23:12 +0000 (15:23 +0200)]
convert.py : add consolidated.safetensors for mixtral 8x22b (#6587)

14 months ago docs : how to add a model (#6565)
Pierrick Hymbert [Wed, 10 Apr 2024 06:58:48 +0000 (08:58 +0200)]
docs : how to add a model (#6565)

* docs: how to add a model

* docs: model: typo and docs

* docs: model: add clarification on RoPE

* docs: model: rephrasing README.md

* docs: model: rephrasing README.md

* docs: model: README.md fix trailing spaces

* docs : some fixes

* Update README.md

---------

Co-authored-by: Georgi Gerganov <redacted>
14 months ago readme : fix ROCm link (#6579)
Artem Zinnatullin [Wed, 10 Apr 2024 06:49:12 +0000 (00:49 -0600)]
readme : fix ROCm link (#6579)

14 months ago readme : update UI list (#6560)
sjxx [Wed, 10 Apr 2024 06:34:00 +0000 (14:34 +0800)]
readme : update UI list (#6560)

14 months ago readme: fix typo in amdgpu target name (#6573)
Jiří Sejkora [Tue, 9 Apr 2024 22:23:02 +0000 (00:23 +0200)]
readme: fix typo in amdgpu target name (#6573)

14 months ago BERT tokenizer fixes (#6498)
Jared Van Bortel [Tue, 9 Apr 2024 17:44:08 +0000 (13:44 -0400)]
BERT tokenizer fixes (#6498)

Key changes:
* BERT conversion: fix abuse of LlamaHfVocab, do not set BOS or EOS
* Nomic Embed conversion: pad vocab instead of slicing embedding tensor
* llama_tokenize: handle added special tokens like HF does
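
For context, a minimal sketch of special-token-aware tokenization through the public API; the `add_special`/`parse_special` parameter names follow recent `llama.h`, and the helper itself is hypothetical:

```cpp
#include "llama.h"

#include <algorithm>
#include <string>
#include <vector>

// tokenize `text`, letting the tokenizer recognize added special tokens
// (parse_special = true), mirroring how HF tokenizers behave
std::vector<llama_token> tokenize(const llama_model * model, const std::string & text) {
    // worst case: one token per byte, plus room for special tokens
    std::vector<llama_token> tokens(text.size() + 2);
    const int32_t n = llama_tokenize(model, text.c_str(), (int32_t) text.size(),
                                     tokens.data(), (int32_t) tokens.size(),
                                     /*add_special  =*/ true,
                                     /*parse_special=*/ true);
    tokens.resize(std::max<int32_t>(n, 0));
    return tokens;
}
```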

14 months ago sync : ggml
Georgi Gerganov [Tue, 9 Apr 2024 17:29:06 +0000 (20:29 +0300)]
sync : ggml

14 months ago server : detect search query to start webchat (#6554)
Ed Lee [Tue, 9 Apr 2024 08:31:47 +0000 (01:31 -0700)]
server : detect search query to start webchat (#6554)

14 months ago llama : add Command R Plus support (#6491)
Carolinabanana [Tue, 9 Apr 2024 08:16:13 +0000 (09:16 +0100)]
llama : add Command R Plus support (#6491)

* Add Command R Plus GGUF

* Add Command R Plus GGUF

* Loading works up to LayerNorm2D

* Export new tensors in 1D so they are not quantized.

* Fix embedding layer based on Noeda's example

* Whitespace

* Add line

* Fix unexpected tokens on MPS. Re-add F16 fix. (Noeda)

* dranger003: Fix block index overflow in CUDA dequantizing.

* Reverted blocked multiplication code as it still has issues and could affect other Llama arches

* export norms as f32

* fix overflow issues during quant and other cleanup

* Type convention

Co-authored-by: Georgi Gerganov <redacted>
* dranger003: Fix more int overflow during quant.

---------

Co-authored-by: S <redacted>
Co-authored-by: S <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
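
The overflow fixes follow the usual pattern for very large tensors: compute flat indices in 64-bit rather than 32-bit. A generic illustration of the pattern, not the actual dequantize kernel:

```cpp
#include <cstdint>

// with tensors above ~2^31 elements, a 32-bit flat index wraps around;
// promoting the index computation to 64-bit avoids the overflow
void scale(float * data, int64_t nrows, int64_t row_size, float s) {
    for (int64_t r = 0; r < nrows; ++r) {
        for (int64_t c = 0; c < row_size; ++c) {
            data[r * row_size + c] *= s; // r * row_size is evaluated in 64-bit
        }
    }
}
```
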
14 months ago license : update copyright notice + add AUTHORS (#6405)
Georgi Gerganov [Tue, 9 Apr 2024 06:23:19 +0000 (09:23 +0300)]
license : update copyright notice + add AUTHORS (#6405)

* license : add AUTHORS

* authors : update

* scripts : add LICENSE and gen-authors.sh to sync

14 months ago llama : fix attention layer count sanity check (#6550)
Georgi Gerganov [Mon, 8 Apr 2024 19:25:49 +0000 (22:25 +0300)]
llama : fix attention layer count sanity check (#6550)

* llama : fix attention layer count sanity check

* llama : fix parentheses in attention layer count sanity check

There was otherwise a warning when compiling.

---------

Co-authored-by: Francis Couture-Harpin <redacted>
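
The warning in question is the common `-Wparentheses` diagnostic about mixing `&&` and `||`; a generic illustration of the fix, not the actual check from llama.cpp:

```cpp
// without the inner parentheses, gcc/clang warn that '&&' within '||'
// should be explicitly grouped; the grouping states the intended precedence
bool sanity_ok(bool a, bool b, bool c) {
    return a || (b && c);
}
```
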
14 months ago Comment explaining a decision (#6531)
kunnis [Mon, 8 Apr 2024 15:44:19 +0000 (10:44 -0500)]
Comment explaining a decision (#6531)

14 months ago quantize : fix precedence of cli args (#6541)
Georgi Gerganov [Mon, 8 Apr 2024 13:23:01 +0000 (16:23 +0300)]
quantize : fix precedence of cli args (#6541)

14 months ago llama : support negative ith in llama_get_ API (#6519)
Rick G [Mon, 8 Apr 2024 13:02:30 +0000 (06:02 -0700)]
llama : support negative ith in llama_get_ API (#6519)

* llama_sampling_sample with default args is more naively usable

* Batches populated by either llama_batch_get_one or llama_batch_add work with default args
  * Previously get_one could use the default argument
  * Previously add should usually have used the last index where logits[idx] == true
* This hopefully encourages the use of llama_batch_add
  * By giving expected results when using default arguments.
* Adds "negative indexing" feature to llama_get_logits_ith and llama_get_embeddings_ith
* Believed to work with any currently well behaved program
  * Default arg now works for both cases (previously would give strange results for add case)
  * Any non-negative number is unaffected and behaves as previously
  * Negative arguments were previously invalid.
* Implemented as a special case of indexing as suggested by @compilade in https://github.com/ggerganov/llama.cpp/pull/6519

* Fixed mismatch type errors

* cited in macOS CI tests
* Missed in original updates based on PR feedback in https://github.com/ggerganov/llama.cpp/pull/6519
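
Concretely, negative indexing lets a caller fetch the last computed logits without tracking batch positions. A short sketch against the public API (the decode step is elided and the helper is hypothetical):

```cpp
#include "llama.h"

// after a successful llama_decode(ctx, batch):
int argmax_last_token(llama_context * ctx, const llama_model * model) {
    // -1 refers to the last token for which logits were computed
    const float * logits = llama_get_logits_ith(ctx, -1);

    const int n_vocab = llama_n_vocab(model);
    int best = 0;
    for (int i = 1; i < n_vocab; ++i) {
        if (logits[i] > logits[best]) {
            best = i;
        }
    }
    return best;
}
```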

14 months ago llama : save and restore kv cache for single seq id (#6341)
Jan Boon [Mon, 8 Apr 2024 12:43:30 +0000 (20:43 +0800)]
llama : save and restore kv cache for single seq id (#6341)

* llama : save and restore kv cache for single seq id

* remove trailing whitespace

* respond error in case there's no space in the kv cache

* add kv seq save restore to test case

* add --slot-save-path arg to enable save restore and restrict save location

* Returning 0 for some cases, instead of asserting.

* cleanup error cases

* rename sequence state functions

* rename state get set functions

* add previous function names back in with DEPRECATED notice

* update doc

* adjust endpoints to preferred style

* fix restoring zero cell count

* handle seq rm return value

* unused param

* keep in the size check

* fix return types

* add server test case for slot save restore

* cleanup

* add cake

* cleanup style

* add special

* removing a whole sequence never fails

* move sequence state file functionality from server to llama to match session api and add version tags

* catch exceptions on save as well

* error log messages

* check types for stricter restore

* update server doc

* readme : update API changes date

* strict filename validation

* move include, reject bom as well

* also reject empty filename

* reject whitespace and trailing dot

---------

Co-authored-by: Martin Evans <redacted>
Co-authored-by: Georgi Gerganov <redacted>
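
A hedged sketch of the renamed sequence-state functions in use, saving and restoring the KV cache of one sequence; signatures follow `llama.h` of this period, while the file name and capacity are made up for illustration:

```cpp
#include "llama.h"

#include <vector>

// save the KV cache state of sequence 0 along with its token history
void save_seq(llama_context * ctx, const std::vector<llama_token> & tokens) {
    llama_state_seq_save_file(ctx, "slot0.bin", /*seq_id=*/0,
                              tokens.data(), tokens.size());
}

// restore it later, possibly into a different sequence id
std::vector<llama_token> load_seq(llama_context * ctx) {
    std::vector<llama_token> tokens(4096); // assumed capacity
    size_t n_tokens = 0;
    llama_state_seq_load_file(ctx, "slot0.bin", /*dest_seq_id=*/0,
                              tokens.data(), tokens.size(), &n_tokens);
    tokens.resize(n_tokens);
    return tokens;
}
```
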
14 months ago remove row=1 cond (#6532)
Abhilash Majumder [Mon, 8 Apr 2024 08:26:01 +0000 (13:56 +0530)]
remove row=1 cond (#6532)

14 months ago Adding KodiBot to UI list (#6535)
Firat [Mon, 8 Apr 2024 07:48:29 +0000 (00:48 -0700)]
Adding KodiBot to UI list (#6535)

KodiBot is a free and open-source AI chat app released under the GNU General Public License.

14 months ago Change Windows AMD example to release build to make inference much faster. (#6525)
Mark Fairbairn [Sun, 7 Apr 2024 18:52:19 +0000 (19:52 +0100)]
Change Windows AMD example to release build to make inference much faster. (#6525)

14 months ago flake.lock: Update (#6517)
Georgi Gerganov [Sun, 7 Apr 2024 18:25:30 +0000 (21:25 +0300)]
flake.lock: Update (#6517)

Flake lock file updates:

• Updated input 'flake-parts':
    'github:hercules-ci/flake-parts/f7b3c975cf067e56e7cda6cb098ebe3fb4d74ca2' (2024-03-01)
  → 'github:hercules-ci/flake-parts/9126214d0a59633752a136528f5f3b9aa8565b7d' (2024-04-01)
• Updated input 'flake-parts/nixpkgs-lib':
    'github:NixOS/nixpkgs/1536926ef5621b09bba54035ae2bb6d806d72ac8?dir=lib' (2024-02-29)
  → 'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089?dir=lib' (2024-03-29)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089' (2024-03-29)
  → 'github:NixOS/nixpkgs/fd281bd6b7d3e32ddfa399853946f782553163b5' (2024-04-03)

Co-authored-by: github-actions[bot] <redacted>
14 months ago Add GritLM as a supported model. (#6513)
DAN™ [Sun, 7 Apr 2024 17:33:59 +0000 (13:33 -0400)]
Add GritLM as a supported model. (#6513)

14 months ago sync : ggml
Georgi Gerganov [Sun, 7 Apr 2024 14:05:51 +0000 (17:05 +0300)]
sync : ggml

14 months ago ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020)
Slava Primenko [Thu, 4 Apr 2024 12:49:24 +0000 (14:49 +0200)]
ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020)

The `cudaHostRegisterReadOnly` flag was only introduced in CUDA 11.1.

See this issue for more details:
https://github.com/ggerganov/whisper.cpp/issues/2007
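
A sketch of the kind of version guard this implies; illustrative rather than the exact ggml change, relying on `CUDART_VERSION` being 11010 for CUDA 11.1:

```cpp
#include <cuda_runtime.h>

// pin a host buffer for faster transfers; mark it read-only where supported
cudaError_t register_host_buffer(void * ptr, size_t size) {
    unsigned int flags = cudaHostRegisterPortable;
#if CUDART_VERSION >= 11010
    // cudaHostRegisterReadOnly only exists since CUDA 11.1
    flags |= cudaHostRegisterReadOnly;
#endif
    return cudaHostRegister(ptr, size, flags);
}
```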

14 months ago scripts : sync ggml-cuda folder
Georgi Gerganov [Sun, 7 Apr 2024 13:08:12 +0000 (16:08 +0300)]
scripts : sync ggml-cuda folder

14 months ago Run make to build the project (#6457)
limitedAtonement [Sun, 7 Apr 2024 11:05:40 +0000 (07:05 -0400)]
Run make to build the project (#6457)

14 months ago support/fix OPs GGML_TYPE_IQ4_NL, GGML_TYPE_IQ4_XS, GGML_TYPE_IQ3_XXS, GGML_TYPE_IQ3_...
Neo Zhang Jianyu [Sun, 7 Apr 2024 02:55:59 +0000 (10:55 +0800)]
support/fix OPs GGML_TYPE_IQ4_NL, GGML_TYPE_IQ4_XS, GGML_TYPE_IQ3_XXS, GGML_TYPE_IQ3_S, GGML_TYPE_IQ2_XXS, GGML_TYPE_IQ2_XS, GGML_TYPE_IQ2_S, GGML_TYPE_IQ1_S, GGML_TYPE_IQ1_M (#6521)

14 months ago sync : ggml
Georgi Gerganov [Sat, 6 Apr 2024 14:43:15 +0000 (17:43 +0300)]
sync : ggml

14 months ago backend : fix typo in scheduler documentation (ggml/781)
Daniel Bevenius [Wed, 3 Apr 2024 20:57:20 +0000 (22:57 +0200)]
backend : fix typo in scheduler documentation (ggml/781)

Signed-off-by: Daniel Bevenius <redacted>
14 months ago Tests: Added integration tests for GBNF parser (#6472)
Clint Herron [Sat, 6 Apr 2024 14:31:33 +0000 (10:31 -0400)]
Tests: Added integration tests for GBNF parser (#6472)

* Added integration tests for GBNF parser to validate correctness of parsing, as well as correctness of string matching. Intended for use to pin behavior while working on performance improvements.

* Fixing whitespace errors and cleaning error message alert to be clearer.

* Removing hacky include to llama.cpp from grammar integration test now that needed functions are available via internal API.

* Comment cleanup.

* Reorganizing tests for readability.

* Cleaning up debug message to make a bit more sense.

14 months ago ci: bench: support sse and fix prompt processing time / server: add tokens usage...
Pierrick Hymbert [Sat, 6 Apr 2024 03:40:47 +0000 (05:40 +0200)]
ci: bench: support sse and fix prompt processing time / server: add tokens usage in stream OAI response (#6495)

* ci: bench: support sse and fix prompt processing time
server: add tokens usage in stream mode

* ci: bench: README.md EOL

* ci: bench: remove total pp and tg as it is not accurate

* ci: bench: fix case when there is no token generated

* ci: bench: change to the 95th percentile for pp and tg as it is closer to what the server exports in metrics

* ci: bench: fix finish reason rate

14 months ago gguf.py : add licence and version to gguf writer (#6504)
Brian [Fri, 5 Apr 2024 18:41:38 +0000 (05:41 +1100)]
gguf.py : add licence and version to gguf writer (#6504)

14 months ago readme : update UI list (#6503)
Hoang Nguyen [Fri, 5 Apr 2024 18:39:43 +0000 (11:39 -0700)]
readme : update UI list (#6503)

* Add MindMac to UI list

* Update proprietary description

Co-authored-by: slaren <redacted>
---------

Co-authored-by: slaren <redacted>
14 months ago bench : make n_batch and n_ubatch configurable in Batched bench (#6500)
Ting Sun [Fri, 5 Apr 2024 18:34:53 +0000 (01:34 +0700)]
bench : make n_batch and n_ubatch configurable in Batched bench (#6500)

* bench: make n_batch and n_ubatch configurable

* bench: update doc for batched bench

14 months ago [SYCL] Fixed minor bug when enabling FP16 for non intel targets (#6464)
Ouadie EL FAROUKI [Fri, 5 Apr 2024 13:35:06 +0000 (14:35 +0100)]
[SYCL] Fixed minor bug when enabling FP16 for non intel targets (#6464)

* moved INTEL_MKL guard from gemm_impl to gemm (wrapper)

* Update ggml-sycl.cpp

Co-authored-by: AidanBeltonS <redacted>
---------

Co-authored-by: AidanBeltonS <redacted>
14 months ago readme : add Dot to UI list (#6487)
alexpinel [Thu, 4 Apr 2024 17:22:50 +0000 (18:22 +0100)]
readme : add Dot to UI list (#6487)

14 months ago readme : fix typo (#6481)
Jun Jie [Thu, 4 Apr 2024 17:16:37 +0000 (01:16 +0800)]
readme : fix typo (#6481)

14 months ago server: add cURL support to server Dockerfiles (#6474)
Ed Lepedus [Thu, 4 Apr 2024 16:31:22 +0000 (17:31 +0100)]
server: add cURL support to server Dockerfiles (#6474)

* server: add cURL support to `full.Dockerfile`

* server: add cURL support to `full-cuda.Dockerfile` and `server-cuda.Dockerfile`

* server: add cURL support to `full-rocm.Dockerfile` and `server-rocm.Dockerfile`

* server: add cURL support to `server-intel.Dockerfile`

* server: add cURL support to `server-vulkan.Dockerfile`

* fix typo in `server-vulkan.Dockerfile`

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
14 months ago ci: exempt master branch workflows from getting cancelled (#6486)
Minsoo Cheong [Thu, 4 Apr 2024 16:30:53 +0000 (01:30 +0900)]
ci: exempt master branch workflows from getting cancelled (#6486)

* ci: exempt master branch workflows from getting cancelled

* apply to bench.yml

14 months ago build CI: Name artifacts (#6482)
Ewout ter Hoeven [Thu, 4 Apr 2024 15:08:55 +0000 (17:08 +0200)]
build CI: Name artifacts (#6482)

Name the artifacts in the build CI, so that they get uploaded with separate names, instead of all being put into the same `artifact` ZIP.

It might be possible to further simplify the packing step (in future PRs).

14 months ago server: allow penalizing repetition of newlines on server webpage (#6431)
Shakhar Dasgupta [Thu, 4 Apr 2024 15:03:00 +0000 (11:03 -0400)]
server: allow penalizing repetition of newlines on server webpage (#6431)

14 months ago ci: bench fix concurrency for workflow trigger dispatch with sha1 (#6478)
Pierrick Hymbert [Thu, 4 Apr 2024 14:59:04 +0000 (16:59 +0200)]
ci: bench fix concurrency for workflow trigger dispatch with sha1 (#6478)

14 months ago Correct README link (#6458)
limitedAtonement [Thu, 4 Apr 2024 14:30:02 +0000 (10:30 -0400)]
Correct README link (#6458)

README is called README.md.

14 months ago ci: bench: add more ftype, fix triggers and bot comment (#6466)
Pierrick Hymbert [Thu, 4 Apr 2024 09:57:58 +0000 (11:57 +0200)]
ci: bench: add more ftype, fix triggers and bot comment (#6466)

* ci: bench: change trigger path to not spawn on each PR

* ci: bench: add more file types for phi-2: q8_0 and f16.
- do not show the comment by default

* ci: bench: add seed parameter in k6 script

* ci: bench: name the perf job artefact

* Add iteration in the commit status, reduce again the autocomment

* ci: bench: add per slot metric in the commit status

* Fix trailing spaces

14 months ago common: remove duplicate check for curl (#6471)
Daniel Bevenius [Thu, 4 Apr 2024 07:49:21 +0000 (09:49 +0200)]
common: remove duplicate check for curl (#6471)

This commit removes one of the two identical checks for curl being NULL
in llama_load_model_from_url.

Signed-off-by: Daniel Bevenius <redacted>
14 months ago examples : add GBNF validator program (#5948)
Clint Herron [Thu, 4 Apr 2024 07:44:28 +0000 (03:44 -0400)]
examples : add GBNF validator program (#5948)

* Revising GBNF validator program to be much simpler.

* Changing from streams to using cstdio

* Adding final newline character.

14 months ago server : remove obsolete --memory-f32 option
Georgi Gerganov [Thu, 4 Apr 2024 06:34:58 +0000 (09:34 +0300)]
server : remove obsolete --memory-f32 option

14 months ago server : add option to disable KV offload (#6468)
Xiao-Yong Jin [Thu, 4 Apr 2024 06:33:48 +0000 (01:33 -0500)]
server : add option to disable KV offload (#6468)

14 months ago convert : fix for lint error complaining of bare except (#6470)
Clint Herron [Thu, 4 Apr 2024 06:32:53 +0000 (02:32 -0400)]
convert : fix for lint error complaining of bare except (#6470)

15 months ago A few small fixes to server's README docs (#6428)
Fattire [Wed, 3 Apr 2024 20:22:57 +0000 (13:22 -0700)]
A few small fixes to server's README docs (#6428)

* Typo fix to server's README.md

Fix minor typo ("tonen") in server README.

* server readme grammar/style fixes.

Quickly went through this file to look for inconsistencies in
presentation of defaults, flag options, and looked for typos
and grammar issues.

Not perfect, but hopefully improved.

* Update README.md

Remove an extra space before newline.

15 months ago server : handle exception on wrong type in request (#6452)
JH23X [Wed, 3 Apr 2024 18:09:52 +0000 (20:09 +0200)]
server : handle exception on wrong type in request (#6452)

Co-authored-by: Jonas Holzner <redacted>
15 months ago llama : add SEA-LION support (#6448)
bryanSwk [Wed, 3 Apr 2024 18:05:10 +0000 (02:05 +0800)]
llama : add SEA-LION support (#6448)

* initial commit for sealion support

* add sealion support

* minor fix

* q/k ln and pos_embd only if required

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* minor : clear whitespaces

---------

Co-authored-by: bryan <redacted>
Co-authored-by: Georgi Gerganov <redacted>
15 months ago ci : update checkout, setup-python and upload-artifact to latest (#6456)
Ewout ter Hoeven [Wed, 3 Apr 2024 18:01:13 +0000 (20:01 +0200)]
ci : update checkout, setup-python and upload-artifact to latest (#6456)

* CI: Update actions/checkout to v4

* CI: Update actions/setup-python to v5

* CI: Update actions/upload-artifact to v4

15 months ago server: add cURL support to `server.Dockerfile` (#6461)
Ed Lepedus [Wed, 3 Apr 2024 17:56:37 +0000 (18:56 +0100)]
server: add cURL support to `server.Dockerfile` (#6461)

15 months ago readme : add feature-rich rust bindings (#6465)
Francisco Melo [Wed, 3 Apr 2024 17:53:37 +0000 (18:53 +0100)]
readme : add feature-rich rust bindings (#6465)

15 months ago security : create policy (#6354)
Joyce [Wed, 3 Apr 2024 17:48:07 +0000 (14:48 -0300)]
security : create policy (#6354)

* Create SECURITY.md

Signed-off-by: Joyce <redacted>
* Fix: link on SECURITY.md

Signed-off-by: Joyce <redacted>
* Fix: link on SECURITY.md

Signed-off-by: Joyce <redacted>
* minor

* fix

* fix

---------

Signed-off-by: Joyce <redacted>
Co-authored-by: Georgi Gerganov <redacted>
15 months ago Missing tokenizer.model error during gguf conversion (#6443)
Abhishek Gopinath K [Wed, 3 Apr 2024 15:42:52 +0000 (21:12 +0530)]
Missing tokenizer.model error during gguf conversion (#6443)

Co-authored-by: Jared Van Bortel <redacted>
15 months ago Add OpenChat, Alpaca, Vicuna chat templates (#6397)
kaizau [Wed, 3 Apr 2024 15:24:31 +0000 (23:24 +0800)]
Add OpenChat, Alpaca, Vicuna chat templates (#6397)

* Add openchat chat template

* Add chat template test for openchat

* Add chat template for vicuna

* Add chat template for orca-vicuna

* Add EOS for vicuna templates

* Combine vicuna chat templates

* Add tests for openchat and vicuna chat templates

* Add chat template for alpaca

* Add separate template name for vicuna-orca

* Remove alpaca, match deepseek with jinja output

* Regenerate chat template test with add_generation_prompt

* Separate deepseek bos from system message

* Match openchat template with jinja output

* Remove BOS token from templates, unprefix openchat
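
These templates are selected and applied through `llama_chat_apply_template`; a small usage sketch with the buffer sizing simplified and the template name passed explicitly:

```cpp
#include "llama.h"

#include <string>
#include <vector>

// format a short conversation with a named built-in template,
// e.g. "openchat" or "vicuna"
std::string format_chat(const char * tmpl) {
    const llama_chat_message msgs[] = {
        { "system", "You are a helpful assistant." },
        { "user",   "Hello!"                       },
    };
    std::vector<char> buf(1024); // assumed large enough for this example
    const int32_t n = llama_chat_apply_template(/*model=*/nullptr, tmpl,
                                                msgs, 2, /*add_ass=*/true,
                                                buf.data(), (int32_t) buf.size());
    return n > 0 ? std::string(buf.data(), n) : std::string();
}
```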

15 months ago readme : update hot topics
Georgi Gerganov [Wed, 3 Apr 2024 13:11:15 +0000 (16:11 +0300)]
readme : update hot topics

15 months ago ggml : mul_mat_id use the same tensor for all the experts (#6387)
slaren [Wed, 3 Apr 2024 13:07:05 +0000 (15:07 +0200)]
ggml : mul_mat_id use the same tensor for all the experts (#6387)

* ggml : update mul_mat_id to use the same tensor for all the experts

* update cuda

* minor

* update metal

* update test-backend-ops

* fix cuda

* Update ggml-metal.m

Co-authored-by: Georgi Gerganov <redacted>
* update convert.py

* update convert-hf-to-gguf.py

* update convert.py for mixtral hf models

* Update convert-hf-to-gguf.py

Co-authored-by: Georgi Gerganov <redacted>
* cuda : support non-pow-2 number of experts

* allow quantize to work for split and merged experts models in the same way

* cleanup + disable mmap automatically with split tensors models

* update imatrix

* test-backend-ops : test qwen argsort

* update grok model loading

* llama : add merged experts tensors to the grok tensor map

* minor

* gguf : bump version

* fix quantizing of merged experts

* convert-hf-to-gguf.py : update grok (untested)

* make linter happy

* cuda/argsort : use shared memory instead of pool memory

* convert : fix grok tensor names

* metal : add support for non-pow-2 argsort

* llama : more loader cleanup, better error checking

* cuda : fix warning

* llama : still use mmap for loading old models, but copy the data to a host buffer

* add review note

* llama : remove ffn tensor counting + add sanity check

ggml-ci

* convert : fix handling of n_experts == None

ggml-ci

* imatrix : fix ncall counters

* llama : produce error if imatrix size does not match

* quantize : terminate on errors + trace logs

ggml-ci

* metal : pad shared memory to 16 bytes

---------

Co-authored-by: Georgi Gerganov <redacted>
15 months ago [SYCL] Disable iqx on windows as WA (#6435)
Meng, Hengyu [Wed, 3 Apr 2024 02:34:40 +0000 (10:34 +0800)]
[SYCL] Disable iqx on windows as WA (#6435)

* disable iqx on windows as WA

* array instead of global_memory

15 months ago flake.lock: Update (#6402)
Georgi Gerganov [Mon, 1 Apr 2024 16:05:57 +0000 (19:05 +0300)]
flake.lock: Update (#6402)

Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/44d0940ea560dee511026a53f0e2e2cde489b4d4' (2024-03-23)
  → 'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089' (2024-03-29)

Co-authored-by: github-actions[bot] <redacted>
15 months ago compare-llama-bench.py: fix long hexsha args (#6424)
Johannes Gäßler [Mon, 1 Apr 2024 11:30:43 +0000 (13:30 +0200)]
compare-llama-bench.py: fix long hexsha args (#6424)

15 months ago ci: server: verify deps are coherent with the commit (#6409)
Pierrick Hymbert [Mon, 1 Apr 2024 10:36:40 +0000 (12:36 +0200)]
ci: server: verify deps are coherent with the commit (#6409)

* ci: server: verify deps are coherent with the commit

* ci: server: change the ref to build as now it's a pull event target

15 months ago readme : update hot topics
Georgi Gerganov [Sun, 31 Mar 2024 08:56:30 +0000 (11:56 +0300)]
readme : update hot topics

15 months ago ci: bench: fix Resource not accessible by integration on PR event (#6393)
Pierrick Hymbert [Sat, 30 Mar 2024 10:36:07 +0000 (11:36 +0100)]
ci: bench: fix Resource not accessible by integration on PR event (#6393)

15 months ago Fedora build update (#6388)
Mohammadreza Hendiani [Fri, 29 Mar 2024 21:59:56 +0000 (01:29 +0330)]
Fedora build update (#6388)

* fixed deprecated address

* fixed deprecated address

* fixed deprecated address

* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions

* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions

* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions

* reverted back to only the MIT license

15 months ago split: allow --split-max-size option (#6343)
Xuan Son Nguyen [Fri, 29 Mar 2024 21:34:44 +0000 (22:34 +0100)]
split: allow --split-max-size option (#6343)

* split by max size

* clean up arg parse

* split: ok

* add dry run option

* error on 0 tensors

* be positive

* remove next_metadata_size

15 months ago Vulkan k-quant mmq and ggml-backend offload functionality (#6155)
0cc4m [Fri, 29 Mar 2024 16:29:21 +0000 (17:29 +0100)]
Vulkan k-quant mmq and ggml-backend offload functionality (#6155)

* Fix Vulkan no kv offload incoherence

* Add k-quant mul mat mat shaders

* Rework working buffer allocation, reduces vram use noticeably

Clean up cpu assist code, replaced with ggml-backend offload function

* Default to all dedicated GPUs

* Add fallback for integrated GPUs if no dedicated GPUs are found

* Add debug info which device is allocating memory

* Fix Intel dequant issue

Fix validation issue

* Fix Vulkan GGML_OP_GET_ROWS implementation

* Clean up merge artifacts

* Remove Vulkan warning

15 months ago sync : ggml (#6351)
Georgi Gerganov [Fri, 29 Mar 2024 15:45:46 +0000 (17:45 +0200)]
sync : ggml (#6351)

* sync : ggml

ggml-ci

* cuda : move GGML_CUDA_DMMV constants to dmmv.cuh

---------

Co-authored-by: slaren <redacted>
15 months ago [Model] Add support for xverse (#6301)
hxer7963 [Fri, 29 Mar 2024 13:37:03 +0000 (21:37 +0800)]
[Model] Add support for xverse (#6301)

* Support converting xverse models to gguf format.

* 1. Convert xverse models to gguf;
2. Add LLM_ARCH_XVERSE inference in llama.cpp;
3. Add xverse item in Supported models in README.md;

* gguf-py: remove redundant logs
* llama: remove the init_mapping_prefetch custom parameter

* llama.cpp: Include the changes from #6122 to exclude the unused outputs of the last layers.

* Fix format issues
* Remove duplicate set kqv_out to llm_build_kv

* Update llama.cpp

---------

Co-authored-by: willhe <redacted>
Co-authored-by: willhe <redacted>
15 months ago ci : fix BGE wget (#6383)
Georgi Gerganov [Fri, 29 Mar 2024 12:34:28 +0000 (14:34 +0200)]
ci : fix BGE wget (#6383)

ggml-ci

15 months ago readme : add project (#6356)
zhouwg [Fri, 29 Mar 2024 07:33:46 +0000 (15:33 +0800)]
readme : add project (#6356)

* readme: add Android UI binding

* Update README.md

15 months ago cmake : add explicit metal version options (#6370)
Matt Clayton [Fri, 29 Mar 2024 07:27:42 +0000 (03:27 -0400)]
cmake : add explicit metal version options (#6370)

* cmake: add explicit metal version options

* Update CMakeLists.txt

---------

Co-authored-by: Georgi Gerganov <redacted>
15 months ago llama : remove redundant reshape in build_kv_store (#6369)
Daniel Bevenius [Fri, 29 Mar 2024 07:23:22 +0000 (08:23 +0100)]
llama : remove redundant reshape in build_kv_store (#6369)

* llama: remove redundant reshape in build_kv_store

This commit removes the reshape of the V matrix in the build_kv_store.

The motivation for this is that V matrix has the shape:
```console
(gdb) p *v_cur
$46 = {type = GGML_TYPE_F32, backend = GGML_BACKEND_TYPE_CPU,
       buffer = 0x0, ne = {4096, 512, 1, 1}, nb = {4, 16384, 8388608,
       8388608}, op = GGML_OP_MUL_MAT, op_params = {
       0 <repeats 16 times>}, flags = 0, grad = 0x0,
       src = {0xb496b0, 0x7ffef1c40950, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
       0x0, 0x0}, perf_runs = 0, perf_cycles = 0, perf_time_us = 0,
       view_src = 0x0, view_offs = 0, data = 0x0,
       name = "Vcur-0", '\000' <repeats 57 times>, extra = 0x0,
       padding = "\000\000\000\000\000\000\000"}
```
And after reshaping this tensor we get:
```console
(gdb) p *ggml_reshape_2d(ctx, v_cur, n_embd_v_gqa, n_tokens)
$44 = {type = GGML_TYPE_F32, backend = GGML_BACKEND_TYPE_CPU,
       buffer = 0x0, ne = {4096, 512, 1, 1}, nb = {4, 16384, 8388608,
       8388608}, op = GGML_OP_RESHAPE, op_params = {
       0 <repeats 16 times>}, flags = 0, grad = 0x0,
       src = {0x7ffef1c40e00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
       0x0}, perf_runs = 0, perf_cycles = 0, perf_time_us = 0,
       view_src = 0x7ffef1c40e00, view_offs = 0, data = 0x0,
       name = "Vcur-0 (reshaped)", '\000' <repeats 46 times>, extra = 0x0,
       padding = "\000\000\000\000\000\000\000"}
```
I noticed that the `src` and `view_src` fields are different but that the
dimensions are the same. From the code comment it seems like the reshape
call is not needed and perhaps the above can motivate the removal of the
reshape call.

Signed-off-by: Daniel Bevenius <redacted>
* llama : add assert

---------

Signed-off-by: Daniel Bevenius <redacted>
Co-authored-by: Georgi Gerganov <redacted>
15 months ago convert : allow conversion of Mistral HF models (#6144)
Pedro Cuenca [Fri, 29 Mar 2024 07:15:00 +0000 (08:15 +0100)]
convert : allow conversion of Mistral HF models (#6144)

* Allow conversion of Mistral HF models

* Homogenize Llama, Mistral, Mixtral under the same entry.

* Fix tokenizer, permute tensors

* Use sentencepiece tokenizer, or fall back to hfft.

* convert-hf : small fix for mypy

* convert-hf : fix duplicated block_count

* convert-hf : add vocab size to metadata

---------

Co-authored-by: Jared Van Bortel <redacted>
15 months ago readme : add notice for UI list
Georgi Gerganov [Thu, 28 Mar 2024 20:56:03 +0000 (22:56 +0200)]
readme : add notice for UI list

15 months ago [SYCL] Revisited & updated SYCL build documentation (#6141)
Ouadie EL FAROUKI [Thu, 28 Mar 2024 16:01:47 +0000 (16:01 +0000)]
[SYCL] Revisited & updated SYCL build documentation (#6141)

* Revisited & updated SYCL build documentation

* removed outdated comment

* Addressed PR comments

* Trimmed whitespace

* added newline at end of file

15 months ago convert : refactor vocab selection logic (#6355)
Jared Van Bortel [Thu, 28 Mar 2024 15:44:36 +0000 (11:44 -0400)]
convert : refactor vocab selection logic (#6355)

15 months ago llava : fix MobileVLM (#6364)
Ziang Wu [Thu, 28 Mar 2024 14:33:10 +0000 (22:33 +0800)]
llava : fix MobileVLM (#6364)

* fix empty bug

* Update MobileVLM-README.md

added more results on devices

* Update MobileVLM-README.md

* Update MobileVLM-README.md

* Update MobileVLM-README.md

* Update MobileVLM-README.md

* Update MobileVLM-README.md

* Update MobileVLM-README.md

* Update examples/llava/MobileVLM-README.md

Co-authored-by: Georgi Gerganov <redacted>
* Update MobileVLM-README.md

remove gguf links

---------

Co-authored-by: Georgi Gerganov <redacted>
15 months ago llama : fix command-r inference when omitting outputs (#6367)
compilade [Thu, 28 Mar 2024 12:05:54 +0000 (08:05 -0400)]
llama : fix command-r inference when omitting outputs (#6367)

15 months ago ci: bench: fix master not schedule, fix commit status failed on external repo (#6365)
Pierrick Hymbert [Thu, 28 Mar 2024 10:27:56 +0000 (11:27 +0100)]
ci: bench: fix master not schedule, fix commit status failed on external repo (#6365)

15 months ago doc: fix outdated default value of batch size (#6336)
Ting Sun [Thu, 28 Mar 2024 08:51:06 +0000 (16:51 +0800)]
doc: fix outdated default value of batch size (#6336)

* doc: fix outdated default value of batch size

* doc: add doc for ubatch-size

15 months ago server : stop gracefully on SIGTERM (#6348)
Eric Zhang [Thu, 28 Mar 2024 08:50:48 +0000 (16:50 +0800)]
server : stop gracefully on SIGTERM (#6348)
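
Graceful SIGTERM handling generally follows the pattern below; a generic sketch, not the server's actual handler: the signal handler only flips a flag, and the main loop drains in-flight work before exiting.

```cpp
#include <atomic>
#include <csignal>

static std::atomic<bool> g_running{true};

// signal handlers must stay async-signal-safe: just set a flag
static void on_sigterm(int) {
    g_running.store(false);
}

int main() {
    std::signal(SIGTERM, on_sigterm);
    while (g_running.load()) {
        // accept and serve requests ...
    }
    // finish in-flight requests, release resources, exit cleanly
    return 0;
}
```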

15 months ago nix: removed unnecessary indentation
hutli [Wed, 27 Mar 2024 18:17:30 +0000 (19:17 +0100)]
nix: removed unnecessary indentation

15 months ago nix: moved blas availability check to package inputs so it is still overridable
hutli [Wed, 27 Mar 2024 18:14:28 +0000 (19:14 +0100)]
nix: moved blas availability check to package inputs so it is still overridable

15 months ago using blas.meta.available to check host platform
hutli [Wed, 27 Mar 2024 17:10:08 +0000 (18:10 +0100)]
using blas.meta.available to check host platform

15 months ago only using explicit blas if hostPlatform is allowed
hutli [Wed, 27 Mar 2024 16:25:05 +0000 (17:25 +0100)]
only using explicit blas if hostPlatform is allowed

15 months ago nix: .#windows: proper cross-compilation set-up
Someone Serge [Tue, 26 Mar 2024 16:22:42 +0000 (16:22 +0000)]
nix: .#windows: proper cross-compilation set-up

Take all dependencies from the cross stage, rather than only stdenv

15 months ago nix: package: don't introduce the dependency on python
Someone Serge [Tue, 26 Mar 2024 16:22:07 +0000 (16:22 +0000)]
nix: package: don't introduce the dependency on python

- The generic /usr/bin/env shebangs are good enough
- Python deps are provisioned in the devShells
- We need to be able to leave python out at least on windows (currently breaks eval)

15 months ago nix: .#windows: init
hutli [Thu, 15 Feb 2024 13:25:04 +0000 (14:25 +0100)]
nix: .#windows: init

initial nix build for windows using zig

mingwW64 build

removes nix zig windows build

removes nix zig windows build

removed unnecessary glibc.static

removed unnecessary import of pkgs in nix

fixed missing trailing newline on non-windows nix builds

overriding stdenv when building for crosscompiling to windows in nix

better variables when crosscompiling windows in nix

cross compile windows on macos

removed trailing whitespace

remove unnecessary overwrite of "CMAKE_SYSTEM_NAME" in nix windows build

nix: keep file extension when copying result files during cross compile for windows

nix: better checking for file extensions when using MinGW

nix: using hostPlatform instead of targetPlatform when cross compiling for Windows

using hostPlatform.extensions.executable to extract executable format

15 months ago doc: fix typo in MobileVLM-README.md (#6181)
Ziang Wu [Thu, 28 Mar 2024 04:03:30 +0000 (12:03 +0800)]
doc: fix typo in MobileVLM-README.md (#6181)

15 months ago [SYCL] fix set main gpu crash (#6339)
Neo Zhang Jianyu [Thu, 28 Mar 2024 00:55:24 +0000 (08:55 +0800)]
[SYCL] fix set main gpu crash (#6339)

15 months ago server: continuous performance monitoring and PR comment (#6283)
Pierrick Hymbert [Wed, 27 Mar 2024 19:26:49 +0000 (20:26 +0100)]
server: continuous performance monitoring and PR comment (#6283)

* server: bench: init

* server: bench: reduce list of GPU nodes

* server: bench: fix graph, fix output artifact

* ci: bench: add mermaid in case of image cannot be uploaded

* ci: bench: more resilient, more metrics

* ci: bench: trigger build

* ci: bench: fix duration

* ci: bench: fix typo

* ci: bench: fix mermaid values, markdown generated

* typo on the step name

Co-authored-by: Xuan Son Nguyen <redacted>
* ci: bench: trailing spaces

* ci: bench: move images in a details section

* ci: bench: reduce bullet point size

---------

Co-authored-by: Xuan Son Nguyen <redacted>
15 months ago nix: ci: don't test cuda and rocm (for now)
Someone Serge [Wed, 27 Mar 2024 16:17:46 +0000 (16:17 +0000)]
nix: ci: don't test cuda and rocm (for now)

Until https://github.com/ggerganov/llama.cpp/issues/6346 is resolved

15 months ago ggml : fix bounds checking of zero size views (#6347)
slaren [Wed, 27 Mar 2024 14:07:50 +0000 (15:07 +0100)]
ggml : fix bounds checking of zero size views (#6347)
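
The tricky case is presumably a zero-size view whose data pointer lands exactly at the end of its parent buffer; an illustrative guess at the failure mode, not the actual fix:

```cpp
#include "ggml.h"

// a zero-element view placed at the very end of a tensor is legal, but its
// data pointer equals one-past-the-end of the buffer, which a naive bounds
// check of the form (ptr < base + size) would reject
void make_empty_view(struct ggml_context * ctx) {
    struct ggml_tensor * t = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 128);
    struct ggml_tensor * v = ggml_view_1d(ctx, t, 0, 128 * sizeof(float));
    (void) v;
}
```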

15 months ago make : whitespace
Georgi Gerganov [Wed, 27 Mar 2024 13:02:49 +0000 (15:02 +0200)]
make : whitespace

15 months ago embedding : show full embedding for single prompt (#6342)
howlger [Wed, 27 Mar 2024 11:15:44 +0000 (12:15 +0100)]
embedding : show full embedding for single prompt (#6342)

* embedding : show full embedding for single prompt

To support the use case of creating an embedding for a given prompt, the entire embedding needs to be printed, not just the first part.

Also, show the cosine similarity matrix only if there is more than one prompt, as the matrix for a single prompt is always `1.00`.

* Update examples/embedding/embedding.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
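
For reference, the cosine similarity the example reports between embeddings is the standard dot-product formula; a self-contained sketch:

```cpp
#include <cmath>
#include <vector>

// cosine similarity between two embedding vectors of equal length;
// identical directions give 1.0, orthogonal vectors give 0.0
float cosine_similarity(const std::vector<float> & a, const std::vector<float> & b) {
    float dot = 0.0f, na = 0.0f, nb = 0.0f;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-8f);
}
```
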
15 months ago [SYCL] Fix batched impl for NVidia GPU (#6164)
AidanBeltonS [Wed, 27 Mar 2024 08:16:40 +0000 (08:16 +0000)]
[SYCL] Fix batched impl for NVidia GPU (#6164)

* Fix batched impl

* Maintain previous behaviour for igpu

* retrigger CI

---------

Co-authored-by: Abhilash Majumder <redacted>
15 months ago Make IQ1_M work for QK_K = 64 (#6327)
Kawrakow [Wed, 27 Mar 2024 07:44:27 +0000 (08:44 +0100)]
Make IQ1_M work for QK_K = 64 (#6327)

* iq1_m: make it work for QK_K = 64 (WIP)

* iq1_m: make it work for QK_K = 64 (scalar and AVX2)

* iq1_m: QK_K = 64 seems to work on Metal and ARM_NEON

---------

Co-authored-by: Iwan Kawrakow <redacted>
15 months ago common : change --no-penalize-nl to --penalize-nl (#6334)
Sigbjørn Skjæret [Wed, 27 Mar 2024 07:23:10 +0000 (08:23 +0100)]
common : change --no-penalize-nl to --penalize-nl (#6334)

* Change --no-penalize-nl to --penalize-nl

* Update documentation too

15 months ago llama2c : open file as binary (#6332)
Georgi Gerganov [Wed, 27 Mar 2024 07:16:02 +0000 (09:16 +0200)]
llama2c : open file as binary (#6332)
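
The pitfall being fixed is platform-specific text-mode translation; a minimal illustration:

```cpp
#include <cstdio>

// on Windows, "r" opens in text mode: CRLF pairs are translated and a 0x1A
// byte is treated as end-of-file, silently corrupting binary reads.
// binary files must always be opened with "rb".
FILE * open_checkpoint(const char * path) {
    return std::fopen(path, "rb");
}
```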