git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Frank Mai [Mon, 17 Jun 2024 14:11:08 +0000 (22:11 +0800)]
fix: divide 0 exception in mamba (#7932)
Signed-off-by: thxCode <redacted>
Markus Tavenrath [Mon, 17 Jun 2024 14:10:15 +0000 (16:10 +0200)]
Implement non-mapped async IO for CUDA on Windows. (#7896)
* Implement non-mapped async IO for CUDA on Windows. On a fast Gen5 NVMe drive this change improves model load time by >3x, while on any other drive it should be the same (or slightly faster).
* Free resources except for backend.
* Change assertions to exceptions in llama_file, find correct cuda backend to create CUDA resources and respect the use_mmap flag again for CUDA.
* Apply suggestions from code review
Co-authored-by: slaren <redacted>
* Fix editorconfig and unused variable
* Fix issues with Windows build
---------
Co-authored-by: slaren <redacted>
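A minimal sketch of the overlapped-I/O pattern on Windows described above, reading into a pinned host buffer and feeding an async device copy. This is illustrative only: the function name and single-request structure are assumptions, not the PR's actual pipelined implementation.

```cpp
// Sketch: read a file region with Win32 overlapped (asynchronous) I/O into a
// pinned host buffer, then copy it to the GPU with an async H2D transfer.
#include <windows.h>
#include <cuda_runtime.h>
#include <cstdint>

static bool load_chunk_async(const char * path, void * dev_dst, uint64_t offset, size_t size) {
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    if (file == INVALID_HANDLE_VALUE) {
        return false;
    }

    void * host_buf = nullptr; // pinned memory enables a truly asynchronous copy
    if (cudaHostAlloc(&host_buf, size, cudaHostAllocDefault) != cudaSuccess) {
        CloseHandle(file);
        return false;
    }

    OVERLAPPED ov = {};
    ov.Offset     = (DWORD) (offset & 0xFFFFFFFFu);
    ov.OffsetHigh = (DWORD) (offset >> 32);
    ov.hEvent     = CreateEventA(nullptr, TRUE, FALSE, nullptr);

    DWORD n_read = 0;
    if (ReadFile(file, host_buf, (DWORD) size, nullptr, &ov) ||
        GetLastError() == ERROR_IO_PENDING) {
        GetOverlappedResult(file, &ov, &n_read, TRUE); // block until the read completes
    }

    const bool ok = n_read == size;
    if (ok) {
        cudaMemcpyAsync(dev_dst, host_buf, size, cudaMemcpyHostToDevice, 0);
        cudaStreamSynchronize(0);
    }

    cudaFreeHost(host_buf);
    CloseHandle(ov.hEvent);
    CloseHandle(file);
    return ok;
}
```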
Georgi Gerganov [Mon, 17 Jun 2024 08:09:20 +0000 (11:09 +0300)]
rpc : fix load/store misaligned addresses (#7948)
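Misaligned accesses typically arise when an offset into a serialized byte buffer is cast directly to a wider pointer type. The usual portable fix, presumably along these lines, is to route loads and stores through memcpy (helper names here are illustrative):

```cpp
#include <cstdint>
#include <cstring>

// Dereferencing a misaligned pointer, e.g. *(const uint64_t *) ptr, is
// undefined behavior and can fault on some targets; memcpy compiles down to a
// plain unaligned load/store on architectures that allow them.
static uint64_t load_u64(const void * p) {
    uint64_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}

static void store_u64(void * p, uint64_t v) {
    std::memcpy(p, &v, sizeof(v));
}
```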
Brian [Mon, 17 Jun 2024 05:25:20 +0000 (15:25 +1000)]
gguf-dump.py: add --markdown dump output (#7853)
* gguf-dump.py: add --markdown dump output
* gguf-dump.py: Add toc
* gguf-dump.py: use standard tensor name lookup. Also add tensor ID field
* gguf-dump.py: Add tensor overview count
* gguf-dump.py: fix array preview
* gguf-dump.py: markdownTableWithAlignmentSupport() added
* Add type hints and spacing
Co-authored-by: compilade <redacted>
* gguf-dump.py: prettify dimension
* gguf-dump: right align element count
* gguf-dump.py: element count autosizing
* Apply suggestions from code review
Co-authored-by: compilade <redacted>
---------
Co-authored-by: compilade <redacted>
Neo Zhang [Mon, 17 Jun 2024 03:17:07 +0000 (11:17 +0800)]
[SYCL] Update README-sycl.md for Chapter "Recommended release" and "News" (#7946)
* Update README-sycl.md
* Update README-sycl.md
* Update README-sycl.md
* Update README-sycl.md
Calvin Laurenson [Sun, 16 Jun 2024 22:23:04 +0000 (15:23 -0700)]
Add support for sqrt on CUDA (#7953)
* cuda sqrt support
* enable cuda in pca
* fix comments in pca
* add test
* add sqrt to ggml_backend_cuda_supports_op
* fix test
* new line
* Use F32 sqrtf instead of F64 sqrt
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Georgi Gerganov [Tue, 11 Jun 2024 14:39:01 +0000 (17:39 +0300)]
cuda : fix bounds check for src0 rows in MMVQ kernel (whisper/2231)
* cuda : fix bounds check for src0 rows in MMVQ kernel
* Update ggml-cuda/mmvq.cu
Co-authored-by: Johannes Gäßler <redacted>
---------
Co-authored-by: Johannes Gäßler <redacted>
Hong Bo PENG [Sun, 16 Jun 2024 08:53:11 +0000 (16:53 +0800)]
ggml : fix and optimize ppc64le (ggml/849)
* fix compile issues introduced by loongarch_asx
* restore quant changes to merge
* fix compile issues introduced by loongarch_asx
* further optimize by using vec_msum & vec_sum4s on ppc64le
Daniel Bevenius [Sun, 16 Jun 2024 08:51:18 +0000 (10:51 +0200)]
ggml : remove duplicate include of ggml-common.h (ggml/853)
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Sun, 16 Jun 2024 16:16:21 +0000 (19:16 +0300)]
flake.lock: Update (#7951)
Georgi Gerganov [Sun, 16 Jun 2024 11:51:40 +0000 (14:51 +0300)]
unicode : avoid char32_t (#7957)
ggml-ci
hopkins385 [Sun, 16 Jun 2024 11:51:18 +0000 (13:51 +0200)]
readme : update UI list [no ci] (#7958)
Georgi Gerganov [Sun, 16 Jun 2024 11:50:12 +0000 (14:50 +0300)]
ggml : fix handling of zero blocks in IQ quants (#7955)
ggml-ci
Georgi Gerganov [Sun, 16 Jun 2024 07:46:51 +0000 (10:46 +0300)]
github : update pr template
0cc4m [Sun, 16 Jun 2024 05:17:31 +0000 (07:17 +0200)]
Vulkan Shader Refactor, Memory Debugging Option (#7947)
* Refactor shaders, extract GLSL code from ggml_vk_generate_shaders.py into vulkan-shaders directory
* Improve debug log code
* Add memory debug output option
* Fix flake8
* Fix unnecessary high llama-3 VRAM use
Xuan Son Nguyen [Sat, 15 Jun 2024 16:53:40 +0000 (18:53 +0200)]
Add `cvector-generator` example (#7514)
* add control-vector-generator
* calc diff
* add comments
* proof-of-concept stdlib implementation
Implements PCA and file writing using mostly standard libraries. The output is recognized as a functional control vector, but outputs gibberish.
* param parsing, refactor, comments
Added basic command-line parameters for outfile and one each positive/negative prompt.
Refactored some messy code in PCA computation and GGUF exporting.
Left a bunch of comments regarding further work needed.
* example template completions
Implements an example template set built from the positive/negative prompts like the control vector Python implementation.
* add multi prompts, multi-thread for PCA
* fix mem error
* add debugs
* fix matrix transpose multiplication
you have got to be kidding me
* preliminary template/multiprompt support
the model is running out of context (segfaulting) and that ought to be fixed, but other than that it looks goodish
* fix zero output & param parsing, functional templating
fixed a bug where the output file had no tensor data/was all zero
fixed a bug where single hyphen flags were not being correctly parsed
implements creation of templated prompts from input (still need to adapt based on model)
* fix square_diff matmul index range and CRLF->LF line endings
fixed a logic error where square_diff would not multiply all rows
fixed a formatting error where the provided completions.txt had CRLF line endings
* add command-line args for num threads, num completions file lines, always reload model
refactored a few things and did what the commit message says on the tin
* code aestheticization
* fix compiler warnings
* in-series multithreading for prompt embedding?
added commented-out code to attempt to start implementing multithreading for embedding in main
* remove unnecessary multithreading
* interim fix memory leak
* translated everything but PCA (I think)
* tentatively translate the rest
* fix ggml errors and make new ones
at least it compiles and runs
* fix cb_eval
* temporary commit while I move dev environments
it finally outputs a functioning control vector - "functioning" in the sense that it can be loaded and it clearly has the right idea, but makes the model incoherent
* update debug statements
* pre-tokenize so we can allocate correct memory to ctx_diffs_wrapped
* update comments
* (wip) refactor
* clean up PCA ggml implementation
* fix shape of v_diff_original
* add n_batch for pca
* working version
* remember to copy back the last_eigenvector
* fix n_completions
* bring back n_completions
* default n_pca_batch to 20
* fix macos build
* add to makefile all targets
* use ggml_format_name
* add readme
* fix .editorconfig
* use ggml_backend_tensor_copy
* attempt to fix compile problem on mac
* fix compile warn
* reuse allocr
* move param parser to common
* better error handling
* clean up a bit
* add print_usage
* shorten help msg
* beautify help msg
* escape prompt by default
* change compile target to llama-cvector-generator
* typo
* disable GPU for PCA
* code style
---------
Co-authored-by: Christian Zhou-Zheng <redacted>
Meng, Hengyu [Sat, 15 Jun 2024 06:05:10 +0000 (14:05 +0800)]
[SYCL] remove global variables (#7710)
* separate DPCT helpers outside
* replace global variables with context
* remove useless extra
* update mul_mat condition
* remove duplicate buft initialization
* remove duplicate extra and global work group size
* remove useless backend check
* remove duplicated extras
* use macro for group_size and remove cuda-related
olexiyb [Fri, 14 Jun 2024 17:28:34 +0000 (20:28 +0300)]
ci : fix macos x86 build (#7940)
To keep the behavior of the old `macos-latest` runner we should pin `macos-12`
Potentially will fix: https://github.com/ggerganov/llama.cpp/issues/6975
Johannes Gäßler [Fri, 14 Jun 2024 16:41:49 +0000 (18:41 +0200)]
CUDA: faster q2_K, q3_K MMQ + int8 tensor cores (#7921)
* CUDA: faster q2_K, q3_K MMQ + int8 tensor cores
* try CI fix
* try CI fix
* try CI fix
* fix data race
* revert q2_K precision-related changes
Georgi Gerganov [Fri, 14 Jun 2024 14:14:09 +0000 (17:14 +0300)]
metal : utilize max shared memory for mul_mat_id (#7935)
Radoslav Gerganov [Fri, 14 Jun 2024 13:47:41 +0000 (16:47 +0300)]
llama-bench : fix RPC indication (#7936)
Show "<backend_name>+RPC" when RPC offloading is used
Sigbjørn Skjæret [Fri, 14 Jun 2024 10:20:04 +0000 (12:20 +0200)]
llama : more checks before assuming FIM tokens (#7644)
* More checks before assuming FIM tokens for Llama arch
* extensive token check
Elaine [Fri, 14 Jun 2024 10:16:49 +0000 (13:16 +0300)]
convert : add Poro-34B-chat tokenizer support (#7713)
* support for Poro chat pre-tokenizer
* add support for Poro pre-tokenizer
* Update convert-hf-to-gguf-update.py
Co-authored-by: Georgi Gerganov <redacted>
* Change Poro-34B-chat to poro-chat
* Change Poro-34B-chat to poro-chat
* Update convert-hf-to-gguf-update.py
* Update llama.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Radoslav Gerganov [Thu, 13 Jun 2024 12:18:44 +0000 (15:18 +0300)]
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
Galunid [Thu, 13 Jun 2024 07:42:41 +0000 (09:42 +0200)]
readme : Remove outdated instructions from README.md (#7914) [no ci]
slaren [Thu, 13 Jun 2024 01:11:35 +0000 (03:11 +0200)]
move BLAS to a separate backend (#6210)
* move BLAS to a separate backend
* rename GGML_USE_OPENBLAS to GGML_USE_BLAS
* alloc : reuse the same buffer when the same buffer type is used multiple times
* set number of threads automatically for openblas and blis
* sched : print assignments when GGML_SCHED_DEBUG env variable is set
* sched : allow ops with weights on an incompatible buffer type
This will cause the weight to be copied to a backend that supports the
op, which is very costly. The weight should have been stored in a buffer
of a backend that can run the op, but llama.cpp cannot do this
automatically at the moment.
---------
Co-authored-by: Georgi Gerganov <redacted>
Olivier Chafik [Wed, 12 Jun 2024 23:41:52 +0000 (00:41 +0100)]
`build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df481fd8936cd7d098e3065d7de378930.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <redacted>
Johannes Gäßler [Wed, 12 Jun 2024 15:41:51 +0000 (17:41 +0200)]
CUDA: fix broken oob check for FA vec f32 kernel (#7904)
Georgi Gerganov [Wed, 12 Jun 2024 13:00:22 +0000 (16:00 +0300)]
tests : add non-cont unary tests (#7857)
* tests : add non-cont unary tests
* ggml : update unary asserts and "supports_op"
ggml-ci
Georgi Gerganov [Wed, 12 Jun 2024 12:24:20 +0000 (15:24 +0300)]
ggml : improve ggml_is_contiguous logic (#7856)
* ggml : improve ggml_is_contiguous logic
ggml-ci
* ggml : support more contiguous cases
ggml-ci
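Informally, a tensor is contiguous when every byte stride follows directly from the previous dimension's stride and extent. A simplified sketch of that check; it ignores the quantized block-size handling the real ggml code needs, and the struct is a stand-in for ggml_tensor:

```cpp
#include <cstdint>
#include <cstddef>

// Simplified: a 4-D tensor is contiguous if each dimension's byte stride nb[i]
// follows directly from the previous one, i.e. the data has no gaps or
// permutations. The real ggml check also accounts for quantized block sizes.
struct tensor {
    int64_t ne[4]; // elements per dimension
    size_t  nb[4]; // byte stride per dimension
    size_t  type_size;
};

static bool is_contiguous(const tensor & t) {
    if (t.nb[0] != t.type_size) {
        return false; // innermost elements must be densely packed
    }
    for (int i = 1; i < 4; i++) {
        if (t.nb[i] != t.nb[i - 1] * (size_t) t.ne[i - 1]) {
            return false;
        }
    }
    return true;
}
```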
Georgi Gerganov [Wed, 12 Jun 2024 11:42:29 +0000 (14:42 +0300)]
server : restore numeric prompts (#7883)
Meng, Hengyu [Wed, 12 Jun 2024 09:05:35 +0000 (17:05 +0800)]
update intel docker oneapi-basekit to 2024.1.1-devel-ubuntu22.04 (#7894)
In addition, this reverts a workaround for the upstream issue with expired Intel GPG package keys in 2024.0.1-devel-ubuntu22.04.
Patrice Ferlet [Wed, 12 Jun 2024 01:18:16 +0000 (03:18 +0200)]
Fix a typo and add Fedora 40 package to install for Vulkan (#7794) [no ci]
Fix "appropiate" to "appropriate" and add Fedora 40 packages to install to compile with Vulkan support
k.h.lai [Tue, 11 Jun 2024 19:26:05 +0000 (03:26 +0800)]
vulkan: select only one device for single gpu with multiple drivers (#7582)
0cc4m [Tue, 11 Jun 2024 19:20:29 +0000 (21:20 +0200)]
Update Vulkan RoPE implementation (#7818)
* Update Vulkan RoPE implementation
* Return nullptr on alloc_buffer when allocation fails, instead of throwing an exception
Minor fixes
* Fix segfault when running out of VRAM
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Deven Mistry [Tue, 11 Jun 2024 16:18:58 +0000 (12:18 -0400)]
fix broken link in pr template (#7880) [no ci]
* fix broken link in pr template
* Update pull_request_template.md [no ci]
---------
Co-authored-by: Brian <redacted>
Brian [Tue, 11 Jun 2024 14:43:41 +0000 (00:43 +1000)]
github: move PR template to .github/ root (#7868)
Johannes Gäßler [Tue, 11 Jun 2024 12:45:40 +0000 (14:45 +0200)]
llama-bench: more compact markdown tables (#7879)
Georgi Gerganov [Tue, 11 Jun 2024 07:10:20 +0000 (10:10 +0300)]
tests : check the Python version (#7872)
ggml-ci
Johannes Gäßler [Tue, 11 Jun 2024 06:26:07 +0000 (08:26 +0200)]
CUDA: int8 tensor cores for MMQ (q4_K, q5_K, q6_K) (#7860)
slaren [Tue, 11 Jun 2024 05:59:20 +0000 (07:59 +0200)]
fix CUDA CI by using a windows-2019 image (#7861)
* try to fix CUDA ci with --allow-unsupported-compiler
* trigger when build.yml changes
* another test
* try exllama/bdashore3 method
* install vs build tools before cuda toolkit
* try win-2019
Olivier Chafik [Tue, 11 Jun 2024 01:22:57 +0000 (02:22 +0100)]
json: refine constraint for whitespace to avoid runaways yet allow pretty print (#7866)
Olivier Chafik [Tue, 11 Jun 2024 00:00:30 +0000 (01:00 +0100)]
`json`: document schema conversion in GBNF readme, align manual grammar examples & converters (#7841)
* json: fix char pattern in grammar converters
* json: prevent number precision & whitespace runaways in example grammars
* json: add doc to grammar readme
Jared Van Bortel [Mon, 10 Jun 2024 22:32:10 +0000 (18:32 -0400)]
cmake : fix CMake requirement for CUDA (#7821)
slaren [Mon, 10 Jun 2024 12:18:41 +0000 (14:18 +0200)]
ci : try win-2019 on server windows test (#7854)
Georgi Gerganov [Mon, 10 Jun 2024 12:00:15 +0000 (15:00 +0300)]
examples : remove --instruct remnants (#7846)
Georgi Gerganov [Mon, 10 Jun 2024 11:59:55 +0000 (14:59 +0300)]
server : improve "prompt" handling (#7847)
Johannes Gäßler [Mon, 10 Jun 2024 09:45:13 +0000 (11:45 +0200)]
CUDA: use tensor cores for MMQ (#7676)
* CUDA: int8 tensor cores for MMQ (legacy quants)
* fix out-of-bounds writes
* __builtin_assume -> GGML_CUDA_ASSUME
* fix writeback returning too early
Ben Ashbaugh [Mon, 10 Jun 2024 09:21:31 +0000 (02:21 -0700)]
use the correct SYCL context for host USM allocations (#7777)
Signed-off-by: Ben Ashbaugh <redacted>
Georgi Gerganov [Sun, 9 Jun 2024 23:04:50 +0000 (02:04 +0300)]
flake.lock: Update (#7838)
Flake lock file updates:
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/ad57eef4ef0659193044870c731987a6df5cf56b?narHash=sha256-SzDKxseEcHR5KzPXLwsemyTR/kaM9whxeiJohbL04rs%3D' (2024-05-29)
  → 'github:NixOS/nixpkgs/051f920625ab5aabe37c920346e3e69d7d34400e?narHash=sha256-4q0s6m0GUcN7q%2BY2DqD27iLvbcd1G50T2lv08kKxkSI%3D' (2024-06-07)
Co-authored-by: github-actions[bot] <redacted>
Georgi Gerganov [Sun, 9 Jun 2024 17:19:35 +0000 (20:19 +0300)]
imatrix : handle partial entries (#7833)
Nicolás Pérez [Sun, 9 Jun 2024 15:24:29 +0000 (11:24 -0400)]
docs: Added initial PR template with directions for doc only changes and squash merges [no ci] (#7700)
This commit adds pull_request_template.md and CONTRIBUTING.md. They explain to contributors the need to rate PR complexity, when to add [no ci], and how to format PR titles and descriptions.
Co-authored-by: Brian <redacted>
Co-authored-by: compilade <redacted>
mgroeber9110 [Sun, 9 Jun 2024 10:50:35 +0000 (12:50 +0200)]
server: do not remove whitespace at the start of a completion chunk (#7830)
Johannes Gäßler [Sun, 9 Jun 2024 07:42:25 +0000 (09:42 +0200)]
CUDA: revise q8_1 data layout for mul_mat_q (#7824)
sasha0552 [Sun, 9 Jun 2024 06:39:25 +0000 (06:39 +0000)]
convert-hf : set the model name based on cli arg, if present (#7693)
`--model-name` argument was added a while ago but did not do anything.
This commit fixes this issue and enables this feature.
compilade [Sun, 9 Jun 2024 02:47:25 +0000 (22:47 -0400)]
convert-hf : match model part name prefix and suffix (#7687)
In #7075, to fix the conversion of (some) models using model-00001-of-00001.safetensors instead of model.safetensors for a single model part, we simply used the same logic as the part count to get the part names.
But this doesn't always work correctly, e.g. when unusual additional model files like consolidated.safetensors in https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3 are present.
This commit matches both the prefix and the suffix of the model part names, which should fix the problem without breaking any previously-supported upstream models. According to a report by @teleprint-me some persistent problems remain, but this shall do in the meantime.
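The change itself lives in the Python converter, but the idea can be sketched generically: accept a part name only if it carries both the expected prefix and suffix. A hypothetical illustration:

```cpp
#include <string>

// Hypothetical check: accept "model-00001-of-00003.safetensors" but reject
// unrelated files such as "consolidated.safetensors" by requiring both the
// expected prefix and suffix.
static bool is_model_part(const std::string & name) {
    const std::string prefix = "model-";
    const std::string suffix = ".safetensors";
    return name.size() > prefix.size() + suffix.size() &&
           name.compare(0, prefix.size(), prefix) == 0 &&
           name.compare(name.size() - suffix.size(), suffix.size(), suffix) == 0;
}
```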
compilade [Sun, 9 Jun 2024 02:34:29 +0000 (22:34 -0400)]
gguf-py : decouple adding metadata from writing in GGUFWriter (#7827)
The main change of this PR is to consolidate GGUFWriter.add_key and GGUFWriter.add_val into GGUFWriter.add_key_value.
In addition, use_temp_file is now opt-in instead of opt-out, defaulting to False.
GGUFWriter also no longer requires the output file name until it actually writes to the file, and no longer needs to eagerly prepare the data layout of the metadata.
slaren [Sat, 8 Jun 2024 23:43:39 +0000 (01:43 +0200)]
Revert "[SYCL] Update rpc-server.cpp to include SYCL backend (#7682)" (#7808)
This reverts commit 9422c5e34bbd302493b77a8f6d546154a1f4fe82.
Olivier Chafik [Sat, 8 Jun 2024 19:21:08 +0000 (20:21 +0100)]
url: save -mu downloads to new cache location (#7826)
* url: save -mu download to new cache location
* url: fs_get_cache_file_path util
* url: tweak sig of fs_get_cache_file
sasha0552 [Sat, 8 Jun 2024 07:50:31 +0000 (07:50 +0000)]
server : smart slot selection using Longest Common Prefix (#7728)
* server : Smart selection of available slot using Longest Common Substring
* add usage
* remove trailing whitespaces
* Use Longest Common Prefix (LCP) instead of LCS
* Rename argument
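A minimal sketch of the idea above, not the server's exact heuristic: score each slot by the length of the common prefix between its cached tokens and the new prompt, and pick the best match so the most KV cache is reused. Names are illustrative.

```cpp
#include <cstdint>
#include <vector>

using llama_token = int32_t;

// Length of the longest common prefix between a slot's cached tokens and the
// incoming prompt.
static size_t common_prefix_len(const std::vector<llama_token> & a,
                                const std::vector<llama_token> & b) {
    size_t i = 0;
    while (i < a.size() && i < b.size() && a[i] == b[i]) {
        i++;
    }
    return i;
}

static int pick_slot(const std::vector<std::vector<llama_token>> & slot_cache,
                     const std::vector<llama_token> & prompt) {
    int    best     = -1;
    size_t best_len = 0;
    for (size_t s = 0; s < slot_cache.size(); s++) {
        const size_t len = common_prefix_len(slot_cache[s], prompt);
        if (len > best_len) {
            best_len = len;
            best     = (int) s;
        }
    }
    return best; // -1: no overlap, fall back to another policy (e.g. least recently used)
}
```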
slaren [Fri, 7 Jun 2024 17:47:49 +0000 (19:47 +0200)]
vulkan : reuse parent extra for views (#7806)
* vulkan : reuse parent extra for views
* Fix validation error when multiple compute contexts are used in a graph
---------
Co-authored-by: 0cc4m <redacted>
Christian Zhou-Zheng [Fri, 7 Jun 2024 12:56:01 +0000 (08:56 -0400)]
gguf-split : change binary multi-byte units to decimal (#7803)
intelmatt [Fri, 7 Jun 2024 12:15:07 +0000 (05:15 -0700)]
cmake : fix BUILD_SHARED_LIBS=ON build (#7784)
common depends on pthreads in Linux
Johannes Gäßler [Fri, 7 Jun 2024 09:15:49 +0000 (11:15 +0200)]
server: update cache_prompt documentation [no ci] (#7745)
woodx [Fri, 7 Jun 2024 07:09:45 +0000 (15:09 +0800)]
server : do not get prompt in infill mode (#7286)
* avoid to get prompt in infill mode and embedding mode
* remove embedding mode
* refactor format
---------
Co-authored-by: wudexiang <redacted>
pengxin99 [Fri, 7 Jun 2024 06:28:26 +0000 (14:28 +0800)]
[SYCL] fix softmax r2r result wrong issue (#7811)
slaren [Fri, 7 Jun 2024 06:01:29 +0000 (08:01 +0200)]
check for nans in imatrix and quantize (#7807)
* imatrix : detect nan/inf values
* quantize : check imatrix for nan/inf values
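A sketch of the kind of validation this adds, assuming imatrix data arrives as flat float arrays (the function name is illustrative):

```cpp
#include <cmath>
#include <cstdio>
#include <cstddef>

// Scan importance-matrix data for non-finite values before using it for
// quantization; a single NaN would silently poison the weighted error sums.
static bool validate_imatrix(const float * data, size_t n, const char * name) {
    for (size_t i = 0; i < n; i++) {
        if (!std::isfinite(data[i])) {
            fprintf(stderr, "%s: found non-finite value %f at index %zu of %s\n",
                    __func__, (double) data[i], i, name);
            return false;
        }
    }
    return true;
}
```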
Georgi Gerganov [Thu, 6 Jun 2024 16:19:59 +0000 (19:19 +0300)]
server : fix --threads-http arg (#7801)
Georgi Gerganov [Thu, 6 Jun 2024 13:30:58 +0000 (16:30 +0300)]
imatrix : migrate to gpt_params (#7771)
* imatrix : migrate to gpt_params
ggml-ci
* imatrix : add --save-frequency cli arg
* common : fix --no-ppl
Clint Herron [Thu, 6 Jun 2024 13:08:52 +0000 (06:08 -0700)]
Added support for . (any character) token in grammar engine. (#6467)
* Added support for . (any character) token in grammar engine.
* Add integration tests for any-character symbol.
Mattheus Chediak [Thu, 6 Jun 2024 12:17:54 +0000 (09:17 -0300)]
README minor fixes (#7798) [no ci]
derievatives --> derivatives
Olivier Chafik [Thu, 6 Jun 2024 09:07:06 +0000 (10:07 +0100)]
grammars: x{min,max} repetition operator (#6640)
* grammars: x{min,max} repetition operator + tweak +/*/? to avoid duplication of original over alternates
* grammars: handle `x{n}` and fix `x{n,n}`
* grammars: document new repetition operators
* grammars: uniform use of int for min & max
* grammars: refactor parser test
* grammar: parsing tests w/ natural pretty print of updated expectations
* grammars: much prettier print of expectations (+ TEST_GRAMMAR_PARSER_PRINT_ALL=1 to force all)
* grammars: improve test pretty print again
* grammars: pretty print rules and chars
* grammars: fix copy rule skipping
* grammars: disallow `a{,}` (not allowed in regexps)
* Update common/grammar-parser.cpp
Co-authored-by: Clint Herron <redacted>
* grammars: fix copy rule skipping (again) & display of expectations
* grammars: more test cases
* grammars: update reps parsing to bring ? / * / + closer to before
* json: use new GBNF repetitions{m,n} syntax
* grammars: update performance gotchas w/ repetition advice
* Update examples/json_schema_to_grammar.py
Co-authored-by: Clint Herron <redacted>
* Update examples/server/public/json-schema-to-grammar.mjs
Co-authored-by: Clint Herron <redacted>
* grammars: comment on rule repetitions
* grammars: ensure unambiguous number alternatives
* grammar: nit typo switched error msgs
* grammar: nit numbering in comment
* json: update numeric rule to be unambiguous
* Apply suggestions from code review
Co-authored-by: Clint Herron <redacted>
* Update examples/server/public/json-schema-to-grammar.mjs
Co-authored-by: Clint Herron <redacted>
* json: fix integral-part
* grammar: add repetition tests
---------
Co-authored-by: Clint Herron <redacted>
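For illustration, a small GBNF fragment (invented for this example) using the new bounded-repetition syntax; applying the same bounds to whitespace and number rules is what prevents the "runaway" generations mentioned above:

```
# x{n}, x{n,} and x{m,n} bound repetitions explicitly
root  ::= digit{3} "-" digit{3} "-" digit{4}   # e.g. "555-123-4567"
digit ::= [0-9]
```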
Joan Fontanals [Thu, 6 Jun 2024 07:22:41 +0000 (09:22 +0200)]
llama : add jina v2 base code (#7596)
* feat: add changes to handle jina v2 base code
* fix: do not complicate things
* fix: fix the usage of the code model
* fix: fix comments
* fix: fix linting issues
* fix: remove ollama patches
* style : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
slaren [Thu, 6 Jun 2024 05:19:49 +0000 (07:19 +0200)]
docker : build only main and server in their images (#7782)
* add openmp lib to dockerfiles
* build only main and server in their docker images
slaren [Thu, 6 Jun 2024 05:17:21 +0000 (07:17 +0200)]
docker : add openmp lib (#7780)
Galunid [Wed, 5 Jun 2024 17:07:24 +0000 (19:07 +0200)]
Fix encoding in python scripts (#7733)
Johannes Gäßler [Wed, 5 Jun 2024 14:53:00 +0000 (16:53 +0200)]
CUDA: refactor mmq, dmmv, mmvq (#7716)
* CUDA: refactor mmq, dmmv, mmvq
* fix out-of-bounds write
* struct for qk, qr, qi
* fix cmake build
* mmq_type_traits
Georgi Gerganov [Wed, 5 Jun 2024 08:29:20 +0000 (11:29 +0300)]
ggml : refactor rope norm/neox (#7634)
* ggml : unify rope norm/neox (CPU)
* ggml : fix compile warning
* ggml : remove GLM rope mode
ggml-ci
* metal : better rope implementation
ggml-ci
* cuda : better rope implementation
ggml-ci
* naming : n_orig_ctx -> n_ctx_orig
ggml-ci
* dev : add reminders to update backends
ggml-ci
* vulkan : fix ggml_rope_ext() usage
* cuda : fix array size + indents
ggml-ci
arch-btw [Wed, 5 Jun 2024 06:40:49 +0000 (23:40 -0700)]
readme : remove -ins (#7759)
-ins and --instruct were moved in https://github.com/ggerganov/llama.cpp/pull/7675
I have adjusted the README accordingly.
There was no trace of --chatml in the README.
jaime-m-p [Tue, 4 Jun 2024 23:26:14 +0000 (01:26 +0200)]
Fix per token attributes bits (#7749)
agray3 [Tue, 4 Jun 2024 20:06:49 +0000 (21:06 +0100)]
Allow number of nodes in CUDA graph to change (#7738)
Previously the code would have failed to cope when the number of nodes changes in an existing CUDA graph. This fixes the issue by removing an unnecessary conditional.
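A sketch of the usual pattern for this situation, assuming the CUDA 12 runtime API (the PR itself only drops a conditional that blocked this path): attempt a cheap in-place update of the instantiated graph and re-instantiate only when the topology changed.

```cpp
#include <cuda_runtime.h>

// Try a cheap in-place update of an instantiated graph; if the topology
// changed (e.g. a different number of nodes), destroy and re-instantiate.
static void update_or_reinstantiate(cudaGraphExec_t & exec, cudaGraph_t graph) {
    cudaGraphExecUpdateResultInfo info;
    if (cudaGraphExecUpdate(exec, graph, &info) != cudaSuccess) {
        cudaGetLastError(); // clear the sticky error state
        cudaGraphExecDestroy(exec);
        cudaGraphInstantiate(&exec, graph, 0);
    }
}
```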
Georgi Gerganov [Tue, 4 Jun 2024 18:23:39 +0000 (21:23 +0300)]
common : refactor cli arg parsing (#7675)
* common : gpt_params_parse do not print usage
* common : rework usage print (wip)
* common : valign
* common : rework print_usage
* infill : remove cfg support
* common : reorder args
* server : deduplicate parameters
ggml-ci
* common : add missing header
ggml-ci
* common : remove --random-prompt usages
ggml-ci
* examples : migrate to gpt_params
ggml-ci
* batched-bench : migrate to gpt_params
* retrieval : migrate to gpt_params
* common : change defaults for escape and n_ctx
* common : remove chatml and instruct params
ggml-ci
* common : passkey use gpt_params
Georgi Gerganov [Tue, 4 Jun 2024 18:23:20 +0000 (21:23 +0300)]
ggml : remove OpenCL (#7735)
ggml-ci
Georgi Gerganov [Tue, 4 Jun 2024 18:23:05 +0000 (21:23 +0300)]
llama : remove beam search (#7736)
Georgi Gerganov [Tue, 4 Jun 2024 16:43:01 +0000 (19:43 +0300)]
readme : remove obsolete Zig instructions (#7471)
slaren [Tue, 4 Jun 2024 12:32:42 +0000 (14:32 +0200)]
llama-bench : allow using a different printer for stderr with -oe (#7722)
compare-commits.sh : hide stdout, use -oe to print markdown
Daniele [Tue, 4 Jun 2024 12:09:15 +0000 (12:09 +0000)]
Improve hipBLAS support in CMake (#7696)
* Improve hipBLAS support in CMake
This improves the detection of the correct CMAKE_PREFIX_PATH when using different distributions or a self-built ROCm SDK.
* Set ROCM_PATH correctly
zhouwg [Tue, 4 Jun 2024 11:21:26 +0000 (19:21 +0800)]
refine .gitignore (#7688)
This adds tags and the Android NDK to the git ignore list
jaime-m-p [Tue, 4 Jun 2024 07:17:17 +0000 (09:17 +0200)]
Per token attributes (#7685)
* Add per token attributes enum
* Using phi-3 for testing 'rstrip'
* Using jina-v2 for testing 'lstrip'
* Brute force test for 'lstrip' and 'rstrip'
* Implement 'rstrip' and 'lstrip'
* Update phi-3 GGUF file (obsolete since 917dc8c)
* Replace llama_token_type with llama_token_attribs
Georgi Gerganov [Tue, 4 Jun 2024 07:01:09 +0000 (10:01 +0300)]
ggml : prevent builds with -ffinite-math-only (#7726)
This enforces a check that -fno-finite-math-only was set and that the compiler is not operating in finite-math mode. During the rewrite of silu and softmax for CPU in #7154, an issue emerged where the results observed with >1 slot were nondeterministic, as found by @JohannesGaessler. @LostRuins narrowed the problem down to -ffinite-math-only, which was theorised to cause SiLU, instead of flushing small values to 0, to return NaN or some other garbage. @jart proposed a fix that @ggerganov then implemented here.
ref https://github.com/ggerganov/llama.cpp/pull/7154#issuecomment-2145661825
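GCC and Clang expose this mode through the __FINITE_MATH_ONLY__ macro, so the guard can be a compile-time check roughly like this sketch:

```cpp
// GCC and Clang define __FINITE_MATH_ONLY__ as 1 under -ffinite-math-only
// (implied by -ffast-math). A guard along these lines fails the build early
// instead of letting SiLU/softmax silently produce NaN garbage at runtime.
#if defined(__FINITE_MATH_ONLY__) && __FINITE_MATH_ONLY__
#error "finite-math-only mode is not supported; compile with -fno-finite-math-only"
#endif
```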
Radoslav Gerganov [Mon, 3 Jun 2024 17:03:26 +0000 (20:03 +0300)]
llama : offload to RPC in addition to other backends (#7640)
* llama : offload to RPC in addition to other backends
* - fix copy_tensor being called on the src buffer instead of the dst buffer
- always initialize views in the view_src buffer
- add RPC backend to Makefile build
- add endpoint to all RPC object names
* add rpc-server to Makefile
* Update llama.cpp
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Masaya, Kato [Mon, 3 Jun 2024 15:14:15 +0000 (00:14 +0900)]
ggml : use OpenMP as a thread pool (#7606)
* ggml: Added OpenMP for multi-threaded processing
* ggml : Limit the number of threads used to avoid deadlock
* update shared state n_threads in parallel region
* clear numa affinity for main thread even with openmp
* enable openmp by default
* fix msvc build
* disable openmp on macos
* ci : disable openmp with thread sanitizer
* Update ggml.c
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
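A minimal sketch of the OpenMP thread-pool idea, with the graph-processing details elided as comments (the helper named in the comment is hypothetical):

```cpp
#include <omp.h>

// Let the OpenMP runtime own the worker threads: it keeps a warm pool alive
// between graph evaluations instead of spinning threads up per call.
static void compute_graph(int n_threads) {
    #pragma omp parallel num_threads(n_threads)
    {
        int ith = omp_get_thread_num();  // this thread's index
        int nth = omp_get_num_threads(); // actual count; may be fewer than requested
        // each thread would process its share of graph nodes here, e.g.
        // ggml_graph_compute_thread(ith, nth);  (hypothetical helper)
        (void) ith;
        (void) nth;
    }
}
```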
Johannes Gäßler [Mon, 3 Jun 2024 14:28:58 +0000 (16:28 +0200)]
make: fix debug options not being applied to NVCC (#7714)
0cc4m [Mon, 3 Jun 2024 08:59:14 +0000 (10:59 +0200)]
Vulkan Mixture of Experts (MoE) support (#7628)
* Finish Vulkan mul_mat_id implementation
* Add Vulkan sum_rows and div ops
* Fix MUL_MAT_ID matrix matrix shader
* Fix MUL_MAT_ID matrix vector shader dispatch size
* Fix MUL_MAT_ID matrix vector shader and dispatch code
* Update Vulkan CPU offload for MUL_MAT_ID
* Fix crash when using split mode none and setting a main GPU
Andy Tai [Mon, 3 Jun 2024 08:06:24 +0000 (01:06 -0700)]
cmake : add pkg-config spec file for llama.cpp (#7702)
zhangkaihuo [Mon, 3 Jun 2024 07:49:30 +0000 (15:49 +0800)]
llama : MiniCPM support tied embeddings (#7664)
* support lm_head
* remove the code block
---------
Co-authored-by: zhangkaihuo <redacted>
Georgi Gerganov [Mon, 3 Jun 2024 05:34:43 +0000 (08:34 +0300)]
llama : avoid double token-to-piece cache (#7654)
ggml-ci
woachk [Mon, 3 Jun 2024 05:32:16 +0000 (07:32 +0200)]
kompute : implement op_getrows_f32 (#6403)
op_getrows_f32 is required since https://github.com/ggerganov/llama.cpp/pull/6122
for the Vulkan w/ Kompute backend to be functional.
As such, implement this op to make this backend functional again.
Dave Airlie [Sun, 2 Jun 2024 21:59:54 +0000 (07:59 +1000)]
fix bug introduced in using calloc (#7701)
compilade pointed this out on the previous MR
Georgi Gerganov [Sun, 2 Jun 2024 21:13:12 +0000 (00:13 +0300)]
flake.lock: Update (#7686)
Flake lock file updates:
• Updated input 'flake-parts':
    'github:hercules-ci/flake-parts/8dc45382d5206bd292f9c2768b8058a8fd8311d9?narHash=sha256-/GJvTdTpuDjNn84j82cU6bXztE0MSkdnTWClUCRub78%3D' (2024-05-16)
  → 'github:hercules-ci/flake-parts/2a55567fcf15b1b1c7ed712a2c6fadaec7412ea8?narHash=sha256-iKzJcpdXih14qYVcZ9QC9XuZYnPc6T8YImb6dX166kw%3D' (2024-06-01)
• Updated input 'flake-parts/nixpkgs-lib':
    'https://github.com/NixOS/nixpkgs/archive/50eb7ecf4cd0a5756d7275c8ba36790e5bd53e33.tar.gz?narHash=sha256-QBx10%2Bk6JWz6u7VsohfSw8g8hjdBZEf8CFzXH1/1Z94%3D' (2024-05-02)
  → 'https://github.com/NixOS/nixpkgs/archive/eb9ceca17df2ea50a250b6b27f7bf6ab0186f198.tar.gz?narHash=sha256-lIbdfCsf8LMFloheeE6N31%2BBMIeixqyQWbSr2vk79EQ%3D' (2024-06-01)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/bfb7a882678e518398ce9a31a881538679f6f092?narHash=sha256-4zSIhSRRIoEBwjbPm3YiGtbd8HDWzFxJjw5DYSDy1n8%3D' (2024-05-24)
  → 'github:NixOS/nixpkgs/ad57eef4ef0659193044870c731987a6df5cf56b?narHash=sha256-SzDKxseEcHR5KzPXLwsemyTR/kaM9whxeiJohbL04rs%3D' (2024-05-29)
Co-authored-by: github-actions[bot] <redacted>