git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
alexpinel [Thu, 4 Apr 2024 17:22:50 +0000 (18:22 +0100)]
readme : add Dot to UI list (#6487)
Jun Jie [Thu, 4 Apr 2024 17:16:37 +0000 (01:16 +0800)]
readme : fix typo (#6481)
Ed Lepedus [Thu, 4 Apr 2024 16:31:22 +0000 (17:31 +0100)]
server: add cURL support to server Dockerfiles (#6474)
* server: add cURL support to `full.Dockerfile`
* server: add cURL support to `full-cuda.Dockerfile` and `server-cuda.Dockerfile`
* server: add cURL support to `full-rocm.Dockerfile` and `server-rocm.Dockerfile`
* server: add cURL support to `server-intel.Dockerfile`
* server: add cURL support to `server-vulkan.Dockerfile`
* fix typo in `server-vulkan.Dockerfile`
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Minsoo Cheong [Thu, 4 Apr 2024 16:30:53 +0000 (01:30 +0900)]
ci: exempt master branch workflows from getting cancelled (#6486)
* ci: exempt master branch workflows from getting cancelled
* apply to bench.yml
Ewout ter Hoeven [Thu, 4 Apr 2024 15:08:55 +0000 (17:08 +0200)]
build CI: Name artifacts (#6482)
Name the artifacts in the build CI, so that they get uploaded with separate names, instead of all put into the same `artifact` ZIP.
It might be possible to further simplify the packing step (in future PRs).
Shakhar Dasgupta [Thu, 4 Apr 2024 15:03:00 +0000 (11:03 -0400)]
server: allow penalizing repetition of newlines on server webpage (#6431)
Pierrick Hymbert [Thu, 4 Apr 2024 14:59:04 +0000 (16:59 +0200)]
ci: bench fix concurrency for workflow trigger dispatch with sha1 (#6478)
limitedAtonement [Thu, 4 Apr 2024 14:30:02 +0000 (10:30 -0400)]
Correct README link (#6458)
README is called README.md.
Pierrick Hymbert [Thu, 4 Apr 2024 09:57:58 +0000 (11:57 +0200)]
ci: bench: add more ftype, fix triggers and bot comment (#6466)
* ci: bench: change trigger path to not spawn on each PR
* ci: bench: add more file types for phi-2: q8_0 and f16.
- do not show the comment by default
* ci: bench: add seed parameter in k6 script
* ci: bench: artefact name perf job
* Add iteration in the commit status, reduce again the autocomment
* ci: bench: add per slot metric in the commit status
* Fix trailing spaces
Daniel Bevenius [Thu, 4 Apr 2024 07:49:21 +0000 (09:49 +0200)]
common: remove duplicate check for curl (#6471)
This commit removes one of the two identical checks for curl being NULL
in llama_load_model_from_url.
Signed-off-by: Daniel Bevenius <redacted>
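For illustration, a minimal sketch (not the actual llama.cpp code) of the pattern after this cleanup: a single NULL check on the handle returned by `curl_easy_init` is enough.
```cpp
#include <curl/curl.h>
#include <cstdio>

// Sketch only: the real llama_load_model_from_url performs the full download;
// the point here is the single NULL check on the curl handle.
bool download_with_curl_sketch(const char * url) {
    CURL * curl = curl_easy_init();
    if (!curl) { // one check is sufficient; the removed code tested this twice
        fprintf(stderr, "%s: error initializing libcurl\n", __func__);
        return false;
    }
    curl_easy_setopt(curl, CURLOPT_URL, url);
    // ... set the remaining options and perform the transfer ...
    curl_easy_cleanup(curl);
    return true;
}
```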
Clint Herron [Thu, 4 Apr 2024 07:44:28 +0000 (03:44 -0400)]
examples : add GBNF validator program (#5948)
* Revising GBNF validator program to be much simpler.
* Changing from streams to using cstdio
* Adding final newline character.
Georgi Gerganov [Thu, 4 Apr 2024 06:34:58 +0000 (09:34 +0300)]
server : remove obsolete --memory-f32 option
Xiao-Yong Jin [Thu, 4 Apr 2024 06:33:48 +0000 (01:33 -0500)]
server : add option to disable KV offload (#6468)
Clint Herron [Thu, 4 Apr 2024 06:32:53 +0000 (02:32 -0400)]
convert : fix for lint error complaining of bare except (#6470)
Fattire [Wed, 3 Apr 2024 20:22:57 +0000 (13:22 -0700)]
A few small fixes to server's README docs (#6428)
* Typo fix to server's README.md
Fix minor typo ("tonen") in server README.
* server readme grammar/style fixes.
Quickly went through this file to look for inconsistencies in
presentation of defaults, flag options, and looked for typos
and grammar issues.
Not perfect, but hopefully improved.
* Update README.md
Remove an extra space before newline.
JH23X [Wed, 3 Apr 2024 18:09:52 +0000 (20:09 +0200)]
server : handle exception on wrong type in request (#6452)
Co-authored-by: Jonas Holzner <redacted>
bryanSwk [Wed, 3 Apr 2024 18:05:10 +0000 (02:05 +0800)]
llama : add SEA-LION support (#6448)
* initial commit for sealion support
* add sealion support
* minor fix
* q/k ln and pos_embd only if required
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* minor : clear whitespaces
---------
Co-authored-by: bryan <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Ewout ter Hoeven [Wed, 3 Apr 2024 18:01:13 +0000 (20:01 +0200)]
ci : update checkout, setup-python and upload-artifact to latest (#6456)
* CI: Update actions/checkout to v4
* CI: Update actions/setup-python to v5
* CI: Update actions/upload-artifact to v4
Ed Lepedus [Wed, 3 Apr 2024 17:56:37 +0000 (18:56 +0100)]
server: add cURL support to `server.Dockerfile` (#6461)
Francisco Melo [Wed, 3 Apr 2024 17:53:37 +0000 (18:53 +0100)]
readme : add feature-rich rust bindings (#6465)
Joyce [Wed, 3 Apr 2024 17:48:07 +0000 (14:48 -0300)]
security : create policy (#6354)
* Create SECURITY.md
Signed-off-by: Joyce <redacted>
* Fix: link on SECURITY.md
Signed-off-by: Joyce <redacted>
* Fix: link on SECURITY.md
Signed-off-by: Joyce <redacted>
* minor
* fix
* fix
---------
Signed-off-by: Joyce <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Abhishek Gopinath K [Wed, 3 Apr 2024 15:42:52 +0000 (21:12 +0530)]
Missing tokenizer.model error during gguf conversion (#6443)
Co-authored-by: Jared Van Bortel <redacted>
kaizau [Wed, 3 Apr 2024 15:24:31 +0000 (23:24 +0800)]
Add OpenChat, Alpaca, Vicuna chat templates (#6397)
* Add openchat chat template
* Add chat template test for openchat
* Add chat template for vicuna
* Add chat template for orca-vicuna
* Add EOS for vicuna templates
* Combine vicuna chat templates
* Add tests for openchat and vicuna chat templates
* Add chat template for alpaca
* Add separate template name for vicuna-orca
* Remove alpaca, match deepseek with jinja output
* Regenerate chat template test with add_generation_prompt
* Separate deepseek bos from system message
* Match openchat template with jinja output
* Remove BOS token from templates, unprefix openchat
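As a rough illustration of what these templates expand to, a hedged sketch of a Vicuna-style expansion follows; the exact role strings, EOS handling, and whitespace are defined by `llama_chat_apply_template` and its tests, not by this snippet.
```cpp
#include <string>
#include <vector>

struct chat_msg { std::string role, content; };

// Hedged sketch of a Vicuna-style template; the EOS token on assistant turns
// mirrors the "Add EOS for vicuna templates" item above.
std::string apply_vicuna_template_sketch(const std::vector<chat_msg> & msgs) {
    std::string out;
    for (const auto & m : msgs) {
        if (m.role == "system") {
            out += m.content + "\n\n";
        } else if (m.role == "user") {
            out += "USER: " + m.content + "\n";
        } else {
            out += "ASSISTANT: " + m.content + "</s>\n";
        }
    }
    out += "ASSISTANT:"; // generation prompt for the next assistant turn
    return out;
}
```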
Georgi Gerganov [Wed, 3 Apr 2024 13:11:15 +0000 (16:11 +0300)]
readme : update hot topics
slaren [Wed, 3 Apr 2024 13:07:05 +0000 (15:07 +0200)]
ggml : mul_mat_id use the same tensor for all the experts (#6387)
* ggml : update mul_mat_id to use the same tensor for all the experts
* update cuda
* minor
* update metal
* update test-backend-ops
* fix cuda
* Update ggml-metal.m
Co-authored-by: Georgi Gerganov <redacted>
* update convert.py
* update convert-hf-to-gguf.py
* update convert.py for mixtral hf models
* Update convert-hf-to-gguf.py
Co-authored-by: Georgi Gerganov <redacted>
* cuda : support non-pow-2 number of experts
* allow quantize to work for split and merged experts models in the same way
* cleanup + disable mmap automatically with split tensors models
* update imatrix
* test-backend-ops : test qwen argsort
* update grok model loading
* llama : add merged experts tensors to the grok tensor map
* minor
* gguf : bump version
* fix quantizing of merged experts
* convert-hf-to-gguf.py : update grok (untested)
* make linter happy
* cuda/argsort : use shared memory instead of pool memory
* convert : fix grok tensor names
* metal : add support for non-pow-2 argsort
* llama : more loader cleanup, better error checking
* cuda : fix warning
* llama : still use mmap for loading old models, but copy the data to a host buffer
* add review note
* llama : remove ffn tensor counting + add sanity check
ggml-ci
* convert : fix handling of n_experts == None
ggml-ci
* imatrix : fix ncall counters
* llama : produce error if imatrix size does not match
* quantize : terminate on errors + trace logs
ggml-ci
* metal : pad shared memory to 16 bytes
---------
Co-authored-by: Georgi Gerganov <redacted>
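The core layout change can be summarized with a hedged sketch (the function name and dimension variables are illustrative, not the exact loader code): instead of `n_expert` separate 2D weight tensors, the experts are stored in a single 3D tensor that `ggml_mul_mat_id` indexes.
```cpp
#include "ggml.h"

// Sketch of the merged-experts layout: one 3D tensor instead of n_expert 2D ones.
struct ggml_tensor * make_merged_expert_weights(
        struct ggml_context * ctx, int64_t n_embd, int64_t n_ff, int64_t n_expert) {
    // before: n_expert tensors of shape [n_embd, n_ff], one per expert
    // after:  a single tensor of shape [n_embd, n_ff, n_expert]
    return ggml_new_tensor_3d(ctx, GGML_TYPE_F16, n_embd, n_ff, n_expert);
}
```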
Meng, Hengyu [Wed, 3 Apr 2024 02:34:40 +0000 (10:34 +0800)]
[SYCL] Disable iqx on Windows as a workaround (#6435)
* disable iqx on windows as WA
* array instead of global_memory
Georgi Gerganov [Mon, 1 Apr 2024 16:05:57 +0000 (19:05 +0300)]
flake.lock: Update (#6402)
Flake lock file updates:
• Updated input 'nixpkgs':
'github:NixOS/nixpkgs/44d0940ea560dee511026a53f0e2e2cde489b4d4' (2024-03-23)
→ 'github:NixOS/nixpkgs/d8fe5e6c92d0d190646fb9f1056741a229980089' (2024-03-29)
Co-authored-by: github-actions[bot] <redacted>
Johannes Gäßler [Mon, 1 Apr 2024 11:30:43 +0000 (13:30 +0200)]
compare-llama-bench.py: fix long hexsha args (#6424)
Pierrick Hymbert [Mon, 1 Apr 2024 10:36:40 +0000 (12:36 +0200)]
ci: server: verify deps are coherent with the commit (#6409)
* ci: server: verify deps are coherent with the commit
* ci: server: change the ref to build as now it's a pull event target
Georgi Gerganov [Sun, 31 Mar 2024 08:56:30 +0000 (11:56 +0300)]
readme : update hot topics
Pierrick Hymbert [Sat, 30 Mar 2024 10:36:07 +0000 (11:36 +0100)]
ci: bench: fix Resource not accessible by integration on PR event (#6393)
Mohammadreza Hendiani [Fri, 29 Mar 2024 21:59:56 +0000 (01:29 +0330)]
Fedora build update (#6388)
* fixed deprecated address
* fixed deprecated address
* fixed deprecated address
* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions
* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions
* Added 'Apache-2.0' SPDX license identifier due to 'kompute.cc' submodule licensing. Explanation of licensing method: https://docs.fedoraproject.org/en-US/legal/spdx/#_and_expressions
* reverted back to only the MIT license
Xuan Son Nguyen [Fri, 29 Mar 2024 21:34:44 +0000 (22:34 +0100)]
split: allow --split-max-size option (#6343)
* split by max size
* clean up arg parse
* split: ok
* add dry run option
* error on 0 tensors
* be positive
* remove next_metadata_size
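A hedged sketch of the greedy policy such an option implies (function and variable names are illustrative): tensors accumulate into the current shard until adding the next one would exceed the limit.
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: plan which tensor indices go into which split file.
std::vector<std::vector<size_t>> plan_splits(
        const std::vector<int64_t> & tensor_sizes, int64_t max_split_size) {
    std::vector<std::vector<size_t>> shards(1);
    int64_t cur_size = 0;
    for (size_t i = 0; i < tensor_sizes.size(); ++i) {
        if (cur_size > 0 && cur_size + tensor_sizes[i] > max_split_size) {
            shards.emplace_back(); // close the current split, start a new one
            cur_size = 0;
        }
        shards.back().push_back(i);
        cur_size += tensor_sizes[i];
    }
    return shards; // a dry run would just print this plan (see the dry run option above)
}
```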
0cc4m [Fri, 29 Mar 2024 16:29:21 +0000 (17:29 +0100)]
Vulkan k-quant mmq and ggml-backend offload functionality (#6155)
* Fix Vulkan no kv offload incoherence
* Add k-quant mul mat mat shaders
* Rework working buffer allocation, reduces vram use noticeably
Clean up cpu assist code, replaced with ggml-backend offload function
* Default to all dedicated GPUs
* Add fallback for integrated GPUs if no dedicated GPUs are found
* Add debug info which device is allocating memory
* Fix Intel dequant issue
Fix validation issue
* Fix Vulkan GGML_OP_GET_ROWS implementation
* Clean up merge artifacts
* Remove Vulkan warning
Georgi Gerganov [Fri, 29 Mar 2024 15:45:46 +0000 (17:45 +0200)]
sync : ggml (#6351)
* sync : ggml
ggml-ci
* cuda : move GGML_CUDA_DMMV constants to dmmv.cuh
---------
Co-authored-by: slaren <redacted>
hxer7963 [Fri, 29 Mar 2024 13:37:03 +0000 (21:37 +0800)]
[Model] Add support for xverse (#6301)
* Support converting xverse models to gguf format.
* 1. Convert xverse models to gguf;
2. Add LLM_ARCH_XVERSE inference in llama.cpp;
3. Add xverse item in Supported models in README.md;
* gguf-py: remove redundant logs
* llama: remove the init_mapping_prefetch custom parameter
* llama.cpp: Include the changes from #6122 to exclude the unused outputs of the last layers.
* - Fix format issues
- Remove duplicate set kqv_out to llm_build_kv
* Update llama.cpp
---------
Co-authored-by: willhe <redacted>
Co-authored-by: willhe <redacted>
Georgi Gerganov [Fri, 29 Mar 2024 12:34:28 +0000 (14:34 +0200)]
ci : fix BGE wget (#6383)
ggml-ci
zhouwg [Fri, 29 Mar 2024 07:33:46 +0000 (15:33 +0800)]
readme : add project (#6356)
* readme: add Android UI binding
* Update README.md
Matt Clayton [Fri, 29 Mar 2024 07:27:42 +0000 (03:27 -0400)]
cmake : add explicit metal version options (#6370)
* cmake: add explicit metal version options
* Update CMakeLists.txt
---------
Co-authored-by: Georgi Gerganov <redacted>
Daniel Bevenius [Fri, 29 Mar 2024 07:23:22 +0000 (08:23 +0100)]
llama : remove redundant reshape in build_kv_store (#6369)
* llama: remove redundant reshape in build_kv_store
This commit removes the reshape of the V matrix in the build_kv_store.
The motivation for this is that V matrix has the shape:
```console
(gdb) p *v_cur
$46 = {type = GGML_TYPE_F32, backend = GGML_BACKEND_TYPE_CPU,
buffer = 0x0, ne = {4096, 512, 1, 1}, nb = {4, 16384, 8388608, 8388608}, op = GGML_OP_MUL_MAT, op_params = {
0 <repeats 16 times>}, flags = 0, grad = 0x0,
src = {0xb496b0, 0x7ffef1c40950, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0, 0x0}, perf_runs = 0, perf_cycles = 0, perf_time_us = 0,
view_src = 0x0, view_offs = 0, data = 0x0,
name = "Vcur-0", '\000' <repeats 57 times>, extra = 0x0,
padding = "\000\000\000\000\000\000\000"}
```
And after reshaping this tensor we get:
```console
(gdb) p *ggml_reshape_2d(ctx, v_cur, n_embd_v_gqa, n_tokens)
$44 = {type = GGML_TYPE_F32, backend = GGML_BACKEND_TYPE_CPU,
buffer = 0x0, ne = {4096, 512, 1, 1}, nb = {4, 16384, 8388608, 8388608}, op = GGML_OP_RESHAPE, op_params = {
0 <repeats 16 times>}, flags = 0, grad = 0x0,
src = {0x7ffef1c40e00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
0x0}, perf_runs = 0, perf_cycles = 0, perf_time_us = 0,
view_src = 0x7ffef1c40e00, view_offs = 0, data = 0x0,
name = "Vcur-0 (reshaped)", '\000' <repeats 46 times>, extra = 0x0,
padding = "\000\000\000\000\000\000\000"}
```
I noticed that the `src` and `view_src` fields are different but that the
dimensions are the same. From the code comment it seems like the reshape
call is not needed and perhaps the above can motivate the removal of the
reshape call.
Signed-off-by: Daniel Bevenius <redacted>
* llama : add assert
---------
Signed-off-by: Daniel Bevenius <redacted>
Co-authored-by: Georgi Gerganov <redacted>
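The two gdb dumps above show the same `ne = {4096, 512, 1, 1}` before and after, so the reshape only inserted a view node. A hedged sketch of the replacement (the commit adds an assert; the exact expression in llama.cpp may differ):
```cpp
#include "ggml.h"

// V is already [n_embd_v_gqa, n_tokens]; assert instead of reshaping.
void store_v_sketch(struct ggml_tensor * v_cur, int64_t n_embd_v_gqa, int64_t n_tokens) {
    GGML_ASSERT(v_cur->ne[0] == n_embd_v_gqa && v_cur->ne[1] == n_tokens);
    // ... copy v_cur into the KV cache without an intermediate reshape ...
}
```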
Pedro Cuenca [Fri, 29 Mar 2024 07:15:00 +0000 (08:15 +0100)]
convert : allow conversion of Mistral HF models (#6144)
* Allow conversion of Mistral HF models
* Homogenize Llama, Mistral, Mixtral under the same entry.
* Fix tokenizer, permute tensors
* Use sentencepiece tokenizer, or fall back to hfft.
* convert-hf : small fix for mypy
* convert-hf : fix duplicated block_count
* convert-hf : add vocab size to metadata
---------
Co-authored-by: Jared Van Bortel <redacted>
Georgi Gerganov [Thu, 28 Mar 2024 20:56:03 +0000 (22:56 +0200)]
readme : add notice for UI list
Ouadie EL FAROUKI [Thu, 28 Mar 2024 16:01:47 +0000 (16:01 +0000)]
[SYCL] Revisited & updated SYCL build documentation (#6141)
* Revisited & updated SYCL build documentation
* removed outdated comment
* Addressed PR comments
* Trimmed white spaces
* added new end line
Jared Van Bortel [Thu, 28 Mar 2024 15:44:36 +0000 (11:44 -0400)]
convert : refactor vocab selection logic (#6355)
Ziang Wu [Thu, 28 Mar 2024 14:33:10 +0000 (22:33 +0800)]
llava : fix MobileVLM (#6364)
* fix empty bug
* Update MobileVLM-README.md
added more results on devices
* Update MobileVLM-README.md
* Update MobileVLM-README.md
* Update MobileVLM-README.md
* Update MobileVLM-README.md
* Update MobileVLM-README.md
* Update MobileVLM-README.md
* Update examples/llava/MobileVLM-README.md
Co-authored-by: Georgi Gerganov <redacted>
* Update MobileVLM-README.md
remove gguf links
---------
Co-authored-by: Georgi Gerganov <redacted>
compilade [Thu, 28 Mar 2024 12:05:54 +0000 (08:05 -0400)]
llama : fix command-r inference when omitting outputs (#6367)
Pierrick Hymbert [Thu, 28 Mar 2024 10:27:56 +0000 (11:27 +0100)]
ci: bench: fix master not schedule, fix commit status failed on external repo (#6365)
Ting Sun [Thu, 28 Mar 2024 08:51:06 +0000 (16:51 +0800)]
doc: fix outdated default value of batch size (#6336)
* doc: fix outdated default value of batch size
* doc: add doc for ubatch-size
Eric Zhang [Thu, 28 Mar 2024 08:50:48 +0000 (16:50 +0800)]
server : stop gracefully on SIGTERM (#6348)
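A hedged sketch of what graceful SIGTERM handling typically looks like (the server's actual shutdown path is more involved): the handler only sets a flag, and the main loop exits cleanly when it sees it.
```cpp
#include <atomic>
#include <csignal>

static std::atomic<bool> g_running{true};

static void on_sigterm(int /*sig*/) {
    g_running = false; // only set a flag; do no real work in the handler
}

int main() {
    std::signal(SIGTERM, on_sigterm);
    while (g_running) {
        // ... accept and serve requests ...
    }
    // ... stop worker threads, free the model, return 0 for a clean exit ...
    return 0;
}
```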
hutli [Wed, 27 Mar 2024 18:17:30 +0000 (19:17 +0100)]
nix: removed unnecessary indentation
hutli [Wed, 27 Mar 2024 18:14:28 +0000 (19:14 +0100)]
nix: moved blas availability check to package inputs so it is still overridable
hutli [Wed, 27 Mar 2024 17:10:08 +0000 (18:10 +0100)]
using blas.meta.available to check host platform
hutli [Wed, 27 Mar 2024 16:25:05 +0000 (17:25 +0100)]
only using explicit blas if hostPlatform is allowed
Someone Serge [Tue, 26 Mar 2024 16:22:42 +0000 (16:22 +0000)]
nix: .#windows: proper cross-compilation set-up
Take all dependencies from the cross stage, rather than only stdenv
Someone Serge [Tue, 26 Mar 2024 16:22:07 +0000 (16:22 +0000)]
nix: package: don't introduce the dependency on python
- The generic /usr/bin/env shebangs are good enough
- Python deps are provisioned in the devShells
- We need to be able to leave python out at least on windows (currently breaks eval)
hutli [Thu, 15 Feb 2024 13:25:04 +0000 (14:25 +0100)]
nix: .#windows: init
initial nix build for windows using zig
mingwW64 build
removes nix zig windows build
removes nix zig windows build
removed unnecessary glibc.static
removed unnecessary import of pkgs in nix
fixed missing trailing newline on non-windows nix builds
overriding stdenv when building for crosscompiling to windows in nix
better variables when crosscompiling windows in nix
cross compile windows on macos
removed trailing whitespace
remove unnecessary overwrite of "CMAKE_SYSTEM_NAME" in nix windows build
nix: keep file extension when copying result files during cross compile for windows
nix: better checking for file extensions when using MinGW
nix: using hostPlatform instead of targetPlatform when cross compiling for Windows
using hostPlatform.extensions.executable to extract executable format
Ziang Wu [Thu, 28 Mar 2024 04:03:30 +0000 (12:03 +0800)]
doc: fix typo in MobileVLM-README.md (#6181)
Neo Zhang Jianyu [Thu, 28 Mar 2024 00:55:24 +0000 (08:55 +0800)]
[SYCL] fix set main gpu crash (#6339)
Pierrick Hymbert [Wed, 27 Mar 2024 19:26:49 +0000 (20:26 +0100)]
server: continuous performance monitoring and PR comment (#6283)
* server: bench: init
* server: bench: reduce list of GPU nodes
* server: bench: fix graph, fix output artifact
* ci: bench: add mermaid in case of image cannot be uploaded
* ci: bench: more resilient, more metrics
* ci: bench: trigger build
* ci: bench: fix duration
* ci: bench: fix typo
* ci: bench: fix mermaid values, markdown generated
* typo on the step name
Co-authored-by: Xuan Son Nguyen <redacted>
* ci: bench: trailing spaces
* ci: bench: move images in a details section
* ci: bench: reduce bullet point size
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Someone Serge [Wed, 27 Mar 2024 16:17:46 +0000 (16:17 +0000)]
nix: ci: dont test cuda and rocm (for now)
Until https://github.com/ggerganov/llama.cpp/issues/6346 is resolved
slaren [Wed, 27 Mar 2024 14:07:50 +0000 (15:07 +0100)]
ggml : fix bounds checking of zero size views (#6347)
Georgi Gerganov [Wed, 27 Mar 2024 13:02:49 +0000 (15:02 +0200)]
make : whitespace
howlger [Wed, 27 Mar 2024 11:15:44 +0000 (12:15 +0100)]
embedding : show full embedding for single prompt (#6342)
* embedding : show full embedding for single prompt
To support the use case of creating an embedding for a given prompt, the entire embedding and not just the first part needed to be printed.
Also, show cosine similarity matrix only if there is more than one prompt, as the cosine similarity matrix for a single prompt is always `1.00`.
* Update examples/embedding/embedding.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
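For reference, the similarity measure involved, in a minimal form; a prompt compared with itself always yields 1.00, which is why the matrix is omitted for a single prompt.
```cpp
#include <cmath>
#include <vector>

// Cosine similarity between two embedding vectors (assumed equal length).
float cosine_similarity(const std::vector<float> & a, const std::vector<float> & b) {
    float dot = 0.0f, norm_a = 0.0f, norm_b = 0.0f;
    for (size_t i = 0; i < a.size(); ++i) {
        dot    += a[i] * b[i];
        norm_a += a[i] * a[i];
        norm_b += b[i] * b[i];
    }
    return dot / (std::sqrt(norm_a) * std::sqrt(norm_b));
}
```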
AidanBeltonS [Wed, 27 Mar 2024 08:16:40 +0000 (08:16 +0000)]
[SYCL] Fix batched impl for NVidia GPU (#6164)
* Fix batched impl
* Maintain previous behaviour for igpu
* retrigger CI
---------
Co-authored-by: Abhilash Majumder <redacted>
Kawrakow [Wed, 27 Mar 2024 07:44:27 +0000 (08:44 +0100)]
Make IQ1_M work for QK_K = 64 (#6327)
* iq1_m: make it work for QK_K = 64 (WIP)
* iq1_m: make it work for QK_K = 64 (scalar and AVX2)
* iq1_m: QK_K = 64 seems to work on Metal and ARM_NEON
---------
Co-authored-by: Iwan Kawrakow <redacted>
Sigbjørn Skjæret [Wed, 27 Mar 2024 07:23:10 +0000 (08:23 +0100)]
common : change --no-penalize-nl to --penalize-nl (#6334)
* Change --no-penalize-nl to --penalize-nl
* Update documentation too
Georgi Gerganov [Wed, 27 Mar 2024 07:16:02 +0000 (09:16 +0200)]
llama2c : open file as binary (#6332)
Mateusz Charytoniuk [Wed, 27 Mar 2024 07:08:59 +0000 (08:08 +0100)]
readme : add php api bindings (#6326)
* add php bindings to readme
* readme : add link to PR
---------
Co-authored-by: Georgi Gerganov <redacted>
Eric Zhang [Wed, 27 Mar 2024 05:55:29 +0000 (13:55 +0800)]
server: public: use relative routes for static files (#6325)
server: public: support custom `api_url`, default to relative base path
Neo Zhang Jianyu [Wed, 27 Mar 2024 01:47:06 +0000 (09:47 +0800)]
[SYCL] fix missing file in Windows release (#6314)
Jared Van Bortel [Tue, 26 Mar 2024 21:46:21 +0000 (17:46 -0400)]
wpm : portable unicode tolower (#6305)
Also use C locale for ispunct/isspace, and split unicode-data.cpp from unicode.cpp.
compilade [Tue, 26 Mar 2024 14:46:41 +0000 (10:46 -0400)]
llama : greatly reduce output buffer memory usage (#6122)
* llama : greatly reduce logits memory usage
* llama : more compact state saving and reloading
* llama : fix lctx.n_outputs not being set before building graph
* perplexity : adapt to the logits API changes
* perplexity : fix Winogrande, use correct logits for second choice start
The first logits used to evaluate the second choice were not from
the end of the common prefix; instead, they were the logits from the end
of the first choice. This has been corrected.
The previous implementation sometimes had outliers in the scores of
choices for some tasks, and the logic to skip choice words
in the log-likelihood evaluation probably was an attempt to reduce those,
but it was complex and didn't quite seem to be the right thing.
This is simpler now, and the outlier scores aren't there anymore.
* perplexity : normalize spaces and punctuation in Winogrande sentences
* llama : fix embedding conditions
* llama : fix llama_get_embeddings_ith when the resulting id is 0
* llama : fix wrong n_outputs in llama_set_inputs
A mismatch happened when using a smaller n_ubatch than n_batch and then using
llama_batch_get_one(). The decision of what n_outputs should be now almost
fully depends on how lctx.n_outputs is set in llama_decode_internal.
The conditions are simpler this way.
* llama : when saving the state, recalculate n_outputs
This ensures the correct number of outputs for the entire previous batch
is stored in the session file, even when n_ubatch is smaller than n_batch.
* llama : fix not-skipping outputs of non-causal models
* llama : fix running a batch with n_outputs == 0
It previously worked because lctx.inp_out_ids was not initialized,
so it pointed to some garbage address which was somehow still valid when I
ran my tests.
* llama : keep same graph topology even when n_outputs == 0
* ggml : saner ggml_can_repeat with empty tensors
* ggml : future-proof ggml_is_empty by using GGML_MAX_DIMS - 1
* ggml : do not multi-thread ops returning empty tensors
* ggml : make ggml_is_empty public and work with views
* llama : use a vector for ctx->output_ids
* llama : rework reallocation logic for llama_output_reserve
Now comparing the actual size with the new total size of the output buffer
to allow more efficient enabling and disabling of the embeddings
and/or logits output in the future.
* ggml : skip empty tensors in all backends
* llama : fix llama_output_reserve nullptr deref when new_size is 0
* perplexity : make Winogrande work as it does on master
The problems with the Winogrande implementation will
need to be fixed in a separate PR to ease review.
* llama : clearer error messages for invalid logits or embeddings ids
* llama : assert all models that can have inp_out_ids
Since the graph topology is now constant, this presence check
can be done even when there are no outputs.
* llama : assert logits and embd buffers exist before writing to them
* llama : handle errors from llama_output_reserve at call sites
* perplexity : make hellaswag and multiple-choice outputs identical to master
Due to how the KV cache is updated, the logprobs for tokens in a batch
are very slightly affected by the other tokens present in the batch,
so to make hellaswag and multiple-choice return exactly the same results
as on master, the last token of each sequence needs to be evaluated
even though its output is not used at all.
This will probably be changed back in the future to make these benchmarks
a tiny bit faster.
* perplexity : fix division by zero when using less than 100 multiple-choice tasks
* llama : allow loading state saved with a different ctx size
When loading a session file, the context size is now only required to be
at least enough to load the KV cells contained in that session file,
instead of requiring to use exactly the same context size as when saving.
Doing this enables the use-case of extending or shrinking the context size
of a saved session.
This breaks existing session files because the meaning of kv_buf_size
is slightly changed (previously it was the size of the whole KV cache,
now it's only the size of the saved part of it). This allows for
finer-grained sanity checks when loading in an effort to keep kv_buf_size
useful even when the kv_size is changed.
* llama : minor
ggml-ci
* readme : update recent API changes, and warn about Vulkan
---------
Co-authored-by: Georgi Gerganov <redacted>
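The central bookkeeping idea, as a hedged sketch (names are illustrative; the real `output_ids`/`n_outputs` handling lives inside llama.cpp's decode path): only batch positions whose logits were requested get a row in the compacted output buffer.
```cpp
#include <cstdint>
#include <vector>

// Map each batch position to a compact output row, or -1 if no output is kept.
void build_output_ids(const std::vector<int8_t> & logits_requested,
                      std::vector<int32_t> & output_ids,
                      int32_t & n_outputs) {
    output_ids.assign(logits_requested.size(), -1);
    n_outputs = 0;
    for (size_t i = 0; i < logits_requested.size(); ++i) {
        if (logits_requested[i]) {
            output_ids[i] = n_outputs++;
        }
    }
    // the logits/embeddings buffers are then sized for n_outputs rows
    // instead of one row per batch token, which is the memory saving.
}
```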
Kawrakow [Tue, 26 Mar 2024 14:21:27 +0000 (15:21 +0100)]
IQ1_M: 1.75 bpw quantization (#6302)
* iq1_m: basics
* iq1_m: basics-2
* iq1_m: CUDA dequantize works
Very 1st shot I get PPL = 9.76 for LLaMA-v2-7B.
* iq1_m: separate shifts for each group of 8 in a block
We get
PPL(LLaMA-v2-7B ) = 9.2810
PPL(LLaMA-v2-13B) = 6.8105
Not bad, but slightly higher than
sqrt(PPL(IQ1_S) * PPL(IQ2_XXS))
which is the expected outcome given that IQ1_M is
halfway between IQ1_S and IQ2_XXS in terms of bpw.
From this, we would expect
PPL = 9.14 for LLaMA-v2-7B
PPL = 6.63 for LLaMA-v2-13B
* iq1_m: go to 3-bit scales
There is slight increase in PPL, but the 0.0625 bpw reduction
in size is totally worth it.
We now have
PPL(LLaMA-v2-7B ) = 9.4469 at 1.96 bpw
PPL(LLaMA-v2-13B) = 6.8717 at 1.93 bpw
PPL(LLaMA-v2-70B) = 4.8568 at 1.85 bpw
* iq1_m: scalar dot product
* iq1_m: AVX2 dot product
* iq1_m: very slightly faster AVX2 dot product
* iq1_m: ARM_NEON dot product
Works, but very slow (10.5 t/s)
* iq1_m: Metal - dequantize works, dot product does not
* iq1_m: Metal now works
About the same performance as iq1_s.
* iq1_m: minor
* iq1_m: checking pure iq1_m quantization
It is pretty bad: PPL(LLaMA-v2-7B) = 34 if we quantize output.weight
with Q4_K.
* iq1_m: slightly faster ARM_NEON dot product
10.5 t/s -> 11.65 t/s
* iq1_m: faster ARM_NEON dot product
11.65 t/s -> 14.9 t/s
* iq1_m: another minor ARM_NEON dot product improvement
14.9 -> 15.0 t/s
* iq1_m: small PPL improvement via super-block scale adjustment
After quantizing block scales redo the super-block scale fit.
PPL(LLaMA-v2-7B ) = 9.3346
PPL(LLaMA-v2-13B) = 6.8419
PPL(LLaMA-v2-70B) = 4.8294
PPL(Mistral-7B ) = 8.1624
* iq1_m: adapt to CUDA refactoring
* iq1_m: remove unused variable
We have progressed to warnings being errors.
* iq1_m: add to backend-ops tests
* iq1_m: fix Windows ARM
* iq1_m: use common definition of iq1m_scale_t
* cuda: assert -> NO_DEVICE_CODE
* iq1_M: PR comments
---------
Co-authored-by: Iwan Kawrakow <redacted>
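For context on the expected-PPL figures quoted above: IQ1_M sits halfway between IQ1_S and IQ2_XXS in bits per weight, and log-PPL varies roughly linearly with bpw in this regime, so interpolating in log space gives the geometric mean:

$$\mathrm{PPL}(\mathrm{IQ1\_M}) \approx \sqrt{\mathrm{PPL}(\mathrm{IQ1\_S}) \cdot \mathrm{PPL}(\mathrm{IQ2\_XXS})}$$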
Pedro Cuenca [Tue, 26 Mar 2024 12:32:19 +0000 (13:32 +0100)]
convert-hf : fix exception in sentencepiece with added tokens (#6320)
Kawrakow [Tue, 26 Mar 2024 12:09:30 +0000 (13:09 +0100)]
quantize : be able to override metadata by key (#6321)
* quantize: be able to override metadata by key
* minor : spacing
---------
Co-authored-by: Iwan Kawrakow <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Minsoo Cheong [Tue, 26 Mar 2024 09:11:46 +0000 (18:11 +0900)]
embedding : adjust `n_ubatch` value (#6296)
* embedding: assign `n_ubatch` value, print error on `n_batch` overflow
* Update examples/embedding/embedding.cpp
Co-authored-by: Xuan Son Nguyen <redacted>
* use %ld instead of %lld
* Revert "use %ld instead of %lld"
This reverts commit ea753ede90a86a0699f65878cc8e2020ff5eabb8.
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Jan Boon [Tue, 26 Mar 2024 08:47:43 +0000 (16:47 +0800)]
server : add `n_discard` parameter (#6300)
Joseph Stahl [Tue, 26 Mar 2024 00:51:46 +0000 (20:51 -0400)]
nix: make `xcrun` visible in Nix sandbox for precompiling Metal shaders (#6118)
* Symlink to /usr/bin/xcrun so that `xcrun` binary
is usable during build (used for compiling Metal shaders)
Fixes https://github.com/ggerganov/llama.cpp/issues/6117
* cmake - copy default.metallib to install directory
When metal files are compiled to default.metallib, CMake needs to add this to the install directory so that it's visible to llama-cpp
Also, update package.nix to use absolute path for default.metallib (it's not finding the bundle)
* add `precompileMetalShaders` flag (defaults to false) to disable precompilation of Metal shaders
Precompilation requires Xcode to be installed and requires disabling the sandbox on nix-darwin
slaren [Tue, 26 Mar 2024 00:16:01 +0000 (01:16 +0100)]
cuda : rename build flag to LLAMA_CUDA (#6299)
Christian Kögler [Mon, 25 Mar 2024 17:52:45 +0000 (18:52 +0100)]
nix: fix blas support (#6281)
Since no blas was provided to buildInputs, the executable is built without blas support.
This is a backport of NixOS/nixpkgs#298567
Kawrakow [Mon, 25 Mar 2024 17:33:15 +0000 (18:33 +0100)]
tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (#6303)
Co-authored-by: Iwan Kawrakow <redacted>
Georgi Gerganov [Mon, 25 Mar 2024 15:22:27 +0000 (17:22 +0200)]
flake.lock: Update (#6266)
Flake lock file updates:
• Updated input 'nixpkgs':
'github:NixOS/nixpkgs/d691274a972b3165335d261cc4671335f5c67de9' (2024-03-14)
→ 'github:NixOS/nixpkgs/44d0940ea560dee511026a53f0e2e2cde489b4d4' (2024-03-23)
Co-authored-by: github-actions[bot] <redacted>
slaren [Mon, 25 Mar 2024 14:43:22 +0000 (15:43 +0100)]
cuda : fix LLAMA_CUDA_F16 build (#6298)
slaren [Mon, 25 Mar 2024 12:50:23 +0000 (13:50 +0100)]
cuda : refactor into multiple files (#6269)
Xuan Son Nguyen [Mon, 25 Mar 2024 08:42:17 +0000 (09:42 +0100)]
Server: clean up OAI params parsing function (#6284)
* server: clean up oai parsing function
* fix response_format
* fix empty response_format
* minor fixes
* add TODO for logprobs
* update docs
Neo Zhang Jianyu [Mon, 25 Mar 2024 07:52:41 +0000 (15:52 +0800)]
[SYCL] fix SYCL backend build on Windows broken by LOG() error (#6290)
* fix LOG() error for SYCL, enhance error check by CI
* rollback to bash
* add newline at end of file
Minsoo Cheong [Mon, 25 Mar 2024 07:38:22 +0000 (16:38 +0900)]
examples : add "retrieval" (#6193)
* add `retrieval` example
* add README
* minor fixes
* cast filepos on print
* remove use of variable sized array
* store similarities in separate vector
* print error on insufficient batch size
* fix error message printing
* assign n_batch value to n_ubatch
* fix param definitions
* define retrieval-only parameters in retrieval.cpp
* fix `--context-file` option to be provided multiple times for multiple files
* use vector for `query_emb`
* add usage description in README
* fix merge conflict
* fix usage printing
* remove seed setting
* fix lint
* increase file read buffer size
* retrieval : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
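A hedged sketch of the ranking step such an example needs (names illustrative): keep the chunk similarities in a separate vector, as the log above notes, and sort chunk indices by similarity.
```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Return chunk indices ordered from most to least similar to the query.
std::vector<size_t> rank_chunks(const std::vector<float> & similarities) {
    std::vector<size_t> order(similarities.size());
    for (size_t i = 0; i < order.size(); ++i) {
        order[i] = i;
    }
    std::sort(order.begin(), order.end(), [&](size_t a, size_t b) {
        return similarities[a] > similarities[b];
    });
    return order;
}
```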
Justine Tunney [Mon, 25 Mar 2024 05:39:56 +0000 (01:39 -0400)]
ggml : support AVX512VNNI (#6280)
This change causes some quants (e.g. Q4_0, Q8_0) to go faster on some
architectures (e.g. AMD Zen 4).
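A hedged sketch of why VNNI helps here: `_mm512_dpbusd_epi32` fuses the u8 × s8 multiply and i32 accumulation that otherwise takes a multi-instruction `maddubs`/`madd`/`add` sequence on pre-VNNI hardware. This is an illustration, not the actual ggml kernel; compile with `-mavx512vnni`.
```cpp
#include <immintrin.h>

// One VNNI step: multiply 64 unsigned bytes by 64 signed bytes and
// accumulate the 4-byte-group products into the 16 int32 lanes of acc.
static inline __m512i vnni_dot_step(__m512i acc, __m512i x_u8, __m512i y_s8) {
    return _mm512_dpbusd_epi32(acc, x_u8, y_s8);
}
```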
Rick G [Sun, 24 Mar 2024 21:45:56 +0000 (14:45 -0700)]
Fix heap corruption from wmode out-of-bound writes on windows (#6272)
* would throw error on VS2022 on GGML_FREE(wmode)
* wchar_t is usually 2 bytes, but malloc wants bytes
* therefore `*wmode_p++ = (wchar_t)*mode;` could write off the end of the allocation
* Fixes error possibly introduced by https://github.com/ggerganov/llama.cpp/pull/6248
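A hedged reconstruction of the bug class (not the exact llama.cpp code): sizing a `wchar_t` buffer by character count without multiplying by `sizeof(wchar_t)` under-allocates by half on Windows, and the widening copy then writes past the end.
```cpp
#include <cstdlib>
#include <cstring>

// Correct version: allocate bytes, not characters.
wchar_t * mode_to_wide(const char * mode) {
    size_t n = strlen(mode) + 1; // include the terminator
    wchar_t * wmode = (wchar_t *) malloc(n * sizeof(wchar_t)); // the crucial sizeof
    if (!wmode) {
        return nullptr;
    }
    wchar_t * p = wmode;
    for (const char * m = mode; *m; ++m) {
        *p++ = (wchar_t) *m; // safe: the buffer holds n wchar_t slots
    }
    *p = L'\0';
    return wmode; // caller frees with free()
}
```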
Georgi Gerganov [Sun, 24 Mar 2024 14:18:45 +0000 (16:18 +0200)]
imatrix : fix wname for mul_mat_id ops (#6271)
* imatrix : fix wname for mul_mat_id ops
* also filter tensor names in mul_mat_id ops
---------
Co-authored-by: slaren <redacted>
Johannes Gäßler [Sun, 24 Mar 2024 13:21:17 +0000 (14:21 +0100)]
Fixed lookup compilation issues on Windows (#6273)
Pierrick Hymbert [Sun, 24 Mar 2024 08:57:06 +0000 (09:57 +0100)]
ci : close inactive issue, increase operations per run (#6270)
Minsoo Cheong [Sun, 24 Mar 2024 08:54:07 +0000 (17:54 +0900)]
sampling : deduplicated code for probability distribution access (#6240)
* sampling: remove duplicated code for probability distribution access
* free original_logits
* fix original_logits allocation
* fixes based on review @cebtenzzre
* change function name to `llama_sampling_prepare`
Meng, Hengyu [Sun, 24 Mar 2024 04:04:25 +0000 (12:04 +0800)]
[SYCL] offload op (#6217)
* remove no USM methods
* leave the schedule to ggml_backend_sched entirely
Neo Zhang Jianyu [Sun, 24 Mar 2024 01:44:01 +0000 (09:44 +0800)]
Support build win release for SYCL (#6241)
* support release win
* fix value
* fix value
* fix value
* fix error
* fix error
* fix format
Jared Van Bortel [Sat, 23 Mar 2024 22:48:02 +0000 (18:48 -0400)]
use _wfopen instead of fopen on Windows (#6248)
also fix missing #defines before windows.h, and BPE LF token on MSVC
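A hedged sketch of the portable-open pattern this commit implements (fixed-size buffers here for brevity; the real code may size them differently): convert the UTF-8 path to UTF-16 on Windows and call `_wfopen`, so non-ASCII filenames open correctly.
```cpp
#include <cstdio>
#ifdef _WIN32
#include <windows.h>
#endif

FILE * fopen_utf8(const char * path, const char * mode) {
#ifdef _WIN32
    wchar_t wpath[MAX_PATH];
    wchar_t wmode[32];
    // -1 length: treat inputs as NUL-terminated and convert the terminator too
    if (!MultiByteToWideChar(CP_UTF8, 0, path, -1, wpath, MAX_PATH)) return nullptr;
    if (!MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 32))       return nullptr;
    return _wfopen(wpath, wmode);
#else
    return fopen(path, mode); // POSIX fopen already accepts UTF-8 paths
#endif
}
```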
Georgi Gerganov [Sat, 23 Mar 2024 19:35:23 +0000 (21:35 +0200)]
gitignore : gguf-split
Pierrick Hymbert [Sat, 23 Mar 2024 17:07:00 +0000 (18:07 +0100)]
common: llama_load_model_from_url split support (#6192)
* llama: llama_split_prefix: fix strncpy not including string termination
common: llama_load_model_from_url:
- fix header name case sensitive
- support downloading additional split in parallel
- hide password in url
* common: EOL EOF
* common: remove redundant LLAMA_CURL_MAX_PATH_LENGTH definition
* common: change max url length
* common: minor comment
* server: support HF URL options
* llama: llama_model_loader fix log
* common: use a constant for max url length
* common: clean up curl if file cannot be loaded in gguf
* server: tests: add split tests, and HF options params
* common: move llama_download_hide_password_in_url inside llama_download_file as a lambda
* server: tests: enable back Release test on PR
* spacing
Co-authored-by: Georgi Gerganov <redacted>
* spacing
Co-authored-by: Georgi Gerganov <redacted>
* spacing
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
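A hedged sketch of the credential-masking step mentioned above (`llama_download_hide_password_in_url`); the pattern and mask string are illustrative.
```cpp
#include <regex>
#include <string>

// Mask the password portion of a user:password@host URL before logging it.
std::string hide_password_in_url_sketch(const std::string & url) {
    // https://user:secret@host/path -> https://user:********@host/path
    static const std::regex re("(//[^:/]+:)([^@/]+)(@)");
    return std::regex_replace(url, re, "$1********$3");
}
```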
Pierrick Hymbert [Sat, 23 Mar 2024 17:00:38 +0000 (18:00 +0100)]
server: docs: `--threads` and `--threads-batch`, `--ubatch-size`, `--log-disable` (#6254)
Julius Arkenberg [Sat, 23 Mar 2024 16:41:53 +0000 (17:41 +0100)]
llama : add grok-1 support (#6204)
* Add support for Grok model architecture
* Revert convert-hf-to-gguf to default options
* Fixed f_norm_rms_eps bug
* Fix whitespaces
* llama : fix grok rope type
* llama : minor
---------
Co-authored-by: Georgi Gerganov <redacted>