git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
4 months agocuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)
Gian-Carlo Pascutto [Sat, 22 Feb 2025 08:43:24 +0000 (09:43 +0100)]
cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)

4 months agollama.swiftui : add "Done" dismiss button to help view (#11998)
Daniel Bevenius [Sat, 22 Feb 2025 05:33:29 +0000 (06:33 +0100)]
llama.swiftui : add "Done" dismiss button to help view (#11998)

The commit updates the help view in the llama.swiftui example to use a
NavigationView and a Done button to dismiss the help view.

The motivation for this is that without this change there is no way to
dismiss the help view.

4 months agollama : skip loading unused tensors (#12004)
Georgi Gerganov [Fri, 21 Feb 2025 16:33:18 +0000 (18:33 +0200)]
llama : skip loading unused tensors (#12004)

* llama : assign unknown/unused tensors to host buffer type

ggml-ci

* llama : skip unused tensors

ggml-ci

4 months agodoc: update contributing guidelines [no ci] (#11969)
Johannes Gäßler [Fri, 21 Feb 2025 11:51:25 +0000 (12:51 +0100)]
doc: update contributing guidelines [no ci] (#11969)

4 months agoCUDA: correct the lowest Maxwell supported by CUDA 12 (#11984)
PureJourney [Fri, 21 Feb 2025 11:21:05 +0000 (19:21 +0800)]
CUDA: correct the lowest Maxwell supported by CUDA 12 (#11984)

* CUDA: correct the lowest Maxwell supported by CUDA 12

---------

Co-authored-by: Johannes Gäßler <redacted>
4 months agoMUSA: support ARM64 and enable dp4a etc. (#11843)
Bodhi [Fri, 21 Feb 2025 07:46:23 +0000 (15:46 +0800)]
MUSA: support ARM64 and enable dp4a etc. (#11843)

* MUSA: support ARM64 and enable __dp4a etc.

* fix cross entropy loss op for musa

* update

* add cc info log for musa

* add comment for the MUSA .cc calculation block

---------

Co-authored-by: Bodhi Hu <redacted>
4 months agoclip : fix visual encoders with no CLS (#11982)
Alex Brooks [Fri, 21 Feb 2025 06:11:03 +0000 (23:11 -0700)]
clip : fix visual encoders with no CLS (#11982)

Signed-off-by: Alex-Brooks <redacted>
4 months agoserver (webui): Fix Premature Submission During IME Conversion (#11971)
momonga [Thu, 20 Feb 2025 18:43:22 +0000 (03:43 +0900)]
server (webui): Fix Premature Submission During IME Conversion (#11971)

* fix skip ime composing

* fix npm rebuild

* fix warn

---------

Co-authored-by: momonga <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
4 months agoggml-cpu: Add CPU backend support for KleidiAI library (#11390)
Charles Xu [Thu, 20 Feb 2025 13:06:51 +0000 (14:06 +0100)]
ggml-cpu: Add CPU backend support for KleidiAI library (#11390)

* ggml-cpu: Add CPU backend support for KleidiAI library

* Add environmental variable GGML_KLEIDIAI_SME

* Add support for multithread LHS conversion

* Switch kernel selection order to dotprod and i8mm

* updates for review comments

* More updates for review comments

* Reorganize and rename KleidiAI files

* Move ggml-cpu-traits.h to source file

* Update cmake for SME build and add alignment for SME

* Remove appending GGML_USE_CPU_KLEIDIAI to the GGML_CDEF_PUBLIC list

4 months agoggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (#11917)
Prashant Vithule [Thu, 20 Feb 2025 10:08:32 +0000 (15:38 +0530)]
ggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (#11917)

* Added SVE Implementation for Q3_K Kernel in ggml-cpu-quants.c file

* Improved formatting of code in ggml-cpu-quants.c file

* style : minor fixes

* style : less whitespaces

* style : ptr spacing

---------

Co-authored-by: vithulep <redacted>
Co-authored-by: Georgi Gerganov <redacted>
4 months agorun : add --chat-template-file (#11961)
Michael Engel [Thu, 20 Feb 2025 08:35:11 +0000 (09:35 +0100)]
run : add --chat-template-file (#11961)

Relates to: https://github.com/ggml-org/llama.cpp/issues/11178

Added --chat-template-file CLI option to llama-run. If specified, the file
will be read and its content passed to common_chat_templates_from_model to
override the model's chat template.
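
For illustration, a minimal sketch of the file-reading side, assuming a hypothetical helper name; the resulting string would then be handed to common_chat_templates_from_model as the template override (that call is not shown here, since its exact signature is not part of this log):
```cpp
// Hypothetical helper, not part of the commit: slurp the template file into a
// string so it can be passed on as the chat template override.
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

static std::string read_chat_template_file(const std::string & path) {
    std::ifstream file(path);
    if (!file) {
        throw std::runtime_error("failed to open chat template file: " + path);
    }
    std::ostringstream ss;
    ss << file.rdbuf();  // read the whole file
    return ss.str();     // later used to override the model's chat template
}
```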

Signed-off-by: Michael Engel <redacted>
4 months agodoc: add links to ggml examples [no ci] (#11958)
Johannes Gäßler [Wed, 19 Feb 2025 19:45:17 +0000 (20:45 +0100)]
doc: add links to ggml examples [no ci] (#11958)

4 months agocommon : add llama.vim preset for Qwen2.5 Coder (#11945)
Daniel Bevenius [Wed, 19 Feb 2025 11:29:52 +0000 (12:29 +0100)]
common : add llama.vim preset for Qwen2.5 Coder (#11945)

This commit adds a preset for llama.vim to use the default Qwen 2.5
Coder models.

The motivation for this change is to make it easier to start a server
suitable to be used with the llama.vim plugin. For example, the server
can be started with a command like the following:
```console
$ llama.vim --fim-qwen-1.5b-default
```

Refs: https://github.com/ggml-org/llama.cpp/issues/10932

4 months agospeculative : update default params (#11954)
Georgi Gerganov [Wed, 19 Feb 2025 11:29:42 +0000 (13:29 +0200)]
speculative : update default params (#11954)

* speculative : update default params

* speculative : do not discard the last drafted token

4 months agollama : fix indentation in llama-grammar [no ci] (#11943)
Daniel Bevenius [Wed, 19 Feb 2025 05:16:23 +0000 (06:16 +0100)]
llama : fix indentation in llama-grammar [no ci] (#11943)

This commit adjusts the indentation for the functions `parse_sequence`
and `parse_rule` in src/llama-grammar.cpp.

The motivation is consistency and improved readability.

4 months agoserver : (webui) Enable communication with parent html (if webui is in iframe) (#11940)
igardev [Tue, 18 Feb 2025 22:01:44 +0000 (00:01 +0200)]
server : (webui) Enable communication with parent html (if webui is in iframe) (#11940)

* Webui: Enable communication with parent html (if webui is in iframe):
- Listens for the "setText" command from the parent with "text" and "context" fields. "text" is set in inputMsg, "context" is used as hidden context on the following requests to the llama.cpp server
- On pressing the Escape button, sends the command "escapePressed" to the parent

Example handling from the parent html side:
- Send command "setText" from parent html to webui in iframe:
  const iframe = document.getElementById('askAiIframe');
  if (iframe) {
      iframe.contentWindow.postMessage({ command: 'setText', text: text, context: context }, '*');
  }

- Listen for Escape key from webui on parent html:
  // Listen for escape key event in the iframe
  window.addEventListener('keydown', (event) => {
      if (event.key === 'Escape') {
          // Process case when Escape is pressed inside webui
      }
  });

* Move the extraContext from storage to app.context.

* Fix formatting.

* add Message.extra

* format + build

* MessageExtraContext

* build

* fix display

* rm console.log

---------

Co-authored-by: igardev <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
4 months agotool-call: refactor common chat / tool-call api (+ tests / fixes) (#11900)
Olivier Chafik [Tue, 18 Feb 2025 18:03:23 +0000 (18:03 +0000)]
tool-call: refactor common chat / tool-call api (+ tests / fixes) (#11900)

* tool-call refactoring: moved common_chat_* to chat.h, common_chat_templates_init return a unique_ptr to opaque type

* addressed clang-tidy lints in [test-]chat.*

* rm minja deps from util & common & move it to common/minja/

* add name & tool_call_id to common_chat_msg

* add common_chat_tool

* added json <-> tools, msgs conversions to chat.h

* fix double bos/eos jinja avoidance hack (was preventing inner bos/eos tokens)

* fix deepseek r1 slow test (no longer <think> opening w/ new template)

* allow empty tools w/ auto + grammar

* fix & test server grammar & json_schema params w/ & w/o --jinja

4 months agoserver : add TEI API format for /rerank endpoint (#11942)
Xuan-Son Nguyen [Tue, 18 Feb 2025 13:21:41 +0000 (14:21 +0100)]
server : add TEI API format for /rerank endpoint (#11942)

* server : add TEI API format for /rerank endpoint

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* fix

* also gitignore examples/server/*.gz.hpp

---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agoscripts: corrected encoding when getting chat template (#11866) (#11907)
MoonRide303 [Tue, 18 Feb 2025 09:30:16 +0000 (10:30 +0100)]
scripts: corrected encoding when getting chat template (#11866) (#11907)

Signed-off-by: MoonRide303 <redacted>
4 months agodocs : Fix duplicated file extension in test command (#11935)
xiaobing318 [Tue, 18 Feb 2025 09:12:49 +0000 (17:12 +0800)]
docs : Fix duplicated file extension in test command (#11935)

This commit fixes an issue in the llama.cpp project where the command for testing the llama-server object contained a duplicated file extension. The original command was:
./tests.sh unit/test_chat_completion.py.py -v -x
It has been corrected to:
./tests.sh unit/test_chat_completion.py -v -x
This change ensures that the test script correctly locates and executes the intended test file, preventing test failures due to an incorrect file name.

4 months agoCUDA: use async data loading for FlashAttention (#11894)
Johannes Gäßler [Mon, 17 Feb 2025 13:03:24 +0000 (14:03 +0100)]
CUDA: use async data loading for FlashAttention (#11894)

* CUDA: use async data loading for FlashAttention

---------

Co-authored-by: Diego Devesa <redacted>
4 months agoupdate release requirements (#11897)
Eve [Mon, 17 Feb 2025 11:20:23 +0000 (11:20 +0000)]
update release requirements (#11897)

4 months agoserver : fix divide-by-zero in metrics reporting (#11915)
Antoine Viallon [Mon, 17 Feb 2025 10:25:12 +0000 (11:25 +0100)]
server : fix divide-by-zero in metrics reporting (#11915)

4 months agovulkan: implement several ops relevant for ggml_opt (#11769)
Rémy O [Mon, 17 Feb 2025 06:55:57 +0000 (07:55 +0100)]
vulkan: implement several ops relevant for ggml_opt (#11769)

* vulkan: support memset_tensor

* vulkan: support GGML_OP_SUM

* vulkan: implement GGML_OP_ARGMAX

* vulkan: implement GGML_OP_SUB

* vulkan: implement GGML_OP_COUNT_EQUAL

* vulkan: implement GGML_OP_OPT_STEP_ADAMW

* vulkan: fix check_results RWKV_WKV6 crash and memory leaks

* vulkan: implement GGML_OP_REPEAT_BACK

* tests: remove invalid test-backend-ops REPEAT_BACK tests

* vulkan: fix COUNT_EQUAL memset using a fillBuffer command

4 months agoserver : bump httplib to 0.19.0 (#11908)
Xuan-Son Nguyen [Sun, 16 Feb 2025 17:11:22 +0000 (18:11 +0100)]
server : bump httplib to 0.19.0 (#11908)

4 months agocommon : Fix a typo in help (#11899)
standby24x7 [Sun, 16 Feb 2025 09:51:13 +0000 (18:51 +0900)]
common : Fix a typo in help (#11899)

This patch fixes a typo in command help.
prefx -> prefix

Signed-off-by: Masanari Iida <redacted>
4 months agoci : fix (again) arm64 build fails (#11895)
Xuan-Son Nguyen [Sun, 16 Feb 2025 09:36:39 +0000 (10:36 +0100)]
ci : fix (again) arm64 build fails (#11895)

* docker : attempt fixing arm64 build on ci

* qemu v7.0.0-28

4 months agovulkan: support multi/vision rope, and noncontiguous rope (#11902)
Jeff Bolz [Sun, 16 Feb 2025 07:52:23 +0000 (01:52 -0600)]
vulkan: support multi/vision rope, and noncontiguous rope (#11902)

4 months agometal : fix the crash caused by the lack of residency set support on Intel Macs. (#11904)
Hale Chan [Sun, 16 Feb 2025 06:50:26 +0000 (14:50 +0800)]
metal : fix the crash caused by the lack of residency set support on Intel Macs. (#11904)

4 months agoscripts: fix compare-llama-bench commit hash logic (#11891)
Johannes Gäßler [Sat, 15 Feb 2025 19:23:22 +0000 (20:23 +0100)]
scripts: fix compare-llama-bench commit hash logic (#11891)

4 months agoexamples: fix typo in imatrix/README.md (#11884)
708-145 [Sat, 15 Feb 2025 19:03:30 +0000 (20:03 +0100)]
examples: fix typo in imatrix/README.md (#11884)

* simple typo fixed

* Update examples/imatrix/README.md

---------

Co-authored-by: Tobias Bergmann <redacted>
Co-authored-by: Georgi Gerganov <redacted>
4 months agometal : optimize dequant q6_K kernel (#11892)
Adrian Kretz [Sat, 15 Feb 2025 18:39:20 +0000 (19:39 +0100)]
metal : optimize dequant q6_K kernel (#11892)

4 months agoreadme : add notice about new package registry (#11890)
Georgi Gerganov [Sat, 15 Feb 2025 18:29:56 +0000 (20:29 +0200)]
readme : add notice about new package registry (#11890)

* readme : add notice about new package registry

* cont : fix whitespace

4 months agorepo : update links to new url (#11886)
Georgi Gerganov [Sat, 15 Feb 2025 14:40:57 +0000 (16:40 +0200)]
repo : update links to new url (#11886)

* repo : update links to new url

ggml-ci

* cont : more urls

ggml-ci

4 months agoserver: fix type promotion typo causing crashes w/ --jinja w/o tools (#11880)
Olivier Chafik [Sat, 15 Feb 2025 10:11:36 +0000 (10:11 +0000)]
server: fix type promotion typo causing crashes w/ --jinja w/o tools  (#11880)

4 months agovulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)
Rémy O [Sat, 15 Feb 2025 08:01:40 +0000 (09:01 +0100)]
vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)

* vulkan: initial support for IQ1_S and IQ1_M quantizations

* vulkan: define MMV kernels for IQ1 quantizations

* devops: increase timeout of Vulkan tests again

* vulkan: simplify ifdef for init_iq_shmem

4 months agollguidance build fixes for Windows (#11664) upstream/0.0.4719
Michał Moskal [Fri, 14 Feb 2025 20:46:08 +0000 (12:46 -0800)]
llguidance build fixes for Windows (#11664)

* setup windows linking for llguidance; thanks @phil-scott-78

* add build instructions for windows and update script link

* change VS Community link from DE to EN

* whitespace fix

4 months agoopencl: Fix rope and softmax (#11833)
lhez [Fri, 14 Feb 2025 19:12:23 +0000 (11:12 -0800)]
opencl: Fix rope and softmax (#11833)

* opencl: fix `ROPE`

* opencl: fix `SOFT_MAX`

* Add fp16 variant

* opencl: enforce subgroup size for `soft_max`

4 months agocuda : add ampere to the list of default architectures (#11870)
Diego Devesa [Fri, 14 Feb 2025 14:33:52 +0000 (15:33 +0100)]
cuda : add ampere to the list of default architectures (#11870)

4 months agodocker : drop to CUDA 12.4 (#11869)
Georgi Gerganov [Fri, 14 Feb 2025 12:48:40 +0000 (14:48 +0200)]
docker : drop to CUDA 12.4 (#11869)

* docker : drop to CUDA 12.4

* docker : update readme [no ci]

4 months agollama : add completion for --chat-template-file (#11860)
Daniel Bevenius [Fri, 14 Feb 2025 10:16:56 +0000 (11:16 +0100)]
llama : add completion for --chat-template-file (#11860)

This commit adds completion for `--chat-template-file`, enabling only
`.jinja` files to be displayed as completions.

Example usage:
```console
$ ./build/bin/llama-cli --chat-template-file models/templates/<TAB>
models/templates/CohereForAI-c4ai-command-r7b-12-2024-tool_use.jinja
models/templates/CohereForAI-c4ai-command-r-plus-tool_use.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
models/templates/fireworks-ai-llama-3-firefunction-v2.jinja
models/templates/google-gemma-2-2b-it.jinja
models/templates/llama-cpp-deepseek-r1.jinja
models/templates/meetkai-functionary-medium-v3.1.jinja
models/templates/meetkai-functionary-medium-v3.2.jinja
models/templates/meta-llama-Llama-3.1-8B-Instruct.jinja
models/templates/meta-llama-Llama-3.2-3B-Instruct.jinja
models/templates/meta-llama-Llama-3.3-70B-Instruct.jinja
models/templates/microsoft-Phi-3.5-mini-instruct.jinja
models/templates/mistralai-Mistral-Nemo-Instruct-2407.jinja
models/templates/NousResearch-Hermes-2-Pro-Llama-3-8B-tool_use.jinja
models/templates/NousResearch-Hermes-3-Llama-3.1-8B-tool_use.jinja
models/templates/Qwen-Qwen2.5-7B-Instruct.jinja
```
This is not limited to the models/templates directory; it can be used
anywhere in the filesystem. The above is just an example.

4 months agoggml: optimize some vec dot functions for LoongArch ASX (#11842)
Jinyang He [Fri, 14 Feb 2025 08:54:27 +0000 (16:54 +0800)]
ggml: optimize some vec dot functions for LoongArch ASX (#11842)

* Optimize ggml_vec_dot_q3_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q4_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q6_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q5_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q2_K_q8_K for LoongArch ASX

* Optimize mul_sum_i8_pairs_float for LoongArch ASX

* Optimize ggml_vec_dot_iq4_xs_q8_K for LoongArch ASX

4 months agovulkan: linux builds + small subgroup size fixes (#11767)
Eve [Fri, 14 Feb 2025 02:59:40 +0000 (02:59 +0000)]
vulkan: linux builds + small subgroup size fixes (#11767)

* mm subgroup size

* upload vulkan x86 builds

4 months agollama-bench : fix unexpected global variable initialize sequence issue (#11832)
theraininsky [Fri, 14 Feb 2025 01:13:43 +0000 (09:13 +0800)]
llama-bench : fix unexpected global variable initialize sequence issue (#11832)

* llama-bench : fix unexpected global variable initialize sequence issue

* Update examples/llama-bench/llama-bench.cpp

---------

Co-authored-by: Diego Devesa <redacted>
4 months agoreadme : minor
Georgi Gerganov [Thu, 13 Feb 2025 22:16:56 +0000 (00:16 +0200)]
readme : minor

4 months agollamafile: use member variable instead of constant for iq4nlt (#11780)
Jeffrey Morgan [Thu, 13 Feb 2025 17:05:04 +0000 (09:05 -0800)]
llamafile: use member variable instead of constant for iq4nlt (#11780)

4 months agoserver : (docs) Update wrong tool calling example (#11809)
Reza Rahemtola [Thu, 13 Feb 2025 16:22:44 +0000 (17:22 +0100)]
server : (docs) Update wrong tool calling example (#11809)

Call updated to match the tool used in the output just below, following the example in https://github.com/ggerganov/llama.cpp/pull/9639

4 months agollama : add --completion-bash option (#11846)
Daniel Bevenius [Thu, 13 Feb 2025 13:46:59 +0000 (14:46 +0100)]
llama : add --completion-bash option (#11846)

This commit adds a new option `--completion-bash` to llama.cpp which
outputs a source-able bash completion script.

The motivation for this change is to provide a more user-friendly
experience for users who use the command-line interface of llama.cpp.

This is currently only basic: all options are displayed for all llama
executables, but this can be improved in the future if needed.

Example usage:
```console
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash

$ ./build/bin/llama-server --m<TAB>
--main-gpu         --mirostat         --mirostat-lr      --model            --multiline-input
--min-p            --mirostat-ent     --mlock            --model-url
```

4 months agomusa: bump MUSA SDK version to rc3.1.1 (#11822)
R0CKSTAR [Thu, 13 Feb 2025 12:28:18 +0000 (20:28 +0800)]
musa: bump MUSA SDK version to rc3.1.1  (#11822)

* musa: Update MUSA SDK version to rc3.1.1

Signed-off-by: Xiaodong Ye <redacted>
* musa: Remove workaround in PR #10042

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
4 months ago`server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB & DeepSeek R1) unless `--reasoning-format none` (#11607)
Olivier Chafik [Thu, 13 Feb 2025 10:05:16 +0000 (10:05 +0000)]
`server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB & DeepSeek R1) unless `--reasoning-format none` (#11607)

* extract & return thoughts in reasoning_content field (unless --reasoning-format) for DeepSeek R1 & Command R7B

* tool-calls: add deepseek r1 template (models/templates/llama-cpp-deepseek-r1.jinja) + hackommodate broken official template

* tool-calls: accommodate the variety of wrong tool call opening tags that both R1 Qwen 32B and 7B distills like to spit out

* server/oai: ensure content is null when there are tool calls, and reasoning_content appears before content for readability

* tool-calls: add DeepSeek R1 Qwen distills to server/README.md & server tests

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agosampling: add Top-nσ sampler (#11223)
Vinesh Janarthanan [Thu, 13 Feb 2025 06:45:57 +0000 (00:45 -0600)]
sampling: add Top-nσ sampler (#11223)

* initial sampling changes:

* completed top nsigma sampler implementation

* apply parameter to only llama-cli

* updated readme

* added tests and fixed nsigma impl

* cleaned up pr

* format

* format

* format

* removed commented tests

* cleanup pr and remove explicit floats

* added top-k sampler to improve performance

* changed sigma to float

* fixed string format to float

* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update common/sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* added llama_sampler_init

---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agollama.cpp: fix warning message (#11839)
Oleksandr Kuvshynov [Thu, 13 Feb 2025 06:25:34 +0000 (01:25 -0500)]
llama.cpp: fix warning message (#11839)

There was a typo-like error, which would print the same number twice if
a request is received with n_predict > the server-side config.

Before the fix:
```
slot launch_slot_: id  0 | task 0 | n_predict = 4096 exceeds server configuration, setting to 4096
```

After the fix:
```
slot launch_slot_: id  0 | task 0 | n_predict = 8192 exceeds server configuration, setting to 4096
```

4 months agollama : update llama_decode_internal ref [no ci] (#11840)
Daniel Bevenius [Thu, 13 Feb 2025 06:07:51 +0000 (07:07 +0100)]
llama : update llama_decode_internal ref [no ci] (#11840)

This commit updates the comment in llama_kv_cache.h to reflect the
change of the function name from llama_decode_internal to
llama_decode_impl.

4 months agoggml-cpu : add chunking support to mul_mat_id (#11666)
Diego Devesa [Thu, 13 Feb 2025 00:02:38 +0000 (01:02 +0100)]
ggml-cpu : add chunking support to mul_mat_id (#11666)

* ggml-cpu : add chunking support to mul_mat_id

* allocate chunk counter in wdata
parallelize src1 quantization by column to allow parallelization even when there is only one row

* disable for arm

* cleanup

* better way to disable for arm

* fix uninitialized counter when using 1 thread only

* revert test-backend-ops changes

4 months agoggml : x2 speed for WASM by optimizing SIMD (#11453)
Xuan-Son Nguyen [Wed, 12 Feb 2025 23:33:45 +0000 (00:33 +0100)]
ggml : x2 speed for WASM by optimizing SIMD (#11453)

* ggml : x2 speed for WASM by optimizing SIMD

* fix bad merging

* rm trailing spaces

* rm redundant clamp

* better quantize_row_q8_K

Co-authored-by: camel-cdr <redacted>
* remove memset that causes buffer overflow
Co-authored-by: camel-cdr <redacted>
---------

Co-authored-by: camel-cdr <redacted>
4 months agoserver : (webui) Give copy button back to all message bubbles (#11814)
Woof Dog [Wed, 12 Feb 2025 22:47:11 +0000 (22:47 +0000)]
server : (webui) Give copy button back to all message bubbles (#11814)

* All messages get the copy button

* Update index.html.gz

4 months agoHIP: Remove GCN from list of devices that avoid MMQ (#11831)
uvos [Wed, 12 Feb 2025 21:25:28 +0000 (22:25 +0100)]
HIP: Remove GCN from list of devices that avoid MMQ (#11831)

4 months agoFix: Compile failure due to Microsoft STL breaking change (#11836)
JC [Wed, 12 Feb 2025 20:36:11 +0000 (20:36 +0000)]
Fix: Compile failure due to Microsoft STL breaking change (#11836)

4 months agosync : ggml
Georgi Gerganov [Wed, 12 Feb 2025 19:46:02 +0000 (21:46 +0200)]
sync : ggml

4 months agoHIP: Switch to std::vector in rocblas version check (#11820)
uvos [Wed, 12 Feb 2025 16:25:03 +0000 (17:25 +0100)]
HIP: Switch to std::vector in rocblas version check (#11820)

4 months agocleanup: fix compile warnings associated with gnu_printf (#11811)
bandoti [Wed, 12 Feb 2025 14:06:53 +0000 (10:06 -0400)]
cleanup: fix compile warnings associated with gnu_printf (#11811)

4 months agoggml : fix multi-threaded clamp_f32 (#11824)
Richard [Wed, 12 Feb 2025 13:57:33 +0000 (13:57 +0000)]
ggml : fix multi-threaded clamp_f32 (#11824)

* Bug fix for clamp_f32

When using tensors larger than 1D, the clamp operation does not work because the kernel returns early when ith is not 0 (see the sketch below).

* Bug fix for clamp_f32

* Bug fix for clamp_f32
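
As an illustration only, here is the usual ggml-style row partitioning that avoids returning early on ith != 0; this is a guess at the shape of the fix with made-up function and parameter names, not the literal patch:
```cpp
#include <algorithm>
#include <cstdint>

// Each thread (ith of nth) clamps its own slice of rows, so tensors with more
// than one row are fully processed instead of only thread 0 doing any work.
static void clamp_rows_f32(float * data, int64_t nrows, int64_t ncols,
                           float min_val, float max_val, int ith, int nth) {
    const int64_t rows_per_thread = (nrows + nth - 1) / nth;
    const int64_t row_start = ith * rows_per_thread;
    const int64_t row_end   = std::min(nrows, row_start + rows_per_thread);

    for (int64_t r = row_start; r < row_end; ++r) {
        float * row = data + r * ncols;
        for (int64_t c = 0; c < ncols; ++c) {
            row[c] = std::max(min_val, std::min(max_val, row[c]));
        }
    }
}
```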

4 months agoggml-cpu: Fix duplicate MATMUL_INT8 (#11817)
Weizhao Ouyang [Wed, 12 Feb 2025 12:22:58 +0000 (20:22 +0800)]
ggml-cpu: Fix duplicate MATMUL_INT8 (#11817)

Signed-off-by: Weizhao Ouyang <redacted>
4 months agoCUDA: fix CUDART_VERSION checks (#11821)
Johannes Gäßler [Wed, 12 Feb 2025 12:16:39 +0000 (13:16 +0100)]
CUDA: fix CUDART_VERSION checks (#11821)

4 months agollama : fix typo in llama-grammar.h [no ci] (#11816)
Daniel Bevenius [Wed, 12 Feb 2025 07:40:01 +0000 (08:40 +0100)]
llama : fix typo in llama-grammar.h [no ci] (#11816)

4 months agodocs: add OpenCL (#11697)
lhez [Tue, 11 Feb 2025 22:04:13 +0000 (14:04 -0800)]
docs: add OpenCL (#11697)

4 months agoFix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (#11803)
Sheldon Robinson [Tue, 11 Feb 2025 15:55:45 +0000 (10:55 -0500)]
Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (#11803)

* Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx

* Fix #11802: PR #11803 - keep RegQueryValueExA, remove TEXT macro, description needs to be ANSI string

4 months agoserver : use common_token_to_piece instead of common_detokenize (#11740)
Daniel Bevenius [Tue, 11 Feb 2025 13:06:45 +0000 (14:06 +0100)]
server : use common_token_to_piece instead of common_detokenize (#11740)

* server : use common_token_to_piece instead of common_detokenize

This commit replaces the call to common_detokenize with
common_token_to_piece in populate_token_probs.

The motivation for this change is to avoid an issue where
common_detokenize would remove the word boundary character for tokens,
which caused a regression in the server generated token probabilities.

Resolves: https://github.com/ggerganov/llama.cpp/issues/11728

* squash! server : use common_token_to_piece instead of common_detokenize

Use common_token_to_piece for post_sampling_probs as well.

4 months agoCUDA: use arch list for compatibility check (#11775)
Johannes Gäßler [Mon, 10 Feb 2025 23:17:22 +0000 (00:17 +0100)]
CUDA: use arch list for compatibility check (#11775)

* CUDA: use arch list for feature availability check

---------

Co-authored-by: Diego Devesa <redacted>
4 months agofix: typos in documentation files (#11791)
Maxim Evtush [Mon, 10 Feb 2025 22:21:31 +0000 (23:21 +0100)]
fix: typos in documentation files (#11791)

* Update ggml.c

* Update arg.cpp

* Update speculative.h

4 months agodocs: utilize the forward slash (/) as the path separator for Unix-like systems (#11770)
jason_w [Mon, 10 Feb 2025 22:17:48 +0000 (06:17 +0800)]
docs: utilize the forward slash (/) as the path separator for Unix-like systems (#11770)

4 months agoserver : (webui) introduce conversation branching + idb storage (#11792)
Xuan-Son Nguyen [Mon, 10 Feb 2025 20:23:17 +0000 (21:23 +0100)]
server : (webui) introduce conversation branching + idb storage (#11792)

* server : (webui) introduce conversation branching + idb storage

* mark old conv as "migrated" instead of deleting them

* improve migration

* add more comments

* more clarification

4 months agollama-mmap: fix missing include (#11796)
Wilken Gottwalt [Mon, 10 Feb 2025 18:58:18 +0000 (19:58 +0100)]
llama-mmap: fix missing include (#11796)

Technically the fixed-width types come only from the iostream and
cstdint/stdint.h headers; the memory and vector headers should not provide
them. In GCC 15 the headers are cleaned up, so you need to include the
proper header, cstdint.

src/llama-mmap.h:26:5: error: ‘uint32_t’ does not name a type
   26 |     uint32_t read_u32() const;
      |     ^~~~~~~~
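
A standalone sketch of the failure mode and the one-line fix (not the actual llama-mmap.h contents):
```cpp
// With GCC 15, <memory>/<vector> no longer drag in the fixed-width integer
// types, so uint32_t must come from <cstdint> explicitly.
#include <cstdint>   // the include that was missing
#include <memory>
#include <vector>

struct reader_example {
    uint32_t read_u32() const { return 42; }  // fails to compile without <cstdint>
};

int main() {
    return reader_example{}.read_u32() == 42 ? 0 : 1;
}
```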

4 months agoserver : correct signal handler (#11795)
Xuan-Son Nguyen [Mon, 10 Feb 2025 17:03:28 +0000 (18:03 +0100)]
server : correct signal handler (#11795)

4 months agosync: minja (https://github.com/google/minja/commit/a72057e5190de2c612d4598bb10b4bfd0f53011f) (#11774)
Olivier Chafik [Mon, 10 Feb 2025 09:34:09 +0000 (09:34 +0000)]
sync: minja (https://github.com/google/minja/commit/a72057e5190de2c612d4598bb10b4bfd0f53011f) (#11774)

4 months agoUpdate README.md [no ci] (#11781)
pascal-lc [Mon, 10 Feb 2025 08:05:57 +0000 (16:05 +0800)]
Update README.md [no ci] (#11781)

typo: `\` -> `/`
Change the Unix path separator to `/`.

4 months agovulkan: Make Vulkan optional at runtime (#11493). (#11494)
Danny Milosavljevic [Mon, 10 Feb 2025 06:17:21 +0000 (07:17 +0100)]
vulkan: Make Vulkan optional at runtime (#11493). (#11494)

Co-authored-by: Jeff Bolz <redacted>
4 months agovulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592)
Wagner Bruna [Mon, 10 Feb 2025 06:08:22 +0000 (03:08 -0300)]
vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592)

4 months agoThere's a better way of clearing lines (#11756)
Eric Curtin [Sun, 9 Feb 2025 10:34:49 +0000 (10:34 +0000)]
There's a better way of clearing lines (#11756)

Use the ANSI escape code for clearing a line.
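
For reference, a minimal example of the technique; the exact escape sequence used by the commit is an assumption here, not quoted from the patch:
```cpp
#include <cstdio>

// ESC[2K erases the whole current line; '\r' moves the cursor back to column 0.
static void clear_line() {
    std::fputs("\x1b[2K\r", stdout);
    std::fflush(stdout);
}

int main() {
    std::fputs("downloading... 42%", stdout);
    std::fflush(stdout);
    clear_line();                     // wipe the progress text in place
    std::fputs("done\n", stdout);
    return 0;
}
```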

Signed-off-by: Eric Curtin <redacted>
4 months agovulkan: account for lookup tables when checking shared memory size (#11502)
Jeff Bolz [Sun, 9 Feb 2025 07:43:51 +0000 (01:43 -0600)]
vulkan: account for lookup tables when checking shared memory size (#11502)

4 months agoserver : (webui) revamp Settings dialog, add Pyodide interpreter (#11759)
Xuan-Son Nguyen [Sat, 8 Feb 2025 20:54:50 +0000 (21:54 +0100)]
server : (webui) revamp Settings dialog, add Pyodide interpreter (#11759)

* redo Settings modal UI

* add python code interpreter

* fix auto scroll

* build

* fix overflow for long output lines

* bring back sticky copy button

* adapt layout on mobile view

* fix multiple lines output and color scheme

* handle python exception

* better state management

* add webworker

* add headers

* format code

* speed up by loading pyodide on page load

* (small tweak) add small animation to make it feel like claude

4 months agoserver : (webui) increase edit textarea size (#11763)
Woof Dog [Sat, 8 Feb 2025 19:09:55 +0000 (19:09 +0000)]
server : (webui) increase edit textarea size (#11763)

4 months agoserver : minor log updates (#11760)
Georgi Gerganov [Sat, 8 Feb 2025 16:08:43 +0000 (18:08 +0200)]
server : minor log updates (#11760)

ggml-ci

4 months agocont : fix mmap flag print (#11699)
Georgi Gerganov [Sat, 8 Feb 2025 14:49:38 +0000 (16:49 +0200)]
cont : fix mmap flag print (#11699)

4 months agoggml: Fix data race in ggml threadpool (#11736)
Karol Kontny [Sat, 8 Feb 2025 14:30:53 +0000 (15:30 +0100)]
ggml: Fix data race in ggml threadpool (#11736)

After the barrier in the last iteration is executed, the loop termination
condition is still evaluated. However, the main thread may already have
destroyed the cgraph object and its nodes, so another thread then accesses
memory that is already gone. Trouble can also happen when n_nodes == 0 or
abort is called, but I'm not sure if the prior situation is possible.

The last synchronization should be done after the loop to ensure the
cgraph/cplan won't be accessed after the main thread exits from the function.
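
A schematic of the synchronization pattern being described, with a toy graph and std::barrier standing in for the ggml threadpool internals (an illustration, not the actual code):
```cpp
#include <barrier>
#include <thread>

struct fake_graph { int n_nodes = 4; };  // stands in for the shared cgraph

int main() {
    fake_graph graph;
    std::barrier sync(2);  // main thread + one worker

    std::thread worker([&] {
        for (int i = 0; i < graph.n_nodes; ++i) {  // loop condition reads shared state
            // ... process node i ...
            sync.arrive_and_wait();                // per-iteration barrier
        }
        // Final synchronization *after* the loop: without it, the main thread
        // could tear down the graph while this thread still evaluates the condition.
        sync.arrive_and_wait();
    });

    for (int i = 0; i < graph.n_nodes; ++i) {
        sync.arrive_and_wait();
    }
    sync.arrive_and_wait();  // only free the graph / return after this point
    worker.join();
    return 0;
}
```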

4 months agoCUDA: fix min. version for movmatrix (#11751)
Johannes Gäßler [Sat, 8 Feb 2025 09:46:07 +0000 (10:46 +0100)]
CUDA: fix min. version for movmatrix (#11751)

4 months agoreadme : update front-end framework (#11753)
Nikolaos Pothitos [Sat, 8 Feb 2025 09:43:04 +0000 (11:43 +0200)]
readme : update front-end framework (#11753)

After the migration to React with #11688

4 months agoserver : (webui) fix numeric settings being saved as string (#11739)
Xuan-Son Nguyen [Sat, 8 Feb 2025 09:42:34 +0000 (10:42 +0100)]
server : (webui) fix numeric settings being saved as string (#11739)

* server : (webui) fix numeric settings being saved as string

* add some more comments

4 months agoMake logging more verbose (#11714)
Eric Curtin [Fri, 7 Feb 2025 14:42:46 +0000 (14:42 +0000)]
Make logging more verbose (#11714)

Debugged an issue with a user who was on a read-only filesystem.

Signed-off-by: Eric Curtin <redacted>
4 months agollama : fix defrag logic (#11707)
Georgi Gerganov [Fri, 7 Feb 2025 14:05:34 +0000 (16:05 +0200)]
llama : fix defrag logic (#11707)

* llama : fix defrag logic

ggml-ci

* cont : better logic

ggml-ci

* cont : clamp fragmentation to 0.0

ggml-ci

4 months agovocab : ignore invalid UTF-8 input in the BPE tokenizer (#11729)
Christian Fillion [Fri, 7 Feb 2025 13:55:47 +0000 (08:55 -0500)]
vocab : ignore invalid UTF-8 input in the BPE tokenizer (#11729)

Silently insert U+FFFD(s) (Unicode replacement character) instead until the
next valid codepoint can be found.

This fixes `llama_tokenize` throwing an exception across the C API boundary
or libllama's module boundary (the caller's runtime might be incompatible!)

Returning a proper error code might be desirable; however, the signature
of `llama_tokenize` doesn't allow it, as all return values already have an
existing meaning.
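
A sketch of the replacement strategy, not the actual tokenizer code (the validation here is deliberately simplified to sequence lengths and continuation bytes):
```cpp
#include <string>

static std::string sanitize_utf8(const std::string & in) {
    static const char replacement[] = "\xEF\xBF\xBD";  // UTF-8 encoding of U+FFFD
    std::string out;
    for (size_t i = 0; i < in.size(); ) {
        const unsigned char c = in[i];
        size_t len = 0;
        if      (c < 0x80)           len = 1;
        else if ((c & 0xE0) == 0xC0) len = 2;
        else if ((c & 0xF0) == 0xE0) len = 3;
        else if ((c & 0xF8) == 0xF0) len = 4;

        bool ok = len > 0 && i + len <= in.size();
        for (size_t j = 1; ok && j < len; ++j) {
            ok = (static_cast<unsigned char>(in[i + j]) & 0xC0) == 0x80;  // continuation byte
        }

        if (ok) {
            out.append(in, i, len);   // valid sequence, copy as-is
            i += len;
        } else {
            out.append(replacement);  // silently insert U+FFFD
            i += 1;                   // resynchronize at the next byte
        }
    }
    return out;
}
```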

4 months agollama : fix progress dots (#11730)
magicse [Fri, 7 Feb 2025 13:48:47 +0000 (15:48 +0200)]
llama : fix progress dots (#11730)

* Update llama.cpp

To display progress dots in the terminal.
Without this, it didn't display progress dots while loading the model from a file.

* Update llama.cpp

removed trailing spaces

4 months agovulkan: print shared memory size (#11719)
Jeff Bolz [Fri, 7 Feb 2025 10:26:03 +0000 (04:26 -0600)]
vulkan: print shared memory size (#11719)

4 months agollama : add llama_sampler_init for safe usage of llama_sampler_free (#11727)
Christian Fillion [Fri, 7 Feb 2025 09:33:27 +0000 (04:33 -0500)]
llama : add llama_sampler_init for safe usage of llama_sampler_free (#11727)

The C API in llama.h claims users can implement `llama_sampler_i` to
create custom `llama_sampler`. The sampler chain takes ownership and
calls `llama_sampler_free` on them. However, `llama_sampler_free` is
hard-coded to use `delete`. This is undefined behavior if the object
wasn't also allocated via `new` from libllama's C++ runtime. Callers
in C and C-compatible languages do not use C++'s `new` operator. C++
callers may not be sharing the same heap as libllama.
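
A generic illustration of the hazard; these types and function names are made up, not the llama.h declarations:
```cpp
// If a library frees objects with its own `delete`, callers must not hand it
// objects allocated by a different runtime (or malloc). An init function
// exported by the library keeps allocation and deallocation on the same heap.
#include <cstdlib>

struct sampler_iface {
    void (*apply)(void * ctx);
    void * ctx;
};

// Exported by the library: allocation happens inside the library's runtime...
sampler_iface * library_sampler_init(void (*apply)(void *), void * ctx) {
    return new sampler_iface{apply, ctx};
}

// ...so the library's free can safely use `delete` on it.
void library_sampler_free(sampler_iface * s) {
    delete s;
}

int main() {
    // Correct: object created by the library, destroyed by the library.
    sampler_iface * good = library_sampler_init([](void *) {}, nullptr);
    library_sampler_free(good);

    // Undefined behavior: allocated with malloc (or a foreign runtime's new),
    // but freed with the library's delete.
    // sampler_iface * bad = (sampler_iface *) std::malloc(sizeof(sampler_iface));
    // library_sampler_free(bad);
    return 0;
}
```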

4 months agoSYCL: remove XMX info from print devices (#11712)
Akarshan Biswas [Fri, 7 Feb 2025 09:27:53 +0000 (14:57 +0530)]
SYCL: remove XMX info from print devices (#11712)

4 months agocommon : add default embeddings presets (#11677)
Daniel Bevenius [Fri, 7 Feb 2025 08:15:22 +0000 (09:15 +0100)]
common : add default embeddings presets (#11677)

* common : add default embeddings presets

This commit adds default embeddings presets for the following models:
- bge-small-en-v1.5
- e5-small-v2
- gte-small

These can be used with llama-embedding and llama-server.

For example, with llama-embedding:
```console
./build/bin/llama-embedding --embd-gte-small-default -p "Hello, how are you?"
```

And with llama-server:
```console
./build/bin/llama-server --embd-gte-small-default
```
And the embeddings endpoint can then be called with a POST request:
```console
curl --request POST \
    --url http://localhost:8080/embeddings \
    --header "Content-Type: application/json" \
    --data '{"input": "Hello, how are you?"}'
```

I'm not sure if these are the most common embedding models but hopefully
this can be a good starting point for discussion and further
improvements.

Refs: https://github.com/ggerganov/llama.cpp/issues/10932

4 months agoggml : optimize and build warning fix for LoongArch (#11709)
Jinyang He [Fri, 7 Feb 2025 07:38:31 +0000 (15:38 +0800)]
ggml : optimize and build warning fix for LoongArch (#11709)

* ggml : optimize convert f32<->f16 for loongarch_asx

* ggml : optimize loongarch_asx extend i16,i8,u8 to i32,i16

* ggml : Fix warnings when running cpu CI locally on LoongArch

4 months agollama : fix old glm4 models (#11670)
tv1wnd [Thu, 6 Feb 2025 21:48:51 +0000 (22:48 +0100)]
llama : fix old glm4 models (#11670)

4 months agosync : ggml
Georgi Gerganov [Thu, 6 Feb 2025 19:23:03 +0000 (21:23 +0200)]
sync : ggml

4 months agorpc: fix known RCE in rpc-server (ggml/1103)
Patrick Peng [Thu, 6 Feb 2025 14:29:13 +0000 (09:29 -0500)]
rpc: fix known RCE in rpc-server (ggml/1103)

Add bounds checking in `rpc_server::copy_tensor` to prevent out-of-bounds writes:
check that `(uint8_t *)dst->data + ggml_nbytes(src)` remains within the destination buffer's allocated region.
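
A simplified sketch of that kind of bounds check; the struct and function names are stand-ins, not the actual rpc-server code, which works on ggml tensors and backend buffers:
```cpp
#include <cstdint>
#include <cstring>

struct buffer_view {
    uint8_t * base;  // start of the destination buffer's allocated region
    size_t    size;  // size of that region in bytes
};

// Returns false instead of writing out of bounds when src would not fit at dst_data.
static bool copy_checked(const buffer_view & dst_buf, uint8_t * dst_data,
                         const uint8_t * src_data, size_t src_nbytes) {
    const uint8_t * dst_end = dst_data + src_nbytes;
    if (dst_data < dst_buf.base || dst_end > dst_buf.base + dst_buf.size) {
        return false;  // destination region escapes the allocated buffer
    }
    std::memcpy(dst_data, src_data, src_nbytes);
    return true;
}
```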