git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
4 months ago Merge tag 'upstream/0.0.4719' into debian/latest
Mathieu Baudier [Sat, 15 Feb 2025 07:25:21 +0000 (08:25 +0100)]
Merge tag 'upstream/0.0.4719' into debian/latest

4 months ago llguidance build fixes for Windows (#11664) upstream/0.0.4719
Michał Moskal [Fri, 14 Feb 2025 20:46:08 +0000 (12:46 -0800)]
llguidance build fixes for Windows (#11664)

* setup windows linking for llguidance; thanks @phil-scott-78

* add build instructions for windows and update script link

* change VS Community link from DE to EN

* whitespace fix

4 months ago opencl: Fix rope and softmax (#11833)
lhez [Fri, 14 Feb 2025 19:12:23 +0000 (11:12 -0800)]
opencl: Fix rope and softmax (#11833)

* opencl: fix `ROPE`

* opencl: fix `SOFT_MAX`

* Add fp16 variant

* opencl: enforce subgroup size for `soft_max`

4 months ago cuda : add ampere to the list of default architectures (#11870)
Diego Devesa [Fri, 14 Feb 2025 14:33:52 +0000 (15:33 +0100)]
cuda : add ampere to the list of default architectures (#11870)

4 months ago docker : drop to CUDA 12.4 (#11869)
Georgi Gerganov [Fri, 14 Feb 2025 12:48:40 +0000 (14:48 +0200)]
docker : drop to CUDA 12.4 (#11869)

* docker : drop to CUDA 12.4

* docker : update readme [no ci]

4 months ago llama : add completion for --chat-template-file (#11860)
Daniel Bevenius [Fri, 14 Feb 2025 10:16:56 +0000 (11:16 +0100)]
llama : add completion for --chat-template-file (#11860)

This commit adds completion for `--chat-template-file`, enabling only
`.jinja` files to be displayed as completions.

Example usage:
```console
$ ./build/bin/llama-cli --chat-template-file models/templates/<TAB>
models/templates/CohereForAI-c4ai-command-r7b-12-2024-tool_use.jinja
models/templates/CohereForAI-c4ai-command-r-plus-tool_use.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
models/templates/fireworks-ai-llama-3-firefunction-v2.jinja
models/templates/google-gemma-2-2b-it.jinja
models/templates/llama-cpp-deepseek-r1.jinja
models/templates/meetkai-functionary-medium-v3.1.jinja
models/templates/meetkai-functionary-medium-v3.2.jinja
models/templates/meta-llama-Llama-3.1-8B-Instruct.jinja
models/templates/meta-llama-Llama-3.2-3B-Instruct.jinja
models/templates/meta-llama-Llama-3.3-70B-Instruct.jinja
models/templates/microsoft-Phi-3.5-mini-instruct.jinja
models/templates/mistralai-Mistral-Nemo-Instruct-2407.jinja
models/templates/NousResearch-Hermes-2-Pro-Llama-3-8B-tool_use.jinja
models/templates/NousResearch-Hermes-3-Llama-3.1-8B-tool_use.jinja
models/templates/Qwen-Qwen2.5-7B-Instruct.jinja
```
This is not limited to the models/templates directory; it can be used
anywhere in the filesystem. The above is just an example.

4 months ago ggml: optimize some vec dot functions for LoongArch ASX (#11842)
Jinyang He [Fri, 14 Feb 2025 08:54:27 +0000 (16:54 +0800)]
ggml: optimize some vec dot functions for LoongArch ASX (#11842)

* Optimize ggml_vec_dot_q3_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q4_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q6_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q5_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q2_K_q8_K for LoongArch ASX

* Optimize mul_sum_i8_pairs_float for LoongArch ASX

* Optimize ggml_vec_dot_iq4_xs_q8_K for LoongArch ASX

4 months ago vulkan: linux builds + small subgroup size fixes (#11767)
Eve [Fri, 14 Feb 2025 02:59:40 +0000 (02:59 +0000)]
vulkan: linux builds + small subgroup size fixes (#11767)

* mm subgroup size

* upload vulkan x86 builds

4 months ago llama-bench : fix unexpected global variable initialize sequence issue (#11832)
theraininsky [Fri, 14 Feb 2025 01:13:43 +0000 (09:13 +0800)]
llama-bench : fix unexpected global variable initialize sequence issue (#11832)

* llama-bench : fix unexpected global variable initialize sequence issue

* Update examples/llama-bench/llama-bench.cpp

---------

Co-authored-by: Diego Devesa <redacted>
4 months ago readme : minor
Georgi Gerganov [Thu, 13 Feb 2025 22:16:56 +0000 (00:16 +0200)]
readme : minor

4 months ago llamafile: use member variable instead of constant for iq4nlt (#11780)
Jeffrey Morgan [Thu, 13 Feb 2025 17:05:04 +0000 (09:05 -0800)]
llamafile: use member variable instead of constant for iq4nlt (#11780)

4 months ago server : (docs) Update wrong tool calling example (#11809)
Reza Rahemtola [Thu, 13 Feb 2025 16:22:44 +0000 (17:22 +0100)]
server : (docs) Update wrong tool calling example (#11809)

Call updated to match the tool used in the output just below, following the example in https://github.com/ggerganov/llama.cpp/pull/9639

4 months ago llama : add --completion-bash option (#11846)
Daniel Bevenius [Thu, 13 Feb 2025 13:46:59 +0000 (14:46 +0100)]
llama : add --completion-bash option (#11846)

This commit adds a new option `--completion-bash` to llama.cpp which
outputs a source-able bash completion script.

The motivation for this change is to provide a more user-friendly
experience for users who use the command-line interface of llama.cpp.

This is currently basic: all options are displayed for all llama
executables, but it can be improved in the future if needed.

Example usage:
```console
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash

$ ./build/bin/llama-server --m<TAB>
--main-gpu         --mirostat         --mirostat-lr      --model            --multiline-input
--min-p            --mirostat-ent     --mlock            --model-url
```

4 months ago musa: bump MUSA SDK version to rc3.1.1 (#11822)
R0CKSTAR [Thu, 13 Feb 2025 12:28:18 +0000 (20:28 +0800)]
musa: bump MUSA SDK version to rc3.1.1 (#11822)

* musa: Update MUSA SDK version to rc3.1.1

Signed-off-by: Xiaodong Ye <redacted>
* musa: Remove workaround in PR #10042

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
4 months ago `server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB...
Olivier Chafik [Thu, 13 Feb 2025 10:05:16 +0000 (10:05 +0000)]
`server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB & DeepSeek R1) unless `--reasoning-format none` (#11607)

* extract & return thoughts in reasoning_content field (unless --reasoning-format) for DeepSeek R1 & Command R7B

* tool-calls: add deepseek r1 template (models/templates/llama-cpp-deepseek-r1.jinja) + hackommodate broken official template

* tool-calls: accommodate variety of wrong tool call opening tags both R1 Qwen 32B and 7B distills like to spit out

* server/oai: ensure content is null when there are tool calls, and reasoning_content appears before content for readability

* tool-calls: add DeepSeek R1 Qwen distills to server/README.md & server tests

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 months ago sampling: add Top-nσ sampler (#11223)
Vinesh Janarthanan [Thu, 13 Feb 2025 06:45:57 +0000 (00:45 -0600)]
sampling: add Top-nσ sampler (#11223)

* initial sampling changes:

* completed top nsigma sampler implementation

* apply parameter to only llama-cli

* updated readme

* added tests and fixed nsigma impl

* cleaned up pr

* format

* format

* format

* removed commented tests

* cleanup pr and remove explicit floats

* added top-k sampler to improve performance

* changed sigma to float

* fixed string format to float

* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update common/sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* added llama_sampler_init

---------

Co-authored-by: Georgi Gerganov <redacted>
4 months ago llama.cpp: fix warning message (#11839)
Oleksandr Kuvshynov [Thu, 13 Feb 2025 06:25:34 +0000 (01:25 -0500)]
llama.cpp: fix warning message (#11839)

There was a typo-like error which would print the same number twice if a
request was received with n_predict > server-side config.

Before the fix:
```
slot launch_slot_: id  0 | task 0 | n_predict = 4096 exceeds server configuration, setting to 4096
```

After the fix:
```
slot launch_slot_: id  0 | task 0 | n_predict = 8192 exceeds server configuration, setting to 4096
```

4 months ago llama : update llama_decode_internal ref [no ci] (#11840)
Daniel Bevenius [Thu, 13 Feb 2025 06:07:51 +0000 (07:07 +0100)]
llama : update llama_decode_internal ref [no ci] (#11840)

This commit updates the comment in llama_kv_cache.h to reflect the
change of the function name from llama_decode_internal to
llama_decode_impl.

4 months ago ggml-cpu : add chunking support to mul_mat_id (#11666)
Diego Devesa [Thu, 13 Feb 2025 00:02:38 +0000 (01:02 +0100)]
ggml-cpu : add chunking support to mul_mat_id (#11666)

* ggml-cpu : add chunking support to mul_mat_id

* allocate chunk counter in wdata
parallelize src1 quantization by column to allow parallelization even when there is only one row

* disable for arm

* cleanup

* better way to disable for arm

* fix uninitialized counter when using 1 thread only

* revert test-backend-ops changes

4 months ago ggml : x2 speed for WASM by optimizing SIMD (#11453)
Xuan-Son Nguyen [Wed, 12 Feb 2025 23:33:45 +0000 (00:33 +0100)]
ggml : x2 speed for WASM by optimizing SIMD (#11453)

* ggml : x2 speed for WASM by optimizing SIMD

* fix bad merging

* rm trailing spaces

* rm redundant clamp

* better quantize_row_q8_K

Co-authored-by: camel-cdr <redacted>
* remove memset that causes buffer overflow
Co-authored-by: camel-cdr <redacted>
---------

Co-authored-by: camel-cdr <redacted>
4 months ago server : (webui) Give copy button back to all message bubbles (#11814)
Woof Dog [Wed, 12 Feb 2025 22:47:11 +0000 (22:47 +0000)]
server : (webui) Give copy button back to all message bubbles (#11814)

* All messages get the copy button

* Update index.html.gz

4 months ago HIP: Remove GCN from list of devices that avoid MMQ (#11831)
uvos [Wed, 12 Feb 2025 21:25:28 +0000 (22:25 +0100)]
HIP: Remove GCN from list of devices that avoid MMQ (#11831)

4 months ago Fix: Compile failure due to Microsoft STL breaking change (#11836)
JC [Wed, 12 Feb 2025 20:36:11 +0000 (20:36 +0000)]
Fix: Compile failure due to Microsoft STL breaking change (#11836)

4 months ago sync : ggml
Georgi Gerganov [Wed, 12 Feb 2025 19:46:02 +0000 (21:46 +0200)]
sync : ggml

4 months ago HIP: Switch to std::vector in rocblas version check (#11820)
uvos [Wed, 12 Feb 2025 16:25:03 +0000 (17:25 +0100)]
HIP: Switch to std::vector in rocblas version check (#11820)

4 months ago cleanup: fix compile warnings associated with gnu_printf (#11811)
bandoti [Wed, 12 Feb 2025 14:06:53 +0000 (10:06 -0400)]
cleanup: fix compile warnings associated with gnu_printf (#11811)

4 months ago ggml : fix multi-threaded clamp_f32 (#11824)
Richard [Wed, 12 Feb 2025 13:57:33 +0000 (13:57 +0000)]
ggml : fix multi-threaded clamp_f32 (#11824)

* Bug fix for clamp_f32

When using tensors larger than 1d, the clamp operation did not work because the function returned early whenever ith was not 0.

* Bug fix for clamp_f32

* Bug fix for clamp_f32

4 months ago ggml-cpu: Fix duplicate MATMUL_INT8 (#11817)
Weizhao Ouyang [Wed, 12 Feb 2025 12:22:58 +0000 (20:22 +0800)]
ggml-cpu: Fix duplicate MATMUL_INT8 (#11817)

Signed-off-by: Weizhao Ouyang <redacted>
4 months ago CUDA: fix CUDART_VERSION checks (#11821)
Johannes Gäßler [Wed, 12 Feb 2025 12:16:39 +0000 (13:16 +0100)]
CUDA: fix CUDART_VERSION checks (#11821)

4 months ago llama : fix typo in llama-grammar.h [no ci] (#11816)
Daniel Bevenius [Wed, 12 Feb 2025 07:40:01 +0000 (08:40 +0100)]
llama : fix typo in llama-grammar.h [no ci] (#11816)

4 months ago docs: add OpenCL (#11697)
lhez [Tue, 11 Feb 2025 22:04:13 +0000 (14:04 -0800)]
docs: add OpenCL (#11697)

4 months ago Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (#11803)
Sheldon Robinson [Tue, 11 Feb 2025 15:55:45 +0000 (10:55 -0500)]
Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (#11803)

* Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx

* Fix #11802: PR #11803 - keep RegQueryValueExA, remove TEXT macro, description needs to be ANSI string

4 months ago server : use common_token_to_piece instead of common_detokenize (#11740)
Daniel Bevenius [Tue, 11 Feb 2025 13:06:45 +0000 (14:06 +0100)]
server : use common_token_to_piece instead of common_detokenize (#11740)

* server : use common_token_to_piece instead of common_detokenize

This commit replaces the call to common_detokenize with
common_token_to_piece in the populate_token_probs.

The motivation for this change is to avoid an issue where
common_detokenize would remove the word boundary character for tokens,
which caused a regression in the server generated token probabilities.

Resolves: https://github.com/ggerganov/llama.cpp/issues/11728

* squash! server : use common_token_to_piece instead of common_detokenize

Use common_token_to_piece for post_sampling_probs as well.

4 months ago CUDA: use arch list for compatibility check (#11775)
Johannes Gäßler [Mon, 10 Feb 2025 23:17:22 +0000 (00:17 +0100)]
CUDA: use arch list for compatibility check (#11775)

* CUDA: use arch list for feature availability check

---------

Co-authored-by: Diego Devesa <redacted>
4 months ago fix: typos in documentation files (#11791)
Maxim Evtush [Mon, 10 Feb 2025 22:21:31 +0000 (23:21 +0100)]
fix: typos in documentation files (#11791)

* Update ggml.c

* Update arg.cpp

* Update speculative.h

4 months ago docs: utilize the forward slash (/) as the path separator for Unix-like systems ...
jason_w [Mon, 10 Feb 2025 22:17:48 +0000 (06:17 +0800)]
docs: utilize the forward slash (/) as the path separator for Unix-like systems (#11770)

4 months ago server : (webui) introduce conversation branching + idb storage (#11792)
Xuan-Son Nguyen [Mon, 10 Feb 2025 20:23:17 +0000 (21:23 +0100)]
server : (webui) introduce conversation branching + idb storage (#11792)

* server : (webui) introduce conversation branching + idb storage

* mark old conv as "migrated" instead of deleting them

* improve migration

* add more comments

* more clarification

4 months ago llama-mmap: fix missing include (#11796)
Wilken Gottwalt [Mon, 10 Feb 2025 18:58:18 +0000 (19:58 +0100)]
llama-mmap: fix missing include (#11796)

Technically the fixed-width types come only from the iostream and
cstdint/stdint.h headers; the memory and vector headers should not provide
them. In GCC 15 the headers are cleaned up, so the proper cstdint header
is required.

src/llama-mmap.h:26:5: error: ‘uint32_t’ does not name a type
   26 |     uint32_t read_u32() const;
      |     ^~~~~~~~

4 months ago server : correct signal handler (#11795)
Xuan-Son Nguyen [Mon, 10 Feb 2025 17:03:28 +0000 (18:03 +0100)]
server : correct signal handler (#11795)

4 months ago sync: minja (https://github.com/google/minja/commit/a72057e5190de2c612d4598bb10b4bfd0...
Olivier Chafik [Mon, 10 Feb 2025 09:34:09 +0000 (09:34 +0000)]
sync: minja (https://github.com/google/minja/commit/a72057e5190de2c612d4598bb10b4bfd0f53011f) (#11774)

4 months ago Update README.md [no ci] (#11781)
pascal-lc [Mon, 10 Feb 2025 08:05:57 +0000 (16:05 +0800)]
Update README.md [no ci] (#11781)

typo: `\` -> `/`
Change the path separator to the Unix-style `/`.

4 months ago vulkan: Make Vulkan optional at runtime (#11493). (#11494)
Danny Milosavljevic [Mon, 10 Feb 2025 06:17:21 +0000 (07:17 +0100)]
vulkan: Make Vulkan optional at runtime (#11493). (#11494)

Co-authored-by: Jeff Bolz <redacted>
4 months ago vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation...
Wagner Bruna [Mon, 10 Feb 2025 06:08:22 +0000 (03:08 -0300)]
vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592)

4 months ago There's a better way of clearing lines (#11756)
Eric Curtin [Sun, 9 Feb 2025 10:34:49 +0000 (10:34 +0000)]
There's a better way of clearing lines (#11756)

Use the ANSI escape code for clearing a line.

Signed-off-by: Eric Curtin <redacted>
4 months ago vulkan: account for lookup tables when checking shared memory size (#11502)
Jeff Bolz [Sun, 9 Feb 2025 07:43:51 +0000 (01:43 -0600)]
vulkan: account for lookup tables when checking shared memory size (#11502)

4 months ago server : (webui) revamp Settings dialog, add Pyodide interpreter (#11759)
Xuan-Son Nguyen [Sat, 8 Feb 2025 20:54:50 +0000 (21:54 +0100)]
server : (webui) revamp Settings dialog, add Pyodide interpreter (#11759)

* redo Settings modal UI

* add python code interpreter

* fix auto scroll

* build

* fix overflow for long output lines

* bring back sticky copy button

* adapt layout on mobile view

* fix multiple lines output and color scheme

* handle python exception

* better state management

* add webworker

* add headers

* format code

* speed up by loading pyodide on page load

* (small tweak) add small animation to make it feel like Claude

4 months ago server : (webui) increase edit textarea size (#11763)
Woof Dog [Sat, 8 Feb 2025 19:09:55 +0000 (19:09 +0000)]
server : (webui) increase edit textarea size (#11763)

4 months ago server : minor log updates (#11760)
Georgi Gerganov [Sat, 8 Feb 2025 16:08:43 +0000 (18:08 +0200)]
server : minor log updates (#11760)

ggml-ci

4 months ago cont : fix mmap flag print (#11699)
Georgi Gerganov [Sat, 8 Feb 2025 14:49:38 +0000 (16:49 +0200)]
cont : fix mmap flag print (#11699)

4 months ago ggml: Fix data race in ggml threadpool (#11736)
Karol Kontny [Sat, 8 Feb 2025 14:30:53 +0000 (15:30 +0100)]
ggml: Fix data race in ggml threadpool (#11736)

After the barrier in the last iteration is executed, the loop termination
condition is still evaluated. By that time the main thread may have already
destroyed the cgraph object and its nodes, so another thread will access memory
that is already gone. Trouble can also happen when n_nodes == 0 or abort is
called, though I'm not sure whether the former situation is possible.

The last synchronization should therefore be done after the loop, to ensure the
cgraph/cplan won't be accessed after the main thread exits from the function.

4 months ago CUDA: fix min. version for movmatrix (#11751)
Johannes Gäßler [Sat, 8 Feb 2025 09:46:07 +0000 (10:46 +0100)]
CUDA: fix min. version for movmatrix (#11751)

4 months ago readme : update front-end framework (#11753)
Nikolaos Pothitos [Sat, 8 Feb 2025 09:43:04 +0000 (11:43 +0200)]
readme : update front-end framework (#11753)

After the migration to React with #11688

4 months ago server : (webui) fix numeric settings being saved as string (#11739)
Xuan-Son Nguyen [Sat, 8 Feb 2025 09:42:34 +0000 (10:42 +0100)]
server : (webui) fix numeric settings being saved as string (#11739)

* server : (webui) fix numeric settings being saved as string

* add some more comments

4 months ago Make logging more verbose (#11714)
Eric Curtin [Fri, 7 Feb 2025 14:42:46 +0000 (14:42 +0000)]
Make logging more verbose (#11714)

Debugged an issue with a user who was on a read-only filesystem.

Signed-off-by: Eric Curtin <redacted>
4 months ago llama : fix defrag logic (#11707)
Georgi Gerganov [Fri, 7 Feb 2025 14:05:34 +0000 (16:05 +0200)]
llama : fix defrag logic (#11707)

* llama : fix defrag logic

ggml-ci

* cont : better logic

ggml-ci

* cont : clamp fragmentation to 0.0

ggml-ci

4 months ago vocab : ignore invalid UTF-8 input in the BPE tokenizer (#11729)
Christian Fillion [Fri, 7 Feb 2025 13:55:47 +0000 (08:55 -0500)]
vocab : ignore invalid UTF-8 input in the BPE tokenizer (#11729)

Silently insert U+FFFD(s) (the Unicode replacement character) instead, until
the next valid codepoint can be found.

This fixes `llama_tokenize` throwing an exception across the C API boundary
or libllama's module boundary (the caller's runtime might be incompatible!)

Returning a proper error code might be desirable; however, the signature
of `llama_tokenize` doesn't allow it, as all return values already have an
existing meaning.

4 months ago llama : fix progress dots (#11730)
magicse [Fri, 7 Feb 2025 13:48:47 +0000 (15:48 +0200)]
llama : fix progress dots (#11730)

* Update llama.cpp

Display progress dots in the terminal. Without this, progress dots were not
shown while loading a model from file.

* Update llama.cpp

removed trailing spaces

4 months ago vulkan: print shared memory size (#11719)
Jeff Bolz [Fri, 7 Feb 2025 10:26:03 +0000 (04:26 -0600)]
vulkan: print shared memory size (#11719)

4 months ago llama : add llama_sampler_init for safe usage of llama_sampler_free (#11727)
Christian Fillion [Fri, 7 Feb 2025 09:33:27 +0000 (04:33 -0500)]
llama : add llama_sampler_init for safe usage of llama_sampler_free (#11727)

The C API in llama.h claims users can implement `llama_sampler_i` to
create custom `llama_sampler`. The sampler chain takes ownership and
calls `llama_sampler_free` on them. However, `llama_sampler_free` is
hard-coded to use `delete`. This is undefined behavior if the object
wasn't also allocated via `new` from libllama's C++ runtime. Callers
in C and C-compatible languages do not use C++'s `new` operator. C++
callers may not be sharing the same heap as libllama.

4 months ago SYCL: remove XMX info from print devices (#11712)
Akarshan Biswas [Fri, 7 Feb 2025 09:27:53 +0000 (14:57 +0530)]
SYCL: remove XMX info from print devices (#11712)

4 months ago common : add default embeddings presets (#11677)
Daniel Bevenius [Fri, 7 Feb 2025 08:15:22 +0000 (09:15 +0100)]
common : add default embeddings presets (#11677)

* common : add default embeddings presets

This commit adds default embeddings presets for the following models:
- bge-small-en-v1.5
- e5-small-v2
- gte-small

These can be used with llama-embedding and llama-server.

For example, with llama-embedding:
```console
./build/bin/llama-embedding --embd-gte-small-default -p "Hello, how are you?"
```

And with llama-server:
```console
./build/bin/llama-server --embd-gte-small-default
```
And the embeddings endpoint can then be called with a POST request:
```console
curl --request POST \
    --url http://localhost:8080/embeddings \
    --header "Content-Type: application/json" \
    --data '{"input": "Hello, how are you?"}'
```

I'm not sure if these are the most common embedding models but hopefully
this can be a good starting point for discussion and further
improvements.

Refs: https://github.com/ggerganov/llama.cpp/issues/10932

4 months ago ggml : optimize and build warning fix for LoongArch (#11709)
Jinyang He [Fri, 7 Feb 2025 07:38:31 +0000 (15:38 +0800)]
ggml : optimize and build warning fix for LoongArch (#11709)

* ggml : optimize convert f32<->f16 for loongarch_asx

* ggml : optimize loongarch_asx extend i16,i8,u8 to i32,i16

* ggml : Fix warnings when run cpu CI locally on LoongArch

4 months ago llama : fix old glm4 models (#11670)
tv1wnd [Thu, 6 Feb 2025 21:48:51 +0000 (22:48 +0100)]
llama : fix old glm4 models (#11670)

4 months ago sync : ggml
Georgi Gerganov [Thu, 6 Feb 2025 19:23:03 +0000 (21:23 +0200)]
sync : ggml

4 months ago rpc: fix known RCE in rpc-server (ggml/1103)
Patrick Peng [Thu, 6 Feb 2025 14:29:13 +0000 (09:29 -0500)]
rpc: fix known RCE in rpc-server (ggml/1103)

Add bounds checking in `rpc_server::copy_tensor` to prevent out-of-bounds writes:
check that `(uint8_t *)dst->data + ggml_nbytes(src)` remains within the destination buffer's allocated region.

4 months ago server : (webui) migrate project to ReactJS with typescript (#11688)
Xuan-Son Nguyen [Thu, 6 Feb 2025 16:32:29 +0000 (17:32 +0100)]
server : (webui) migrate project to ReactJS with typescript (#11688)

* init version

* fix auto scroll

* bring back copy btn

* bring back thought process

* add lint and format check on CI

* remove lang from html tag

* allow multiple generations at the same time

* lint and format combined

* fix unused var

* improve MarkdownDisplay

* fix more latex

* fix code block cannot be selected while generating

4 months ago docs: update fedora cuda guide for 12.8 release (#11393)
Tei Home [Thu, 6 Feb 2025 12:16:15 +0000 (20:16 +0800)]
docs: update fedora cuda guide for 12.8 release (#11393)

* docs: update fedora cuda guide for 12.8 release

* docs: build cuda update

4 months ago SYCL: Adjust support condition for norm operators (#11674)
Akarshan Biswas [Thu, 6 Feb 2025 11:42:35 +0000 (17:12 +0530)]
SYCL: Adjust support condition for norm operators (#11674)

SYCL does not support non-contiguous tensors for norm operations.

4 months ago llama : add log about loading model tensors (#11699)
Georgi Gerganov [Thu, 6 Feb 2025 11:41:37 +0000 (13:41 +0200)]
llama : add log about loading model tensors (#11699)

4 months ago build : fix llama.pc (#11658)
Adrien Gallouët [Thu, 6 Feb 2025 11:08:13 +0000 (12:08 +0100)]
build : fix llama.pc (#11658)

Signed-off-by: Adrien Gallouët <redacted>
4 months ago ggml : fix LoongArch compile error with 128-bit SIMD (#11701)
junchao-zhao [Thu, 6 Feb 2025 09:20:00 +0000 (17:20 +0800)]
ggml : fix LoongArch compile error with 128-bit SIMD (#11701)

4 months ago vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)
Jeff Bolz [Thu, 6 Feb 2025 06:15:30 +0000 (00:15 -0600)]
vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)

* vulkan: optimize coopmat2 iq2/iq3 callbacks

* build: trigger CI on GLSL compute shader changes

4 months ago vulkan: initial support for IQ4_XS quantization (#11501)
Rémy O [Thu, 6 Feb 2025 06:09:59 +0000 (07:09 +0100)]
vulkan: initial support for IQ4_XS quantization (#11501)

4 months ago vulkan: use smaller combined allocations to avoid fragmentation (#11551)
Jeff Bolz [Thu, 6 Feb 2025 06:02:18 +0000 (00:02 -0600)]
vulkan: use smaller combined allocations to avoid fragmentation (#11551)

4 months ago metal : avoid breaking build when metal API predates TARGET_OS_VISION (#11690)
Charles Duffy [Thu, 6 Feb 2025 01:52:31 +0000 (19:52 -0600)]
metal : avoid breaking build when metal API predates TARGET_OS_VISION (#11690)

Avoids breakage in nix flake build introduced by b0569130c5e9c671152c913d82803b7c2f014ff9

4 months ago readme : add link to Autopen under UIs (#11684)
Matvey Soloviev [Thu, 6 Feb 2025 00:55:25 +0000 (01:55 +0100)]
readme : add link to Autopen under UIs (#11684)

Autopen (https://github.com/blackhole89/autopen) is a graphical text editor that uses llama.cpp to tokenize the buffer on the fly, score the buffer, visualise token logits and allow you to switch back and forth between different possible completions at any point. It hopefully meets the criteria for inclusion, as the dependency on llama.cpp is stated prominently.

4 months ago Merge tag 'upstream/0.0.4631' into debian/latest
Mathieu Baudier [Wed, 5 Feb 2025 14:58:10 +0000 (15:58 +0100)]
Merge tag 'upstream/0.0.4631' into debian/latest

4 months ago metal : adjust support conditions for norm operators (#11671)
Georgi Gerganov [Wed, 5 Feb 2025 08:57:42 +0000 (10:57 +0200)]
metal : adjust support conditions for norm operators (#11671)

cont #11659

ggml-ci

4 months ago CUDA: support for mat. mul. with ne03 != ne13 (#11656)
Johannes Gäßler [Wed, 5 Feb 2025 07:58:31 +0000 (08:58 +0100)]
CUDA: support for mat. mul. with ne03 != ne13 (#11656)

4 months ago llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644)
SAMI [Wed, 5 Feb 2025 07:45:40 +0000 (14:45 +0700)]
llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644)

* Added quantization for visual projector
* Added README
* Fixed the clip quantize implementation in the file

* Fixed the gcc warning regarding minor linting

* Removed trailing whitespace

4 months ago `sync`: minja (#11641)
Olivier Chafik [Wed, 5 Feb 2025 01:00:12 +0000 (01:00 +0000)]
`sync`: minja (#11641)

* `sync`: minja

https://github.com/google/minja/commit/182de30cdaee78ba86179122f8047b3bdbab7f7f

https://github.com/google/minja/pull/46

https://github.com/google/minja/pull/45

4 months ago CUDA: non-contiguous (RMS) norm support (#11659)
Johannes Gäßler [Tue, 4 Feb 2025 21:21:42 +0000 (22:21 +0100)]
CUDA: non-contiguous (RMS) norm support (#11659)

* CUDA: non-contiguous (RMS) norm support

---------

Co-authored-by: Georgi Gerganov <redacted>
4 months ago HIP: force max threads per block to be 1024 (#11621)
fxzjshm [Tue, 4 Feb 2025 18:18:38 +0000 (02:18 +0800)]
HIP: force max threads per block to be 1024 (#11621)

Some old or vendor-forked versions of LLVM still use 256. Explicitly set it to 1024 to align with upstream LLVM.

Signed-off-by: fxzjshm <redacted>
4 months ago server : add try..catch to places not covered by set_exception_handler (#11620)
Xuan-Son Nguyen [Tue, 4 Feb 2025 17:25:42 +0000 (18:25 +0100)]
server : add try..catch to places not covered by set_exception_handler (#11620)

* server : add try..catch to places not covered by set_exception_handler

* log_server_request: rm try catch, add reminder

4 months ago arg : list RPC devices first when using --list-devices (#11655)
Radoslav Gerganov [Tue, 4 Feb 2025 16:16:20 +0000 (18:16 +0200)]
arg : list RPC devices first when using --list-devices (#11655)

List devices in the same order as they appear when evaluating the model
and splitting tensors across devices, i.e. RPC devices come first in the
list.

ref #11435

4 months ago `tool-call`: command r7b fix for normal responses (#11608)
Olivier Chafik [Tue, 4 Feb 2025 15:48:53 +0000 (15:48 +0000)]
`tool-call`: command r7b fix for normal responses (#11608)

* fix command r7b normal response regex + add to server test

* test multiline non-tool-call responses in test-chat

4 months ago readme : add llm_client Rust crate to readme bindings (#11628)
Shelby Jenkins [Tue, 4 Feb 2025 11:20:55 +0000 (05:20 -0600)]
readme : add llm_client Rust crate to readme bindings (#11628)

[This crate](https://github.com/ShelbyJenkins/llm_client) has been in a usable state for quite a while, so I figured it is now fair to add it.

It installs from crates.io, and automatically downloads the llama.cpp repo and builds it for the target platform - with the goal being the easiest user experience possible.

It also integrates model presets and chooses the largest quant given the target's available VRAM. So a user just has to specify one of the presets (I manually add the most popular models), and it will download from Hugging Face.

So, it's like a Rust Ollama, but it's not really for chatting. It makes heavy use of llama.cpp's grammar system to do structured output for decision making and control flow tasks.

4 months ago swift : fix llama-vocab api usage (#11645)
Jhen-Jie Hong [Tue, 4 Feb 2025 11:15:24 +0000 (19:15 +0800)]
swift : fix llama-vocab api usage (#11645)

* swiftui : fix vocab api usage

* batched.swift : fix vocab api usage

4 months ago metal : use residency set for other platforms (#11648)
Jhen-Jie Hong [Tue, 4 Feb 2025 11:07:18 +0000 (19:07 +0800)]
metal : use residency set for other platforms (#11648)

4 months ago authors : update
Georgi Gerganov [Tue, 4 Feb 2025 11:04:10 +0000 (13:04 +0200)]
authors : update

4 months ago sync : ggml upstream/0.0.4631
Georgi Gerganov [Tue, 4 Feb 2025 10:59:21 +0000 (12:59 +0200)]
sync : ggml

4 months ago cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)
Christian Kastner [Mon, 3 Feb 2025 23:17:15 +0000 (00:17 +0100)]
cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)

This makes git an optional dependency, which is useful when ggml is built
not from git but from a tarball or a distribution source package.

This conditional also affects GGML_BUILD_COMMIT. Nothing seems to be
using it, though, so there doesn't seem to be much value in factoring it
out, or even requiring it.

4 months ago ci : do not stale-close roadmap issues
Georgi Gerganov [Tue, 4 Feb 2025 07:30:42 +0000 (09:30 +0200)]
ci : do not stale-close roadmap issues

4 months ago `tool-call`: allow `--chat-template chatml` w/ `--jinja`, default to chatml upon...
Olivier Chafik [Mon, 3 Feb 2025 23:49:27 +0000 (23:49 +0000)]
`tool-call`: allow `--chat-template chatml` w/ `--jinja`, default to chatml upon parsing issue, avoid double bos (#11616)

* tool-call: allow `--jinja --chat-template chatml`

* fix double bos issue (drop bos/eos tokens from jinja template)

* add missing try catch around jinja parsing to default to chatml

* Simplify default chatml logic

4 months ago server : (webui) revert hacky solution from #11626 (#11634)
Xuan-Son Nguyen [Mon, 3 Feb 2025 23:10:52 +0000 (00:10 +0100)]
server : (webui) revert hacky solution from #11626 (#11634)

4 months ago server : (webui) allow typing and submitting during llm response (#11626)
Woof Dog [Mon, 3 Feb 2025 22:16:27 +0000 (22:16 +0000)]
server : (webui) allow typing and submitting during llm response (#11626)

4 months ago server : remove CPPHTTPLIB_NO_EXCEPTIONS define (#11622)
Daniel Bevenius [Mon, 3 Feb 2025 15:45:38 +0000 (16:45 +0100)]
server : remove CPPHTTPLIB_NO_EXCEPTIONS define (#11622)

This commit removes the CPPHTTPLIB_NO_EXCEPTIONS define from the server
code.

The motivation for this is that when using a debug build the server would
crash when an exception was thrown, terminating the server process because
the exception was unhandled. When CPPHTTPLIB_NO_EXCEPTIONS is set,
cpp-httplib will not call the exception handler, which would normally
return a 500 error to the client. This caused tests to fail when using
a debug build.

Fixes: https://github.com/ggerganov/llama.cpp/issues/11613
4 months ago sync : ggml
Georgi Gerganov [Mon, 3 Feb 2025 12:57:08 +0000 (14:57 +0200)]
sync : ggml

4 months ago CUDA: fix Volta FlashAttention logic (#11615)
Johannes Gäßler [Mon, 3 Feb 2025 12:25:56 +0000 (13:25 +0100)]
CUDA: fix Volta FlashAttention logic (#11615)

4 months ago server : (webui) Fix Shift+Enter handling (#11609)
mashdragon [Mon, 3 Feb 2025 09:42:55 +0000 (09:42 +0000)]
server : (webui) Fix Shift+Enter handling (#11609)

* Fix Shift+Enter handling

`exact` on the Enter handler means the message is not sent when Shift+Enter is pressed anyway

* build index.html.gz

---------

Co-authored-by: Xuan Son Nguyen <redacted>