git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
4 months agoggml: Fix data race in ggml threadpool (#11736)
Karol Kontny [Sat, 8 Feb 2025 14:30:53 +0000 (15:30 +0100)]
ggml: Fix data race in ggml threadpool (#11736)

After the barrier in the last iteration is executed, the loop termination
condition is still evaluated. By that point the main thread may have already
destroyed the cgraph object and its nodes, so another thread can still access them
after they are gone. Trouble can also happen when n_nodes == 0 or abort is called,
but I'm not sure whether the prior situation is possible.

The last synchronization should be done after the loop to ensure the cgraph/cplan won't be
accessed after the main thread exits from the function.
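
Not the actual ggml threadpool code — a minimal, self-contained C++20 sketch of the
pattern the fix describes, with illustrative names: the final barrier sits after the
loop, so no worker dereferences the shared graph once the owning thread may free it.

```cpp
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

struct graph { int n_nodes = 8; };          // stand-in for cgraph/cplan

int main() {
    const int n_threads = 4;
    graph * g = new graph();                // shared state owned by the main thread
    std::barrier sync(n_threads);

    auto worker = [&](int tid) {
        int node = tid;
        while (node < g->n_nodes) {         // reads g while it is still valid
            // ... compute node ...
            node += n_threads;
            sync.arrive_and_wait();         // per-iteration barrier
        }
        sync.arrive_and_wait();             // final barrier AFTER the loop:
                                            // nobody dereferences g past this point
    };

    std::vector<std::thread> threads;
    for (int i = 1; i < n_threads; ++i) threads.emplace_back(worker, i);
    worker(0);                              // main thread participates as a worker
    delete g;                               // safe: all workers are past their last read of g
    for (auto & t : threads) t.join();
    std::printf("done\n");
}
```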

4 months agoCUDA: fix min. version for movmatrix (#11751)
Johannes Gäßler [Sat, 8 Feb 2025 09:46:07 +0000 (10:46 +0100)]
CUDA: fix min. version for movmatrix (#11751)

4 months agoreadme : update front-end framework (#11753)
Nikolaos Pothitos [Sat, 8 Feb 2025 09:43:04 +0000 (11:43 +0200)]
readme : update front-end framework (#11753)

After the migration to React with #11688

4 months agoserver : (webui) fix numeric settings being saved as string (#11739)
Xuan-Son Nguyen [Sat, 8 Feb 2025 09:42:34 +0000 (10:42 +0100)]
server : (webui) fix numeric settings being saved as string (#11739)

* server : (webui) fix numeric settings being saved as string

* add some more comments

4 months agoMake logging more verbose (#11714)
Eric Curtin [Fri, 7 Feb 2025 14:42:46 +0000 (14:42 +0000)]
Make logging more verbose (#11714)

Debugged an issue with a user who was on a read-only filesystem.

Signed-off-by: Eric Curtin <redacted>
4 months agollama : fix defrag logic (#11707)
Georgi Gerganov [Fri, 7 Feb 2025 14:05:34 +0000 (16:05 +0200)]
llama : fix defrag logic (#11707)

* llama : fix defrag logic

ggml-ci

* cont : better logic

ggml-ci

* cont : clamp fragmentation to 0.0

ggml-ci

4 months agovocab : ignore invalid UTF-8 input in the BPE tokenizer (#11729)
Christian Fillion [Fri, 7 Feb 2025 13:55:47 +0000 (08:55 -0500)]
vocab : ignore invalid UTF-8 input in the BPE tokenizer (#11729)

Silently insert U+FFFD(s) (Unicode replacement character) instead until the
next valid codepoint can be found.

This fixes `llama_tokenize` throwing an exception across the C API boundary
or libllama's module boundary (the caller's runtime might be incompatible!)

Returning a proper error code might be desirable; however, the signature
of `llama_tokenize` doesn't allow it, as all return values already have
an existing meaning.
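
For illustration only — a rough sketch (not the actual vocab code, and it skips
overlong/surrogate checks) of replacing invalid UTF-8 bytes with U+FFFD so
tokenization never has to throw:

```cpp
#include <cstdint>
#include <string>

static std::string sanitize_utf8(const std::string & in) {
    static const char replacement[] = "\xEF\xBF\xBD"; // U+FFFD encoded in UTF-8
    std::string out;
    size_t i = 0;
    while (i < in.size()) {
        const uint8_t c = in[i];
        const size_t len =
            (c & 0x80) == 0x00 ? 1 :
            (c & 0xE0) == 0xC0 ? 2 :
            (c & 0xF0) == 0xE0 ? 3 :
            (c & 0xF8) == 0xF0 ? 4 : 0;
        bool ok = len != 0 && i + len <= in.size();
        for (size_t k = 1; ok && k < len; ++k) {
            ok = (uint8_t(in[i + k]) & 0xC0) == 0x80; // continuation byte
        }
        if (ok) {
            out.append(in, i, len);
            i += len;
        } else {
            out += replacement; // insert U+FFFD and resync on the next byte
            i += 1;
        }
    }
    return out;
}
```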

4 months agollama : fix progress dots (#11730)
magicse [Fri, 7 Feb 2025 13:48:47 +0000 (15:48 +0200)]
llama : fix progress dots (#11730)

* Update llama.cpp

Display progress dots in the terminal.
Without this, progress dots were not shown while loading the model from a file.

* Update llama.cpp

removed trailing spaces

4 months agovulkan: print shared memory size (#11719)
Jeff Bolz [Fri, 7 Feb 2025 10:26:03 +0000 (04:26 -0600)]
vulkan: print shared memory size (#11719)

4 months agollama : add llama_sampler_init for safe usage of llama_sampler_free (#11727)
Christian Fillion [Fri, 7 Feb 2025 09:33:27 +0000 (04:33 -0500)]
llama : add llama_sampler_init for safe usage of llama_sampler_free (#11727)

The C API in llama.h claims users can implement `llama_sampler_i` to
create custom `llama_sampler`. The sampler chain takes ownership and
calls `llama_sampler_free` on them. However, `llama_sampler_free` is
hard-coded to use `delete`. This is undefined behavior if the object
wasn't also allocated via `new` from libllama's C++ runtime. Callers
in C and C-compatible languages do not use C++'s `new` operator. C++
callers may not be sharing the same heap as libllama.
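
A hedged sketch of a custom sampler allocated through the new helper; the exact
`llama_sampler_i` field order and the `llama_sampler_init` signature below are
assumptions, so check llama.h before relying on them:

```cpp
#include "llama.h"

static const char * my_sampler_name(const struct llama_sampler * /*smpl*/) {
    return "my-sampler";
}

static void my_sampler_apply(struct llama_sampler * /*smpl*/, llama_token_data_array * cur_p) {
    // a real sampler would adjust cur_p->data[i].logit / .p here
    (void) cur_p;
}

static struct llama_sampler_i my_sampler_iface = {
    /* .name   = */ my_sampler_name,
    /* .accept = */ nullptr,
    /* .apply  = */ my_sampler_apply,
    /* .reset  = */ nullptr,
    /* .clone  = */ nullptr,
    /* .free   = */ nullptr,
};

// llama_sampler_init allocates the wrapper inside libllama, so passing the result
// to llama_sampler_free (or to a sampler chain) stays well defined even when the
// caller is plain C or uses a different C++ runtime.
struct llama_sampler * make_my_sampler() {
    return llama_sampler_init(&my_sampler_iface, /* ctx = */ nullptr);
}
```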

4 months agoSYCL: remove XMX info from print devices (#11712)
Akarshan Biswas [Fri, 7 Feb 2025 09:27:53 +0000 (14:57 +0530)]
SYCL: remove XMX info from print devices (#11712)

4 months agocommon : add default embeddings presets (#11677)
Daniel Bevenius [Fri, 7 Feb 2025 08:15:22 +0000 (09:15 +0100)]
common : add default embeddings presets (#11677)

* common : add default embeddings presets

This commit adds default embeddings presets for the following models:
- bge-small-en-v1.5
- e5-small-v2
- gte-small

These can be used with llama-embedding and llama-server.

For example, with llama-embedding:
```console
./build/bin/llama-embedding --embd-gte-small-default -p "Hello, how are you?"
```

And with llama-server:
```console
./build/bin/llama-server --embd-gte-small-default
```
And the embeddings endpoint can then be called with a POST request:
```console
curl --request POST \
    --url http://localhost:8080/embeddings \
    --header "Content-Type: application/json" \
    --data '{"input": "Hello, how are you?"}'
```

I'm not sure if these are the most common embedding models but hopefully
this can be a good starting point for discussion and further
improvements.

Refs: https://github.com/ggerganov/llama.cpp/issues/10932

4 months agoggml : optimize and build warning fix for LoongArch (#11709)
Jinyang He [Fri, 7 Feb 2025 07:38:31 +0000 (15:38 +0800)]
ggml : optimize and build warning fix for LoongArch (#11709)

* ggml : optimize convert f32<->f16 for loongarch_asx

* ggml : optimize loongarch_asx extend i16,i8,u8 to i32,i16

* ggml : Fix warnings when running the CPU CI locally on LoongArch

4 months agollama : fix old glm4 models (#11670)
tv1wnd [Thu, 6 Feb 2025 21:48:51 +0000 (22:48 +0100)]
llama : fix old glm4 models (#11670)

4 months agosync : ggml
Georgi Gerganov [Thu, 6 Feb 2025 19:23:03 +0000 (21:23 +0200)]
sync : ggml

4 months agorpc: fix known RCE in rpc-server (ggml/1103)
Patrick Peng [Thu, 6 Feb 2025 14:29:13 +0000 (09:29 -0500)]
rpc: fix known RCE in rpc-server (ggml/1103)

Add bounds checking in `rpc_server::copy_tensor` to prevent out-of-bounds writes:
check that `(uint8_t *)dst->data + ggml_nbytes(src)` remains within the destination buffer's allocated region.
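
Roughly, the check amounts to something like the following sketch (illustrative
name `copy_tensor_checked`, not the actual rpc-server code):

```cpp
#include <cstdint>
#include <cstring>
#include "ggml.h"

// Illustrative only: reject the copy if it would write past the end of the
// destination buffer's allocated region.
static bool copy_tensor_checked(const ggml_tensor * src, ggml_tensor * dst,
                                const uint8_t * dst_base, size_t dst_buf_size) {
    const size_t    nbytes   = ggml_nbytes(src);
    const uint8_t * dst_data = (const uint8_t *) dst->data;

    if (dst_data < dst_base || dst_data + nbytes > dst_base + dst_buf_size) {
        return false; // out-of-bounds write, refuse the request
    }
    std::memcpy(dst->data, src->data, nbytes);
    return true;
}
```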

4 months agoserver : (webui) migrate project to ReactJS with typescript (#11688)
Xuan-Son Nguyen [Thu, 6 Feb 2025 16:32:29 +0000 (17:32 +0100)]
server : (webui) migrate project to ReactJS with typescript (#11688)

* init version

* fix auto scroll

* bring back copy btn

* bring back thought process

* add lint and format check on CI

* remove lang from html tag

* allow multiple generations at the same time

* lint and format combined

* fix unused var

* improve MarkdownDisplay

* fix more latex

* fix code block cannot be selected while generating

4 months agodocs: update fedora cuda guide for 12.8 release (#11393)
Tei Home [Thu, 6 Feb 2025 12:16:15 +0000 (20:16 +0800)]
docs: update fedora cuda guide for 12.8 release (#11393)

* docs: update fedora cuda guide for 12.8 release

* docs: build cuda update

4 months agoSYCL: Adjust support condition for norm operators (#11674)
Akarshan Biswas [Thu, 6 Feb 2025 11:42:35 +0000 (17:12 +0530)]
SYCL: Adjust support condition for norm operators (#11674)

SYCL does not support non-contiguous tensors for norm operations

4 months agollama : add log about loading model tensors (#11699)
Georgi Gerganov [Thu, 6 Feb 2025 11:41:37 +0000 (13:41 +0200)]
llama : add log about loading model tensors (#11699)

4 months agobuild : fix llama.pc (#11658)
Adrien Gallouët [Thu, 6 Feb 2025 11:08:13 +0000 (12:08 +0100)]
build : fix llama.pc (#11658)

Signed-off-by: Adrien Gallouët <redacted>
4 months agoggml : fix LoongArch compile error with 128-bit SIMD (#11701)
junchao-zhao [Thu, 6 Feb 2025 09:20:00 +0000 (17:20 +0800)]
ggml : fix LoongArch compile error with 128-bit SIMD (#11701)

4 months agovulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)
Jeff Bolz [Thu, 6 Feb 2025 06:15:30 +0000 (00:15 -0600)]
vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)

* vulkan: optimize coopmat2 iq2/iq3 callbacks

* build: trigger CI on GLSL compute shader changes

4 months agovulkan: initial support for IQ4_XS quantization (#11501)
Rémy O [Thu, 6 Feb 2025 06:09:59 +0000 (07:09 +0100)]
vulkan: initial support for IQ4_XS quantization (#11501)

4 months agovulkan: use smaller combined allocations to avoid fragmentation (#11551)
Jeff Bolz [Thu, 6 Feb 2025 06:02:18 +0000 (00:02 -0600)]
vulkan: use smaller combined allocations to avoid fragmentation (#11551)

4 months agometal : avoid breaking build when metal API predates TARGET_OS_VISION (#11690)
Charles Duffy [Thu, 6 Feb 2025 01:52:31 +0000 (19:52 -0600)]
metal : avoid breaking build when metal API predates TARGET_OS_VISION (#11690)

Avoids breakage in nix flake build introduced by b0569130c5e9c671152c913d82803b7c2f014ff9

4 months agoreadme : add link to Autopen under UIs (#11684)
Matvey Soloviev [Thu, 6 Feb 2025 00:55:25 +0000 (01:55 +0100)]
readme : add link to Autopen under UIs (#11684)

Autopen (https://github.com/blackhole89/autopen) is a graphical text editor that uses llama.cpp to tokenize the buffer on the fly, score the buffer, visualise token logits and allow you to switch back and forth between different possible completions at any point. It hopefully meets the criteria for inclusion, as the dependency on llama.cpp is stated prominently.

4 months agometal : adjust support conditions for norm operators (#11671)
Georgi Gerganov [Wed, 5 Feb 2025 08:57:42 +0000 (10:57 +0200)]
metal : adjust support conditions for norm operators (#11671)

cont #11659

ggml-ci

4 months agoCUDA: support for mat. mul. with ne03 != ne13 (#11656)
Johannes Gäßler [Wed, 5 Feb 2025 07:58:31 +0000 (08:58 +0100)]
CUDA: support for mat. mul. with ne03 != ne13 (#11656)

4 months agollava: add quantization for the visual projector LLAVA, Qwen2VL (#11644)
SAMI [Wed, 5 Feb 2025 07:45:40 +0000 (14:45 +0700)]
llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644)

* Added quantization for visual projector
* Added README
* Fixed the clip quantize implementation in the file

* Fixed the gcc warning regarding minor linting

* Removed trailing whitespace

4 months ago`sync`: minja (#11641)
Olivier Chafik [Wed, 5 Feb 2025 01:00:12 +0000 (01:00 +0000)]
`sync`: minja (#11641)

* `sync`: minja

https://github.com/google/minja/commit/182de30cdaee78ba86179122f8047b3bdbab7f7f

https://github.com/google/minja/pull/46

https://github.com/google/minja/pull/45

4 months agoCUDA: non-contiguous (RMS) norm support (#11659)
Johannes Gäßler [Tue, 4 Feb 2025 21:21:42 +0000 (22:21 +0100)]
CUDA: non-contiguous (RMS) norm support (#11659)

* CUDA: non-contiguous (RMS) norm support

---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agoHIP: force max threads per block to be 1024 (#11621)
fxzjshm [Tue, 4 Feb 2025 18:18:38 +0000 (02:18 +0800)]
HIP: force max threads per block to be 1024 (#11621)

Some old or vendor-forked versions of llvm still use 256. Explicitly set it to 1024 to align with upstream llvm.

Signed-off-by: fxzjshm <redacted>
4 months agoserver : add try..catch to places not covered by set_exception_handler (#11620)
Xuan-Son Nguyen [Tue, 4 Feb 2025 17:25:42 +0000 (18:25 +0100)]
server : add try..catch to places not covered by set_exception_handler (#11620)

* server : add try..catch to places not covered by set_exception_handler

* log_server_request: rm try catch, add reminder

4 months agoarg : list RPC devices first when using --list-devices (#11655)
Radoslav Gerganov [Tue, 4 Feb 2025 16:16:20 +0000 (18:16 +0200)]
arg : list RPC devices first when using --list-devices (#11655)

List devices in the same order as they appear when evaluating the model
and splitting tensors across devices, i.e. RPC devices come first in the
list.

ref #11435

4 months ago`tool-call`: command r7b fix for normal responses (#11608)
Olivier Chafik [Tue, 4 Feb 2025 15:48:53 +0000 (15:48 +0000)]
`tool-call`: command r7b fix for normal responses (#11608)

* fix command r7b normal response regex + add to server test

* test multiline non-tool-call responses in test-chat

4 months agoreadme : add llm_client Rust crate to readme bindings (#11628)
Shelby Jenkins [Tue, 4 Feb 2025 11:20:55 +0000 (05:20 -0600)]
readme : add llm_client Rust crate to readme bindings (#11628)

[This crate](https://github.com/ShelbyJenkins/llm_client) has been in a usable state for quite a while, so I figured it's fair to add it now.

It installs from crates.io, and automatically downloads the llama.cpp repo and builds it for the target platform - with the goal being the easiest user experience possible.

It also integrates model presets and chooses the largest quant that fits the target's available VRAM. So a user just has to specify one of the presets (I manually add the most popular models), and it will download from Hugging Face.

So, it's like a Rust Ollama, but it's not really for chatting. It makes heavy use of llama.cpp's grammar system to do structured output for decision making and control flow tasks.

4 months agoswift : fix llama-vocab api usage (#11645)
Jhen-Jie Hong [Tue, 4 Feb 2025 11:15:24 +0000 (19:15 +0800)]
swift : fix llama-vocab api usage (#11645)

* swiftui : fix vocab api usage

* batched.swift : fix vocab api usage

4 months agometal : use residency set for other platforms (#11648)
Jhen-Jie Hong [Tue, 4 Feb 2025 11:07:18 +0000 (19:07 +0800)]
metal : use residency set for other platforms (#11648)

4 months agoauthors : update
Georgi Gerganov [Tue, 4 Feb 2025 11:04:10 +0000 (13:04 +0200)]
authors : update

4 months agosync : ggml upstream/0.0.4631
Georgi Gerganov [Tue, 4 Feb 2025 10:59:21 +0000 (12:59 +0200)]
sync : ggml

4 months agocmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)
Christian Kastner [Mon, 3 Feb 2025 23:17:15 +0000 (00:17 +0100)]
cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)

This makes git an optional dependency, which is useful when ggml is built
not from git, but from a tarball or a distribution source package.

This conditional also affects GGML_BUILD_COMMIT. Nothing seems to be
using it, though, so there doesn't seem to be much value in factoring it out,
or even requiring it.

4 months agoci : do not stale-close roadmap issues
Georgi Gerganov [Tue, 4 Feb 2025 07:30:42 +0000 (09:30 +0200)]
ci : do not stale-close roadmap issues

4 months ago`tool-call`: allow `--chat-template chatml` w/ `--jinja`, default to chatml upon...
Olivier Chafik [Mon, 3 Feb 2025 23:49:27 +0000 (23:49 +0000)]
`tool-call`: allow `--chat-template chatml` w/ `--jinja`, default to chatml upon parsing issue, avoid double bos (#11616)

* tool-call: allow `--jinja --chat-template chatml`

* fix double bos issue (drop bos/eos tokens from jinja template)

* add missing try catch around jinja parsing to default to chatml

* Simplify default chatml logic

4 months agoserver : (webui) revert hacky solution from #11626 (#11634)
Xuan-Son Nguyen [Mon, 3 Feb 2025 23:10:52 +0000 (00:10 +0100)]
server : (webui) revert hacky solution from #11626 (#11634)

4 months agoserver : (webui) allow typing and submitting during llm response (#11626)
Woof Dog [Mon, 3 Feb 2025 22:16:27 +0000 (22:16 +0000)]
server : (webui) allow typing and submitting during llm response (#11626)

4 months agoserver : remove CPPHTTPLIB_NO_EXCEPTIONS define (#11622)
Daniel Bevenius [Mon, 3 Feb 2025 15:45:38 +0000 (16:45 +0100)]
server : remove CPPHTTPLIB_NO_EXCEPTIONS define (#11622)

This commit removes the CPPHTTPLIB_NO_EXCEPTIONS define from the server
code.

The motivation for this is that when using a debug build the server
would crash when an exception was thrown, terminating the server
process because the exception was unhandled. When CPPHTTPLIB_NO_EXCEPTIONS is
set, cpp_httplib will not call the exception handler, which would normally
return a 500 error to the client. This caused tests to fail when using
a debug build.

Fixes: https://github.com/ggerganov/llama.cpp/issues/11613
4 months agosync : ggml
Georgi Gerganov [Mon, 3 Feb 2025 12:57:08 +0000 (14:57 +0200)]
sync : ggml

4 months agoCUDA: fix Volta FlashAttention logic (#11615)
Johannes Gäßler [Mon, 3 Feb 2025 12:25:56 +0000 (13:25 +0100)]
CUDA: fix Volta FlashAttention logic (#11615)

4 months agoserver : (webui) Fix Shift+Enter handling (#11609)
mashdragon [Mon, 3 Feb 2025 09:42:55 +0000 (09:42 +0000)]
server : (webui) Fix Shift+Enter handling (#11609)

* Fix Shift+Enter handling

`exact` on the Enter handler means the message is not sent when Shift+Enter is pressed anyway

* build index.html.gz

---------

Co-authored-by: Xuan Son Nguyen <redacted>
4 months agoHIP: fix flash_attn_stream_k_fixup warning (#11604)
Johannes Gäßler [Sun, 2 Feb 2025 22:48:29 +0000 (23:48 +0100)]
HIP: fix flash_attn_stream_k_fixup warning (#11604)

4 months agoCUDA/HIP: add support for selectable warp size to mmv (#11519)
uvos [Sun, 2 Feb 2025 21:40:09 +0000 (22:40 +0100)]
CUDA/HIP: add support for selectable warp size to mmv (#11519)

CUDA/HIP: add support for selectable warp size to mmv

4 months agoHIP: add GGML_CUDA_CC_IS_* for amd families as increasing cc architectures for amd...
uvos [Sun, 2 Feb 2025 21:08:05 +0000 (22:08 +0100)]
HIP: add GGML_CUDA_CC_IS_* for amd families as increasing cc architectures for amd gpus are not supersets of each other (#11601)

This fixes a bug where RDNA1 gpus other than gfx1010 were not handled correctly

4 months agonit: more informative crash when grammar sampler fails (#11593)
Olivier Chafik [Sun, 2 Feb 2025 19:58:34 +0000 (19:58 +0000)]
nit: more informative crash when grammar sampler fails (#11593)

4 months agoCUDA: use mma PTX instructions for FlashAttention (#11583)
Johannes Gäßler [Sun, 2 Feb 2025 18:31:09 +0000 (19:31 +0100)]
CUDA: use mma PTX instructions for FlashAttention (#11583)

* CUDA: use mma PTX instructions for FlashAttention

* __shfl_sync workaround for movmatrix

* add __shfl_sync to HIP

Co-authored-by: Diego Devesa <redacted>
4 months agoName colors (#11573)
Eric Curtin [Sun, 2 Feb 2025 15:14:48 +0000 (16:14 +0100)]
Name colors (#11573)

It's more descriptive, and it uses #define's so we can use compile-time
concatenation.

Signed-off-by: Eric Curtin <redacted>
4 months ago`tool-call`: support Command R7B (+ return tool_plan "thoughts" in API) (#11585)
Olivier Chafik [Sun, 2 Feb 2025 09:25:38 +0000 (09:25 +0000)]
`tool-call`: support Command R7B (+ return tool_plan "thoughts" in API) (#11585)

* `tool-call`: support Command R7B (w/ tool_plan return)

* `tool-call`: cleaner preservation of tokens + warn when likely bad chat template override

* `tool-call`: test cleanup / handle lazy grammar triggers

4 months agoFix exotic ci env that lacks ostringstream::str (#11581)
Olivier Chafik [Sun, 2 Feb 2025 09:10:15 +0000 (09:10 +0000)]
Fix exotic ci env that lacks ostringstream::str (#11581)

4 months agosampling : support for llguidance grammars (#10224)
Michał Moskal [Sun, 2 Feb 2025 07:55:32 +0000 (23:55 -0800)]
sampling : support for llguidance grammars (#10224)

* initial porting of previous LLG patch

* update for new APIs

* build: integrate llguidance as an external project

* use '%llguidance' as marker to enable llg lark syntax

* add some docs

* clarify docs

* code style fixes

* remove llguidance.h from .gitignore

* fix tests when llg is enabled

* pass vocab not model to llama_sampler_init_llg()

* copy test-grammar-integration.cpp to test-llguidance.cpp

* clang fmt

* fix ref-count bug

* build and run test

* gbnf -> lark syntax

* conditionally include llguidance test based on LLAMA_LLGUIDANCE flag

* rename llguidance test file to test-grammar-llguidance.cpp

* add gh action for llg test

* align tests with LLG grammar syntax and JSON Schema spec

* llama_tokenizer() in fact requires valid utf8

* update llg

* format file

* add $LLGUIDANCE_LOG_LEVEL support

* fix whitespace

* fix warning

* include <cmath> for INFINITY

* add final newline

* fail llama_sampler_init_llg() at runtime

* Link gbnf_to_lark.py script; fix links; refer to llg docs for lexemes

* simplify #includes

* improve doc string for LLAMA_LLGUIDANCE

* typo in merge

* bump llguidance to 0.6.12

4 months agollama : add support for GLM-Edge and GLM-Edge-V series models (#10573)
piDack [Sun, 2 Feb 2025 07:48:46 +0000 (15:48 +0800)]
llama : add support for GLM-Edge and GLM-Edge-V series models (#10573)

* add glm edge chat model

* use config partial_rotary_factor as rope ratio

* support for glm edge model

* vision model support

* remove debug info

* fix format

* llava.cpp trailing whitespace

* remove unused AutoTokenizer

* Update src/llama.cpp to not contain <|end|> or </s>

Co-authored-by: Xuan Son Nguyen <redacted>
* add edge template

* fix chat template

* fix conflict

* fix conflict

* fix ci err

* fix format err

* fix template err

* 9b hf chat support

* format

* format clip.cpp

* fix format

* Apply suggestions from code review

* Apply suggestions from code review

* Update examples/llava/clip.cpp

* fix format

* minor : style

---------

Co-authored-by: liyuhang <redacted>
Co-authored-by: piDack <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: liyuhang <redacted>
Co-authored-by: Georgi Gerganov <redacted>
4 months agoci: use sccache on windows HIP jobs (#11553)
Olivier Chafik [Sat, 1 Feb 2025 18:22:38 +0000 (18:22 +0000)]
ci: use sccache on windows HIP jobs (#11553)

4 months ago`sync`: minja (https://github.com/google/minja/commit/418a2364b56dc9be4ed9a1a2b0fb16f...
Olivier Chafik [Sat, 1 Feb 2025 12:24:51 +0000 (12:24 +0000)]
`sync`: minja (https://github.com/google/minja/commit/418a2364b56dc9be4ed9a1a2b0fb16fb53a7a22e) (#11574)

4 months agoImplement s3:// protocol (#11511)
Eric Curtin [Sat, 1 Feb 2025 10:30:54 +0000 (11:30 +0100)]
Implement s3:// protocol (#11511)

For those that want to pull from s3

Signed-off-by: Eric Curtin <redacted>
4 months agoci: simplify cmake build commands (#11548)
Olivier Chafik [Sat, 1 Feb 2025 00:01:20 +0000 (00:01 +0000)]
ci: simplify cmake build commands (#11548)

4 months ago`ci`: use sccache on windows instead of ccache (#11545)
Olivier Chafik [Fri, 31 Jan 2025 17:12:40 +0000 (17:12 +0000)]
`ci`: use sccache on windows instead of ccache (#11545)

* Use sccache on ci for windows

* Detect sccache in cmake

4 months ago`tool-call`: fix llama 3.x and functionary 3.2, play nice w/ pydantic_ai package...
Olivier Chafik [Fri, 31 Jan 2025 14:15:25 +0000 (14:15 +0000)]
`tool-call`: fix llama 3.x and functionary 3.2, play nice w/ pydantic_ai package, update readme (#11539)

* An empty tool_call_id is better than none!

* sync: minja (tool call name optional https://github.com/google/minja/pull/36)

* Force-disable parallel_tool_calls if template doesn't support it

* More debug logs

* Llama 3.x tools: accept / trigger on more varied spaced outputs

* Fix empty content for functionary v3.2 tool call

* Add proper tool call docs to server README

* readme: function calling *is* supported now

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agofix stop regression (#11543)
Olivier Chafik [Fri, 31 Jan 2025 13:48:31 +0000 (13:48 +0000)]
fix stop regression (#11543)

4 months agoFix chatml fallback for unsupported builtin templates (when --jinja not enabled)...
Olivier Chafik [Fri, 31 Jan 2025 08:24:29 +0000 (08:24 +0000)]
Fix chatml fallback for unsupported builtin templates (when --jinja not enabled) (#11533)

4 months agoserver : fix --jinja when there's no tools or schema (typo was forcing JSON) (#11531)
Olivier Chafik [Fri, 31 Jan 2025 08:12:40 +0000 (08:12 +0000)]
server : fix --jinja when there's no tools or schema (typo was forcing JSON) (#11531)

4 months agocommon: Add missing va_end (#11529)
Steve Grubb [Fri, 31 Jan 2025 05:58:55 +0000 (00:58 -0500)]
common: Add missing va_end (#11529)

The va_copy man page states that va_end must be called to revert
whatever the copy did. For some implementations, not calling va_end
has no consequences. For others it could leak memory.
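
A small generic example of the rule (not the llama.cpp code): every `va_copy`,
like every `va_start`, needs a matching `va_end` before the function returns.

```cpp
#include <cstdarg>
#include <cstdio>

static int count_and_print(const char * fmt, ...) {
    va_list args;
    va_start(args, fmt);

    va_list args_copy;
    va_copy(args_copy, args);                        // copy so the list can be traversed twice
    const int len = vsnprintf(nullptr, 0, fmt, args_copy);
    va_end(args_copy);                               // the kind of call the commit adds

    vprintf(fmt, args);
    va_end(args);
    return len;
}

int main() {
    count_and_print("%s %d\n", "answer:", 42);
}
```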

4 months agoserver : update help metrics processing/deferred (#11512)
Daniel Bevenius [Fri, 31 Jan 2025 05:04:53 +0000 (06:04 +0100)]
server : update help metrics processing/deferred (#11512)

This commit updates the help text for the metrics `requests_processing`
and `requests_deferred` to be more grammatically correct.

Currently the returned metrics look like this:
```console
# HELP llamacpp:requests_processing Number of request processing.
# TYPE llamacpp:requests_processing gauge
llamacpp:requests_processing 0
# HELP llamacpp:requests_deferred Number of request deferred.
# TYPE llamacpp:requests_deferred gauge
llamacpp:requests_deferred 0
```

With this commit, the metrics will look like this:
```console
# HELP llamacpp:requests_processing Number of requests processing.
# TYPE llamacpp:requests_processing gauge
llamacpp:requests_processing 0
# HELP llamacpp:requests_deferred Number of requests deferred.
# TYPE llamacpp:requests_deferred gauge
llamacpp:requests_deferred 0
```
This is also consistent with the description of the metrics in the
server examples [README.md](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#get-metrics-prometheus-compatible-metrics-exporter).

4 months ago`ci`: ccache for all github workflows (#11516)
Olivier Chafik [Thu, 30 Jan 2025 22:01:06 +0000 (22:01 +0000)]
`ci`: ccache for all github workflows (#11516)

4 months agoTool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunc...
Olivier Chafik [Thu, 30 Jan 2025 19:13:58 +0000 (19:13 +0000)]
Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639)

---------

Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
4 months agoHIP: require at least HIP 5.5
uvos [Wed, 29 Jan 2025 18:36:00 +0000 (19:36 +0100)]
HIP: require at least HIP 5.5

4 months agoHIP: Prepare reduction operators for wave 64
uvos [Wed, 29 Jan 2025 18:12:42 +0000 (19:12 +0100)]
HIP: Prepare reduction operators for wave 64

4 months agoCUDA/HIP: add warp_size to cuda_device_info
uvos [Wed, 29 Jan 2025 16:46:23 +0000 (17:46 +0100)]
CUDA/HIP: add warp_size to cuda_device_info

4 months agosync: minja (#11499)
Olivier Chafik [Thu, 30 Jan 2025 10:30:27 +0000 (10:30 +0000)]
sync: minja (#11499)

4 months agovocab : correctly identify LF token for GPT-2 style BPE tokenizer (#11496)
mgroeber9110 [Thu, 30 Jan 2025 10:10:59 +0000 (11:10 +0100)]
vocab : correctly identify LF token for GPT-2 style BPE tokenizer (#11496)

4 months agoserver : use lambda instead of std::bind (#11507)
Daniel Bevenius [Thu, 30 Jan 2025 10:05:00 +0000 (11:05 +0100)]
server : use lambda instead of std::bind (#11507)

This commit replaces the two usages of `std::bind` with lambdas for
the `callback_new_task` and `callback_update_slots` callback functions.

The motivation for this change is consistency with the rest of the code
in server.cpp (lambdas are used for all other callbacks/handlers). Lambdas
are also more readable (perhaps this is subjective), and they are
recommended over `std::bind` in modern C++.

Ref: https://github.com/LithoCoders/dailycpp/blob/master/EffectiveModernC%2B%2B/chapter6/Item34_Prefer_lambdas_to_std::bind.md
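
As a generic illustration of the change (hypothetical types; not the actual
server.cpp code):

```cpp
#include <functional>
#include <iostream>

struct server_queue {
    std::function<void(int)> callback_new_task;
};

struct server_context {
    void process_new_task(int id) { std::cout << "task " << id << "\n"; }
};

int main() {
    server_context ctx;
    server_queue queue;

    // before: std::bind ties the member function to the context object
    queue.callback_new_task = std::bind(&server_context::process_new_task, &ctx, std::placeholders::_1);

    // after: a lambda expresses the same callback more directly
    queue.callback_new_task = [&ctx](int id) { ctx.process_new_task(id); };

    queue.callback_new_task(42);
}
```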

4 months agoserver : (docs) added response format for /apply-template [no ci] (#11503)
Isaac McFadyen [Thu, 30 Jan 2025 09:11:53 +0000 (04:11 -0500)]
server : (docs) added response format for /apply-template [no ci] (#11503)

4 months agoreadme : reference examples relative links (#11505)
Guspan Tanadi [Thu, 30 Jan 2025 05:58:02 +0000 (12:58 +0700)]
readme : reference examples relative links (#11505)

4 months agoserver : update json snippets in README.md [no ci] (#11492)
Daniel Bevenius [Thu, 30 Jan 2025 04:48:14 +0000 (05:48 +0100)]
server : update json snippets in README.md [no ci] (#11492)

This commit updates some of the JSON snippets in the README.md file and
removes the `json` language tag from the code blocks.

The motivation for this change is that if there is invalid JSON in a
code snippet it is highlighted in red, which can make it somewhat
difficult to read and a little distracting.

4 months agoserver : add /apply-template endpoint for additional use cases of Minja functionality...
Nigel Bosch [Wed, 29 Jan 2025 18:45:44 +0000 (12:45 -0600)]
server : add /apply-template endpoint for additional use cases of Minja functionality (#11489)

* add /apply-template endpoint to server

* remove unnecessary line

* add /apply-template documentation

* return only "prompt" field in /apply-template

* use suggested idea instead of my overly verbose way

4 months agovulkan: implement initial support for IQ2 and IQ3 quantizations (#11360)
Rémy Oudompheng [Wed, 29 Jan 2025 17:29:39 +0000 (18:29 +0100)]
vulkan: implement initial support for IQ2 and IQ3 quantizations (#11360)

* vulkan: initial support for IQ3_S

* vulkan: initial support for IQ3_XXS

* vulkan: initial support for IQ2_XXS

* vulkan: initial support for IQ2_XS

* vulkan: optimize Q3_K by removing branches

* vulkan: implement dequantize variants for coopmat2

* vulkan: initial support for IQ2_S

* vulkan: vertically realign code

* port failing dequant callbacks from mul_mm

* Fix array length mismatches

* vulkan: avoid using workgroup size before it is referenced

* tests: increase timeout for Vulkan llvmpipe backend

---------

Co-authored-by: Jeff Bolz <redacted>
4 months agoserver : update auto gen files comments [no ci] (#11484)
Daniel Bevenius [Wed, 29 Jan 2025 15:34:18 +0000 (16:34 +0100)]
server : update auto gen files comments [no ci] (#11484)

* server : update auto gen files comments

This commit updates the 'auto generated files' comments in server.cpp
and removes `deps.sh` from the comment.

The motivation for this change is that `deps.sh` was removed in
Commit 91c36c269bca75b2d08119c653512cd20b4ea2ba ("server : (web ui)
Various improvements, now use vite as bundler (#10599)").

* squash! server : update auto gen files comments [no ci]

Move comments about file generation to README.md.

* squash! server : update auto gen files comments [no ci]

Remove the comments in server.cpp that mention that information
can be found in the README.md file.

4 months agovulkan: Catch pipeline creation failure and print an error message (#11436)
Jeff Bolz [Wed, 29 Jan 2025 15:26:50 +0000 (09:26 -0600)]
vulkan: Catch pipeline creation failure and print an error message (#11436)

* vulkan: Catch pipeline creation failure and print an error message

Also, fix some warnings from my on-demand compile change.

* vulkan: fix pipeline creation logging

4 months agoParse https://ollama.com/library/ syntax (#11480)
Eric Curtin [Wed, 29 Jan 2025 11:23:10 +0000 (12:23 +0100)]
Parse https://ollama.com/library/ syntax (#11480)

People search for ollama models using the web UI; this change
allows one to copy the URL from the browser and have it be
compatible with llama-run.

Signed-off-by: Eric Curtin <redacted>
4 months agosync : ggml
Georgi Gerganov [Wed, 29 Jan 2025 09:25:29 +0000 (11:25 +0200)]
sync : ggml

4 months agoggml : add option to not print stack on abort (ggml/1081)
William Tambellini [Thu, 23 Jan 2025 19:59:08 +0000 (11:59 -0800)]
ggml : add option to not print stack on abort (ggml/1081)

* Add option to not print stack on abort

Add option/envvar to disable stack printing on abort.
Also link some unittests with Threads to fix link errors on
ubuntu/g++11.

* Update ggml/src/ggml.c

---------

Co-authored-by: Diego Devesa <redacted>
4 months agoggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (ggml/1065)
issixx [Fri, 17 Jan 2025 12:29:08 +0000 (21:29 +0900)]
ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (ggml/1065)

some threads kept looping and failed to terminate properly after an abort during CPU execution.

Co-authored-by: issi <redacted>
4 months agoembedding : enable --no-warmup option (#11475)
Daniel Bevenius [Wed, 29 Jan 2025 08:38:54 +0000 (09:38 +0100)]
embedding : enable --no-warmup option (#11475)

This commit enables the `--no-warmup` option for llama-embedding.

The motivation for this change is to allow the user to disable the
warmup when running the program.

4 months agollama: fix missing k_cache store for rwkv6qwen2 (#11445)
Molly Sophia [Wed, 29 Jan 2025 04:07:21 +0000 (12:07 +0800)]
llama: fix missing k_cache store for rwkv6qwen2 (#11445)

Signed-off-by: Molly Sophia <redacted>
4 months agocmake: add hints for locating ggml on Windows using Llama find-package (#11466)
Emreerdog [Tue, 28 Jan 2025 23:22:06 +0000 (02:22 +0300)]
cmake: add hints for locating ggml on Windows using Llama find-package (#11466)

4 months agoserver : Fixed wrong function name in llamacpp server unit test (#11473)
peidaqi [Tue, 28 Jan 2025 23:03:42 +0000 (16:03 -0700)]
server : Fixed wrong function name in llamacpp server unit test (#11473)

The test_completion_stream_with_openai_library() function actually runs with stream=False by default, and test_completion_with_openai_library() runs with stream=True

4 months agoci : fix build CPU arm64 (#11472)
Xuan-Son Nguyen [Tue, 28 Jan 2025 23:02:56 +0000 (00:02 +0100)]
ci : fix build CPU arm64 (#11472)

* ci : fix build CPU arm64

* failed, trying ubuntu 22

* vulkan: ubuntu 24

* vulkan : jammy --> noble

4 months agoHIP: Suppress transformation warning in softmax.cu
uvos [Tue, 28 Jan 2025 22:06:32 +0000 (23:06 +0100)]
HIP: Suppress transformation warning in softmax.cu

Loops with bounds not known at compile time cannot be unrolled.
When ncols_template == 0, the bounds of the loop are not constexpr, so llvm cannot unroll the loops here.
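
A hedged sketch of the pattern being described (not the actual softmax.cu kernel):
the column count is either a compile-time template argument or, when
`ncols_template == 0`, a runtime parameter whose loop cannot be unrolled.

```cpp
// Illustrative only. With ncols_template > 0 the bound is constexpr and the
// compiler can honor "#pragma unroll"; with ncols_template == 0 the bound is
// the runtime value ncols_param, so the loop cannot be unrolled and clang/llvm
// reports a loop-transformation warning instead.
template <int ncols_template>
void softmax_row(const float * x, float * dst, int ncols_param) {
    const int ncols = ncols_template == 0 ? ncols_param : ncols_template;
#pragma unroll
    for (int col = 0; col < ncols; ++col) {
        dst[col] = x[col]; // stand-in for the real per-column work
    }
}
```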

4 months agoHIP: Only call rocblas_initialize on rocblas versions with the multiple instantiation...
Nikita Sarychev [Tue, 28 Jan 2025 15:42:20 +0000 (07:42 -0800)]
HIP: Only call rocblas_initialize on rocblas versions with the multiple instantiation bug (#11080)

This disables the workaround on fixed rocblas versions (>= 4.0.0) to eliminate the runtime cost and unnecessary VRAM allocation of loading all tensile objects.

4 months agoAdd github protocol pulling and http:// (#11465)
Eric Curtin [Tue, 28 Jan 2025 14:45:41 +0000 (15:45 +0100)]
Add github protocol pulling and http:// (#11465)

These are added as pulling protocols to llama-run

Signed-off-by: Eric Curtin <redacted>
4 months agodocker: allow installing pip packages system-wide (#11437)
Nuno [Tue, 28 Jan 2025 14:17:25 +0000 (15:17 +0100)]
docker: allow installing pip packages system-wide (#11437)

Signed-off-by: rare-magma <redacted>
4 months agocmake : don't fail on `GGML_CPU=OFF` (#11457)
someone13574 [Tue, 28 Jan 2025 14:15:34 +0000 (09:15 -0500)]
cmake : don't fail on `GGML_CPU=OFF` (#11457)