git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
3 months agoggml : fix kleidiai build (#12159)
ag2s20150909 [Mon, 3 Mar 2025 12:54:08 +0000 (20:54 +0800)]
ggml : fix kleidiai build (#12159)

The libggml API has changed, but this has not been updated.

3 months agoAdding UTF-8 support to llama.cpp (#12111)
Eric Curtin [Mon, 3 Mar 2025 12:44:56 +0000 (12:44 +0000)]
Adding UTF-8 support to llama.cpp (#12111)

For emojis, non-alpha characters, etc.

Signed-off-by: Eric Curtin <redacted>
3 months agowebui : add ?m=... and ?q=... params (#12148)
Xuan-Son Nguyen [Mon, 3 Mar 2025 10:42:45 +0000 (11:42 +0100)]
webui : add ?m=... and ?q=... params (#12148)

* webui : add ?m=... and ?q=... params

* also clear prefilledMessage variable

* better approach

* fix comment

* test: bump timeout on GITHUB_ACTION
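
A usage sketch (host/port and text are placeholders; the exact semantics are an assumption: `q` appears to prefill the input, while `m` submits a message on load):
```console
# prefill the input box without sending (assumed behavior):
http://127.0.0.1:8080/?q=Explain%20the%20KV%20cache
# submit a message as soon as the page loads (assumed behavior):
http://127.0.0.1:8080/?m=Explain%20the%20KV%20cache
```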

3 months agoSYCL: Move CPY kernels to a separate file and add a few missing kernels (#12133)
Akarshan Biswas [Mon, 3 Mar 2025 10:07:22 +0000 (15:37 +0530)]
SYCL: Move CPY kernels to a separate file and add a few missing kernels (#12133)

* SYCL: refactor and move cpy kernels to a separate file

* Add a few missing cpy kernels

* refactor and add debug logs

3 months agoggml-backend : keep paths in native string type when possible (#12144)
Diego Devesa [Sun, 2 Mar 2025 21:11:00 +0000 (22:11 +0100)]
ggml-backend : keep paths in native string type when possible (#12144)

3 months agomain: use jinja chat template system prompt by default (#12118)
Sigbjørn Skjæret [Sun, 2 Mar 2025 13:53:48 +0000 (14:53 +0100)]
main: use jinja chat template system prompt by default (#12118)

* Use jinja chat template system prompt by default

* faster conditional order

* remove nested ternary

---------

Co-authored-by: Xuan Son Nguyen <redacted>
3 months agomain: update outdated system prompt message (followup to #12131) (#12132)
Sigbjørn Skjæret [Sat, 1 Mar 2025 14:22:27 +0000 (15:22 +0100)]
main: update outdated system prompt message (followup to #12131) (#12132)

* Update outdated message

* wording

Co-authored-by: Xuan-Son Nguyen <redacted>
---------

Co-authored-by: Xuan-Son Nguyen <redacted>
3 months agocommon : add --system-prompt parameter, replace behavior of -p in conversation mode...
Sigbjørn Skjæret [Sat, 1 Mar 2025 12:56:45 +0000 (13:56 +0100)]
common : add --system-prompt parameter, replace behavior of -p in conversation mode (#12131)

* Add --system-prompt parameter

* use user defined system prompt

* clarify

Co-authored-by: Xuan-Son Nguyen <redacted>
* add warning

* clarify

Co-authored-by: Xuan-Son Nguyen <redacted>
---------

Co-authored-by: Xuan-Son Nguyen <redacted>
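
A minimal invocation sketch (model path and prompt text are placeholders):
```console
$ llama-cli -m model.gguf -cnv --system-prompt "You are a concise assistant."
```
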
3 months agoCUDA: compress mode option and default to size (#12029)
Erik Scholz [Sat, 1 Mar 2025 11:57:22 +0000 (12:57 +0100)]
CUDA: compress mode option and default to size (#12029)

CUDA 12.8 added an option to specify stronger compression for binaries, so we now default to "size".
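
A build sketch, assuming the CMake option introduced here is named GGML_CUDA_COMPRESSION_MODE with values such as none/speed/balance/size:
```console
$ cmake -B build -DGGML_CUDA=ON -DGGML_CUDA_COMPRESSION_MODE=size
$ cmake --build build --config Release
```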

3 months agowebui : minor typo fixes (#12116)
Vivian [Sat, 1 Mar 2025 10:15:09 +0000 (15:45 +0530)]
webui : minor typo fixes (#12116)

* fix typos and improve menu text clarity

* rename variable trimedValue to trimmedValue

* add updated index.html.gz

* rebuild

---------

Co-authored-by: Xuan Son Nguyen <redacted>
3 months agoconvert : fix Norway problem when parsing YAML (#12114)
Xuan-Son Nguyen [Fri, 28 Feb 2025 16:44:46 +0000 (17:44 +0100)]
convert : fix Norway problem when parsing YAML (#12114)

* convert : fix Norway problem when parsing YAML

* Update gguf-py/gguf/metadata.py

* add newline at correct place
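
For context, the "Norway problem" is YAML 1.1 resolving an unquoted NO (the country code for Norway) to the boolean false. A quick demonstration with PyYAML:
```console
$ python3 -c "import yaml; print(yaml.safe_load('country: NO'))"
{'country': False}
$ python3 -c "import yaml; print(yaml.safe_load(\"country: 'NO'\"))"
{'country': 'NO'}
```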

3 months agoggml : upgrade init_tensor API to return a ggml_status (#11854)
William Tambellini [Fri, 28 Feb 2025 13:41:47 +0000 (05:41 -0800)]
ggml : upgrade init_tensor API to return a ggml_status (#11854)

* Upgrade init_tensor API to return a ggml_status

To prepare for an 'abort-free' ggml
(ggml not aborting on OOMs but returning an OOM status),
as agreed with Diego in the ggml repo,
upgrade the init_tensor() and view_init() APIs
to return a ggml_status.

* misc fixes

---------

Co-authored-by: slaren <redacted>
3 months agollama : add Phi-4-mini support (supersede #12099) (#12108)
Xuan-Son Nguyen [Fri, 28 Feb 2025 11:44:11 +0000 (12:44 +0100)]
llama : add Phi-4-mini support (supersede #12099) (#12108)

* Added Phi-4-mini-instruct support

* Update regex per ngxson

* Change the vocab base to Xenova/gpt-4o

* fix conversion update script

* no need to check longrope

* minor style fix

* fix python style

---------

Co-authored-by: Nicholas Sparks <redacted>
3 months agoUpdate granite vision docs for 3.2 model (#12105)
Alex Brooks [Fri, 28 Feb 2025 11:31:47 +0000 (04:31 -0700)]
Update granite vision docs for 3.2 model (#12105)

Signed-off-by: Alex-Brooks <redacted>
3 months agovulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations (#11595)
Rémy O [Fri, 28 Feb 2025 08:42:52 +0000 (09:42 +0100)]
vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations (#11595)

* vulkan: implement specialized MMV kernels for IQ2 quantizations

* vulkan: add MMV kernels for IQ3 quants

* vulkan: Increase MMV batch size and unroll IQ LUT setup

* vulkan: fix init_iq_shmem for WG sizes larger than tables

* vulkan: common batch size for all I-quants

3 months agoCUDA: fix logic for V100 + GGML_CUDA_FORCE_MMQ (#12098)
Johannes Gäßler [Fri, 28 Feb 2025 08:26:43 +0000 (09:26 +0100)]
CUDA: fix logic for V100 + GGML_CUDA_FORCE_MMQ (#12098)

3 months agoggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot (#12064)
Prashant Vithule [Fri, 28 Feb 2025 07:36:12 +0000 (13:06 +0530)]
ggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot (#12064)

* Added SVE Support for Q2_K Quantized Models

* Use 4-space indentation in the switch cases

* removed comment lines

* Remove the loop; retain the curly braces for better understanding of the code

* Remove the comment line added for the q3_k_q8_k kernel

---------

Co-authored-by: vithulep <redacted>
3 months agoCANN: Fix build error with GCC 13 (#11990)
hipudding [Fri, 28 Feb 2025 07:23:47 +0000 (15:23 +0800)]
CANN: Fix build error with GCC 13 (#11990)

Remove unused header file that causes compilation failure on ARM
platform with GCC 13.

3 months agovulkan: matmul dequantization improvements (#12015)
Eve [Fri, 28 Feb 2025 07:20:08 +0000 (07:20 +0000)]
vulkan: matmul dequantization improvements (#12015)

* faster dequant for old quants

* dont use unpack for iq4_nl

* vec2 unpack for q8

3 months agovulkan: improve im2col (#11826)
Daniele [Fri, 28 Feb 2025 06:52:51 +0000 (06:52 +0000)]
vulkan: improve im2col (#11826)

* vulkan: improve im2col performance

3 months agocmake: Fix ggml backend dependencies and installation (#11818)
Vladimir Vuksanovic [Thu, 27 Feb 2025 07:42:48 +0000 (08:42 +0100)]
cmake: Fix ggml backend dependencies and installation (#11818)

* Fix dependencies between ggml and backends

ggml backends link only to ggml-base and ggml links to all backends.

* Fix installation of ggml backends

Set up GNUInstallDirs before setting the installation directory of ggml backends

4 months agollava : add struct for FFI bindgen (#12079)
Ting Lou [Wed, 26 Feb 2025 14:26:52 +0000 (22:26 +0800)]
llava : add struct for FFI bindgen (#12079)

* add struct for FFI bindgen

* Apply suggestions from code review

---------

Co-authored-by: Xuan-Son Nguyen <redacted>
4 months agoRefactor gguf scripts to improve metadata handling (#11909) gguf-v0.16.0
Sigbjørn Skjæret [Wed, 26 Feb 2025 13:04:48 +0000 (14:04 +0100)]
Refactor gguf scripts to improve metadata handling (#11909)

* Refactor gguf scripts to improve metadata handling

Added contents method to ReaderField class
Added endianess property to GGUFReader class

* update scripts

* fix import

* remove unused import

* attempt to work around flake and pyright errors

* second attempt

* give up, ignore type

* bump version

* apply newbyteorder fixes

4 months agogguf-py: enable reading non-native endian files (#12081)
Aleksei Nikiforov [Wed, 26 Feb 2025 11:39:27 +0000 (12:39 +0100)]
gguf-py: enable reading non-native endian files (#12081)

Currently self.byte_order is never used.
Actually use it to byteswap read data to
allow reading big endian files on little endian systems
and vice versa.

Now it's possible to convert a little-endian model
into a big-endian model and back
on a little-endian system.

4 months agoreadme : update infra list (#9096)
Kante Yin [Wed, 26 Feb 2025 07:49:36 +0000 (15:49 +0800)]
readme : update infra list (#9096)

Signed-off-by: kerthcet <redacted>
4 months agodocs: add docs/function-calling.md to lighten server/README.md's plight (#12069)
Olivier Chafik [Tue, 25 Feb 2025 18:52:56 +0000 (18:52 +0000)]
docs: add docs/function-calling.md to lighten server/README.md's plight (#12069)

4 months agovulkan: fix assertion when qy_needs_dequant (#12068)
Jeff Bolz [Tue, 25 Feb 2025 15:30:21 +0000 (09:30 -0600)]
vulkan: fix assertion when qy_needs_dequant (#12068)

Looks like a copy/paste bug from qx_needs_dequant.

4 months agoserver: handle echo=false on /v1/completions (#12060)
rhjdvsgsgks [Tue, 25 Feb 2025 11:52:52 +0000 (11:52 +0000)]
server: handle echo=false on /v1/completions (#12060)
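
A request sketch (host/port and payload values are placeholders):
```console
$ curl http://127.0.0.1:8080/v1/completions -H "Content-Type: application/json" \
    -d '{"prompt": "Hello", "max_tokens": 8, "echo": false}'
```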

4 months agoadd OP sigmoid (#12056)
Judd [Tue, 25 Feb 2025 11:32:20 +0000 (19:32 +0800)]
add OP sigmoid (#12056)

Co-authored-by: Judd <redacted>
4 months agoggml-cpu: Fix build with sve (#12059)
Molly Sophia [Tue, 25 Feb 2025 11:28:22 +0000 (19:28 +0800)]
ggml-cpu: Fix build with sve (#12059)

* ggml-cpu: Fix build with sve

Signed-off-by: Molly Sophia <redacted>
* ggml-cpu: Remove unused variable in sve q3_k vec dot

Signed-off-by: Molly Sophia <redacted>
---------

Signed-off-by: Molly Sophia <redacted>
4 months agovulkan: implement more backpropagation operators (#11914)
Rémy O [Tue, 25 Feb 2025 11:04:45 +0000 (12:04 +0100)]
vulkan: implement more backpropagation operators (#11914)

* vulkan: implement GGML_OP_ROPE_BACK

* vulkan: implement GGML_OP_RMS_NORM_BACK

* vulkan: implement GGML_OP_SILU_BACK

* vulkan: implement GGML_OP_SOFTMAX_BACK

4 months agoserver: support add_generation_prompt query param (#12062)
Olivier Chafik [Tue, 25 Feb 2025 10:40:22 +0000 (10:40 +0000)]
server: support add_generation_prompt query param (#12062)

4 months agoAdd Doc for Converting Granite Vision -> GGUF (#12006)
Alex Brooks [Tue, 25 Feb 2025 09:46:05 +0000 (02:46 -0700)]
Add Doc for Converting Granite Vision -> GGUF (#12006)

* Add example docs for granite vision

Signed-off-by: Alex-Brooks <redacted>
4 months agollama : expose llama_model_n_head_kv in the API (#11997)
Vitali Lovich [Tue, 25 Feb 2025 09:29:33 +0000 (01:29 -0800)]
llama : expose llama_model_n_head_kv in the API (#11997)

It's useful to be able to have this from the library layer as it's a key
parameter of the model (e.g. to figure out how much KV cache memory is
needed).

4 months agometal : copy kernels for quant to F32/F16 conversions (#12017)
Gian-Carlo Pascutto [Tue, 25 Feb 2025 09:27:58 +0000 (10:27 +0100)]
metal : copy kernels for quant to F32/F16 conversions (#12017)

metal: use dequantize_q templates

---------

Co-authored-by: Georgi Gerganov <redacted>
4 months agoopencl: fix for small models (#11950)
lhez [Mon, 24 Feb 2025 21:47:07 +0000 (13:47 -0800)]
opencl: fix for small models (#11950)

* opencl: fix small shape gemv, remove unused extensions

* opencl: fix `transpose_16`, `dump_tensor`, enforce subgroup size

* opencl: fix for token length < 4

* opencl: use wave size of 64 for all Adreno GPUs

---------

Co-authored-by: Shawn Gu <redacted>
Co-authored-by: Skyler Szot <redacted>
4 months agollava : Add Granite Vision Support (#11794)
Alex Brooks [Mon, 24 Feb 2025 16:09:51 +0000 (09:09 -0700)]
llava : Add Granite Vision Support (#11794)

* Add super wip scripts for multimodal granite gguf

Signed-off-by: Alex-Brooks <redacted>
* Add example for converting mmgranite to gguf

Signed-off-by: Alex-Brooks <redacted>
* remove hardcoded path

Signed-off-by: Alex-Brooks <redacted>
* Add vision feature layer to gguf params

Signed-off-by: Alex-Brooks <redacted>
* Clean up llava surgery and remove name substitution hacks

Signed-off-by: Alex-Brooks <redacted>
* Add transformers llava next tensor name mapping

Signed-off-by: Alex-Brooks <redacted>
* Make siglip / openclip mutually exclusive

Signed-off-by: Alex-Brooks <redacted>
* Fix projector linear substitution

Signed-off-by: Alex-Brooks <redacted>
* Fix linear 2 substitution index

Signed-off-by: Alex-Brooks <redacted>
* Increase max flattened gridpoints to 64

Signed-off-by: Alex-Brooks <redacted>
* Fix hardcoded concat for multiple feature layers

Signed-off-by: Alex-Brooks <redacted>
* Pull vision feature layers out of gguf keys

Signed-off-by: Alex-Brooks <redacted>
* fix num gridpoints and use all layers

Signed-off-by: Alex-Brooks <redacted>
* Avoid dropping last image encoder layer in llava models

Signed-off-by: Alex-Brooks <redacted>
* Use 10 for max number of patches

Signed-off-by: Alex-Brooks <redacted>
* Standardize vision feature layers

Signed-off-by: Alex-Brooks <redacted>
* Cleanup logs

Signed-off-by: Alex-Brooks <redacted>
* Update comment for vision feature layer init

Signed-off-by: Alex-Brooks <redacted>
* Update notes for alternative to legacy llm conversion script

Signed-off-by: Alex-Brooks <redacted>
* Fix notes rendering

Signed-off-by: Alex-Brooks <redacted>
* Add v prefix to vision feature layer log

Signed-off-by: Alex-Brooks <redacted>
* Use current defaults for feature layer

Signed-off-by: Alex-Brooks <redacted>
* Use constant for max gridpoints / feat layers, style fixes

Signed-off-by: Alex-Brooks <redacted>
* clarify non-negative feature layers

Signed-off-by: Alex-Brooks <redacted>
* Remove CLIP_API from func signature

Signed-off-by: Alex-Brooks <redacted>
* USE MAX_IMAGE_FEATURE_LAYERS const in layer calc

Signed-off-by: Alex-Brooks <redacted>
* Clarify feature layers are non negative ints and not uint

Signed-off-by: Alex-Brooks <redacted>
* Fix condition for reading feature layers

Signed-off-by: Alex-Brooks <redacted>
* pop last llava layer when feature layers are unset

Signed-off-by: Alex-Brooks <redacted>
* Fix unset vision layer 0

Signed-off-by: Alex-Brooks <redacted>
* Update examples/llava/clip.cpp

Co-authored-by: Xuan-Son Nguyen <redacted>
* Reenable assertion for out of bounds get_rows

Signed-off-by: Alex-Brooks <redacted>
* Use std vector for gridpoints and feature layers

Signed-off-by: Alex-Brooks <redacted>
* Calculate max feature layer at load time

Signed-off-by: Alex-Brooks <redacted>
* Include base patch for granite vision allocation

Signed-off-by: Alex-Brooks <redacted>
* Fix trailing whitespace

Signed-off-by: Alex-Brooks <redacted>
* Add max num patches = 10 back for minicpmv

Signed-off-by: Alex-Brooks <redacted>
* Use unordered set to store feature layers

Co-authored-by: Xuan-Son Nguyen <redacted>
Signed-off-by: Alex-Brooks <redacted>
* Use max feature layer for postnorm

Signed-off-by: Alex-Brooks <redacted>
* Apply suggestions from code review

---------

Signed-off-by: Alex-Brooks <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
4 months ago[SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035)
Neo Zhang Jianyu [Mon, 24 Feb 2025 14:33:23 +0000 (22:33 +0800)]
[SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035)

* optimize performance by reordering for Intel GPU

* detect hw type and save opt feature, and print opt feature

* correct name

* optimize the graph once when computing the graph; record the opt status in tensor->extra; make CI pass

* add env variable GGML_SYCL_DISABLE_OPT for debug

* use syclex::architecture to replace the custom hw define; update the guide for GGML_SYCL_DISABLE_OPT

* add performance data

* move getrows functions to separate files

* fix global variables

---------

Co-authored-by: arthw <redacted>
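
A debugging sketch using the environment variable added here (model path is a placeholder):
```console
$ GGML_SYCL_DISABLE_OPT=1 ./build/bin/llama-cli -m model.gguf -p "hello"
```
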
4 months agogguf_convert_endian.py: implement byteswapping for q4_k and q6_k (#11349)
Aleksei Nikiforov [Mon, 24 Feb 2025 11:27:01 +0000 (12:27 +0100)]
gguf_convert_endian.py: implement byteswapping for q4_k and q6_k (#11349)
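
A usage sketch (the script path and argument order are assumptions, not taken from the commit):
```console
$ python3 gguf-py/gguf/scripts/gguf_convert_endian.py model.gguf big
```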

4 months agoSYCL: Fix GGML_SYCL_DEBUG macro (#11995)
Akarshan Biswas [Mon, 24 Feb 2025 10:18:25 +0000 (15:48 +0530)]
SYCL: Fix GGML_SYCL_DEBUG macro (#11995)

4 months agorun: allow to customize prompt by env var LLAMA_PROMPT_PREFIX (#12041)
Florent BENOIT [Sun, 23 Feb 2025 17:15:51 +0000 (18:15 +0100)]
run: allow to customize prompt by env var LLAMA_PROMPT_PREFIX (#12041)

Signed-off-by: Florent Benoit <redacted>
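
A usage sketch (prefix and model name are placeholders):
```console
$ LLAMA_PROMPT_PREFIX="llama> " llama-run llama3
```
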
4 months agoSome llama-run cleanups (#11973)
Eric Curtin [Sun, 23 Feb 2025 13:14:32 +0000 (13:14 +0000)]
Some llama-run cleanups (#11973)

Use the consolidated open function call from the File class. Change
read_all to to_string(). Remove exclusive locking; the intent of
that lock is to avoid multiple processes writing to the same file,
which is not an issue for readers, although we may want to consider
adding a shared lock. Remove passing nullptr as a reference;
references are never supposed to be null. clang-format the code
for consistent styling.

Signed-off-by: Eric Curtin <redacted>
4 months agoggml-cpu: Support s390x SIMD Instruction Set (#12019)
Aaron Teo [Sat, 22 Feb 2025 21:39:24 +0000 (05:39 +0800)]
ggml-cpu: Support s390x SIMD Instruction Set (#12019)

* ggml: add s390x ARCH_FLAGS for compilation

Signed-off-by: Aaron Teo <redacted>
* ggml: add SIMD for s390x using vector intrinsics

SIMD is activated for:
* ggml_vec_dot_f32
* ggml_vec_dot_f16
* ggml_vec_mad_f32
* ggml_vec_mad_f16
* ggml_vec_mad_f32_unroll
* ggml_vec_scale_f32
* ggml_vec_scale_f16

SIMD is NOT activated for:
* ggml_vec_dot_f16_unroll (pending bugfix)

Signed-off-by: Aaron Teo <redacted>
* ggml: fix missing escape character in GGML_F32x4_REDUCE

Signed-off-by: Aaron Teo <redacted>
* ggml: add temporary patch for GGML_F32_ARR and GGML_F16_ARR

Signed-off-by: Aaron Teo <redacted>
* ggml: fix s390x GGML_F32x4_REDUCE

Signed-off-by: Aaron Teo <redacted>
* ggml: full SIMD activation for F32,F16 s390x

Signed-off-by: Aaron Teo <redacted>
* ggml: add option to disable s390x VXE/VXE2

Signed-off-by: Aaron Teo <redacted>
* ggml: change vecintrin.h include to ggml-cpu-impl

* add __VXE__ and __VXE2__ macros

Signed-off-by: Aaron Teo <redacted>
* cmake: add s390x target detection for VX/VXE/VXE2

Signed-off-by: Aaron Teo <redacted>
* ggml: move s390x vector intrinsics to ggml-cpu-impl.h

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x Q8_0 SIMD

Signed-off-by: Aaron Teo <redacted>
* ggml: correct documentation for Q8_0

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x reduce code complexity Q8_0

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x bugfix typo Q8_0

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activated for Q4_1

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x inline vec_reve

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for Q4_0

Signed-off-by: Aaron Teo <redacted>
* ggml: add VXE backend feature

Signed-off-by: Aaron Teo <redacted>
* ggml: remove test.py

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for quantize_row_q8_0

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for quantize_row_q8_1

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for iq4_xs

Signed-off-by: Aaron Teo <redacted>
* ggml: bugfix iq4_xs

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for iq4_nl

Signed-off-by: Aaron Teo <redacted>
* ggml: add float, double, and long vector data type

Signed-off-by: Aaron Teo <redacted>
* ggml: clean up iq4_xs SIMD

Signed-off-by: Aaron Teo <redacted>
* ggml: fix improper use of restrict keyword

Signed-off-by: Aaron Teo <redacted>
* ggml: update warning message for ggml_vec_tbl

Signed-off-by: Aaron Teo <redacted>
* ggml: untested implementation of ggml_vec_dot_iq2_xxs_q8_K

Signed-off-by: Aaron Teo <redacted>
* ggml: update ggml_vec_dot_q4_1_q8_1 to use typedefs

Signed-off-by: Aaron Teo <redacted>
* ggml: switch to restrict for iq4_nl

Signed-off-by: Aaron Teo <redacted>
* ggml: slight dot product speed improvement for q4_1_q8_1

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for q6_K

Signed-off-by: Aaron Teo <redacted>
* ggml: add missing `_t` to ggml_int8x16x4_t

Signed-off-by: Aaron Teo <redacted>
* ggml: fix missing `_t` for ggml_vec_xl_s8x4

Signed-off-by: Aaron Teo <redacted>
* ggml: fix more missing `_t`

Signed-off-by: Aaron Teo <redacted>
* ggml: add unroll and prefetch to Q8_0

increase of 3.86% for prompt processing and 32.22% for token generation

Signed-off-by: Aaron Teo <redacted>
* ggml: patch Q8_0 to use proper vector sizes

Signed-off-by: Aaron Teo <redacted>
* ggml: optimise Q8_0 dot prod compute kernel further

Signed-off-by: Aaron Teo <redacted>
* ggml: add unroll and prefetch to Q4_1

Signed-off-by: Aaron Teo <redacted>
* ggml: refactor Q6_K variable naming for readability

Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q6_K typos

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for Q5_K

Signed-off-by: Aaron Teo <redacted>
* ggml: fix wrong char*x16_t naming

Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q5_K y0 wrong signedness

Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q5_K invalid uchar type

Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q5_K invalid uchar type

Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for Q4_K

Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q4_K invalid vector intrinsics

Signed-off-by: Aaron Teo <redacted>
* ggml: simplify ggml_padd_s16 compute kernel

Signed-off-by: Aaron Teo <redacted>
* ggml: correct ggml-cpu vxe wording

Signed-off-by: Aaron Teo <redacted>
* ggml: change ggml_aligned_malloc alignment to 256

256 is the cache line size for s390x platforms

Signed-off-by: Aaron Teo <redacted>
* ggml: resolve pr merge via cherry-pick 225bbbf

Signed-off-by: Aaron Teo <redacted>
* ggml : fix LoongArch compile error with 128-bit SIMD (#11701)

* ggml: resolve pr merge via cherry-pick 4571953

Signed-off-by: Aaron Teo <redacted>
* ggml: cmake remove fork when determining s390x machine type

thank you @ericcurtin

Signed-off-by: Aaron Teo <redacted>
---------

Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Jinyang He <redacted>
Co-authored-by: junchao-zhao <redacted>
4 months agoCUDA: add option to compile without FlashAttention (#12025)
Johannes Gäßler [Sat, 22 Feb 2025 19:44:34 +0000 (20:44 +0100)]
CUDA: add option to compile without FlashAttention (#12025)

4 months agollava: build clip image from pixels (#11999)
Ting Lou [Sat, 22 Feb 2025 14:28:28 +0000 (22:28 +0800)]
llava: build clip image from pixels (#11999)

* llava: export function `clip_build_img_from_pixels` to build image from pixels decoded by other libraries instead of stb_image.h for better performance

* Apply suggestions from code review

---------

Co-authored-by: Xuan-Son Nguyen <redacted>
4 months agoci : fix arm upload artifacts (#12024)
Georgi Gerganov [Sat, 22 Feb 2025 13:03:00 +0000 (15:03 +0200)]
ci : fix arm upload artifacts (#12024)

* ci : fix arm upload artifacts

* cont : fix archive name to use matrix

4 months agoCUDA: optimize FA for GQA + large batches (#12014)
Johannes Gäßler [Sat, 22 Feb 2025 11:20:17 +0000 (12:20 +0100)]
CUDA: optimize FA for GQA + large batches (#12014)

4 months agoci : Build on Github-hosted arm64 runners (#12009)
Rohanjames1997 [Sat, 22 Feb 2025 10:48:57 +0000 (04:48 -0600)]
ci : Build on Github-hosted arm64 runners (#12009)

4 months agoserver : disable Nagle's algorithm (#12020)
Georgi Gerganov [Sat, 22 Feb 2025 10:46:31 +0000 (12:46 +0200)]
server : disable Nagle's algorithm (#12020)

4 months agocuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)
Gian-Carlo Pascutto [Sat, 22 Feb 2025 08:43:24 +0000 (09:43 +0100)]
cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)

4 months agollama.swiftui : add "Done" dismiss button to help view (#11998)
Daniel Bevenius [Sat, 22 Feb 2025 05:33:29 +0000 (06:33 +0100)]
llama.swiftui : add "Done" dismiss button to help view (#11998)

The commit updates the help view in the llama.swiftui example to use a
NavigationView and a Done button to dismiss the help view.

The motivation for this is that without this change there is no way to
dismiss the help view.

4 months agollama : skip loading unused tensors (#12004)
Georgi Gerganov [Fri, 21 Feb 2025 16:33:18 +0000 (18:33 +0200)]
llama : skip loading unused tensors (#12004)

* llama : assign unknown/unused tensors to host buffer type

ggml-ci

* llama : skip unused tensors

ggml-ci

4 months agodoc: update contributing guidelines [no ci] (#11969)
Johannes Gäßler [Fri, 21 Feb 2025 11:51:25 +0000 (12:51 +0100)]
doc: update contributing guidelines [no ci] (#11969)

4 months agoCUDA: correct the lowest Maxwell supported by CUDA 12 (#11984)
PureJourney [Fri, 21 Feb 2025 11:21:05 +0000 (19:21 +0800)]
CUDA: correct the lowest Maxwell supported by CUDA 12 (#11984)

* CUDA: correct the lowest Maxwell supported by CUDA 12

---------

Co-authored-by: Johannes Gäßler <redacted>
4 months agoMUSA: support ARM64 and enable dp4a etc. (#11843)
Bodhi [Fri, 21 Feb 2025 07:46:23 +0000 (15:46 +0800)]
MUSA: support ARM64 and enable dp4a etc. (#11843)

* MUSA: support ARM64 and enable __dp4a etc.

* fix cross entropy loss op for musa

* update

* add cc info log for musa

* add comment for the MUSA .cc calculation block

---------

Co-authored-by: Bodhi Hu <redacted>
4 months agoclip : fix visual encoders with no CLS (#11982)
Alex Brooks [Fri, 21 Feb 2025 06:11:03 +0000 (23:11 -0700)]
clip : fix visual encoders with no CLS (#11982)

Signed-off-by: Alex-Brooks <redacted>
4 months agoserver (webui): Fix Premature Submission During IME Conversion (#11971)
momonga [Thu, 20 Feb 2025 18:43:22 +0000 (03:43 +0900)]
server (webui): Fix Premature Submission During IME Conversion (#11971)

* fix skip ime composing

* fix npm rebuild

* fix warn

---------

Co-authored-by: momonga <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
4 months agoggml-cpu: Add CPU backend support for KleidiAI library (#11390)
Charles Xu [Thu, 20 Feb 2025 13:06:51 +0000 (14:06 +0100)]
ggml-cpu: Add CPU backend support for KleidiAI library (#11390)

* ggml-cpu: Add CPU backend support for KleidiAI library

* Add environmental variable GGML_KLEIDIAI_SME

* Add support for multithread LHS conversion

* Switch kernel selection order to dotprod and i8mm

* updates for review comments

* More updates for review comments

* Reorganize and rename KleidiAI files

* Move ggml-cpu-traits.h to source file

* Update cmake for SME build and add alignment for SME

* Remove append GGML_USE_CPU_KLEIDIAI to the GGML_CDEF_PUBLIC list
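
A run sketch, assuming GGML_KLEIDIAI_SME=1 enables the SME kernel path (model path is a placeholder):
```console
$ GGML_KLEIDIAI_SME=1 ./build/bin/llama-cli -m model.gguf -p "hello"
```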

4 months agoggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (#11917)
Prashant Vithule [Thu, 20 Feb 2025 10:08:32 +0000 (15:38 +0530)]
ggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (#11917)

* Added SVE Implementation for Q3_K Kernel in ggml-cpu-quants.c file

* Improved formatting of code in ggml-cpu-quants.c file

* style : minor fixes

* style : less whitespaces

* style : ptr spacing

---------

Co-authored-by: vithulep <redacted>
Co-authored-by: Georgi Gerganov <redacted>
4 months agorun : add --chat-template-file (#11961)
Michael Engel [Thu, 20 Feb 2025 08:35:11 +0000 (09:35 +0100)]
run : add --chat-template-file (#11961)

Relates to: https://github.com/ggml-org/llama.cpp/issues/11178

Added a --chat-template-file CLI option to llama-run. If specified, the file
is read and its content is passed to common_chat_templates_from_model to
override the model's chat template.

Signed-off-by: Michael Engel <redacted>
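
A usage sketch (template file, model, and prompt are placeholders):
```console
$ llama-run --chat-template-file my-template.jinja model.gguf "Hello"
```
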
4 months agodoc: add links to ggml examples [no ci] (#11958)
Johannes Gäßler [Wed, 19 Feb 2025 19:45:17 +0000 (20:45 +0100)]
doc: add links to ggml examples [no ci] (#11958)

4 months agocommon : add llama.vim preset for Qwen2.5 Coder (#11945)
Daniel Bevenius [Wed, 19 Feb 2025 11:29:52 +0000 (12:29 +0100)]
common : add llama.vim preset for Qwen2.5 Coder (#11945)

This commit adds a preset for llama.vim to use the default Qwen 2.5
Coder models.

The motivation for this change is to make it easier to start a server
suitable to be used with the llama.vim plugin. For example, the server
can be started with a command like the following:
```console
$ llama-server --fim-qwen-1.5b-default
```

Refs: https://github.com/ggml-org/llama.cpp/issues/10932

4 months agospeculative : update default params (#11954)
Georgi Gerganov [Wed, 19 Feb 2025 11:29:42 +0000 (13:29 +0200)]
speculative : update default params (#11954)

* speculative : update default params

* speculative : do not discard the last drafted token

4 months agollama : fix indentation in llama-grammar [no ci] (#11943)
Daniel Bevenius [Wed, 19 Feb 2025 05:16:23 +0000 (06:16 +0100)]
llama : fix indentation in llama-grammar [no ci] (#11943)

This commit adjusts the indentation for the functions `parse_sequence`
and `parse_rule` in src/llama-grammar.cpp.

The motivation is consistency and improved readability.

4 months agoserver : (webui) Enable communication with parent html (if webui is in iframe) (...
igardev [Tue, 18 Feb 2025 22:01:44 +0000 (00:01 +0200)]
server : (webui) Enable communication with parent html (if webui is in iframe) (#11940)

* Webui: Enable communication with parent html (if webui is in iframe):
- Listens for the "setText" command from the parent with "text" and "context" fields. "text" is set in inputMsg, "context" is used as hidden context on the following requests to the llama.cpp server
- On pressing the Escape key, sends the command "escapePressed" to the parent

Example handling from the parent html side:
- Send command "setText" from parent html to webui in iframe:

  const iframe = document.getElementById('askAiIframe');
  if (iframe) {
      iframe.contentWindow.postMessage({ command: 'setText', text: text, context: context }, '*');
  }

- Listen for Escape key from webui on parent html:

  // Listen for escape key event in the iframe
  window.addEventListener('keydown', (event) => {
      if (event.key === 'Escape') {
          // Process case when Escape is pressed inside webui
      }
  });

* Move the extraContext from storage to app.context.

* Fix formatting.

* add Message.extra

* format + build

* MessageExtraContext

* build

* fix display

* rm console.log

---------

Co-authored-by: igardev <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
4 months agotool-call: refactor common chat / tool-call api (+ tests / fixes) (#11900)
Olivier Chafik [Tue, 18 Feb 2025 18:03:23 +0000 (18:03 +0000)]
tool-call: refactor common chat / tool-call api (+ tests / fixes) (#11900)

* tool-call refactoring: moved common_chat_* to chat.h, common_chat_templates_init return a unique_ptr to opaque type

* addressed clang-tidy lints in [test-]chat.*

* rm minja deps from util & common & move it to common/minja/

* add name & tool_call_id to common_chat_msg

* add common_chat_tool

* added json <-> tools, msgs conversions to chat.h

* fix double bos/eos jinja avoidance hack (was preventing inner bos/eos tokens)

* fix deepseek r1 slow test (no longer <think> opening w/ new template)

* allow empty tools w/ auto + grammar

* fix & test server grammar & json_schema params w/ & w/o --jinja

4 months agoserver : add TEI API format for /rerank endpoint (#11942)
Xuan-Son Nguyen [Tue, 18 Feb 2025 13:21:41 +0000 (14:21 +0100)]
server : add TEI API format for /rerank endpoint (#11942)

* server : add TEI API format for /rerank endpoint

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* fix

* also gitignore examples/server/*.gz.hpp

---------

Co-authored-by: Georgi Gerganov <redacted>
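
A request sketch for the TEI-style format, which passes a "texts" array rather than "documents" (host/port and payload are placeholders):
```console
$ curl http://127.0.0.1:8080/rerank -H "Content-Type: application/json" \
    -d '{"query": "What is a panda?", "texts": ["hi", "a panda is a bear native to China"]}'
```
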
4 months agoscripts: corrected encoding when getting chat template (#11866) (#11907)
MoonRide303 [Tue, 18 Feb 2025 09:30:16 +0000 (10:30 +0100)]
scripts: corrected encoding when getting chat template (#11866) (#11907)

Signed-off-by: MoonRide303 <redacted>
4 months agodocs : Fix duplicated file extension in test command (#11935)
xiaobing318 [Tue, 18 Feb 2025 09:12:49 +0000 (17:12 +0800)]
docs : Fix duplicated file extension in test command (#11935)

This commit fixes an issue in the llama.cpp project where the command for testing the llama-server contained a duplicated file extension. The original command was:

  ./tests.sh unit/test_chat_completion.py.py -v -x

It has been corrected to:

  ./tests.sh unit/test_chat_completion.py -v -x

This change ensures that the test script correctly locates and executes the intended test file, preventing test failures due to an incorrect file name.

4 months agoCUDA: use async data loading for FlashAttention (#11894)
Johannes Gäßler [Mon, 17 Feb 2025 13:03:24 +0000 (14:03 +0100)]
CUDA: use async data loading for FlashAttention (#11894)

* CUDA: use async data loading for FlashAttention

---------

Co-authored-by: Diego Devesa <redacted>
4 months agoupdate release requirements (#11897)
Eve [Mon, 17 Feb 2025 11:20:23 +0000 (11:20 +0000)]
update release requirements (#11897)

4 months agoserver : fix divide-by-zero in metrics reporting (#11915)
Antoine Viallon [Mon, 17 Feb 2025 10:25:12 +0000 (11:25 +0100)]
server : fix divide-by-zero in metrics reporting (#11915)

4 months agovulkan: implement several ops relevant for ggml_opt (#11769)
Rémy O [Mon, 17 Feb 2025 06:55:57 +0000 (07:55 +0100)]
vulkan: implement several ops relevant for ggml_opt (#11769)

* vulkan: support memset_tensor

* vulkan: support GGML_OP_SUM

* vulkan: implement GGML_OP_ARGMAX

* vulkan: implement GGML_OP_SUB

* vulkan: implement GGML_OP_COUNT_EQUAL

* vulkan: implement GGML_OP_OPT_STEP_ADAMW

* vulkan: fix check_results RWKV_WKV6 crash and memory leaks

* vulkan: implement GGML_OP_REPEAT_BACK

* tests: remove invalid test-backend-ops REPEAT_BACK tests

* vulkan: fix COUNT_EQUAL memset using a fillBuffer command

4 months agoserver : bump httplib to 0.19.0 (#11908)
Xuan-Son Nguyen [Sun, 16 Feb 2025 17:11:22 +0000 (18:11 +0100)]
server : bump httplib to 0.19.0 (#11908)

4 months agocommon : Fix a typo in help (#11899)
standby24x7 [Sun, 16 Feb 2025 09:51:13 +0000 (18:51 +0900)]
common : Fix a typo in help (#11899)

This patch fixes a typo in command help.
prefx -> prefix

Signed-off-by: Masanari Iida <redacted>
4 months agoci : fix (again) arm64 build fails (#11895)
Xuan-Son Nguyen [Sun, 16 Feb 2025 09:36:39 +0000 (10:36 +0100)]
ci : fix (again) arm64 build fails (#11895)

* docker : attempt fixing arm64 build on ci

* qemu v7.0.0-28

4 months agovulkan: support multi/vision rope, and noncontiguous rope (#11902)
Jeff Bolz [Sun, 16 Feb 2025 07:52:23 +0000 (01:52 -0600)]
vulkan: support multi/vision rope, and noncontiguous rope (#11902)

4 months agometal : fix the crash caused by the lack of residency set support on Intel Macs....
Hale Chan [Sun, 16 Feb 2025 06:50:26 +0000 (14:50 +0800)]
metal : fix the crash caused by the lack of residency set support on Intel Macs. (#11904)

4 months agoscripts: fix compare-llama-bench commit hash logic (#11891)
Johannes Gäßler [Sat, 15 Feb 2025 19:23:22 +0000 (20:23 +0100)]
scripts: fix compare-llama-bench commit hash logic (#11891)

4 months agoexamples: fix typo in imatrix/README.md (#11884)
708-145 [Sat, 15 Feb 2025 19:03:30 +0000 (20:03 +0100)]
examples: fix typo in imatrix/README.md (#11884)

* simple typo fixed

* Update examples/imatrix/README.md

---------

Co-authored-by: Tobias Bergmann <redacted>
Co-authored-by: Georgi Gerganov <redacted>
4 months agometal : optimize dequant q6_K kernel (#11892)
Adrian Kretz [Sat, 15 Feb 2025 18:39:20 +0000 (19:39 +0100)]
metal : optimize dequant q6_K kernel (#11892)

4 months agoreadme : add notice about new package registry (#11890)
Georgi Gerganov [Sat, 15 Feb 2025 18:29:56 +0000 (20:29 +0200)]
readme : add notice about new package registry (#11890)

* readme : add notice about new package registry

* cont : fix whitespace

4 months agorepo : update links to new url (#11886)
Georgi Gerganov [Sat, 15 Feb 2025 14:40:57 +0000 (16:40 +0200)]
repo : update links to new url (#11886)

* repo : update links to new url

ggml-ci

* cont : more urls

ggml-ci

4 months agoserver: fix type promotion typo causing crashes w/ --jinja w/o tools (#11880)
Olivier Chafik [Sat, 15 Feb 2025 10:11:36 +0000 (10:11 +0000)]
server: fix type promotion typo causing crashes w/ --jinja w/o tools (#11880)

4 months agovulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)
Rémy O [Sat, 15 Feb 2025 08:01:40 +0000 (09:01 +0100)]
vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)

* vulkan: initial support for IQ1_S and IQ1_M quantizations

* vulkan: define MMV kernels for IQ1 quantizations

* devops: increase timeout of Vulkan tests again

* vulkan: simplify ifdef for init_iq_shmem

4 months agollguidance build fixes for Windows (#11664) upstream/0.0.4719
Michał Moskal [Fri, 14 Feb 2025 20:46:08 +0000 (12:46 -0800)]
llguidance build fixes for Windows (#11664)

* setup windows linking for llguidance; thanks @phil-scott-78

* add build instructions for windows and update script link

* change VS Community link from DE to EN

* whitespace fix

4 months agoopencl: Fix rope and softmax (#11833)
lhez [Fri, 14 Feb 2025 19:12:23 +0000 (11:12 -0800)]
opencl: Fix rope and softmax (#11833)

* opencl: fix `ROPE`

* opencl: fix `SOFT_MAX`

* Add fp16 variant

* opencl: enforce subgroup size for `soft_max`

4 months agocuda : add ampere to the list of default architectures (#11870)
Diego Devesa [Fri, 14 Feb 2025 14:33:52 +0000 (15:33 +0100)]
cuda : add ampere to the list of default architectures (#11870)

4 months agodocker : drop to CUDA 12.4 (#11869)
Georgi Gerganov [Fri, 14 Feb 2025 12:48:40 +0000 (14:48 +0200)]
docker : drop to CUDA 12.4 (#11869)

* docker : drop to CUDA 12.4

* docker : update readme [no ci]

4 months agollama : add completion for --chat-template-file (#11860)
Daniel Bevenius [Fri, 14 Feb 2025 10:16:56 +0000 (11:16 +0100)]
llama : add completion for --chat-template-file (#11860)

This commit adds completion for `--chat-template-file`, enabling only
`.jinja` files to be displayed as completions.

Example usage:
```console
$ ./build/bin/llama-cli --chat-template-file models/templates/<TAB>
models/templates/CohereForAI-c4ai-command-r7b-12-2024-tool_use.jinja
models/templates/CohereForAI-c4ai-command-r-plus-tool_use.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
models/templates/fireworks-ai-llama-3-firefunction-v2.jinja
models/templates/google-gemma-2-2b-it.jinja
models/templates/llama-cpp-deepseek-r1.jinja
models/templates/meetkai-functionary-medium-v3.1.jinja
models/templates/meetkai-functionary-medium-v3.2.jinja
models/templates/meta-llama-Llama-3.1-8B-Instruct.jinja
models/templates/meta-llama-Llama-3.2-3B-Instruct.jinja
models/templates/meta-llama-Llama-3.3-70B-Instruct.jinja
models/templates/microsoft-Phi-3.5-mini-instruct.jinja
models/templates/mistralai-Mistral-Nemo-Instruct-2407.jinja
models/templates/NousResearch-Hermes-2-Pro-Llama-3-8B-tool_use.jinja
models/templates/NousResearch-Hermes-3-Llama-3.1-8B-tool_use.jinja
models/templates/Qwen-Qwen2.5-7B-Instruct.jinja
```
This is not limited to the models/templates directory, it can be used
anywhere in the filesystem, the above is just an example.

4 months agoggml: optimize some vec dot functions for LoongArch ASX (#11842)
Jinyang He [Fri, 14 Feb 2025 08:54:27 +0000 (16:54 +0800)]
ggml: optimize some vec dot functions for LoongArch ASX (#11842)

* Optimize ggml_vec_dot_q3_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q4_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q6_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q5_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q2_K_q8_K for LoongArch ASX

* Optimize mul_sum_i8_pairs_float for LoongArch ASX

* Optimize ggml_vec_dot_iq4_xs_q8_K for LoongArch ASX

4 months agovulkan: linux builds + small subgroup size fixes (#11767)
Eve [Fri, 14 Feb 2025 02:59:40 +0000 (02:59 +0000)]
vulkan: linux builds + small subgroup size fixes (#11767)

* mm subgroup size

* upload vulkan x86 builds

4 months agollama-bench : fix unexpected global variable initialize sequence issue (#11832)
theraininsky [Fri, 14 Feb 2025 01:13:43 +0000 (09:13 +0800)]
llama-bench : fix unexpected global variable initialize sequence issue (#11832)

* llama-bench : fix unexpected global variable initialize sequence issue

* Update examples/llama-bench/llama-bench.cpp

---------

Co-authored-by: Diego Devesa <redacted>
4 months agoreadme : minor
Georgi Gerganov [Thu, 13 Feb 2025 22:16:56 +0000 (00:16 +0200)]
readme : minor

4 months agollamafile: use member variable instead of constant for iq4nlt (#11780)
Jeffrey Morgan [Thu, 13 Feb 2025 17:05:04 +0000 (09:05 -0800)]
llamafile: use member variable instead of constant for iq4nlt (#11780)

4 months agoserver : (docs) Update wrong tool calling example (#11809)
Reza Rahemtola [Thu, 13 Feb 2025 16:22:44 +0000 (17:22 +0100)]
server : (docs) Update wrong tool calling example (#11809)

Call updated to match the tool used in the output just below, following the example in https://github.com/ggerganov/llama.cpp/pull/9639

4 months agollama : add --completion-bash option (#11846)
Daniel Bevenius [Thu, 13 Feb 2025 13:46:59 +0000 (14:46 +0100)]
llama : add --completion-bash option (#11846)

This commit adds a new option `--completion-bash` to llama.cpp which
outputs a source-able bash completion script.

The motivation for this change is to provide a more user-friendly
experience for users who use the command-line interface of llama.cpp.

This is currently only basic and all options are displayed for all llama
executables but this can be improved in the future if needed.

Example usage:
```console
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash

$ ./build/bin/llama-server --m<TAB>
--main-gpu         --mirostat         --mirostat-lr      --model            --multiline-input
--min-p            --mirostat-ent     --mlock            --model-url
```

4 months agomusa: bump MUSA SDK version to rc3.1.1 (#11822)
R0CKSTAR [Thu, 13 Feb 2025 12:28:18 +0000 (20:28 +0800)]
musa: bump MUSA SDK version to rc3.1.1 (#11822)

* musa: Update MUSA SDK version to rc3.1.1

Signed-off-by: Xiaodong Ye <redacted>
* musa: Remove workaround in PR #10042

Signed-off-by: Xiaodong Ye <redacted>
---------

Signed-off-by: Xiaodong Ye <redacted>
4 months ago`server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB...
Olivier Chafik [Thu, 13 Feb 2025 10:05:16 +0000 (10:05 +0000)]
`server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB & DeepSeek R1) unless `--reasoning-format none` (#11607)

* extract & return thoughts in reasoning_content field (unless --reasoning-format) for DeepSeek R1 & Command R7B

* tool-calls: add deepseek r1 template (models/templates/llama-cpp-deepseek-r1.jinja) + hackommodate broken official template

* tool-calls: accommodate variety of wrong tool call opening tags both R1 Qwen 32B and 7B distills like to spit out

* server/oai: ensure content is null when there are tool calls, and reasoning_content appears before content for readability

* tool-calls: add DeepSeek R1 Qwen distills to server/README.md & server tests

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
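
A server invocation sketch (model file is a placeholder):
```console
# keep raw <think> tags in content instead of a separate reasoning_content field:
$ llama-server -m DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf --jinja --reasoning-format none
```
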
4 months agosampling: add Top-nσ sampler (#11223)
Vinesh Janarthanan [Thu, 13 Feb 2025 06:45:57 +0000 (00:45 -0600)]
sampling: add Top-nσ sampler (#11223)

* initial sampling changes:

* completed top nsigma sampler implementation

* apply parameter to only llama-cli

* updated readme

* added tests and fixed nsigma impl

* cleaned up pr

* format

* format

* format

* removed commented tests

* cleanup pr and remove explicit floats

* added top-k sampler to improve performance

* changed sigma to float

* fixed string format to float

* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update common/sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp

Co-authored-by: Georgi Gerganov <redacted>
* added llama_sampler_init

---------

Co-authored-by: Georgi Gerganov <redacted>
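
A sampling sketch, assuming the CLI flag is named --top-nsigma (model path and prompt are placeholders):
```console
$ llama-cli -m model.gguf --top-nsigma 1.5 -p "Once upon a time"
```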