git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Jeff Bolz [Tue, 25 Feb 2025 15:30:21 +0000 (09:30 -0600)]
vulkan: fix assertion when qy_needs_dequant (#12068)
Looks like a copy/paste bug from qx_needs_dequant.
rhjdvsgsgks [Tue, 25 Feb 2025 11:52:52 +0000 (11:52 +0000)]
server: handle echo=false on /v1/completions (#12060)
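For context, `echo=false` on the OpenAI-compatible completions endpoint means the prompt is not echoed back in the generated text. A minimal request sketch (server address and payload are illustrative):
```console
$ curl http://localhost:8080/v1/completions -H "Content-Type: application/json" \
    -d '{"prompt": "Say hello", "max_tokens": 16, "echo": false}'
```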
Judd [Tue, 25 Feb 2025 11:32:20 +0000 (19:32 +0800)]
add OP sigmoid (#12056)
Co-authored-by: Judd <redacted>
Molly Sophia [Tue, 25 Feb 2025 11:28:22 +0000 (19:28 +0800)]
ggml-cpu: Fix build with sve (#12059)
* ggml-cpu: Fix build with sve
Signed-off-by: Molly Sophia <redacted>
* ggml-cpu: Remove unused variable in sve q3_k vec dot
Signed-off-by: Molly Sophia <redacted>
---------
Signed-off-by: Molly Sophia <redacted>
Rémy O [Tue, 25 Feb 2025 11:04:45 +0000 (12:04 +0100)]
vulkan: implement more backpropagation operators (#11914)
* vulkan: implement GGML_OP_ROPE_BACK
* vulkan: implement GGML_OP_RMS_NORM_BACK
* vulkan: implement GGML_OP_SILU_BACK
* vulkan: implement GGML_OP_SOFTMAX_BACK
Olivier Chafik [Tue, 25 Feb 2025 10:40:22 +0000 (10:40 +0000)]
server: support add_generation_prompt query param (#12062)
Alex Brooks [Tue, 25 Feb 2025 09:46:05 +0000 (02:46 -0700)]
Add Doc for Converting Granite Vision -> GGUF (#12006)
* Add example docs for granite vision
Signed-off-by: Alex-Brooks <redacted>
Vitali Lovich [Tue, 25 Feb 2025 09:29:33 +0000 (01:29 -0800)]
llama : expose llama_model_n_head_kv in the API (#11997)
It's useful to be able to have this from the library layer as it's a key
parameter of the model (e.g. to figure out how much KV cache memory is
needed).
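A sketch of how a caller might use the new accessor to estimate per-token KV cache memory (a rough estimate assuming an f16 cache and uniform head size; the exact internal accounting may differ):
```cpp
#include "llama.h"

#include <cstddef>
#include <cstdint>

// Rough per-token KV cache size: per layer, K and V each store
// n_head_kv * head_dim f16 values. An estimate, not the allocator's
// exact math.
static size_t kv_bytes_per_token(const struct llama_model * model) {
    const int32_t n_layer   = llama_model_n_layer(model);
    const int32_t n_head    = llama_model_n_head(model);
    const int32_t n_head_kv = llama_model_n_head_kv(model); // new in this PR
    const int32_t head_dim  = llama_model_n_embd(model) / n_head;
    return (size_t) n_layer * n_head_kv * head_dim * 2 /*K+V*/ * 2 /*f16 bytes*/;
}
```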
Gian-Carlo Pascutto [Tue, 25 Feb 2025 09:27:58 +0000 (10:27 +0100)]
metal : copy kernels for quant to F32/F16 conversions (#12017)
metal: use dequantize_q templates
---------
Co-authored-by: Georgi Gerganov <redacted>
lhez [Mon, 24 Feb 2025 21:47:07 +0000 (13:47 -0800)]
opencl: fix for small models (#11950)
* opencl: fix small shape gemv, remove unused extensions
* opencl: fix `transpose_16`, `dump_tensor`, enforce subgroup size
* opencl: fix for token length < 4
* opencl: use wave size of 64 for all Adreno GPUs
---------
Co-authored-by: Shawn Gu <redacted>
Co-authored-by: Skyler Szot <redacted>
Alex Brooks [Mon, 24 Feb 2025 16:09:51 +0000 (09:09 -0700)]
llava : Add Granite Vision Support (#11794)
* Add super wip scripts for multimodal granite gguf
Signed-off-by: Alex-Brooks <redacted>
* Add example for converting mmgranite to gguf
Signed-off-by: Alex-Brooks <redacted>
* remove hardcoded path
Signed-off-by: Alex-Brooks <redacted>
* Add vision feature layer to gguf params
Signed-off-by: Alex-Brooks <redacted>
* Clean up llava surgery and remove name substitution hacks
Signed-off-by: Alex-Brooks <redacted>
* Add transformers llava next tensor name mapping
Signed-off-by: Alex-Brooks <redacted>
* Make siglip / openclip mutually exclusive
Signed-off-by: Alex-Brooks <redacted>
* Fix projector linear substitution
Signed-off-by: Alex-Brooks <redacted>
* Fix linear 2 substitution index
Signed-off-by: Alex-Brooks <redacted>
* Increase max flattened gridpoints to 64
Signed-off-by: Alex-Brooks <redacted>
* Fix hardcoded concat for multiple feature layers
Signed-off-by: Alex-Brooks <redacted>
* Pull vision feature layers out of gguf keys
Signed-off-by: Alex-Brooks <redacted>
* fix num gridpoints and use all layers
Signed-off-by: Alex-Brooks <redacted>
* Avoid dropping last image encoder layer in llava models
Signed-off-by: Alex-Brooks <redacted>
* Use 10 for max number of patches
Signed-off-by: Alex-Brooks <redacted>
* Standardize vision feature layers
Signed-off-by: Alex-Brooks <redacted>
* Cleanup logs
Signed-off-by: Alex-Brooks <redacted>
* Update comment for vision feature layer init
Signed-off-by: Alex-Brooks <redacted>
* Update notes for alternative to legacy llm conversion script
Signed-off-by: Alex-Brooks <redacted>
* Fix notes rendering
Signed-off-by: Alex-Brooks <redacted>
* Add v prefix to vision feature layer log
Signed-off-by: Alex-Brooks <redacted>
* Use current defaults for feature layer
Signed-off-by: Alex-Brooks <redacted>
* Use constant for max gridpoints / feat layers, style fixes
Signed-off-by: Alex-Brooks <redacted>
* clarify non-negative feature layers
Signed-off-by: Alex-Brooks <redacted>
* Remove CLIP_API from func signature
Signed-off-by: Alex-Brooks <redacted>
* USE MAX_IMAGE_FEATURE_LAYERS const in layer calc
Signed-off-by: Alex-Brooks <redacted>
* Clarify feature layers are non-negative ints and not uint
Signed-off-by: Alex-Brooks <redacted>
* Fix condition for reading feature layers
Signed-off-by: Alex-Brooks <redacted>
* pop last llava layer when feature layers are unset
Signed-off-by: Alex-Brooks <redacted>
* Fix unset vision layer 0
Signed-off-by: Alex-Brooks <redacted>
* Update examples/llava/clip.cpp
Co-authored-by: Xuan-Son Nguyen <redacted>
* Reenable assertion for out of bounds get_rows
Signed-off-by: Alex-Brooks <redacted>
* Use std vector for gridpoints and feature layers
Signed-off-by: Alex-Brooks <redacted>
* Calculate max feature layer at load time
Signed-off-by: Alex-Brooks <redacted>
* Include base patch for granite vision allocation
Signed-off-by: Alex-Brooks <redacted>
* Fix trailing whitespace
Signed-off-by: Alex-Brooks <redacted>
* Add max num patches = 10 back for minicpmv
Signed-off-by: Alex-Brooks <redacted>
* Use unordered set to store feature layers
Co-authored-by: Xuan-Son Nguyen <redacted>
Signed-off-by: Alex-Brooks <redacted>
* Use max feature layer for postnorm
Signed-off-by: Alex-Brooks <redacted>
* Apply suggestions from code review
---------
Signed-off-by: Alex-Brooks <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Neo Zhang Jianyu [Mon, 24 Feb 2025 14:33:23 +0000 (22:33 +0800)]
[SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035)
* opt performance by reorder for Intel GPU
* detect hw type and save opt feature, and print opt feature
* correct name
* support optimizing the graph once when computing the graph, record the opt status in tensor->extra, make CI pass
* add env variable GGML_SYCL_DISABLE_OPT for debugging (see the usage sketch after this entry)
* use syclex::architecture to replace the custom hw define, update the guide for GGML_SYCL_DISABLE_OPT
* add performance data
* move getrows functions to separate files
* fix global variables
---------
Co-authored-by: arthw <redacted>
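A usage sketch for the debug variable above (the `=1` convention and binary path are assumptions):
```console
$ GGML_SYCL_DISABLE_OPT=1 ./build/bin/llama-cli -m model.gguf -p "hello"
```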
Aleksei Nikiforov [Mon, 24 Feb 2025 11:27:01 +0000 (12:27 +0100)]
gguf_convert_endian.py: implement byteswapping for q4_k and q6_k (#11349)
Akarshan Biswas [Mon, 24 Feb 2025 10:18:25 +0000 (15:48 +0530)]
SYCL: Fix GGML_SYCL_DEBUG macro (#11995)
Florent BENOIT [Sun, 23 Feb 2025 17:15:51 +0000 (18:15 +0100)]
run: allow to customize prompt by env var LLAMA_PROMPT_PREFIX (#12041)
Signed-off-by: Florent Benoit <redacted>
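A usage sketch (prefix value and model name are illustrative):
```console
$ LLAMA_PROMPT_PREFIX="me> " llama-run llama3
```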
Eric Curtin [Sun, 23 Feb 2025 13:14:32 +0000 (13:14 +0000)]
Some llama-run cleanups (#11973)
Use the consolidated open function call from the File class. Change
read_all to to_string(). Remove exclusive locking: the intent of that
lock is to avoid multiple processes writing to the same file, and that
is not an issue for readers, although we may want to consider adding a
shared lock. Remove passing nullptr as a reference; references are
never supposed to be null. clang-format the code for consistent
styling.
Signed-off-by: Eric Curtin <redacted>
Aaron Teo [Sat, 22 Feb 2025 21:39:24 +0000 (05:39 +0800)]
ggml-cpu: Support s390x SIMD Instruction Set (#12019)
* ggml: add s390x ARCH_FLAGS for compilation
Signed-off-by: Aaron Teo <redacted>
* ggml: add SIMD for s390x using vector intrinsics
SIMD is activated for:
* ggml_vec_dot_f32
* ggml_vec_dot_f16
* ggml_vec_mad_f32
* ggml_vec_mad_f16
* ggml_vec_mad_f32_unroll
* ggml_vec_scale_f32
* ggml_vec_scale_f16
SIMD is NOT activated for:
* ggml_vec_dot_f16_unroll (pending bugfix)
Signed-off-by: Aaron Teo <redacted>
* ggml: fix missing escape character in GGML_F32x4_REDUCE
Signed-off-by: Aaron Teo <redacted>
* ggml: add temporary patch for GGML_F32_ARR and GGML_F16_ARR
Signed-off-by: Aaron Teo <redacted>
* ggml: fix s390x GGML_F32x4_REDUCE
Signed-off-by: Aaron Teo <redacted>
* ggml: full SIMD activation for F32,F16 s390x
Signed-off-by: Aaron Teo <redacted>
* ggml: add option to disable s390x VXE/VXE2
Signed-off-by: Aaron Teo <redacted>
* ggml: change vecintrin.h include to ggml-cpu-impl
* add __VXE__ and __VXE2__ macros
Signed-off-by: Aaron Teo <redacted>
* cmake: add s390x target detection for VX/VXE/VXE2
Signed-off-by: Aaron Teo <redacted>
* ggml: move s390x vector intrinsics to ggml-cpu-impl.h
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x Q8_0 SIMD
Signed-off-by: Aaron Teo <redacted>
* ggml: correct documentation for Q8_0
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x reduce code complexity Q8_0
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x bugfix typo Q8_0
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activated for Q4_1
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x inline vec_reve
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for Q4_0
Signed-off-by: Aaron Teo <redacted>
* ggml: add VXE backend feature
Signed-off-by: Aaron Teo <redacted>
* ggml: remove test.py
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for quantize_row_q8_0
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for quantize_row_q8_1
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for iq4_xs
Signed-off-by: Aaron Teo <redacted>
* ggml: bugfix iq4_xs
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for iq4_nl
Signed-off-by: Aaron Teo <redacted>
* ggml: add float, double, and long vector data type
Signed-off-by: Aaron Teo <redacted>
* ggml: clean up iq4_xs SIMD
Signed-off-by: Aaron Teo <redacted>
* ggml: fix improper use of restrict keyword
Signed-off-by: Aaron Teo <redacted>
* ggml: update warning message for ggml_vec_tbl
Signed-off-by: Aaron Teo <redacted>
* ggml: untested implementation of ggml_vec_dot_iq2_xxs_q8_K
Signed-off-by: Aaron Teo <redacted>
* ggml: update ggml_vec_dot_q4_1_q8_1 to use typedefs
Signed-off-by: Aaron Teo <redacted>
* ggml: switch to restrict for iq4_nl
Signed-off-by: Aaron Teo <redacted>
* ggml: slight dot product speed improvement for q4_1_q8_1
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for q6_K
Signed-off-by: Aaron Teo <redacted>
* ggml: add missing `_t` to ggml_int8x16x4_t
Signed-off-by: Aaron Teo <redacted>
* ggml: fix missing `_t` for ggml_vec_xl_s8x4
Signed-off-by: Aaron Teo <redacted>
* ggml: fix more missing `_t`
Signed-off-by: Aaron Teo <redacted>
* ggml: add unroll and prefetch to Q8_0
increase of 3.86% for prompt processing and 32.22% for token generation
Signed-off-by: Aaron Teo <redacted>
* ggml: patch Q8_0 to use proper vector sizes
Signed-off-by: Aaron Teo <redacted>
* ggml: optimise Q8_0 dot prod compute kernel further
Signed-off-by: Aaron Teo <redacted>
* ggml: add unroll and prefetch to Q4_1
Signed-off-by: Aaron Teo <redacted>
* ggml: refactor Q6_K variable naming for readability
Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q6_K typos
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for Q5_K
Signed-off-by: Aaron Teo <redacted>
* ggml: fix wrong char*x16_t naming
Signed-off-by: Aaron Teo <redacted>
* ggml: Q5_K y0 wrong signness
Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q5_K invalid uchar type
Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q5_K invalid uchar type
Signed-off-by: Aaron Teo <redacted>
* ggml: s390x SIMD activation for Q4_K
Signed-off-by: Aaron Teo <redacted>
* ggml: fix Q4_K invalid vector intrinsics
Signed-off-by: Aaron Teo <redacted>
* ggml: simplify ggml_padd_s16 compute kernel
Signed-off-by: Aaron Teo <redacted>
* ggml: correct ggml-cpu vxe wording
Signed-off-by: Aaron Teo <redacted>
* ggml: change ggml_aligned_malloc alignment to 256
256 is the cache line size for s390x platforms
Signed-off-by: Aaron Teo <redacted>
* ggml: resolve pr merge via cherry-pick
225bbbf
Signed-off-by: Aaron Teo <redacted>
* ggml : fix LoongArch compile error with 128-bit SIMD (#11701)
* ggml: resolve pr merge via cherry-pick
4571953
Signed-off-by: Aaron Teo <redacted>
* ggml: cmake remove fork when determining s390x machine type
thank you @ericcurtin
Signed-off-by: Aaron Teo <redacted>
---------
Signed-off-by: Aaron Teo <redacted>
Co-authored-by: Jinyang He <redacted>
Co-authored-by: junchao-zhao <redacted>
Johannes Gäßler [Sat, 22 Feb 2025 19:44:34 +0000 (20:44 +0100)]
CUDA: add option to compile without FlashAttention (#12025)
Ting Lou [Sat, 22 Feb 2025 14:28:28 +0000 (22:28 +0800)]
llava: build clip image from pixels (#11999)
* llava: export function `clip_build_img_from_pixels` to build image from pixels decoded by other libraries instead of stb_image.h for better performance
* Apply suggestions from code review
---------
Co-authored-by: Xuan-Son Nguyen <redacted>
Georgi Gerganov [Sat, 22 Feb 2025 13:03:00 +0000 (15:03 +0200)]
ci : fix arm upload artifacts (#12024)
* ci : fix arm upload artifacts
* cont : fix archive name to use matrix
Johannes Gäßler [Sat, 22 Feb 2025 11:20:17 +0000 (12:20 +0100)]
CUDA: optimize FA for GQA + large batches (#12014)
Rohanjames1997 [Sat, 22 Feb 2025 10:48:57 +0000 (04:48 -0600)]
ci : Build on Github-hosted arm64 runners (#12009)
Georgi Gerganov [Sat, 22 Feb 2025 10:46:31 +0000 (12:46 +0200)]
server : disable Nagle's algorithm (#12020)
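Disabling Nagle's algorithm is conventionally done per socket with `TCP_NODELAY`; a generic POSIX sketch, not the httplib-specific hook the server actually uses:
```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// With TCP_NODELAY set, small writes (e.g. streamed tokens) are sent
// immediately instead of being coalesced, trading bandwidth for latency.
static int disable_nagle(int sockfd) {
    int one = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```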
Gian-Carlo Pascutto [Sat, 22 Feb 2025 08:43:24 +0000 (09:43 +0100)]
cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)
Daniel Bevenius [Sat, 22 Feb 2025 05:33:29 +0000 (06:33 +0100)]
llama.swiftui : add "Done" dismiss button to help view (#11998)
The commit updates the help view in the llama.swiftui example to use a
NavigationView and a Done button to dismiss the help view.
The motivation for this is that without this change there is no way to
dismiss the help view.
Georgi Gerganov [Fri, 21 Feb 2025 16:33:18 +0000 (18:33 +0200)]
llama : skip loading unused tensors (#12004)
* llama : assign unknown/unused tensors to host buffer type
ggml-ci
* llama : skip unused tensors
ggml-ci
Johannes Gäßler [Fri, 21 Feb 2025 11:51:25 +0000 (12:51 +0100)]
doc: update contributing guidelines [no ci] (#11969)
PureJourney [Fri, 21 Feb 2025 11:21:05 +0000 (19:21 +0800)]
CUDA: correct the lowest Maxwell supported by CUDA 12 (#11984)
* CUDA: correct the lowest Maxwell supported by CUDA 12
---------
Co-authored-by: Johannes Gäßler <redacted>
Bodhi [Fri, 21 Feb 2025 07:46:23 +0000 (15:46 +0800)]
MUSA: support ARM64 and enable dp4a etc. (#11843)
* MUSA: support ARM64 and enable __dp4a etc.
* fix cross entropy loss op for musa
* update
* add cc info log for musa
* add comment for the MUSA .cc calculation block
---------
Co-authored-by: Bodhi Hu <redacted>
Alex Brooks [Fri, 21 Feb 2025 06:11:03 +0000 (23:11 -0700)]
clip : fix visual encoders with no CLS (#11982)
Signed-off-by: Alex-Brooks <redacted>
momonga [Thu, 20 Feb 2025 18:43:22 +0000 (03:43 +0900)]
server (webui): Fix Premature Submission During IME Conversion (#11971)
* fix skip ime composing
* fix npm rebuild
* fix warn
---------
Co-authored-by: momonga <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Charles Xu [Thu, 20 Feb 2025 13:06:51 +0000 (14:06 +0100)]
ggml-cpu: Add CPU backend support for KleidiAI library (#11390)
* ggml-cpu: Add CPU backend support for KleidiAI library
* Add environmental variable GGML_KLEIDIAI_SME
* Add support for multithread LHS conversion
* Switch kernel selection order to dotprod and i8mm
* updates for review comments
* More updates for review comments
* Reorganize and rename KleidiAI files
* Move ggml-cpu-traits.h to source file
* Update cmake for SME build and add alignment for SME
* Remove append GGML_USE_CPU_KLEIDIAI to the GGML_CDEF_PUBLIC list
Prashant Vithule [Thu, 20 Feb 2025 10:08:32 +0000 (15:38 +0530)]
ggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (#11917)
* Added SVE Implementation for Q3_K Kernel in ggml-cpu-quants.c file
* Improved Formating of code in ggml-cpu-quants.c file
* style : minor fixes
* style : less whitespaces
* style : ptr spaceing
---------
Co-authored-by: vithulep <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Michael Engel [Thu, 20 Feb 2025 08:35:11 +0000 (09:35 +0100)]
run : add --chat-template-file (#11961)
Relates to: https://github.com/ggml-org/llama.cpp/issues/11178
Added the --chat-template-file CLI option to llama-run. If specified, the
file is read and its content is passed to common_chat_templates_from_model,
overwriting the model's chat template.
Signed-off-by: Michael Engel <redacted>
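A usage sketch (template path and model name are placeholders):
```console
$ llama-run --chat-template-file ./my-template.jinja granite-code
```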
Johannes Gäßler [Wed, 19 Feb 2025 19:45:17 +0000 (20:45 +0100)]
doc: add links to ggml examples [no ci] (#11958)
Daniel Bevenius [Wed, 19 Feb 2025 11:29:52 +0000 (12:29 +0100)]
common : add llama.vim preset for Qwen2.5 Coder (#11945)
This commit adds a preset for llama.vim to use the default Qwen 2.5
Coder models.
The motivation for this change is to make it easier to start a server
suitable to be used with the llama.vim plugin. For example, the server
can be started with a command like the following:
```console
$ llama-server --fim-qwen-1.5b-default
```
Refs: https://github.com/ggml-org/llama.cpp/issues/10932
Georgi Gerganov [Wed, 19 Feb 2025 11:29:42 +0000 (13:29 +0200)]
speculative : update default params (#11954)
* speculative : update default params
* speculative : do not discard the last drafted token
Daniel Bevenius [Wed, 19 Feb 2025 05:16:23 +0000 (06:16 +0100)]
llama : fix indentation in llama-grammar [no ci] (#11943)
This commit adjusts the indentation for the functions `parse_sequence`
and `parse_rule` in src/llama-grammar.cpp.
The motivation is consistency and improved readability.
igardev [Tue, 18 Feb 2025 22:01:44 +0000 (00:01 +0200)]
server : (webui) Enable communication with parent html (if webui is in iframe) (#11940)
* Webui: Enable communication with parent html (if webui is in iframe):
- Listens for "setText" command from parent with "text" and "context" fields. "text" is set in inputMsg, "context" is used as hidden context on the following requests to the llama.cpp server
- On pressing the Escape button, sends the command "escapePressed" to the parent
Example handling from the parent html side:
- Send command "setText" from parent html to webui in iframe:
  const iframe = document.getElementById('askAiIframe');
  if (iframe) {
      iframe.contentWindow.postMessage({ command: 'setText', text: text, context: context }, '*');
  }
- Listen for Escape key from webui on parent html:
  // Listen for escape key event in the iframe
  window.addEventListener('keydown', (event) => {
      if (event.key === 'Escape') {
          // Process case when Escape is pressed inside webui
      }
  });
* Move the extraContext from storage to app.context.
* Fix formatting.
* add Message.extra
* format + build
* MessageExtraContext
* build
* fix display
* rm console.log
---------
Co-authored-by: igardev <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Olivier Chafik [Tue, 18 Feb 2025 18:03:23 +0000 (18:03 +0000)]
tool-call: refactor common chat / tool-call api (+ tests / fixes) (#11900)
* tool-call refactoring: moved common_chat_* to chat.h, common_chat_templates_init return a unique_ptr to opaque type
* addressed clang-tidy lints in [test-]chat.*
* rm minja deps from util & common & move it to common/minja/
* add name & tool_call_id to common_chat_msg
* add common_chat_tool
* added json <-> tools, msgs conversions to chat.h
* fix double bos/eos jinja avoidance hack (was preventing inner bos/eos tokens)
* fix deepseek r1 slow test (no longer <think> opening w/ new template)
* allow empty tools w/ auto + grammar
* fix & test server grammar & json_schema params w/ & w/o --jinja
Xuan-Son Nguyen [Tue, 18 Feb 2025 13:21:41 +0000 (14:21 +0100)]
server : add TEI API format for /rerank endpoint (#11942)
* server : add TEI API format for /rerank endpoint
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* fix
* also gitignore examples/server/*.gz.hpp
---------
Co-authored-by: Georgi Gerganov <redacted>
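For reference, a TEI-style rerank request has a `query` plus a `texts` array (field names per the TEI API; server address and payload are illustrative):
```console
$ curl http://localhost:8080/rerank -H "Content-Type: application/json" \
    -d '{"query": "What is panda?", "texts": ["hi", "The giant panda is a bear native to China."]}'
```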
MoonRide303 [Tue, 18 Feb 2025 09:30:16 +0000 (10:30 +0100)]
scripts: corrected encoding when getting chat template (#11866) (#11907)
Signed-off-by: MoonRide303 <redacted>
xiaobing318 [Tue, 18 Feb 2025 09:12:49 +0000 (17:12 +0800)]
docs : Fix duplicated file extension in test command (#11935)
This commit fixes an issue in the llama.cpp project where the command for testing the llama-server object contained a duplicated file extension. The original command was:
./tests.sh unit/test_chat_completion.py.py -v -x
It has been corrected to:
./tests.sh unit/test_chat_completion.py -v -x
This change ensures that the test script correctly locates and executes the intended test file, preventing test failures due to an incorrect file name.
Johannes Gäßler [Mon, 17 Feb 2025 13:03:24 +0000 (14:03 +0100)]
CUDA: use async data loading for FlashAttention (#11894)
* CUDA: use async data loading for FlashAttention
---------
Co-authored-by: Diego Devesa <redacted>
Eve [Mon, 17 Feb 2025 11:20:23 +0000 (11:20 +0000)]
update release requirements (#11897)
Antoine Viallon [Mon, 17 Feb 2025 10:25:12 +0000 (11:25 +0100)]
server : fix divide-by-zero in metrics reporting (#11915)
Rémy O [Mon, 17 Feb 2025 06:55:57 +0000 (07:55 +0100)]
vulkan: implement several ops relevant for ggml_opt (#11769)
* vulkan: support memset_tensor
* vulkan: support GGML_OP_SUM
* vulkan: implement GGML_OP_ARGMAX
* vulkan: implement GGML_OP_SUB
* vulkan: implement GGML_OP_COUNT_EQUAL
* vulkan: implement GGML_OP_OPT_STEP_ADAMW
* vulkan: fix check_results RWKV_WKV6 crash and memory leaks
* vulkan: implement GGML_OP_REPEAT_BACK
* tests: remove invalid test-backend-ops REPEAT_BACK tests
* vulkan: fix COUNT_EQUAL memset using a fillBuffer command
Xuan-Son Nguyen [Sun, 16 Feb 2025 17:11:22 +0000 (18:11 +0100)]
server : bump httplib to 0.19.0 (#11908)
standby24x7 [Sun, 16 Feb 2025 09:51:13 +0000 (18:51 +0900)]
common : Fix a typo in help (#11899)
This patch fixes a typo in command help.
prefx -> prefix
Signed-off-by: Masanari Iida <redacted>
Xuan-Son Nguyen [Sun, 16 Feb 2025 09:36:39 +0000 (10:36 +0100)]
ci : fix (again) arm64 build fails (#11895)
* docker : attempt fixing arm64 build on ci
* qemu v7.0.0-28
Jeff Bolz [Sun, 16 Feb 2025 07:52:23 +0000 (01:52 -0600)]
vulkan: support multi/vision rope, and noncontiguous rope (#11902)
Hale Chan [Sun, 16 Feb 2025 06:50:26 +0000 (14:50 +0800)]
metal : fix the crash caused by the lack of residency set support on Intel Macs. (#11904)
Johannes Gäßler [Sat, 15 Feb 2025 19:23:22 +0000 (20:23 +0100)]
scripts: fix compare-llama-bench commit hash logic (#11891)
708-145 [Sat, 15 Feb 2025 19:03:30 +0000 (20:03 +0100)]
examples: fix typo in imatrix/README.md (#11884)
* simple typo fixed
* Update examples/imatrix/README.md
---------
Co-authored-by: Tobias Bergmann <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Adrian Kretz [Sat, 15 Feb 2025 18:39:20 +0000 (19:39 +0100)]
metal : optimize dequant q6_K kernel (#11892)
Georgi Gerganov [Sat, 15 Feb 2025 18:29:56 +0000 (20:29 +0200)]
readme : add notice about new package registry (#11890)
* readme : add notice about new package registry
* cont : fix whitespace
Georgi Gerganov [Sat, 15 Feb 2025 14:40:57 +0000 (16:40 +0200)]
repo : update links to new url (#11886)
* repo : update links to new url
ggml-ci
* cont : more urls
ggml-ci
Olivier Chafik [Sat, 15 Feb 2025 10:11:36 +0000 (10:11 +0000)]
server: fix type promotion typo causing crashes w/ --jinja w/o tools (#11880)
Rémy O [Sat, 15 Feb 2025 08:01:40 +0000 (09:01 +0100)]
vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)
* vulkan: initial support for IQ1_S and IQ1_M quantizations
* vulkan: define MMV kernels for IQ1 quantizations
* devops: increase timeout of Vulkan tests again
* vulkan: simplify ifdef for init_iq_shmem
Michał Moskal [Fri, 14 Feb 2025 20:46:08 +0000 (12:46 -0800)]
llguidance build fixes for Windows (#11664)
* setup windows linking for llguidance; thanks @phil-scott-78
* add build instructions for windows and update script link
* change VS Community link from DE to EN
* whitespace fix
lhez [Fri, 14 Feb 2025 19:12:23 +0000 (11:12 -0800)]
opencl: Fix rope and softmax (#11833)
* opencl: fix `ROPE`
* opencl: fix `SOFT_MAX`
* Add fp16 variant
* opencl: enforce subgroup size for `soft_max`
Diego Devesa [Fri, 14 Feb 2025 14:33:52 +0000 (15:33 +0100)]
cuda : add ampere to the list of default architectures (#11870)
Georgi Gerganov [Fri, 14 Feb 2025 12:48:40 +0000 (14:48 +0200)]
docker : drop to CUDA 12.4 (#11869)
* docker : drop to CUDA 12.4
* docker : update readme [no ci]
Daniel Bevenius [Fri, 14 Feb 2025 10:16:56 +0000 (11:16 +0100)]
llama : add completion for --chat-template-file (#11860)
This commit adds completion for `--chat-template-file`, enabling only
`.jinja` files to be displayed as completions.
Example usage:
```console
$ ./build/bin/llama-cli --chat-template-file models/templates/<TAB>
models/templates/CohereForAI-c4ai-command-r7b-12-2024-tool_use.jinja
models/templates/CohereForAI-c4ai-command-r-plus-tool_use.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
models/templates/fireworks-ai-llama-3-firefunction-v2.jinja
models/templates/google-gemma-2-2b-it.jinja
models/templates/llama-cpp-deepseek-r1.jinja
models/templates/meetkai-functionary-medium-v3.1.jinja
models/templates/meetkai-functionary-medium-v3.2.jinja
models/templates/meta-llama-Llama-3.1-8B-Instruct.jinja
models/templates/meta-llama-Llama-3.2-3B-Instruct.jinja
models/templates/meta-llama-Llama-3.3-70B-Instruct.jinja
models/templates/microsoft-Phi-3.5-mini-instruct.jinja
models/templates/mistralai-Mistral-Nemo-Instruct-2407.jinja
models/templates/NousResearch-Hermes-2-Pro-Llama-3-8B-tool_use.jinja
models/templates/NousResearch-Hermes-3-Llama-3.1-8B-tool_use.jinja
models/templates/Qwen-Qwen2.5-7B-Instruct.jinja
```
This is not limited to the models/templates directory; it can be used
anywhere in the filesystem. The above is just an example.
Jinyang He [Fri, 14 Feb 2025 08:54:27 +0000 (16:54 +0800)]
ggml: optimize some vec dot functions for LoongArch ASX (#11842)
* Optimize ggml_vec_dot_q3_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q4_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q6_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q5_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q2_K_q8_K for LoongArch ASX
* Optimize mul_sum_i8_pairs_float for LoongArch ASX
* Optimize ggml_vec_dot_iq4_xs_q8_K for LoongArch ASX
Eve [Fri, 14 Feb 2025 02:59:40 +0000 (02:59 +0000)]
vulkan: linux builds + small subgroup size fixes (#11767)
* mm subgroup size
* upload vulkan x86 builds
theraininsky [Fri, 14 Feb 2025 01:13:43 +0000 (09:13 +0800)]
llama-bench : fix unexpected global variable initialize sequence issue (#11832)
* llama-bench : fix unexpected global variable initialize sequence issue
* Update examples/llama-bench/llama-bench.cpp
---------
Co-authored-by: Diego Devesa <redacted>
Georgi Gerganov [Thu, 13 Feb 2025 22:16:56 +0000 (00:16 +0200)]
readme : minor
Jeffrey Morgan [Thu, 13 Feb 2025 17:05:04 +0000 (09:05 -0800)]
llamafile: use member variable instead of constant for iq4nlt (#11780)
Reza Rahemtola [Thu, 13 Feb 2025 16:22:44 +0000 (17:22 +0100)]
server : (docs) Update wrong tool calling example (#11809)
Call updated to match the tool used in the output just below, following the example in https://github.com/ggerganov/llama.cpp/pull/9639
Daniel Bevenius [Thu, 13 Feb 2025 13:46:59 +0000 (14:46 +0100)]
llama : add --completion-bash option (#11846)
This commit adds a new option `--completion-bash` to llama.cpp which
outputs a source-able bash completion script.
The motivation for this change is to provide a more user-friendly
experience for users who use the command-line interface of llama.cpp.
This is currently only basic: all options are displayed for all llama
executables, but this can be improved in the future if needed.
Example usage:
```console
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
$ ./build/bin/llama-server --m<TAB>
--main-gpu --mirostat --mirostat-lr --model --multiline-input
--min-p --mirostat-ent --mlock --model-url
```
R0CKSTAR [Thu, 13 Feb 2025 12:28:18 +0000 (20:28 +0800)]
musa: bump MUSA SDK version to rc3.1.1 (#11822)
* musa: Update MUSA SDK version to rc3.1.1
Signed-off-by: Xiaodong Ye <redacted>
* musa: Remove workaround in PR #10042
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Olivier Chafik [Thu, 13 Feb 2025 10:05:16 +0000 (10:05 +0000)]
`server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB & DeepSeek R1) unless `--reasoning-format none` (#11607)
* extract & return thoughts in reasoning_content field (unless --reasoning-format) for DeepSeek R1 & Command R7B
* tool-calls: add deepseek r1 template (models/templates/llama-cpp-deepseek-r1.jinja) + hackommodate broken official template
* tool-calls: accommodate variety of wrong tool call opening tags both R1 Qwen 32B and 7B distills like to spit out
* server/oai: ensure content is null when there are tool calls, and reasoning_content appears before content for readability
* tool-calls: add DeepSeek R1 Qwen distills to server/README.md & server tests
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Vinesh Janarthanan [Thu, 13 Feb 2025 06:45:57 +0000 (00:45 -0600)]
sampling: add Top-nσ sampler (#11223)
* initial sampling changes:
* completed top nsigma sampler implementation
* apply parameter to only llama-cli
* updated readme
* added tests and fixed nsigma impl
* cleaned up pr
* format
* format
* format
* removed commented tests
* cleanup pr and remove explicit floats
* added top-k sampler to improve performance
* changed sigma to float
* fixed string format to float
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update common/sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* added llama_sampler_init
---------
Co-authored-by: Georgi Gerganov <redacted>
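As background, Top-nσ keeps only tokens whose logits lie within n standard deviations of the maximum logit; a sketch of the selection rule as described in the Top-nσ paper:
```latex
\text{keep token } i \iff \ell_i \ge M - n\,\sigma,
\qquad M = \max_j \ell_j, \quad \sigma = \operatorname{std}(\ell)
```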
Oleksandr Kuvshynov [Thu, 13 Feb 2025 06:25:34 +0000 (01:25 -0500)]
llama.cpp: fix warning message (#11839)
There was a typo-like error which would print the same number twice if
a request was received with n_predict greater than the server-side config.
Before the fix:
```
slot launch_slot_: id 0 | task 0 | n_predict = 4096 exceeds server configuration, setting to 4096
```
After the fix:
```
slot launch_slot_: id 0 | task 0 | n_predict = 8192 exceeds server configuration, setting to 4096
```
Daniel Bevenius [Thu, 13 Feb 2025 06:07:51 +0000 (07:07 +0100)]
llama : update llama_decode_internal ref [no ci] (#11840)
This commit updates the comment in llama_kv_cache.h to reflect the
change of the function name from llama_decode_internal to
llama_decode_impl.
Diego Devesa [Thu, 13 Feb 2025 00:02:38 +0000 (01:02 +0100)]
ggml-cpu : add chunking support to mul_mat_id (#11666)
* ggml-cpu : add chunking support to mul_mat_id
* allocate chunk counter in wdata
parallelize src1 quantization by column to allow parallelization even when there is only one row
* disable for arm
* cleanup
* better way to disable for arm
* fix uninitialized counter when using 1 thread only
* revert test-backend-ops changes
Xuan-Son Nguyen [Wed, 12 Feb 2025 23:33:45 +0000 (00:33 +0100)]
ggml : x2 speed for WASM by optimizing SIMD (#11453)
* ggml : x2 speed for WASM by optimizing SIMD
* fix bad merging
* rm trailing spaces
* rm redundant clamp
* better quantize_row_q8_K
Co-authored-by: camel-cdr <redacted>
* remove memset that causes buffer overflow
Co-authored-by: camel-cdr <redacted>
---------
Co-authored-by: camel-cdr <redacted>
Woof Dog [Wed, 12 Feb 2025 22:47:11 +0000 (22:47 +0000)]
server : (webui) Give copy button back to all message bubbles (#11814)
* All messages get the copy button
* Update index.html.gz
uvos [Wed, 12 Feb 2025 21:25:28 +0000 (22:25 +0100)]
HIP: Remove GCN from list of devices that avoid MMQ (#11831)
JC [Wed, 12 Feb 2025 20:36:11 +0000 (20:36 +0000)]
Fix: Compile failure due to Microsoft STL breaking change (#11836)
Georgi Gerganov [Wed, 12 Feb 2025 19:46:02 +0000 (21:46 +0200)]
sync : ggml
uvos [Wed, 12 Feb 2025 16:25:03 +0000 (17:25 +0100)]
HIP: Switch to std::vector in rocblas version check (#11820)
bandoti [Wed, 12 Feb 2025 14:06:53 +0000 (10:06 -0400)]
cleanup: fix compile warnings associated with gnu_printf (#11811)
Richard [Wed, 12 Feb 2025 13:57:33 +0000 (13:57 +0000)]
ggml : fix multi-threaded clamp_f32 (#11824)
* Bug fix for clamp_f32
When using tensors larger than 1-D, the clamp operation does not work due to the restriction of returning early when ith is not 0.
* Bug fix for clamp_f32
* Bug fix for clamp_f32
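For context, the usual ggml pattern for a multi-threaded elementwise op is for each thread to process its own strided slice of rows rather than returning early; a simplified sketch (names follow ggml conventions but this is illustrative, not the patched source):
```cpp
#include <cmath>
#include <cstddef>

// Thread ith of nth clamps rows ith, ith + nth, ith + 2*nth, ...
// Returning early when ith != 0 (the bug) would leave most rows untouched.
static void clamp_rows(float * data, int nr, int ne0,
                       float min, float max, int ith, int nth) {
    for (int ir = ith; ir < nr; ir += nth) {
        float * row = data + (size_t) ir * ne0;
        for (int i = 0; i < ne0; i++) {
            row[i] = std::fmin(std::fmax(row[i], min), max);
        }
    }
}
```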
Weizhao Ouyang [Wed, 12 Feb 2025 12:22:58 +0000 (20:22 +0800)]
ggml-cpu: Fix duplicate MATMUL_INT8 (#11817)
Signed-off-by: Weizhao Ouyang <redacted>
Johannes Gäßler [Wed, 12 Feb 2025 12:16:39 +0000 (13:16 +0100)]
CUDA: fix CUDART_VERSION checks (#11821)
Daniel Bevenius [Wed, 12 Feb 2025 07:40:01 +0000 (08:40 +0100)]
llama : fix typo in llama-grammar.h [no ci] (#11816)
lhez [Tue, 11 Feb 2025 22:04:13 +0000 (14:04 -0800)]
docs: add OpenCL (#11697)
Sheldon Robinson [Tue, 11 Feb 2025 15:55:45 +0000 (10:55 -0500)]
Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (#11803)
* Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx
* Fix #11802: PR #11803 - keep RegQueryValueExA, remove TEXT macro, description needs to be ANSI string
Daniel Bevenius [Tue, 11 Feb 2025 13:06:45 +0000 (14:06 +0100)]
server : use common_token_to_piece instead of common_detokenize (#11740)
* server : use common_token_to_piece instead of common_detokenize
This commit replaces the call to common_detokenize with
common_token_to_piece in the populate_token_probs.
The motivation for this change is to avoid an issue where
common_detokenize would remove the word boundary character for tokens,
which caused a regression in the server generated token probabilities.
Resolves: https://github.com/ggerganov/llama.cpp/issues/11728
* squash! server : use common_token_to_piece instead of common_detokenize
Use common_token_to_piece for post_sampling_probs as well.
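A sketch of the distinction (signatures are assumptions based on the common library's conventions):
```cpp
#include "common.h"

#include <string>

// common_detokenize joins tokens into display text and can normalize away
// a token's leading word-boundary space, which skewed per-token probability
// output; common_token_to_piece returns the single token's raw piece.
static std::string prob_label(const llama_context * ctx, llama_token tok) {
    return common_token_to_piece(ctx, tok); // keeps e.g. " world" intact
}
```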
Johannes Gäßler [Mon, 10 Feb 2025 23:17:22 +0000 (00:17 +0100)]
CUDA: use arch list for compatibility check (#11775)
* CUDA: use arch list for feature availability check
---------
Co-authored-by: Diego Devesa <redacted>
Maxim Evtush [Mon, 10 Feb 2025 22:21:31 +0000 (23:21 +0100)]
fix: typos in documentation files (#11791)
* Update ggml.c
* Update arg.cpp
* Update speculative.h
jason_w [Mon, 10 Feb 2025 22:17:48 +0000 (06:17 +0800)]
docs: utilize the forward slash (/) as the path separator for Unix-like systems (#11770)
Xuan-Son Nguyen [Mon, 10 Feb 2025 20:23:17 +0000 (21:23 +0100)]
server : (webui) introduce conversation branching + idb storage (#11792)
* server : (webui) introduce conversation branching + idb storage
* mark old conv as "migrated" instead of deleting them
* improve migration
* add more comments
* more clarification
Wilken Gottwalt [Mon, 10 Feb 2025 18:58:18 +0000 (19:58 +0100)]
llama-mmap: fix missing include (#11796)
Technically the fixed-width types come only from the iostream and
cstdint/stdint.h headers; the memory and vector headers should not
provide them. In GCC 15 the headers are cleaned up, so you need to
include the proper header, cstdint.
src/llama-mmap.h:26:5: error: ‘uint32_t’ does not name a type
26 | uint32_t read_u32() const;
| ^~~~~~~~
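The fix, as described, is to include the owning header directly instead of relying on transitive includes; a minimal sketch:
```cpp
// llama-mmap.h: fixed-width types must come from <cstdint>; in GCC 15,
// <memory> and <vector> no longer provide them transitively.
#include <cstdint>

struct llama_file_example {
    uint32_t read_u32() const; // uint32_t now names a type
};
```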
Xuan-Son Nguyen [Mon, 10 Feb 2025 17:03:28 +0000 (18:03 +0100)]
server : correct signal handler (#11795)
Olivier Chafik [Mon, 10 Feb 2025 09:34:09 +0000 (09:34 +0000)]
sync: minja (https://github.com/google/minja/commit/a72057e5190de2c612d4598bb10b4bfd0f53011f) (#11774)
pascal-lc [Mon, 10 Feb 2025 08:05:57 +0000 (16:05 +0800)]
Update README.md [no ci] (#11781)
typo: `\` -> `/`
Change `\` to the Unix path separator `/`.
Danny Milosavljevic [Mon, 10 Feb 2025 06:17:21 +0000 (07:17 +0100)]
vulkan: Make Vulkan optional at runtime (#11493). (#11494)
Co-authored-by: Jeff Bolz <redacted>