Eve [Fri, 14 Feb 2025 02:59:40 +0000 (02:59 +0000)]
vulkan: linux builds + small subgroup size fixes (#11767)
* mm subgroup size
* upload vulkan x86 builds
theraininsky [Fri, 14 Feb 2025 01:13:43 +0000 (09:13 +0800)]
llama-bench : fix unexpected global variable initialize sequence issue (#11832)
* llama-bench : fix unexpected global variable initialize sequence issue
* Update examples/llama-bench/llama-bench.cpp
---------
Co-authored-by: Diego Devesa <redacted>
Georgi Gerganov [Thu, 13 Feb 2025 22:16:56 +0000 (00:16 +0200)]
readme : minor
Jeffrey Morgan [Thu, 13 Feb 2025 17:05:04 +0000 (09:05 -0800)]
llamafile: use member variable instead of constant for iq4nlt (#11780)
Reza Rahemtola [Thu, 13 Feb 2025 16:22:44 +0000 (17:22 +0100)]
server : (docs) Update wrong tool calling example (#11809)
Call updated to match the tool used in the output just below, following the example in https://github.com/ggerganov/llama.cpp/pull/9639
Daniel Bevenius [Thu, 13 Feb 2025 13:46:59 +0000 (14:46 +0100)]
llama : add --completion-bash option (#11846)
This commit adds a new option `--completion-bash` to llama.cpp which
outputs a sourceable bash completion script.
The motivation for this change is to provide a more user-friendly
experience for users of the llama.cpp command-line interface.
The completion is currently basic and all options are displayed for all
llama executables, but this can be improved in the future if needed.
Example usage:
```console
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
$ ./build/bin/llama-server --m<TAB>
--main-gpu --mirostat --mirostat-lr --model --multiline-input
--min-p --mirostat-ent --mlock --model-url
```
R0CKSTAR [Thu, 13 Feb 2025 12:28:18 +0000 (20:28 +0800)]
musa: bump MUSA SDK version to rc3.1.1 (#11822)
* musa: Update MUSA SDK version to rc3.1.1
Signed-off-by: Xiaodong Ye <redacted>
* musa: Remove workaround in PR #10042
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Olivier Chafik [Thu, 13 Feb 2025 10:05:16 +0000 (10:05 +0000)]
`server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command R7B & DeepSeek R1) unless `--reasoning-format none` (#11607)
* extract & return thoughts in reasoning_content field (unless --reasoning-format) for DeepSeek R1 & Command R7B
* tool-calls: add deepseek r1 template (models/templates/llama-cpp-deepseek-r1.jinja) + hackommodate broken official template
* tool-calls: accommodate variety of wrong tool call opening tags both R1 Qwen 32B and 7B distills like to spit out
* server/oai: ensure content is null when there are tool calls, and reasoning_content appears before content for readability
* tool-calls: add DeepSeek R1 Qwen distills to server/README.md & server tests
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
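A possible invocation exercising the new behavior (sketch only: the model file name is illustrative, the template path is the one added by this change):
```console
./build/bin/llama-server -m DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf --jinja \
    --chat-template-file models/templates/llama-cpp-deepseek-r1.jinja
```
With this, chat completions should expose the extracted thoughts in a reasoning_content field before content (and content is null when there are tool calls), unless the server is started with `--reasoning-format none`.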
Vinesh Janarthanan [Thu, 13 Feb 2025 06:45:57 +0000 (00:45 -0600)]
sampling: add Top-nσ sampler (#11223)
* initial sampling changes:
* completed top nsigma sampler implementation
* apply parameter to only llama-cli
* updated readme
* added tests and fixed nsigma impl
* cleaned up pr
* format
* format
* format
* removed commented tests
* cleanup pr and remove explicit floats
* added top-k sampler to improve performance
* changed sigma to float
* fixed string format to float
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update common/sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update src/llama-sampling.cpp
Co-authored-by: Georgi Gerganov <redacted>
* added llama_sampler_init
---------
Co-authored-by: Georgi Gerganov <redacted>
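For readers unfamiliar with the sampler, a minimal self-contained sketch of the top-nσ idea (keep only candidates whose logit lies within n standard deviations of the maximum, mask the rest); this illustrates the technique only and is not the code merged into src/llama-sampling.cpp:
```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// top-nσ: mask every logit that falls below max_logit - n * sigma,
// where sigma is the standard deviation of the logits.
static void top_nsigma(std::vector<float> & logits, float n) {
    const float max_l = *std::max_element(logits.begin(), logits.end());
    float mean = 0.0f;
    for (float l : logits) mean += l;
    mean /= (float) logits.size();
    float var = 0.0f;
    for (float l : logits) var += (l - mean) * (l - mean);
    const float sigma = std::sqrt(var / (float) logits.size());
    for (float & l : logits) {
        if (l < max_l - n * sigma) {
            l = -INFINITY; // excluded from sampling after softmax
        }
    }
}

int main() {
    std::vector<float> logits = {2.0f, 1.5f, 0.2f, -3.0f, -8.0f};
    top_nsigma(logits, 1.0f);
    for (float l : logits) printf("%g ", l);
    printf("\n");
}
```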
Oleksandr Kuvshynov [Thu, 13 Feb 2025 06:25:34 +0000 (01:25 -0500)]
llama.cpp: fix warning message (#11839)
There was a typo-like error which would print the same number twice if
a request is received with n_predict greater than the server-side config.
Before the fix:
```
slot launch_slot_: id 0 | task 0 | n_predict = 4096 exceeds server configuration, setting to 4096
```
After the fix:
```
slot launch_slot_: id 0 | task 0 | n_predict = 8192 exceeds server configuration, setting to 4096
```
Daniel Bevenius [Thu, 13 Feb 2025 06:07:51 +0000 (07:07 +0100)]
llama : update llama_decode_internal ref [no ci] (#11840)
This commit updates the comment in llama_kv_cache.h to reflect the
change of the function name from llama_decode_internal to
llama_decode_impl.
Diego Devesa [Thu, 13 Feb 2025 00:02:38 +0000 (01:02 +0100)]
ggml-cpu : add chunking support to mul_mat_id (#11666)
* ggml-cpu : add chunking support to mul_mat_id
* allocate chunk counter in wdata
parallelize src1 quantization by column to allow parallelization even when there is only one row (see the sketch below)
* disable for arm
* cleanup
* better way to disable for arm
* fix uninitialized counter when using 1 thread only
* revert test-backend-ops changes
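A loose sketch of the chunking pattern referred to above: worker threads pull chunk indices from a shared atomic counter until the work runs out, which keeps all threads busy even when per-chunk cost is uneven. Names and sizes here are illustrative, not the actual ggml internals:
```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Each worker grabs the next chunk index from a shared counter; whichever
// thread finishes early simply picks up the next available chunk.
static void worker(std::atomic<int> & counter, int n_chunks, int rows_per_chunk) {
    for (;;) {
        const int chunk = counter.fetch_add(1);
        if (chunk >= n_chunks) {
            break;
        }
        const int row0 = chunk * rows_per_chunk;
        // ... compute the matrix product for rows [row0, row0 + rows_per_chunk) ...
        (void) row0;
    }
}

int main() {
    std::atomic<int> counter{0};
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back(worker, std::ref(counter), /*n_chunks =*/ 64, /*rows_per_chunk =*/ 16);
    }
    for (auto & t : threads) {
        t.join();
    }
}
```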
Xuan-Son Nguyen [Wed, 12 Feb 2025 23:33:45 +0000 (00:33 +0100)]
ggml : x2 speed for WASM by optimizing SIMD (#11453)
* ggml : x2 speed for WASM by optimizing SIMD
* fix bad merging
* rm trailing spaces
* rm redundant clamp
* better quantize_row_q8_K
Co-authored-by: camel-cdr <redacted>
* remove memset that causes buffer overflow
Co-authored-by: camel-cdr <redacted>
---------
Co-authored-by: camel-cdr <redacted>
Woof Dog [Wed, 12 Feb 2025 22:47:11 +0000 (22:47 +0000)]
server : (webui) Give copy button back to all message bubbles (#11814)
* All messages get the copy button
* Update index.html.gz
uvos [Wed, 12 Feb 2025 21:25:28 +0000 (22:25 +0100)]
HIP: Remove GCN from list of devices that avoid MMQ (#11831)
JC [Wed, 12 Feb 2025 20:36:11 +0000 (20:36 +0000)]
Fix: Compile failure due to Microsoft STL breaking change (#11836)
Georgi Gerganov [Wed, 12 Feb 2025 19:46:02 +0000 (21:46 +0200)]
sync : ggml
uvos [Wed, 12 Feb 2025 16:25:03 +0000 (17:25 +0100)]
HIP: Switch to std::vector in rocblas version check (#11820)
bandoti [Wed, 12 Feb 2025 14:06:53 +0000 (10:06 -0400)]
cleanup: fix compile warnings associated with gnu_printf (#11811)
Richard [Wed, 12 Feb 2025 13:57:33 +0000 (13:57 +0000)]
ggml : fix multi-threaded clamp_f32 (#11824)
* Bug fix for clamp_f32
When using tensors larger than 1D, the clamp operation does not work due to the restriction of returning early if ith is not 0 (see the sketch below).
* Bug fix for clamp_f32
* Bug fix for clamp_f32
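A rough illustration of the fixed pattern (not the actual ggml source): instead of returning early when ith is not 0, each thread clamps an interleaved subset of the rows, so tensors with more than one row are fully processed:
```cpp
#include <algorithm>
#include <cstddef>

// ith/nth are the thread index and thread count; rows are interleaved across
// threads rather than handled by thread 0 alone.
static void clamp_rows(float * data, int nrows, int ncols, float lo, float hi, int ith, int nth) {
    for (int r = ith; r < nrows; r += nth) {
        float * row = data + (size_t) r * (size_t) ncols;
        for (int c = 0; c < ncols; ++c) {
            row[c] = std::min(std::max(row[c], lo), hi);
        }
    }
}
```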
Weizhao Ouyang [Wed, 12 Feb 2025 12:22:58 +0000 (20:22 +0800)]
ggml-cpu: Fix duplicate MATMUL_INT8 (#11817)
Signed-off-by: Weizhao Ouyang <redacted>
Johannes Gäßler [Wed, 12 Feb 2025 12:16:39 +0000 (13:16 +0100)]
CUDA: fix CUDART_VERSION checks (#11821)
Daniel Bevenius [Wed, 12 Feb 2025 07:40:01 +0000 (08:40 +0100)]
llama : fix typo in llama-grammar.h [no ci] (#11816)
lhez [Tue, 11 Feb 2025 22:04:13 +0000 (14:04 -0800)]
docs: add OpenCL (#11697)
Sheldon Robinson [Tue, 11 Feb 2025 15:55:45 +0000 (10:55 -0500)]
Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx (#11803)
* Fix #11802: Compile bug - RegQueryValueExA changed to RegQueryValueEx
* Fix #11802: PR #11803 - keep RegQueryValueExA, remove TEXT macro, description needs to be ANSI string
Daniel Bevenius [Tue, 11 Feb 2025 13:06:45 +0000 (14:06 +0100)]
server : use common_token_to_piece instead of common_detokenize (#11740)
* server : use common_token_to_piece instead of common_detokenize
This commit replaces the call to common_detokenize with
common_token_to_piece in the populate_token_probs.
The motivation for this change is to avoid an issue where
common_detokenize would remove the word boundary character for tokens,
which caused a regression in the server generated token probabilities.
Resolves: https://github.com/ggerganov/llama.cpp/issues/11728
* squash! server : use common_token_to_piece instead of common_detokenize
Use common_token_to_piece for post_sampling_probs as well.
Johannes Gäßler [Mon, 10 Feb 2025 23:17:22 +0000 (00:17 +0100)]
CUDA: use arch list for compatibility check (#11775)
* CUDA: use arch list for feature availability check
---------
Co-authored-by: Diego Devesa <redacted>
Maxim Evtush [Mon, 10 Feb 2025 22:21:31 +0000 (23:21 +0100)]
fix: typos in documentation files (#11791)
* Update ggml.c
* Update arg.cpp
* Update speculative.h
jason_w [Mon, 10 Feb 2025 22:17:48 +0000 (06:17 +0800)]
docs: utilize the forward slash (/) as the path separator for Unix-like systems (#11770)
Xuan-Son Nguyen [Mon, 10 Feb 2025 20:23:17 +0000 (21:23 +0100)]
server : (webui) introduce conversation branching + idb storage (#11792)
* server : (webui) introduce conversation branching + idb storage
* mark old convs as "migrated" instead of deleting them
* improve migration
* add more comments
* more clarification
Wilken Gottwalt [Mon, 10 Feb 2025 18:58:18 +0000 (19:58 +0100)]
llama-mmap: fix missing include (#11796)
Technically the fixed-width types come only from the iostream and
cstdint/stdint.h headers; the memory and vector headers should not provide
them. In GCC 15 the headers are cleaned up, so you need to include the
proper header, cstdint.
src/llama-mmap.h:26:5: error: ‘uint32_t’ does not name a type
26 | uint32_t read_u32() const;
| ^~~~~~~~
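The usual fix for this class of error is simply to include the header that defines the fixed-width types directly (sketch; the struct name below is illustrative):
```cpp
// llama-mmap.h (sketch) -- don't rely on <memory> or <vector> to pull this in
#include <cstdint>

struct reader_example {
    uint32_t read_u32() const; // well-formed even with GCC 15's leaner headers
};
```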
Xuan-Son Nguyen [Mon, 10 Feb 2025 17:03:28 +0000 (18:03 +0100)]
server : correct signal handler (#11795)
Olivier Chafik [Mon, 10 Feb 2025 09:34:09 +0000 (09:34 +0000)]
sync: minja (https://github.com/google/minja/commit/a72057e5190de2c612d4598bb10b4bfd0f53011f) (#11774)
pascal-lc [Mon, 10 Feb 2025 08:05:57 +0000 (16:05 +0800)]
Update README.md [no ci] (#11781)
typo: `\` -> `/`
Change the Windows-style `\` path separator to the UNIX `/`.
Danny Milosavljevic [Mon, 10 Feb 2025 06:17:21 +0000 (07:17 +0100)]
vulkan: Make Vulkan optional at runtime (#11493). (#11494)
Co-authored-by: Jeff Bolz <redacted>
Wagner Bruna [Mon, 10 Feb 2025 06:08:22 +0000 (03:08 -0300)]
vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592)
Eric Curtin [Sun, 9 Feb 2025 10:34:49 +0000 (10:34 +0000)]
There's a better way of clearing lines (#11756)
Use the ANSI escape code for clearing a line.
Signed-off-by: Eric Curtin <redacted>
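For reference, a minimal example of the escape sequence in question (assuming a VT100/ANSI-compatible terminal):
```cpp
#include <cstdio>

int main() {
    printf("downloading... 42%%");
    fflush(stdout);
    // "\r" returns to column 0, "\033[2K" erases the whole current line
    printf("\r\033[2K");
    printf("downloading... 43%%\n");
    return 0;
}
```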
Jeff Bolz [Sun, 9 Feb 2025 07:43:51 +0000 (01:43 -0600)]
vulkan: account for lookup tables when checking shared memory size (#11502)
Xuan-Son Nguyen [Sat, 8 Feb 2025 20:54:50 +0000 (21:54 +0100)]
server : (webui) revamp Settings dialog, add Pyodide interpreter (#11759)
* redo Settings modal UI
* add python code interpreter
* fix auto scroll
* build
* fix overflow for long output lines
* bring back sticky copy button
* adapt layout on mobile view
* fix multiple lines output and color scheme
* handle python exception
* better state management
* add webworker
* add headers
* format code
* speed up by loading pyodide on page load
* (small tweak) add small animation to make it feels like claude
Woof Dog [Sat, 8 Feb 2025 19:09:55 +0000 (19:09 +0000)]
server : (webui) increase edit textarea size (#11763)
Georgi Gerganov [Sat, 8 Feb 2025 16:08:43 +0000 (18:08 +0200)]
server : minor log updates (#11760)
ggml-ci
Georgi Gerganov [Sat, 8 Feb 2025 14:49:38 +0000 (16:49 +0200)]
cont : fix mmap flag print (#11699)
Karol Kontny [Sat, 8 Feb 2025 14:30:53 +0000 (15:30 +0100)]
ggml: Fix data race in ggml threadpool (#11736)
After the barrier in the last iteration is executed, the loop termination
condition is still evaluated. However, the main thread can already have
destroyed the cgraph object and its nodes, so another thread will then access
something that is already gone.
Trouble can also happen when n_nodes == 0 or abort is called, but I'm not sure
if the former situation is possible.
The last synchronization should be done after the loop to ensure the
cgraph/cplan won't be accessed after the main thread exits from the function.
Johannes Gäßler [Sat, 8 Feb 2025 09:46:07 +0000 (10:46 +0100)]
CUDA: fix min. version for movmatrix (#11751)
Nikolaos Pothitos [Sat, 8 Feb 2025 09:43:04 +0000 (11:43 +0200)]
readme : update front-end framework (#11753)
After the migration to React with #11688
Xuan-Son Nguyen [Sat, 8 Feb 2025 09:42:34 +0000 (10:42 +0100)]
server : (webui) fix numeric settings being saved as string (#11739)
* server : (webui) fix numeric settings being saved as string
* add some more comments
Eric Curtin [Fri, 7 Feb 2025 14:42:46 +0000 (14:42 +0000)]
Make logging more verbose (#11714)
Debugged an issue with a user who was on a read-only filesystem.
Signed-off-by: Eric Curtin <redacted>
Georgi Gerganov [Fri, 7 Feb 2025 14:05:34 +0000 (16:05 +0200)]
llama : fix defrag logic (#11707)
* llama : fix defrag logic
ggml-ci
* cont : better logic
ggml-ci
* cont : clamp fragmentation to 0.0
ggml-ci
Christian Fillion [Fri, 7 Feb 2025 13:55:47 +0000 (08:55 -0500)]
vocab : ignore invalid UTF-8 input in the BPE tokenizer (#11729)
Silently insert U+FFFD(s) (Unicode replacement character) instead until the
next valid codepoint can be found.
This fixes `llama_tokenize` throwing an exception across the C API boundary
or libllama's module boundary (the caller's runtime might be incompatible!)
Returning a proper error code might be desirable; however, the signature
of `llama_tokenize` doesn't allow it, as all return values already have
an existing meaning.
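A self-contained sketch of the substitution behavior (not the tokenizer code itself): on a byte that cannot start or continue a valid UTF-8 sequence, emit U+FFFD and resync at the next byte instead of throwing:
```cpp
#include <cstdio>
#include <string>

// Replaces invalid UTF-8 sequences with U+FFFD (EF BF BD). Illustrates the
// "never throw, always resync" idea only; a real decoder has stricter checks.
static std::string sanitize_utf8(const std::string & in) {
    std::string out;
    for (size_t i = 0; i < in.size(); ) {
        const unsigned char c = (unsigned char) in[i];
        const size_t len = c < 0x80 ? 1
                         : (c & 0xE0) == 0xC0 ? 2
                         : (c & 0xF0) == 0xE0 ? 3
                         : (c & 0xF8) == 0xF0 ? 4 : 0;
        bool ok = len != 0 && i + len <= in.size();
        for (size_t k = 1; ok && k < len; ++k) {
            ok = ((unsigned char) in[i + k] & 0xC0) == 0x80; // continuation byte 10xxxxxx
        }
        if (ok) {
            out.append(in, i, len);
            i += len;
        } else {
            out += "\xEF\xBF\xBD"; // U+FFFD, then resync on the next byte
            i += 1;
        }
    }
    return out;
}

int main() {
    printf("%s\n", sanitize_utf8("ok \x80\xFE bytes").c_str());
    return 0;
}
```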
magicse [Fri, 7 Feb 2025 13:48:47 +0000 (15:48 +0200)]
llama : fix progress dots (#11730)
* Update llama.cpp
Display progress dots in the terminal.
Without this, progress dots were not displayed while loading the model from file.
* Update llama.cpp
removed trailing spaces
Jeff Bolz [Fri, 7 Feb 2025 10:26:03 +0000 (04:26 -0600)]
vulkan: print shared memory size (#11719)
Christian Fillion [Fri, 7 Feb 2025 09:33:27 +0000 (04:33 -0500)]
llama : add llama_sampler_init for safe usage of llama_sampler_free (#11727)
The C API in llama.h claims users can implement `llama_sampler_i` to
create custom `llama_sampler`. The sampler chain takes ownership and
calls `llama_sampler_free` on them. However, `llama_sampler_free` is
hard-coded to use `delete`. This is undefined behavior if the object
wasn't also allocated via `new` from libllama's C++ runtime. Callers
in C and C-compatible languages do not use C++'s `new` operator. C++
callers may not be sharing the same heap as libllama.
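A rough sketch of what the new entry point enables for a custom sampler: allocate the wrapper through libllama so the chain's eventual `llama_sampler_free` releases memory on the same heap that allocated it. The interface layout below is abridged from memory and may not match llama.h field-for-field; consult the header for the authoritative definition:
```cpp
#include "llama.h"

// Minimal custom sampler: a no-op "apply" stage, everything else left null.
static const char * my_sampler_name(const struct llama_sampler * /*smpl*/) {
    return "my-sampler";
}

static void my_sampler_apply(struct llama_sampler * /*smpl*/, llama_token_data_array * cur_p) {
    (void) cur_p; // a real sampler would reorder/mask the candidates here
}

static const struct llama_sampler_i my_sampler_iface = {
    /*.name   =*/ my_sampler_name,
    /*.accept =*/ nullptr,
    /*.apply  =*/ my_sampler_apply,
    /*.reset  =*/ nullptr,
    /*.clone  =*/ nullptr,
    /*.free   =*/ nullptr, // no owned context in this sketch
};

static struct llama_sampler * make_my_sampler() {
    // allocated by libllama, so llama_sampler_free (called by the chain) is safe
    return llama_sampler_init(&my_sampler_iface, /*ctx =*/ nullptr);
}
```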
Akarshan Biswas [Fri, 7 Feb 2025 09:27:53 +0000 (14:57 +0530)]
SYCL: remove XMX info from print devices (#11712)
Daniel Bevenius [Fri, 7 Feb 2025 08:15:22 +0000 (09:15 +0100)]
common : add default embeddings presets (#11677)
* common : add default embeddings presets
This commit adds default embeddings presets for the following models:
- bge-small-en-v1.5
- e5-small-v2
- gte-small
These can be used with llama-embedding and llama-server.
For example, with llama-embedding:
```console
./build/bin/llama-embedding --embd-gte-small-default -p "Hello, how are you?"
```
And with llama-server:
```console
./build/bin/llama-server --embd-gte-small-default
```
And the embeddings endpoint can then be called with a POST request:
```console
curl --request POST \
--url http://localhost:8080/embeddings \
--header "Content-Type: application/json" \
--data '{"input": "Hello, how are you?"}'
```
I'm not sure if these are the most common embedding models but hopefully
this can be a good starting point for discussion and further
improvements.
Refs: https://github.com/ggerganov/llama.cpp/issues/10932
Jinyang He [Fri, 7 Feb 2025 07:38:31 +0000 (15:38 +0800)]
ggml : optimize and build warning fix for LoongArch (#11709)
* ggml : optimize convert f32<->f16 for loongarch_asx
* ggml : optimize loongarch_asx extend i16,i8,u8 to i32,i16
* ggml : Fix warnings when running the CPU CI locally on LoongArch
tv1wnd [Thu, 6 Feb 2025 21:48:51 +0000 (22:48 +0100)]
llama : fix old glm4 models (#11670)
Georgi Gerganov [Thu, 6 Feb 2025 19:23:03 +0000 (21:23 +0200)]
sync : ggml
Patrick Peng [Thu, 6 Feb 2025 14:29:13 +0000 (09:29 -0500)]
rpc: fix known RCE in rpc-server (ggml/1103)
Add bounds checking in `rpc_server::copy_tensor` to prevent out-of-bounds writes
+ Check if `(uint8_t *)dst->data + ggml_nbytes(src)` remains within the destination buffer’s allocated region.
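Schematically, the check validates the write range against the destination buffer before copying; `dst_base` and `dst_buf_size` below are hypothetical stand-ins for however the server describes the destination allocation:
```cpp
// Refuse the copy if the write would start before or end past the destination
// buffer, instead of silently corrupting memory.
uint8_t * dst_data = (uint8_t *) dst->data;
if (dst_data < dst_base || dst_data + ggml_nbytes(src) > dst_base + dst_buf_size) {
    return false; // out-of-bounds write attempt
}
memcpy(dst_data, src->data, ggml_nbytes(src));
```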
Xuan-Son Nguyen [Thu, 6 Feb 2025 16:32:29 +0000 (17:32 +0100)]
server : (webui) migrate project to ReactJS with typescript (#11688)
* init version
* fix auto scroll
* bring back copy btn
* bring back thought process
* add lint and format check on CI
* remove lang from html tag
* allow multiple generations at the same time
* lint and format combined
* fix unused var
* improve MarkdownDisplay
* fix more latex
* fix code block cannot be selected while generating
Tei Home [Thu, 6 Feb 2025 12:16:15 +0000 (20:16 +0800)]
docs: update fedora cuda guide for 12.8 release (#11393)
* docs: update fedora cuda guide for 12.8 release
* docs: build cuda update
Akarshan Biswas [Thu, 6 Feb 2025 11:42:35 +0000 (17:12 +0530)]
SYCL: Adjust support condition for norm operators (#11674)
SYCL does not support non-contiguous tensors for norm operations
Georgi Gerganov [Thu, 6 Feb 2025 11:41:37 +0000 (13:41 +0200)]
llama : add log about loading model tensors (#11699)
Adrien Gallouët [Thu, 6 Feb 2025 11:08:13 +0000 (12:08 +0100)]
build : fix llama.pc (#11658)
Signed-off-by: Adrien Gallouët <redacted>
junchao-zhao [Thu, 6 Feb 2025 09:20:00 +0000 (17:20 +0800)]
ggml : fix LoongArch compile error with 128-bit SIMD (#11701)
Jeff Bolz [Thu, 6 Feb 2025 06:15:30 +0000 (00:15 -0600)]
vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)
* vulkan: optimize coopmat2 iq2/iq3 callbacks
* build: trigger CI on GLSL compute shader changes
Rémy O [Thu, 6 Feb 2025 06:09:59 +0000 (07:09 +0100)]
vulkan: initial support for IQ4_XS quantization (#11501)
Jeff Bolz [Thu, 6 Feb 2025 06:02:18 +0000 (00:02 -0600)]
vulkan: use smaller combined allocations to avoid fragmentation (#11551)
Charles Duffy [Thu, 6 Feb 2025 01:52:31 +0000 (19:52 -0600)]
metal : avoid breaking build when metal API predates TARGET_OS_VISION (#11690)
Avoids breakage in nix flake build introduced by
b0569130c5e9c671152c913d82803b7c2f014ff9
Matvey Soloviev [Thu, 6 Feb 2025 00:55:25 +0000 (01:55 +0100)]
readme : add link to Autopen under UIs (#11684)
Autopen (https://github.com/blackhole89/autopen) is a graphical text editor that uses llama.cpp to tokenize the buffer on the fly, score the buffer, visualise token logits and allow you to switch back and forth between different possible completions at any point. It hopefully meets the criteria for inclusion, as the dependency on llama.cpp is stated prominently.
Georgi Gerganov [Wed, 5 Feb 2025 08:57:42 +0000 (10:57 +0200)]
metal : adjust support conditions for norm operators (#11671)
cont #11659
ggml-ci
Johannes Gäßler [Wed, 5 Feb 2025 07:58:31 +0000 (08:58 +0100)]
CUDA: support for mat. mul. with ne03 != ne13 (#11656)
SAMI [Wed, 5 Feb 2025 07:45:40 +0000 (14:45 +0700)]
llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644)
* Added quantization for visual projector
* Added README
* Fixed the clip quantize implementation in the file
* Fixed the gcc warning regarding minor linting
* Removed trailing whitespace
Olivier Chafik [Wed, 5 Feb 2025 01:00:12 +0000 (01:00 +0000)]
`sync`: minja (#11641)
* `sync`: minja
https://github.com/google/minja/commit/182de30cdaee78ba86179122f8047b3bdbab7f7f
https://github.com/google/minja/pull/46
https://github.com/google/minja/pull/45
Johannes Gäßler [Tue, 4 Feb 2025 21:21:42 +0000 (22:21 +0100)]
CUDA: non-contiguous (RMS) norm support (#11659)
* CUDA: non-contiguous (RMS) norm support
---------
Co-authored-by: Georgi Gerganov <redacted>
fxzjshm [Tue, 4 Feb 2025 18:18:38 +0000 (02:18 +0800)]
HIP: force max threads per block to be 1024 (#11621)
Some old / vendor-forked versions of LLVM still use 256. Explicitly set it to 1024 to align with upstream LLVM.
Signed-off-by: fxzjshm <redacted>
Xuan-Son Nguyen [Tue, 4 Feb 2025 17:25:42 +0000 (18:25 +0100)]
server : add try..catch to places not covered by set_exception_handler (#11620)
* server : add try..catch to places not covered by set_exception_handler
* log_server_request: rm try catch, add reminder
Radoslav Gerganov [Tue, 4 Feb 2025 16:16:20 +0000 (18:16 +0200)]
arg : list RPC devices first when using --list-devices (#11655)
List devices in the same order as they appear when evaluating the model
and splitting tensors across devices, i.e. RPC devices come first in the
list.
ref #11435
Olivier Chafik [Tue, 4 Feb 2025 15:48:53 +0000 (15:48 +0000)]
`tool-call`: command r7b fix for normal responses (#11608)
* fix command r7b normal response regex + add to server test
* test multiline non-tool-call responses in test-chat
Shelby Jenkins [Tue, 4 Feb 2025 11:20:55 +0000 (05:20 -0600)]
readme : add llm_client Rust crate to readme bindings (#11628)
[This crate](https://github.com/ShelbyJenkins/llm_client) has been in a usable state for quite a while, so I figured it's fair to add it now.
It installs from crates.io, and automatically downloads the llama.cpp repo and builds it for the target platform - with the goal being the easiest user experience possible.
It also integrates model presets and chooses the largest quant given the target's available VRAM. So a user just has to specify one of the presets (I manually add the most popular models), and it will download from Hugging Face.
So, it's like a Rust Ollama, but it's not really for chatting. It makes heavy use of llama.cpp's grammar system to do structured output for decision making and control flow tasks.
Jhen-Jie Hong [Tue, 4 Feb 2025 11:15:24 +0000 (19:15 +0800)]
swift : fix llama-vocab api usage (#11645)
* swiftui : fix vocab api usage
* batched.swift : fix vocab api usage
Jhen-Jie Hong [Tue, 4 Feb 2025 11:07:18 +0000 (19:07 +0800)]
metal : use residency set for other platforms (#11648)
Georgi Gerganov [Tue, 4 Feb 2025 11:04:10 +0000 (13:04 +0200)]
authors : update
Georgi Gerganov [Tue, 4 Feb 2025 10:59:21 +0000 (12:59 +0200)]
sync : ggml
Christian Kastner [Mon, 3 Feb 2025 23:17:15 +0000 (00:17 +0100)]
cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)
This makes git optional as a dependency, and is useful in the case where
ggml is built not from git, but from a tarball or a distribution source
package.
This conditional also affects GGML_BUILD_COMMIT. Nothing seems to be
using it, though, so there doesn't seem to be much value in factoring it
out, or even requiring it.
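Presumably this is passed in as a CMake cache variable by the packager, along the lines of (illustrative value):
```console
cmake -B build -DGGML_BUILD_NUMBER=4651
```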
Georgi Gerganov [Tue, 4 Feb 2025 07:30:42 +0000 (09:30 +0200)]
ci : do not stale-close roadmap issues
Olivier Chafik [Mon, 3 Feb 2025 23:49:27 +0000 (23:49 +0000)]
`tool-call`: allow `--chat-template chatml` w/ `--jinja`, default to chatml upon parsing issue, avoid double bos (#11616)
* tool-call: allow `--jinja --chat-template chatml`
* fix double bos issue (drop bos/eos tokens from jinja template)
* add missing try catch around jinja parsing to default to chatml
* Simplify default chatml logic
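For example, this combination is now accepted (invocation sketch; the model file name is illustrative):
```console
./build/bin/llama-server -m model.gguf --jinja --chat-template chatml
```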
Xuan-Son Nguyen [Mon, 3 Feb 2025 23:10:52 +0000 (00:10 +0100)]
server : (webui) revert hacky solution from #11626 (#11634)
Woof Dog [Mon, 3 Feb 2025 22:16:27 +0000 (22:16 +0000)]
server : (webui) allow typing and submitting during llm response (#11626)
Daniel Bevenius [Mon, 3 Feb 2025 15:45:38 +0000 (16:45 +0100)]
server : remove CPPHTTPLIB_NO_EXCEPTIONS define (#11622)
This commit removes the CPPHTTPLIB_NO_EXCEPTIONS define from the server
code.
The motivation for this is that when using a debug build the server
would crash when an exception was thrown, terminating the server
process, as it was unhandled. When CPPHTTPLIB_NO_EXCEPTIONS is set,
cpp-httplib will not call the exception handler, which would normally
return a 500 error to the client. This caused tests to fail when using
a debug build.
Fixes: https://github.com/ggerganov/llama.cpp/issues/11613
Georgi Gerganov [Mon, 3 Feb 2025 12:57:08 +0000 (14:57 +0200)]
sync : ggml
Johannes Gäßler [Mon, 3 Feb 2025 12:25:56 +0000 (13:25 +0100)]
CUDA: fix Volta FlashAttention logic (#11615)
mashdragon [Mon, 3 Feb 2025 09:42:55 +0000 (09:42 +0000)]
server : (webui) Fix Shift+Enter handling (#11609)
* Fix Shift+Enter handling
`exact` on the Enter handler means the message is not sent when Shift+Enter is pressed anyway
* build index.html.gz
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Johannes Gäßler [Sun, 2 Feb 2025 22:48:29 +0000 (23:48 +0100)]
HIP: fix flash_attn_stream_k_fixup warning (#11604)
uvos [Sun, 2 Feb 2025 21:40:09 +0000 (22:40 +0100)]
CUDA/HIP: add support for selectable warp size to mmv (#11519)
CUDA/HIP: add support for selectable warp size to mmv
uvos [Sun, 2 Feb 2025 21:08:05 +0000 (22:08 +0100)]
HIP: add GGML_CUDA_CC_IS_* for AMD families, as increasing cc architectures for AMD GPUs are not supersets of each other (#11601)
This fixes a bug where RDNA1 GPUs other than gfx1010 were not handled correctly
Olivier Chafik [Sun, 2 Feb 2025 19:58:34 +0000 (19:58 +0000)]
nit: more informative crash when grammar sampler fails (#11593)
Johannes Gäßler [Sun, 2 Feb 2025 18:31:09 +0000 (19:31 +0100)]
CUDA: use mma PTX instructions for FlashAttention (#11583)
* CUDA: use mma PTX instructions for FlashAttention
* __shfl_sync workaround for movmatrix
* add __shfl_sync to HIP
Co-authored-by: Diego Devesa <redacted>
Eric Curtin [Sun, 2 Feb 2025 15:14:48 +0000 (16:14 +0100)]
Name colors (#11573)
It's more descriptive; use #define's so we can use compile-time
concatenation (see the sketch below).
Signed-off-by: Eric Curtin <redacted>
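The pattern being referred to: adjacent string literals are concatenated at compile time, so named color macros can be spliced directly into format strings with no runtime cost (generic sketch, not the exact names used in the patch):
```cpp
#include <cstdio>

#define COL_RED     "\033[31m"
#define COL_DEFAULT "\033[0m"

int main() {
    // COL_RED "error:" COL_DEFAULT collapses into a single string literal
    printf(COL_RED "error:" COL_DEFAULT " something went wrong\n");
    return 0;
}
```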
Olivier Chafik [Sun, 2 Feb 2025 09:25:38 +0000 (09:25 +0000)]
`tool-call`: support Command R7B (+ return tool_plan "thoughts" in API) (#11585)
* `tool-call`: support Command R7B (w/ tool_plan return)
* `tool-call`: cleaner preservation of tokens + warn when likely bad chat template override
* `tool-call`: test cleanup / handle lazy grammar triggers
Olivier Chafik [Sun, 2 Feb 2025 09:10:15 +0000 (09:10 +0000)]
Fix exotic ci env that lacks ostringstream::str (#11581)