Junil Kim [Wed, 15 Jan 2025 13:17:42 +0000 (22:17 +0900)]
fix: ggml: fix vulkan-shaders-gen build (#10448)
* fix: ggml: fix vulkan-shaders-gen build
The vulkan-shaders-gen target was not being built correctly
when cross-compiling.
Other outputs need to be built for the cross-compilation target,
but vulkan-shaders-gen needs to be built for the host.
* refactor: ggml: Improve vulkan-shaders-gen toolchain setup
- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.
* refactor: ggml: Improve vulkan-shaders-gen toolchain setup
Use configure_file to generate host_toolchain.cmake from template
* fix: ggml: Fix compile error
Fix compile error not finding vulkan-shaders-gen
* fix: vulkan-shaders-gen build and path handling
Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation
* fix: improve host compiler detection in vulkan shader build
Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation
* refactor: Simplify CMake function for detecting host compiler
Simplified the CMake function to improve the process of detecting the host compiler.
* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt
Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0558fc3ecb8a5af69d2ece02fae4710ade)
* refactor: Rename host_toolchain.cmake.in
- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in
* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
Johannes Gäßler [Wed, 15 Jan 2025 11:51:37 +0000 (12:51 +0100)]
RoPE: fix back, CUDA support for back + noncont. (#11240)
* RoPE: fix back, CUDA support for back + noncont.
* fix comments reg. non-cont. RoPE support [no-ci]
Daniel Bevenius [Wed, 15 Jan 2025 04:44:38 +0000 (05:44 +0100)]
examples : add embd_to_audio to tts-outetts.py [no ci] (#11235)
This commit contains a suggestion for adding the missing embd_to_audio
function from tts.cpp to tts-outetts.py. This introduces a dependency on
numpy, and I was not sure whether that is acceptable (only PyTorch was
mentioned in the referenced PR).
Also the README has been updated with instructions to run the example
with llama-server and the python script.
Refs: https://github.com/ggerganov/llama.cpp/pull/10784#issuecomment-2548377734
Akarshan Biswas [Wed, 15 Jan 2025 03:20:17 +0000 (08:50 +0530)]
SYCL: Add gated linear attention kernel (#11175)
* SYCL: Add Gated Linear attention kernel
* glahpp: add a space at the end of file
* gla: Put the barrier inside the main logic loop
Xuan Son Nguyen [Tue, 14 Jan 2025 14:42:23 +0000 (15:42 +0100)]
ci : add -no-cnv for tests (#11238)
Georgi Gerganov [Tue, 14 Jan 2025 10:54:58 +0000 (12:54 +0200)]
vocab : add dummy tokens for "no_vocab" type (#11231)
* vocab : add dummy tokens for "no_vocab" type
ggml-ci
* vocab : minor [no ci]
ebraminio [Tue, 14 Jan 2025 10:39:33 +0000 (14:09 +0330)]
server : Improve code snippets direction between RTL text (#11221)
Olivier Chafik [Tue, 14 Jan 2025 10:16:41 +0000 (10:16 +0000)]
Refactor test-chat-template.cpp (#11224)
* Refactor test-chat-template
* Update test-chat-template.cpp
Georgi Gerganov [Tue, 14 Jan 2025 08:39:42 +0000 (10:39 +0200)]
sync : ggml
Georgi Gerganov [Tue, 14 Jan 2025 07:40:15 +0000 (09:40 +0200)]
scripts : sync gguf (cont)
Georgi Gerganov [Tue, 14 Jan 2025 07:36:58 +0000 (09:36 +0200)]
scripts : sync gguf
Georgi Gerganov [Tue, 14 Jan 2025 07:19:58 +0000 (09:19 +0200)]
scripts : sync opencl
ebraminio [Mon, 13 Jan 2025 19:23:31 +0000 (22:53 +0330)]
server : (UI) Improve messages bubble shape in RTL (#11220)
I had simply overlooked the message bubble's tail placement for RTL
text, as I use dark mode where it isn't visible; this fixes it.
Xuan Son Nguyen [Mon, 13 Jan 2025 19:18:12 +0000 (20:18 +0100)]
cli : auto activate conversation mode if chat template is available (#11214)
* cli : auto activate conversation mode if chat template is detected
* add warn on bad template
* update readme (writing with the help of chatgpt)
* update readme (2)
* do not activate -cnv for non-instruct models
Andreas Kieslinger [Mon, 13 Jan 2025 15:45:53 +0000 (16:45 +0100)]
cuda : CUDA Graph Compute Function Refactor (precursor for performance improvements) (#11042)
* Refactor: Moves cuda graph executable update step to separate function.
* Refactor: Moves cuda graph update check to separate function.
* Refactor: Moves cuda graph maintenance (update or adjusting copy parameters) to separate function for improved readability.
* Fix: Adds missing reference to maintain_cuda_graph() definition.
* Refactor: Improves structure and abstractions by moving CUDA graph evaluation and capture to its own function.
* Refactor: Moves node graph checks and copy ops into individual function for improved readability.
* Refactor: Removes code permanently excluded from compilation to increase readability.
* Style: Adds missing newline
* Style: Consolidates several neighboring '#ifdef USE_CUDA_GRAPH' into a single one
* Refactor: Makes 'cuda_graph_update_required' a local variable
* remove double lines between functions
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Mon, 13 Jan 2025 13:59:26 +0000 (15:59 +0200)]
contrib : add naming guidelines (cont) (#11177)
ebraminio [Mon, 13 Jan 2025 13:46:39 +0000 (17:16 +0330)]
server : (UI) Support for RTL text as models input or output (#11208)
Georgi Gerganov [Mon, 13 Jan 2025 13:08:44 +0000 (15:08 +0200)]
contrib : add naming guidelines (cont) (#11177)
Xuan Son Nguyen [Mon, 13 Jan 2025 12:56:23 +0000 (13:56 +0100)]
common : support tag-based --hf-repo like on ollama (#11195)
* common : support tag-based hf_repo like on ollama
* fix build
* various fixes
* small fixes
* fix style
* fix windows build?
* move common_get_hf_file to common.cpp
* fix complain with noreturn
Georgi Gerganov [Mon, 13 Jan 2025 12:46:36 +0000 (14:46 +0200)]
contrib : add naming guidelines (#11177)
* contrib : add naming guidelines
* contrib : expand naming guidelines [no ci]
* contrib : cont [no ci]
* contrib : add `_t` suffix guideline [no ci]
* contrib : cont [no ci]
* minor [no ci]
* contrib : move coding guidelines to correct section [no ci]
* contrib : minor reword coding guidelines [no ci]
* contrib : add TODO for preprocessor directives [no ci]
* contrib : expand [no ci]
* minor [no ci]
* contrib : clarify `_context` suffix usage [no ci]
* contrib : filename guidelines [no ci]
* contrib : fix notes [no ci]
Daniel Bevenius [Mon, 13 Jan 2025 12:38:20 +0000 (13:38 +0100)]
llama : remove 'd' from bad special token log (#11212)
This commit removes the 'd' from the log message in llama-vocab.cpp
when logging a bad special token.
The motivation for this is that currently the output can look something
like the following:
```console
load: bad special token: 'tokenizer.ggml.image_token_id' = 128256d, using default id -1
```
Radoslav Gerganov [Mon, 13 Jan 2025 11:31:41 +0000 (13:31 +0200)]
ggml : do not define GGML_USE_CUDA when building with GGML_BACKEND_DL (#11211)
Build fails when using HIP and GGML_BACKEND_DL:
```
/usr/bin/ld: ../ggml/src/libggml.so: undefined reference to `ggml_backend_cuda_reg'
collect2: error: ld returned 1 exit status
```
This patch fixes this.
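As an illustration of why the define matters (a sketch, not the actual llama.cpp code): any translation unit compiled with GGML_USE_CUDA may reference the CUDA backend's registration symbol, which only exists when the backend is linked statically rather than loaded as a dynamic module via GGML_BACKEND_DL. The registration site below is hypothetical.
```cpp
// Illustrative sketch only; the registration site shown here is hypothetical.
#include "ggml-backend.h"
#ifdef GGML_USE_CUDA
#include "ggml-cuda.h"
#endif

static void register_static_backends(void) {
#ifdef GGML_USE_CUDA
    // If GGML_USE_CUDA is defined but the CUDA backend is built as a dynamic
    // module (GGML_BACKEND_DL), this reference fails at link time with
    // "undefined reference to `ggml_backend_cuda_reg'".
    ggml_backend_register(ggml_backend_cuda_reg());
#endif
}
```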
Eric Curtin [Sun, 12 Jan 2025 18:23:10 +0000 (18:23 +0000)]
Reset color before we exit (#11205)
We don't want colors to leak post termination of llama-run.
Signed-off-by: Eric Curtin <redacted>
Xuan Son Nguyen [Sun, 12 Jan 2025 12:45:14 +0000 (13:45 +0100)]
llama : fix chat template gguf key (#11201)
Georgi Gerganov [Sun, 12 Jan 2025 10:15:53 +0000 (12:15 +0200)]
llama : remove notion of CLS token (#11064)
ggml-ci
Georgi Gerganov [Sun, 12 Jan 2025 09:32:42 +0000 (11:32 +0200)]
llama : add `llama_vocab`, functions -> methods, naming (#11110)
* llama : functions -> methods (#11110)
* llama : add struct llama_vocab to the API (#11156)
ggml-ci
* hparams : move vocab params to llama_vocab (#11159)
ggml-ci
* vocab : more pimpl (#11165)
ggml-ci
* vocab : minor tokenization optimizations (#11160)
ggml-ci
Co-authored-by: Diego Devesa <redacted>
* lora : update API names (#11167)
ggml-ci
* llama : update API names to use correct prefix (#11174)
* llama : update API names to use correct prefix
ggml-ci
* cont
ggml-ci
* cont
ggml-ci
* minor [no ci]
* vocab : llama_vocab_add_[be]os -> llama_vocab_get_add_[be]os (#11174)
ggml-ci
* vocab : llama_vocab_n_vocab -> llama_vocab_n_tokens (#11174)
ggml-ci
---------
Co-authored-by: Diego Devesa <redacted>
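As a quick orientation for the renames above, a minimal usage sketch of the new vocab accessors, assuming the names exported by llama.h after this change:
```cpp
#include <cstdio>
#include "llama.h"

// Print basic vocab info using the renamed accessors (sketch only).
static void print_vocab_info(const struct llama_model * model) {
    const struct llama_vocab * vocab = llama_model_get_vocab(model);

    const int32_t n_tokens = llama_vocab_n_tokens(vocab);    // was llama_vocab_n_vocab
    const bool    add_bos  = llama_vocab_get_add_bos(vocab); // was llama_vocab_add_bos

    printf("n_tokens = %d, add_bos = %d\n", n_tokens, add_bos);
}
```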
Vinesh Janarthanan [Sat, 11 Jan 2025 09:42:31 +0000 (03:42 -0600)]
gguf-py: fixed local detection of gguf package (#11180)
* updated path to gguf package for non-installed setups
* added reader.py to readme
* Bumped gguf version to 0.15.0
Daniel Bevenius [Sat, 11 Jan 2025 04:50:33 +0000 (05:50 +0100)]
convert : sort print supported models [no ci] (#11179)
This commit sorts the list of supported models when printing them out.
The motivation for this change is to make it easier to find a specific
model in the list of supported models. For example:
```console
$ ./convert_hf_to_gguf.py --print-supported-models
Supported models:
- ArcticForCausalLM
- BaiChuanForCausalLM
- BaichuanForCausalLM
- BertForMaskedLM
- BertModel
- BitnetForCausalLM
- BloomForCausalLM
- BloomModel
- CamembertModel
- ChameleonForCausalLM
- ChameleonForConditionalGeneration
- ChatGLMForConditionalGeneration
- ChatGLMModel
- CodeShellForCausalLM
- Cohere2ForCausalLM
- CohereForCausalLM
- DbrxForCausalLM
- DeciLMForCausalLM
- DeepseekForCausalLM
- DeepseekV2ForCausalLM
- DeepseekV3ForCausalLM
- ExaoneForCausalLM
- FalconForCausalLM
- FalconMambaForCausalLM
- GPT2LMHeadModel
- GPTBigCodeForCausalLM
- GPTNeoXForCausalLM
- GPTRefactForCausalLM
- Gemma2ForCausalLM
- GemmaForCausalLM
- GraniteForCausalLM
- GraniteMoeForCausalLM
- GrokForCausalLM
- InternLM2ForCausalLM
- JAISLMHeadModel
- JinaBertForMaskedLM
- JinaBertModel
- LLaMAForCausalLM
- LlamaForCausalLM
- LlavaStableLMEpochForCausalLM
- MPTForCausalLM
- MT5ForConditionalGeneration
- MambaForCausalLM
- MambaLMHeadModel
- MiniCPM3ForCausalLM
- MiniCPMForCausalLM
- MistralForCausalLM
- MixtralForCausalLM
- NemotronForCausalLM
- NomicBertModel
- OLMoForCausalLM
- Olmo2ForCausalLM
- OlmoForCausalLM
- OlmoeForCausalLM
- OpenELMForCausalLM
- OrionForCausalLM
- Phi3ForCausalLM
- PhiForCausalLM
- PhiMoEForCausalLM
- PlamoForCausalLM
- QWenLMHeadModel
- Qwen2ForCausalLM
- Qwen2MoeForCausalLM
- Qwen2VLForConditionalGeneration
- RWForCausalLM
- RWKV6Qwen2ForCausalLM
- RobertaModel
- Rwkv6ForCausalLM
- StableLMEpochForCausalLM
- StableLmForCausalLM
- Starcoder2ForCausalLM
- T5EncoderModel
- T5ForConditionalGeneration
- T5WithLMHeadModel
- UMT5ForConditionalGeneration
- WavTokenizerDec
- XLMRobertaForSequenceClassification
- XLMRobertaModel
- XverseForCausalLM
```
Daniel Bevenius [Fri, 10 Jan 2025 12:16:16 +0000 (13:16 +0100)]
examples : add README.md to tts example [no ci] (#11155)
* examples : add README.md to tts example [no ci]
* squash! examples : add README.md to tts example [no ci]
Fix heading to be consistent with other examples, and add a quickstart
section to README.md.
* squash! examples : add README.md to tts example [no ci]
Fix spelling mistake.
Daniel Bevenius [Fri, 10 Jan 2025 10:30:53 +0000 (11:30 +0100)]
convert : add --print-supported-models option (#11172)
* convert : add --print-supported-models option
This commit adds a new option to the convert_hf_to_gguf.py script to
print the supported models.
The motivation for this is that it can be useful to know which models
are supported by the script without having to look at the code.
Example usage:
```console
$ ./convert_hf_to_gguf.py --print-supported-models
Supported models:
- GPTNeoXForCausalLM
- BloomForCausalLM
- BloomModel
- MPTForCausalLM
- OrionForCausalLM
- BaichuanForCausalLM
- BaiChuanForCausalLM
- XverseForCausalLM
- FalconForCausalLM
- RWForCausalLM
- GPTBigCodeForCausalLM
- GPTRefactForCausalLM
- StableLmForCausalLM
- StableLMEpochForCausalLM
- LlavaStableLMEpochForCausalLM
- LLaMAForCausalLM
- LlamaForCausalLM
- MistralForCausalLM
- MixtralForCausalLM
- DeciLMForCausalLM
- BitnetForCausalLM
- GrokForCausalLM
- DbrxForCausalLM
- MiniCPMForCausalLM
- MiniCPM3ForCausalLM
- QWenLMHeadModel
- Qwen2ForCausalLM
- Qwen2VLForConditionalGeneration
- WavTokenizerDec
- Qwen2MoeForCausalLM
- GPT2LMHeadModel
- PhiForCausalLM
- Phi3ForCausalLM
- PhiMoEForCausalLM
- PlamoForCausalLM
- CodeShellForCausalLM
- InternLM2ForCausalLM
- BertModel
- BertForMaskedLM
- CamembertModel
- RobertaModel
- NomicBertModel
- XLMRobertaModel
- XLMRobertaForSequenceClassification
- GemmaForCausalLM
- Gemma2ForCausalLM
- Starcoder2ForCausalLM
- Rwkv6ForCausalLM
- RWKV6Qwen2ForCausalLM
- MambaForCausalLM
- MambaLMHeadModel
- FalconMambaForCausalLM
- CohereForCausalLM
- Cohere2ForCausalLM
- OLMoForCausalLM
- OlmoForCausalLM
- Olmo2ForCausalLM
- OlmoeForCausalLM
- JinaBertModel
- JinaBertForMaskedLM
- OpenELMForCausalLM
- ArcticForCausalLM
- DeepseekForCausalLM
- DeepseekV3ForCausalLM
- DeepseekV2ForCausalLM
- UMT5ForConditionalGeneration
- MT5ForConditionalGeneration
- T5ForConditionalGeneration
- T5WithLMHeadModel
- T5EncoderModel
- JAISLMHeadModel
- ChatGLMModel
- ChatGLMForConditionalGeneration
- NemotronForCausalLM
- ExaoneForCausalLM
- GraniteForCausalLM
- GraniteMoeForCausalLM
- ChameleonForCausalLM
- ChameleonForConditionalGeneration
```
* squash! convert : add --print-supported-models option
Fix flake8 error.
0cc4m [Fri, 10 Jan 2025 05:39:33 +0000 (06:39 +0100)]
Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (#11161)
* Vulkan: Remove float16 use in shaders
* Fix validation error about subgroup_size_control extension
Molly Sophia [Fri, 10 Jan 2025 01:58:08 +0000 (09:58 +0800)]
llama: add support for QRWKV6 model architecture (#11001)
* WIP: Add support for RWKV6Qwen2
Signed-off-by: Molly Sophia <redacted>
* RWKV: Some graph simplification
Signed-off-by: Molly Sophia <redacted>
* Add support for RWKV6Qwen2 with cpu and cuda GLA
Signed-off-by: Molly Sophia <redacted>
* RWKV6[QWEN2]: Concat lerp weights together to reduce cpu overhead
Signed-off-by: Molly Sophia <redacted>
* Fix some typos
Signed-off-by: Molly Sophia <redacted>
* code format changes
Signed-off-by: Molly Sophia <redacted>
* Fix wkv test & add gla test
Signed-off-by: Molly Sophia <redacted>
* Fix cuda warning
Signed-off-by: Molly Sophia <redacted>
* Update README.md
Signed-off-by: Molly Sophia <redacted>
* Update ggml/src/ggml-cuda/gla.cu
Co-authored-by: Georgi Gerganov <redacted>
* Fix fused lerp weights loading with RWKV6
Signed-off-by: Molly Sophia <redacted>
* better sanity check skipping for QRWKV6 in llama-quant
thanks @compilade
Signed-off-by: Molly Sophia <redacted>
Co-authored-by: compilade <redacted>
---------
Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: compilade <redacted>
Akarshan Biswas [Fri, 10 Jan 2025 00:13:03 +0000 (05:43 +0530)]
SYCL: Refactor ggml_sycl_compute_forward (#11121)
* SYCL: refactor ggml_sycl_compute_forward
* SYCL: add back GGML_USED(dst) to ggml_sycl_cpy
* SYCL: add function name to noop debug
* SYCL: Some device info print refactoring and add details of XMX availability
Tei Home [Thu, 9 Jan 2025 11:32:06 +0000 (19:32 +0800)]
doc: add cuda guide for fedora (#11135)
Since NVIDIA does not release CUDA for in-maintenance versions of Fedora, the process of setting up the CUDA toolkit on Fedora has become quite involved. This guide should help mere mortals install CUDA for development in a Fedora 39 toolbox environment, without affecting the host system.
Daniel Bevenius [Thu, 9 Jan 2025 10:28:29 +0000 (11:28 +0100)]
server : add tooltips to settings and themes btn (#11154)
* server : add tooltips to settings and themes btn
This commit adds tooltips to the settings and themes buttons in the
webui. The tooltip will be displayed below the actual buttons when
hovered over.
The motivation for this change is to clarify the purpose of the themes
button.
* squash! server : add tooltips to settings and themes btn
This commit adds a tooltip to the '...' button when a chat has been
started. The tooltip is "Chat options", which I think could be a good
description, as the dropdown contains options to delete or download the
current chat.
* rm tooltip for 3 dots button
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Pierrick Hymbert [Thu, 9 Jan 2025 10:21:41 +0000 (11:21 +0100)]
model: Add support for PhiMoE arch (#11003)
* model: support phimoe
* python linter
* doc: minor
Co-authored-by: ThiloteE <redacted>
* doc: minor
Co-authored-by: ThiloteE <redacted>
* doc: add phimoe as supported model
ggml-ci
---------
Co-authored-by: ThiloteE <redacted>
Georgi Gerganov [Thu, 9 Jan 2025 09:15:15 +0000 (11:15 +0200)]
media : remove old img [no ci]
Xuan Son Nguyen [Thu, 9 Jan 2025 09:07:33 +0000 (10:07 +0100)]
llama-chat : add phi 4 template (#11148)
hydai [Wed, 8 Jan 2025 20:03:28 +0000 (04:03 +0800)]
fix: add missing msg in static_assert (#11143)
Signed-off-by: hydai <redacted>
Vinesh Janarthanan [Wed, 8 Jan 2025 18:54:58 +0000 (12:54 -0600)]
gguf-py : move scripts directory (#11116)
* Moved scripts dir and fixed pyproject.toml
* updated readme
* fixed README urls
* bump pypi gguf to v0.14.0
* retrigger ci
* empty commit - trigger ci
Eric Curtin [Wed, 8 Jan 2025 18:47:05 +0000 (18:47 +0000)]
Enhance user input handling for llama-run (#11138)
The main motivation for this change is that it was not handling
ctrl-c/ctrl-d correctly. Modify `read_user_input` to handle EOF,
the "/bye" command, and empty input cases. Introduce a `get_user_input`
function to manage the user input loop and handle the different return
cases.
Signed-off-by: Eric Curtin <redacted>
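A minimal sketch of the described flow (the return codes and exact structure are assumptions, not the actual llama-run code):
```cpp
#include <cstdio>
#include <iostream>
#include <string>

// 0 = got input, 1 = stop (EOF or "/bye"), 2 = empty input, try again
static int read_user_input(std::string & user_input) {
    std::getline(std::cin, user_input);
    if (std::cin.eof()) {      // ctrl-d / end of stream
        printf("\n");
        return 1;
    }
    if (user_input == "/bye") {
        return 1;
    }
    if (user_input.empty()) {
        return 2;
    }
    return 0;
}

// Loop until we either get usable input or the user asks to quit.
static int get_user_input(std::string & user_input) {
    for (;;) {
        const int ret = read_user_input(user_input);
        if (ret == 2) {
            continue;
        }
        return ret;
    }
}
```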
Xuan Son Nguyen [Wed, 8 Jan 2025 15:09:20 +0000 (16:09 +0100)]
ci : use actions from ggml-org (#11140)
Xuan Son Nguyen [Wed, 8 Jan 2025 14:59:53 +0000 (15:59 +0100)]
lora : improve compat with `mergekit-extract-lora` (#11131)
* (wip) support mergekit-extracted lora
* support mergekit-extract-lora
* use lora->get_scale
* correct comment
* correct norm name & condition
* add some hints
Georgi Gerganov [Wed, 8 Jan 2025 14:19:36 +0000 (16:19 +0200)]
llama : avoid hardcoded QK_K (#11061)
ggml-ci
Georgi Gerganov [Wed, 8 Jan 2025 11:40:30 +0000 (13:40 +0200)]
sync : ggml
Radoslav Gerganov [Sun, 5 Jan 2025 07:50:37 +0000 (09:50 +0200)]
ggml : allow loading backend with env variable (ggml/1059)
ref: #1058
Xuan Son Nguyen [Wed, 8 Jan 2025 11:07:20 +0000 (12:07 +0100)]
ci : pin dependency to specific version (#11137)
* ci : pin dependency to specific version
* will this fix ec?
Georgi Gerganov [Wed, 8 Jan 2025 10:55:36 +0000 (12:55 +0200)]
arg : option to exclude arguments from specific examples (#11136)
* arg : option to exclude arguments from specific examples
ggml-ci
* readme : remove old args [no ci]
amritahs-ibm [Wed, 8 Jan 2025 10:54:19 +0000 (16:24 +0530)]
llamafile : ppc64le MMA INT8 implementation (#10912)
This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the quantised int8 datatype.
This change results in a 10% - 70% improvement
in total speed (i.e. all tokens / total time) across
various batch sizes.
The patch is tested with the Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.
Signed-off-by: Amrita H S <redacted>
Georgi Gerganov [Wed, 8 Jan 2025 09:29:34 +0000 (11:29 +0200)]
ci : fix cmake option (#11125)
Mathieu Baudier [Wed, 8 Jan 2025 08:18:13 +0000 (09:18 +0100)]
Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (#11117)
* Disable GL_KHR_cooperative_matrix Vulkan extension if not available.
* Perform Vulkan extensions checks in a more sensible order
* Remove unnecessary #ifdef directive
ag2s20150909 [Wed, 8 Jan 2025 08:17:29 +0000 (16:17 +0800)]
fix: Vulkan shader gen binary path when Cross-compiling (#11096)
* fix: Vulkan shader gen binary path when cross compiling
Johannes Gäßler [Tue, 7 Jan 2025 17:01:58 +0000 (18:01 +0100)]
GGUF: C++ refactor, backend support, misc fixes (#11030)
* GGUF: C++ refactor, backend support, misc fixes
remove ggml_tensor.backend
update CODEOWNERS [no ci]
remove gguf_get_data from API
revise GGUF API data types
Diego Devesa [Tue, 7 Jan 2025 15:11:57 +0000 (16:11 +0100)]
ggml-backend : only offload from host buffers (fix) (#11124)
Diego Devesa [Tue, 7 Jan 2025 11:38:05 +0000 (12:38 +0100)]
ggml-backend : only offload from host buffers (#11120)
Radoslav Gerganov [Tue, 7 Jan 2025 06:37:02 +0000 (08:37 +0200)]
rpc : code cleanup (#11107)
Remove duplicated macros, use GGML_LOG_ERROR for errors
Akarshan Biswas [Tue, 7 Jan 2025 06:26:07 +0000 (11:56 +0530)]
SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6 (#11087)
* SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6
* Revert "SYCL: Use get_multi_ptr instead of deprecated get_pointer in wkv6"
This reverts commit f62dc45f318e48d375e7734b34cbddee81deed52.
* Reland: Use get_multi_ptr instead of deprecated get_pointer in wkv6
Eric Curtin [Mon, 6 Jan 2025 22:45:28 +0000 (22:45 +0000)]
llama-run : fix context size (#11094)
Set `n_ctx` equal to `n_batch` in the `Opt` class. The context size is now
a more reasonable 2048.
Signed-off-by: Eric Curtin <redacted>
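A sketch of the idea using the standard llama_context_params fields (an illustration, not the actual Opt class):
```cpp
#include "llama.h"

// Keep the context size in sync with the batch size.
llama_context_params make_ctx_params(void) {
    llama_context_params params = llama_context_default_params();
    params.n_batch = 2048;
    params.n_ctx   = params.n_batch; // previously left at a much smaller default
    return params;
}
```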
Georgi Gerganov [Mon, 6 Jan 2025 15:52:35 +0000 (17:52 +0200)]
llama : remove unused headers (#11109)
ggml-ci
Xuan Son Nguyen [Mon, 6 Jan 2025 15:34:49 +0000 (16:34 +0100)]
github : add cmd line field to bug report (#11090)
* github : cmd line to bug report
* codeowners : (@ngxson) only watch dockerfile
* Apply suggestions from code review [no ci]
Co-authored-by: Johannes Gäßler <redacted>
* rm cmd in log output [no ci]
* rm 2 [no ci]
* no need backticks [no ci]
---------
Co-authored-by: Johannes Gäßler <redacted>
Georgi Gerganov [Mon, 6 Jan 2025 13:36:08 +0000 (15:36 +0200)]
server : fix extra BOS in infill endpoint (#11106)
* server : fix extra BOS in infill endpoint
ggml-ci
* server : update infill tests
Xuan Son Nguyen [Mon, 6 Jan 2025 12:41:12 +0000 (13:41 +0100)]
llama : remove check flash_attn with lora (#11104)
Asghar Ghorbani [Mon, 6 Jan 2025 11:21:46 +0000 (12:21 +0100)]
llama : prevent system info string accumulation across calls (#11101)
Daniel Bevenius [Mon, 6 Jan 2025 09:28:17 +0000 (10:28 +0100)]
llama : rename missed batch params/vars to ubatch (#10059)
This commit renames the `batch` parameter to `ubatch` in the
`llama_kv_cache_find_slot`, `llm_build_inp_embd`, and
`llm_build_mamba` functions.
The motivation for this is that this should have been done as part of
Commit
19d900a7565b8f6b0a708836a57d26966cb9efe2 ("llama : rename batch
to ubatch (#9950)") but for some reason I missed these functions in
that commit and only noticed them now (sorry).
Georgi Gerganov [Mon, 6 Jan 2025 08:55:18 +0000 (10:55 +0200)]
llama : update llama_model API names (#11063)
* llama : deprecate llama_free_model, add llama_model_free
ggml-ci
* llama : change `llama_load_model_from_file` -> `llama_model_load_from_file`
ggml-ci
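A minimal usage sketch with the new names, assuming a local model path:
```cpp
#include "llama.h"

int main(void) {
    llama_model_params mparams = llama_model_default_params();

    // was: llama_load_model_from_file
    llama_model * model = llama_model_load_from_file("model.gguf", mparams);
    if (model == NULL) {
        return 1;
    }

    // was: llama_free_model (now deprecated)
    llama_model_free(model);
    return 0;
}
```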
Georgi Gerganov [Mon, 6 Jan 2025 08:54:25 +0000 (10:54 +0200)]
tokenize : escape the prompt (#11058)
* tokenize : escape the prompt
* tokenize : update help
Georgi Gerganov [Mon, 6 Jan 2025 08:52:38 +0000 (10:52 +0200)]
mmap : fix fileno macro clash (#11076)
* mmap : fix fileno macro clash
ggml-ci
* cont
ggml-ci
Georgi Gerganov [Mon, 6 Jan 2025 08:52:15 +0000 (10:52 +0200)]
llama : use LLAMA_TOKEN_NULL (#11062)
ggml-ci
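LLAMA_TOKEN_NULL is defined as -1 in llama.h; the change replaces scattered literal -1 checks with the named constant, e.g. (sketch):
```cpp
#include "llama.h"

static bool token_is_set(llama_token tok) {
    return tok != LLAMA_TOKEN_NULL; // was: tok != -1
}
```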
Georgi Gerganov [Mon, 6 Jan 2025 08:52:01 +0000 (10:52 +0200)]
llama : use _impl suffix instead of _internal (#11060)
ggml-ci
Johannes Gäßler [Mon, 6 Jan 2025 01:33:52 +0000 (02:33 +0100)]
CUDA: add BF16 support (#11093)
* CUDA: add BF16 support
0cc4m [Sat, 4 Jan 2025 20:09:59 +0000 (21:09 +0100)]
Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074)
* Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver
* Add (TM) to AMD name check
fairydreaming [Sat, 4 Jan 2025 20:06:11 +0000 (21:06 +0100)]
llama : Add support for DeepSeek V3 (#11049)
* convert : extend DEEPSEEK2 model architecture to support DeepseekV3ForCausalLM by adding EXPERT_WEIGHTS_NORM and EXPERT_GATING_FUNC model parameters and FFN_EXP_PROBS_B tensor type
* vocab : add DeepSeek V3 pre-tokenizer regexes
* unicode : handle ACCENT_MARK and SYMBOL categories in regex
* llama : add DeepSeek V3 chat template, handle new model parameters and tensor types
---------
Co-authored-by: Stanisław Szymczyk <redacted>
matt23654 [Sat, 4 Jan 2025 16:10:30 +0000 (16:10 +0000)]
[GGML][RPC] Support for models with non-512-aligned tensors over RPC. (#11047)
* Added init tensor calling code
* Added get_alloc_size forwarding
* Cleaned up and improved type/error handling.
* fix: remove trailing whitespaces.
* Cleanup and use GGML error logging functions.
* Handle potentially dangerous edge cases.
* Apply suggestions from code review
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
DAN™ [Sat, 4 Jan 2025 14:33:31 +0000 (09:33 -0500)]
llama : add support for the cohere2 model architecture (#10900)
Georgi Gerganov [Sat, 4 Jan 2025 08:54:01 +0000 (10:54 +0200)]
sync : ggml
Georgi Gerganov [Sat, 4 Jan 2025 08:53:54 +0000 (10:53 +0200)]
ggml : do not install metal source when embed library (ggml/1054)
Daniel Bevenius [Thu, 19 Dec 2024 02:50:12 +0000 (03:50 +0100)]
ggml : improve inputs log sched_print_assignments (ggml/1053)
This commit attempts to improve the log message for the inputs of the
splits in the sched_print_assignments function.
The motivation for this change is that currently, even if there are no
inputs, a colon is displayed at the end of the line, which can be a
little confusing when reading the output, as it could be interpreted as
meaning that the lines below are inputs when they are in fact nodes. With
this change, the colon is only printed if there actually are inputs.
Gilad S. [Sat, 4 Jan 2025 08:17:31 +0000 (10:17 +0200)]
fix: Vulkan shader gen binary path (#11037)
Molly Sophia [Fri, 3 Jan 2025 12:13:18 +0000 (20:13 +0800)]
common : disable KV cache shifting automatically for unsupported models (#11053)
* Disable KV cache shifting automatically for unsupported models
instead of exiting directly
Signed-off-by: Molly Sophia <redacted>
* Update common/common.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 3 Jan 2025 09:26:14 +0000 (11:26 +0200)]
metal : avoid uint (#11019)
Georgi Gerganov [Fri, 3 Jan 2025 08:18:53 +0000 (10:18 +0200)]
llama : refactor `src/llama.cpp` (#10902)
* llama : scatter llama.cpp into multiple modules (wip)
* llama : control-vector -> adapter
* llama : arch
* llama : mmap
ggml-ci
* ci : remove BUILD_SHARED_LIBS=OFF
ggml-ci
* llama : arch (cont)
ggml-ci
* llama : chat
ggml-ci
* llama : model
ggml-ci
* llama : hparams
ggml-ci
* llama : adapter
ggml-ci
* examples : fix
ggml-ci
* rebase
ggml-ci
* minor
* llama : kv cache
ggml-ci
* llama : impl
ggml-ci
* llama : batch
ggml-ci
* cont
ggml-ci
* llama : context
ggml-ci
* minor
* llama : context (cont)
ggml-ci
* llama : model loader
ggml-ci
* common : update lora
ggml-ci
* llama : quant
ggml-ci
* llama : quant (cont)
ggml-ci
* minor [no ci]
Pierrick Hymbert [Thu, 2 Jan 2025 17:06:12 +0000 (18:06 +0100)]
server: bench: minor fixes (#10765)
* server/bench:
- support OpenAI streaming standard output with [DONE]\n\n
- export k6 raw results in CSV
- fix too many idle TCP connections in tcp_wait
- add a metric for time to emit first token
* server/bench:
- fix behavior when Prometheus is not started
- wait for the server to be ready before starting the bench
Xuan Son Nguyen [Thu, 2 Jan 2025 14:05:18 +0000 (15:05 +0100)]
server : allow using LoRA adapters per-request (#10994)
* slot.can_batch_with
* lora per request
* test: force disable cache prompt
* move can_batch_with check
* fix condition
* add slow test with llama 8b
* update docs
* move lora change task to queue
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* lora_base
* remove redundant check
---------
Co-authored-by: Georgi Gerganov <redacted>
Benson Wong [Thu, 2 Jan 2025 07:14:54 +0000 (23:14 -0800)]
readme : add llama-swap to infrastructure section (#11032)
* list llama-swap under tools in README
* readme: add llama-swap to Infrastructure
Srihari-mcw [Tue, 31 Dec 2024 14:23:33 +0000 (19:53 +0530)]
ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027)
* Fixes for clang AVX VNNI
* enable AVX VNNI and alder lake build for MSVC
* Apply suggestions from code review
---------
Co-authored-by: slaren <redacted>
Xuan Son Nguyen [Tue, 31 Dec 2024 14:22:01 +0000 (15:22 +0100)]
server : clean up built-in template detection (#11026)
* server : clean up built-in template detection
* fix compilation
* add chat template test
* fix condition
Xuan Son Nguyen [Tue, 31 Dec 2024 11:34:13 +0000 (12:34 +0100)]
server : add OAI compat for /v1/completions (#10974)
* server : add OAI compat for /v1/completions
* add test
* add docs
* better docs
ymcki [Tue, 31 Dec 2024 11:04:48 +0000 (19:04 +0800)]
convert : fix Llama-3_1-Nemotron-51B rope settings (#11008)
* conflict resolution
* move comments after bracket to its own line
* DeciLMCausalModel now reads rope_theta from config.json properly
Peter [Tue, 31 Dec 2024 00:46:06 +0000 (11:46 +1100)]
common, examples, ggml : fix MSYS2 GCC compiler errors and warnings when building with LLAMA_CURL=ON and GGML_OPENCL=ON (#11013)
In common/common.cpp:
* Convert the use of the stat() function for checking whether a file exists to the standard library function std::filesystem::exists (error: unable to match the correct function signature)
* Add conditions to check whether PATH_MAX is already defined in the WIN32 environment (warning: it is already defined in MSYS2)
In examples/run/run.cpp:
* Add io.h header inclusion (error: cannot find function _get_osfhandle)
* Change initialisers for OVERLAPPED to an empty struct (warning about uninitialised members)
* Add an initialiser for hFile (warning: it may be uninitialised)
* Cast the curl_off_t percentage value to long int in the generate_progress_prefix function (warning that curl_off_t is long long int)
In ggml/src/ggml-opencl/ggml-opencl.cpp:
* Initialise certain declared cl_mem variables to nullptr for greater safety (warning about the B_d variable possibly being used unassigned)
Jeff Bolz [Mon, 30 Dec 2024 17:27:11 +0000 (11:27 -0600)]
vulkan: optimize mul_mat for small values of N (#10991)
Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.
Share some code for reducing the result values to memory in mul_mat_vec_base.
ag2s20150909 [Mon, 30 Dec 2024 12:35:13 +0000 (20:35 +0800)]
android : fix llama_batch free (#11014)
Jeff Bolz [Sun, 29 Dec 2024 09:16:34 +0000 (03:16 -0600)]
vulkan: im2col and matmul optimizations for stable diffusion (#10942)
* tests: Add im2col perf tests
* vulkan: optimize im2col, more elements per thread
* vulkan: increase small tile size for NV_coopmat2
* vulkan: change im2col to 512 elements per workgroup
Jeff Bolz [Sun, 29 Dec 2024 08:35:11 +0000 (02:35 -0600)]
vulkan: Use push constant offset to handle misaligned descriptors (#10987)
Isaac McFadyen [Sat, 28 Dec 2024 15:09:19 +0000 (10:09 -0500)]
server: added more docs for response_fields field (#10995)
Alexey Parfenov [Sat, 28 Dec 2024 15:08:54 +0000 (15:08 +0000)]
server : fix token duplication when streaming with stop strings (#10997)
Eve [Thu, 26 Dec 2024 15:54:44 +0000 (10:54 -0500)]
vulkan: multi-row k quants (#10846)
* multi row k quant shaders!
* better row selection
* more row choices
* readjust row selection
* rm_kq=2 by default
Peter [Thu, 26 Dec 2024 13:59:11 +0000 (00:59 +1100)]
examples, ggml : fix GCC compiler warnings (#10983)
Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers] (emitted for all struct fields except the first)
Reza Kakhki [Tue, 24 Dec 2024 20:33:04 +0000 (21:33 +0100)]
server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967)
* add support for base64
* fix base64 test
* improve test
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Djip007 [Tue, 24 Dec 2024 17:54:49 +0000 (18:54 +0100)]
ggml : more perfo with llamafile tinyblas on x86_64 (#10714)
* more performance with llamafile tinyblas on x86_64.
- add bf16 support
- change dispatch strategy (thanks: https://github.com/ikawrakow/ik_llama.cpp/pull/71)
- reduce memory bandwidth
simple tinyblas dispatch and more cache friendly
* tinyblas dynamic dispatching
* sgemm: add M blocks.
* - git 2.47 uses short ids of length 9.
- show-progress is not part of GNU Wget2
* remove unstable test
NeverLucky [Tue, 24 Dec 2024 16:39:49 +0000 (19:39 +0300)]
server: allow filtering llama server response fields (#10940)
* llama_server_response_fields
* llama_server_response_fields_fix_issues
* params fixes
* fix
* clarify docs
* change to "response_fields"
---------
Co-authored-by: Xuan Son Nguyen <redacted>