Georgi Gerganov [Mon, 6 Jan 2025 08:55:18 +0000 (10:55 +0200)]
llama : update llama_model API names (#11063)
* llama : deprecate llama_free_model, add llama_model_free
ggml-ci
* llama : change `llama_load_model_from_file` -> `llama_model_load_from_file`
ggml-ci
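For reference, a minimal migration sketch using the renamed calls (the surrounding setup is illustrative, not from the commit):

```cpp
#include "llama.h"

int main() {
    llama_model_params params = llama_model_default_params();

    // old: llama_load_model_from_file(...), now deprecated
    llama_model * model = llama_model_load_from_file("model.gguf", params);
    if (model == nullptr) {
        return 1;
    }

    // old: llama_free_model(model), now deprecated
    llama_model_free(model);
    return 0;
}
```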
Georgi Gerganov [Mon, 6 Jan 2025 08:54:25 +0000 (10:54 +0200)]
tokenize : escape the prompt (#11058)
* tokenize : escape the prompt
* tokenize : update help
Georgi Gerganov [Mon, 6 Jan 2025 08:52:38 +0000 (10:52 +0200)]
mmap : fix fileno macro clash (#11076)
* mmap : fix fileno macro clash
ggml-ci
* cont
ggml-ci
Georgi Gerganov [Mon, 6 Jan 2025 08:52:15 +0000 (10:52 +0200)]
llama : use LLAMA_TOKEN_NULL (#11062)
ggml-ci
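LLAMA_TOKEN_NULL is the named constant for an absent token, replacing bare -1 checks. A tiny illustrative sketch (helper name assumed, not from the commit):

```cpp
#include "llama.h"

// Prefer the named constant over a literal -1 when testing for an
// unset token.
static bool token_is_set(llama_token tok) {
    return tok != LLAMA_TOKEN_NULL;
}
```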
Georgi Gerganov [Mon, 6 Jan 2025 08:52:01 +0000 (10:52 +0200)]
llama : use _impl suffix instead of _internal (#11060)
ggml-ci
Johannes Gäßler [Mon, 6 Jan 2025 01:33:52 +0000 (02:33 +0100)]
CUDA: add BF16 support (#11093)
* CUDA: add BF16 support
0cc4m [Sat, 4 Jan 2025 20:09:59 +0000 (21:09 +0100)]
Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074)
* Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver
* Add (TM) to AMD name check
fairydreaming [Sat, 4 Jan 2025 20:06:11 +0000 (21:06 +0100)]
llama : Add support for DeepSeek V3 (#11049)
* convert : extend DEEPSEEK2 model architecture to support DeepseekV3ForCausalLM by adding EXPERT_WEIGHTS_NORM and EXPERT_GATING_FUNC model parameters and FFN_EXP_PROBS_B tensor type
* vocab : add DeepSeek V3 pre-tokenizer regexes
* unicode : handle ACCENT_MARK and SYMBOL categories in regex
* llama : add DeepSeek V3 chat template, handle new model parameters and tensor types
---------
Co-authored-by: Stanisław Szymczyk <redacted>
matt23654 [Sat, 4 Jan 2025 16:10:30 +0000 (16:10 +0000)]
[GGML][RPC] Support for models with non-512-aligned tensors over RPC. (#11047)
* Added init tensor calling code
* Added get_alloc_size forwarding
* Cleaned up and improved type/error handling.
* fix: remove trailing whitespaces.
* Cleanup and use GGML error logging functions.
* Handle potentially dangerous edge cases.
* Apply suggestions from code review
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
DAN™ [Sat, 4 Jan 2025 14:33:31 +0000 (09:33 -0500)]
llama : add support for the cohere2 model architecture (#10900)
Georgi Gerganov [Sat, 4 Jan 2025 08:54:01 +0000 (10:54 +0200)]
sync : ggml
Georgi Gerganov [Sat, 4 Jan 2025 08:53:54 +0000 (10:53 +0200)]
ggml : do not install metal source when embed library (ggml/1054)
Daniel Bevenius [Thu, 19 Dec 2024 02:50:12 +0000 (03:50 +0100)]
ggml : improve inputs log sched_print_assignments (ggml/1053)
This commit attempts to improve the log message for the inputs of the
splits in the sched_print_assignments function.
The motivation for this change is that currently a colon is displayed at
the end of the line even when there are no inputs, which can be confusing:
the line below could be misread as listing inputs when it in fact lists
nodes. With this change the colon is only printed if there actually are
inputs.
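A minimal sketch of the described logic (function and variable names are illustrative, not the actual ggml code):

```cpp
#include <cstdio>

// Print the trailing colon and input list only when inputs exist, so
// the node lines below cannot be misread as inputs.
static void print_split_inputs(int n_inputs, const char ** input_names) {
    if (n_inputs > 0) {
        printf(":");
        for (int i = 0; i < n_inputs; i++) {
            printf(" [%s]", input_names[i]);
        }
    }
    printf("\n");
}
```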
Gilad S. [Sat, 4 Jan 2025 08:17:31 +0000 (10:17 +0200)]
fix: Vulkan shader gen binary path (#11037)
Molly Sophia [Fri, 3 Jan 2025 12:13:18 +0000 (20:13 +0800)]
common : disable KV cache shifting automatically for unsupported models (#11053)
* Disable KV cache shifting automatically for unsupported models
instead of exiting directly
Signed-off-by: Molly Sophia <redacted>
* Update common/common.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 3 Jan 2025 09:26:14 +0000 (11:26 +0200)]
metal : avoid uint (#11019)
Georgi Gerganov [Fri, 3 Jan 2025 08:18:53 +0000 (10:18 +0200)]
llama : refactor `src/llama.cpp` (#10902)
* llama : scatter llama.cpp into multiple modules (wip)
* llama : control-vector -> adapter
* llama : arch
* llama : mmap
ggml-ci
* ci : remove BUILD_SHARED_LIBS=OFF
ggml-ci
* llama : arch (cont)
ggml-ci
* llama : chat
ggml-ci
* llama : model
ggml-ci
* llama : hparams
ggml-ci
* llama : adapter
ggml-ci
* examples : fix
ggml-ci
* rebase
ggml-ci
* minor
* llama : kv cache
ggml-ci
* llama : impl
ggml-ci
* llama : batch
ggml-ci
* cont
ggml-ci
* llama : context
ggml-ci
* minor
* llama : context (cont)
ggml-ci
* llama : model loader
ggml-ci
* common : update lora
ggml-ci
* llama : quant
ggml-ci
* llama : quant (cont)
ggml-ci
* minor [no ci]
Pierrick Hymbert [Thu, 2 Jan 2025 17:06:12 +0000 (18:06 +0100)]
server: bench: minor fixes (#10765)
* server/bench:
- support OpenAI streaming standard output with [DONE]\n\n
- export k6 raw results in CSV
- fix too many idle TCP connections in tcp_wait
- add metric: time to emit first token
* server/bench:
- fix handling when Prometheus is not started
- wait for the server to be ready before starting the bench
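The `[DONE]` marker mentioned above is the OpenAI streaming convention: a server-sent-events stream of completion chunks terminated by a final `data: [DONE]` event. A minimal client-side sketch (assumed names, not from the bench script):

```cpp
#include <iostream>
#include <string>

int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        if (line == "data: [DONE]") {
            break; // end of the completion stream
        }
        // otherwise: lines of the form "data: {...json chunk...}"
    }
    return 0;
}
```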
Xuan Son Nguyen [Thu, 2 Jan 2025 14:05:18 +0000 (15:05 +0100)]
server : allow using LoRA adapters per-request (#10994)
* slot.can_batch_with
* lora per request
* test: force disable cache prompt
* move can_batch_with check
* fix condition
* add slow test with llama 8b
* update docs
* move lora change task to queue
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* lora_base
* remove redundant check
---------
Co-authored-by: Georgi Gerganov <redacted>
Benson Wong [Thu, 2 Jan 2025 07:14:54 +0000 (23:14 -0800)]
readme : add llama-swap to infrastructure section (#11032)
* list llama-swap under tools in README
* readme: add llama-swap to Infrastructure
Srihari-mcw [Tue, 31 Dec 2024 14:23:33 +0000 (19:53 +0530)]
ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027)
* Fixes for clang AVX VNNI
* enable AVX VNNI and alder lake build for MSVC
* Apply suggestions from code review
---------
Co-authored-by: slaren <redacted>
Xuan Son Nguyen [Tue, 31 Dec 2024 14:22:01 +0000 (15:22 +0100)]
server : clean up built-in template detection (#11026)
* server : clean up built-in template detection
* fix compilation
* add chat template test
* fix condition
Xuan Son Nguyen [Tue, 31 Dec 2024 11:34:13 +0000 (12:34 +0100)]
server : add OAI compat for /v1/completions (#10974)
* server : add OAI compat for /v1/completions
* add test
* add docs
* better docs
ymcki [Tue, 31 Dec 2024 11:04:48 +0000 (19:04 +0800)]
convert : fix Llama-3_1-Nemotron-51B rope settings (#11008)
* conflict resolution
* move comments after brackets to their own lines
* DeciLMCausalModel now reads rope_theta from config.json properly
Peter [Tue, 31 Dec 2024 00:46:06 +0000 (11:46 +1100)]
common, examples, ggml : fix MSYS2 GCC compiler errors and warnings when building with LLAMA_CURL=ON and GGML_OPENCL=ON (#11013)
In common/common.cpp:
* Convert the stat() call used for checking whether a file exists to the standard library function std::filesystem::exists (fixes an error about being unable to match the correct function signature); see the sketch after this list
* Add conditions to check whether PATH_MAX is already defined in the WIN32 environment (fixes a warning that it is already defined in MSYS2)
In examples/run/run.cpp:
* Add the io.h header inclusion (fixes an error that the function _get_osfhandle cannot be found)
* Change the initialisers for OVERLAPPED to an empty struct (fixes a warning about uninitialised members)
* Add an initialiser for hFile (fixes a warning that it may be uninitialised)
* Cast the curl_off_t percentage value to long int in the generate_progress_prefix function (fixes a warning that curl_off_t is long long int)
In ggml/src/ggml-opencl/ggml-opencl.cpp:
* Initialise certain declared cl_mem variables to nullptr for greater safety (fixes a warning that the B_d variable may be used unassigned)
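A sketch of the stat() to std::filesystem::exists change from the first item (path handling and names are illustrative):

```cpp
#include <filesystem>
#include <string>

// previously: struct stat st; return stat(path.c_str(), &st) == 0;
static bool file_exists(const std::string & path) {
    return std::filesystem::exists(path);
}
```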
Jeff Bolz [Mon, 30 Dec 2024 17:27:11 +0000 (11:27 -0600)]
vulkan: optimize mul_mat for small values of N (#10991)
Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.
Share some code for reducing the result values to memory in mul_mat_vec_base.
ag2s20150909 [Mon, 30 Dec 2024 12:35:13 +0000 (20:35 +0800)]
android : fix llama_batch free (#11014)
Jeff Bolz [Sun, 29 Dec 2024 09:16:34 +0000 (03:16 -0600)]
vulkan: im2col and matmul optimizations for stable diffusion (#10942)
* tests: Add im2col perf tests
* vulkan: optimize im2col, more elements per thread
* vulkan: increase small tile size for NV_coopmat2
* vulkan: change im2col to 512 elements per workgroup
Jeff Bolz [Sun, 29 Dec 2024 08:35:11 +0000 (02:35 -0600)]
vulkan: Use push constant offset to handle misaligned descriptors (#10987)
Isaac McFadyen [Sat, 28 Dec 2024 15:09:19 +0000 (10:09 -0500)]
server: added more docs for response_fields field (#10995)
Alexey Parfenov [Sat, 28 Dec 2024 15:08:54 +0000 (15:08 +0000)]
server : fix token duplication when streaming with stop strings (#10997)
Eve [Thu, 26 Dec 2024 15:54:44 +0000 (10:54 -0500)]
vulkan: multi-row k quants (#10846)
* multi row k quant shaders!
* better row selection
* more row choices
* readjust row selection
* rm_kq=2 by default
Peter [Thu, 26 Dec 2024 13:59:11 +0000 (00:59 +1100)]
examples, ggml : fix GCC compiler warnings (#10983)
Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers] (emitted for all struct fields except the first)
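Illustrative fixes for the two warning classes above (the struct here is an assumed, simplified stand-in for _STARTUPINFOA):

```cpp
#include <cstdio>
#include <cstddef>

struct startup_info {
    const char * desktop;
    int          flags;
    long         reserved;
};

int main() {
    size_t n = 42;
    printf("%zu\n", n); // '%zu' matches size_t portably, unlike '%ld'

    // Empty braces value-initialize every member, silencing
    // -Wmissing-field-initializers.
    startup_info si = {};
    (void) si;
    return 0;
}
```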
Reza Kakhki [Tue, 24 Dec 2024 20:33:04 +0000 (21:33 +0100)]
server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967)
* add support for base64
* fix base64 test
* improve test
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Djip007 [Tue, 24 Dec 2024 17:54:49 +0000 (18:54 +0100)]
ggml : more performance with llamafile tinyblas on x86_64 (#10714)
* more performance with llamafile tinyblas on x86_64:
- add bf16 support
- change dispatch strategy (thanks:
https://github.com/ikawrakow/ik_llama.cpp/pull/71 )
- reduce memory bandwidth
simple tinyblas dispatch and more cache friendly
* tinyblas dynamic dispatching
* sgemm: add M blocks.
* - git 2.47 uses short ids of length 9.
- show-progress is not part of GNU Wget2
* remove unstable test
NeverLucky [Tue, 24 Dec 2024 16:39:49 +0000 (19:39 +0300)]
server: allow filtering llama server response fields (#10940)
* llama_server_response_fields
* llama_server_response_fields_fix_issues
* params fixes
* fix
* clarify docs
* change to "response_fields"
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Georgi Gerganov [Tue, 24 Dec 2024 07:44:20 +0000 (09:44 +0200)]
llama : the WPM vocabs use the CLS token as BOS (#10930)
* llama : the WPM vocabs use the CLS token as BOS
ggml-ci
* llama : add comment
Diego Devesa [Tue, 24 Dec 2024 03:05:27 +0000 (04:05 +0100)]
ggml : use wstring for backend search paths (#10960)
ggml-ci
Diego Devesa [Tue, 24 Dec 2024 03:05:17 +0000 (04:05 +0100)]
ggml : fix arm enabled features check (#10961)
Diego Devesa [Mon, 23 Dec 2024 19:25:52 +0000 (20:25 +0100)]
ggml : fix const usage in SSE path (#10962)
Xuan Son Nguyen [Mon, 23 Dec 2024 11:52:25 +0000 (12:52 +0100)]
server : fix missing model id in /model endpoint (#10957)
* server : fix missing model id in /model endpoint
* fix ci
Xuan Son Nguyen [Mon, 23 Dec 2024 11:02:44 +0000 (12:02 +0100)]
server : add system_fingerprint to chat/completion (#10917)
* server : add system_fingerprint to chat/completion
* update README
Radoslav Gerganov [Mon, 23 Dec 2024 08:39:30 +0000 (10:39 +0200)]
rpc-server : add support for the SYCL backend (#10934)
Yun Dou [Mon, 23 Dec 2024 00:35:44 +0000 (08:35 +0800)]
llama : support InfiniAI Megrez 3b (#10893)
* Support InfiniAI Megrez 3b
* Fix tokenizer_clean_spaces for megrez
ymcki [Mon, 23 Dec 2024 00:22:33 +0000 (08:22 +0800)]
llama : support for Llama-3_1-Nemotron-51B (#10669)
* conflict resolution
* move comments after brackets to their own lines
Eric Curtin [Mon, 23 Dec 2024 00:21:40 +0000 (00:21 +0000)]
llama-run : include temperature option (#10899)
This commit updates the `examples/run/README.md` file to include a new
option for setting the temperature and updates the `run.cpp` file to
parse this option.
Signed-off-by: Eric Curtin <redacted>
yuri@FreeBSD [Mon, 23 Dec 2024 00:20:11 +0000 (16:20 -0800)]
ggml : fix run-time on FreeBSD in get_executable_path() (#10948)
Rudi Servo [Sun, 22 Dec 2024 22:22:58 +0000 (21:22 -0100)]
devops : add docker-multi-stage builds (#10832)
Billel Mokeddem [Sun, 22 Dec 2024 22:09:58 +0000 (01:09 +0300)]
llama : add Falcon3 support (#10883)
* Add Falcon3 model support
* Add fix for adding bos to added special tokens
* Add comment explaining the logic behind the if statement
* Add a log message to better track when the following line of code is triggered
* Update log to only print when input and output characters are different
* Fix handling pre-normalized tokens
* Refactoring
Jeff Bolz [Sun, 22 Dec 2024 09:44:01 +0000 (03:44 -0600)]
vulkan: build fixes for 32b (#10927)
* vulkan: build fixes for 32b
Should fix #10923
* vulkan: initialize some buffer/offset variables
Georgi Gerganov [Sat, 21 Dec 2024 08:10:18 +0000 (10:10 +0200)]
convert : add BertForMaskedLM (#10919)
Jeff Bolz [Sat, 21 Dec 2024 07:04:45 +0000 (01:04 -0600)]
vulkan: optimize coopmat2 dequant functions (#10855)
Change the code to do 16b loads when possible and extract the appropriate
component late, so the code is effectively decoding a pair of elements and
then selecting one. This can allow more commoning to happen in the compiler
when neighboring elements are loaded.
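A C++ analogue of the idea (the actual change is in the Vulkan shaders; this sketch only illustrates "decode a pair, select late" for a hypothetical 8-bit format):

```cpp
#include <cstdint>

// Load 16 bits covering two 8-bit elements, dequantize both, and pick
// the wanted component at the end; neighboring calls can then share
// the wide load and the common decode work.
static float dequant_select(const uint16_t * packed, int idx,
                            float scale, float offset) {
    const uint16_t pair = packed[idx / 2];
    const float lo = scale * (float)(pair & 0xFF) + offset;
    const float hi = scale * (float)(pair >> 8)   + offset;
    return (idx % 2 == 0) ? lo : hi;
}
```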
Adrien Gallouët [Fri, 20 Dec 2024 23:33:37 +0000 (00:33 +0100)]
ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0() (#10874)
* ggml-cpu: replace NEON asm with intrinsics in ggml_gemv_q4_0_4x8_q8_0()
Signed-off-by: Adrien Gallouët <redacted>
* ggml-cpu: format code
Signed-off-by: Adrien Gallouët <redacted>
---------
Signed-off-by: Adrien Gallouët <redacted>
Akarshan Biswas [Fri, 20 Dec 2024 15:31:28 +0000 (21:01 +0530)]
SYCL: Migrate away from deprecated ggml_tensor->backend (#10840)
* Migrate to tensor->buffer for checking backend buffer type: 1
* SYCL: common.cpp try to migrate away from tensor->backend
* SYCL: fix assertions and add proper comments
* SYCL: remove extra space
* SYCL: Add back static to ggml_backend_buffer_is_sycl_split function
* SYCL: Add pragma directive to suppress warning spam
* SYCL: Integrate debug logs with GGML_LOG and other fixes
* Revert "SYCL: Integrate debug logs with GGML_LOG and other fixes"
This reverts commit 2607b7de0f0d2f4f1f690226f86fa861aa39cb97.
Let's keep the current SYCL specific logging mechanism for now
* SYCL: Use GGML_SYCL_DEBUG after reverting
* SYCL: reg_get_proc_address func, update to the current func signature
* SYCL: Refactor SYCL buffer checks in ggml_sycl_cpy_tensor_2d
Xuan Son Nguyen [Fri, 20 Dec 2024 13:12:06 +0000 (14:12 +0100)]
server : (UI) fix copy to clipboard function (#10916)
Diego Devesa [Fri, 20 Dec 2024 12:31:28 +0000 (13:31 +0100)]
ggml : add test for SVE and disable when it fails (#10906)
Molly Sophia [Fri, 20 Dec 2024 09:44:58 +0000 (17:44 +0800)]
convert : fix RWKV v6 model conversion (#10913)
* Enable --no-context-shift for llama-perplexity example
Signed-off-by: Molly Sophia <redacted>
* RWKV 6: Fix error in ggml_cuda_op_bin_bcast
Signed-off-by: Molly Sophia <redacted>
---------
Signed-off-by: Molly Sophia <redacted>
Georgi Gerganov [Thu, 19 Dec 2024 16:47:15 +0000 (18:47 +0200)]
clip : disable GPU support (#10896)
ggml-ci
Georgi Gerganov [Thu, 19 Dec 2024 15:42:13 +0000 (17:42 +0200)]
llama : minor grammar refactor (#10897)
ggml-ci
Georgi Gerganov [Thu, 19 Dec 2024 15:35:15 +0000 (17:35 +0200)]
tts : small QoL for easy model fetch (#10903)
Xuan Son Nguyen [Thu, 19 Dec 2024 14:40:08 +0000 (15:40 +0100)]
server : fix logprobs, make it OAI-compatible (#10783)
* server : fix logprobs, make it openai-compatible
* update docs
* add std::log
* return pre-sampling p
* sort before apply softmax
* add comment
* fix test
* set p for sampled token
* update docs
* add --multi-token-probs
* update docs
* add `post_sampling_probs` option
* update docs [no ci]
* remove --multi-token-probs
* "top_probs" with "post_sampling_probs"
* resolve review comments
* rename struct token_prob to prob_info
* correct comment placement
* fix setting prob for sampled token
Adrien Gallouët [Thu, 19 Dec 2024 13:20:41 +0000 (14:20 +0100)]
ggml: fix arm build with gcc (#10895)
Signed-off-by: Adrien Gallouët <redacted>
Sukriti Sharma [Thu, 19 Dec 2024 13:04:51 +0000 (06:04 -0700)]
llama : fix Roberta embeddings (#10856)
* fix: Use gpt2 tokenizer for roberta and add eos/bos tokens
Branch: RobertaTokenizer
Signed-off-by: Gabe Goodhart <redacted>
* fixes to position embeddings
Signed-off-by: Sukriti-Sharma4 <redacted>
* map roberta-bpe to gpt-2
Signed-off-by: Sukriti-Sharma4 <redacted>
* fix linting
Signed-off-by: Sukriti-Sharma4 <redacted>
---------
Signed-off-by: Gabe Goodhart <redacted>
Signed-off-by: Sukriti-Sharma4 <redacted>
Co-authored-by: Gabe Goodhart <redacted>
fairydreaming [Thu, 19 Dec 2024 09:37:12 +0000 (10:37 +0100)]
convert : Add support for Microsoft Phi-4 model (#10817)
* convert : use GPT2 vocab for Phi-4 model
* convert : use null value of sliding_window to distinguish Phi-4 from other PHI3-based models
* llama : do not use sliding window attention mask for Phi-4 model
---------
Co-authored-by: Stanisław Szymczyk <redacted>
Johannes Gäßler [Thu, 19 Dec 2024 07:53:58 +0000 (08:53 +0100)]
tests: disable GGUF test for bad value size (#10886)
Eric Curtin [Thu, 19 Dec 2024 02:58:00 +0000 (02:58 +0000)]
llama-run : improve progress bar (#10821)
Set the default width to the terminal width. Also fixed a small bug
around the default n_gpu_layers value.
Signed-off-by: Eric Curtin <redacted>
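A POSIX sketch of querying the terminal width for the progress bar (the fallback value is assumed, not from the commit):

```cpp
#include <sys/ioctl.h>
#include <unistd.h>

// Returns the current terminal width in columns, or 80 when stdout is
// not attached to a terminal.
static int terminal_columns() {
    struct winsize ws;
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0 && ws.ws_col > 0) {
        return ws.ws_col;
    }
    return 80;
}
```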
Diego Devesa [Wed, 18 Dec 2024 22:21:42 +0000 (23:21 +0100)]
ggml : fix arm build (#10890)
* ggml: GGML_NATIVE uses -mcpu=native on ARM
Signed-off-by: Adrien Gallouët <redacted>
* ggml: Show detected features with GGML_NATIVE
Signed-off-by: Adrien Gallouët <redacted>
* remove msvc support, add GGML_CPU_ARM_ARCH option
* disable llamafile in android example
* march -> mcpu, skip adding feature macros
ggml-ci
---------
Signed-off-by: Adrien Gallouët <redacted>
Co-authored-by: Adrien Gallouët <redacted>
Georgi Gerganov [Wed, 18 Dec 2024 17:27:21 +0000 (19:27 +0200)]
tts : add OuteTTS support (#10784)
* server : add "tokens" output
ggml-ci
* server : output embeddings for all tokens when pooling = none
ggml-ci
* server : be explicit about the pooling type in the tests
ggml-ci
* server : do not normalize embeddings when there is no pooling
ggml-ci
* llama : add OuteTTS support (wip)
* wip
* extract features
* first conv
* group norm
* resnet conv
* resnet
* attn
* pos net
* layer norm
* convnext
* head
* hann window
* fix n_embd + remove llama.cpp hacks
* compute hann window
* fft
* spectrum processing
* clean-up
* tts : receive input text and generate codes
* clip : fix new conv name
* tts : minor fix
* tts : add header + minor fixes
ggml-ci
* tts : add mathematical constant
ggml-ci
* tts : fix sampling + cut initial noise
* tts : fixes
* tts : update default samplers
ggml-ci
* tts : text pre-processing
* tts : outetts-voc -> wavtokenizer-dec
* tts : remove hardcoded constants
ggml-ci
* tts : fix tensor shapes
* llama : refactor wavtokenizer tensors
ggml-ci
* cont
ggml-ci
* cont [no ci]
* llama : update WavTokenizer to non-causal attn
* llama : handle no-vocab detokenization
* tts : add Python example for OuteTTS (wip)
* tts : extend python example to generate spectrogram
ggml-ci
* server : fix rebase artifacts
* tts : enable "return_tokens" in Python example
ggml-ci
* tts : minor fixes
* common : support HF download for vocoder
Gaetan Bisson [Wed, 18 Dec 2024 14:00:07 +0000 (04:00 -1000)]
server: avoid overwriting Authorization header (#10878)
* server: avoid overwriting Authorization header
If no API key is set, leave the Authorization header as is. It may be
used by another part of the Web stack, such as an authenticating proxy.
Fixes https://github.com/ggerganov/llama.cpp/issues/10854
* rebuild
---------
Co-authored-by: Xuan Son Nguyen <redacted>
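A minimal sketch of the described behavior (container and names are illustrative, not the server code):

```cpp
#include <map>
#include <string>

// Only set Authorization when an API key is configured; otherwise any
// header set by an upstream proxy passes through untouched.
static void maybe_set_auth(std::map<std::string, std::string> & headers,
                           const std::string & api_key) {
    if (!api_key.empty()) {
        headers["Authorization"] = "Bearer " + api_key;
    }
}
```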
Georgi Gerganov [Wed, 18 Dec 2024 11:01:41 +0000 (13:01 +0200)]
server : output embeddings for all tokens when pooling = none (#10861)
* server : add "tokens" output
ggml-ci
* server : output embeddings for all tokens when pooling = none
ggml-ci
* server : update readme [no ci]
* server : fix spacing [no ci]
Co-authored-by: Xuan Son Nguyen <redacted>
* server : be explicit about the pooling type in the tests
ggml-ci
* server : update /embeddings and /v1/embeddings endpoints
ggml-ci
* server : do not normalize embeddings when there is no pooling
ggml-ci
* server : update readme
ggml-ci
* server : fixes
* tests : update server tests
ggml-ci
* server : update readme [no ci]
* server : remove rebase artifact
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Georgi Gerganov [Wed, 18 Dec 2024 09:05:29 +0000 (11:05 +0200)]
server : add "tokens" output (#10853)
* server : add "tokens" output
ggml-ci
* server : update readme
ggml-ci
* server : return tokens ids only if requested
ggml-ci
* tests : improve "tokens" type check
Co-authored-by: Xuan Son Nguyen <redacted>
* server : remove "tokens" from the OAI endpoint
ggml-ci
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Xuan Son Nguyen [Wed, 18 Dec 2024 08:55:09 +0000 (09:55 +0100)]
server : (embeddings) using same format for "input" and "content" (#10872)
* server : (embeddings) using same format for "input" and "content"
* fix test case
* handle empty input case
* fix test
redbeard [Wed, 18 Dec 2024 08:35:00 +0000 (00:35 -0800)]
docs: Fix HIP (née hipBLAS) in README (#10880)
Related to #10524: since be0e350c, references to hipBLAS have been
removed across the repository. This fixes the corresponding link in the
repository's `README.md`.
Signed-off-by: Brian 'redbeard' Harrington <redacted>
Diego Devesa [Wed, 18 Dec 2024 00:36:46 +0000 (01:36 +0100)]
Revert "llama : add Falcon3 support (#10864)" (#10876)
This reverts commit 382bc7f2e8ffd0b89f23e840d097e21f301197ba.
DAN™ [Tue, 17 Dec 2024 22:24:22 +0000 (17:24 -0500)]
Use model->gguf_kv for loading the template instead of using the C API. (#10868)
* Bump model_template to 16384 bytes to support larger chat templates.
* Use `model->gguf_kv` for efficiency.
Johannes Gäßler [Tue, 17 Dec 2024 18:09:35 +0000 (19:09 +0100)]
tests: add tests for GGUF (#10830)
Georgi Gerganov [Tue, 17 Dec 2024 16:36:02 +0000 (18:36 +0200)]
sync : ggml
Georgi Gerganov [Tue, 17 Dec 2024 16:34:32 +0000 (18:34 +0200)]
cmake : fix "amd64" processor string (whisper/2638)
gn64 [Mon, 16 Dec 2024 10:34:38 +0000 (19:34 +0900)]
vulkan : fix soft_max.comp division by zero (whisper/2633)
This change prevents a division by zero error when p.KY is 0.
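A sketch of the guard idea (the actual fix is in soft_max.comp; names are illustrative):

```cpp
// Avoid dividing by a zero extent such as p.KY == 0.
static float safe_inverse(float extent) {
    return extent != 0.0f ? 1.0f / extent : 0.0f;
}
```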
Daniel Bevenius [Sat, 14 Dec 2024 02:23:08 +0000 (03:23 +0100)]
ggml : remove return from ggml_gallocr_allocate_node (ggml/1048)
This commit removes the return statement from the
ggml_gallocr_allocate_node function.
The motivation behind this change is to make the code more readable and
consistent.
Daniel Bevenius [Fri, 13 Dec 2024 07:19:38 +0000 (08:19 +0100)]
ggml : add check for grad_accs (ggml/1046)
* ggml : add check for grad_accs
This commit adds a check for grad_accs in ggml_graph_get_grad and
ggml_graph_get_grad_acc functions. This is necessary to avoid segfaults
when grad_accs is not initialized.
The motivation for this change is that I find it nice to be able to
print out a computation graph using ggml_graph_print, but this function
segfaults when grad_accs is not initialized (a sketch of the guard idea
follows this entry):
```console
(gdb) p g1
$2 = (ggml_cgraph *) 0x7ffff66004b0
(gdb) p *g1
$3 = {size = 2048, n_nodes = 1, n_leafs = 2, nodes = 0x7ffff6600500,
grads = 0x0, grad_accs = 0x0, leafs = 0x7ffff6604500,
visited_hash_set = {size = 4099, used = 0x7ffff6610518,
keys = 0x7ffff6608500}, order = GGML_CGRAPH_EVAL_ORDER_LEFT_TO_RIGHT}
(gdb) p ggml_graph_print(g1)
=== GRAPH ===
n_nodes = 1
Program received signal SIGSEGV, Segmentation fault.
0x0000555555579775 in ggml_graph_get_grad
(cgraph=0x7ffff66004b0,node=0x7ffff6600340)
at /ggml/ggml/src/ggml.c:5990
5990 return igrad != GGML_HASHSET_FULL &&
ggml_bitset_get(cgraph->visited_hash_set.used, igrad) ?
cgraph->grads[igrad] : NULL;
```
* squash! ggml : add check for grad_accs
Fix the check in ggml_graph_get_grad. The check was incorrectly using
cgraph->grad_accs instead of cgraph->grads.
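A sketch of the kind of guard the commit describes (hypothetical trimmed-down struct; the real check lives in ggml.c):

```cpp
struct ggml_tensor;

struct cgraph_view {
    ggml_tensor ** grads;
    ggml_tensor ** grad_accs;
};

// Bail out before indexing when the gradient arrays were never
// allocated, instead of dereferencing a null pointer.
static ggml_tensor * get_grad(const cgraph_view * g, unsigned long igrad) {
    if (g->grads == nullptr) {
        return nullptr;
    }
    return g->grads[igrad];
}
```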
Georgi Gerganov [Tue, 17 Dec 2024 16:35:42 +0000 (18:35 +0200)]
ggml : update ggml_backend_cpu_device_supports_op (#10867)
* ggml : fix cpy op for IQ-quants to use reference impl
ggml-ci
* ggml : disable tests involving i-matrix quantization
* ggml : update ggml_backend_cpu_device_supports_op
ggml-ci
krystiancha [Tue, 17 Dec 2024 16:00:24 +0000 (16:00 +0000)]
server : fill usage info in embeddings and rerank responses (#10852)
* server : fill usage info in embeddings response
* server : fill usage info in reranking response
Billel Mokeddem [Tue, 17 Dec 2024 15:24:56 +0000 (19:24 +0400)]
llama : add Falcon3 support (#10864)
Ruan [Tue, 17 Dec 2024 09:47:20 +0000 (17:47 +0800)]
readme : fix typos (#10863)
Xuan Son Nguyen [Tue, 17 Dec 2024 08:52:09 +0000 (09:52 +0100)]
server : (UI) fix missing async generator on safari (#10857)
* server : (UI) fix missing async generator on safari
* fix
Eve [Tue, 17 Dec 2024 05:52:55 +0000 (05:52 +0000)]
vulkan: bugfixes for small subgroup size systems + llvmpipe test (#10809)
* ensure mul mat shaders work on systems with subgroup size less than 32
more fixes
add test
* only s_warptile_mmq needs to be run with 32 threads or more
Zhiyuan Li [Mon, 16 Dec 2024 21:00:46 +0000 (05:00 +0800)]
rwkv6: add wkv6 support for Vulkan backend (#10829)
* rwkv_wkv6 vulkan shader
* RWKV_WKV6 Vulkan op tests passed
Signed-off-by: Molly Sophia <redacted>
* Apply code format changes
Signed-off-by: Molly Sophia <redacted>
* add [[unroll]] and remove unnecessary conditions
* add uma support
* fix errors in EditorConfig Checker
---------
Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Molly Sophia <redacted>
Georgi Gerganov [Mon, 16 Dec 2024 10:31:45 +0000 (12:31 +0200)]
unicode : improve naming style (#10838)
* unicode : improve naming style
ggml-ci
* cont [no ci]
Georgi Gerganov [Mon, 16 Dec 2024 10:31:14 +0000 (12:31 +0200)]
sampling : refactor + optimize penalties sampler (#10803)
* sampling : refactor + optimize penalties sampler
ggml-ci
* common : apply ignore_eos as logit bias
ggml-ci
* batched : remove penalties sampler
* params : allow penalty_last_n == -1 to be equal to context size
ggml-ci
* common : by default, move the penalties at the end of the sampling chain
ggml-ci
* common : ignore all EOG tokens
Co-authored-by: Diego Devesa <redacted>
* common : move back the penalties at the front of the sampling chain
ggml-ci
* readme : restore hint about --ignore-eos flag [no ci]
* llama : minor
ggml-ci
* webui : update
---------
Co-authored-by: Diego Devesa <redacted>
Bartowski [Sun, 15 Dec 2024 20:43:25 +0000 (15:43 -0500)]
llava : Allow locally downloaded models for QwenVL (#10833)
* Allow locally downloaded models for QwenVL
* Define model_path
* rm trailing space
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Valentin Mamedov [Sun, 15 Dec 2024 17:02:46 +0000 (00:02 +0700)]
llama : add Deepseek MoE v1 & GigaChat models (#10827)
* Add deepseek v1 arch & gigachat template
* improve template code
* add readme
* delete comments
* remove comment
* fix format
* lint llama.cpp
* fix order of deepseek and deepseek2, move gigachat template to the end of the function
* fix order of deepseek and deepseek2 in constants; mark shared experts as needed by the deepseek arch
* remove comments
* move deepseek above deepseek2
* change placement of gigachat chat template
Georgi Gerganov [Sun, 15 Dec 2024 16:44:47 +0000 (18:44 +0200)]
scripts : change build path to "build-bench" for compare-commits.sh (#10836)
Vinesh Janarthanan [Sun, 15 Dec 2024 11:55:54 +0000 (05:55 -0600)]
server: (UI) add syntax highlighting and latex math rendering (#10808)
* add code highlighting and math formatting
* code cleanup
* build public/index.html
* rebuild public/index.html
* fixed coding style
* fixed coding style
* style fixes
* highlight: smaller bundle size, fix light & dark theme
* remove katex
* add bundle size check
* add more languages
* add php
* reuse some langs
* use gzip
* Revert "remove katex"
This reverts commit c0e5046accd10be3f83018cffdc29a652849fc61.
* use better maintained @vscode/markdown-it-katex
* fix non-deterministic gzip output
* ability to add a demo conversation for dev
* fix latex rendering
* add comment
* latex codeblock as code
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Georgi Gerganov [Sun, 15 Dec 2024 11:16:42 +0000 (13:16 +0200)]
gguf-py : bump to v0.13.0
Michelle Tan [Sat, 14 Dec 2024 22:29:45 +0000 (22:29 +0000)]
server: Fix `has_new_line` in JSON response (#10818)
* Update server JSON response.
* Add unit test to check `has_new_line` JSON response
* Remove `has_new_line` unit test changes.
* Address code review comment: type check for `has_new_line` in unit test
Evgeny Kurnevsky [Sat, 14 Dec 2024 18:17:36 +0000 (18:17 +0000)]
nix: allow to override rocm gpu targets (#10794)
This allows reducing compile time when building for a single GPU.
HimariO [Sat, 14 Dec 2024 12:43:46 +0000 (20:43 +0800)]
llama : add Qwen2VL support + multimodal RoPE (#10361)
* Barebones Qwen2VL LLM converter
* Add Qwen2VL cli entrypoint
* [WIP] add qwen2vl arch
* Verify m-rope output
* Add vl-rope/2d-rope support for qwen2vl ViT
* update qwen2vl cli tool
* update 5D tensor op workaround
* [WIP] qwen2vl vision model
* make batch and clip utils compatible with qwen2vl
* [WIP] create inference workflow, gguf convert script but fix
* correcting vision-rope behavior, add the missing last layer back to ViT
* add arg parser to qwen2vl_surgery
* replace variable size array with vector
* cuda-gdb cmake preset
* add fp32 mrope, vision rope kernel
* add fp16 support for qwen2vl and m-rope
* add `GGML_ROPE_TYPE_MROPE`, `GGML_ROPE_TYPE_VISION`
* fix rope op mode switching, out dated func args
* update `llama_hparams`
* update to keep up stream changes
* resolve linter, test errors
* add makefile entry, update special image padding token
* add mrope unit test, fix a few compiler warnings
* rename `mrope`-related functions and params
* minor updates to debug utils, bug fixes
* add `m-rope` testcase to `test-backend-ops`
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
* fix trailing whitespace
* store `llama_hparams.rope_sections` with fixed size array
* update position id tensor size check in GGML_OP_ROPE
* minor updates
* update `ggml_backend_*_supports_op` of unsupported backends
* remove old `rope_section` compare operator
---------
Co-authored-by: Georgi Gerganov <redacted>
cduk [Fri, 13 Dec 2024 22:21:49 +0000 (23:21 +0100)]
Removes spurious \r in output that causes journalctl logging to treat lines as binary and therefore hide them by default (#10771)
Signed-off-by: Charles Darke <redacted>
Co-authored-by: Charles Darke <redacted>
lhez [Fri, 13 Dec 2024 20:23:52 +0000 (12:23 -0800)]
Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (#10693)
* [cl][adreno] Add Adreno GPU support
Add new OpenCL backend to support Adreno GPUs
---------
Co-authored-by: Skyler Szot <redacted>
Co-authored-by: Shangqing Gu <redacted>
Co-authored-by: Alexander Angus <redacted>
Co-authored-by: Hongqiang Wang <redacted>
Co-authored-by: Max Krasnyansky <redacted>
* [cl][ci] Add workflow for CL
* [cl][adreno] Fix memory leak for non SMALL_ALLOC path
* opencl: integrate backend dyn.load interface and fix compiler and format warnings
* opencl: remove small-alloc support and fix build errors for non-opencl platforms
* opencl: fixed merge conflict (MUSA added twice in cmake)
* opencl-ci: use RUNNER_TEMP instead of github.workspace
* opencl: fix embed tool invocation with python3
* opencl: CI workflow fixes
* opencl: Clean up small-alloc in CMake files
* opencl: cleanup ggml-opencl2 header file
* opencl: use ulong for offsets and strides in ADD kernel
* opencl: use cl_ulong for all offsets
* opencl: use cl_ulong for sizes and strides
* opencl: use `GGML_LOG_xxx` instead of `fprintf(stderr, ...)`
* opencl: rename backend `opencl2` -> `opencl`
* opencl: rename kernel files `ggml-opencl2` -> `ggml-opencl`
* opencl: make OpenCL required, remove redundant lib and inc directories
* `ggml-base`, `..` and `.` are added by `ggml_add_backend_library`
* opencl: rename backend - funcs, structs, etc `opencl2` -> `opencl`
* opencl: remove copyright marker since main license already covers
* opencl: replace some more OPENCL2 leftovers
* opencl: remove limits on `tensor_extra`
* opencl: use pools for `tensor_extra`
* opencl: fix compiler warnings with GCC and Clang
Still getting the warning about clCreateCommandQueue being deprecated.
Will fix that separately.
* opencl: fail gracefully if opencl devices are not available
Also for unsupported GPUs.
* opencl: fix MSVC builds (string length error)
* opencl: check for various requirements, allow deprecated API
* opencl: update log message for unsupported GPUs
---------
Co-authored-by: Skyler Szot <redacted>
Co-authored-by: Shangqing Gu <redacted>
Co-authored-by: Alexander Angus <redacted>
Co-authored-by: Hongqiang Wang <redacted>
Co-authored-by: Max Krasnyansky <redacted>