git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Eric Curtin [Wed, 29 Jan 2025 11:23:10 +0000 (12:23 +0100)]
Parse https://ollama.com/library/ syntax (#11480)
People search for ollama models using the web UI; this change allows one to copy the URL from the browser and have it work with llama-run.
Signed-off-by: Eric Curtin <redacted>
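A minimal sketch of the idea in C++; the helper name is illustrative, not the actual llama-run code:

```cpp
#include <string>

// Reduce a browser URL such as https://ollama.com/library/granite-code
// to the bare model reference that llama-run already understands.
static std::string strip_ollama_library_url(std::string model) {
    const std::string prefix = "https://ollama.com/library/";
    if (model.rfind(prefix, 0) == 0) { // starts_with, pre-C++20
        model = model.substr(prefix.size());
    }
    return model;
}
```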
Georgi Gerganov [Wed, 29 Jan 2025 09:25:29 +0000 (11:25 +0200)]
sync : ggml
William Tambellini [Thu, 23 Jan 2025 19:59:08 +0000 (11:59 -0800)]
ggml : add option to not print stack on abort (ggml/1081)
* Add option to not print stack on abort
Add an option/envvar to disable stack printing on abort (see the sketch after this entry).
Also link some unit tests with Threads to fix link errors on
ubuntu/g++11.
* Update ggml/src/ggml.c
---------
Co-authored-by: Diego Devesa <redacted>
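A minimal sketch of the abort pattern described above; the GGML_NO_BACKTRACE variable and the backtrace helper are assumptions for illustration, not the actual ggml names:

```cpp
#include <cstdio>
#include <cstdlib>

static void abort_with_message(const char * msg) {
    fprintf(stderr, "fatal: %s\n", msg);
    // stack printing can be suppressed via an environment variable
    if (getenv("GGML_NO_BACKTRACE") == NULL) {
        // print_backtrace(); // hypothetical helper, skipped when suppressed
    }
    abort();
}
```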
issixx [Fri, 17 Jan 2025 12:29:08 +0000 (21:29 +0900)]
ggml-cpu : fix ggml_graph_compute_thread not terminating on abort (ggml/1065)
Some threads kept looping and failed to terminate properly after an abort during CPU execution.
Co-authored-by: issi <redacted>
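A minimal sketch of the fix pattern, assuming a shared abort flag that each worker re-checks inside its loop; names are illustrative, not the actual ggml-cpu internals:

```cpp
#include <atomic>

static std::atomic<bool> abort_requested{false};

// Without the check, a thread that misses the abort keeps spinning on the
// remaining graph nodes and never terminates.
static void compute_thread(int n_nodes) {
    for (int node = 0; node < n_nodes; ++node) {
        if (abort_requested.load(std::memory_order_relaxed)) {
            return; // exit promptly once another thread has aborted
        }
        // ... compute node ...
    }
}
```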
Daniel Bevenius [Wed, 29 Jan 2025 08:38:54 +0000 (09:38 +0100)]
embedding : enable --no-warmup option (#11475)
This commit enables the `--no-warmup` option for llama-embedding.
The motivation for this change is to allow the user to disable the
warmup when running the program.
Molly Sophia [Wed, 29 Jan 2025 04:07:21 +0000 (12:07 +0800)]
llama: fix missing k_cache store for rwkv6qwen2 (#11445)
Signed-off-by: Molly Sophia <redacted>
Emreerdog [Tue, 28 Jan 2025 23:22:06 +0000 (02:22 +0300)]
cmake: add hints for locating ggml on Windows using Llama find-package (#11466)
peidaqi [Tue, 28 Jan 2025 23:03:42 +0000 (16:03 -0700)]
server : Fixed wrong function name in llamacpp server unit test (#11473)
The test_completion_stream_with_openai_library() function was actually running with stream=False by default, and test_completion_with_openai_library() with stream=True
Xuan-Son Nguyen [Tue, 28 Jan 2025 23:02:56 +0000 (00:02 +0100)]
ci : fix build CPU arm64 (#11472)
* ci : fix build CPU arm64
* failed, trying ubuntu 22
* vulkan: ubuntu 24
* vulkan : jammy --> noble
uvos [Tue, 28 Jan 2025 22:06:32 +0000 (23:06 +0100)]
HIP: Suppress transformation warning in softmax.cu
Loops with bounds not known at compile time cannot be unrolled.
When ncols_template == 0, the bounds of the loop are not constexpr, so LLVM cannot unroll the loops here.
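A minimal sketch of the pattern behind the warning, assuming the common template convention where ncols_template == 0 means "runtime-sized":

```cpp
template <int ncols_template>
static void softmax_row(const float * x, float * dst, int ncols_runtime) {
    const int ncols = ncols_template == 0 ? ncols_runtime : ncols_template;
    // When ncols_template > 0 the trip count is a compile-time constant and
    // the loop unrolls; when it is 0 the bound is not constexpr, so the
    // compiler emits the (now suppressed) "could not unroll" warning.
    #pragma unroll
    for (int col = 0; col < ncols; ++col) {
        dst[col] = x[col]; // placeholder body
    }
}
```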
Nikita Sarychev [Tue, 28 Jan 2025 15:42:20 +0000 (07:42 -0800)]
HIP: Only call rocblas_initialize on rocblas versions with the multiple instantiation bug (#11080)
This disables the workaround on fixed rocBLAS versions (>= 4.0.0) to eliminate the runtime cost and unnecessary VRAM allocation of loading all Tensile objects.
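A minimal sketch of version-gating the workaround; the version macro is assumed to come from the rocBLAS headers and should be checked against the installed release:

```cpp
#include <rocblas/rocblas.h>

static void maybe_init_rocblas() {
#if defined(ROCBLAS_VERSION_MAJOR) && ROCBLAS_VERSION_MAJOR < 4
    // Older rocBLAS suffers from the multiple-instantiation bug, so load
    // all Tensile objects eagerly as a workaround.
    rocblas_initialize();
#endif
    // On rocBLAS >= 4.0.0 the bug is fixed; skipping the call avoids the
    // startup cost and the unnecessary VRAM allocation.
}
```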
Eric Curtin [Tue, 28 Jan 2025 14:45:41 +0000 (15:45 +0100)]
Add GitHub protocol pulling and http:// (#11465)
These are added as pulling protocols to llama-run.
Signed-off-by: Eric Curtin <redacted>
Nuno [Tue, 28 Jan 2025 14:17:25 +0000 (15:17 +0100)]
docker: allow installing pip packages system-wide (#11437)
Signed-off-by: rare-magma <redacted>
someone13574 [Tue, 28 Jan 2025 14:15:34 +0000 (09:15 -0500)]
cmake : don't fail on `GGML_CPU=OFF` (#11457)
Nuno [Tue, 28 Jan 2025 10:42:32 +0000 (11:42 +0100)]
docker: add perplexity and bench commands to full image (#11438)
Signed-off-by: rare-magma <redacted>
Akarshan Biswas [Tue, 28 Jan 2025 09:56:58 +0000 (15:26 +0530)]
SYCL : SOFTMAX F16 mask support and other fixes (#11261)
Implemented ggml_sycl_op_soft_max() F16 src1 (mask) support, for which a pragma deprecation warning was added during #5021.
To do this, it had to be decoupled from ggml_sycl_op_flatten, which always considered src1 to be of fp32 type (many OP functions depend on this).
* SYCL: SOFTMAX F16 mask support and other fixes
* test-backend-ops: Add F16 mask test cases
Michael Engel [Tue, 28 Jan 2025 08:32:40 +0000 (09:32 +0100)]
Handle missing model in CLI parameters for llama-run (#11399)
The HTTP client in llama-run only prints an error in case the download of
a resource fails. If the model name in the CLI parameter list is missing,
this causes the application to crash.
In order to prevent this, a check for the required model parameter has been
added, and errors for resource downloads are propagated to the caller.
Signed-off-by: Michael Engel <redacted>
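A minimal sketch of the two fixes, with hypothetical names standing in for llama-run's real functions:

```cpp
#include <cstdio>
#include <string>

static int download_resource(const std::string & url) {
    (void) url;
    return 0; // stub: a real implementation performs the HTTP download
}

static int resolve_model(const std::string & model) {
    if (model.empty()) {
        fprintf(stderr, "error: no model specified\n");
        return 1; // fail early instead of crashing later
    }
    // propagate the download result to the caller instead of only printing it
    return download_resource(model);
}
```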
Eric Curtin [Mon, 27 Jan 2025 18:36:10 +0000 (19:36 +0100)]
Add new hf protocol for ollama (#11449)
https://huggingface.co/docs/hub/en/ollama
Signed-off-by: Eric Curtin <redacted>
Haus1 [Mon, 27 Jan 2025 13:58:17 +0000 (08:58 -0500)]
AMD: parse the architecture as supplied by gcnArchName (#11244)
The value provided by minor doesn't include the stepping for AMD; parse the value returned by gcnArchName instead to retrieve an accurate ID.
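A minimal sketch of the parsing; gcnArchName is a real hipDeviceProp_t field holding strings like "gfx906:sramecc+:xnack-", while the helper itself is illustrative:

```cpp
#include <string>

// The architecture id is the token before the first ':' feature flag,
// e.g. parse_gcn_arch("gfx906:sramecc+:xnack-") == "gfx906".
static std::string parse_gcn_arch(const std::string & gcn_arch_name) {
    return gcn_arch_name.substr(0, gcn_arch_name.find(':'));
}
```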
lexasub [Mon, 27 Jan 2025 13:42:09 +0000 (17:42 +0400)]
llama : minor fixes to speed up llama model loading (#11448)
* impl::load: change the bpe_ranks map to an unordered map, reducing impl::load time by ~30% (see the sketch after this entry)
* llama_model_loader::init_mapping: replace `new llama_mmap` with `std::make_unique<llama_mmap>` for cleaner code and to roughly halve the time of running init_mappings
* Update src/llama-vocab.cpp
---------
Co-authored-by: lexasub <redacted>
Co-authored-by: Diego Devesa <redacted>
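A minimal sketch of the first change, assuming bpe_ranks is keyed by a pair of strings and therefore needs a custom hash with std::unordered_map; the details are illustrative, not the exact llama-vocab.cpp types:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>

struct pair_hash {
    size_t operator()(const std::pair<std::string, std::string> & p) const {
        // combine the two string hashes; good enough for illustration
        return std::hash<std::string>{}(p.first) ^ (std::hash<std::string>{}(p.second) << 1);
    }
};

// Average O(1) hash lookups instead of O(log n) tree lookups, which is
// where the ~30% reduction in impl::load time comes from.
using bpe_ranks_t = std::unordered_map<std::pair<std::string, std::string>, int, pair_hash>;
```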
Johannes Gäßler [Mon, 27 Jan 2025 11:07:12 +0000 (12:07 +0100)]
llama: refactor llama_decode_impl (#11381)
Ihar Hrachyshka [Mon, 27 Jan 2025 07:41:59 +0000 (02:41 -0500)]
metal: Handle null returned from MTLCreateSystemDefaultDevice() (#11441)
This fixes a segmentation fault when running tests and no Metal
devices are available (for example, when not linked with the Core Graphics
framework or otherwise).
Xuan Son Nguyen [Sun, 26 Jan 2025 21:45:32 +0000 (22:45 +0100)]
docker : fix ARM build and Vulkan build (#11434)
* ci : do not fail-fast for docker
* build arm64/amd64 separately
* fix pip
* no fast fail
* vulkan: try jammy
Georgi Gerganov [Sun, 26 Jan 2025 18:06:16 +0000 (20:06 +0200)]
metal : use residency sets (#11427)
* metal : use residency sets
ggml-ci
* metal : restore commandBufferWithUnretainedReferences calls [no ci]
* metal : release descriptors
ggml-ci
* metal : check env GGML_METAL_NO_RESIDENCY
ggml-ci
* metal : fix build + clean-up
ggml-ci
Nuno [Sun, 26 Jan 2025 17:22:43 +0000 (18:22 +0100)]
docker: add missing vulkan library to base layer and update to 24.04 (#11422)
Signed-off-by: rare-magma <redacted>
bandoti [Sun, 26 Jan 2025 16:07:48 +0000 (12:07 -0400)]
cmake: add ggml find package (#11369)
* Add initial ggml cmake package
* Add build numbers to ggml find-package
* Expand variables with GGML_ prefix
* Guard against adding to cache variable twice
* Add git to msys2 workflow
* Handle ggml-cpu-* variants
* Link ggml/ggml-base libraries to their targets
* Replace main-cmake-pkg with simple-cmake-pkg
* Interface features require c_std_90
* Fix typo
* Removed unnecessary bracket from status message
* Update examples/simple-cmake-pkg/README.md
Co-authored-by: Georgi Gerganov <redacted>
* Update examples/simple-cmake-pkg/README.md
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Frank Mai [Sun, 26 Jan 2025 15:20:34 +0000 (23:20 +0800)]
rpc: fix register position (#11424)
Signed-off-by: thxCode <redacted>
Georgi Gerganov [Sun, 26 Jan 2025 12:30:15 +0000 (14:30 +0200)]
readme : update hot topics
Jeff Bolz [Sun, 26 Jan 2025 02:10:03 +0000 (20:10 -0600)]
build: apply MSVC /bigobj option to c/cpp files only (#11423)
Jeff Bolz [Sat, 25 Jan 2025 21:29:57 +0000 (15:29 -0600)]
vulkan: compile shaders on-demand (#11406)
Reduce first-run startup time and memory consumption.
Should fix #11339.
uvos [Sat, 25 Jan 2025 20:01:12 +0000 (21:01 +0100)]
HIP: disable VMM on HIP as it seems that it doesn't work in some configurations (#11420)
Jeff Bolz [Sat, 25 Jan 2025 17:26:37 +0000 (11:26 -0600)]
build: add /bigobj to MSVC build (#11407)
Diego Devesa [Sat, 25 Jan 2025 16:22:41 +0000 (17:22 +0100)]
docker : add GGML_CPU_ARM_ARCH arg to select ARM architecture to build for (#11419)
Xuan Son Nguyen [Sat, 25 Jan 2025 15:36:44 +0000 (16:36 +0100)]
server : fix cleaning up stream task (#11418)
* server : fix cleaning up stream task
* one more spot
Diego Devesa [Sat, 25 Jan 2025 14:22:29 +0000 (15:22 +0100)]
docker : fix CPU ARM build (#11403)
* docker : fix CPU ARM build
* add CURL to other builds
Georgi Gerganov [Sat, 25 Jan 2025 11:36:48 +0000 (13:36 +0200)]
ci : fix line breaks on windows builds (#11409)
* ci : fix line breaks on windows builds
* cont : another try
* ci : fix powershell line breaks
jiahao su [Fri, 24 Jan 2025 23:26:01 +0000 (07:26 +0800)]
CANN: Add Ascend CANN build ci (#10217)
* CANN: Add Ascend CANN build ci
* Update build.yml
* Modify cann image version
* Update build.yml
* Change to run on x86 system
* Update build.yml
* Update build.yml
* Modify format error
* Update build.yml
* Add 'Ascend NPU' label restrictions
* Exclude non PR event
Co-authored-by: Yuanhao Ji <redacted>
* Update build.yml
---------
Co-authored-by: Yuanhao Ji <redacted>
uvos [Fri, 24 Jan 2025 23:02:23 +0000 (00:02 +0100)]
hip : Add hipGraph and VMM support to ROCM (#11362)
* Add hipGraph support
* Enable VMM on rocm
Johannes Gäßler [Fri, 24 Jan 2025 20:02:43 +0000 (21:02 +0100)]
CUDA: fix FP16 cuBLAS GEMM (#11396)
uvos [Fri, 24 Jan 2025 16:50:49 +0000 (17:50 +0100)]
rocBLAS: Avoid fp32->fp16->fp32 conversion on cdna (#11356)
Georgi Gerganov [Fri, 24 Jan 2025 16:41:30 +0000 (18:41 +0200)]
release : pack /lib in the packages (#11392)
* release : pack /lib and /include in the packages
* cmake : put libs in /bin
* TMP : push artifacts
* Revert "TMP : push artifacts"
This reverts commit 4decf2c4dfc5cdf5d96ea44c03c8f9801ab41262.
* ci : fix HIP cmake compiler options to be on first line
* ci : restore the original HIP commands
* ci : change ubuntu build from latest to 20.04
* ci : try to fix macos build rpaths
* ci : remove obsolete MacOS build
* TMP : push artifacts
* ci : change back to ubuntu latest
* ci : macos set build rpath to "@loader_path"
* ci : fix typo
* ci : change ubuntu package to 22.04
* Revert "TMP : push artifacts"
This reverts commit 537b09e70ffc604c414ee78acf3acb4c940ec597.
Jafar Uruç [Fri, 24 Jan 2025 13:30:13 +0000 (13:30 +0000)]
docs : update README with build targets for local docker build (#11368)
Johannes Gäßler [Fri, 24 Jan 2025 11:38:31 +0000 (12:38 +0100)]
CPU/CUDA: fix (GQA) mul mat back, add CUDA support (#11380)
Bernhard M. Wiedemann [Fri, 24 Jan 2025 11:21:35 +0000 (12:21 +0100)]
cmake : avoid -march=native when reproducible build is wanted (#11366)
See https://reproducible-builds.org/ for why this is good,
and https://reproducible-builds.org/specs/source-date-epoch/
for the definition of the SOURCE_DATE_EPOCH variable.
Without this patch, compiling on different machines produced different binaries, which made verification of results difficult.
Fixes: #11317
This patch was done while working on reproducible builds for openSUSE.
Eric Curtin [Fri, 24 Jan 2025 09:39:24 +0000 (09:39 +0000)]
Update llama-run README.md (#11386)
For consistency
Signed-off-by: Eric Curtin <redacted>
stduhpf [Fri, 24 Jan 2025 08:02:38 +0000 (09:02 +0100)]
server : (webui) put DeepSeek R1 CoT in a collapsible <details> element (#11364)
* webui : put DeepSeek R1 CoT in a collapsible <details> element
* webui: refactor split
* webui: don't use regex to split cot and response
* webui: format+qol
* webui: no loading icon if the model isn't generating
* ui fix, add configs
* add jsdoc types
* only filter </think> for assistant msg
* build
* update build
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Jeff Bolz [Thu, 23 Jan 2025 20:51:24 +0000 (14:51 -0600)]
tests: fix some mul_mat test gaps (#11375)
Now that we have batched mat-vec mul Vulkan shaders for up to n==8,
these tests weren't actually exercising the mat-mat mul path. Test
n==9 as well. Also, change to use all_types.
Eric Curtin [Thu, 23 Jan 2025 20:04:31 +0000 (20:04 +0000)]
Update documentation (#11373)
To show that -n, -ngl, and --ngl are all acceptable.
Signed-off-by: Eric Curtin <redacted>
Eric Curtin [Thu, 23 Jan 2025 16:16:18 +0000 (16:16 +0000)]
Add -ngl (#11372)
Most other llama.cpp cli tools accept -ngl with a single dash.
Signed-off-by: Eric Curtin <redacted>
Xuan Son Nguyen [Thu, 23 Jan 2025 12:56:05 +0000 (13:56 +0100)]
server : add more clean up when cancel_tasks is called (#11340)
* server : add more clean up when cancel_tasks is called
* fix recv_with_timeout
* std::remove_if
* fix std::remove_if
Eric Curtin [Thu, 23 Jan 2025 10:38:20 +0000 (10:38 +0000)]
Treat hf.co/ prefix the same as hf:// (#11350)
ollama uses the hf.co/ prefix to specify Hugging Face models, just as
RamaLama uses hf://.
Treat them similarly.
Signed-off-by: Eric Curtin <redacted>
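A minimal sketch of the prefix handling; the helper name is illustrative:

```cpp
#include <string>

// Rewrite the ollama-style hf.co/ prefix to the hf:// scheme that the
// rest of the resolution code already handles.
static std::string normalize_hf_prefix(std::string model) {
    const std::string alias = "hf.co/";
    if (model.rfind(alias, 0) == 0) {
        model = "hf://" + model.substr(alias.size());
    }
    return model;
}
```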
amd-dwang [Thu, 23 Jan 2025 07:14:28 +0000 (15:14 +0800)]
Vulkan-run-test: fix mmq_wg_denoms (#11343)
This appears to be a copy-and-paste error:
*mmq_wg_denoms should be used together with *warptile_mmq, instead of
wg_denoms.
Jeff Bolz [Thu, 23 Jan 2025 07:07:50 +0000 (01:07 -0600)]
vulkan: sort shaders for more deterministic binary (#11315)
Fixes #11306.
Jeff Bolz [Thu, 23 Jan 2025 07:01:17 +0000 (01:01 -0600)]
vulkan: fix diag_mask_inf (#11323)
With robustBufferAccess disabled, this shader was showing OOB stores. There
is a bounds check in the code, but the workgroup dimensions were reversed vs.
CUDA and it was running the wrong number of threads. So fix the workgroup
dimensions and disable robustness for this pipeline.
Diego Devesa [Wed, 22 Jan 2025 18:22:20 +0000 (19:22 +0100)]
main : update README documentation for batch size (#11353)
* main : update README documentation for batch size
* fix formatting
* minor
Georgi Gerganov [Wed, 22 Jan 2025 17:44:26 +0000 (19:44 +0200)]
readme : add plugin links (#11355)
Diego Devesa [Wed, 22 Jan 2025 16:44:40 +0000 (17:44 +0100)]
server : fix draft context not being released (#11354)
Olivier Chafik [Wed, 22 Jan 2025 16:16:27 +0000 (16:16 +0000)]
`minja`: sync at https://github.com/google/minja/commit/0f5f7f2b3770eb682fbc11763266d45204173686 (#11352)
Jiří Podivín [Wed, 22 Jan 2025 11:51:32 +0000 (12:51 +0100)]
Adding logprobs to /v1/completions (#11344)
Signed-off-by: Jiri Podivin <redacted>
Olivier Chafik [Wed, 22 Jan 2025 09:51:44 +0000 (09:51 +0000)]
`common`: utils to split / join / repeat strings (from json converter) (#11342)
* Factor string_join, string_split, string_repeat into common
* json: refactor to surface a versatile builder
* Update common.cpp
tc-mb [Wed, 22 Jan 2025 07:35:48 +0000 (15:35 +0800)]
llava : support Minicpm-omni (#11289)
* init
* add readme
* update readme
* do not use make
* update readme
* update and fix code
* fix editorconfig-checker
* no changes to convert py
* use clip_image_u8_free
Olivier Chafik [Tue, 21 Jan 2025 13:18:51 +0000 (13:18 +0000)]
Add Jinja template support (#11016)
* Copy minja from https://github.com/google/minja/commit/58f0ca6dd74bcbfbd4e71229736640322b31c7f9
* Add --jinja and --chat-template-file flags
* Add missing <optional> include
* Avoid print in get_hf_chat_template.py
* No designated initializers yet
* Try and work around msvc++ non-macro max resolution quirk
* Update test_chat_completion.py
* Wire LLM_KV_TOKENIZER_CHAT_TEMPLATE_N in llama_model_chat_template
* Refactor test-chat-template
* Test templates w/ minja
* Fix deprecation
* Add --jinja to llama-run
* Update common_chat_format_example to use minja template wrapper
* Test chat_template in e2e test
* Update utils.py
* Update test_chat_completion.py
* Update run.cpp
* Update arg.cpp
* Refactor common_chat_* functions to accept minja template + use_jinja option
* Attempt to fix linkage of LLAMA_CHATML_TEMPLATE
* Revert LLAMA_CHATML_TEMPLATE refactor
* Normalize newlines in test-chat-templates for windows tests
* Forward decl minja::chat_template to avoid eager json dep
* Flush stdout in chat template before potential crash
* Fix copy elision warning
* Rm unused optional include
* Add missing optional include to server.cpp
* Disable jinja test that has a cryptic windows failure
* minja: fix vigogne (https://github.com/google/minja/pull/22)
* Apply suggestions from code review
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Georgi Gerganov <redacted>
* Finish suggested renamings
* Move chat_templates inside server_context + remove mutex
* Update --chat-template-file w/ recent change to --chat-template
* Refactor chat template validation
* Guard against missing eos/bos tokens (null token otherwise throws in llama_vocab::impl::token_get_attr)
* Warn against missing eos / bos tokens when jinja template references them
* rename: common_chat_template[s]
* reinstate assert on chat_templates.template_default
* Update minja to https://github.com/google/minja/commit/b8437df626ac6cd0ce3b333b3c74ed1129c19f25
* Update minja to https://github.com/google/minja/pull/25
* Update minja from https://github.com/google/minja/pull/27
* rm unused optional header
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Xuan Son Nguyen [Tue, 21 Jan 2025 13:07:12 +0000 (14:07 +0100)]
export-lora : fix tok_embd tensor (#11330)
Radoslav Gerganov [Tue, 21 Jan 2025 13:06:41 +0000 (15:06 +0200)]
rpc : better caching of the base buffer pointer (#11331)
There is no need to use a map; just store the base pointer in the buffer
context.
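A minimal sketch of the change, with hypothetical names; the point is that the base pointer lives in the buffer's own context instead of a shared map:

```cpp
#include <cstdint>

struct rpc_buffer_context {
    uint64_t remote_handle;      // handle of the buffer on the RPC server
    void *   base_ptr = nullptr; // cached base pointer, fetched once
};

static void * fetch_base_from_server(uint64_t handle) {
    static char dummy[1];
    (void) handle;
    return dummy; // stub standing in for the actual RPC round-trip
}

static void * buffer_get_base(rpc_buffer_context & ctx) {
    if (ctx.base_ptr == nullptr) {
        ctx.base_ptr = fetch_base_from_server(ctx.remote_handle);
    }
    return ctx.base_ptr; // no map lookup on subsequent calls
}
```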
Eric Curtin [Tue, 21 Jan 2025 09:32:35 +0000 (09:32 +0000)]
linenoise.cpp refactoring (#11301)
More RAII mainly
Signed-off-by: Eric Curtin <redacted>
Georgi Gerganov [Tue, 21 Jan 2025 06:48:13 +0000 (08:48 +0200)]
metal : fix out-of-bounds write (#11314)
ggml-ci
Georgi Gerganov [Mon, 20 Jan 2025 20:29:43 +0000 (22:29 +0200)]
common : add -hfd option for the draft model (#11318)
* common : add -hfd option for the draft model
* cont : fix env var
* cont : more fixes
Jeff Bolz [Mon, 20 Jan 2025 16:38:32 +0000 (10:38 -0600)]
vulkan: fix coopmat2 validation failures (#11284)
mul mat and flash attention shaders were loading f32 types directly into
A/B matrices, which happens to work but is technically invalid usage.
For FA, we can load it as an Accumulator matrix and convert; this is not
in the inner loop and is cheap enough. For mul mat, it's more efficient
to do this conversion in a separate pass and have the input(s) be f16.
coopmat2 requires SPIR-V 1.6 (related to its use of LocalSizeId). LocalSizeId
requires maintenance4 to be enabled, and SPIR-V 1.6 requires Vulkan 1.3.
Georgi Gerganov [Mon, 20 Jan 2025 14:36:08 +0000 (16:36 +0200)]
examples : fix add_special conditions (#11311)
Christopher Nielsen [Mon, 20 Jan 2025 14:02:43 +0000 (09:02 -0500)]
mmap: add include for cerrno (#11296)
ggml-ci
Co-authored-by: Xuan Son Nguyen <redacted>
Michael Podvitskiy [Mon, 20 Jan 2025 14:02:15 +0000 (15:02 +0100)]
cmake: fix shell command quoting in build-info script (#11309)
Xuan Son Nguyen [Mon, 20 Jan 2025 13:35:07 +0000 (14:35 +0100)]
llama : add support for Deepseek-R1-Qwen distill model (#11310)
* llama : add support for Deepseek-R1-Qwen distill model
* coding style
Georgi Gerganov [Mon, 20 Jan 2025 07:29:32 +0000 (09:29 +0200)]
cont : fix whitespaces (#11305)
Kyle Bruene [Mon, 20 Jan 2025 07:21:01 +0000 (01:21 -0600)]
llama : re-add LLM_ARCH_PHIMOE (#11305)
Phi 3.5 MoE was partially removed during a refactor. The code was originally in llama.cpp and should be in llama-model.cpp after the refactor.
Georgi Gerganov [Sun, 19 Jan 2025 18:22:30 +0000 (20:22 +0200)]
tests : increase timeout when sanitizers are enabled (#11300)
* tests : increase timeout when sanitizers are enabled
* tests : add DEFAULT_HTTP_TIMEOUT
Georgi Gerganov [Sun, 19 Jan 2025 16:12:09 +0000 (18:12 +0200)]
simple-chat : fix BOS being added to each message (#11278)
Nicolò Scipione [Sun, 19 Jan 2025 13:33:34 +0000 (14:33 +0100)]
SYCL: Introducing memory host pool (#11251)
* Implement host pool for matrix_info
Creating a new memory pool on the host to store the memory locations for
matrix_info needed to launch gemm_batch from oneMKL/oneMath.
Removing complex support in gemm_batch since it is not used in llama.cpp.
* Remove unnecessary headers and cast
* Reorder member variable to avoid warning on initialization
* Formatting
* Remove unused variable
* Address PR review feedback - remove warning
---------
Signed-off-by: nscipione <redacted>
Eric Curtin [Sat, 18 Jan 2025 14:42:31 +0000 (14:42 +0000)]
Adding linenoise.cpp to llama-run (#11252)
This is a fork of linenoise that is C++17 compatible. I intend to
add it to llama-run so we can do things like traverse prompt
history via the up and down arrows:
https://github.com/ericcurtin/linenoise.cpp
Signed-off-by: Eric Curtin <redacted>
Georgi Gerganov [Sat, 18 Jan 2025 14:18:15 +0000 (16:18 +0200)]
cmake : add sanitizer flags for llama.cpp (#11279)
* cmake : add sanitizer flags for llama.cpp
ggml-ci
* tests : fix compile warnings
ggml-ci
* cmake : move sanitizer flags to llama_add_compile_flags
ggml-ci
* cmake : move llama.cpp compile flags to top level lists
ggml-ci
* cmake : apply only sanitizer flags at top level
ggml-ci
* tests : fix gguf context use in same_tensor_data
* gguf-test: tensor data comparison
* dummy : trigger ggml-ci
* unicode : silence gcc warnings
ggml-ci
* ci : use sanitizer builds only in Debug mode
ggml-ci
* cmake : add status messages [no ci]
---------
Co-authored-by: Johannes Gäßler <redacted>
Xuan Son Nguyen [Sat, 18 Jan 2025 13:12:05 +0000 (14:12 +0100)]
server : implement cancellable request (#11285)
* server : implement cancellable request
* fix typo
* httplib 0.18.5
* fix i underflow
Georgi Gerganov [Sat, 18 Jan 2025 11:18:32 +0000 (13:18 +0200)]
scripts : restore hf.sh (#11288)
ggml-ci
LostRuins Concedo [Sat, 18 Jan 2025 10:20:57 +0000 (18:20 +0800)]
tts : add guide tokens support (#11186)
* Added the ability to use guide tokens for OuteTTS, greatly improving TTS recitation accuracy over long input sequences.
* applied linting suggestions, updated to latest llama_vocab changes, added a safety check, added newline to guide token start
Jeff Bolz [Sat, 18 Jan 2025 08:26:50 +0000 (02:26 -0600)]
vulkan: fix coopmat2 flash attention for non-contiguous inputs (#11281)
Add code similar to mul_mm_cm2 to force alignment of strides, to avoid
a performance regression.
Add noncontiguous FA tests in test-backend-ops.
Fixes #11268.
codezjx [Fri, 17 Jan 2025 12:57:56 +0000 (20:57 +0800)]
llama.android: add field formatChat to control whether to parse special tokens when sending a message (#11270)
Radoslav Gerganov [Fri, 17 Jan 2025 08:57:09 +0000 (10:57 +0200)]
rpc : early register backend devices (#11262)
Early register RPC devices and do not propagate RPC specifics in the
llama model structures.
ref: #10609
Georgi Gerganov [Fri, 17 Jan 2025 07:28:00 +0000 (09:28 +0200)]
vocab : fix double-eos check (#11273)
ggml-ci
David Renshaw [Fri, 17 Jan 2025 07:12:01 +0000 (02:12 -0500)]
llama : fix deprecation message: vocabable -> vocab (#11269)
musoles [Fri, 17 Jan 2025 00:10:49 +0000 (00:10 +0000)]
README : added kalavai to infrastructure list (#11216)
Jeff Bolz [Thu, 16 Jan 2025 21:47:10 +0000 (15:47 -0600)]
vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (#11166)
* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl
Shaders are based on cpy.cu.
* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32
* ggml: copy q->f32 assumes some contiguity in the destination
Jeff Bolz [Thu, 16 Jan 2025 21:23:49 +0000 (15:23 -0600)]
vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (#11206)
Do masking on whole dwords, fetch all scales at once.
Jeff Bolz [Thu, 16 Jan 2025 21:16:39 +0000 (15:16 -0600)]
vulkan: optimize coopmat2 q2_k dequant function (#11130)
RunningLeon [Thu, 16 Jan 2025 18:10:38 +0000 (02:10 +0800)]
llama : add internlm3 support (#11233)
* support internlm3
* fix lint
Johannes Gäßler [Thu, 16 Jan 2025 15:43:38 +0000 (16:43 +0100)]
CUDA: backwards pass for misc. ops, add tests (#11257)
* CUDA: backwards pass for misc. ops, add tests
* remove restrict from pointers
Xuan Son Nguyen [Thu, 16 Jan 2025 12:54:08 +0000 (13:54 +0100)]
llama : add `llama_model_load_from_splits` (#11255)
* llama : add `llama_model_load_from_splits`
* update
fj-y-saito [Thu, 16 Jan 2025 09:11:49 +0000 (18:11 +0900)]
ggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (#11227)
* Add SVE support for q4_K_q8_K
* Update ggml/src/ggml-cpu/ggml-cpu-quants.c
change to use K_SCALE_SIZE
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Eve [Wed, 15 Jan 2025 19:50:13 +0000 (19:50 +0000)]
vulkan: scale caching for k quants + misc fixes (#11081)
* q6_k scale caching
* 16 bit unpack
* q4_k test (slow)
* revert it
* q3_k
* q2_k
* little stuff
* try precalculating products of a and q2_k scales
* Revert "try precalculating products of a and q2_k scales"
This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.
* unpack should be u16, add vim swap to gitignore (about time)
* better q4_k scales
* q5_k
* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations
* q2_k better dequant
* q3_k optimizations
* q3_k use hmask simd from cpu avx version
* make the caches happy
* q3_k separate out calculation
* q2_k separate out
* little stuff
* use calc_superblock everywhere
* q2_k optimize scale calculation
* more barriers
Georgi Gerganov [Wed, 15 Jan 2025 16:28:35 +0000 (18:28 +0200)]
ci : use -no-cnv in gguf-split tests (#11254)
* ci : use -no-cnv in gguf-split tests
ggml-ci
* ci : use -no-cnv in requantize tests
ggml-ci
* scripts : fix [no ci]
Junil Kim [Wed, 15 Jan 2025 13:17:42 +0000 (22:17 +0900)]
fix: ggml: fix vulkan-shaders-gen build (#10448)
* fix: ggml: fix vulkan-shaders-gen build
The vulkan-shaders-gen target was not being built correctly
in the case of cross-compilation.
Other outputs need to be built for the cross-compile target,
but vulkan-shaders-gen needs to be built for the host.
* refactor: ggml: Improve vulkan-shaders-gen toolchain setup
- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.
* refactor: ggml: Improve vulkan-shaders-gen toolchain setup
Use configure_file to generate host_toolchain.cmake from template
* fix: ggml: Fix compile error
Fix compile error not finding vulkan-shaders-gen
* fix: vulkan-shaders-gen build and path handling
Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation
* fix: improve host compiler detection in vulkan shader build
Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation
* refactor: Simplify CMake function for detecting host compiler
Simplified the CMake function to improve the process of detecting the host compiler.
* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt
Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0558fc3ecb8a5af69d2ece02fae4710ade)
* refactor: Rename host_toolchain.cmake.in
- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in
* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
Johannes Gäßler [Wed, 15 Jan 2025 11:51:37 +0000 (12:51 +0100)]
RoPE: fix back, CUDA support for back + noncont. (#11240)
* RoPE: fix back, CUDA support for back + noncont.
* fix comments reg. non-cont. RoPE support [no-ci]
Daniel Bevenius [Wed, 15 Jan 2025 04:44:38 +0000 (05:44 +0100)]
examples : add embd_to_audio to tts-outetts.py [no ci] (#11235)
This commit contains a suggestion for adding the missing embd_to_audio
function from tts.cpp to tts-outetts.py. This introduces a dependency
on numpy, which I was not sure is acceptable (only PyTorch
was mentioned in the referenced PR).
Also the README has been updated with instructions to run the example
with llama-server and the python script.
Refs: https://github.com/ggerganov/llama.cpp/pull/10784#issuecomment-2548377734