git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
9 months ago ggml : AVX2 support for Q4_0_8_8 (#8713)
Srihari-mcw [Wed, 4 Sep 2024 16:51:22 +0000 (22:21 +0530)]
ggml : AVX2 support for Q4_0_8_8 (#8713)

* Add AVX2 based implementations for quantize_q8_0_4x8, ggml_gemv_q4_0_8x8_q8_0 and ggml_gemm_q4_0_8x8_q8_0 functions

* Update code to fix issues occurring due to non-alignment of elements to be processed as a multiple of 16 in MSVC

* Update comments and indentation

* Make updates to reduce number of load instructions

9 months ago [SYCL] Fix DMMV dequantization (#9279)
Ouadie EL FAROUKI [Wed, 4 Sep 2024 15:26:33 +0000 (16:26 +0100)]
[SYCL] Fix DMMV dequantization (#9279)

Fixed dmmv dequant for ncols == GGML_SYCL_DMMV_X

9 months ago Fix broken links in docker.md (#9306)
杨朱 · Kiki [Wed, 4 Sep 2024 11:45:28 +0000 (19:45 +0800)]
Fix broken links in docker.md (#9306)

9 months ago rpc : make RPC servers come first in the device list (#9296)
Radoslav Gerganov [Wed, 4 Sep 2024 08:08:32 +0000 (11:08 +0300)]
rpc : make RPC servers come first in the device list (#9296)

* rpc : make RPC servers come first in the device list

* rpc : disable options for non-RPC builds

* rpc : rpc_count always zero for non-RPC builds

9 months ago readme : rename result_format to response_format (#9300)
Pascal Patry [Wed, 4 Sep 2024 06:45:40 +0000 (02:45 -0400)]
readme : rename result_format to response_format (#9300)

9 months ago flake.lock: Update (#9261)
Georgi Gerganov [Tue, 3 Sep 2024 23:36:43 +0000 (02:36 +0300)]
flake.lock: Update (#9261)

Flake lock file updates:

• Updated input 'flake-parts':
    'github:hercules-ci/flake-parts/8471fe90ad337a8074e957b69ca4d0089218391d?narHash=sha256-XOQkdLafnb/p9ij77byFQjDf5m5QYl9b2REiVClC+x4=' (2024-08-01)
  → 'github:hercules-ci/flake-parts/af510d4a62d071ea13925ce41c95e3dec816c01d?narHash=sha256-ODYRm8zHfLTH3soTFWE452ydPYz2iTvr9T8ftDMUQ3E=' (2024-08-30)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/c374d94f1536013ca8e92341b540eba4c22f9c62?narHash=sha256-Z/ELQhrSd7bMzTO8r7NZgi9g5emh+aRKoCdaAv5fiO0=' (2024-08-21)
  → 'github:NixOS/nixpkgs/71e91c409d1e654808b2621f28a327acfdad8dc2?narHash=sha256-GnR7/ibgIH1vhoy8cYdmXE6iyZqKqFxQSVkFgosBh6w=' (2024-08-28)

Co-authored-by: github-actions[bot] <redacted>
9 months ago llama-bench : add JSONL (NDJSON) output mode (#9288)
Aarni Koskela [Tue, 3 Sep 2024 17:58:54 +0000 (20:58 +0300)]
llama-bench : add JSONL (NDJSON) output mode (#9288)

* llama-bench : add JSONL (NDJSON) output mode

* llama-bench : update usage docs
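
A note for readers: JSONL/NDJSON output means one self-contained JSON object per line, which makes results easy to stream and to append across runs. A minimal C++ sketch of the idea (field names here are illustrative, not llama-bench's actual schema):

    #include <cstdio>

    // Emit one complete JSON object per line (NDJSON). Each line is a
    // standalone record, so consumers like jq can process output as it streams.
    static void emit_record(FILE * out, const char * model, int n_threads, double ms_per_token) {
        fprintf(out, "{\"model\":\"%s\",\"n_threads\":%d,\"ms_per_token\":%.3f}\n",
                model, n_threads, ms_per_token);
        fflush(out);
    }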

9 months ago readme : refactor API section + remove old hot topics
Georgi Gerganov [Tue, 3 Sep 2024 07:00:36 +0000 (10:00 +0300)]
readme : refactor API section + remove old hot topics

9 months ago server : test script : add timeout for all requests (#9282)
Xuan Son Nguyen [Mon, 2 Sep 2024 20:08:38 +0000 (22:08 +0200)]
server : test script : add timeout for all requests (#9282)

9 months ago src: make tail invalid when kv cell is intersection for mamba (#9249)
Zhenwei Jin [Mon, 2 Sep 2024 17:53:23 +0000 (01:53 +0800)]
src: make tail invalid when kv cell is intersection for mamba (#9249)

9 months ago docker : fix missing binaries in full-cuda image (#9278)
slaren [Mon, 2 Sep 2024 16:11:13 +0000 (18:11 +0200)]
docker : fix missing binaries in full-cuda image (#9278)

9 months ago ggml : add pthread includes on FreeBSD (#9258)
yuri@FreeBSD [Mon, 2 Sep 2024 15:25:30 +0000 (08:25 -0700)]
ggml : add pthread includes on FreeBSD (#9258)

9 months ago server : refactor multitask handling (#9274)
Xuan Son Nguyen [Mon, 2 Sep 2024 15:11:51 +0000 (17:11 +0200)]
server : refactor multitask handling (#9274)

* server : remove multitask from server_task

* refactor completions handler

* fix embeddings

* use res_ok everywhere

* small change for handle_slots_action

* use unordered_set everywhere

* (try) fix test

* no more "mutable" lambda

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <redacted>
* use deque

---------

Co-authored-by: Georgi Gerganov <redacted>
9 months ago llama-cli : remove duplicated log message (#9275)
Guoliang Hua [Mon, 2 Sep 2024 12:36:43 +0000 (20:36 +0800)]
llama-cli : remove duplicated log message (#9275)

9 months ago build(nix): Package gguf-py (#5664)
Tushar [Mon, 2 Sep 2024 11:21:01 +0000 (16:51 +0530)]
build(nix): Package gguf-py (#5664)

* style: format with nixfmt/rfc101-style

* build(nix): Package gguf-py

* build(nix): Refactor to new scope for gguf-py

* build(nix): Exclude gguf-py from devShells

* build(nix): Refactor gguf-py derivation to take in exact deps

* build(nix): Enable pytestCheckHook and pythonImportsCheck for gguf-py

* build(python): Package python scripts with pyproject.toml

* chore: Cleanup

* dev(nix): Break up python/C devShells

* build(python): Relax pytorch version constraint

Nix has an older version

* chore: Move cmake to nativeBuildInputs for devShell

* fmt: Reconcile formatting with rebase

* style: nix fmt

* cleanup: Remove unnecessary __init__.py

* chore: Suggestions from review

- Filter out non-source files from llama-scripts flake derivation
- Clean up unused closure
- Remove scripts devShell

* revert: Bad changes

* dev: Simplify devShells, restore the -extra devShell

* build(nix): Add pyyaml for gguf-py

* chore: Remove some unused bindings

* dev: Add tiktoken to -extra devShells

9 months ago llama : minor style
Georgi Gerganov [Mon, 2 Sep 2024 08:52:04 +0000 (11:52 +0300)]
llama : minor style

9 months ago llama : support RWKV v6 models (#8980)
Molly Sophia [Sun, 1 Sep 2024 14:38:17 +0000 (22:38 +0800)]
llama : support RWKV v6 models (#8980)

* convert_hf_to_gguf: Add support for RWKV v6

Signed-off-by: Molly Sophia <redacted>
* Add RWKV tokenization

* Fix build

Signed-off-by: Molly Sophia <redacted>
* Do not use special tokens when matching in RWKV tokenizer

* Fix model loading

* Add (broken) placeholder graph builder for RWKV

* Add workaround for kv cache

* Add logits conversion to rwkv5

* Add rwkv5 layer norms

* Add time mix KVRG & correct merge mistake

* Add remaining time mix parameters

* Add time mix output loading

* Add placeholder llm_build_time_mix

* Fix build

Signed-off-by: Molly Sophia <redacted>
* Load more tensors for rwkv v6

Signed-off-by: Molly Sophia <redacted>
* Fix rwkv tokenizer

Signed-off-by: Molly Sophia <redacted>
* ggml: Add unary operator Exp

Signed-off-by: Molly Sophia <redacted>
* RWKV v6 graph building

Signed-off-by: Molly Sophia <redacted>
* Add ``rescale_every_n_layers`` parameter

Signed-off-by: Molly Sophia <redacted>
* Add ``wkv.head_size`` key for RWKV

so it doesn't reuse Mamba ssm parameters

Signed-off-by: Molly Sophia <redacted>
* Fix offloading layers to CUDA

Signed-off-by: Molly Sophia <redacted>
* Fix parallel inferencing for RWKV

Signed-off-by: Molly Sophia <redacted>
* Remove trailing whitespaces

Signed-off-by: Molly Sophia <redacted>
* build_rwkv: Avoid using inplace operations

Signed-off-by: Molly Sophia <redacted>
* convert_hf_to_gguf: rwkv: Avoid using ``eval``

Signed-off-by: Molly Sophia <redacted>
* convert_hf_to_gguf: rwkv tokenizer: Don't escape sequences manually

Signed-off-by: Molly Sophia <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* ggml: Add backward computation for unary op ``exp``

Signed-off-by: Molly Sophia <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* Use MODEL_ARCH.RWKV6 instead of MODEL_ARCH.RWKV

Signed-off-by: Molly Sophia <redacted>
* build_rwkv6: Simplify graph

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Detect model.type

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Fix tensor loading for 7B/14B models

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Fix group_norm assertion failure with Metal

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Clean up

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Add quantization tensor exclusion

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Use the new advanced batch splits

Signed-off-by: Molly Sophia <redacted>
* Update src/llama.cpp

Co-authored-by: compilade <redacted>
* llama: rwkv6: Use ``ggml_norm`` instead of ``ggml_group_norm``

Co-authored-by: compilade <redacted>
* llama: rwkv6: Apply code style and misc changes

Signed-off-by: Molly Sophia <redacted>
* converter: Use class name ``Rwkv6Model``

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Make use of key ``feed_forward_length``

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Add kv ``time_mix_extra_dim`` and ``time_decay_extra_dim``

Signed-off-by: Molly Sophia <redacted>
* converter: Match ``new_name`` instead of ``name`` for float32 explicit tensors

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Keep ``time_mix_w1/w2`` as F32

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Remove unused nodes

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Apply code format changes

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Add lora for some supported tensors

Currently att.key/receptance/value/gate/output, ffn.receptance/key/value, as well as head.weight

Signed-off-by: Molly Sophia <redacted>
* rwkv : speed-up tokenization using trie (a sketch of the idea follows this entry)

* minor : style + indentation

* llama: rwkv6: Avoid division by zero

Co-authored-by: compilade <redacted>
* ggml: rwkv_wkv: Avoid copying the state

Signed-off-by: Molly Sophia <redacted>
---------

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Layl Bongers <redacted>
Co-authored-by: compilade <redacted>
Co-authored-by: Georgi Gerganov <redacted>
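
The trie-based tokenization speed-up mentioned above is a classic technique: instead of repeated substring lookups against the vocabulary, walk the text once from each position and remember the longest vocabulary entry matched so far. A generic sketch under assumed names (not the actual rwkv tokenizer code):

    #include <map>
    #include <string>
    #include <vector>

    struct trie_node {
        int token_id = -1;                            // -1 means no vocab entry ends here
        std::map<unsigned char, trie_node> children;
    };

    static void trie_insert(trie_node & root, const std::string & piece, int id) {
        trie_node * node = &root;
        for (unsigned char c : piece) node = &node->children[c];
        node->token_id = id;
    }

    // greedy longest-match tokenization: one left-to-right walk per position
    static std::vector<int> tokenize(const trie_node & root, const std::string & text) {
        std::vector<int> out;
        size_t pos = 0;
        while (pos < text.size()) {
            const trie_node * node = &root;
            int    best_id  = -1;
            size_t best_len = 1; // skip one byte if nothing matches
            for (size_t i = pos; i < text.size(); i++) {
                auto it = node->children.find((unsigned char) text[i]);
                if (it == node->children.end()) break;
                node = &it->second;
                if (node->token_id >= 0) { best_id = node->token_id; best_len = i - pos + 1; }
            }
            if (best_id >= 0) out.push_back(best_id);
            pos += best_len;
        }
        return out;
    }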
9 months ago nix: fix CUDA build - replace deprecated autoAddOpenGLRunpathHook
Echo Nolan [Thu, 22 Aug 2024 21:19:14 +0000 (17:19 -0400)]
nix: fix CUDA build - replace deprecated autoAddOpenGLRunpathHook

The CUDA nix build broke when we updated nixpkgs in
8cd1bcfd3fc9f2b5cbafd7fb7581b3278acec25f. As far as I can tell all
that happened is cudaPackages.autoAddOpenGLRunpathHook got moved to
pkgs.autoAddDriverRunpath. This commit fixes it.

9 months ago sgemm : improved Q4_0 and Q8_0 performance via 4xN and Mx4 gemm (#8908)
Srihari-mcw [Sat, 31 Aug 2024 08:20:35 +0000 (13:50 +0530)]
sgemm : improved Q4_0 and Q8_0 performance via 4xN and Mx4 gemm (#8908)

9 months ago llama : fix typo in xcda_array_view comment [no ci] (#9132)
Daniel Bevenius [Sat, 31 Aug 2024 07:50:22 +0000 (09:50 +0200)]
llama : fix typo in xcda_array_view comment [no ci] (#9132)

9 months ago llama : fix llama_split_mode enum values in main_gpu document (#9057)
Sutou Kouhei [Fri, 30 Aug 2024 18:08:10 +0000 (03:08 +0900)]
llama : fix llama_split_mode enum values in main_gpu document (#9057)

LLAMA_SPLIT_* were renamed to LLAMA_SPLIT_MODE_* in #5697.
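
For reference, the renamed values in llama.h look like this (a sketch of the relevant enum as of that rename):

    enum llama_split_mode {
        LLAMA_SPLIT_MODE_NONE  = 0, // single GPU
        LLAMA_SPLIT_MODE_LAYER = 1, // split layers and KV across GPUs
        LLAMA_SPLIT_MODE_ROW   = 2, // split rows across GPUs
    };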

9 months ago Correct typo run_llama2.sh > run-llama2.sh (#9149)
蕭澧邦 [Fri, 30 Aug 2024 12:10:01 +0000 (20:10 +0800)]
Correct typo run_llama2.sh > run-llama2.sh (#9149)

10 months ago llava : the function "clip" should be int (#9237)
tc-mb [Fri, 30 Aug 2024 05:21:57 +0000 (13:21 +0800)]
llava : the function "clip" should be int (#9237)

10 months ago Threadpool: take 2 (#8672)
Faisal Zaghloul [Thu, 29 Aug 2024 23:20:53 +0000 (19:20 -0400)]
Threadpool: take 2 (#8672)

* Introduce ggml_compute_threadpool

- OpenMP functional: check
- Vanilla ggml functional: Check
- ggml w/threadpool functional: Check
- OpenMP no regression: No glaring problems
- Vanilla ggml no regression: No glaring problems
- ggml w/threadpool no regression: No glaring problems

* Minor fixes

* fixed use after release bug

* fixed a harmless race condition

* Fix Android build issue

* fix more race conditions

* fix deadlock for cases where cgraph.n_nodes == 1

and fix --poll case

* threadpool: use cpu_get_num_math to set the default number of threadpool threads

This way we avoid using E-Cores and Hyperthreaded siblings.

* bench: create fresh threadpool for each test

For benchmarking it's better to start a fresh pool for each test with the exact number of threads
needed for that test. Having larger pools is suboptimal (causes more load, etc).

* atomics: always use stdatomics with clang and use relaxed memory order when polling in ggml_barrier

This also removes sched_yield() calls from ggml_barrier() to match OpenMP behavior.

* threadpool: make polling the default to match openmp behavior

All command line args now allow for setting poll to 0 (false).

* threadpool: do not wakeup threads in already paused threadpool

* fix potential race condition in check_for_work

* threadpool: do not create two threadpools if their params are identical

* threadpool: reduce pause/resume/wakeup overhead in common cases

We now start threadpool in paused state only if we have two.
The resume is now implicit (ie new work) which allows for reduced locking and context-switch overhead.

* threadpool: add support for hybrid polling

poll params (--poll, ...) now specify "polling level", i.e. how aggressively we poll before waiting on cond.var.
poll=0 means no polling, 1 means poll for 128K rounds then wait, 2 for 256K rounds, ...

The default value of 50 (i.e. 50x128K rounds) seems like a decent default across modern platforms.
We can tune this further as things evolve. (A minimal sketch of this scheme follows at the end of this entry.)

* threadpool: reduce the number of barriers required

New work is now indicated with an atomic counter that is incremented for
each new graph that needs to be computed.
This removes the need for extra barrier for clearing the "new_work" and
removes the special case for trivial graphs.

* threadpool: remove special-casing for disposable threadpools

With the efficient hybrid polling there is no need to make disposable pools any different.
This simplifies the overall logic and reduces branching.

Include n_threads in debug print for disposable threadpool.

Declare pause and stop flags as atomic_bool
This doesn't actually generate any memory barriers and simply informs
the thread sanitizer that these flags can be written & read by different
threads without locking.

* threadpool: do not clear barrier counters between graphs computes (fixes race with small graphs)

This fixes the race condition with very small graphs where the main thread happens to
start a new graph while the workers are just about to exit from barriers.

* threadpool: use relaxed order for chunk sync

A full memory barrier is overkill for this since each thread works on a different chunk

* threadpool: remove abort_callback from threadpool state

* threadpool: better naming for thread/cpumask related functions

* threadpool: consistent use of int type for n_threads params

* threadpool: add support for ggml_threadpool_params_default/init

Also removes the need for the explicit mask_specified param.
An all-zero cpumask means use the default (usually inherited) cpu affinity mask.

* threadpool: move typedef into ggml.h

* threadpool: fix apply_priority() function name

* threadpool: fix swift wrapper errors due to n_threads int type cleanup

* threadpool: enable --cpu-mask and other threadpool related options only if threadpool is enabled

* threadpool: replace checks for compute_thread ret code with proper status check

* threadpool: simplify threadpool init logic and fix main thread affinity application

Most of the init code is now exactly the same between threadpool and openmp.

* threadpool: update threadpool resume/pause function names

* threadpool: enable openmp by default for now

* threadpool: don't forget to free workers state when omp is enabled

* threadpool: avoid updating process priority on the platforms that do not require it

On Windows we need to change overall process priority class in order to set thread priorities,
but on Linux, Mac, etc we do not need to touch the overall process settings.

* threadpool: update calling thread prio and affinity only at start/resume

This avoids extra syscalls for each graph_compute()

* llama-bench: turn threadpool params into vectors, add output headers, etc

* llama-bench: add support for cool off between tests --delay

This helps for long running tests on platforms that are thermally limited (phones, laptops, etc).
--delay (disabled by default) introduces the sleep for N seconds before starting each test.

* threadpool: move process priority setting into the apps (bench and cli)

This avoids changing the overall process priority on Windows for the apps
that use ggml/llama.cpp directly.

* threadpool: move all pause/resume logic into ggml

* threadpool: further api cleanup and prep for future refactoring

All threadpool related functions and structs use ggml_threadpool prefix.

* threadpool: minor indent fixes

* threadpool: improve setpriority error message

* Update examples/llama-bench/llama-bench.cpp

Co-authored-by: slaren <redacted>
* threadpool: fix indent in set_threadpool call

* use int32_t for n_thread type in public llama.cpp API

* threadpool: use _new and _free instead of _create and _release

* fix two more public APIs to use int32_t for n_threads

* build: set _GNU_SOURCE for Android

---------

Co-authored-by: Max Krasnyansky <redacted>
Co-authored-by: fmz <redacted>
Co-authored-by: Max Krasnyansky <redacted>
Co-authored-by: slaren <redacted>
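
The hybrid polling described in this entry boils down to: workers spin on an atomic work counter for a bounded number of rounds, then fall back to a condition variable. A minimal standalone sketch of that pattern with simplified names (not the actual ggml implementation):

    #include <atomic>
    #include <condition_variable>
    #include <mutex>

    struct work_signal {
        std::atomic<int>        n_graphs{0}; // bumped once per new graph to compute
        std::mutex              mtx;
        std::condition_variable cv;

        // worker side: poll `poll_level * 128K` rounds with relaxed loads, then block
        void wait_for_work(int last_seen, int poll_level) {
            const long rounds = (long) poll_level * 128 * 1024;
            for (long i = 0; i < rounds; i++) {
                if (n_graphs.load(std::memory_order_relaxed) != last_seen) {
                    return; // new work arrived while polling - no syscall needed
                }
            }
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [&] { return n_graphs.load(std::memory_order_relaxed) != last_seen; });
        }

        // submitter side: bump the counter under the lock to avoid lost wakeups
        void post_work() {
            {
                std::lock_guard<std::mutex> lock(mtx);
                n_graphs.fetch_add(1, std::memory_order_relaxed);
            }
            cv.notify_all();
        }
    };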
10 months ago server : fix crash when error handler dumps invalid utf-8 json (#9195)
Jan Boon [Tue, 27 Aug 2024 10:28:06 +0000 (18:28 +0800)]
server : fix crash when error handler dumps invalid utf-8 json (#9195)

10 months ago flake.lock: Update (#9162)
Georgi Gerganov [Thu, 29 Aug 2024 04:28:14 +0000 (07:28 +0300)]
flake.lock: Update (#9162)

Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/c3aa7b8938b17aebd2deecf7be0636000d62a2b9?narHash=sha256-med8+5DSWa2UnOqtdICndjDAEjxr5D7zaIiK4pn0Q7c=' (2024-08-14)
  → 'github:NixOS/nixpkgs/c374d94f1536013ca8e92341b540eba4c22f9c62?narHash=sha256-Z/ELQhrSd7bMzTO8r7NZgi9g5emh+aRKoCdaAv5fiO0=' (2024-08-21)

Co-authored-by: github-actions[bot] <redacted>
10 months ago docker : build images only once (#9225)
slaren [Wed, 28 Aug 2024 15:28:00 +0000 (17:28 +0200)]
docker : build images only once (#9225)

10 months ago docker : update CUDA images (#9213)
slaren [Wed, 28 Aug 2024 11:20:36 +0000 (13:20 +0200)]
docker : update CUDA images (#9213)

10 months ago vulkan : fix build (#0)
Georgi Gerganov [Tue, 27 Aug 2024 19:10:58 +0000 (22:10 +0300)]
vulkan : fix build (#0)

ggml-ci

10 months ago sync : ggml
Georgi Gerganov [Tue, 27 Aug 2024 19:01:45 +0000 (22:01 +0300)]
sync : ggml

10 months ago Fix minicpm example directory (#9111)
Xie Yanbo [Tue, 27 Aug 2024 12:33:08 +0000 (20:33 +0800)]
Fix minicpm example directory (#9111)

10 months ago llama : fix qs.n_attention_wv for DeepSeek-V2 (#9156)
compilade [Tue, 27 Aug 2024 10:09:23 +0000 (06:09 -0400)]
llama : fix qs.n_attention_wv for DeepSeek-V2 (#9156)

10 months ago server : add some missing env variables (#9116)
Xuan Son Nguyen [Tue, 27 Aug 2024 09:07:01 +0000 (11:07 +0200)]
server : add some missing env variables (#9116)

* server : add some missing env variables

* add LLAMA_ARG_HOST to server dockerfile

* also add LLAMA_ARG_CONT_BATCHING

10 months ago llama : fix ChatGLM4 wrong shape (#9194)
CausalLM [Tue, 27 Aug 2024 06:58:22 +0000 (14:58 +0800)]
llama : fix ChatGLM4 wrong shape (#9194)

This should fix THUDM/glm-4-9b-chat-1m and CausalLM/miniG

10 months ago llama : fix llama3.1 rope_freqs not respecting custom head_dim (#9141)
Carsten Kragelund Jørgensen [Tue, 27 Aug 2024 06:53:40 +0000 (08:53 +0200)]
llama : fix llama3.1 rope_freqs not respecting custom head_dim (#9141)

* fix: llama3.1 rope_freqs not respecting custom head_dim

* fix: use potential head_dim for Exaone

10 months ago common : Update stb_image.h to latest version (#9161)
arch-btw [Tue, 27 Aug 2024 05:58:50 +0000 (22:58 -0700)]
common : Update stb_image.h to latest version (#9161)

* Update stb_image.h to latest version

Fixes https://github.com/ggerganov/llama.cpp/issues/7431

* Update .ecrc

10 months ago ggml : do not crash when quantizing q4_x_x with an imatrix (#9192)
slaren [Mon, 26 Aug 2024 17:44:43 +0000 (19:44 +0200)]
ggml : do not crash when quantizing q4_x_x with an imatrix (#9192)

10 months ago metal : separate scale and mask from QKT in FA kernel (#9189)
Georgi Gerganov [Mon, 26 Aug 2024 15:31:02 +0000 (18:31 +0300)]
metal : separate scale and mask from QKT in FA kernel (#9189)

* metal : separate scale and mask from QKT in FA kernel

* metal : ne01 check no longer necessary

* metal : keep data in local memory

10 months ago ggml : add SSM Metal kernels (#8546)
Georgi Gerganov [Mon, 26 Aug 2024 14:55:36 +0000 (17:55 +0300)]
ggml : add SSM Metal kernels (#8546)

* ggml : add ggml_ssm_conv metal impl

* ggml : add ssm_scan metal impl

ggml-ci

10 months ago tests : fix compile warnings for unreachable code (#9185)
Georgi Gerganov [Mon, 26 Aug 2024 13:30:25 +0000 (16:30 +0300)]
tests : fix compile warnings for unreachable code (#9185)

ggml-ci

10 months ago ci : add VULKAN support to ggml-ci (#9055)
Georgi Gerganov [Mon, 26 Aug 2024 09:19:39 +0000 (12:19 +0300)]
ci : add VULKAN support to ggml-ci (#9055)

10 months ago server : update deps (#9183)
Georgi Gerganov [Mon, 26 Aug 2024 09:16:57 +0000 (12:16 +0300)]
server : update deps (#9183)

10 months ago metal : gemma2 flash attention support (#9159)
slaren [Mon, 26 Aug 2024 09:08:59 +0000 (11:08 +0200)]
metal : gemma2 flash attention support (#9159)

10 months ago ggml-ci : try to improve build time (#9160)
slaren [Mon, 26 Aug 2024 09:03:30 +0000 (11:03 +0200)]
ggml-ci : try to improve build time (#9160)

10 months ago llama : fix time complexity of string replacement (#9163)
Justine Tunney [Mon, 26 Aug 2024 06:09:53 +0000 (23:09 -0700)]
llama : fix time complexity of string replacement (#9163)

This change fixes a bug where replacing text in a very long string could
cause llama.cpp to hang indefinitely. This is because the algorithm used
was quadratic, due to memmove() when s.replace() is called in a loop. It
seems most search results and LLM responses actually provide the O(n**2)
algorithm, which is a great tragedy. Using a builder string fixes things.
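
To make the complexity difference concrete, here is a sketch of both approaches with hypothetical helper names (the exact llama.cpp function differs): the slow version shifts the tail of the string on every replacement, while the builder version copies each byte once.

    #include <string>

    // O(n^2): every s.replace() memmoves the remainder of the string
    static void replace_all_slow(std::string & s, const std::string & from, const std::string & to) {
        for (size_t pos = 0; (pos = s.find(from, pos)) != std::string::npos; pos += to.size()) {
            s.replace(pos, from.size(), to);
        }
    }

    // O(n): append into a builder string, then move it back
    static void replace_all_fast(std::string & s, const std::string & from, const std::string & to) {
        if (from.empty()) return;
        std::string builder;
        builder.reserve(s.size());
        size_t last = 0;
        for (size_t pos = 0; (pos = s.find(from, last)) != std::string::npos; last = pos + from.size()) {
            builder.append(s, last, pos - last);
            builder.append(to);
        }
        builder.append(s, last, std::string::npos);
        s = std::move(builder);
    }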

10 months ago common : fix --n-gpu-layers-draft argument not being found (#9175)
Herman Semenov [Sun, 25 Aug 2024 22:54:37 +0000 (22:54 +0000)]
common : fix --n-gpu-layers-draft argument not being found (#9175)

10 months ago CUDA: fix Gemma 2 numerical issues for FA (#9166)
Johannes Gäßler [Sun, 25 Aug 2024 20:11:48 +0000 (22:11 +0200)]
CUDA: fix Gemma 2 numerical issues for FA (#9166)

10 months ago CPU/CUDA: Gemma 2 FlashAttention support (#8542)
Johannes Gäßler [Sat, 24 Aug 2024 19:34:59 +0000 (21:34 +0200)]
CPU/CUDA: Gemma 2 FlashAttention support (#8542)

* CPU/CUDA: Gemma 2 FlashAttention support

* apply logit_softcap to scale in kernel

* disable logit softcapping tests on Metal

* remove metal check

10 months ago quantize : fix typo in usage help of `quantize.cpp` (#9145)
João Dinis Ferreira [Sat, 24 Aug 2024 06:22:45 +0000 (07:22 +0100)]
quantize : fix typo in usage help of `quantize.cpp` (#9145)

10 months ago lora : fix llama conversion script with ROPE_FREQS (#9117)
Xuan Son Nguyen [Fri, 23 Aug 2024 10:58:53 +0000 (12:58 +0200)]
lora : fix llama conversion script with ROPE_FREQS (#9117)

10 months ago llama : use F32 precision in GLM4 attention and no FA (#9130)
piDack [Fri, 23 Aug 2024 07:27:17 +0000 (15:27 +0800)]
llama : use F32 precision in GLM4 attention and no FA (#9130)

10 months ago [SYCL] Add a space to suppress a cmake warning (#9133)
Akarshan Biswas [Thu, 22 Aug 2024 14:09:47 +0000 (19:39 +0530)]
[SYCL] Add a space to suppress a cmake warning (#9133)

10 months ago [SYCL] Add oneDNN primitive support (#9091)
luoyu-intel [Thu, 22 Aug 2024 04:50:10 +0000 (12:50 +0800)]
[SYCL] Add oneDNN primitive support (#9091)

* add onednn

* add sycl_f16

* add dnnl stream

* add engine map

* use dnnl for intel only

* use fp16fp16fp16

* update doc

10 months ago llama : simplify Mamba with advanced batch splits (#8526)
compilade [Wed, 21 Aug 2024 21:58:11 +0000 (17:58 -0400)]
llama : simplify Mamba with advanced batch splits (#8526)

* llama : advanced batch splits

This includes equal-sequence-length batch splits which are useful
to simplify recurrent model operators.

* llama : always make recurrent state slots contiguous

* ggml : simplify mamba operators

* llama : fix integer signedness mixing

* llama : logits_all has priority over batch->logits

Otherwise, the server embeddings tests failed.
This was likely an existing problem but was only detected here
because of an additional assertion.

* llama : apply suggestions

Co-authored-by: Georgi Gerganov <redacted>
* llama : fix t5 segfault

* llama : fix Mamba session save and restore

* llama : minor cosmetic changes

* llama : rename llama_reorder_outputs to llama_output_reorder

Also move it closer to llama_output_reserve.

* llama : fix pooled embeddings when using batches with equal_seqs

* minor : add struct members for clarity

ggml-ci

* llama : fix T5 segfault again

* llama : fix Mamba pooled embeddings with multiple sequences

Until the pooled embeddings are refactored to allow splitting
across ubatches for causal embeddings,
recurrent models can only process a single sequence per ubatch
when calculating pooled embeddings.

* llama : add llama_model_is_recurrent to simplify figuring that out

This will make it easier to more cleanly support RWKV-v6 and Mamba-2.

* llama : fix simple splits when the batch contains embeddings

---------

Co-authored-by: Georgi Gerganov <redacted>
10 months ago server : support reading arguments from environment variables (#9105)
Xuan Son Nguyen [Wed, 21 Aug 2024 09:04:34 +0000 (11:04 +0200)]
server : support reading arguments from environment variables (#9105)

* server : support reading arguments from environment variables

* add -fa and -dt

* readme : specify non-arg env var

10 months ago llama : support for `falcon-mamba` architecture (#9074)
Younes Belkada [Wed, 21 Aug 2024 08:06:36 +0000 (12:06 +0400)]
llama : support for `falcon-mamba` architecture (#9074)

* feat: initial support for llama.cpp

* fix: lint

* refactor: better refactor

* Update src/llama.cpp

Co-authored-by: compilade <redacted>
* Update src/llama.cpp

Co-authored-by: compilade <redacted>
* fix: address comments

* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* fix: add more cleanup and harmonization

* fix: lint

* Update gguf-py/gguf/gguf_writer.py

Co-authored-by: compilade <redacted>
* fix: change name

* Apply suggestions from code review

Co-authored-by: compilade <redacted>
* add in operator

* fix: add `dt_b_c_rms` in `llm_load_print_meta`

* fix: correct printf format for bool

* fix: correct print format

* Update src/llama.cpp

Co-authored-by: compilade <redacted>
* llama : quantize more Mamba tensors

* llama : use f16 as the fallback of fallback quant types

---------

Co-authored-by: compilade <redacted>
10 months ago llava : zero-initialize clip_ctx structure fields with aggregate initialization 908)
fairydreaming [Wed, 21 Aug 2024 07:45:49 +0000 (09:45 +0200)]
llava : zero-initialize clip_ctx structure fields with aggregate initialization 908)

Co-authored-by: Stanisław Szymczyk <redacted>
10 months ago llama : std::move llm_bigram_bpe from work_queue (#9062)
Daniel Bevenius [Wed, 21 Aug 2024 07:32:58 +0000 (09:32 +0200)]
llama : std::move llm_bigram_bpe from work_queue (#9062)

* llama : std::move llm_bigram_bpe from work_queue

This commit updates the retrieval of llm_bigram_bpe objects from
work_queue.top() by using std::move.

The motivation for this is to avoid the copying of the std::string
`text` member of the llm_bigram_bpe struct.

* squash! llama : std::move llm_bigram_bpe from work_queue

Introduced a MovablePriorityQueue class to allow moving elements
out of the priority queue for llm_bigram_bpe.

* squash! llama : std::move llm_bigram_bpe from work_queue

Rename MovablePriorityQueue to lama_priority_queue.

* squash! llama : std::move llm_bigram_bpe from work_queue

Rename lama_priority_queue -> llama_priority_queue.
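
For context: std::priority_queue only exposes a const top(), which forces a copy on pop. A subclass can reach the protected underlying container `c` and move the element out instead; a sketch of that shape (the llama.cpp class may differ in details):

    #include <algorithm>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    template <typename T, typename Container = std::vector<T>, typename Compare = std::less<T>>
    struct movable_priority_queue : std::priority_queue<T, Container, Compare> {
        // pop the top element by move instead of copy (avoids copying e.g. std::string members)
        T pop_move() {
            std::pop_heap(this->c.begin(), this->c.end(), this->comp);
            T item = std::move(this->c.back());
            this->c.pop_back();
            return item;
        }
    };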

10 months ago llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model. (#8984)
Changyeon Kim [Tue, 20 Aug 2024 19:00:00 +0000 (04:00 +0900)]
llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model. (#8984)

* llava: Add ACC OP for GPU acceleration to the Vulkan backend in the LLAVA CLIP model.

- The CLIP model now prioritizes the Vulkan backend over the CPU when Vulkan is available.
- A GGML_OP_ACC shader has been added.
- The encoding performance of the CLIP model improved from 4.2s on the CPU to 0.9s on the GPU.

Signed-off-by: Changyeon Kim <redacted>
* fix-up coding style.

Signed-off-by: Changyeon Kim <redacted>
* Fix-up the missing initial parameter to resolve the compilation warning.

Signed-off-by: Changyeon Kim <redacted>
* [fix] Add missing parameters.

Signed-off-by: Changyeon Kim <redacted>
* [fix] Use nb1 and nb2 for dst.

Signed-off-by: Changyeon Kim <redacted>
* Fix check results ggml_acc call

---------

Signed-off-by: Changyeon Kim <redacted>
Co-authored-by: 0cc4m <redacted>
10 months ago [SYCL] fallback mmvq (#9088)
Meng, Hengyu [Tue, 20 Aug 2024 15:50:17 +0000 (23:50 +0800)]
[SYCL] fallback mmvq (#9088)

* fallback mmvq to mul_mat

* mmvq in cuda path

* Update ggml/src/ggml-sycl.cpp

Co-authored-by: Alberto Cabrera Pérez <redacted>
---------

Co-authored-by: Alberto Cabrera Pérez <redacted>
10 months ago [SYCL] Fix SYCL `im2col` and `convert` Overflow with Large Dims (#9052)
zhentaoyu [Tue, 20 Aug 2024 15:06:51 +0000 (23:06 +0800)]
[SYCL] Fix SYCL `im2col` and `convert` Overflow with Large Dims (#9052)

* sycl: fix im2col overflow and sync with cuda

Signed-off-by: zhentaoyu <redacted>
* sycl: fix convert overflow

Signed-off-by: zhentaoyu <redacted>
* sycl: fix convert and dequantize

Signed-off-by: zhentaoyu <redacted>
* sycl: fix ib in dmmv

Signed-off-by: zhentaoyu <redacted>
* sycl: refine convert

Signed-off-by: zhentaoyu <redacted>
* sycl: move downsample global_range into common

Signed-off-by: zhentaoyu <redacted>
* test: add im2col and convert test cases

Signed-off-by: zhentaoyu <redacted>
* test: make new cases only in sycl

Signed-off-by: zhentaoyu <redacted>
* test: comment new test_cases for only local testing

Signed-off-by: zhentaoyu <redacted>
---------

Signed-off-by: zhentaoyu <redacted>
10 months ago tests : add missing comma in grammar integration tests (#9099)
fairydreaming [Tue, 20 Aug 2024 09:09:55 +0000 (11:09 +0200)]
tests : add missing comma in grammar integration tests (#9099)

Co-authored-by: Stanisław Szymczyk <redacted>
10 months ago cann: add doc for cann backend (#8867)
wangshuai09 [Mon, 19 Aug 2024 08:46:38 +0000 (16:46 +0800)]
cann: add doc for cann backend (#8867)

Co-authored-by: xuedinge233 <redacted>
Co-authored-by: hipudding <redacted>
10 months ago rpc : print error message when failed to connect endpoint (#9042)
Radoslav Gerganov [Mon, 19 Aug 2024 07:11:45 +0000 (10:11 +0300)]
rpc : print error message when failed to connect endpoint (#9042)

10 months ago rpc : prevent crashes on invalid input (#9040)
Radoslav Gerganov [Mon, 19 Aug 2024 07:10:21 +0000 (10:10 +0300)]
rpc : prevent crashes on invalid input (#9040)

Add more checks to prevent the RPC server from crashing if invalid input
is received from the client
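
The usual shape of such hardening is to validate every length and offset received over the wire before using it; an illustrative sketch, not the actual rpc-server code:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Read a length-prefixed blob out of a client-supplied buffer,
    // rejecting anything that does not fit instead of crashing later.
    static bool read_blob(const uint8_t * buf, size_t buf_size, size_t & offset,
                          const uint8_t ** data, uint64_t * size) {
        if (offset > buf_size || buf_size - offset < sizeof(uint64_t)) {
            return false; // truncated length header
        }
        memcpy(size, buf + offset, sizeof(uint64_t));
        offset += sizeof(uint64_t);
        if (*size > buf_size - offset) {
            return false; // claimed size exceeds what the client actually sent
        }
        *data   = buf + offset;
        offset += (size_t) *size;
        return true;
    }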

10 months ago flake.lock: Update (#9068)
Georgi Gerganov [Sun, 18 Aug 2024 14:43:32 +0000 (17:43 +0300)]
flake.lock: Update (#9068)

10 months ago tests : add integration test for lora adapters (#8957)
ltoniazzi [Sun, 18 Aug 2024 09:58:04 +0000 (10:58 +0100)]
tests : add integration test for lora adapters (#8957)

* Add printing to check weights match torch version

* minor code style changes

---------

Co-authored-by: Xuan Son Nguyen <redacted>
10 months ago Fix incorrect use of ctx_split for bias tensors (#9063)
Yoshi Suhara [Sat, 17 Aug 2024 13:34:21 +0000 (06:34 -0700)]
Fix incorrect use of ctx_split for bias tensors (#9063)

10 months ago server : refactor middleware and /health endpoint (#9056)
Xuan Son Nguyen [Fri, 16 Aug 2024 15:19:05 +0000 (17:19 +0200)]
server : refactor middleware and /health endpoint (#9056)

* server : refactor middleware and /health endpoint

* move "fail_on_no_slot" to /slots

* Update examples/server/server.cpp

Co-authored-by: Georgi Gerganov <redacted>
* fix server tests

* fix CI

* update server docs

---------

Co-authored-by: Georgi Gerganov <redacted>
10 months ago llava : support MiniCPM-V-2.6 (#8967)
tc-mb [Fri, 16 Aug 2024 13:34:41 +0000 (21:34 +0800)]
llava : support MiniCPM-V-2.6 (#8967)

* init

* rename

* add run android for termux in readme

* add android readme

* add instructions in readme

* change name in readme

* Update README.md

* fixed line

* add result in readme

* random pos_embed

* add positions index

* change for ollama

* change for ollama

* better pos_embed in clip

* support ollama

* update cmakelist

* update cmakelist

* rename wrapper

* clear code

* replace and organize code

* add link

* sync master

* fix warnings

* fix warnings

* fix bug in bicubic resize when the image needs to be resized smaller

* receive review comments and modify

* receive review comments and modify

* put all code into llava dir

* fix quality problem in pr code

* change n_layer

* add space in "-1"

* imitate reshape bug of python code

* fix bug in clip

* fix issues for merging

* fix llama-minicpmv-cli in cmake file

* change pr readme

* fix code review

* remove the line-33 directory in the /cmakelists.txt (not in example, in the main dir)

* fix cmakefile

* add warn

* fix KEY_HAS_MINICPMV_PROJ

* remove load_image_size into clip_ctx

* remove the extern "C", MINICPMV_API

* fix uhd code for review comment

* delete minicpmv-wrapper in pr

* remove uhd_image_embed

* Modify 2 notes

* support minicpmv2.6

* modify convert script of minicpmv

* modify convert

* modify convert

* add readme

* add resampler of v2.6

* modify clip

* modify readme

* fix type-check

* fix type-check

* fix type-check

* fix type-check

* modify convert script and readme

* fix convert script and readme

* fix convert

* fix num in convert

* fix type-check

---------

Co-authored-by: Hongji Zhu <redacted>
Co-authored-by: harvestingmoon <redacted>
10 months ago py : fix wrong input type for raw_dtype in ggml to gguf scripts (#8928)
Farbod Bijary [Fri, 16 Aug 2024 10:36:30 +0000 (14:06 +0330)]
py : fix wrong input type for raw_dtype in ggml to gguf scripts (#8928)

Co-authored-by: farbod <redacted>
10 months ago Fix inference example lacking required parameters (#9035)
Aisuko [Fri, 16 Aug 2024 09:08:59 +0000 (19:08 +1000)]
Fix inference example lacking required parameters (#9035)

Signed-off-by: Aisuko <redacted>
10 months ago gguf-py : bump version from 0.9.1 to 0.10.0 (#9051)
compilade [Fri, 16 Aug 2024 06:36:11 +0000 (02:36 -0400)]
gguf-py : bump version from 0.9.1 to 0.10.0 (#9051)

10 months ago llama : add EXAONE model support (#9025)
Minsoo Cheong [Fri, 16 Aug 2024 06:35:18 +0000 (15:35 +0900)]
llama : add EXAONE model support (#9025)

* add exaone model support

* add chat template

* fix whitespace

Co-authored-by: Georgi Gerganov <redacted>
* add ftype

* add exaone pre-tokenizer in `llama-vocab.cpp`

Co-Authored-By: compilade <redacted>
* fix lint

Co-Authored-By: compilade <redacted>
* add `EXAONE` to supported models in `README.md`

* fix space

Co-authored-by: compilade <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: compilade <redacted>
Co-authored-by: compilade <redacted>
10 months ago common : add support for cpu_get_num_physical_cores() on Windows (#8771)
Liu Jia [Fri, 16 Aug 2024 06:23:12 +0000 (14:23 +0800)]
common : add support for cpu_get_num_physical_cores() on Windows (#8771)

* Add support for cpu_get_num_physical_cores() on Windows

* fix build bug on msys2-clang64 and ucrt64

* avoid adding new function

* add new macros to avoid windows+mingw64

* Add error checking to return default value
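
One common way to count physical cores on Windows is GetLogicalProcessorInformationEx with the RelationProcessorCore relationship, where each returned record corresponds to one physical core; a sketch of that approach (the merged code may differ):

    #include <windows.h>
    #include <vector>

    static int cpu_get_num_physical_cores_win(void) {
        DWORD len = 0;
        GetLogicalProcessorInformationEx(RelationProcessorCore, NULL, &len); // query size
        std::vector<char> buf(len);
        if (!GetLogicalProcessorInformationEx(RelationProcessorCore,
                (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX) buf.data(), &len)) {
            return 0; // let the caller fall back to a default
        }
        int cores = 0;
        for (DWORD off = 0; off < len; ) {
            auto * info = (PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX) (buf.data() + off);
            cores += 1;          // one record per physical core
            off   += info->Size; // records are variable-sized
        }
        return cores;
    }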

10 months ago Add Nemotron/Minitron GGUF Conversion & Inference Support (#8922)
Yoshi Suhara [Fri, 16 Aug 2024 02:23:33 +0000 (19:23 -0700)]
Add Nemotron/Minitron GGUF Conversion & Inference Support (#8922)

* Add nemotron GGUF conversion & inference support

* Fix formatting issues

* Remove unnecessary write_tensors()

* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* Update src/llama.cpp

Co-authored-by: compilade <redacted>
* Address comments by @compilade

* Replace ggml_mul_mat()->llm_build_lora_mm()

* Remove mutable variable

* Use  for bias tensors

* Cover corner case for rope_scaling not in config.json

---------

Co-authored-by: compilade <redacted>
10 months ago ggml : dynamic ggml_sched_max_splits based on graph_size (#9047)
Nico Bosshard [Fri, 16 Aug 2024 02:22:55 +0000 (04:22 +0200)]
ggml : dynamic ggml_sched_max_splits based on graph_size (#9047)

* ggml : Dynamic ggml_sched_max_splits based on graph_size

* Fixed and readded debug code for causes

10 months ago retrieval : fix memory leak in retrieval query handling (#8955)
gtygo [Thu, 15 Aug 2024 07:40:12 +0000 (15:40 +0800)]
retrieval : fix memory leak in retrieval query handling (#8955)

* retrieval

* Reuse querybatch to reduce frequent memory allocation

* delete unused white space

10 months ago server : fix duplicated n_predict key in the generation_settings (#8994)
Riceball LEE [Thu, 15 Aug 2024 07:28:05 +0000 (15:28 +0800)]
server : fix duplicated n_predict key in the generation_settings (#8994)

10 months ago common : remove duplicate function llama_should_add_bos_token (#8778)
Zhenwei Jin [Thu, 15 Aug 2024 07:23:23 +0000 (15:23 +0800)]
common : remove duplicate function llama_should_add_bos_token (#8778)

10 months ago llama : add pre-tokenizer regexes for BLOOM and gpt3-finnish (#8850)
Esko Toivonen [Thu, 15 Aug 2024 07:17:12 +0000 (10:17 +0300)]
llama : add pre-tokenizer regexes for BLOOM and gpt3-finnish (#8850)

10 months ago ci : disable bench workflow (#9010)
Georgi Gerganov [Thu, 15 Aug 2024 07:11:11 +0000 (10:11 +0300)]
ci : disable bench workflow (#9010)

10 months ago server : init stop and error fields of the result struct (#9026)
Jiří Podivín [Thu, 15 Aug 2024 06:21:57 +0000 (08:21 +0200)]
server : init stop and error fields of the result struct (#9026)

Signed-off-by: Jiri Podivin <redacted>
10 months ago Vulkan Optimizations and Fixes (#8959)
0cc4m [Wed, 14 Aug 2024 16:32:53 +0000 (18:32 +0200)]
Vulkan Optimizations and Fixes (#8959)

* Optimize Vulkan REPEAT performance

* Use Vulkan GLSL fused multiply-add instruction where possible

* Add GGML_VULKAN_PERF option to output performance data per operator

* Rework and fix Vulkan descriptor set and descriptor pool handling

* Fix float32 concat f16 shader validation error

* Add Vulkan GROUP_NORM eps parameter

* Fix validation error with transfer queue memory barrier flags

* Remove trailing whitespaces

10 months ago server : fix segfault on long system prompt (#8987)
compilade [Wed, 14 Aug 2024 06:51:02 +0000 (02:51 -0400)]
server : fix segfault on long system prompt (#8987)

* server : fix segfault on long system prompt

* server : fix parallel generation with very small batch sizes

* server : fix typo in comment

10 months ago cmake : remove unused option GGML_CURL (#9011)
Georgi Gerganov [Wed, 14 Aug 2024 06:14:49 +0000 (09:14 +0300)]
cmake : remove unused option GGML_CURL (#9011)

10 months ago ggml : move rope type enum to ggml.h (#8949)
Daniel Bevenius [Tue, 13 Aug 2024 19:13:15 +0000 (21:13 +0200)]
ggml : move rope type enum to ggml.h (#8949)

* ggml : move rope type enum to ggml.h

This commit moves the `llama_rope_type` enum from `llama.h` to
`ggml.h` and changes its name to `ggml_rope_type`.

The motivation for this change is to address the TODO in `llama.h` and
use the enum in ggml.

Note: This commit does not change the `mode` parameter to be of type
`enum ggml_rope_type`. The name `mode` and its usage suggest that it
might be more generic and possibly used as a bit field for multiple
flags. Further investigation/discussion may be needed to determine
if `mode` should be restricted to RoPE types.

* squash! ggml : move rope type enum to ggml.h

This commit removes GGML_ROPE_TYPE_NONE and GGML_ROPE_TYPE_GLM from
ggml.h, and adds them back to the llama_rope_type enum.

I've kept the assert for GGML_ROPE_TYPE_GLM as I'm not sure if it is
safe to remove it yet.

* squash! ggml : move rope type enum to ggml.h

This commit removes the enum ggml_rope_type from ggml.h and replaces it
with a define (GGML_ROPE_TYPE_NEOX). This define is used in the code to
check if the mode is set to GPT-NeoX. Also the enum llama_rope_type has
been updated to reflect this change.

* squash! ggml : move rope type enum to ggml.h

This commit contains a suggestion to enable the GGML_ROPE_TYPE_NEOX
macro/define to be passed to the shader compiler.

* squash! ggml : move rope type enum to ggml.h

This commit fixes the editorconfig-checker warnings.

* squash! ggml : move rope type enum to ggml.h

Update comment for ggml_rope function.

* Revert "squash! ggml : move rope type enum to ggml.h"

This reverts commit 6261222bd0dc0efd51f0fb0435ad3f16a5b52fd6.

* squash! ggml : move rope type enum to ggml.h

Add GGML_ROPE_TYPE_NEOX to rope_common.comp.

* remove extra line

---------

Co-authored-by: slaren <redacted>
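
The end state described above replaces the enum with a bit flag in the rope `mode` parameter; roughly:

    // in ggml.h
    #define GGML_ROPE_TYPE_NEOX 2

    // call sites and shaders test the bit instead of comparing enum values:
    static bool rope_is_neox(int mode) {
        return (mode & GGML_ROPE_TYPE_NEOX) != 0;
    }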
10 months ago export-lora : throw error if lora is quantized (#9002)
Xuan Son Nguyen [Tue, 13 Aug 2024 09:41:14 +0000 (11:41 +0200)]
export-lora : throw error if lora is quantized (#9002)

10 months ago ci : fix github workflow vulnerable to script injection (#9008)
Diogo Teles Sant'Anna [Mon, 12 Aug 2024 16:28:23 +0000 (13:28 -0300)]
ci : fix github workflow vulnerable to script injection (#9008)

Signed-off-by: Diogo Teles Sant'Anna <redacted>
10 months ago ci : enable RPC in all of the released builds (#9006)
Radoslav Gerganov [Mon, 12 Aug 2024 16:17:03 +0000 (19:17 +0300)]
ci : enable RPC in all of the released builds (#9006)

ref: #8912

10 months ago llama : model-based max number of graph nodes calculation (#8970)
Nico Bosshard [Mon, 12 Aug 2024 15:13:59 +0000 (17:13 +0200)]
llama : model-based max number of graph nodes calculation (#8970)

* llama : model-based max number of graph nodes calculation

* Update src/llama.cpp

---------

Co-authored-by: slaren <redacted>
10 months ago docs: introduce gpustack and gguf-parser (#8873)
Frank Mai [Mon, 12 Aug 2024 12:45:50 +0000 (20:45 +0800)]
docs: introduce gpustack and gguf-parser (#8873)

* readme: introduce gpustack

GPUStack is an open-source GPU cluster manager for running large
language models, which uses llama.cpp as the backend.

Signed-off-by: thxCode <redacted>
* readme: introduce gguf-parser

GGUF Parser is a tool to review/check the GGUF file and estimate the
memory usage without downloading the whole model.

Signed-off-by: thxCode <redacted>
---------

Signed-off-by: thxCode <redacted>
10 months ago grammar-parser : fix possible null-deref (#9004)
DavidKorczynski [Mon, 12 Aug 2024 12:36:41 +0000 (13:36 +0100)]
grammar-parser : fix possible null-deref (#9004)

Fixes: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=70680
Signed-off-by: David Korczynski <redacted>
10 months ago ggml: fix div-by-zero (#9003)
DavidKorczynski [Mon, 12 Aug 2024 12:21:41 +0000 (13:21 +0100)]
ggml: fix div-by-zero (#9003)

Fixes: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=70724
In order to access the above bug you need to login using one of the
emails in
https://github.com/google/oss-fuzz/blob/master/projects/llamacpp/project.yaml#L3-L5

Signed-off-by: David Korczynski <redacted>
10 months ago Fix a spelling mistake (#9001)
Liu Jia [Mon, 12 Aug 2024 09:46:03 +0000 (17:46 +0800)]
Fix a spelling mistake (#9001)

10 months ago py : fix requirements check '==' -> '~=' (#8982)
Georgi Gerganov [Mon, 12 Aug 2024 08:02:01 +0000 (11:02 +0300)]
py : fix requirements check '==' -> '~=' (#8982)

* py : fix requirements check '==' -> '~='

* cont : fix the fix

* ci : run on all requirements.txt
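
For reference: '~=' is pip's compatible-release operator (PEP 440), so a hypothetical foo~=1.2.3 accepts any foo version >= 1.2.3 but < 1.3, while '==' pins one exact version and rejects compatible patch updates.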

10 months ago server : handle models with missing EOS token (#8997)
Georgi Gerganov [Mon, 12 Aug 2024 07:21:50 +0000 (10:21 +0300)]
server : handle models with missing EOS token (#8997)

ggml-ci

10 months ago gguf-py : Numpy dequantization for most types (#8939)
compilade [Sun, 11 Aug 2024 18:45:41 +0000 (14:45 -0400)]
gguf-py : Numpy dequantization for most types (#8939)

* gguf-py : Numpy dequantization for most types

* gguf-py : Numpy dequantization for grid-based i-quants

10 months ago flake.lock: Update (#8979)
Georgi Gerganov [Sun, 11 Aug 2024 13:58:58 +0000 (16:58 +0300)]
flake.lock: Update (#8979)

10 months ago update guide (#8909)
Neo Zhang [Sun, 11 Aug 2024 08:37:43 +0000 (16:37 +0800)]
update guide (#8909)

Co-authored-by: Neo Zhang <>