git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
2 years ago llama : pre-allocate input tensors in a separate buffer (#5100)
slaren [Wed, 24 Jan 2024 11:48:14 +0000 (12:48 +0100)]
llama : pre-allocate input tensors in a separate buffer (#5100)

2 years ago metal : disable support for MUL_MAT F32 x F16
Georgi Gerganov [Tue, 23 Jan 2024 13:50:56 +0000 (15:50 +0200)]
metal : disable support for MUL_MAT F32 x F16

2 years ago Additional KL-divergence statistics (#5081)
Kawrakow [Tue, 23 Jan 2024 13:17:20 +0000 (15:17 +0200)]
Additional KL-divergence statistics (#5081)

* perplexity: add top-token probability

* perplexity: add additional KL-divergence statistics

* perplexity: a better organized KL-divergence statistics output

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago CUDA: more info when no device code (#5088)
Johannes Gäßler [Tue, 23 Jan 2024 12:31:56 +0000 (13:31 +0100)]
CUDA: more info when no device code (#5088)

2 years ago minor : clean-up some warnings and style (#5094)
Georgi Gerganov [Tue, 23 Jan 2024 12:12:57 +0000 (14:12 +0200)]
minor : clean-up some warnings and style (#5094)

* minor : clean-up some warnings and style

ggml-ci

* ggml : add comment

2 years ago devops : add intel oneapi dockerfile (#5068)
Xuan Son Nguyen [Tue, 23 Jan 2024 07:11:39 +0000 (08:11 +0100)]
devops : add intel oneapi dockerfile (#5068)

Co-authored-by: Xuan Son Nguyen <redacted>
2 years ago llama.vim : added api key support (#5090)
Michael Coppola [Tue, 23 Jan 2024 06:51:27 +0000 (01:51 -0500)]
llama.vim : added api key support (#5090)

Co-authored-by: Michael Coppola <redacted>
2 years ago llama : fix not enough space in buffer with Qwen (#5086)
slaren [Mon, 22 Jan 2024 22:42:41 +0000 (23:42 +0100)]
llama : fix not enough space in buffer with Qwen (#5086)

2 years ago KL-divergence (#5076)
Kawrakow [Mon, 22 Jan 2024 14:10:14 +0000 (16:10 +0200)]
KL-divergence (#5076)

* kl-divergence: be able to save all logits to a file

* Add ability to compute KL-divergence

---------

Co-authored-by: Iwan Kawrakow <redacted>
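
For orientation, the statistic added in this entry compares the token
distribution of a reference (full-precision) model against that of a
quantized one, using logits saved to a file. A minimal sketch of the
per-token computation, assuming raw logit vectors for both models are
at hand (the function name and signature are hypothetical, not the
actual tool's API):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // KL(p || q) for one token position, from the raw logits of the
    // reference and quantized models, via a numerically stable log-softmax.
    static double kl_divergence(const std::vector<float> & logits_p,
                                const std::vector<float> & logits_q) {
        auto log_softmax = [](const std::vector<float> & z) {
            const float zmax = *std::max_element(z.begin(), z.end());
            double sum = 0.0;
            for (float v : z) sum += std::exp(double(v - zmax));
            const double log_z = std::log(sum) + zmax;
            std::vector<double> out(z.size());
            for (size_t i = 0; i < z.size(); ++i) out[i] = z[i] - log_z;
            return out;
        };
        const auto lp = log_softmax(logits_p);
        const auto lq = log_softmax(logits_q);
        double kl = 0.0;
        for (size_t i = 0; i < lp.size(); ++i) {
            kl += std::exp(lp[i]) * (lp[i] - lq[i]); // sum_i p_i (log p_i - log q_i)
        }
        return kl;
    }

Averaged over a test set, this measures how far the quantized
distribution drifts from the full-precision one, which is more direct
than comparing perplexities alone.
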
2 years ago ggml : parallelize FP32 conversion when using BLAS (#5045)
Reinforce-II [Mon, 22 Jan 2024 13:15:08 +0000 (21:15 +0800)]
ggml : parallelize FP32 conversion when using BLAS (#5045)

* allow the GGML_TASK_INIT phase to be run by multiple threads

* multithreaded dequantize in mul_mat when using a BLAS library (see the sketch after this entry)

* minor fixes

* update outdated comment

* fix coding style

* simplify code

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
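
As a rough illustration of the dequantize bullet above: a BLAS sgemm
needs FP32 inputs, so a quantized weight matrix is first converted to
FP32, and that conversion is embarrassingly parallel across rows. A
standalone sketch of the row-splitting idea (the actual change runs
inside ggml's GGML_TASK_INIT phase on ggml's own thread pool; all
names below are illustrative):

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Dequantize a matrix to FP32 in parallel before handing it to BLAS.
    // dequantize_row stands in for the per-row conversion (e.g. Q4_0 -> F32).
    void dequantize_parallel(const void * src, float * dst,
                             int nrows, int ncols, int nthreads,
                             void (*dequantize_row)(const void *, float *, int, int)) {
        std::vector<std::thread> workers;
        const int chunk = (nrows + nthreads - 1) / nthreads;
        for (int t = 0; t < nthreads; ++t) {
            const int r0 = t * chunk;
            const int r1 = std::min(nrows, r0 + chunk);
            workers.emplace_back([=] {
                for (int r = r0; r < r1; ++r) {
                    dequantize_row(src, dst + (size_t) r * ncols, r, ncols);
                }
            });
        }
        for (auto & w : workers) w.join();
    }
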
2 years ago llava : MobileVLM support (#4954)
XiaotaoChen [Mon, 22 Jan 2024 13:09:35 +0000 (21:09 +0800)]
llava : MobileVLM support (#4954)

* MobileVLM native implementation

* delete the depthwise_conv_2d and permute_cpy related code, replace the two with existing functions, optimize the ldp definition, and support the LLAMA_PERF option for CMake

* move android script to example/llava directory

* Fix the editor config checks

---------

Co-authored-by: Chenxiaotao03 <redacted>
2 years ago flake.nix: add a comment about flakes vs nix
Someone Serge [Sun, 21 Jan 2024 03:41:37 +0000 (03:41 +0000)]
flake.nix: add a comment about flakes vs nix

2 years ago nix: add a comment on the many nixpkgs-with-cuda instances
Someone Serge [Sun, 21 Jan 2024 03:29:38 +0000 (03:29 +0000)]
nix: add a comment on the many nixpkgs-with-cuda instances

2 years ago nix: add a comment about makeScope
Someone Serge [Sun, 21 Jan 2024 03:15:13 +0000 (03:15 +0000)]
nix: add a comment about makeScope

2 years ago nix: refactor the cleanSource rules
Someone Serge [Sat, 13 Jan 2024 17:45:01 +0000 (17:45 +0000)]
nix: refactor the cleanSource rules

2 years ago workflows: nix-ci: drop the redundant "paths" filter
Someone Serge [Sat, 13 Jan 2024 17:38:32 +0000 (17:38 +0000)]
workflows: nix-ci: drop the redundant "paths" filter

2 years ago workflows: nix-build-aarch64: rate limit
Someone Serge [Sat, 13 Jan 2024 17:16:54 +0000 (17:16 +0000)]
workflows: nix-build-aarch64: rate limit

2 years ago workflows: nix-ci: rebuild on flake.lock updates
Someone Serge [Sat, 13 Jan 2024 17:10:19 +0000 (17:10 +0000)]
workflows: nix-ci: rebuild on flake.lock updates

2 years ago imatrix : keep intermediate imatrix results (#5077)
Kawrakow [Mon, 22 Jan 2024 12:18:43 +0000 (14:18 +0200)]
imatrix : keep intermediate imatrix results (#5077)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago llama : support StableLM 2 1.6B (#5052)
compilade [Mon, 22 Jan 2024 11:21:52 +0000 (06:21 -0500)]
llama : support StableLM 2 1.6B (#5052)

* llama : support StableLM 2 1.6B

* convert : fix Qwen's set_vocab wrongly naming all special tokens [PAD{id}]

* convert : refactor Qwen's set_vocab to use it for StableLM 2 too

* nix : add tiktoken to llama-python-extra

* convert : use presence of tokenizer.json to determine StableLM tokenizer loader

It's a less arbitrary heuristic than the vocab size.

2 years ago finetune : print sample-start/include-sample-start (#5072)
Daniel Bevenius [Mon, 22 Jan 2024 11:11:01 +0000 (12:11 +0100)]
finetune : print sample-start/include-sample-start (#5072)

This commit adds `--sample-start` and `--include-sample-start` to the
output from the main function in finetune.cpp.

The motivation for this is that these are set by the user via the
command line, and it is useful to have their values printed out in case
one forgets to set them. Otherwise it is possible to go through the
whole training process before realizing that the values are not what
one expected.

Signed-off-by: Daniel Bevenius <redacted>
2 years ago llama : add Q3_K_XS (#5060)
Kawrakow [Mon, 22 Jan 2024 10:43:33 +0000 (12:43 +0200)]
llama : add Q3_K_XS (#5060)

* Add Q3_K_XS - intermediate size between Q2_K and Q3_K_S

* Q3_K_XS: quantize first 1/8 of ffn_down layers with Q4_K (see the sketch after this entry)

Together with an importance matrix, this brings the perplexity
of LLaMA-v2-70B below the perplexity of the former Q2_K,
with an 800 MB smaller quantized model size.

---------

Co-authored-by: Iwan Kawrakow <redacted>
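
The mixing trick in the second bullet comes down to a small per-layer
type selection: spend extra bits on the first eighth of the ffn_down
tensors and use the cheaper base type everywhere else. A hypothetical
sketch of that selection (illustrative only, not the actual llama.cpp
quantization code):

    // Pick the quantization type for the i-th ffn_down tensor.
    enum class qtype { Q3_K, Q4_K };

    qtype ffn_down_type(int i_layer, int n_layer) {
        if (i_layer < n_layer/8) {
            return qtype::Q4_K; // first 1/8 of layers: higher precision
        }
        return qtype::Q3_K;     // remainder: base type of the mix
    }
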
2 years ago ci : fix Windows CI by updating Intel SDE version (#5053)
bobqianic [Mon, 22 Jan 2024 08:55:05 +0000 (08:55 +0000)]
ci : fix Windows CI by updating Intel SDE version (#5053)

2 years ago llama : add more qwen2 models (#5071)
Shijie [Mon, 22 Jan 2024 07:33:19 +0000 (15:33 +0800)]
llama : add more qwen2 models (#5071)

2 years ago Revert LLAMA_NATIVE to OFF in flake.nix (#5066)
iSma [Sun, 21 Jan 2024 21:37:13 +0000 (22:37 +0100)]
Revert LLAMA_NATIVE to OFF in flake.nix (#5066)

2 years ago add safetensors support to convert-lora-to-ggml.py (#5062)
kuronekosaiko [Sun, 21 Jan 2024 16:28:14 +0000 (00:28 +0800)]
add safetensors support to convert-lora-to-ggml.py (#5062)

* add safetensors support to convert-lora-to-ggml.py

* Update convert-lora-to-ggml.py

Remove white space in line 69.

2 years ago add `#include <string>` to unicode.h (#5051)
bobqianic [Sun, 21 Jan 2024 15:17:35 +0000 (15:17 +0000)]
add `#include <string>` to unicode.h (#5051)

Co-authored-by: Jared Van Bortel <redacted>
2 years ago Add ability to evaluate multiple choice tasks (#5047)
Kawrakow [Sun, 21 Jan 2024 12:42:44 +0000 (14:42 +0200)]
Add ability to evaluate multiple choice tasks (#5047)

* TruthfulQA: 1st attempt, does not look like it is working

The same implementation can be used for HellaSwag as well,
so I converted a HellaSwag validation dataset to the binary
format used here and tested with that. The score is only
around 50, so something is not quite right.

* TruthfulQA: works but the result is bad

I know it works because if I convert the HellaSwag validation
data to the binary format used in the truthful_qa_score() function
I get the exact same result as from the hellaswag_score() function.
But I guess the questions are tricky, and the way I have done
the combination of question + answer is very likely not the best.
The TruthfulQA validation dataset contains 817 questions, with
a random-chance result of around 19%. With this version I get
29.1% for Mistral-7B and 55.2% for Mistral-7B-Instruct-v0.2.
The HF leaderboard results for these two models are
42.2% and 68.3%, respectively.

* TruthfulQA: fix random sample

* TruthfulQA: prepare tasks in parallel for large test datasets

* Rename truthful_qa to multiple_choice

* Make MSVC happy

I had forgotten that MSVC does not make constexprs available
inside a lambda.

---------

Co-authored-by: Iwan Kawrakow <redacted>
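
The scoring rule being debugged in this entry is simple at its core:
evaluate each candidate answer's tokens after the question, average
their log-probabilities, and pick the best-scoring answer. A minimal
sketch under the assumption that the per-token log-probs have already
been obtained from the model (names are hypothetical):

    #include <cstddef>
    #include <vector>

    // Pick the answer whose tokens have the highest mean log-probability
    // given the question prefix. answer_logprobs[i] holds the per-token
    // log-probs of candidate answer i.
    size_t pick_answer(const std::vector<std::vector<double>> & answer_logprobs) {
        size_t best       = 0;
        double best_score = -1e30;
        for (size_t i = 0; i < answer_logprobs.size(); ++i) {
            if (answer_logprobs[i].empty()) continue;
            double sum = 0.0;
            for (double lp : answer_logprobs[i]) sum += lp;
            const double score = sum / answer_logprobs[i].size(); // mean log-prob
            if (score > best_score) {
                best_score = score;
                best       = i;
            }
        }
        return best;
    }

As the commit log above notes, the hard part is not this rule but how
question and answer are combined before tokenization.
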
2 years ago Slightly faster imatrix (#5050)
Kawrakow [Sun, 21 Jan 2024 06:01:20 +0000 (08:01 +0200)]
Slightly faster imatrix (#5050)

* imatrix: speedup by avoiding unnecessary allocations and copies

* imatrix: add --no-ppl option to skip PPL calculations altogether

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago flake.lock: Update (#5054)
Georgi Gerganov [Sun, 21 Jan 2024 03:17:27 +0000 (05:17 +0200)]
flake.lock: Update (#5054)

Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/9b19f5e77dd906cb52dade0b7bd280339d2a1f3d' (2024-01-13)
  → 'github:NixOS/nixpkgs/bbe7d8f876fbbe7c959c90ba2ae2852220573261' (2024-01-19)

Co-authored-by: github-actions[bot] <redacted>
2 years ago convert : partially revert PR #4818 (#5041)
Jared Van Bortel [Sat, 20 Jan 2024 23:14:18 +0000 (18:14 -0500)]
convert : partially revert PR #4818 (#5041)

2 years ago perplexity : fix MSVC build after #5020 (#5043)
Jared Van Bortel [Sat, 20 Jan 2024 15:08:08 +0000 (10:08 -0500)]
perplexity : fix MSVC build after #5020 (#5043)

* perplexity : fix MSVC build after #5020

* try a different fix

2 years ago llama : run all KQV ops on the CPU with no KV offload (#5049)
slaren [Sat, 20 Jan 2024 15:05:49 +0000 (16:05 +0100)]
llama : run all KQV ops on the CPU with no KV offload (#5049)

ggml-ci

2 years ago cmake : add support for ccache (#5002)
Herman Semenov [Sat, 20 Jan 2024 08:11:31 +0000 (08:11 +0000)]
cmake : add support for ccache (#5002)

* Added ccache support to speed up recompilation

* cmake : option to disable ccache

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years ago Add a dart/flutter binding to README.md (#4882)
adel boussaken [Sat, 20 Jan 2024 08:05:43 +0000 (09:05 +0100)]
Add a dart/flutter binding to README.md (#4882)

2 years ago cuda : fix compile error in jetson platform (#4975)
Kylin [Sat, 20 Jan 2024 07:01:46 +0000 (15:01 +0800)]
cuda : fix compile error in jetson platform (#4975)

* cuda: fix compile error in jetson platform

* cuda: update comment in ggml-cuda.cu

* cuda: update ggml-cuda.cu comment

2 years ago finetune : fix ggml_allocr lifetimes (tmp workaround) (#5033)
Uzo Nweke [Fri, 19 Jan 2024 18:20:50 +0000 (13:20 -0500)]
finetune : fix ggml_allocr lifetimes (tmp workaround) (#5033)

* Fix issue with alloc causing max_compute_size to be calculated incorrectly

* remove ggml_allocr_free as suggested in issue #4791

2 years ago imatrix : add README.md
Georgi Gerganov [Fri, 19 Jan 2024 13:24:47 +0000 (15:24 +0200)]
imatrix : add README.md

2 years ago llama : support upcoming Qwen2 (#5037)
Shijie [Fri, 19 Jan 2024 11:53:13 +0000 (19:53 +0800)]
llama : support upcoming Qwen2 (#5037)

2 years ago py : fix flake8 lint
Georgi Gerganov [Fri, 19 Jan 2024 11:52:22 +0000 (13:52 +0200)]
py : fix flake8 lint

2 years ago winogrande: evaluate log-probs in parallel (#5036)
Kawrakow [Fri, 19 Jan 2024 09:39:11 +0000 (11:39 +0200)]
winogrande: evaluate log-probs in parallel (#5036)

This is a relatively minor performance tweak resulting in
~10% speedup on my system.

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago llama : add CodeShell support (#5016)
chiranko [Fri, 19 Jan 2024 09:07:27 +0000 (17:07 +0800)]
llama : add CodeShell support (#5016)

* llama: add codeshell support

* llama.cpp: fix codeshell with NeoX rope

Co-authored-by: Georgi Gerganov <redacted>
---------

Co-authored-by: Georgi Gerganov <redacted>
2 years ago perplexity: avoid unnecessary allocations and logit copies (#5035)
Kawrakow [Fri, 19 Jan 2024 09:02:39 +0000 (11:02 +0200)]
perplexity: avoid unnecessary allocations and logit copies (#5035)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago perplexity : faster Winogrande via batching (#5024)
Georgi Gerganov [Fri, 19 Jan 2024 08:45:06 +0000 (10:45 +0200)]
perplexity : faster Winogrande via batching (#5024)

* perplexity : faster Winogrande via batching

ggml-ci

* perplexity : remove unused function

* perplexity : only tokenize selected tasks for Winogrande

2 years ago llama : fix falcon arch for tied output embeddings (#4978)
John [Thu, 18 Jan 2024 22:12:15 +0000 (23:12 +0100)]
llama : fix falcon arch for tied output embeddings (#4978)

* falcon arch fix for tied output embeddings

* Update llama.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update llama.cpp

* Update llama.cpp

Co-authored-by: Georgi Gerganov <redacted>
* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years ago cmake : add ggml public headers (#5011)
Georgi Gerganov [Thu, 18 Jan 2024 21:36:07 +0000 (23:36 +0200)]
cmake : add ggml public headers (#5011)

2 years ago server : defer tasks when "slot unavailable" (#5018)
Xuan Son Nguyen [Thu, 18 Jan 2024 20:33:05 +0000 (21:33 +0100)]
server : defer tasks when "slot unavailable" (#5018)

* server: defer task when no slot is available

* remove unnecessary log

---------

Co-authored-by: Xuan Son Nguyen <redacted>
2 years ago llama : fix mlock with no-mmap with Metal (#5025)
slaren [Thu, 18 Jan 2024 20:12:15 +0000 (21:12 +0100)]
llama : fix mlock with no-mmap with Metal (#5025)

2 years ago imatrix : fix assert for src0 non-cont check
Georgi Gerganov [Thu, 18 Jan 2024 19:45:51 +0000 (21:45 +0200)]
imatrix : fix assert for src0 non-cont check

2 years ago perplexity : fix winogrande N tasks option
Georgi Gerganov [Thu, 18 Jan 2024 18:49:00 +0000 (20:49 +0200)]
perplexity : fix winogrande N tasks option

2 years ago scripts : add get-winogrande.sh
Georgi Gerganov [Thu, 18 Jan 2024 18:45:39 +0000 (20:45 +0200)]
scripts : add get-winogrande.sh

2 years ago convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#5019)
David Sommers [Thu, 18 Jan 2024 17:20:59 +0000 (12:20 -0500)]
convert.py : fix llama/llama2 conversion due to vocab_size=-1 (#5019)

PR #4818 (merged last week) reintroduced a config check for vocab_size that was addressed in PR #4258 (merged 2023-11-30).

Without the fix, llama2 models can't be converted. The error is:

`ValueError: The model's vocab size is set to -1 in params.json. Please update it manually. Maybe 32000?`

2 years ago HellaSwag: speed up by parallelizing log-prob evaluation (#5020)
Kawrakow [Thu, 18 Jan 2024 17:18:21 +0000 (19:18 +0200)]
HellaSwag: speed up by parallelizing log-prob evaluation (#5020)

For Mistral-7B and fp16, time on my system goes down from 536 seconds
to 423 seconds for the full evaluation dataset (10042 tasks).

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago perplexity : faster HellaSwag via batching (#5017)
Georgi Gerganov [Thu, 18 Jan 2024 13:33:01 +0000 (15:33 +0200)]
perplexity : faster HellaSwag via batching (#5017)

* perplexity : faster HellaSwag

ggml-ci

* perplexity : clean-up

ggml-ci

* perplexity : no need for decode_helper

ggml-ci

* perplexity : add comments

* perplexity : option to specify max batched tasks via `n_parallel`

* perplexity : remove HellaSwag restriction for n_batch

2 years ago Add Winogrande evaluation (#5015)
Kawrakow [Thu, 18 Jan 2024 11:46:27 +0000 (13:46 +0200)]
Add Winogrande evaluation (#5015)

* winogrande: simple implementation

It doesn't look like it is working - why?
For Mistral-7B it is barely better than
random chance (score ~60% for 1267 tasks), while I see
Mistral-7B scoring 78.4% on the HF leaderboard.
The 1-sigma statistical uncertainty for 1267 tasks is ~1.4,
so there is no way the difference is due to statistics.

* winogrande: somewhat better

The score for Mistral-7B is now 68.9 on the validation set of
winogrande_debiased. Still far from the reported 78.4, but
better than what I had before.

* winogrande: improving

Mistral-7B score is now 73.56.
Still not quite 78.4 but getting there.
We are also getting a lower score on HellaSwag
compared to the HF leaderboard, so I'm not expecting
we will get up to 78.4 anyway.

It looks like it is better to skip the choice word(s)
when evaluating the average log-likelihood. This kind of
makes sense because a more common word (in Winogrande this is
often a name) will have a higher probability without knowing
about the follow up context, and this will skew the log-likelihood
towards the more common word. We can only do this if the
choice words are not last in the sentence.

It also looks like it is better to skip the punctuation at the
end of the sentence, provided the choice words are not last.

* winogrande: add dataset instructions

---------

Co-authored-by: Iwan Kawrakow <redacted>
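
The heuristic described above, skipping the choice word (and trailing
punctuation) when averaging log-likelihoods, can be sketched directly;
this illustrates the described idea and is not the actual
perplexity.cpp implementation:

    #include <cstddef>
    #include <vector>

    // Average the per-token log-probabilities of a sentence ending while
    // skipping the tokens of the choice word (the half-open index range
    // [choice_begin, choice_end)), so a rare name does not drag down an
    // otherwise well-predicted continuation.
    double avg_logprob_skip_choice(const std::vector<double> & logprobs,
                                   size_t choice_begin, size_t choice_end) {
        double sum = 0.0;
        size_t n   = 0;
        for (size_t i = 0; i < logprobs.size(); ++i) {
            if (i >= choice_begin && i < choice_end) {
                continue; // skip the choice word's tokens
            }
            sum += logprobs[i];
            ++n;
        }
        return n > 0 ? sum/n : 0.0;
    }
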
2 years ago scripts : add helper script to get hellaswag data in txt format
Georgi Gerganov [Thu, 18 Jan 2024 09:44:49 +0000 (11:44 +0200)]
scripts : add helper script to get hellaswag data in txt format

2 years ago metal : fix memory leak, dangling pointer and unused autorel (#5007)
Paul Tsochantaris [Thu, 18 Jan 2024 08:47:24 +0000 (08:47 +0000)]
metal : fix memory leak, dangling pointer and unused autorel (#5007)

* Metal memory: Small memory leak on init, dangling pointer, and unused autorelease pool in graph compute

* SPM header potential fix

* Reverting symlinks

2 years ago sync : ggml
Georgi Gerganov [Wed, 17 Jan 2024 18:54:50 +0000 (20:54 +0200)]
sync : ggml

2 years ago ggml : add IQ2 to test-backend-ops + refactoring (#4990)
Georgi Gerganov [Wed, 17 Jan 2024 16:54:56 +0000 (18:54 +0200)]
ggml : add IQ2 to test-backend-ops + refactoring (#4990)

* ggml : add IQ2 to test-backend-ops + refactoring

ggml-ci

* cuda : update supports_op for IQ2

ggml-ci

* ci : enable LLAMA_CUBLAS=1 for CUDA nodes

ggml-ci

* cuda : fix out-of-bounds-access in `mul_mat_vec_q`

ggml-ci

* tests : avoid creating RNGs for each Q tensor

ggml-ci

* tests : avoid creating RNGs for each tensor

ggml-ci

2 years ago imatrix : offload to GPU support (#4957)
Georgi Gerganov [Wed, 17 Jan 2024 16:46:30 +0000 (18:46 +0200)]
imatrix : offload to GPU support (#4957)

* backend : add eval callback

ggml-ci

* backend : group nodes in a single compute when the user doesn't need them

* backend : clean-up the implementation

ggml-ci

* simple : do not perform tensor data copy if not needed

* simple : fix

* imatrix : offload to GPU support

* imatrix : fix ggml_mul_mat_id handling

ggml-ci

* ci : add imatrix test

ggml-ci

* ci : rearrange output

ggml-ci

2 years ago backend : add eval callback (#4935)
Georgi Gerganov [Wed, 17 Jan 2024 16:39:41 +0000 (18:39 +0200)]
backend : add eval callback (#4935)

* backend : add eval callback

ggml-ci

* backend : group nodes in a single compute when the user doesn't need them

* backend : clean-up the implementation

ggml-ci

* simple : do not perform tensor data copy if not needed

* simple : fix

* simple : no need for ggml_is_contiguous + fix bool parse

* llama : fix callback placement in llama_context_params

* backend : avoid double-ask callback calls

* simple : restore examples, imatrix will serve as a demo

2 years ago metal : create autorelease pool during library build (#4970)
Georgi Gerganov [Wed, 17 Jan 2024 16:38:39 +0000 (18:38 +0200)]
metal : create autorelease pool during library build (#4970)

* metal : create autorelease pool during library build

ggml-ci

* test : simplify

ggml-ci

2 years ago py : fix whitespace
Georgi Gerganov [Wed, 17 Jan 2024 16:37:36 +0000 (18:37 +0200)]
py : fix whitespace

2 years ago py : fix missing added_tokens_dict for SPM and BPE vocabs (#4971)
Georgi Gerganov [Wed, 17 Jan 2024 13:45:03 +0000 (15:45 +0200)]
py : fix missing added_tokens_dict for SPM and BPE vocabs (#4971)

* py : fix missing added_tokens_dict for SPM vocab

* py : pad with unknown tokens when data is missing

ggml-ci

* py : fix BPE vocab conversion

ggml-ci

* py : fix padded dummy tokens (I hope)

2 years ago llama : use Q4_K for attn_v for Q2_K_S when n_gqa >= 4 (#4996)
Kawrakow [Wed, 17 Jan 2024 10:36:37 +0000 (12:36 +0200)]
llama : use Q4_K for attn_v for Q2_K_S when n_gqa >= 4 (#4996)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago metal : remove unnecessary nil check (#4986)
Paul Tsochantaris [Wed, 17 Jan 2024 08:07:24 +0000 (08:07 +0000)]
metal : remove unnecessary nil check (#4986)

2 years ago llama : fix copy/paste error in llama_sampling_params comment (#4994)
David Renshaw [Wed, 17 Jan 2024 07:17:50 +0000 (02:17 -0500)]
llama : fix copy/paste error in llama_sampling_params comment (#4994)

2 years ago py : remove unnecessary hasattr (#4903)
Georgi Gerganov [Tue, 16 Jan 2024 18:59:31 +0000 (20:59 +0200)]
py : remove unnecessary hasattr (#4903)

2 years ago nix: remove nixConfig from flake.nix (#4984)
Philip Taron [Tue, 16 Jan 2024 17:56:21 +0000 (09:56 -0800)]
nix: remove nixConfig from flake.nix (#4984)

2 years ago finetune : add training data file to log message (#4979)
Daniel Bevenius [Tue, 16 Jan 2024 17:54:24 +0000 (18:54 +0100)]
finetune : add training data file to log message (#4979)

This commit adds the name of the training data file to the log message
printed when the training data is tokenized.

The motivation for this change is that it can be useful to show which
file is being tokenized when running the finetune example.

Signed-off-by: Daniel Bevenius <redacted>
2 years ago ggml : importance matrix support for legacy quants (#4969)
Kawrakow [Tue, 16 Jan 2024 17:51:26 +0000 (19:51 +0200)]
ggml : importance matrix support for legacy quants (#4969)

* imatrix: adding support for legacy quants

* imatrix: guard Q4_0/Q5_0 against ffn_down craziness

---------

Co-authored-by: Iwan Kawrakow <redacted>
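
For context on what importance-matrix support means for a quantizer:
the calibration pass records average squared activations per weight
column, and quantization then minimizes an activation-weighted error
instead of a plain one. A sketch of the weighted objective, purely
illustrative and not ggml's actual quantization routine:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Activation-weighted squared error of quantizing block x with a given
    // scale: sum_i w_i * (x_i - scale * round(x_i / scale))^2, where w_i is
    // derived from average squared activations seen during calibration.
    // The quantizer searches for the scale minimizing this value.
    double weighted_error(const std::vector<float> & x,
                          const std::vector<float> & w, float scale) {
        double err = 0.0;
        for (size_t i = 0; i < x.size(); ++i) {
            const double q = scale * std::round(x[i]/scale);
            const double d = x[i] - q;
            err += w[i] * d * d;
        }
        return err;
    }
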
2 years ago examples : add complete parallel function calling example (#4974)
Maximilian Winter [Tue, 16 Jan 2024 17:41:42 +0000 (18:41 +0100)]
examples : add complete parallel function calling example (#4974)

2 years ago perplexity : fix kv cache handling for hellaswag (#4981)
Georgi Gerganov [Tue, 16 Jan 2024 17:34:54 +0000 (19:34 +0200)]
perplexity : fix kv cache handling for hellaswag (#4981)

ggml-ci

2 years ago flake.lock: update flake-parts, flake-parts/nixpkgs-lib, and nixpkgs (#4920)
Georgi Gerganov [Tue, 16 Jan 2024 17:13:54 +0000 (19:13 +0200)]
flake.lock: update flake-parts, flake-parts/nixpkgs-lib, and nixpkgs (#4920)

Flake lock file updates:

• Updated input 'flake-parts':
    'github:hercules-ci/flake-parts/34fed993f1674c8d06d58b37ce1e0fe5eebcb9f5' (2023-12-01)
  → 'github:hercules-ci/flake-parts/07f6395285469419cf9d078f59b5b49993198c00' (2024-01-11)
• Updated input 'flake-parts/nixpkgs-lib':
    'github:NixOS/nixpkgs/e92039b55bcd58469325ded85d4f58dd5a4eaf58?dir=lib' (2023-11-29)
  → 'github:NixOS/nixpkgs/b0d36bd0a420ecee3bc916c91886caca87c894e9?dir=lib' (2023-12-30)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/cfc3698c31b1fb9cdcf10f36c9643460264d0ca8' (2023-12-27)
  → 'github:NixOS/nixpkgs/317484b1ead87b9c1b8ac5261a8d2dd748a0492d' (2024-01-08)

Co-authored-by: github-actions[bot] <redacted>
2 years ago metal : localized logic in `ggml_metal_graph_compute` (#4924)
Paul Tsochantaris [Tue, 16 Jan 2024 17:05:19 +0000 (17:05 +0000)]
metal : localized logic in `ggml_metal_graph_compute` (#4924)

* Metal: Localized logic in `ggml_metal_graph_compute`, minor performance improvement

* Whitespace

* Collecting command buffer completions on single thread

* Whitespace

* Reduce diff noise

2 years ago android : introduce starter project example (#4926)
Neuman Vong [Tue, 16 Jan 2024 13:47:34 +0000 (00:47 +1100)]
android : introduce starter project example (#4926)

* Introduce starter project for Android

Based on examples/llama.swiftui.

* Add github workflow

* Set NDK version

* Only build arm64-v8a in CI

* Sync bench code

* Rename CI prop to skip-armeabi-v7a

* Remove unused tests

2 years ago metal : replace loop of dispatch_async with dispatch_apply (#4934)
Alex Azarov [Tue, 16 Jan 2024 13:41:27 +0000 (14:41 +0100)]
metal : replace loop of dispatch_async with dispatch_apply (#4934)

* Replace loop of dispatch_async with dispatch_apply

* Update ggml-metal.m

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years ago metal : log `recommendedMaxWorkingSetSize` on iOS 16+ (#4936)
Alex Azarov [Tue, 16 Jan 2024 13:33:02 +0000 (14:33 +0100)]
metal : log `recommendedMaxWorkingSetSize` on iOS 16+ (#4936)

* metal: Log `recommendedMaxWorkingSetSize` on iOS 16+

* Only log on iOS and macOS, ignoring tvOS and other platforms

* Check for Xcode version before using recommendedMaxWorkingSetSize

---------

Co-authored-by: Georgi Gerganov <redacted>
2 years ago examples : fix and improve docs for the grammar generator (#4909)
Maximilian Winter [Tue, 16 Jan 2024 12:10:48 +0000 (13:10 +0100)]
examples : fix and improve docs for the grammar generator (#4909)

* Create pydantic-models-to-grammar.py

* Added some comments for usage

* Refactored Grammar Generator

Added example and usage instruction.

* Update pydantic_models_to_grammar.py

* Update pydantic-models-to-grammar-examples.py

* Renamed module and imported it.

* Update pydantic-models-to-grammar.py

* Renamed file and fixed grammar generator issue.

* Fixed some issues and bugs in the grammar generator. Improved documentation.

* Update pydantic_models_to_grammar.py

2 years ago ggml : introduce GGML_CALL function annotation (#4850)
Justine Tunney [Tue, 16 Jan 2024 11:16:33 +0000 (03:16 -0800)]
ggml : introduce GGML_CALL function annotation (#4850)

This change makes it possible to build ggml-cuda.cu and ggml-metal.m as
independent dynamic shared objects that may be conditionally linked at
runtime in a multiplatform binary. It introduces a GGML_CALL annotation
that documents which functions have a cyclic call relationship between
the application code and the GPU modules.

This change does nothing unless the build defines -DGGML_MULTIPLATFORM,
which causes back-references and function pointers to conform to the MS
ABI, which is supported by NVCC, ROCm, XCode, GCC and Clang across
platforms.

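A simplified sketch of the annotation's mechanics, reconstructed from
the description above (consult ggml.h for the authoritative
definition): without -DGGML_MULTIPLATFORM the macro expands to nothing;
with it, annotated functions adopt the MS ABI so that function pointers
crossing the application/GPU-module boundary agree on one calling
convention.

    #ifdef GGML_MULTIPLATFORM
    #    if defined(_WIN32)
    #        define GGML_CALL                             // MSVC already uses the MS ABI
    #    else
    #        define GGML_CALL __attribute__((__ms_abi__)) // force MS ABI elsewhere
    #    endif
    #else
    #    define GGML_CALL                                 // default build: no-op
    #endif

    // Usage: annotate the declarations that cross the boundary, e.g.
    //     GGML_CALL void ggml_backend_tensor_set(/* ... */);
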
2 years ago finetune : use LLAMA_FILE_MAGIC_GGLA (#4961)
Daniel Bevenius [Tue, 16 Jan 2024 11:14:19 +0000 (12:14 +0100)]
finetune : use LLAMA_FILE_MAGIC_GGLA (#4961)

This commit replaces the magic number LLAMA_FILE_MAGIC_LORA used in
finetune.cpp with LLAMA_FILE_MAGIC_GGLA defined in llama.h.

Signed-off-by: Daniel Bevenius <redacted>
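
For illustration: LLAMA_FILE_MAGIC_GGLA is the 'ggla' four-byte tag at
the start of LoRA adapter files, and the point of the commit is to
compare against the named constant instead of a bare number. A hedged
sketch of such a check (the helper function is hypothetical):

    #include <cstdint>
    #include <cstdio>

    #define LLAMA_FILE_MAGIC_GGLA 0x67676c61u // 'ggla', as defined in llama.h

    // Hypothetical helper: does this stream begin with a LoRA adapter header?
    bool is_lora_file(std::FILE * f) {
        std::uint32_t magic = 0;
        if (std::fread(&magic, sizeof(magic), 1, f) != 1) {
            return false;
        }
        return magic == LLAMA_FILE_MAGIC_GGLA;
    }
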
2 years ago speculative : threading options (#4959)
stduhpf [Tue, 16 Jan 2024 11:04:32 +0000 (12:04 +0100)]
speculative : threading options (#4959)

* speculative: expose draft threading

* fix usage format

* accept -td and -tbd args

* speculative: revert default behavior when -td is unspecified

* fix trailing whitespace

2 years ago pass cpu-architecture arguments only to host code (C;C++) (#4943)
ngc92 [Mon, 15 Jan 2024 18:40:48 +0000 (20:40 +0200)]
pass cpu-architecture arguments only to host code (C;C++) (#4943)

2 years ago llama : apply classifier-free guidance to logits directly (#4951)
David Friehs [Mon, 15 Jan 2024 13:06:52 +0000 (14:06 +0100)]
llama : apply classifier-free guidance to logits directly (#4951)

2 years ago awq-py : fix typo in awq-py/README.md (#4947)
Victor Z. Peng [Mon, 15 Jan 2024 12:41:46 +0000 (04:41 -0800)]
awq-py : fix typo in awq-py/README.md (#4947)

2 years ago cuda : fix dequantize kernel names (#4938)
Georgi Gerganov [Mon, 15 Jan 2024 11:27:00 +0000 (13:27 +0200)]
cuda : fix dequantize kernel names (#4938)

2 years ago llama : check for 256 divisibility for IQ2_XS, IQ2_XXS (#4950)
Kawrakow [Mon, 15 Jan 2024 08:09:38 +0000 (10:09 +0200)]
llama : check for 256 divisibility for IQ2_XS, IQ2_XXS (#4950)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago CUDA: faster dequantize kernels for Q4_0 and Q4_1 (#4938)
Kawrakow [Mon, 15 Jan 2024 05:48:06 +0000 (07:48 +0200)]
CUDA: faster dequantize kernels for Q4_0 and Q4_1 (#4938)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago llama : fix missing quotes (#4937)
David Pflug [Sun, 14 Jan 2024 15:46:00 +0000 (10:46 -0500)]
llama : fix missing quotes (#4937)

2 years ago Add ability to use importance matrix for all k-quants (#4930)
Kawrakow [Sun, 14 Jan 2024 14:21:12 +0000 (16:21 +0200)]
Add ability to use importance matrix for all k-quants (#4930)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago llama : check LLAMA_TRACE env for extra logging (#4929)
Georgi Gerganov [Sun, 14 Jan 2024 11:26:53 +0000 (13:26 +0200)]
llama : check LLAMA_TRACE env for extra logging (#4929)

* llama : minor fix indent

* llama : check LLAMA_TRACE env for extra logging

ggml-ci

2 years ago scripts : sync-ggml-am.sh option to skip commits
Georgi Gerganov [Sun, 14 Jan 2024 09:08:09 +0000 (11:08 +0200)]
scripts : sync-ggml-am.sh option to skip commits

2 years ago llama : use LLAMA_LOG_ macros for logging
Georgi Gerganov [Sun, 14 Jan 2024 09:03:19 +0000 (11:03 +0200)]
llama : use LLAMA_LOG_ macros for logging

2 years ago Fix ffn_down quantization mix for MoE models (#4927)
Kawrakow [Sun, 14 Jan 2024 08:53:39 +0000 (10:53 +0200)]
Fix ffn_down quantization mix for MoE models (#4927)

* Fix ffn_down quantization mix for MoE models

In #4872 I did not consider the part where every third
tensor is quantized with more bits. For MoE this leads to tensors
of the same layer being quantized with different numbers of bits,
which is not considered a possibility in the inference implementation
(it is assumed all experts use the same quantization).

* Fix the fix

* Review suggestion

---------

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago metal : correctly set SIMD support flags on iOS (#4923)
Alex Azarov [Sun, 14 Jan 2024 08:44:39 +0000 (09:44 +0100)]
metal : correctly set SIMD support flags on iOS (#4923)

* Correctly set support_simdgroup_reduction and support_simdgroup_mm on iPhone/iPad

* log a little bit more info on iOS

2 years ago llama : support WinXP build with MinGW 8.1.0 (#3419)
Karthik Kumar Viswanathan [Sun, 14 Jan 2024 08:41:44 +0000 (00:41 -0800)]
llama : support WinXP build with MinGW 8.1.0 (#3419)

2 years ago 2-bit quantizations (#4897)
Kawrakow [Sun, 14 Jan 2024 07:45:56 +0000 (09:45 +0200)]
2-bit quantizations (#4897)

* imatrix: load

* imatrix: WIP

* imatrix: Add Q2_K quantization

* imatrix: also guard against Q2_K_S quantization without importance matrix

* imatrix: guard even more against low-bit quantization misuse

---------

Co-authored-by: Iwan Kawrakow <redacted>
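
The guarding bullets above amount to a hard check at quantization
time: 2-bit types degrade badly without calibration data, so the tool
refuses to proceed. A hypothetical sketch of such a guard (names and
message are illustrative, not llama.cpp's actual API):

    #include <stdexcept>

    // Refuse low-bit quantization when no importance matrix was provided.
    void check_imatrix(bool has_imatrix, bool is_low_bit_type) {
        if (is_low_bit_type && !has_imatrix) {
            throw std::runtime_error(
                "low-bit quantization requires an importance matrix");
        }
    }
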
2 years ago Make Q3_K_S be the same as old Q3_K_L for Mixtral-8x7B (#4906)
Kawrakow [Sun, 14 Jan 2024 07:44:30 +0000 (09:44 +0200)]
Make Q3_K_S be the same as old Q3_K_L for Mixtral-8x7B (#4906)

Co-authored-by: Iwan Kawrakow <redacted>
2 years ago sync : ggml
Georgi Gerganov [Sat, 13 Jan 2024 22:14:46 +0000 (00:14 +0200)]
sync : ggml

2 years ago ggml: cache sin/cos for RoPE (#4908)
Johannes Gäßler [Sat, 13 Jan 2024 20:41:37 +0000 (21:41 +0100)]
ggml: cache sin/cos for RoPE (#4908)
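
This last entry appears without a body in this log. As background on
the technique its title names: RoPE rotates query/key pairs by
position-dependent angles theta = pos * base^(-2i/d), and the sin/cos
table can be computed once and reused rather than calling sinf/cosf per
element on every graph evaluation. A hypothetical sketch of such a
cache (not the actual ggml implementation):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Precomputed RoPE rotation table: one sin/cos pair per
    // (position, dimension-pair) combination.
    struct rope_cache {
        std::vector<float> sin_v, cos_v; // n_pos * (d/2) entries each
        int half_d = 0;

        void build(int n_pos, int d, float base) {
            half_d = d/2;
            sin_v.resize((std::size_t) n_pos * half_d);
            cos_v.resize((std::size_t) n_pos * half_d);
            for (int p = 0; p < n_pos; ++p) {
                for (int i = 0; i < half_d; ++i) {
                    const float theta = p * std::pow(base, -2.0f*i/d);
                    sin_v[(std::size_t) p*half_d + i] = std::sin(theta);
                    cos_v[(std::size_t) p*half_d + i] = std::cos(theta);
                }
            }
        }
    };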