git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Xuan Son Nguyen [Sun, 28 Apr 2024 15:36:18 +0000 (17:36 +0200)]
gguf : enforce that tensor names are unique (#6905)
* not allow adding duplicated tensor name
* no duplicated tensor while reading gguf
* typo
* throw exception inside llama_model_loader
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
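A minimal sketch of the duplicate-name check this change enforces; the helper below is hypothetical, not the actual gguf/llama_model_loader code, but illustrates the idea of throwing as soon as a tensor name repeats:

```cpp
#include <stdexcept>
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical helper: reject a model whose tensor list contains the same name twice,
// mirroring the behaviour enforced in gguf and inside llama_model_loader.
void check_unique_tensor_names(const std::vector<std::string> & names) {
    std::unordered_set<std::string> seen;
    for (const auto & name : names) {
        if (!seen.insert(name).second) {
            throw std::runtime_error("duplicated tensor name: " + name);
        }
    }
}
```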
Neo Zhang [Sun, 28 Apr 2024 14:40:31 +0000 (22:40 +0800)]
add device version in device list (#6959)
Co-authored-by: arthw <>
github-actions[bot] [Sun, 28 Apr 2024 00:18:27 +0000 (00:18 +0000)]
flake.lock: Update
Flake lock file updates:
• Updated input 'nixpkgs':
'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv+l0FhvfDMiyCmhoRbNB+0SeInZkbk=' (2024-04-19)
→ 'github:NixOS/nixpkgs/7bb2ccd8cdc44c91edba16c48d2c8f331fb3d856?narHash=sha256-Drmja/f5MRHZCskS6mvzFqxEaZMeciScCTFxWVLqWEY=' (2024-04-25)
mgroeber9110 [Sat, 27 Apr 2024 19:02:06 +0000 (21:02 +0200)]
Replace "alternative" boolean operator in conditional compilation directive (#6949)
Pierrick Hymbert [Sat, 27 Apr 2024 15:50:48 +0000 (17:50 +0200)]
ci: server: tests python env on github container ubuntu latest / fix n_predict (#6935)
* ci: server: fix python env
* ci: server: fix server tests after #6638
* ci: server: fix windows is not building PR branch
agray3 [Fri, 26 Apr 2024 18:08:30 +0000 (19:08 +0100)]
Reset schedule earlier to allow overlap with ggml graph computation on device (#6933)
* Reset schedule earlier to allow overlap with graph computation on device
Pierrick Hymbert [Fri, 26 Apr 2024 18:06:33 +0000 (20:06 +0200)]
quantize: add imatrix and dataset metadata in GGUF (#6658)
* imatrix: save the dataset file used in the output file
* llama: support kv overrides type string string
* common: factorize KV Overrides parsing between common and server
* quantize: add imatrix n entries and dataset KV metadata
quantize: factorize KV Overrides parsing between common
#6656
* llama: remove kv override str_value initialization as it does not compile on some toolchain
* quantize: add imatrix m_last_call as `quantize.imatrix.chunks_count`
* quantize: add imatrix filename in KV
* llama: add llama_model_kv_override_free
* common: add llama_model_kv_override_free
common: free kv override if used after model loading
* llama: finally move the string KV override value to the stack
* llama : minor
* no need to add a NUL to the std::vector, std::string can be initialized from a pair of iterators.
Co-authored-by: slaren <redacted>
* kv override: ensure string termination
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: slaren <redacted>
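A rough sketch of what writing this provenance metadata with the gguf C API could look like. `quantize.imatrix.chunks_count` is named in the bullets above; the other key names are assumptions used for illustration:

```cpp
#include "ggml.h"  // gguf_* API (declared in ggml.h at this point in time)

// Illustrative only: record which imatrix and dataset were used when producing a
// quantized GGUF. Key names other than quantize.imatrix.chunks_count are assumptions.
void add_imatrix_metadata(struct gguf_context * ctx,
                          const char * imatrix_file,
                          const char * dataset_file,
                          int32_t      chunks_count) {
    gguf_set_val_str(ctx, "quantize.imatrix.file",         imatrix_file);
    gguf_set_val_str(ctx, "quantize.imatrix.dataset",      dataset_file);
    gguf_set_val_i32(ctx, "quantize.imatrix.chunks_count", chunks_count);
}
```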
slaren [Fri, 26 Apr 2024 16:39:58 +0000 (18:39 +0200)]
add basic tensor data validation function (#6884)
* add basic tensor data validation function
* add --check-tensors command line argument
tensor validation is disabled by default and can be enabled by adding
`--check-tensors` to the command line arguments.
quantize always validates tensors.
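A simplified sketch of the kind of scan such validation performs; the function added in this change also covers f16 and the quantized block formats:

```cpp
#include <cmath>
#include <cstddef>

// Simplified sketch: scan f32 tensor data for NaN/Inf values, the most common sign of
// corrupted or truncated model files.
bool validate_f32_data(const float * data, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        if (std::isnan(data[i]) || std::isinf(data[i])) {
            return false;
        }
    }
    return true;
}
```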
slaren [Fri, 26 Apr 2024 15:07:42 +0000 (17:07 +0200)]
gguf : fix mismatch between alloc and free functions (#6929)
Justine Tunney [Fri, 26 Apr 2024 14:05:33 +0000 (10:05 -0400)]
llamafile : use 64-bit integers in sgemm (#6928)
Pierrick Hymbert [Fri, 26 Apr 2024 10:27:25 +0000 (12:27 +0200)]
ci: server: fix python installation (#6925)
Pierrick Hymbert [Fri, 26 Apr 2024 10:15:30 +0000 (12:15 +0200)]
server: stop generation at `n_ctx_train` if `n_predict` is not set (#6638)
* server: cap n_predict if not set to n_ctx_train
* server: fix infinite loop
* server: infinite loop, move in process_token
server: infinite loop: set stop limit to true
* minor: spaces
* minor: spaces
* server: include prompt tokens in the EOS limit
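A sketch of the capping logic described above; variable names are illustrative rather than the server's actual fields:

```cpp
#include <algorithm>

// Illustrative: if the client did not set n_predict (-1 = unlimited), cap generation so
// that prompt + generated tokens cannot exceed the model's training context size.
int effective_n_predict(int n_predict, int n_ctx_train, int n_prompt_tokens) {
    const int budget = std::max(0, n_ctx_train - n_prompt_tokens); // prompt tokens count
    if (n_predict < 0 || n_predict > budget) {
        return budget;
    }
    return n_predict;
}
```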
Pierrick Hymbert [Fri, 26 Apr 2024 09:11:51 +0000 (11:11 +0200)]
ci: server: fix python installation (#6922)
Georgi Gerganov [Fri, 26 Apr 2024 07:41:53 +0000 (10:41 +0300)]
Merge pull request from GHSA-p5mv-gjc5-mwqv
* always use calloc
clamp n_kv on failure to read a kv
* ggml : alternative ctx->header.n_kv update
---------
Co-authored-by: slaren <redacted>
Pierrick Hymbert [Fri, 26 Apr 2024 07:27:49 +0000 (09:27 +0200)]
ci: server: fix python installation (#6918)
Pierrick Hymbert [Fri, 26 Apr 2024 07:26:59 +0000 (09:26 +0200)]
ci: fix concurrency for pull_request_target (#6917)
Pierrick Hymbert [Fri, 26 Apr 2024 07:26:16 +0000 (09:26 +0200)]
bench: server add stop word for PHI-2 (#6916)
vik [Thu, 25 Apr 2024 19:38:31 +0000 (12:38 -0700)]
llava : add support for moondream vision language model (#6899)
* add support for moondream vision language model
This required making the following changes to the CLIP model:
1. Support for patch embedding bias.
2. Make class embedding and pre-layernorm optional.
3. Add support for post-layernorm.
* Update examples/llava/clip.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Thu, 25 Apr 2024 18:31:17 +0000 (21:31 +0300)]
cmake : restore LLAMA_LLAMAFILE_DEFAULT
Georgi Gerganov [Thu, 25 Apr 2024 15:59:51 +0000 (18:59 +0300)]
cmake : remove obsolete ANDROID check
slaren [Thu, 25 Apr 2024 15:59:03 +0000 (17:59 +0200)]
llama : synchronize before get/set session data (#6911)
Georgi Gerganov [Thu, 25 Apr 2024 14:06:27 +0000 (17:06 +0300)]
ci : tmp disable slow tests
BarfingLemurs [Thu, 25 Apr 2024 13:52:28 +0000 (09:52 -0400)]
readme : update model list (#6908)
* Update README.md
* missing space
* llama3 !
slaren [Thu, 25 Apr 2024 13:23:47 +0000 (15:23 +0200)]
llama : check that all the tensor data is in the model file (#6885)
* llama : check that all the tensor data is in the model file
* also check for unsigned overflow
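A sketch of the bounds check with an overflow-safe formulation; the real check lives in llama_model_loader:

```cpp
#include <cstddef>

// Sketch: verify that a tensor's data range [offs, offs + size) lies inside the file.
// Comparing "size <= file_size - offs" after checking "offs <= file_size" avoids the
// unsigned wrap-around that a naive "offs + size <= file_size" could produce.
bool tensor_in_file(size_t offs, size_t size, size_t file_size) {
    if (offs > file_size) {
        return false;
    }
    return size <= file_size - offs;
}
```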
Georgi Gerganov [Thu, 25 Apr 2024 12:48:25 +0000 (15:48 +0300)]
ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906)
Daniel Bevenius [Thu, 25 Apr 2024 12:38:14 +0000 (14:38 +0200)]
clip : rename lerp function to avoid conflict (#6894)
This commit renames the lerp (linear interpolation) function in clip.cpp
to avoid a conflict with the lerp function in the <cmath> standard C++
library when using C++20.
The motivation for this change is to enable projects that use C++20 to
compile clip.cpp without having to resort to patching it. The lerp
function was added to <cmath> in C++20 (202002L), which is why this is
not causing any issue at the moment, as llama.cpp currently uses
C++11/C++17.
I realize that llama.cpp uses either C++11 (or C++17 in the case of
SYCL) but wanted to ask if this would be an acceptable change just the
same.
Refs: https://en.cppreference.com/w/cpp/numeric/lerp
Signed-off-by: Daniel Bevenius <redacted>
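A small sketch of the kind of clash the rename avoids. It assumes `std::lerp` is brought into scope by a using-directive (how exactly clip.cpp ends up seeing it is not stated above):

```cpp
#include <cmath>
using namespace std;  // assumption: std::lerp visible unqualified

// File-local helper with the same name as the C++20 standard function.
static float lerp(float s, float e, float t) { return s + (e - s) * t; }

// In C++20, <cmath> also declares std::lerp(float, float, float); with both visible,
// an unqualified call becomes ambiguous, hence the rename of the clip.cpp helper.
// float v = lerp(0.0f, 1.0f, 0.5f); // error: call to 'lerp' is ambiguous
```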
Georgi Gerganov [Thu, 25 Apr 2024 12:12:28 +0000 (15:12 +0300)]
ggml : fix MIN / MAX macros (#6904)
ggml-ci
Georgi Gerganov [Thu, 25 Apr 2024 11:27:20 +0000 (14:27 +0300)]
tests : minor bash stuff (#6902)
* tests : minor bash stuff
ggml-ci
* llama : fix build
ggml-ci
* tests : fix CUR_DIR -> ROOT_DIR
ggml-ci
* tests : fix fname
ggml-ci
jiez [Thu, 25 Apr 2024 10:29:35 +0000 (18:29 +0800)]
quantize : add '--keep-split' to quantize model into shards (#6688)
* Implement '--keep-split' to quantize model into several shards
* Add test script
* Update examples/quantize/quantize.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Split model correctly even if tensor id is out-of-order
* Update llama_model_quantize_params
* Fix preci failures
---------
Co-authored-by: z5269887 <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Johannes Gäßler [Wed, 24 Apr 2024 19:29:13 +0000 (21:29 +0200)]
README: add graphic for matrix multiplication (#6881)
Douglas Hanley [Wed, 24 Apr 2024 13:10:07 +0000 (08:10 -0500)]
llama : add llama_get_pooling_type function (#6862)
* add llama_get_pooling_type function
* fix argument name, move with ctx funcs
mgroeber9110 [Wed, 24 Apr 2024 10:54:24 +0000 (12:54 +0200)]
server : do not apply Markdown formatting in code sections (#6850)
Kyle Mistele [Wed, 24 Apr 2024 10:15:29 +0000 (05:15 -0500)]
common : revert showing control tokens by default for server (#6860)
* fix: revert showing control tokens by default
* feat: revert changes to default behavior of llama_token_to_piece; provide overridden declaration to receive "bool special" param to toggle showing control tokens
* feat: use the overridden declaration of llama_token_to_piece from common/common.cpp to specify "false" so that control tokens are not shown in chat completion responses"
* common : simplify
---------
Co-authored-by: Georgi Gerganov <redacted>
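A usage sketch of the two call sites described above, using the common-library wrapper; treat the exact default value of the `special` parameter as an assumption:

```cpp
#include <string>
#include "common.h"

// Sketch: the same token rendered with and without control tokens. Chat-completion
// responses pass special=false so markers like <|im_end|> are not echoed to the user;
// debugging tools can pass special=true to see them.
void render_token(llama_context * ctx, llama_token token) {
    std::string visible = llama_token_to_piece(ctx, token, /*special=*/false);
    std::string debug   = llama_token_to_piece(ctx, token, /*special=*/true);
    (void) visible; (void) debug;
}
```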
Johannes Gäßler [Wed, 24 Apr 2024 09:08:36 +0000 (11:08 +0200)]
Server: fix seed for multiple slots (#6835)
* Server: add tests for consistent results
* sampling: separate rng per sampling context
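A minimal sketch of the idea behind the fix: each sampling context owns its own RNG, seeded per slot, so parallel server slots with different seeds produce independent streams. Field and function names are illustrative:

```cpp
#include <cstdint>
#include <random>

// Illustrative: per-slot sampling state with a private RNG, instead of one global RNG
// shared across all server slots.
struct sampling_ctx {
    std::mt19937 rng;
    explicit sampling_ctx(uint32_t seed) : rng(seed) {}

    // pick an index in [0, n) using this slot's private stream
    int sample_uniform(int n) {
        std::uniform_int_distribution<int> dist(0, n - 1);
        return dist(rng);
    }
};
```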
Georgi Gerganov [Wed, 24 Apr 2024 09:00:07 +0000 (12:00 +0300)]
ggml : move 32-bit arm compat in ggml-impl.h (#6865)
ggml-ci
Tristan Druyen [Wed, 24 Apr 2024 08:52:37 +0000 (10:52 +0200)]
llama : add phi 3 chat template (#6857)
* Add phi 3 chat template & tests
* test : fix chat template result
---------
Co-authored-by: Georgi Gerganov <redacted>
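A usage sketch of applying a chat template through the public API, relying on the template embedded in the model (`tmpl = nullptr`); whether a "phi3" shortcut name is also accepted is an assumption:

```cpp
#include <vector>
#include "llama.h"

// Sketch: format a short conversation with the model's embedded chat template and
// append the assistant prefix so generation can continue from it.
std::vector<char> format_chat(const llama_model * model) {
    llama_chat_message msgs[] = {
        { "user",      "Hello" },
        { "assistant", "Hi, how can I help?" },
    };
    std::vector<char> buf(4096);
    int32_t n = llama_chat_apply_template(model, /*tmpl=*/nullptr, msgs, 2,
                                          /*add_ass=*/true, buf.data(), (int32_t) buf.size());
    if (n > (int32_t) buf.size()) { // buffer too small: grow and retry
        buf.resize(n);
        n = llama_chat_apply_template(model, nullptr, msgs, 2, true, buf.data(), (int32_t) buf.size());
    }
    buf.resize(n > 0 ? n : 0);
    return buf;
}
```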
Junyang Lin [Wed, 24 Apr 2024 07:16:21 +0000 (15:16 +0800)]
convert : add support of codeqwen due to tokenizer (#6707)
* add support of codeqwen due to tokenizer
* override load_hparams
* fix typo
* fix load_params
* convert : fix whitespace
---------
Co-authored-by: Georgi Gerganov <redacted>
liuwei-git [Wed, 24 Apr 2024 07:00:37 +0000 (15:00 +0800)]
llama : add phi3 support (#6852)
* add explicit phi3 support
* add explicit phi3 support
* remove unused code
* convert : add BOS token
* llama : match EOT token <|end|>
* llama : minor / style
* llama : tabs -> spaces
* convert : fix lint checks
---------
Co-authored-by: Georgi Gerganov <redacted>
Anas Ahouzi [Tue, 23 Apr 2024 00:53:18 +0000 (02:53 +0200)]
[SYCL] Windows default build instructions without -DLLAMA_SYCL_F16 flag activated (#6767)
* Fix FP32/FP16 build instructions
* Fix typo
* Recommended build instruction
Co-authored-by: Neo Zhang Jianyu <redacted>
* Recommended build instruction
Co-authored-by: Neo Zhang Jianyu <redacted>
* Recommended build instruction
Co-authored-by: Neo Zhang Jianyu <redacted>
* Add comments in Intel GPU linux
---------
Co-authored-by: Anas Ahouzi <redacted>
Co-authored-by: Neo Zhang Jianyu <redacted>
Justine Tunney [Mon, 22 Apr 2024 19:00:36 +0000 (15:00 -0400)]
llamafile : improve sgemm.cpp (#6796)
* llamafile : improve sgemm.cpp
- Re-enable by default
- Fix issue described in #6716
- Make code more abstract, elegant, and maintainable
- Faster handling of weirdly shaped `m` and `n` edge cases
* Address review comments
* Help clang produce fma instructions
* Address review comments
Dave Airlie [Mon, 22 Apr 2024 14:05:06 +0000 (00:05 +1000)]
ggml : fix calloc argument ordering. (#6820)
Latest gcc complains here:
/home/airlied/devel/llama.cpp/ggml-alloc.c: In function ‘ggml_gallocr_new_n’:
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
374 | ggml_gallocr_t galloc = (ggml_gallocr_t)calloc(sizeof(struct ggml_gallocr), 1);
| ^~~~~~
/home/airlied/devel/llama.cpp/ggml-alloc.c:374:59: note: earlier argument should specify number of elements, later size of each element
and a bunch more.
calloc is specified to take nmemb first then size, so realign the code.
In a couple of places there was a * x, 1 so I fixed those to use calloc properly.
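The fix in a nutshell: calloc takes the element count first and the element size second, which is what the new gcc warning checks for:

```cpp
#include <cstdlib>

struct widget { int a; float b; };

int main() {
    // Flagged by gcc's -Wcalloc-transposed-args: sizeof() in the first argument.
    // widget * bad  = (widget *) calloc(sizeof(widget), 1);

    // Correct order: number of elements first, size of each element second.
    widget * good = (widget *) calloc(1, sizeof(widget));
    free(good);
    return 0;
}
```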
Georgi Gerganov [Mon, 22 Apr 2024 12:41:11 +0000 (15:41 +0300)]
llama : fix typo in <|im_end|> token text (#6745)
Pierrick Hymbert [Mon, 22 Apr 2024 11:22:54 +0000 (13:22 +0200)]
ci: fix job are cancelling each other (#6781)
github-actions[bot] [Sun, 21 Apr 2024 00:17:47 +0000 (00:17 +0000)]
flake.lock: Update
Flake lock file updates:
• Updated input 'nixpkgs':
'github:NixOS/nixpkgs/1042fd8b148a9105f3c0aca3a6177fd1d9360ba5?narHash=sha256-3sbWO1mbpWsLepZGbWaMovSO7ndZeFqDSdX0hZ9nVyw=' (2024-04-10)
→ 'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv+l0FhvfDMiyCmhoRbNB+0SeInZkbk=' (2024-04-19)
Olivier Chafik [Sun, 21 Apr 2024 17:48:53 +0000 (18:48 +0100)]
`build`: generate hex dump of server assets during build (#6661)
* `build`: generate hex dumps of server assets on the fly
* build: workaround lack of -n on gnu xxd
* build: don't use xxd in cmake
* build: don't call xxd from build.zig
* build: more idiomatic hexing
* build: don't use xxd in Makefile (od hackery instead)
* build: avoid exceeding max cmd line limit in makefile hex dump
* build: hex dump assets at cmake build time (not config time)
Georgi Gerganov [Sun, 21 Apr 2024 15:36:45 +0000 (18:36 +0300)]
llama : add option to render special/control tokens (#6807)
* make : fix common dep on llama.h
* llama : add option to render special tokens
* readme : add API change notice
ggml-ci
* swift : fix build
Georgi Gerganov [Sun, 21 Apr 2024 13:47:57 +0000 (16:47 +0300)]
ggml : fix ggml_backend_cpu_supports_op() for CPY (#0)
Wouter [Sun, 21 Apr 2024 13:03:39 +0000 (15:03 +0200)]
llama : add llama-3 chat template (#6751)
* Added llama-3 chat template
* Update llama.cpp
Co-authored-by: Samuel Tallet <redacted>
* Update llama.cpp
Co-authored-by: Samuel Tallet <redacted>
* Update tests/test-chat-template.cpp
Co-authored-by: Samuel Tallet <redacted>
* Added EOS stop sequence according to https://github.com/ggerganov/llama.cpp/pull/6751#issuecomment-2065602862
* Removed adding of BOS token before first message
* Removed bos token from expected output from llama-3
* Update tests/test-chat-template.cpp
Co-authored-by: Rene Leonhardt <redacted>
* Update tests/test-chat-template.cpp
Co-authored-by: Rene Leonhardt <redacted>
* Added <|end_of_text|> as another stop token
* Reverted last change of adding the end_of_text stop word for llama 3
---------
Co-authored-by: Wouter Tichelaar <redacted>
Co-authored-by: Samuel Tallet <redacted>
Co-authored-by: Rene Leonhardt <redacted>
Co-authored-by: Georgi Gerganov <redacted>
pmysl [Sun, 21 Apr 2024 12:49:30 +0000 (14:49 +0200)]
gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761)
Jan Boon [Sun, 21 Apr 2024 12:35:40 +0000 (20:35 +0800)]
doc : add link to falcon (#6789)
Mohammadreza Hendiani [Sun, 21 Apr 2024 12:32:05 +0000 (16:02 +0330)]
readme : add Fedora instructions (#6783)
* added Fedora to the list of distros that may need the package (the packages have the same name on Fedora)
* how to add CLBlast, which is available in the Fedora repos
Justine Tunney [Sun, 21 Apr 2024 12:19:04 +0000 (08:19 -0400)]
llava : use logger in llava-cli (#6797)
This change removes printf() logging so llava-cli is shell scriptable.
Pedro Cuenca [Sun, 21 Apr 2024 11:50:41 +0000 (13:50 +0200)]
llama : support Llama 3 HF conversion (#6745)
* Support Llama 3 conversion
The tokenizer is BPE.
* style
* Accept suggestion
Co-authored-by: Sourab Mangrulkar <redacted>
* llama : add llama_token_is_eog()
ggml-ci
* llama : auto-detect more EOT tokens when missing in KV data
* convert : replacing EOS token is a hack
* llama : fix codegemma EOT token + add TODOs
* llama : fix model type string for 8B model
---------
Co-authored-by: Sourab Mangrulkar <redacted>
Co-authored-by: Georgi Gerganov <redacted>
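A usage sketch of the new `llama_token_is_eog()` helper in a generation loop; the surrounding sampling code is only a placeholder:

```cpp
#include "llama.h"

// Illustrative fragment: stop generation when the sampled token is any end-of-generation
// token (EOS, EOT, <|eot_id|>, ...), rather than comparing against llama_token_eos() only.
void generate(llama_context * ctx, const llama_model * model, int n_predict) {
    for (int i = 0; i < n_predict; ++i) {
        llama_token tok = 0; // placeholder: sample the next token with your sampler of choice
        if (llama_token_is_eog(model, tok)) {
            break;
        }
        // ... decode tok with ctx and continue ...
    }
    (void) ctx;
}
```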
Jan Boon [Sat, 20 Apr 2024 16:29:50 +0000 (00:29 +0800)]
doc : server tests require llama to be built with curl enabled (#6788)
Georgi Gerganov [Sat, 20 Apr 2024 10:27:12 +0000 (13:27 +0300)]
common : try to fix Android CI (#6780)
* common : disable get_math_cpu_count() until Android CI gets fixed
* common : another try
loonerin [Fri, 19 Apr 2024 17:03:35 +0000 (13:03 -0400)]
ci: add ubuntu latest release and fix missing build number (mac & ubuntu) (#6748)
Pierrick Hymbert [Fri, 19 Apr 2024 11:19:01 +0000 (13:19 +0200)]
server: static: upstream upgrade (#6765)
nopperl [Fri, 19 Apr 2024 09:35:54 +0000 (09:35 +0000)]
Implement the OLMo architecture (#6741)
* implement olmo architecture
* remove unused variable
* remove unused moe branch
* remove check for weight
* remove superfluous moe, bias and rope tensors
* clarified comment
* fix clamp_kqv setting
* remove obsolete parameter name filter
Austin [Fri, 19 Apr 2024 07:16:45 +0000 (03:16 -0400)]
train : add general name (#6752)
* llama : make general.name optional
* train: Add 'general.name' to model metadata
Signed-off-by: teleprint-me <redacted>
---------
Signed-off-by: teleprint-me <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Neo Zhang [Fri, 19 Apr 2024 01:16:31 +0000 (09:16 +0800)]
fix wrong parameter in cmd in readme-sycl.md (#6755)
Co-authored-by: jianyuzh <redacted>
slaren [Thu, 18 Apr 2024 13:18:48 +0000 (15:18 +0200)]
ggml : group all experts in a single ggml_mul_mat_id (#6505)
* ggml : group all experts in a single ggml_mul_mat_id
cuda : improve mmid row copy
* cuda : fix bin bcast with non-cont src0
* test-backend-ops : only run all mul mat tests for base types
* llama : disable moe offloading with SYCL
---------
Co-authored-by: Georgi Gerganov <redacted>
Sigbjørn Skjæret [Thu, 18 Apr 2024 11:49:01 +0000 (13:49 +0200)]
convert : support models with multiple chat templates (#6588)
* Support converting models with multiple chat templates
Adds the following metadata:
* tokenizer.chat_templates
* tokenizer.chat_template.<name1>
* tokenizer.chat_template.<name2>
* tokenizer.chat_template.<...>
Where `tokenizer.chat_templates` is an array of the template names (except `default`), `default` is added to the regular `tokenizer.chat_template`.
* replace filtered characters with underscore
* New script to add/modify/remove metadata
This scripts creates a copy of a GGUF file and allows you to add/modify/remove metadata in the process.
Most importantly this allows you to update chat templates, either as a string or directly from an updated tokenizer_config.json file.
* Add files via upload
add new script to project/readme
* flake--
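A reader-side sketch of how the named templates written by the converter could be looked up with the gguf API, falling back to the default key when a named variant is missing:

```cpp
#include <string>
#include "ggml.h"  // gguf reader API

// Sketch: fetch tokenizer.chat_template.<name>, or the default tokenizer.chat_template
// when name is empty or the named variant is absent.
std::string get_chat_template(const gguf_context * ctx, const std::string & name) {
    std::string key = name.empty() ? "tokenizer.chat_template"
                                   : "tokenizer.chat_template." + name;
    int id = gguf_find_key(ctx, key.c_str());
    if (id < 0 && !name.empty()) {
        id = gguf_find_key(ctx, "tokenizer.chat_template");
    }
    return id < 0 ? std::string() : std::string(gguf_get_val_str(ctx, id));
}
```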
Ren Xuancheng [Thu, 18 Apr 2024 11:38:04 +0000 (19:38 +0800)]
Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
slaren [Thu, 18 Apr 2024 07:04:47 +0000 (09:04 +0200)]
llama : fix compatibility with old 2 expert models (#6735)
Georgi Gerganov [Wed, 17 Apr 2024 20:58:26 +0000 (23:58 +0300)]
llamafile : tmp disable + build sgemm.o when needed (#6716)
* build : sgemm.o only when needed
ggml-ci
* llamafile : tmp disable due to MoE bug
ggml-ci
Yaroslav [Wed, 17 Apr 2024 12:47:50 +0000 (14:47 +0200)]
readme : add UI (#6724)
* Update README.md
* Update README.md
---------
Co-authored-by: Georgi Gerganov <redacted>
Zheng.Deng [Tue, 16 Apr 2024 20:51:07 +0000 (04:51 +0800)]
convert : fix autoawq gemma (#6704)
* fix autoawq quantized gemma model convert error
Using autoawq to quantize a gemma model includes an lm_head.weight tensor in model-00001-of-00002.safetensors, which convert-hf-to-gguf.py cannot map. Skipping this tensor prevents the error.
* change code to full string match and print necessary message
change code to full string match and print a short message to inform users that lm_head.weight has been skipped.
---------
Co-authored-by: Zheng.Deng <redacted>
Georgi Gerganov [Tue, 16 Apr 2024 20:50:38 +0000 (23:50 +0300)]
llama : make general.name optional (#6709)
Georgi Gerganov [Tue, 16 Apr 2024 20:50:22 +0000 (23:50 +0300)]
ggml : fix llamafile sgemm wdata offsets (#6710)
ggml-ci
Justine Tunney [Tue, 16 Apr 2024 18:55:30 +0000 (14:55 -0400)]
ggml : add llamafile sgemm (#6414)
This change upstreams llamafile's cpu matrix multiplication kernels
which improve image and prompt evaluation speed. For starters, Q4_0
and Q8_0 weights should go ~40% faster on CPU. The biggest benefits
are with data types like f16 / f32, which process prompts 2x faster
thus making them faster than quantized data types for prompt evals.
This change also introduces bona fide AVX512 support since tinyBLAS
is able to exploit the larger register file. For example, on my CPU
llama.cpp llava-cli processes an image prompt at 305 tokens/second,
using the Q4_K and Q4_0 types, which has always been faster than if
we used f16 LLaVA weights, which at HEAD go 188 tokens/second. With
this change, f16 LLaVA performance leapfrogs to 464 tokens/second.
On Intel Core i9-14900K this change improves F16 prompt perf by 5x.
For example, using llama.cpp at HEAD with Mistral 7b f16 to process
a 215 token prompt will go 13 tok/sec. This change has fixes making
it go 52 tok/sec. It's mostly thanks to my vectorized outer product
kernels but also because I added support for correctly counting the
number of cores on Alderlake, so the default thread count discounts
Intel's new efficiency cores. Only Linux right now can count cores.
This work was sponsored by Mozilla who's given permission to change
the license of this code from Apache 2.0 to MIT. To read more about
what's improved, and how it works, see: https://justine.lol/matmul/
Ashish [Tue, 16 Apr 2024 15:48:35 +0000 (08:48 -0700)]
llama : add StableLM2 12B (#6635)
* StableLM2 12B support for huggingface -> GGUF
* StableLM12 tensormapping and constants
* StableLM-2-12b model support
* fix
* Added 12B support
* Removed autoformatting; resolved bug where model_arch was not selecting StableLM2
* Formatting
* Do QK norm stacking in model conversion step
* Converge StableLM and StableLM2 code to simplify graph construction
* Fix accidental removal
* Removed warnings
* Revert formatter
* Move QK norm stack to private function so it's easier to read
* refactor stablelm graph builder to support 1.6, 3b and 12b more efficiently
* Proper check for None type for new_name to avoid crash; formatting; revert change to base class `write_tensors()`
* Format
* Formatting
* format
Co-authored-by: compilade <redacted>
* Fix incorrect check for K norm
* space after commas; Keep indentation multiple of 4 spaces
* Flake8 format
* Removed unnecessary conditional branches
* Removed unused comment
* Fixed incorrect tensor passing
* Format
---------
Co-authored-by: compilade <redacted>
Shijie [Tue, 16 Apr 2024 15:40:48 +0000 (23:40 +0800)]
llama : add qwen2moe (#6074)
* support qwen2moe
* fix-review
* metal : support unary ops for nelements % 4 != 0
* metal : require contiguousness for float4 unary kernels
* metal : require contiguousness for float4 unary kernels (cont)
* fix-review
* names : for brevity "SHARED_EXP" -> "SHEXP"
* llama : reuse build_moe_ffn()
* llama : add model type name
---------
Co-authored-by: Georgi Gerganov <redacted>
Daniel Bevenius [Tue, 16 Apr 2024 06:34:06 +0000 (08:34 +0200)]
gritlm : add --outdir option to hf.sh script (#6699)
This commit updates the hf.sh script usage to include the --outdir option
and specifies the models directory as the output directory.
The motivation for this is to avoid cluttering the root directory with
model files.
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Tue, 16 Apr 2024 06:28:33 +0000 (09:28 +0300)]
perplexity : require positive --ctx-size arg (#6695)
Daniel Bevenius [Tue, 16 Apr 2024 06:13:13 +0000 (08:13 +0200)]
gguf : add special tokens metadata for FIM/Infill (#6689)
This commit adds special token metadata for Fill-In-the-Middle
(FIM)/Infill to the GGUF model.
The motivation for this is that currently there is support for CodeLlama
but other models exist now like CodeGemma, but the different models use
different token ids for the special tokens and this commit allows for
supporting multiple models.
Signed-off-by: Daniel Bevenius <redacted>
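A sketch of how the overridable FIM tokens are used when assembling an infill prompt; the CodeLlama-style prefix/suffix/middle ordering shown here is an assumption about the caller, not part of this change:

```cpp
#include <vector>
#include "llama.h"

// Sketch: build a fill-in-the-middle prompt from the special token ids that the new
// GGUF metadata lets each model override.
std::vector<llama_token> build_fim_prompt(const llama_model * model,
                                          const std::vector<llama_token> & prefix,
                                          const std::vector<llama_token> & suffix) {
    std::vector<llama_token> out;
    out.push_back(llama_token_prefix(model));
    out.insert(out.end(), prefix.begin(), prefix.end());
    out.push_back(llama_token_suffix(model));
    out.insert(out.end(), suffix.begin(), suffix.end());
    out.push_back(llama_token_middle(model));
    return out;
}
```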
Olivier Chafik [Mon, 15 Apr 2024 17:35:21 +0000 (18:35 +0100)]
`main`: add --json-schema / -j flag (#6659)
* main: add --json-schema / -j
* json: move json-schema-to-grammar to common lib
* json: fix zig build
compilade [Mon, 15 Apr 2024 12:56:55 +0000 (08:56 -0400)]
llama : fix restoring the number of outputs from state files (#6687)
Pierrick Hymbert [Mon, 15 Apr 2024 12:18:47 +0000 (14:18 +0200)]
server : revert "minor layout improvements" (#6684)
This reverts commit b3a96f27f065a828f08c5d89ff60aab5361188fe.
Steven Prichard [Mon, 15 Apr 2024 10:14:46 +0000 (05:14 -0500)]
swift : linux support (#6590)
- Package.swift now supports conditional compilation based on OS
- Allows for package to be used by SPM on Non-Apple platforms
Co-authored-by: Steven Prichard <redacted>
Neo Zhang Jianyu [Mon, 15 Apr 2024 09:12:26 +0000 (17:12 +0800)]
fix mul_mat_id() for new input, make the ut pass (#6682)
David Renshaw [Sun, 14 Apr 2024 19:24:15 +0000 (15:24 -0400)]
llama : add missing kv clear in llama_beam_search (#6664)
Chao Jiang [Sun, 14 Apr 2024 16:16:34 +0000 (00:16 +0800)]
Add Command R chat template (#6650)
* Add chat template for command-r model series
* Fix indentation
* Add chat template test for command-r models and update the implementation to trim whitespaces
* Remove debug print
Georgi Gerganov [Sun, 14 Apr 2024 13:55:30 +0000 (16:55 +0300)]
flake.lock: Update (#6669)
Dave [Sun, 14 Apr 2024 11:14:19 +0000 (07:14 -0400)]
Added support for GGML_OP_CLAMP in Metal (#6662)
* Added support for GGML_OP_CLAMP in Metal
* Corrected size
---------
Co-authored-by: dave-fl <redacted>
Sigbjørn Skjæret [Sun, 14 Apr 2024 11:12:59 +0000 (13:12 +0200)]
Fix --split-max-size (#6655)
* Fix --split-max-size
Byte size calculation was done on int and overflowed.
* add tests.sh
* add examples test scripts to ci run
Will autodiscover examples/*/tests.sh scripts and run them.
* move WORK_PATH to a subdirectory
* clean up before and after test
* explicitly define which scripts to run
* add --split-max-size to readme
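The root cause in miniature: multiplying a size given in GiB inside 32-bit int overflows above 2 GiB, so the byte count has to be computed in a 64-bit type:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    int gib = 3;
    // 32-bit signed arithmetic overflows above 2 GiB (undefined behaviour):
    // int bad = gib * 1024 * 1024 * 1024;

    // 64-bit arithmetic keeps --split-max-size calculations exact:
    int64_t good = (int64_t) gib * 1024 * 1024 * 1024;
    printf("%lld bytes\n", (long long) good);
    return 0;
}
```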
Jaemin Son [Sun, 14 Apr 2024 11:12:36 +0000 (20:12 +0900)]
[bug fix] convert github repository_owner to lowercase (#6673)
James A Capozzoli [Sun, 14 Apr 2024 08:40:18 +0000 (04:40 -0400)]
convert : enable the `--use-temp-file` cli flag (#6645)
Neo Zhang Jianyu [Sun, 14 Apr 2024 02:42:29 +0000 (10:42 +0800)]
fix memcpy() crash, add missed cmd in guide, fix softmax (#6622)
* disable mmap to fix memcpy crash, add missed cmd in guide, fix softmax
* refactor to disable mmap for SYCL backend
* fix compile error in other os
* refactor the solution, use host buf to fix it, instead of disable mmap
* keep to support mmap()
* use host buff to reduce malloc times
* revert to malloc/free solution, for thread safety
Johannes Gäßler [Sat, 13 Apr 2024 22:21:55 +0000 (00:21 +0200)]
CUDA: fix matrix multiplication logic for tests (#6667)
Pierrick Hymbert [Sat, 13 Apr 2024 09:33:52 +0000 (11:33 +0200)]
model: support arch `DbrxForCausalLM` (#6515)
* model: dbrx convert to gguf
#6344
* llama: support dbrx
#6344
* doc: dbrx: add the model as supported
* scripts: get-wikitext-2 add unzip
* llama: increase maximum experts allowed
* llama: factorize moe graph implementation between grok, mixtral and dbrx
---------
Co-authored-by: Megha Agarwal <redacted>
Olivier Chafik [Fri, 12 Apr 2024 18:43:38 +0000 (19:43 +0100)]
JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555)
* json: rename python schema converter to make import easier
* server: skip null json_schema / grammar fields
* json: deps management for primitive rules (+ allow null values)
* json: optimize repetitions for minItems/maxItems and regexps: `a{,3}` goes from `"a"? "a"? "a"?` (explosive combos) to `(a (a (a)?)?)?`
* grammars: add troubleshooting section to readme
* json: cap length of numbers to 15 digits before/after decimal point
(avoids infinite gen, e.g. "one third" -> `0.333333333333...`)
* json: unify all repetition code (w/ or w/o sep)
* json: support string minLength/maxLength
* server+json: update server/README w/ result_format
* nits
* json: fix type error w/ python 3.8
* json: fix server/README (json_schema in /completion vs. result_format in /v1/chat/completions)
* json: simplify DOT `{"type": "string", "pattern": "^.$"}`
* json: remove recursion in opt_repetitions (avoids Python stack overflow)
* json: rm dead code
* json: rm useless assert & ggml.h import
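A sketch of the nesting trick for bounded repetitions: instead of emitting `"a"? "a"? "a"?` for `a{,3}` (which explodes combinatorially in the grammar), the rule is built right-to-left as `(a (a (a)?)?)?`. The helper below only illustrates the string construction, not the converter's actual code:

```cpp
#include <string>

// Sketch: build the nested optional form for "item repeated at most max_times times".
// build_repetition("a", 3) returns "(a (a (a)?)?)?".
std::string build_repetition(const std::string & item, int max_times) {
    std::string out;
    for (int i = 0; i < max_times; ++i) {
        out = (i == 0) ? "(" + item + ")?"
                       : "(" + item + " " + out + ")?";
    }
    return out;
}
```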
slaren [Fri, 12 Apr 2024 16:13:20 +0000 (18:13 +0200)]
metal : unify mul_mv_id kernels (#6556)
Daniel Bevenius [Fri, 12 Apr 2024 12:11:46 +0000 (14:11 +0200)]
infill : add download instructions for model (#6626)
* infill : add download instructions for model
This commit adds instructions on how to download a CodeLlama model
using the `hf.sh` script. This will download the model and place it
in the `models` directory, which is the model used later by the
infill example.
Signed-off-by: Daniel Bevenius <redacted>
* squash! infill : add download instructions for model
Clarify the reason for using CodeLlama.
Signed-off-by: Daniel Bevenius <redacted>
---------
Signed-off-by: Daniel Bevenius <redacted>
Pierrick Hymbert [Fri, 12 Apr 2024 11:49:21 +0000 (13:49 +0200)]
server : coherent log output for KV cache full (#6637)
jiez [Fri, 12 Apr 2024 10:45:06 +0000 (18:45 +0800)]
llama : add gguf_remove_key + remove split meta during quantize (#6591)
* Remove split metadata when quantize model shards
* Find metadata key by enum
* Correct loop range for gguf_remove_key and code format
* Free kv memory
---------
Co-authored-by: z5269887 <redacted>
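A usage sketch, assuming the new function has the shape `gguf_remove_key(ctx, key)`; the split key names below are taken from the split/merge metadata convention and should be treated as assumptions:

```cpp
#include "ggml.h"  // gguf_* API

// Sketch, under the assumptions above: drop shard metadata from a quantized output so
// readers do not treat the single resulting file as part of a split.
void strip_split_metadata(struct gguf_context * ctx) {
    gguf_remove_key(ctx, "split.no");
    gguf_remove_key(ctx, "split.count");
    gguf_remove_key(ctx, "split.tensors.count");
}
```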
Rene Leonhardt [Fri, 12 Apr 2024 08:52:36 +0000 (10:52 +0200)]
chore: Fix markdown warnings (#6625)
Georgi Gerganov [Fri, 12 Apr 2024 08:49:58 +0000 (11:49 +0300)]
imatrix : remove invalid assert (#6632)
MasterYi1024 [Fri, 12 Apr 2024 08:28:12 +0000 (16:28 +0800)]
Correct free memory and total memory. (#6630)
Co-authored-by: MasterYi <redacted>
Pierrick Hymbert [Fri, 12 Apr 2024 08:26:47 +0000 (10:26 +0200)]
eval-callback: use ggml_op_desc to pretty print unary operator name (#6631)
Georgi Gerganov [Fri, 12 Apr 2024 08:15:05 +0000 (11:15 +0300)]
ci : disable Metal for macOS-latest-cmake-x64 (#6628)