git.djapps.eu Git - pkg/ggml/sources/ggml/log
Shijie [Tue, 16 Apr 2024 15:40:48 +0000 (23:40 +0800)]
llama : add qwen2moe (llama/6074)
* support qwen2moe
* fix-review
* metal : support unary ops for nelements % 4 != 0
* metal : require contiguousness for float4 unary kernels
* metal : require contiguousness for float4 unary kernels (cont)
* fix-review
* names : for brevity "SHARED_EXP" -> "SHEXP"
* llama : reuse build_moe_ffn()
* llama : add model type name
---------
Co-authored-by: Georgi Gerganov <redacted>
Neo Zhang Jianyu [Mon, 15 Apr 2024 09:12:26 +0000 (17:12 +0800)]
fix mul_mat_id() for new input, make the unit test pass (llama/6682)
Dave [Sun, 14 Apr 2024 11:14:19 +0000 (07:14 -0400)]
Added support for GGML_OP_CLAMP in Metal (llama/6662)
* Added support for GGML_OP_CLAMP in Metal
* Corrected size
---------
Co-authored-by: dave-fl <redacted>
Neo Zhang Jianyu [Sun, 14 Apr 2024 02:42:29 +0000 (10:42 +0800)]
fix memcpy() crash, add missing command in guide, fix softmax (llama/6622)
* disable mmap to fix memcpy crash, add missing command in guide, fix softmax
* refactor to disable mmap for SYCL backend
* fix compile error on other OSes
* refactor the solution: use a host buffer to fix it instead of disabling mmap
* keep mmap() support
* use a host buffer to reduce the number of malloc calls
* revert to the malloc/free solution for thread safety
Johannes Gäßler [Sat, 13 Apr 2024 22:21:55 +0000 (00:21 +0200)]
CUDA: fix matrix multiplication logic for tests (llama/6667)
slaren [Fri, 12 Apr 2024 16:13:20 +0000 (18:13 +0200)]
metal : unify mul_mv_id kernels (llama/6556)
jiez [Fri, 12 Apr 2024 10:45:06 +0000 (18:45 +0800)]
llama : add gguf_remove_key + remove split meta during quantize (llama/6591)
* Remove split metadata when quantizing model shards
* Find metadata key by enum
* Correct loop range for gguf_remove_key and code format
* Free kv memory
---------
Co-authored-by: z5269887 <redacted>
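A minimal sketch of how the new gguf_remove_key() could be used to strip split metadata from a loaded GGUF context before writing a quantized shard; the key names and surrounding code here are editorial assumptions, not the actual quantize implementation.

    // Editorial sketch: drop split-related KV pairs from a loaded GGUF context.
    // The key names below are assumptions based on the split tool's conventions.
    #include "ggml.h"  // the gguf_* API is declared here at this point in the tree

    static void remove_split_meta(struct gguf_context * gctx) {
        const char * split_keys[] = { "split.no", "split.count", "split.tensors.count" };
        for (const char * key : split_keys) {
            if (gguf_find_key(gctx, key) >= 0) {   // -1 means the key is absent
                gguf_remove_key(gctx, key);
            }
        }
    }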
Justina Cho [Wed, 1 May 2024 21:44:26 +0000 (14:44 -0700)]
feat: implemented sigmoid function (#806)
* added sigmoid function
* implemented metal kernel for sigmoid
* implemented cuda kernel for sigmoid
* added sigmoid unary op and incremented count
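For reference, the element-wise operation the new kernels implement, as a plain C++ CPU sketch (not the actual ggml/Metal/CUDA code):

    // Reference semantics of the sigmoid unary op: y = 1 / (1 + exp(-x)).
    #include <cmath>
    #include <cstddef>

    static void sigmoid_f32(const float * x, float * y, size_t n) {
        for (size_t i = 0; i < n; ++i) {
            y[i] = 1.0f / (1.0f + std::exp(-x[i]));
        }
    }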
Borislav Stanimirov [Thu, 25 Apr 2024 14:24:07 +0000 (17:24 +0300)]
build: fix and ignore msvc warnings (#805)
Daniel Bevenius [Thu, 18 Apr 2024 16:47:17 +0000 (18:47 +0200)]
spec : fix typo in gguf.md (#798)
Signed-off-by: Daniel Bevenius <redacted>
Georgi Gerganov [Tue, 9 Apr 2024 17:28:35 +0000 (20:28 +0300)]
sync : whisper.cpp
Brad Murray [Tue, 9 Apr 2024 15:38:19 +0000 (11:38 -0400)]
whisper : fix DTW memory access (whisper/2012)
* Fix DTW memory access
* Memory fix - Apply changes from denersc
ulatekh [Tue, 9 Apr 2024 15:34:34 +0000 (08:34 -0700)]
common : fix file-handle leak in read_wav() (whisper/2026)
Now it cleans up in case of error.
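The general shape of such a fix, sketched here with a made-up header check rather than the real read_wav() code:

    // Illustrative only: close the file handle on every early-return path,
    // not just after a successful read.
    #include <cstdio>

    static bool read_wav_header(const char * path) {
        FILE * f = std::fopen(path, "rb");
        if (!f) {
            return false;
        }
        char riff[4];
        if (std::fread(riff, 1, sizeof(riff), f) != sizeof(riff)) {
            std::fclose(f); // this kind of early-return path used to leak the handle
            return false;
        }
        std::fclose(f);
        return true;
    }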
Rotem Dan [Tue, 9 Apr 2024 15:33:32 +0000 (18:33 +0300)]
main : set stdin to binary mode on Windows (whisper/2025)
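On Windows the usual way to do this is _setmode(); a hedged sketch of the typical approach (the actual commit may differ in detail):

    // Switch stdin to binary mode on Windows so raw byte streams (e.g. PCM audio
    // piped into the program) are not mangled by CRLF translation.
    #include <cstdio>
    #ifdef _WIN32
    #include <fcntl.h>
    #include <io.h>
    #endif

    static void set_stdin_binary() {
    #ifdef _WIN32
        _setmode(_fileno(stdin), _O_BINARY);
    #endif
    }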
ulatekh [Tue, 9 Apr 2024 15:31:16 +0000 (08:31 -0700)]
main : allow a response-file as the sole parameter (whisper/2019)
* The "main" example now allows a response-file as the sole parameter.
A response-file is a text file with command-line parameters, one per line.
Prefix the name of the response-file with "@" to identify it as such.
It's used under MS Windows to work around command-line length limits.
It may be useful under other platforms to simplify character-escaping.
* minor : style
---------
Co-authored-by: Georgi Gerganov <redacted>
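A minimal sketch of the expansion step described above (assumed behaviour; the example's actual parsing may differ):

    // Expand a response-file ("@params.txt") into an argument list,
    // one command-line parameter per line.
    #include <fstream>
    #include <string>
    #include <vector>

    static std::vector<std::string> expand_response_file(const char * path) {
        std::vector<std::string> args;
        std::ifstream file(path);
        std::string line;
        while (std::getline(file, line)) {
            if (!line.empty()) {
                args.push_back(line);
            }
        }
        return args;
    }
    // usage sketch: if argc == 2 and argv[1][0] == '@', expand argv[1] + 1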
ulatekh [Tue, 9 Apr 2024 15:27:28 +0000 (08:27 -0700)]
whisper : suppress tokens with a regex (whisper/1997)
* Allow a regular expression to describe tokens to suppress.
Example: --suppress-tokens-re "[,\.]|[ ]?[0-9]+" will suppress commas, periods, and numeric tokens.
Technique inspired by https://github.com/openai/whisper/discussions/1041
Co-authored-by: Georgi Gerganov <redacted>
* Blind change to fix Java test.
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Tue, 9 Apr 2024 17:18:09 +0000 (20:18 +0300)]
sync : llama.cpp
Georgi Gerganov [Tue, 9 Apr 2024 17:17:51 +0000 (20:17 +0300)]
license : update copyright notice + add AUTHORS
Carolinabanana [Tue, 9 Apr 2024 08:16:13 +0000 (09:16 +0100)]
llama : add Command R Plus support (llama/6491)
* Add Command R Plus GGUF
* Add Command R Plus GGUF
* Loading works up to LayerNorm2D
* Export new tensors in 1D so they are not quantized.
* Fix embedding layer based on Noeda's example
* Whitespace
* Add line
* Fix unexpected tokens on MPS. Re-add F16 fix. (Noeda)
* dranger003: Fix block index overflow in CUDA dequantizing.
* Reverted blocked multiplication code as it still has issues and could affect other Llama arches
* export norms as f32
* fix overflow issues during quant and other cleanup
* Type convention
Co-authored-by: Georgi Gerganov <redacted>
* dranger003: Fix more int overflow during quant.
---------
Co-authored-by: S <redacted>
Co-authored-by: S <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Abhilash Majumder [Mon, 8 Apr 2024 08:26:01 +0000 (13:56 +0530)]
remove row=1 cond (llama/6532)
Neo Zhang Jianyu [Sun, 7 Apr 2024 02:55:59 +0000 (10:55 +0800)]
support/fix OPs GGML_TYPE_IQ4_NL, GGML_TYPE_IQ4_XS, GGML_TYPE_IQ3_XXS, GGML_TYPE_IQ3_S, GGML_TYPE_IQ2_XXS, GGML_TYPE_IQ2_XS, GGML_TYPE_IQ2_S, GGML_TYPE_IQ1_S, GGML_TYPE_IQ1_M (llama/6521)
Georgi Gerganov [Tue, 9 Apr 2024 17:15:04 +0000 (20:15 +0300)]
scripts : sync LICENSE and gen-authors.sh
Georgi Gerganov [Sun, 7 Apr 2024 14:02:17 +0000 (17:02 +0300)]
sync : whisper.cpp (grammar-parser, skip)
Georgi Gerganov [Sun, 7 Apr 2024 13:22:06 +0000 (16:22 +0300)]
sync : whisper.cpp
Slava Primenko [Thu, 4 Apr 2024 12:49:24 +0000 (14:49 +0200)]
ggml: bypass code incompatible with CUDA < 11.1 (whisper/2020)
The `cudaHostRegisterReadOnly` parameter was only introduced in CUDA 11.1.
See this issue for more details:
https://github.com/ggerganov/whisper.cpp/issues/2007
ulatekh [Thu, 28 Mar 2024 10:02:10 +0000 (03:02 -0700)]
main : add command-style grammar (whisper/1998)
* Implemented command-style grammar in the main example.
Mostly just copied the relevant parts from the command example.
* main : code style
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Sun, 7 Apr 2024 13:09:49 +0000 (16:09 +0300)]
scripts : sync ggml-cuda folder
Georgi Gerganov [Sat, 6 Apr 2024 15:21:33 +0000 (18:21 +0300)]
ci : temporarily build embedded metal library
ggml-ci
Georgi Gerganov [Sat, 6 Apr 2024 14:50:21 +0000 (17:50 +0300)]
sync : llama.cpp (skip)
ggml-ci
Ouadie EL FAROUKI [Fri, 5 Apr 2024 13:35:06 +0000 (14:35 +0100)]
Fixed minor bug when enabling FP16 for non-Intel targets (llama/6464)
* moved INTEL_MKL guard from gemm_impl to gemm (wrapper)
* Update ggml-sycl.cpp
Co-authored-by: AidanBeltonS <redacted>
---------
Co-authored-by: AidanBeltonS <redacted>
slaren [Wed, 3 Apr 2024 13:07:05 +0000 (15:07 +0200)]
ggml : mul_mat_id use the same tensor for all the experts (llama/6387)
* ggml : update mul_mat_id to use the same tensor for all the experts
* update cuda
* minor
* update metal
* update test-backend-ops
* fix cuda
* Update ggml-metal.m
Co-authored-by: Georgi Gerganov <redacted>
* update convert.py
* update convert-hf-to-gguf.py
* update convert.py for mixtral hf models
* Update convert-hf-to-gguf.py
Co-authored-by: Georgi Gerganov <redacted>
* cuda : support non-pow-2 number of experts
* allow quantize to work for split and merged experts models in the same way
* cleanup + disable mmap automatically with split tensors models
* update imatrix
* test-backend-ops : test qwen argsort
* update grok model loading
* llama : add merged experts tensors to the grok tensor map
* minor
* gguf : bump version
* fix quantizing of merged experts
* convert-hf-to-gguf.py : update grok (untested)
* make linter happy
* cuda/argsort : use shared memory instead of pool memory
* convert : fix grok tensor names
* metal : add support for non-pow-2 argsort
* llama : more loader cleanup, better error checking
* cuda : fix warning
* llama : still use mmap for loading old models, but copy the data to a host buffer
* add review note
* llama : remove ffn tensor counting + add sanity check
ggml-ci
* convert : fix handling of n_experts == None
ggml-ci
* imatrix : fix ncall counters
* llama : produce error if imatrix size does not match
* quantize : terminate on errors + trace logs
ggml-ci
* metal : pad shared memory to 16 bytes
---------
Co-authored-by: Georgi Gerganov <redacted>
Meng, Hengyu [Wed, 3 Apr 2024 02:34:40 +0000 (10:34 +0800)]
Disable iqx on Windows as a workaround (llama/6435)
* disable iqx on Windows as a workaround
* array instead of global_memory
0cc4m [Fri, 29 Mar 2024 16:29:21 +0000 (17:29 +0100)]
Vulkan k-quant mmq and ggml-backend offload functionality (llama/6155)
* Fix Vulkan no kv offload incoherence
* Add k-quant mul mat mat shaders
* Rework working buffer allocation, reduces vram use noticeably
Clean up cpu assist code, replaced with ggml-backend offload function
* Default to all dedicated GPUs
* Add fallback for integrated GPUs if no dedicated GPUs are found
* Add debug info which device is allocating memory
* Fix Intel dequant issue
Fix validation issue
* Fix Vulkan GGML_OP_GET_ROWS implementation
* Clean up merge artifacts
* Remove Vulkan warning
Neo Zhang Jianyu [Thu, 28 Mar 2024 00:55:24 +0000 (08:55 +0800)]
fix set main gpu crash (llama/6339)
slaren [Wed, 27 Mar 2024 14:07:50 +0000 (15:07 +0100)]
ggml : fix bounds checking of zero size views (llama/6347)
Daniel Bevenius [Wed, 3 Apr 2024 20:57:20 +0000 (22:57 +0200)]
backend : fix typo in scheduler documentation (#781)
Signed-off-by: Daniel Bevenius <redacted>
Andrei [Wed, 3 Apr 2024 20:56:58 +0000 (16:56 -0400)]
cmake : add `GGML_METAL_EMBED_LIBRARY` (#780)
Georgi Gerganov [Wed, 3 Apr 2024 19:59:55 +0000 (22:59 +0300)]
logo : add files (#782)
JacobLinCool [Wed, 27 Mar 2024 17:48:56 +0000 (01:48 +0800)]
gguf : update type enum (#775)
* spec: add missing semicolons in GGUF structs
Co-Authored-By: 郝東彥 Arthur Hao <redacted>
* spec: update GGUF tensor types
---------
Co-authored-by: 郝東彥 Arthur Hao <redacted>
Georgi Gerganov [Wed, 27 Mar 2024 17:01:42 +0000 (19:01 +0200)]
sync : whisper.cpp
Georgi Gerganov [Wed, 27 Mar 2024 11:37:14 +0000 (13:37 +0200)]
sync : whisper.cpp
Georgi Gerganov [Mon, 25 Mar 2024 12:48:19 +0000 (14:48 +0200)]
whisper : improve handling of prompts (whisper/1981)
* whisper : improve handling of prompts
* whisper : add whisper_token_count helper
Sanchit Gandhi [Thu, 21 Mar 2024 16:53:30 +0000 (22:23 +0530)]
whisper : improve support for distil-large-v3 (whisper/1982)
denersc [Wed, 20 Mar 2024 16:25:26 +0000 (13:25 -0300)]
whisper : token-level timestamps with DTW (whisper/1485)
* whisper.cpp: impl dtw algo
* WIP: producing and placing DTW timestamps on tokens
* Fix compile and assertion errors. Attempt to DTW timestamp with single_segment=false.
* Fix mistake causing incorrect alignment of dtw timestamps
* implement N_TOP_MOST and CUSTOM alignment heads setting
* whisper: fix typo on alignment heads enum
* Fix issues related to changes in whisper.cpp
* Fixed excessive memory use when using DTW timestamps. Other minor fixes to DTW timestamping function
* decoder: save cross QKs only if requested
* Calling median filter with ggml_map_custom1
* Reimpl aheads n_top_most and custom. Sanity checks on chosen aheads
* Copying cross QKs from decoder backend correctly
* dtw: cleanup
* Fix incorrect n_frames passed to dtw when near end of audio
* Fix aheads_masks_init for backend != CPU
* whisper : minor style
* main : add dtw (wip)
* whisper: fix invalid memory access in aheads_masks_init
* main : add dtw (cont)
* whisper : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
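For background, the core dynamic-time-warping recurrence this feature is built on, as a generic sketch; whisper.cpp operates on cross-attention QK weights and differs in detail.

    // Classic DTW accumulated-cost matrix:
    //   D[i][j] = cost[i][j] + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    // Backtracking from D[n][m] gives the alignment path, here between text
    // tokens and audio frames.
    #include <algorithm>
    #include <limits>
    #include <vector>

    static std::vector<std::vector<double>> dtw_accumulate(const std::vector<std::vector<double>> & cost) {
        const size_t n = cost.size();
        const size_t m = n ? cost[0].size() : 0;
        const double inf = std::numeric_limits<double>::infinity();

        std::vector<std::vector<double>> D(n + 1, std::vector<double>(m + 1, inf));
        D[0][0] = 0.0;

        for (size_t i = 1; i <= n; ++i) {
            for (size_t j = 1; j <= m; ++j) {
                D[i][j] = cost[i - 1][j - 1] + std::min({ D[i - 1][j], D[i][j - 1], D[i - 1][j - 1] });
            }
        }
        return D;
    }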
Jo Liss [Mon, 18 Mar 2024 15:53:33 +0000 (15:53 +0000)]
examples : rename --audio-context to --audio-ctx per help text (whisper/1953)
Georgi Gerganov [Wed, 27 Mar 2024 11:10:13 +0000 (13:10 +0200)]
examples : more CUDA leftovers (#0)
Georgi Gerganov [Wed, 27 Mar 2024 11:09:12 +0000 (13:09 +0200)]
examples : fix CUBLAS leftovers (#0)
ggml-ci
Georgi Gerganov [Wed, 27 Mar 2024 09:26:04 +0000 (11:26 +0200)]
sync : adapt to CUDA changes (#0)
ggml-ci
Georgi Gerganov [Wed, 27 Mar 2024 09:15:00 +0000 (11:15 +0200)]
sync : llama.cpp
ggml-ci
AidanBeltonS [Wed, 27 Mar 2024 08:16:40 +0000 (08:16 +0000)]
Fix batched impl for NVidia GPU (llama/6164)
* Fix batched impl
* Maintain previous behaviour for igpu
* retrigger CI
---------
Co-authored-by: Abhilash Majumder <redacted>
Kawrakow [Wed, 27 Mar 2024 07:44:27 +0000 (08:44 +0100)]
Make IQ1_M work for QK_K = 64 (llama/6327)
* iq1_m: make it work for QK_K = 64 (WIP)
* iq1_m: make it work for QK_K = 64 (scalar and AVX2)
* iq1_m: QK_K = 64 seems to work on Metal and ARM_NEON
---------
Co-authored-by: Iwan Kawrakow <redacted>
compilade [Tue, 26 Mar 2024 14:46:41 +0000 (10:46 -0400)]
llama : greatly reduce output buffer memory usage (llama/6122)
* llama : greatly reduce logits memory usage
* llama : more compact state saving and reloading
* llama : fix lctx.n_outputs not being set before building graph
* perplexity : adapt to the logits API changes
* perplexity : fix Winogrande, use correct logits for second choice start
The first logits used to evaluate the second choice were not from
the end of the common prefix; instead, they were the logits from the end
of the first choice. This has been corrected.
The previous implementation sometimes had outliers in the scores of
choices for some tasks, and the logic to skip choice words
in the log-likelihood evaluation was probably an attempt to reduce those,
but it was complex and didn't quite seem to be the right approach.
This is simpler now, and the outlier scores aren't there anymore.
* perplexity : normalize spaces and punctuation in Winogrande sentences
* llama : fix embedding conditions
* llama : fix llama_get_embeddings_ith when the resulting id is 0
* llama : fix wrong n_outputs in llama_set_inputs
A mismatch happened when using a smaller n_ubatch than n_batch and then using
llama_batch_get_one(). The decision of what n_outputs should be now almost
fully depends on how lctx.n_outputs is set in llama_decode_internal.
The conditions are simpler this way.
* llama : when saving the state, recalculate n_outputs
This ensures the correct number of outputs for the entire previous batch
is stored in the session file, even when n_ubatch is smaller than n_batch.
* llama : fix not-skipping outputs of non-causal models
* llama : fix running a batch with n_outputs == 0
It previously worked because lctx.inp_out_ids was not initialized,
so it pointed to some garbage address which was somehow still valid when I
ran my tests.
* llama : keep same graph topology even when n_outputs == 0
* ggml : saner ggml_can_repeat with empty tensors
* ggml : future-proof ggml_is_empty by using GGML_MAX_DIMS - 1
* ggml : do not multi-thread ops returning empty tensors
* ggml : make ggml_is_empty public and work with views
* llama : use a vector for ctx->output_ids
* llama : rework reallocation logic for llama_output_reserve
Now comparing the actual size with the new total size of the output buffer
to allow more efficient enabling and disabling of the embeddings
and/or logits output in the future.
* ggml : skip empty tensors in all backends
* llama : fix llama_output_reserve nullptr deref when new_size is 0
* perplexity : make Winogrande work as it does on master
The problems with the Winogrande implementation will
need to be fixed in a separate PR to ease review.
* llama : clearer error messages for invalid logits or embeddings ids
* llama : assert all models that can have inp_out_ids
Since the graph topology is now constant, this presence check
can be done even when there are no outputs.
* llama : assert logits and embd buffers exist before writing to them
* llama : handle errors from llama_output_reserve at call sites
* perplexity : make hellaswag and multiple-choice outputs identical to master
Due to how the KV cache is updated, the logprobs for tokens in a batch
are very slightly affected by the other tokens present in the batch,
so to make hellaswag and multiple-choice return exactly the same results
as on master, the last token of each sequence needs to be evaluated
even though its output is not used at all.
This will probably be changed back in the future to make these benchmarks
a tiny bit faster.
* perplexity : fix division by zero when using less than 100 multiple-choice tasks
* llama : allow loading state saved with a different ctx size
When loading a session file, the context size is now only required to be
at least enough to load the KV cells contained in that session file,
instead of requiring to use exactly the same context size as when saving.
Doing this enables the use-case of extending or shrinking the context size
of a saved session.
This breaks existing session files because the meaning of kv_buf_size
is slightly changed (previously it was the size of the whole KV cache,
now it's only the size of the saved part of it). This allows for
finer-grained sanity checks when loading in an effort to keep kv_buf_size
useful even when the kv_size is changed.
* llama : minor
ggml-ci
* readme : update recent API changes, and warn about Vulkan
---------
Co-authored-by: Georgi Gerganov <redacted>
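The user-visible side of this change is that only tokens whose logits flag is set in the batch produce an output row; a hedged sketch of building such a batch with the public llama.h API of that time (error handling omitted):

    // Sketch: decode a prompt but request logits only for the last token, so
    // the output buffer needs one row instead of one per batch position.
    #include "llama.h"
    #include <vector>

    static void decode_prompt(llama_context * ctx, const std::vector<llama_token> & prompt) {
        llama_batch batch = llama_batch_init((int32_t) prompt.size(), /*embd =*/ 0, /*n_seq_max =*/ 1);
        for (size_t i = 0; i < prompt.size(); ++i) {
            batch.token[i]     = prompt[i];
            batch.pos[i]       = (llama_pos) i;
            batch.n_seq_id[i]  = 1;
            batch.seq_id[i][0] = 0;
            batch.logits[i]    = (i == prompt.size() - 1); // only the last token
        }
        batch.n_tokens = (int32_t) prompt.size();

        llama_decode(ctx, batch);
        const float * logits = llama_get_logits_ith(ctx, (int32_t) prompt.size() - 1);
        (void) logits;

        llama_batch_free(batch);
    }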
Kawrakow [Tue, 26 Mar 2024 14:21:27 +0000 (15:21 +0100)]
IQ1_M: 1.75 bpw quantization (llama/6302)
* iq1_m: basics
* iq1_m: basics-2
* iq1_m: CUDA dequantize works
On the very first attempt I get PPL = 9.76 for LLaMA-v2-7B.
* iq1_m: separate shifts for each group of 8 in a block
We get
PPL(LLaMA-v2-7B ) = 9.2810
PPL(LLaMA-v2-13B) = 6.8105
Not bad, but slightly higher than
sqrt(PPL(IQ1_S) * PPL(IQ2_XXS))
which is the expected outcome given that IQ1_M is
halfway between IQ1_S and IQ2_XXS in terms of bpw.
From this, we would expect
PPL = 9.14 for LLaMA-v2-7B
PPL = 6.63 for LLaMA-v2-13B
* iq1_m: go to 3-bit scales
There is slight increase in PPL, but the 0.0625 bpw reduction
in size is totally worth it.
We now have
PPL(LLaMA-v2-7B ) = 9.4469 at 1.96 bpw
PPL(LLaMA-v2-13B) = 6.8717 at 1.93 bpw
PPL(LLaMA-v2-70B) = 4.8568 at 1.85 bpw
* iq1_m: scalar dot product
* iq1_m: AVX2 dot product
* iq1_m: very slightly faster AVX2 dot product
* iq1_m: ARM_NEON dot product
Works, but very slow (10.5 t/s)
* iq1_m: Metal - dequantize works, dot product does not
* iq1_m: Metal now works
About the same performance as iq1_s.
* iq1_m: minor
* iq1_m: checking pure iq1_m quantization
It is pretty bad: PPL(LLaMA-v2-7B) = 34 if we quantize output.weight
with Q4_K.
* iq1_m: slightly faster ARM_NEON dot product
10.5 t/s -> 11.65 t/s
* iq1_m: faster ARM_NEON dot product
11.65 t/s -> 14.9 t/s
* iq1_m: another minor ARM_NEON dot product improvement
14.9 -> 15.0 t/s
* iq1_m: small PPL improvement via super-block scale adjustment
After quantizing block scales redo the super-block scale fit.
PPL(LLaMA-v2-7B ) = 9.3346
PPL(LLaMA-v2-13B) = 6.8419
PPL(LLaMA-v2-70B) = 4.8294
PPL(Mistral-7B ) = 8.1624
* iq1_m: adapt to CUDA refactoring
* iq1_m: remove unused variable
We have progressed to warnings being errors.
* iq1_m: add to backend-ops tests
* iq1_m: fix Windows ARM
* iq1_m: use common definition of iq1m_scale_t
* cuda: assert -> NO_DEVICE_CODE
* iq1_M: PR comments
---------
Co-authored-by: Iwan Kawrakow <redacted>
slaren [Tue, 26 Mar 2024 00:16:01 +0000 (01:16 +0100)]
cuda : rename build flag to LLAMA_CUDA (llama/6299)
Kawrakow [Mon, 25 Mar 2024 17:33:15 +0000 (18:33 +0100)]
tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (llama/6303)
Co-authored-by: Iwan Kawrakow <redacted>
slaren [Mon, 25 Mar 2024 12:50:23 +0000 (13:50 +0100)]
cuda : refactor into multiple files (llama/6269)
Justine Tunney [Mon, 25 Mar 2024 05:39:56 +0000 (01:39 -0400)]
ggml : support AVX512VNNI (llama/6280)
This change makes some quants (e.g. Q4_0, Q8_0) faster on some
architectures (e.g. AMD Zen 4).
Rick G [Sun, 24 Mar 2024 21:45:56 +0000 (14:45 -0700)]
Fix heap corruption from wmode out-of-bound writes on windows (llama/6272)
* VS2022 would throw an error on GGML_FREE(wmode)
* wchar_t is usually 2 bytes, but malloc wants bytes
* therefore `*wmode_p++ = (wchar_t)*mode;` could write off the end of the allocation
* Fixes error possibly introduced by https://github.com/ggerganov/llama.cpp/pull/6248
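Sketched for illustration (the real change is in ggml's Windows fopen wrapper), the size bug and its fix look roughly like this:

    // The buffer was sized in characters, but malloc takes bytes and wchar_t is
    // wider than one byte on Windows, so the copy loop ran off the end.
    #include <cstdlib>
    #include <cstring>

    static wchar_t * mode_to_wide(const char * mode) {
        const size_t len = std::strlen(mode);
        // buggy : malloc(len + 1)                      -> too small, heap corruption
        // fixed : malloc((len + 1) * sizeof(wchar_t))  -> one wchar_t per char + NUL
        wchar_t * wmode = (wchar_t *) std::malloc((len + 1) * sizeof(wchar_t));
        if (!wmode) {
            return nullptr;
        }
        wchar_t * wmode_p = wmode;
        for (const char * p = mode; *p; ++p) {
            *wmode_p++ = (wchar_t) *p;
        }
        *wmode_p = L'\0';
        return wmode; // caller frees with free()
    }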
Meng, Hengyu [Sun, 24 Mar 2024 04:04:25 +0000 (12:04 +0800)]
offload op (llama/6217)
* remove no USM methods
* leave the schedule to ggml_backend_sched entirely
Jared Van Bortel [Sat, 23 Mar 2024 22:48:02 +0000 (18:48 -0400)]
use _wfopen instead of fopen on Windows (llama/6248)
also fix missing #defines before windows.h, and BPE LF token on MSVC
slaren [Fri, 22 Mar 2024 13:05:31 +0000 (14:05 +0100)]
cuda : add LLAMA_CUDA_NO_PEER_COPY to workaround broken ROCm p2p copy (llama/6208)
* cuda : add LLAMA_CUDA_NO_PEER_COPY to workaround broken ROCm p2p copy
* add LLAMA_CUDA_NO_PEER_COPY to HIP build
Georgi Gerganov [Fri, 22 Mar 2024 09:35:53 +0000 (11:35 +0200)]
metal : proper assert for mat-mat memory alignment (llama/6225)
* metal : proper assert for mat-mat memory alignment
ggml-ci
* readme : add notice about the bug fix
* metal : fix the fix
ggml-ci
Georgi Gerganov [Fri, 22 Mar 2024 07:36:03 +0000 (09:36 +0200)]
metal : pad n_ctx by 32 (llama/6177)
* metal : require ne00 >= 128 for mat-mat kernels
ggml-ci
* llama : pad n_ctx by 32
ggml-ci
slaren [Thu, 21 Mar 2024 18:54:28 +0000 (19:54 +0100)]
cuda : disable host register by default (llama/6206)
slaren [Thu, 21 Mar 2024 12:59:53 +0000 (13:59 +0100)]
cuda : fix LLAMA_CUDA_F16 build (llama/6197)
Kawrakow [Thu, 21 Mar 2024 12:59:38 +0000 (13:59 +0100)]
ggml : same IQ4_NL quantization for CPU/CUDA/Metal (llama/6196)
* Make quantize_row_iq4_nl do the same thing as quantization on CUDA
* Make quantize_row_iq4_nl do the same thing as quantization on CUDA
This time for real. backend-ops tests pass.
* Now fix test-quantize-fns
---------
Co-authored-by: Iwan Kawrakow <redacted>
Kawrakow [Thu, 21 Mar 2024 07:27:57 +0000 (08:27 +0100)]
Add ability to use Q5_0, Q5_1, and IQ4_NL for quantized K cache (llama/6183)
* k_cache: be able to use Q5_0
* k_cache: be able to use Q5_1 on CUDA
* k_cache: be able to use Q5_0 on Metal
* k_cache: be able to use Q5_1 on Metal
* k_cache: be able to use IQ4_NL - just CUDA for now
* k_cache: be able to use IQ4_NL on Metal
* k_cache: add newly added supported types to llama-bench and CUDA supports_op
---------
Co-authored-by: Iwan Kawrakow <redacted>
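From the API side, picking one of these K-cache types is a matter of setting a context parameter; a hedged sketch using the llama.h API of that time (the llama-bench/main flag names may differ):

    // Sketch: create a context whose K cache is stored as Q5_0 instead of F16.
    // Check the backend's supports_op before relying on a given type.
    #include "llama.h"

    static llama_context * make_ctx_q5_0_kcache(llama_model * model) {
        llama_context_params cparams = llama_context_default_params();
        cparams.type_k = GGML_TYPE_Q5_0;   // newly supported K-cache type
        // cparams.type_v stays at its default
        return llama_new_context_with_model(model, cparams);
    }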
AidanBeltonS [Thu, 21 Mar 2024 06:10:52 +0000 (06:10 +0000)]
Add nvidia and amd backends (llama/6157)
slaren [Thu, 21 Mar 2024 00:47:46 +0000 (01:47 +0100)]
cuda : fix conflict with std::swap (llama/6186)
slaren [Wed, 20 Mar 2024 20:03:26 +0000 (21:03 +0100)]
cuda : print the returned error when CUDA initialization fails (llama/6185)
slaren [Wed, 20 Mar 2024 13:42:59 +0000 (14:42 +0100)]
cuda : refactor to remove global resources (llama/6170)
* cuda : refactor to remove global resources
Abhilash Majumder [Wed, 20 Mar 2024 02:58:49 +0000 (08:28 +0530)]
increase igpu cluster limit (llama/6159)
slaren [Mon, 18 Mar 2024 15:33:44 +0000 (16:33 +0100)]
backend : set max split inputs to GGML_MAX_SRC (llama/6137)
slaren [Mon, 18 Mar 2024 10:03:04 +0000 (11:03 +0100)]
backend : offload large batches to GPU (llama/6083)
* backend : offload large batches to GPU
* fix hip
* code cleanup
* fix CUDA split buffers
* Update ggml-backend-impl.h
Co-authored-by: Johannes Gäßler <redacted>
* cuda : fix memset without set_device
* imatrix : remove sched affix from weight names
* sched : add a new split if the current one has too many inputs
reduce max inputs per split
more cleanup
* update backends
ggml-ci
---------
Co-authored-by: Johannes Gäßler <redacted>
GainLee [Sun, 17 Mar 2024 17:12:22 +0000 (01:12 +0800)]
ggml : fix finding transfer queue family index error (llama/6094)
Co-authored-by: GainLee <redacted>
AmirAli Mirian [Sat, 16 Mar 2024 15:52:02 +0000 (11:52 -0400)]
ggml : add AVX512F SIMD (llama/6088)
slaren [Fri, 15 Mar 2024 12:24:03 +0000 (13:24 +0100)]
cuda : disable unused cudaLaunchHostFunc code (llama/6078)
Neo Zhang Jianyu [Fri, 15 Mar 2024 10:53:53 +0000 (18:53 +0800)]
fix set main gpu error (llama/6073)
AidanBeltonS [Fri, 15 Mar 2024 09:26:20 +0000 (09:26 +0000)]
Fix non-intel device selection (llama/6042)
* Fix non-intel device selection
* Update ggml-sycl.cpp
Co-authored-by: Neo Zhang Jianyu <redacted>
* Update ggml-sycl.cpp
Co-authored-by: Neo Zhang Jianyu <redacted>
---------
Co-authored-by: Abhilash Majumder <redacted>
Co-authored-by: Neo Zhang Jianyu <redacted>
Ondřej Čertík [Fri, 15 Mar 2024 08:46:51 +0000 (02:46 -0600)]
gguf : add support for I64 and F64 arrays (llama/6062)
* gguf : add support for I64 and F64 arrays
GGML currently does not support I64 or F64 arrays, and they are not often
used in machine learning; however, in case the need arises in the future, it
is nice to add them now, so that the types sit next to the other
types I8, I16, I32 in the enums, and it also reserves their type numbers.
Furthermore, with this addition the GGUF format becomes very usable for
most computational applications of NumPy (being compatible with the most
common NumPy dtypes: i8, i16, i32, i64, f32, f64), providing a faster,
and more versatile alternative to the `npz` format, and a simpler
alternative to the `hdf5` format.
The change in this PR seems small and does not significantly increase the
maintenance burden. I tested this from Python using GGUFWriter/Reader
and `gguf-dump`, as well as from C; everything seems to work.
* Fix compiler warnings
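A hedged sketch of what the new types allow on the C side: creating an I64 tensor and storing it in a GGUF file. The API names are as they were in ggml at the time; treat the details as assumptions.

    // Store a raw int64 array as a GGUF tensor using the new GGML_TYPE_I64.
    #include "ggml.h"
    #include <cstdint>
    #include <cstring>

    static void write_i64_array(const char * fname, const int64_t * data, int64_t n) {
        struct ggml_init_params ip = {
            /*.mem_size   =*/ (size_t) (n * sizeof(int64_t)) + ggml_tensor_overhead(),
            /*.mem_buffer =*/ nullptr,
            /*.no_alloc   =*/ false,
        };
        struct ggml_context * ctx  = ggml_init(ip);
        struct gguf_context * gctx = gguf_init_empty();

        struct ggml_tensor * t = ggml_new_tensor_1d(ctx, GGML_TYPE_I64, n);
        std::memcpy(t->data, data, (size_t) n * sizeof(int64_t));
        ggml_set_name(t, "my_i64_array");

        gguf_add_tensor(gctx, t);
        gguf_write_to_file(gctx, fname, /*only_meta =*/ false);

        gguf_free(gctx);
        ggml_free(ctx);
    }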
Bryan Lozano [Fri, 22 Mar 2024 07:18:20 +0000 (00:18 -0700)]
cmake : bump to 3.12 (#768)
This CMakeLists.txt file uses the CMake command `add_compile_definitions()`,
which did not exist until CMake 3.12.
Reference documentation:
https://cmake.org/cmake/help/latest/command/add_compile_definitions.html
The existing CMakeLists.txt fails with CMake 3.10.
Bryan Lozano [Fri, 22 Mar 2024 07:17:34 +0000 (00:17 -0700)]
examples : fix simple (#770)
* Update README.md
Correct the expected matrix multiplication result.
* Update simple-ctx.cpp
Fix incorrect striding through output.
* simple : update readme
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Mon, 18 Mar 2024 09:03:55 +0000 (11:03 +0200)]
sync : whisper.cpp
Georgi Gerganov [Sat, 16 Mar 2024 15:30:55 +0000 (17:30 +0200)]
whisper : set outputs from conv graph (whisper/1959)
slaren [Sat, 16 Mar 2024 14:47:14 +0000 (15:47 +0100)]
alloc : fix allocation data of pre-allocated leafs
Georgi Gerganov [Fri, 15 Mar 2024 12:22:55 +0000 (14:22 +0200)]
sync : whisper.cpp
Georgi Gerganov [Fri, 15 Mar 2024 12:10:35 +0000 (14:10 +0200)]
spec : add GGUF diagram (#765)
slaren [Thu, 14 Mar 2024 15:45:27 +0000 (16:45 +0100)]
update examples and tests
Georgi Gerganov [Thu, 14 Mar 2024 15:16:45 +0000 (17:16 +0200)]
ggml : add ggml-common.h
Georgi Gerganov [Thu, 14 Mar 2024 15:10:55 +0000 (17:10 +0200)]
sync : llama.cpp
Georgi Gerganov [Thu, 14 Mar 2024 10:38:37 +0000 (12:38 +0200)]
ggml : designate enum vals for integer types (llama/6050)
Georgi Gerganov [Thu, 14 Mar 2024 09:55:23 +0000 (11:55 +0200)]
metal : build metallib + fix embed path (llama/6015)
* metal : build metallib + fix embed path
ggml-ci
* metal : fix embed build + update library load logic
ggml-ci
* metal : fix embedded library build
ggml-ci
* ci : fix iOS builds to use embedded library
slaren [Wed, 13 Mar 2024 17:54:21 +0000 (18:54 +0100)]
llama : add pipeline parallelism support (llama/6017)
* llama : add pipeline parallelism support for batch processing with multiple CUDA GPUs
ggml-ci
* server : add -ub, --ubatch-size parameter
* fix server embedding test
* llama : fix Mamba inference for pipeline parallelism
Tested to work correctly with both `main` and `parallel` examples.
* llama : limit max batch size to n_batch
* add LLAMA_SCHED_MAX_COPIES to configure the number of input copies for pipeline parallelism
default increase to 4 (from 2)
changing this value may improve performance for some systems, but increases memory usage
* fix hip build
* fix sycl build (disable cpy_tensor_async)
* fix hip build
* llama : limit n_batch and n_ubatch to n_ctx during context creation
* llama : fix norm backend
* batched-bench : sync after decode
* swiftui : sync after decode
* ggml : allow ggml_get_rows to use multiple threads if they are available
* check n_ubatch >= n_tokens with non-causal attention
* llama : do not limit n_batch to n_ctx with non-causal attn
* server : construct batch with size of llama_n_batch
* ggml_backend_cpu_graph_compute : fix return value when alloc fails
* llama : better n_batch and n_ubatch comment
* fix merge
* small fix
* reduce default n_batch to 2048
---------
Co-authored-by: Francis Couture-Harpin <redacted>
Co-authored-by: Georgi Gerganov <redacted>
slaren [Wed, 13 Mar 2024 13:58:30 +0000 (14:58 +0100)]
test-backend-ops : skip CPU backend by default (llama/6028)
AidanBeltonS [Wed, 13 Mar 2024 13:17:54 +0000 (13:17 +0000)]
Update get version (llama/6025)
Georgi Gerganov [Tue, 12 Mar 2024 12:27:20 +0000 (14:27 +0200)]
ggml : reuse quantum structs across backends (llama/5943)
* ggml : reuse quant blocks across backends
ggml-ci
* ggml : define helper constants only for CUDA and SYCL
ggml-ci
* ggml : define helper quantum constants for SYCL
ggml-ci
Georgi Gerganov [Tue, 12 Mar 2024 11:49:55 +0000 (13:49 +0200)]
ggml : fix UB in IQ2_S and IQ3_S (llama/6012)
Georgi Gerganov [Tue, 12 Mar 2024 09:15:05 +0000 (11:15 +0200)]
sycl : update IQ1_S kernels (WIP - not working!) (llama/5995)
* sycl : try to fix after IQ1_S changes
* sycl : iq1s_grid -> iq1s_grid_gpu
* sycl : fix grid type
Kawrakow [Mon, 11 Mar 2024 15:53:15 +0000 (16:53 +0100)]
1.5 bit: we can do even better (llama/5999)
* iq1_s: we can do even better
Spent one of the 4 scale bits on the sign of a 0.125 shift.
I.e., quants are now -1 + delta, delta, 1 + delta, where delta
is +/- 0.125.
CUDA works, same performance as before.
PPL(LLaMA-v2-7B) is now 11.85!
* iq1_s: make scalar and AVX2 work with the new version
* iq1_s: make Neon work with new version.
~10% drop in performance, so will need some more work.
* iq1_s: make Metal work with new version
* iq1_s: very slightly faster dequantize on Metal
* iq1_s: fix dequantize on the CPU
---------
Co-authored-by: Iwan Kawrakow <redacted>
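The value grid described above, written out as a reference (sketch only; the real kernels pack these values into bit fields and vectorize the math):

    // Each iq1_s weight decodes to one of {-1, 0, +1} plus a group-wide shift
    // of +/- 0.125, scaled by the block scale.
    static float iq1s_value(int q /* 0, 1 or 2 */, bool shift_up, float block_scale) {
        const float delta = shift_up ? 0.125f : -0.125f;
        return block_scale * ((float)(q - 1) + delta);
    }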
Michael Podvitskiy [Mon, 11 Mar 2024 09:28:51 +0000 (10:28 +0100)]
ggml, ci : Windows ARM runner and build fixes (llama/5979)
* windows arm ci
* fix `error C2078: too many initializers` with ggml_vld1q_u32 macro for MSVC ARM64
* fix `warning C4146: unary minus operator applied to unsigned type, result still unsigned`
* fix `error C2065: '__fp16': undeclared identifier`