git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
k.h.lai [Wed, 22 May 2024 12:53:21 +0000 (20:53 +0800)]
vulkan: add workaround for iterator boundary check to fix clang-cl debug build (#7426)
Justine Tunney [Wed, 22 May 2024 11:08:18 +0000 (07:08 -0400)]
llama : add missing model type names (#7445)
Georgi Gerganov [Wed, 22 May 2024 09:36:37 +0000 (12:36 +0300)]
cuda : fix compile warning (#7454)
Johannes Gäßler [Wed, 22 May 2024 08:24:29 +0000 (10:24 +0200)]
CUDA: remove incorrect precision check (#7454)
Georgi Gerganov [Wed, 22 May 2024 08:01:35 +0000 (11:01 +0300)]
cuda : fix rope + add tests (#7452)
* cuda : fix rope pos data
ggml-ci
* ggml : drop mode & 1 == 1 support for ggml_rope
ggml-ci
* ggml : support freq_factors for f16 rope (CPU)
ggml-ci
* tests : add rope tests using frequency factors
ggml-ci
liuwei-git [Tue, 21 May 2024 20:28:32 +0000 (04:28 +0800)]
llama : add phi3 128K model support (#7225)
* add phi3 128k support in convert-hf-to-gguf
* add phi3 128k support in cuda
* address build warnings on llama.cpp
* adjust index value in cuda long rope freq factors
* add long rope support in ggml cpu backend
* make freq factors only depend on ctx size
* remove unused rope scaling type 'su' from gguf converter
* fix lint warnings on convert-hf-to-gguf.py
* use the short freq factor when the context size is smaller than the trained context size
* add a one-line comment
* metal : support rope freq_factors
* ggml : update ggml_rope_ext API to support freq. factors
* backends : add dev messages to support rope freq. factors
* minor : style
* tests : update to use new rope API
* backends : fix pragma semicolons
* minor : cleanup
* llama : move rope factors from KV header to tensors
* llama : remove tmp assert
* cuda : fix compile warning
* convert : read/write n_head_kv
* llama : fix uninitialized tensors
---------
Co-authored-by: Georgi Gerganov <redacted>
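As a rough illustration of the freq_factors idea introduced here, a minimal
scalar sketch (names and the selection rule are paraphrased, not the actual
ggml code):

    #include <cmath>

    // RoPE angle for rotary pair i of n_dims, at position pos; the per-pair
    // frequency factor (long or short, chosen from the context size) divides
    // the base frequency
    static float rope_theta(int pos, int i, int n_dims, float freq_base,
                            const float * freq_factors /* may be null */) {
        float theta = pos * std::pow(freq_base, -2.0f * i / n_dims);
        if (freq_factors) {
            theta /= freq_factors[i];
        }
        return theta;
    }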
Georgi Gerganov [Tue, 21 May 2024 20:03:42 +0000 (23:03 +0300)]
metal : handle F16 inf values, fix FA partial offload (#7434)
ggml-ci
Olivier Chafik [Tue, 21 May 2024 19:40:00 +0000 (20:40 +0100)]
`grammars`: fix resampling logic regression (#7424)
Johannes Gäßler [Tue, 21 May 2024 17:27:12 +0000 (19:27 +0200)]
CUDA: fix unused warning in mmq.cu (#7442)
Georgi Gerganov [Tue, 21 May 2024 16:53:48 +0000 (19:53 +0300)]
tests : test-tokenizer-0.sh print more info (#7402)
Amir [Tue, 21 May 2024 14:13:12 +0000 (17:13 +0300)]
examples: cache hf model when --model not provided (#7353)
* examples: cache hf model when --model not provided
Johannes Gäßler [Tue, 21 May 2024 14:02:12 +0000 (16:02 +0200)]
CUDA: deduplicate mmq code (#7397)
jaime-m-p [Tue, 21 May 2024 12:39:48 +0000 (14:39 +0200)]
Tokenizer SPM fixes for phi-3 and llama-spm (bugfix) (#7425)
* Update brute force test: add_special
* Update brute force test: default values for add_bos_token and add_eos_token
* Enable rtrim when pre-inserting BOS
Co-authored-by: Georgi Gerganov <redacted>
* Revert "server : fix test regexes"
jaime-m-p [Mon, 20 May 2024 18:15:57 +0000 (20:15 +0200)]
Tokenizer SPM fixes for phi-3 and llama-spm (#7375)
* Update brute force test: special tokens
* Fix added tokens
- Try to read 'added_tokens.json'.
- Try to read 'tokenizer_config.json'.
- Try to read 'tokenizer.json'.
* Fix special tokens rtrim
Co-authored-by: Georgi Gerganov <redacted>
* server : fix test regexes
Georgi Gerganov [Mon, 20 May 2024 16:35:28 +0000 (19:35 +0300)]
llama : remove Persimmon (#7408)
* llama : remove Persimmon
* requirements : remove
Johannes Gäßler [Mon, 20 May 2024 16:15:38 +0000 (18:15 +0200)]
perplexity: update README FP16 results [no ci] (#7413)
Radoslav Gerganov [Mon, 20 May 2024 13:36:55 +0000 (16:36 +0300)]
rpc : track allocated buffers (#7411)
* rpc : track allocated buffers
ref: #7407
* rpc : pack rpc_tensor tightly
Georgi Gerganov [Mon, 20 May 2024 12:10:03 +0000 (15:10 +0300)]
server : fix temperature + disable some tests (#7409)
* server : fix temperature
* server : disable tests relying on parallel determinism
* ci : change server Debug -> RelWithDebInfo
AidanBeltonS [Mon, 20 May 2024 11:08:23 +0000 (12:08 +0100)]
[SYCL] Update SYCL upscale operation (#7321)
* Update SYCL upscale operation
* Formatting
* Remove messages
Bingan [Mon, 20 May 2024 09:55:34 +0000 (17:55 +0800)]
Update README.md (#7410)
Herman Semenov [Mon, 20 May 2024 07:33:21 +0000 (07:33 +0000)]
ggml-opencl, llama: use reserve() when the count is already known (#7272)
junchao-loongson [Mon, 20 May 2024 07:19:21 +0000 (15:19 +0800)]
ggml : add loongarch lsx and lasx support (#6454)
* add loongarch lsx and lasx optimize code
* Add loongarch compilation support to makefile
* revert stb_image.h
* opt bytes_from_nibbles_32 and sum_i16_pairs_float
* fix undeclared
* format code
* update
* update 2
---------
Co-authored-by: Jinyang He <redacted>
Georgi Gerganov [Mon, 20 May 2024 07:16:41 +0000 (10:16 +0300)]
server : tuning tests (#7388)
* server : don't pass temperature as string
* server : increase timeout
* tests : fix the fix 0.8f -> 0.8
ggml-ci
* tests : set explicit temperature
Georgi Gerganov [Mon, 20 May 2024 05:56:05 +0000 (08:56 +0300)]
server : return error on too large embedding input (#7389)
Georgi Gerganov [Mon, 20 May 2024 05:55:09 +0000 (08:55 +0300)]
tests : fix --keep_split -> --keep-split (#7374)
Srihari-mcw [Mon, 20 May 2024 02:18:39 +0000 (19:18 -0700)]
Add provisions for Windows support of BF16 code, including a CMake option for enabling AVX512_BF16 (#7258)
slaren [Sun, 19 May 2024 23:17:03 +0000 (01:17 +0200)]
llama : remove MPI backend (#7395)
Fred Douglas [Sun, 19 May 2024 16:37:04 +0000 (11:37 -0500)]
quantize : fix --keep-split check (#7374)
0cc4m [Sun, 19 May 2024 15:19:53 +0000 (17:19 +0200)]
Vulkan Embedding Fix (#7360)
* Fix empty Vulkan host buffers
Add fp32 fp16 matmul shader
Fix matmul shader alignment
* Remove deprecated tensor->backend uses
* Fix Vulkan validation errors on embedding models with no offloaded layers
* Fix Vulkan llava segfault when not offloading layers
slaren [Sun, 19 May 2024 15:08:46 +0000 (17:08 +0200)]
ggml : fix another case of quants nans (#7387)
Johannes Gäßler [Sun, 19 May 2024 14:46:13 +0000 (16:46 +0200)]
ggml: implement quantized KV cache for FA (#7372)
Johannes Gäßler [Sun, 19 May 2024 14:26:02 +0000 (16:26 +0200)]
server: add test for token probs (#7347)
Johannes Gäßler [Sun, 19 May 2024 14:06:33 +0000 (16:06 +0200)]
server: fix seed being reported back (#7382)
Anas Ahouzi [Sun, 19 May 2024 12:46:46 +0000 (14:46 +0200)]
Add StableLM2 pre-tokenizer (#7349)
* Add StableLM pre-tokenizer
* Fix space
* Fix trailing whitespace
slaren [Sun, 19 May 2024 12:19:37 +0000 (14:19 +0200)]
cuda : clear error after buffer allocation failure (#7376)
Brian [Sun, 19 May 2024 10:51:03 +0000 (20:51 +1000)]
labeler.yml: Use settings from ggerganov/llama.cpp [no ci] (#7363)
https://github.com/actions/labeler#using-configuration-path-input-together-with-the-actionscheckout-action
It recommends using the checkout action so that the correct repo context
is used when applying settings for PR labels, e.g.:

    steps:
    - uses: actions/checkout@v4 # Uploads repository content to the runner
      with:
        repository: "owner/repositoryName" # One of the available inputs; see https://github.com/actions/checkout#readme for more
    - uses: actions/labeler@v5
      with:
        configuration-path: 'path/to/the/uploaded/configuration/file'
Georgi Gerganov [Sun, 19 May 2024 08:01:01 +0000 (11:01 +0300)]
cmake : update android comments (#7341)
fraxy-v [Sat, 18 May 2024 22:44:42 +0000 (01:44 +0300)]
Capture CUDA logging output (#7298)
* logging: output capture in cuda module
* fix compile error
* fix: vsnprintf terminates with 0, so the string handling was not correct
* post review
* Update llama.cpp
Co-authored-by: slaren <redacted>
* Update llama.cpp
Co-authored-by: slaren <redacted>
---------
Co-authored-by: slaren <redacted>
Georgi Gerganov [Sat, 18 May 2024 15:55:54 +0000 (18:55 +0300)]
ci : re-enable sanitizer runs (#7358)
* Revert "ci : temporary disable sanitizer builds (#6128)"
This reverts commit 4f6d1337ca5a409dc74aca8c479b7c34408a69c0.
* ci : trigger
Georgi Gerganov [Sat, 18 May 2024 10:40:39 +0000 (13:40 +0300)]
android : use "ci-android" branch for CI (#7341)
* android : use "ci-android" branch for CI
* ggml : disable SIMD exp and silu for 32-bit ARM
ggml-ci
* android : do not fetch, use add_subdirectory instead
* cmake : provide binary dir
Johannes Gäßler [Sat, 18 May 2024 10:36:25 +0000 (12:36 +0200)]
CUDA: deduplicate FlashAttention code (#7352)
Johannes Gäßler [Sat, 18 May 2024 09:10:47 +0000 (11:10 +0200)]
server: correct --threads documentation [no ci] (#7362)
Engininja2 [Sat, 18 May 2024 08:05:17 +0000 (02:05 -0600)]
cuda : add half2 __shfl_xor() for ROCm 5.5 (#7263)
Steffen Röcker [Sat, 18 May 2024 08:04:55 +0000 (10:04 +0200)]
llama : add support for larger Granite Code Models (20B, 34B) (#7324)
Tie the weights for ARCH_STARCODER to support the larger Granite code models.
Partially addresses ggerganov/llama.cpp#7116
There still remain a few things to fix.
Currently requires `--override-kv tokenizer.ggml.add_bos_token=bool:false`
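For example, a hypothetical invocation (the GGUF filename is a placeholder):

    ./main -m granite-code-20b.Q4_K_M.gguf \
        --override-kv tokenizer.ggml.add_bos_token=bool:false \
        -p "def fibonacci(n):"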
strawberrymelonpanda [Sat, 18 May 2024 07:57:08 +0000 (00:57 -0700)]
perplexity : ndot progress and show stats with < 100 tasks (#7348)
Fix a floating-point error in the ndot progress printing, and allow final stats with fewer than 100 tasks for multiple-choice tasks.
0cc4m [Sat, 18 May 2024 06:10:58 +0000 (08:10 +0200)]
Update and fix Vulkan soft_max and argsort implementations (#7237)
* Update and fix Vulkan softmax implementation
* Update and fix Vulkan argsort implementation
Brian [Sat, 18 May 2024 06:04:23 +0000 (16:04 +1000)]
github-actions-labeler: initial commit (#7330)
* github-actions-labeler: initial commit [no ci]
* github actions: remove priority auto labeling [no ci]
Georgi Gerganov [Sat, 18 May 2024 05:46:20 +0000 (08:46 +0300)]
convert : fix set_vocab_sentencepiece (#6866)
* convert : fix set_vocab_sentencepiece
* Update convert-hf-to-gguf.py
slaren [Sat, 18 May 2024 00:39:54 +0000 (02:39 +0200)]
ggml : fix quants nans when all the group weights are very close to zero (#7313)
Engininja2 [Sat, 18 May 2024 00:39:25 +0000 (18:39 -0600)]
cmake : fix typo in AMDGPU_TARGETS (#7356)
jaime-m-p [Fri, 17 May 2024 23:09:13 +0000 (01:09 +0200)]
Unicode codepoint flags for custom regexes (#7245)
* Replace CODEPOINT_TYPE_* with codepoint_flags
* Update and bugfix brute force random test
* Deterministic brute force random test
* Unicode normalization NFD
* Get rid of BOM
Johannes Gäßler [Fri, 17 May 2024 16:54:52 +0000 (18:54 +0200)]
CUDA: faster large batch FA without tensor cores (#7314)
Gavin Zhao [Fri, 17 May 2024 15:03:03 +0000 (11:03 -0400)]
ROCm: use native CMake HIP support (#5966)
Supersedes #4024 and #4813.
CMake's native HIP support has become the
recommended way to add HIP code into a project (see
[here](https://rocm.docs.amd.com/en/docs-6.0.0/conceptual/cmake-packages.html#using-hip-in-cmake)).
This PR makes the following changes:
1. The environment variable `HIPCXX` or CMake option
`CMAKE_HIP_COMPILER` should be used to specify the HIP
compiler. Notably this shouldn't be `hipcc`, but ROCm's clang,
which usually resides in `$ROCM_PATH/llvm/bin/clang`. Previously
this was controlled by `CMAKE_C_COMPILER` and `CMAKE_CXX_COMPILER`.
Note that since native CMake HIP support is not yet available on
Windows, on Windows we fall back to the old behavior.
2. CMake option `CMAKE_HIP_ARCHITECTURES` is used to control the
GPU architectures to build for. Previously this was controlled by
`GPU_TARGETS`.
3. Updated the Nix recipe to account for these new changes.
4. The GPU targets to build against in the Nix recipe are now
consistent with the supported GPU targets in nixpkgs.
5. Added CI checks for HIP on both Linux and Windows. On Linux, we test
both the new and old behavior.
The most important part about this PR is the separation of the
HIP compiler and the C/C++ compiler. This allows users to choose
a different C/C++ compiler if desired, compared to the current
situation where when building for ROCm support, everything must be
compiled with ROCm's clang.
~~Makefile is unchanged. Please let me know if we want to be
consistent on variables' naming because Makefile still uses
`GPU_TARGETS` to control architectures to build for, but I feel
like setting `CMAKE_HIP_ARCHITECTURES` is a bit awkward when you're
calling `make`.~~ Makefile used `GPU_TARGETS` but the README says
to use `AMDGPU_TARGETS`. For consistency with CMake, all usage of
`GPU_TARGETS` in Makefile has been updated to `AMDGPU_TARGETS`.
Thanks to the suggestion of @jin-eld, to maintain backwards
compatibility (and not break too many downstream users' builds), if
`CMAKE_CXX_COMPILER` ends with `hipcc`, then we still compile using
the original behavior and emit a warning that recommends switching
to the new HIP support. Similarly, if `AMDGPU_TARGETS` is set but
`CMAKE_HIP_ARCHITECTURES` is not, then we forward `AMDGPU_TARGETS`
to `CMAKE_HIP_ARCHITECTURES` to ease the transition to the new
HIP support.
Signed-off-by: Gavin Zhao <redacted>
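An illustrative configure step under the new flow (assuming the existing
LLAMA_HIPBLAS option; the ROCm path and GPU architecture are placeholders
for your system):

    HIPCXX="$ROCM_PATH/llvm/bin/clang" \
        cmake -B build -DLLAMA_HIPBLAS=ON -DCMAKE_HIP_ARCHITECTURES=gfx1030
    cmake --build build --config Release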
Radoslav Gerganov [Fri, 17 May 2024 14:25:44 +0000 (17:25 +0300)]
rpc : set SO_REUSEADDR for the server socket (#7320)
ref: #7293
Brian [Fri, 17 May 2024 12:40:14 +0000 (22:40 +1000)]
Add a single-test-function script and fix debug-test.sh to be more robust (#7279)
* run-single-test.sh: added a single test function script and fix debug-test.sh to be more robust
* debug-test.sh: combined execute and gdb test mode via -g flag
* debug-test.sh: refactor
* debug-test: refactor for clarity
* debug-test.sh: comment style changes
* debug-test.sh: fix gdb
Aarni Koskela [Fri, 17 May 2024 12:11:45 +0000 (15:11 +0300)]
py : convert-hf-to-gguf-update improvements (#7340)
* convert-hf-to-gguf-update: automate updating
* convert-hf-to-gguf-update: improve download
* share requests session for performance
* create directories only when needed, don't skip downloads when an empty directory is encountered
* be more graceful about errors
fairydreaming [Fri, 17 May 2024 11:24:38 +0000 (13:24 +0200)]
llama : use n_embd_head_v when reshaping kqv (#7327)
* llama : use n_embd_head_v instead of n_embd_head_k when reshaping kqv
* llama : use n_embd_v_gqa and n_embd_head_v instead of n_embd_k_gqa and n_embd_head_k when making a view of cached value vectors.
---------
Co-authored-by: Stanisław Szymczyk <redacted>
Johannes Gäßler [Fri, 17 May 2024 07:59:57 +0000 (09:59 +0200)]
tokenization: add warning for double BOS (#7332)
Herman Semenov [Fri, 17 May 2024 07:08:49 +0000 (07:08 +0000)]
ggml-quants, llama : removed excess checks (#7274)
amd-lalithnc [Fri, 17 May 2024 07:01:58 +0000 (12:31 +0530)]
convert : fix Qwen/Qwen-7b conversion (#7308)
Radoslav Gerganov [Fri, 17 May 2024 07:00:17 +0000 (10:00 +0300)]
server : add support for the RPC backend (#7305)
ref: #7292
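A hypothetical invocation (hosts and ports illustrative), pointing the server
at two RPC workers via a comma-separated --rpc list:

    ./server -m model.gguf --rpc 192.168.1.10:50052,192.168.1.11:50052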
Justine Tunney [Fri, 17 May 2024 06:58:52 +0000 (02:58 -0400)]
ggml : rewrite silu and softmax for cpu (#7154)
This change upstreams llamafile's vectorized expf() functions. This lets
us compute softmax and silu more accurately than the short[65536] lookup
table that GGML previously used to make this operation go faster. We can
support aarch64 and sse2+ with a worst-case rounding error of 2 ulp. It
makes `make -j8 tests && ./tests/test-backend-ops -o SOFT_MAX -b CPU perf`
run 1.5x faster for SSE2+FMA, 1.9x faster for AVX2+FMA, and 2.1x on AVX512.
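Schematically, such vectorized expf() kernels combine range reduction with a
short polynomial. A scalar sketch of the general technique (not the llamafile
code; real kernels use a minimax polynomial and SIMD lanes instead of this
Taylor one):

    #include <cmath>

    static float expf_approx(float x) {
        // range reduction: x = k*ln2 + r, with |r| <= ln2/2
        float k = std::round(x * 1.442695041f);   // k ~= x / ln2
        float r = x - k * 0.6931471806f;
        // short polynomial approximating exp(r) on the reduced range
        float p = 1.0f + r * (1.0f + r * (0.5f + r * (1.0f/6.0f + r * (1.0f/24.0f))));
        return std::ldexp(p, (int) k);            // exp(x) = 2^k * exp(r)
    }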
Leon Knauer [Fri, 17 May 2024 00:11:03 +0000 (02:11 +0200)]
[Server] Added --verbose option to README [no ci] (#7335)
Pierrick Hymbert [Thu, 16 May 2024 18:43:45 +0000 (20:43 +0200)]
Revert "server bench: fix bench not waiting for model load (#7284)" (#7334)
This reverts commit 583fd6b000ec9ad1b465b5c98524f4a0ae388077.
Radoslav Gerganov [Wed, 15 May 2024 13:04:40 +0000 (16:04 +0300)]
rpc : get available mem for the CPU backend
This can be overridden with the -m command line option
ref: #7293
Radoslav Gerganov [Wed, 15 May 2024 12:29:07 +0000 (15:29 +0300)]
rpc : add command line arg for specifying backend memory
ref: #7293
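A hypothetical invocation (flag spellings as described in these rpc commits;
the port value is illustrative):

    # advertise 2048 MiB of backend memory instead of the detected amount
    ./rpc-server -p 50052 -m 2048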
Jared Van Bortel [Thu, 16 May 2024 06:15:23 +0000 (02:15 -0400)]
convert : get general.name from model dir, not its parent (#5615)
Co-authored-by: Brian <redacted>
Herman Semenov [Thu, 16 May 2024 06:14:24 +0000 (06:14 +0000)]
grammar, json, llama: replace push_back with emplace_back where possible (#7273)
Vaibhav Srivastav [Thu, 16 May 2024 05:38:43 +0000 (07:38 +0200)]
doc: add references to hugging face GGUF-my-repo quantisation web tool. (#7288)
* chore: add references to the quantisation space.
* fix grammar lol.
* Update README.md
Co-authored-by: Julien Chaumond <redacted>
* Update README.md
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Julien Chaumond <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Max Krasnyansky [Thu, 16 May 2024 05:36:43 +0000 (22:36 -0700)]
ci: fix bin/Release path for windows-arm64 builds (#7317)
Switch to the Ninja Multi-Config CMake generator to restore the bin/Release
path whose absence broke artifact packaging in CI.
Max Krasnyansky [Thu, 16 May 2024 02:47:36 +0000 (19:47 -0700)]
Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#7191)
* logging: add proper checks for clang to avoid errors and warnings with VA_ARGS
* build: add CMake Presets and toolchain files for Windows ARM64
* matmul-int8: enable matmul-int8 with MSVC and fix Clang warnings
* ci: add support for optimized Windows ARM64 builds with MSVC and LLVM
* matmul-int8: fixed typos in q8_0_q8_0 matmuls
Co-authored-by: Georgi Gerganov <redacted>
* matmul-int8: remove unnecessary casts in q8_0_q8_0
---------
Co-authored-by: Georgi Gerganov <redacted>
Daniel Bevenius [Wed, 15 May 2024 21:41:03 +0000 (23:41 +0200)]
readme : remove stray double quote (#7310)
Signed-off-by: Daniel Bevenius <redacted>
kunnis [Wed, 15 May 2024 17:59:12 +0000 (12:59 -0500)]
ggml : use dynamic thread scheduling for matrix multiplication (#6915)
* Just reordering some structs.
* Adding in the calls to mm_pause
* Passing around the state
* Renaming and moving a bunch of variables around.
* Extracting the logic to its own function.
* Moving some variable definitions into the chunk function.
* Moving some variables around
* moving src1_cont inside
* Moving row_size
* adding the current_chunk
* Reorg the code.
* Formatting to match the orig patch
* starting to setup the chunking variables
* Starting the buildup of the loop
* The yield shouldn't be necessary.
* adding the looping structure based on the chunk configuration.
* Add in the re-chunking code.
* Making it much more likely to rechunk.
* disable resizing if numa is enabled.
* Updating comments with what we've learned.
* Fix formatting
* Couple more formatting fixes.
* More style fixes.
* Fix Warnings
* Going with unused because there's conditional logic that needs it.
* Update ggml.c
* Update ggml.c
---------
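A minimal sketch of the dynamic scheduling idea, in which threads claim the
next chunk through a shared atomic counter instead of using a fixed
per-thread split (illustrative only; names do not match ggml.c):

    #include <atomic>

    struct chunk_sched {
        std::atomic<int> current_chunk{0};
        int              n_chunks = 0;
    };

    static void worker(chunk_sched & st) {
        for (;;) {
            // claim the next unprocessed chunk; faster threads take more work
            const int chunk = st.current_chunk.fetch_add(1);
            if (chunk >= st.n_chunks) {
                break;
            }
            // process_chunk(chunk): multiply this block of output rows
        }
    }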
agray3 [Wed, 15 May 2024 13:44:49 +0000 (14:44 +0100)]
Avoid unnecessarily disabling CUDA graphs (#7302)
As discussed in PR #6766, CUDA graphs were being disabled in the presence of long prompts.
This fixes the issue by preventing the consecutive-update counter from incrementing unnecessarily
for tokens in which CUDA graphs are disabled due to batch size > 1.
slaren [Wed, 15 May 2024 13:08:48 +0000 (15:08 +0200)]
ggml : tag ggml_tensor::backend as deprecated (#7290)
AidanBeltonS [Wed, 15 May 2024 12:26:30 +0000 (13:26 +0100)]
Add missing " (#7303)
dm4 [Wed, 15 May 2024 12:01:12 +0000 (20:01 +0800)]
embedding : free the batch after execution (#7297)
Georgi Gerganov [Wed, 15 May 2024 10:23:41 +0000 (13:23 +0300)]
sync : ggml
John Balis [Wed, 15 May 2024 08:52:33 +0000 (03:52 -0500)]
ggml : add `ggml_upscale_ext` (ggml/814)
* initial commit with CPU implementation of upscale to shape and test, cuda implementation next
* experimental commit to see if dst shape is correct
* test version
* test
* removed unnecessary params
* refactor
* fixed tests
* ggml : metal impl + cleanup + sycl dev warnings
* patched ggml_upscale cuda op to handle non-contiguous tensors, added test for non-contiguous behavior
* metal : fix upscale op to support nb00 + style
---------
Co-authored-by: Georgi Gerganov <redacted>
Johannes Gäßler [Wed, 15 May 2024 06:44:16 +0000 (08:44 +0200)]
server bench: fix bench not waiting for model load (#7284)
Georgi Gerganov [Tue, 14 May 2024 16:14:38 +0000 (19:14 +0300)]
script : sync ggml-rpc
Georgi Gerganov [Tue, 14 May 2024 16:09:30 +0000 (19:09 +0300)]
metal : support FA without mask + add asserts (#7278)
* ggml : fa without mask + add asserts
ggml-ci
* metal : support non-contiguous KV
ggml-ci
Georgi Gerganov [Tue, 14 May 2024 12:33:16 +0000 (15:33 +0300)]
sync : ggml
ggml-ci
Georgi Gerganov [Mon, 13 May 2024 08:01:07 +0000 (11:01 +0300)]
metal : tune soft_max number of threads (whisper/0)
Georgi Gerganov [Sun, 12 May 2024 17:36:31 +0000 (20:36 +0300)]
ggml : try fix ppc64 (whisper/0)
Przemysław Pawełczyk [Wed, 8 May 2024 15:33:43 +0000 (17:33 +0200)]
ggml : expose SSE3 and SSSE3 for MSVC when AVX is available (whisper/2128)
Hong Bo PENG [Sun, 12 May 2024 09:17:18 +0000 (17:17 +0800)]
ggml : optimize for ppc64le using VSX intrinsics (ggml/784)
* optimize for ppc64le using VSX intrinsics
* 1. code clean up by removing comments about overflow concern.
2. fix typo in suffix of scaling.
* Continue to fix typo in suffix of scaling for QK_K <> 256
---------
Co-authored-by: Georgi Gerganov <redacted>
Steve Grubb [Tue, 14 May 2024 14:11:24 +0000 (10:11 -0400)]
server: free sampling contexts on exit (#7264)
* server: free sampling contexts on exit
This cleans up last leak found by the address sanitizer.
* fix whitespace
* fix whitespace
Brian [Tue, 14 May 2024 13:10:39 +0000 (23:10 +1000)]
Revert "move ndk code to a new library (#6951)" (#7282)
This reverts commit efc8f767c8c8c749a245dd96ad4e2f37c164b54c.
Radoslav Gerganov [Tue, 14 May 2024 11:27:19 +0000 (14:27 +0300)]
ggml : add RPC backend (#6829)
* ggml : add RPC backend
The RPC backend proxies all operations to a remote server which runs a
regular backend (CPU, CUDA, Metal, etc).
* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
slaren [Tue, 14 May 2024 07:33:42 +0000 (09:33 +0200)]
llama : disable pipeline parallelism with nkvo (#7265)
Elton Kola [Tue, 14 May 2024 07:30:30 +0000 (03:30 -0400)]
move ndk code to a new library (#6951)
Haggai Nuchi [Tue, 14 May 2024 05:25:56 +0000 (22:25 -0700)]
Add left recursion check: quit early instead of going into an infinite loop (#7083)
* Add left recursion check: quit early instead of going into an infinite loop
* Remove custom enum, rename left recursion check and move to "grammar internal" section, add handling for edge case where a leftmost nonterminal may be empty
* Remove unnecessary declaration
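A minimal sketch of such a check under a hypothetical grammar encoding
(each rule is a list of alternatives; non-negative symbols are rule ids,
negative ones are terminals). The real check must also step past leftmost
nonterminals that can derive the empty string, per the second bullet; that
part is omitted here:

    #include <vector>

    using Grammar = std::vector<std::vector<std::vector<int>>>;

    // true if rule `target` is reachable from rule `from` while only ever
    // descending into the leftmost symbol of an alternative
    static bool leftmost_reaches(const Grammar & g, int from, int target,
                                 std::vector<char> & seen) {
        if (seen[from]) return false;
        seen[from] = 1;
        for (const auto & alt : g[from]) {
            if (alt.empty() || alt[0] < 0) continue; // empty alt or terminal
            if (alt[0] == target) return true;
            if (leftmost_reaches(g, alt[0], target, seen)) return true;
        }
        return false;
    }

    // rule r is left-recursive iff leftmost_reaches(g, r, r, fresh_seen)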
Ryuei [Tue, 14 May 2024 05:20:47 +0000 (14:20 +0900)]
docs: Fix typo and update description for --embeddings flag (#7026)
- Change '--embedding' to '--embeddings' in the README
- Update the description to match the latest --help output
- Added a caution about defining physical batch size
compilade [Mon, 13 May 2024 18:10:51 +0000 (14:10 -0400)]
convert-hf : support direct Q8_0 conversion (#7234)
* convert-hf : support q8_0 conversion
* convert-hf : add missing ftype
This was messing with the checksums otherwise.
* convert-hf : add missing ftype to Baichuan and Xverse
I didn't notice these on my first pass.
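A hypothetical invocation (paths illustrative), converting straight to Q8_0
without an intermediate f16 file:

    python convert-hf-to-gguf.py --outtype q8_0 /path/to/hf-model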
Georgi Gerganov [Mon, 13 May 2024 14:15:15 +0000 (17:15 +0300)]
llama : less KV padding when FA is off (#7257)
ggml-ci
k.h.lai [Mon, 13 May 2024 14:02:36 +0000 (22:02 +0800)]
llava-cli: fix base64 prompt (#7248)
Johannes Gäßler [Mon, 13 May 2024 11:03:27 +0000 (13:03 +0200)]
perplexity: add BF16 vs. FP16 results (#7150)
Neo Zhang [Mon, 13 May 2024 10:11:26 +0000 (18:11 +0800)]
[SYCL] rm wait() (#7233)
Joan Fontanals [Mon, 13 May 2024 08:35:14 +0000 (10:35 +0200)]
llama : rename jina tokenizers to v2 (#7249)
* refactor: rename jina tokenizers to v2
* refactor: keep refactoring non-breaking