git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Jeff Bolz [Sun, 8 Dec 2024 08:05:55 +0000 (02:05 -0600)]
vulkan: compile a test shader in cmake to check for coopmat2 support (#10713)
Robert Collins [Sat, 7 Dec 2024 21:12:27 +0000 (16:12 -0500)]
llama : add 128k yarn context for Qwen (#10698)
* add 128k yarn context for Qwen
* added property for model tensors
* removing useless line
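For readers wanting to try the extended context: a minimal sketch of enabling YaRN scaling through the public API, assuming the converted GGUF carries the new metadata this PR adds. The fields below are from llama.h; the concrete values (128k target, 32k original context) are illustrative, not Qwen-specific facts from the commit.

```cpp
// hedged sketch: enable YaRN rope scaling for a long-context run.
// field names are from llama.h; the numeric values are illustrative.
#include "llama.h"

llama_context_params make_long_ctx_params() {
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx             = 128 * 1024;                     // extended window
    cparams.rope_scaling_type = LLAMA_ROPE_SCALING_TYPE_YARN;
    cparams.yarn_orig_ctx     = 32768;                          // assumed pre-extension context
    return cparams;
}
```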
Xuan Son Nguyen [Sat, 7 Dec 2024 19:21:09 +0000 (20:21 +0100)]
server : (refactor) no more json in server_task input (#10691)
* server : (refactor) no more json in server_task input
* add test for slots endpoint
* add tests for /props and /slots
* remove task inf_type
* fix CI by adding safe_json_to_str
* add "model_path" to /props
* update readme
Georgi Gerganov [Sat, 7 Dec 2024 16:38:15 +0000 (18:38 +0200)]
ggml : disable iq4_nl interleave size 8 (#10709)
ggml-ci
Georgi Gerganov [Sat, 7 Dec 2024 16:02:05 +0000 (18:02 +0200)]
server : various fixes (#10704)
* server : various fixes
ggml-ci
* server : show current seed in slot_params
ggml-ci
* fix /slots endpoint
* Update examples/server/server.cpp
Co-authored-by: Georgi Gerganov <redacted>
* server : reflect endpoint response changes in the readme
ggml-ci
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Djip007 [Sat, 7 Dec 2024 12:37:50 +0000 (13:37 +0100)]
ggml : refactor online repacking (#10446)
* rename ggml-cpu-aarch64.c to .cpp
* reformat extra cpu backend.
- clean Q4_0_N_M and IQ4_0_N_M
- remove from "file" tensor type
- allow only with dynamic repack
- extract cpu extra bufts and convert to C++
- hbm
- "aarch64"
- more generic use of extra buffer
- generalise extra_supports_op
- new API for "cpu-accel":
- amx
- aarch64
* clang-format
* Clean Q4_0_N_M ref
Enable restrict on C++
* add op GGML_OP_MUL_MAT_ID for Q4_0_N_M with runtime repack
* added/corrected tensor size checks for Q4 repacking.
* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp
Co-authored-by: Georgi Gerganov <redacted>
* add debug logs on repacks.
---------
Co-authored-by: Georgi Gerganov <redacted>
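To make "online repacking" concrete: an illustrative sketch (not the exact ggml layout) of interleaving four row-consecutive Q4_0 blocks at load time, so a GEMV kernel can read four rows with one contiguous load. The struct fields and the 4-byte interleave are simplified assumptions.

```cpp
// illustrative sketch of runtime repacking; layouts are simplified,
// not the actual ggml block definitions.
#include <cstdint>

struct block_q4_0   { uint16_t d;    uint8_t qs[16]; }; // fp16 scale + 32 4-bit quants
struct block_q4_0x4 { uint16_t d[4]; uint8_t qs[64]; }; // 4 such blocks, interleaved

block_q4_0x4 repack_q4_0_4(const block_q4_0 in[4]) {
    block_q4_0x4 out;
    for (int r = 0; r < 4; r++) {
        out.d[r] = in[r].d;
    }
    // interleave in 4-byte groups: row0[0..3], row1[0..3], row2[0..3], row3[0..3], row0[4..7], ...
    for (int g = 0; g < 4; g++) {         // 4-byte group index within a block
        for (int r = 0; r < 4; r++) {     // source block (row)
            for (int j = 0; j < 4; j++) {
                out.qs[(g * 4 + r) * 4 + j] = in[r].qs[g * 4 + j];
            }
        }
    }
    return out;
}
```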
Georgi Gerganov [Sat, 7 Dec 2024 09:52:44 +0000 (11:52 +0200)]
server : fix free of spec context and batch (#10651)
ggml-ci
0cc4m [Sat, 7 Dec 2024 09:24:15 +0000 (10:24 +0100)]
Vulkan: VK_KHR_cooperative_matrix support to speed up prompt processing (#10597)
* Vulkan: Implement VK_KHR_cooperative_matrix support in the matrix matrix multiplication shader
* Improve performance with better q4_k and q5_k dequant and store unrolling
* Add Vulkan MUL_MAT and MUL_MAT_ID accumulator precision selection
* Rework mulmat shader selection and compilation logic, avoid compiling shaders that won't get used by device
* Vulkan: Implement accumulator switch for specific mul mat mat shaders
* Vulkan: Unroll more loops for more mul mat mat performance
* Vulkan: Add VK_AMD_shader_core_properties2 support to read Compute Unit count for split_k logic
* Disable coopmat support on AMD proprietary driver
* Remove redundant checks
* Add environment variable GGML_VK_DISABLE_COOPMAT to disable VK_KHR_cooperative_matrix support
* Fix rebase typo
* Fix coopmat2 MUL_MAT_ID pipeline selection
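A sketch of how the opt-out switch plugs in; the GGML_VK_DISABLE_COOPMAT name is from the commit, while the surrounding function is illustrative rather than the backend's actual code.

```cpp
// sketch: honor the escape hatch first, then the device capability.
#include <cstdlib>

bool vk_coopmat_allowed(bool device_supports_coopmat) {
    if (std::getenv("GGML_VK_DISABLE_COOPMAT") != nullptr) {
        return false; // user explicitly disabled VK_KHR_cooperative_matrix
    }
    return device_supports_coopmat;
}
```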
Robert Ormandi [Sat, 7 Dec 2024 07:55:01 +0000 (01:55 -0600)]
metal : Extend how Llama.cpp locates metal resources (#10676)
* metal : Extend how Llama.cpp locates metal resources (#10675)
* It also searches for the resource file in the directory where the current binary is located.
* Resolves symbolic links.
Rationale:
When we plug this dependency into a Bazel build and run it in the
context of Bazel (e.g. testing):
* the execution directory is often very different from where the files are located, with no direct control over this (Bazel sandboxing),
* the Bazel sandbox often uses symbolic links to make files available.
With this patch, the resource file can be added to the target, and we can build and run tests in the context of Bazel.
* Update ggml/src/ggml-metal/ggml-metal.m
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml/src/ggml-metal/ggml-metal.m
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
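A hedged POSIX sketch of the lookup described above: locate the running binary, resolve symlinks (Bazel sandboxes rely on them), and search its directory. The real implementation is in ggml-metal.m; the dladdr/realpath usage here is illustrative, not the exact code.

```cpp
// sketch: directory of the current binary, with symlinks resolved.
#include <dlfcn.h>
#include <climits>
#include <cstdlib>
#include <string>

std::string binary_resource_dir() {
    Dl_info info;
    if (dladdr((void *) &binary_resource_dir, &info) && info.dli_fname) {
        char resolved[PATH_MAX];
        if (realpath(info.dli_fname, resolved)) { // follow the sandbox symlink
            std::string path(resolved);
            return path.substr(0, path.find_last_of('/'));
        }
    }
    return "."; // fall back to the current directory
}
```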
Sukriti Sharma [Sat, 7 Dec 2024 07:02:14 +0000 (00:02 -0700)]
convert : add support for Roberta embeddings (#10695)
Georgi Gerganov [Fri, 6 Dec 2024 19:33:15 +0000 (21:33 +0200)]
convert : add custom attention mapping
Xuan Son Nguyen [Fri, 6 Dec 2024 12:29:05 +0000 (13:29 +0100)]
common : bring back --no-warmup to server (#10686)
Xuan Son Nguyen [Fri, 6 Dec 2024 10:14:32 +0000 (11:14 +0100)]
server : (refactoring) do not rely on JSON internally (#10643)
* server : (refactoring) reduce usage of json internally
* move all response types to struct
* wip [no ci]
* many fixes
* add virtual function
* fix index
* minor style fix
* add std::move
* refactor handle_completions_generic
* add virtual functions
* remove server.hpp
* clarify server_sent_event RFC specs
* apply review comments
* fix model_alias and completion_probabilities
* small clean up
* remove virtual for to_json_oai_compat()
* naming oai_compat --> oaicompat
* fix unwanted recursive call
* update docs
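A sketch of the pattern this refactor moves toward: typed result structs that serialize to JSON only at the HTTP boundary, through one virtual hop. Names and fields are illustrative, not the actual server structs.

```cpp
// illustrative only; assumes nlohmann::json as used by the server.
#include <nlohmann/json.hpp>
#include <cstdint>
#include <string>

using json = nlohmann::json;

struct server_task_result {
    virtual ~server_task_result() = default;
    virtual json to_json() const = 0; // JSON appears only at the boundary
};

struct result_completion final : server_task_result {
    std::string content;
    int32_t     n_predicted = 0;

    json to_json() const override {
        return json {
            { "content",          content     },
            { "tokens_predicted", n_predicted },
        };
    }
};
```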
Plamen Minev [Thu, 5 Dec 2024 21:36:41 +0000 (23:36 +0200)]
fix(server) : do not show alert when DONE is received (#10674)
Jeff Bolz [Thu, 5 Dec 2024 19:15:05 +0000 (13:15 -0600)]
vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (#10206)
Riccardo Orlando [Thu, 5 Dec 2024 18:30:59 +0000 (19:30 +0100)]
llama : add Minerva 7B model support (#10673)
* Support for Minerva 7B
* Update convert_hf_to_gguf_update.py
Georgi Gerganov [Thu, 5 Dec 2024 11:27:42 +0000 (13:27 +0200)]
sync : ggml
PAB [Wed, 4 Dec 2024 08:19:30 +0000 (09:19 +0100)]
ggml: add `GGML_SET` Metal kernel + i32 CPU kernel (ggml/1037)
* implemented cpu kernel
* add i32 test cases in test-backend-ops
* typedef `ggml_metal_kargs_set`
* implemented `kernel_set`
* memcpy
PAB [Tue, 3 Dec 2024 19:20:04 +0000 (20:20 +0100)]
ggml : add `GGML_PAD_REFLECT_1D` operation (ggml/1034)
* ggml_pad_reflect_1d defined in header
* implemented on CPU
* called the forward pass
* impl Metal kernel
* added Metal kernel
* added OP_PAD_REFLECT_1D in test-backend-ops.cpp
* add test-pad-reflect-1d test case
* test case supports multiple backends
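Reference semantics of 1D reflect padding, as a hedged sketch rather than the ggml kernel: the edge sample is not repeated, so for p0 = p1 = 2, [a b c d] becomes [c b a b c d c b]. Requires p0, p1 < n.

```cpp
// reference sketch of reflect padding semantics (not the ggml implementation)
#include <vector>

std::vector<float> pad_reflect_1d_ref(const std::vector<float> & x, int p0, int p1) {
    const int n = (int) x.size();
    std::vector<float> y;
    y.reserve(p0 + n + p1);
    for (int i = p0; i >= 1; --i)             y.push_back(x[i]); // left mirror, excludes x[0]
    for (int i = 0; i < n; ++i)               y.push_back(x[i]); // body
    for (int i = n - 2; i >= n - 1 - p1; --i) y.push_back(x[i]); // right mirror
    return y;
}
```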
Daniel Bevenius [Thu, 5 Dec 2024 07:47:55 +0000 (08:47 +0100)]
py : update outdated copy-paste instructions [no ci] (#10667)
This commit updates the copy-paste instruction in
convert_hf_to_gguf_update.py to reflect that convert_hf_to_gguf.py
will have already been updated with the new get_vocab_base_pre()
function when this script completes.
aryantandon01 [Wed, 4 Dec 2024 22:19:20 +0000 (03:49 +0530)]
Update deprecation-warning.cpp (#10619)
Fixed path separator handling for cross-platform support (Windows file systems)
Georgi Gerganov [Wed, 4 Dec 2024 20:38:20 +0000 (22:38 +0200)]
server : fix speculative decoding with context shift (#10641)
* server : fix speculative decoding with context shift
ggml-ci
* server : take into account speculative limits
ggml-ci
* server : add tests
Diego Devesa [Wed, 4 Dec 2024 13:45:40 +0000 (14:45 +0100)]
ggml : add predefined list of CPU backend variants to build (#10626)
* ggml : add predefined list of CPU backend variants to build
* update CPU dockerfiles
Diego Devesa [Wed, 4 Dec 2024 13:40:44 +0000 (14:40 +0100)]
ggml-cpu : fix HWCAP2_I8MM value (#10646)
ltoniazzi [Wed, 4 Dec 2024 09:45:48 +0000 (09:45 +0000)]
Fix HF repo commit to clone lora test models (#10649)
JFLFY2255 [Wed, 4 Dec 2024 09:42:50 +0000 (17:42 +0800)]
llama: Support MiniCPM-1B (with & w/o longrope) (#10559)
Jeff Bolz [Wed, 4 Dec 2024 07:28:59 +0000 (01:28 -0600)]
vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (#10642)
Nicolò Scipione [Wed, 4 Dec 2024 01:29:20 +0000 (02:29 +0100)]
SYCL : Move to compile time oneMKL interface backend selection for NVIDIA backend (#10584)
* [SYCL] Move to Compile Time backend selection on oneMKL Interface for NVIDIA backend
Move to compile-time backend selection to avoid latency at run time.
Apply it to all oneMKL gemm calls, and only for the NVIDIA backend.
Signed-off-by: nscipione <redacted>
* Formatting
* Address PR comments to increase readability
---------
Signed-off-by: nscipione <redacted>
Wang Ran (汪然) [Wed, 4 Dec 2024 01:22:50 +0000 (09:22 +0800)]
fix typo in README.md (#10605)
Frankie Robertson [Wed, 4 Dec 2024 00:41:37 +0000 (02:41 +0200)]
Avoid using __fp16 on ARM with old nvcc (#10616)
Benson Wong [Wed, 4 Dec 2024 00:40:36 +0000 (16:40 -0800)]
Add docs for creating a static build (#10268) (#10630)
* Add notes for a static build
* Update docs/build.md
---------
Co-authored-by: Diego Devesa <redacted>
piDack [Wed, 4 Dec 2024 00:26:37 +0000 (08:26 +0800)]
clip : add sycl support (#10574)
Co-authored-by: piDack <redacted>
Jeff Bolz [Tue, 3 Dec 2024 19:29:54 +0000 (13:29 -0600)]
vulkan: optimize and reenable split_k (#10637)
Use vector loads when possible in mul_mat_split_k_reduce. Use split_k
when there aren't enough workgroups to fill the shaders.
Xuan Son Nguyen [Tue, 3 Dec 2024 18:38:44 +0000 (19:38 +0100)]
server : (web ui) Various improvements, now use vite as bundler (#10599)
* hide buttons in dropdown menu
* use npm as deps manager and vite as bundler
* fix build
* fix build (2)
* fix responsive on mobile
* fix more problems on mobile
* sync build
* (test) add CI step for verifying build
* fix ci
* force rebuild .hpp files
* cmake: clean up generated files pre build
Georgi Gerganov [Tue, 3 Dec 2024 17:42:30 +0000 (19:42 +0200)]
scripts : remove amx sync
ggml-ci
Georgi Gerganov [Tue, 3 Dec 2024 17:40:25 +0000 (19:40 +0200)]
sync : ggml
mahorozte [Tue, 3 Dec 2024 13:11:43 +0000 (21:11 +0800)]
CUDA: remove unnecessary warp reduce in FA (ggml/1032)
* kqmax_new_j is the same in every thread within the warp after the operation at line 199, so this reduction can be omitted
* the same issue exists in vec32
---------
Co-authored-by: ZhaoXiaoYu <redacted>
PAB [Mon, 2 Dec 2024 18:27:24 +0000 (19:27 +0100)]
feat: add `GGML_UNARY_OP_ARGMAX` Metal kernel (ggml/1019)
* implemented argmax kernel
* tpig -> tgpig
* change to strides
* contiguous assertions
* kernel working and tested
* argmax simd parallel implementation
* added 2 new tests for argmax in test-backend-ops
* cosmit
* added 3 tests cases for perf eval
* add test_argmax in make_test_cases_perf
* Update test-backend-ops.cpp
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: Diego Devesa <redacted>
PAB [Thu, 28 Nov 2024 08:25:06 +0000 (09:25 +0100)]
metal : add `GGML_OP_CONV_TRANSPOSE_1D` kernels (ggml/1026)
* wip
* wip implementation f32
* kernel conv transpose 1d f32 working
* initial commit
Xuan Son Nguyen [Tue, 3 Dec 2024 11:54:30 +0000 (12:54 +0100)]
llama : add missing LLAMA_API for llama_chat_builtin_templates (#10636)
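A hedged usage sketch, assuming the `int32_t llama_chat_builtin_templates(const char **, size_t)` signature this commit exports; the buffer size is an arbitrary choice.

```cpp
// sketch: list the built-in chat template names (signature assumed as above)
#include "llama.h"
#include <cstdio>

int main() {
    const char * tmpls[64];
    const int32_t n = llama_chat_builtin_templates(tmpls, 64);
    for (int32_t i = 0; i < n && i < 64; i++) {
        printf("%s\n", tmpls[i]);
    }
    return 0;
}
```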
Nikolaos Pothitos [Tue, 3 Dec 2024 10:50:08 +0000 (12:50 +0200)]
readme : add option, update default value, fix formatting (#10271)
* readme : document --no-display-prompt
* readme : update default prompt context size
* readme : remove unnecessary indentation
Indenting a line with four spaces makes Markdown treat that section as
plain text.
* readme : indent commands under bullets
* readme : indent commands in lettered list
Georgi Gerganov [Tue, 3 Dec 2024 09:52:33 +0000 (11:52 +0200)]
metal : small-batch mat-mul kernels (#10581)
* metal : small-batch mat-mul kernels
ggml-ci
* metal : add rest of types
ggml-ci
* metal : final adjustments
ggml-ci
* metal : add comments
ggml-ci
Georgi Gerganov [Tue, 3 Dec 2024 09:21:43 +0000 (11:21 +0200)]
github : minify link [no ci] (revert)
this doesn't work as expected
Georgi Gerganov [Tue, 3 Dec 2024 09:20:35 +0000 (11:20 +0200)]
github : minify link [no ci]
Georgi Gerganov [Tue, 3 Dec 2024 09:20:00 +0000 (11:20 +0200)]
server : fix default draft model parameters (#10586)
* server : force F16 KV cache for the draft model
ggml-ci
* server : fix draft params
ggml-ci
* server : various params fixes
ggml-ci
Xuan Son Nguyen [Mon, 2 Dec 2024 21:10:19 +0000 (22:10 +0100)]
llama : add enum for built-in chat templates (#10623)
* llama : add enum for supported chat templates
* use "built-in" instead of "supported"
* arg: print list of built-in templates
* fix test
* update server README
Georgi Gerganov [Mon, 2 Dec 2024 19:22:53 +0000 (21:22 +0200)]
make : deprecate (#10514)
* make : deprecate
ggml-ci
* ci : disable Makefile builds
ggml-ci
* docs : remove make references [no ci]
* ci : disable swift build
ggml-ci
* docs : remove obsolete make references, scripts, examples
ggml-ci
* basic fix for compare-commits.sh
* update build.md
* more build.md updates
* more build.md updates
* more build.md updates
* Update Makefile
Co-authored-by: Diego Devesa <redacted>
---------
Co-authored-by: slaren <redacted>
haopeng [Mon, 2 Dec 2024 13:45:54 +0000 (21:45 +0800)]
server: Add "tokens per second" information in the backend (#10548)
* add cmake rvv support
* add timings
* remove space
* update readme
* fix
* fix code
* remove empty line
* add test
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Akarshan Biswas [Mon, 2 Dec 2024 07:04:11 +0000 (12:34 +0530)]
SYCL: Fix and switch to GGML_LOG system instead of fprintf (#10579)
* Switched to GGML_LOG
* Fix missing semicolon
Georgi Gerganov [Mon, 2 Dec 2024 06:53:27 +0000 (08:53 +0200)]
contrib : refresh (#10593)
* contrib : refresh
* contrib : expand [no ci]
* contrib : expand test-backend-ops instructions
* contrib : add CODEOWNERS
* prs : update template to not have checkbox [no ci]
Juk Armstrong [Sun, 1 Dec 2024 22:09:49 +0000 (22:09 +0000)]
Add `mistral-v1`, `mistral-v3`, `mistral-v3-tekken` and `mistral-v7` chat template types (#10572)
* Templates: `mistral-v1`, `mistral-v2`, `mistral-v3`, `mistral-v3-tekken`
* Changed system message logic and added tests for all 4
* Fixed invalid `system_message` being used instead of `content`
* Removed tab-indented lines
* Added template code and test for `mistral-v7`
* Added all tests. Fixed bug with `tmpl == "llama2"` test.
* Replaced tabs with spaces.
* Removed `'mistral-v2'` option as no (open) models ever used it
* Removed all references to 'v2' template from comments
* Update llama.cpp
Fixed `trim_assistant_message` bug
Georgi Gerganov [Sun, 1 Dec 2024 19:37:54 +0000 (21:37 +0200)]
grammars : add English-only grammar (#10612)
Wang Qin [Sun, 1 Dec 2024 18:11:42 +0000 (10:11 -0800)]
ci: add error handling for Python venv creation in run.sh (#10608)
Diego Devesa [Sun, 1 Dec 2024 15:12:41 +0000 (16:12 +0100)]
ggml : automatic selection of best CPU backend (#10606)
* ggml : automatic selection of best CPU backend
* amx : minor opt
* add GGML_AVX_VNNI to enable avx-vnni, fix checks
alek3y [Sun, 1 Dec 2024 11:33:12 +0000 (12:33 +0100)]
server : bind to any port when specified (#10590)
Georgi Gerganov [Sun, 1 Dec 2024 09:25:17 +0000 (11:25 +0200)]
readme : update the usage section with examples (#10596)
* readme : update the usage section with examples
* readme : more examples
Wang Qin [Sun, 1 Dec 2024 03:19:44 +0000 (19:19 -0800)]
build: update Makefile comments for C++ version change (#10598)
Adrien Gallouët [Sat, 30 Nov 2024 17:13:18 +0000 (18:13 +0100)]
ggml-cpu: replace AArch64 NEON assembly with intrinsics in ggml_gemv_q4_0_4x4_q8_0() (#10567)
Signed-off-by: Adrien Gallouët <redacted>
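The direction of the change, illustrated (not the PR's kernel): the int8 dot product written with NEON intrinsics instead of inline assembly, so the compiler can schedule it. Compiles on AArch64 with dotprod support.

```cpp
// illustrative int8 dot product with NEON intrinsics
// (requires __ARM_FEATURE_DOTPROD; n must be a multiple of 16)
#include <arm_neon.h>
#include <cstdint>

int32_t dot_q8(const int8_t * a, const int8_t * b, int n) {
    int32x4_t acc = vdupq_n_s32(0);
    for (int i = 0; i < n; i += 16) {
        // 16 int8 x int8 products accumulated into 4 int32 lanes
        acc = vdotq_s32(acc, vld1q_s8(a + i), vld1q_s8(b + i));
    }
    return vaddvq_s32(acc); // horizontal sum of the 4 lanes
}
```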
Georgi Gerganov [Sat, 30 Nov 2024 08:09:21 +0000 (10:09 +0200)]
readme : remove old badge
Georgi Gerganov [Sat, 30 Nov 2024 07:47:07 +0000 (09:47 +0200)]
readme : refresh (#10587)
* readme : refresh
* readme : move section [no ci]
* readme : clarify [no ci]
* readme : fixes [no ci]
* readme : more fixes [no ci]
* readme : simplify [no ci]
* readme : clarify GGUF
Eve [Sat, 30 Nov 2024 07:00:02 +0000 (07:00 +0000)]
vulkan: Dynamic subgroup size support for Q6_K mat_vec (#10536)
* subgroup 64 version with subgroup add; 15% faster. Scalable version, tested for subgroup sizes 16-128
* check for subgroup multiple of 16 and greater than 16
* subgroup sizes are always a power of 2 (https://github.com/KhronosGroup/GLSL/issues/45)
* force 16 sequential threads per block
* make 16 subgroup size a constant
Diego Devesa [Fri, 29 Nov 2024 20:54:58 +0000 (21:54 +0100)]
ggml : move AMX to the CPU backend (#10570)
* ggml : move AMX to the CPU backend
---------
Co-authored-by: Georgi Gerganov <redacted>
Xuan Son Nguyen [Fri, 29 Nov 2024 20:48:56 +0000 (21:48 +0100)]
server : add more test cases (#10569)
* server : add split model test
* add test speculative
* add invalid cases
Robert Collins [Fri, 29 Nov 2024 17:21:37 +0000 (12:21 -0500)]
imatrix : support combine-only (#10492)
* imatrix-combine-only idea
* ensured that behavior is consistent with the log
Diego Devesa [Fri, 29 Nov 2024 16:45:08 +0000 (17:45 +0100)]
cleanup UI link list (#10577)
* cleanup UI link list
* sort list alphabetically
* add missing licenses
Georgi Gerganov [Fri, 29 Nov 2024 14:25:39 +0000 (16:25 +0200)]
ggml : fix I8MM Q4_1 scaling factor conversion (#10562)
ggml-ci
Shupei Fan [Fri, 29 Nov 2024 13:49:02 +0000 (21:49 +0800)]
ggml-cpu: fix typo in gemv/gemm iq4_nl_4_4 (#10580)
Alberto Cabrera Pérez [Fri, 29 Nov 2024 12:38:45 +0000 (12:38 +0000)]
sycl : offload of get_rows set to 0 (#10432)
Alberto Cabrera Pérez [Fri, 29 Nov 2024 09:49:43 +0000 (09:49 +0000)]
sycl : Reroute permuted mul_mats through oneMKL (#10408)
This PR fixes the failing MUL_MAT tests for the sycl backend.
Chenguang Li [Fri, 29 Nov 2024 06:46:55 +0000 (14:46 +0800)]
CANN: RoPE operator optimization (#10563)
* [cann] RoPE operator optimization
* [CANN] Code formatting
---------
Co-authored-by: noemotiovon <redacted>
Jeff Bolz [Fri, 29 Nov 2024 06:18:02 +0000 (00:18 -0600)]
vulkan: get the first command buffer submitted sooner (#10499)
This is an incremental improvement over #9118 to get work to the GPU a bit
sooner. The first part is to start with a smaller number of nodes before
the first submit, and ramp it up to the current 100 nodes/submit. The
second part is to reduce the dryrun overhead for all the nodes that just
need to request descriptor space.
With these changes I get around 1-2% speedup on RTX 4070 combined with my
old Haswell-era CPU.
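A toy model of the ramp: the 100 nodes/submit figure is from the message; the starting batch of 10 and the doubling schedule are guesses, not the PR's actual constants.

```cpp
// toy model: submit small batches first so the GPU starts sooner,
// then ramp up to the steady-state batch size.
#include <algorithm>
#include <cstdio>

int main() {
    const int n_nodes = 500;
    int submitted = 0;
    int batch     = 10;                       // assumed small first submit
    while (submitted < n_nodes) {
        const int take = std::min(batch, n_nodes - submitted);
        printf("submit %d nodes\n", take);
        submitted += take;
        batch = std::min(batch * 2, 100);     // ramp toward 100 nodes/submit
    }
    return 0;
}
```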
Ting Lou [Fri, 29 Nov 2024 00:09:46 +0000 (08:09 +0800)]
llava: return false instead of exit (#10546)
Georgi Gerganov [Thu, 28 Nov 2024 18:46:40 +0000 (20:46 +0200)]
ggml : remove redundant copyright notice + update authors
Georgi Gerganov [Thu, 28 Nov 2024 18:45:07 +0000 (20:45 +0200)]
llama : add missing model types
Xuan Son Nguyen [Thu, 28 Nov 2024 18:17:49 +0000 (19:17 +0100)]
server : (tests) don't use thread for capturing stdout/stderr, bump openai client library (#10568)
* server : (tests) don't use thread for capturing stdout/stderr
* test: bump openai to 1.55.2
* bump openai to 1.55.3
Johannes Gäßler [Thu, 28 Nov 2024 17:15:25 +0000 (18:15 +0100)]
common: fix warning message when no GPU found (#10564)
Random Fly [Thu, 28 Nov 2024 15:03:11 +0000 (23:03 +0800)]
docs: fix outdated usage of llama-simple (#10565)
Diego Devesa [Thu, 28 Nov 2024 14:58:54 +0000 (15:58 +0100)]
ci : fix tag name in cuda and hip releases (#10566)
Georgi Gerganov [Thu, 28 Nov 2024 12:56:37 +0000 (14:56 +0200)]
ggml : fix row condition for i8mm kernels (#10561)
ggml-ci
Georgi Gerganov [Thu, 28 Nov 2024 12:56:23 +0000 (14:56 +0200)]
cmake : fix ARM feature detection (#10543)
ggml-ci
Shupei Fan [Thu, 28 Nov 2024 12:52:03 +0000 (20:52 +0800)]
ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)
* ggml-cpu: support IQ4_NL_4_4 by runtime repack
* ggml-cpu: add __ARM_FEATURE_DOTPROD guard
Sergio López [Thu, 28 Nov 2024 11:51:38 +0000 (12:51 +0100)]
kompute : improve backend to pass test_backend_ops (#10542)
* kompute: op_unary: reject unsupported parameters
Signed-off-by: Sergio Lopez <redacted>
* kompute: softmax: implement ALiBi support
Signed-off-by: Sergio Lopez <redacted>
* kompute: rope: implement neox and phi3 support
Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_q4_k permuted support
Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_[q4_0|q4_1|q8_0] permuted support
Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_f16 permuted support
Signed-off-by: Sergio Lopez <redacted>
* kompute: op_mul_mat_q6_k permuted support
Signed-off-by: Sergio Lopez <redacted>
---------
Signed-off-by: Sergio Lopez <redacted>
Ruixin Huang [Thu, 28 Nov 2024 07:27:11 +0000 (15:27 +0800)]
CANN: Update cann.md to display correctly in CLion (#10538)
leo-pony [Thu, 28 Nov 2024 07:25:24 +0000 (15:25 +0800)]
CANN: Fix SOC_TYPE compile bug (#10519)
* CANN: Fix the build failing on Ascend310P in two cases:
1) manually specifying SOC_TYPE
2) some unusual compile environments
* Update the CANN backend news content: support F16 and F32 data type models for the Ascend 310P NPU.
* fix a CANN compile failure: the assert in the Ascend kernel function is not supported on some CANN versions
Chenguang Li [Thu, 28 Nov 2024 06:24:46 +0000 (14:24 +0800)]
CANN: ROPE operator optimization (#10540)
* [cann] ROPE operator optimization
Co-authored-by: noemotiovon <redacted>
Xuan Son Nguyen [Wed, 27 Nov 2024 21:30:52 +0000 (22:30 +0100)]
common : fix duplicated file name with hf_repo and hf_file (#10550)
uvos [Wed, 27 Nov 2024 16:10:08 +0000 (17:10 +0100)]
Add some minimal optimizations for CDNA (#10498)
* Add some minimal optimizations for CDNA
* ggml_cuda: set launch bounds also for GCN as it helps there too
Diego Devesa [Wed, 27 Nov 2024 10:03:25 +0000 (11:03 +0100)]
ci : faster CUDA toolkit installation method and use ccache (#10537)
* ci : faster CUDA toolkit installation method and use ccache
* remove fetch-depth
* only pack CUDA runtime on master
Georgi Gerganov [Wed, 27 Nov 2024 09:22:14 +0000 (11:22 +0200)]
metal : fix group_norm support condition (#0)
Georgi Gerganov [Wed, 27 Nov 2024 09:10:42 +0000 (11:10 +0200)]
sync : ggml
Frankie Robertson [Tue, 26 Nov 2024 13:50:26 +0000 (15:50 +0200)]
Do not include arm_neon.h when compiling CUDA code (ggml/1028)
Jeff Bolz [Wed, 27 Nov 2024 07:32:54 +0000 (01:32 -0600)]
vulkan: define all quant data structures in types.comp (#10440)
Jeff Bolz [Wed, 27 Nov 2024 07:30:27 +0000 (01:30 -0600)]
vulkan: Handle GPUs with less shared memory (#10468)
There have been reports of failure to compile on systems with <= 32KB
of shared memory (e.g. #10037). This change makes the large tile size
fall back to a smaller size if necessary, and makes mul_mat_id fall
back to CPU if there's only 16KB of shared memory.
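A sketch of the fallback policy as described: the 16 KB / 32 KB thresholds come from the message, while the path names and return convention are illustrative.

```cpp
// illustrative decision function for the shared-memory fallback
#include <cstddef>

enum class mulmat_path { gpu_large_tile, gpu_small_tile, cpu_fallback };

mulmat_path pick_mul_mat_id_path(size_t shared_mem_bytes) {
    if (shared_mem_bytes <= 16 * 1024) {
        return mulmat_path::cpu_fallback;    // not enough for mul_mat_id at all
    }
    if (shared_mem_bytes <= 32 * 1024) {
        return mulmat_path::gpu_small_tile;  // large tile doesn't fit
    }
    return mulmat_path::gpu_large_tile;
}
```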
Jeff Bolz [Wed, 27 Nov 2024 07:21:59 +0000 (01:21 -0600)]
vulkan: further optimize q5_k mul_mat_vec (#10479)
Jeff Bolz [Wed, 27 Nov 2024 07:08:54 +0000 (01:08 -0600)]
vulkan: skip integer div/mod in get_offsets for batch_idx==0 (#10506)
Jeff Bolz [Wed, 27 Nov 2024 07:00:50 +0000 (01:00 -0600)]
vulkan: optimize Q2_K and Q3_K mul_mat_vec (#10459)
Diego Devesa [Tue, 26 Nov 2024 21:12:10 +0000 (22:12 +0100)]
ci : fix cuda releases (#10532)
Shane A [Tue, 26 Nov 2024 20:55:29 +0000 (12:55 -0800)]
Add OLMo 2 model in docs (#10530)
* Add link to OLMo 2 model in docs
* Change link to landing page
Diego Devesa [Tue, 26 Nov 2024 20:13:54 +0000 (21:13 +0100)]
ci : remove nix workflows (#10526)
Diego Devesa [Tue, 26 Nov 2024 20:01:47 +0000 (21:01 +0100)]
llama : disable warnings for 3rd party sha1 dependency (#10527)