git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
Sigbjørn Skjæret [Thu, 8 May 2025 13:34:29 +0000 (15:34 +0200)]
convert : support rope_scaling type and rope_type (#13349)
welix [Thu, 8 May 2025 13:03:53 +0000 (22:03 +0900)]
mtmd : fix the calculation of n_tokens for smolvlm (#13381)
Co-authored-by: Taichi Nishimura <redacted>
Georgi Gerganov [Thu, 8 May 2025 11:28:33 +0000 (14:28 +0300)]
context : allow cache-less context for embeddings (#13108)
* context : allow cache-less context for embeddings
ggml-ci
* context : enable reranking with encode()
ggml-ci
* context : encode() clears embd_seq
ggml-ci
* examples : use llama_encode() when appropriate
ggml-ci
* models : nomic bert moe does not require KV cache
* llama : update comments for llama_decode/llama_encode
ggml-ci
* context : update warning log [no ci]
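A minimal sketch of what cache-less embedding extraction looks like after this change. The function names come from the public llama.h API, but the exact parameters and flow here are an illustration, not code from the PR:

    #include "llama.h"

    // sketch: embed one pre-tokenized sequence without a KV cache
    const float * embed_seq(llama_model * model, const llama_token * tokens, int n_tokens) {
        llama_context_params cparams = llama_context_default_params();
        cparams.embeddings   = true;                    // request embedding output
        cparams.pooling_type = LLAMA_POOLING_TYPE_MEAN; // one pooled vector per sequence

        llama_context * ctx = llama_init_from_model(model, cparams);

        llama_batch batch = llama_batch_init(n_tokens, 0, 1);
        for (int i = 0; i < n_tokens; i++) {
            batch.token[i]     = tokens[i];
            batch.pos[i]       = i;
            batch.n_seq_id[i]  = 1;
            batch.seq_id[i][0] = 0;
            batch.logits[i]    = true; // mark every token; pooling happens internally
        }
        batch.n_tokens = n_tokens;

        llama_encode(ctx, batch); // encoder path: no KV cache involved

        return llama_get_embeddings_seq(ctx, 0); // llama_n_embd(model) floats
        // (freeing batch/ctx elided for brevity)
    }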
Georgi Gerganov [Thu, 8 May 2025 11:26:50 +0000 (14:26 +0300)]
context : remove logits_all flag (#13284)
* context : remove logits_all flag
ggml-ci
* llama : remove logits_all flag + reorder llama_context_params
ggml-ci
Diego Devesa [Thu, 8 May 2025 11:15:28 +0000 (13:15 +0200)]
ci : move release workflow to a separate file (#13362)
Diego Devesa [Thu, 8 May 2025 11:15:15 +0000 (13:15 +0200)]
llama : print size and type of overridden tensors (#13364)
Alberto Cabrera Pérez [Thu, 8 May 2025 09:08:01 +0000 (10:08 +0100)]
sycl: addressing non-contiguous src1 mul_mats (nc and batched) (#13343)
* sycl: fixed non-contiguous src1 mul_mats (nc and batched)
* Fixed wrong static_cast inside kernel
Diego Devesa [Wed, 7 May 2025 14:36:33 +0000 (16:36 +0200)]
docker : disable arm64 and intel images (#13356)
Georgi Gerganov [Wed, 7 May 2025 13:39:36 +0000 (16:39 +0300)]
sync : ggml
ggml-ci
Daniel Bevenius [Mon, 5 May 2025 11:09:35 +0000 (13:09 +0200)]
whisper: remove MSVC warning pragmas (whisper/3090)
* ggml : remove MSVC warning pragmas
This commit removes the MSVC-specific pragmas as these are now handled
in ggml/CMakeLists.txt.
* whisper : remove MSVC warning pragmas
This commit removes the MSVC-specific pragmas. These are now handled in
the ggml/CMakeLists.txt file.
Jared Tweed [Fri, 2 May 2025 09:41:35 +0000 (02:41 -0700)]
cmake : removed stdc++fs (whisper/3097)
* removed stdc++fs
* kept line, but removed stdc++fs
Sigbjørn Skjæret [Wed, 7 May 2025 10:49:27 +0000 (12:49 +0200)]
llama : deci : support ffn-free with attention (#13296)
Ycros [Wed, 7 May 2025 08:23:28 +0000 (18:23 +1000)]
common : Add a warning when we can't match samplers from a string or char. (#13330)
R0CKSTAR [Wed, 7 May 2025 07:48:23 +0000 (15:48 +0800)]
cuda : remove nrows_x in mul_mat_q_process_tile (#13325)
Signed-off-by: Xiaodong Ye <redacted>
Georgi Gerganov [Wed, 7 May 2025 07:28:02 +0000 (10:28 +0300)]
examples : remove infill (#13283)
ggml-ci
piDack [Wed, 7 May 2025 07:23:11 +0000 (15:23 +0800)]
llama : support tied embeddings for chatglm models (#13328)
Johannes Gäßler [Tue, 6 May 2025 21:35:51 +0000 (23:35 +0200)]
CUDA: mix virt/real CUDA archs for GGML_NATIVE=OFF (#13135)
Xuan-Son Nguyen [Tue, 6 May 2025 20:40:24 +0000 (22:40 +0200)]
clip : refactor graph builder (#13321)
* mtmd : refactor graph builder
* fix qwen2vl
* clean up siglip cgraph
* pixtral migrated
* move minicpmv to a dedicated build function
* move max_feature_layer to build_llava
* use build_attn for minicpm resampler
* fix windows build
* add comment for batch_size
* also support tinygemma3 test model
* qwen2vl does not use RMS norm
* fix qwen2vl norm (2)
DocShotgun [Tue, 6 May 2025 20:36:24 +0000 (13:36 -0700)]
sampling : make top_n_sigma no-op at <=0 or a single candidate (#13345)
oobabooga [Tue, 6 May 2025 18:24:15 +0000 (15:24 -0300)]
sampling : don't consider -infinity values in top_n_sigma (#13344)
Diego Devesa [Tue, 6 May 2025 18:15:31 +0000 (20:15 +0200)]
cmake : remove arm64 msvc presets (#13342)
Akarshan Biswas [Tue, 6 May 2025 14:57:06 +0000 (20:27 +0530)]
SYCL: Disable reorder optimize by default and stop setting tensor extras when optimize is disabled (#13254)
* SYCL: Do not set tensor extras when reorder optimize is disabled
* SYCL: Disable reorder optimize by default
Xuan-Son Nguyen [Tue, 6 May 2025 12:25:40 +0000 (14:25 +0200)]
llama : fix build_ffn without gate (#13336)
* llama : fix build_ffn without gate
* fix build on windows
* Revert "fix build on windows"
This reverts commit fc420d3c7eef3481d3d2f313fef2757cb33a7c56.
Johannes Gäßler [Tue, 6 May 2025 11:58:51 +0000 (13:58 +0200)]
CUDA: fix bad asserts for partial offload (#13337)
Sigbjørn Skjæret [Tue, 6 May 2025 09:12:06 +0000 (11:12 +0200)]
convert : qwen2/3moe : set yarn metadata if present (#13331)
* set yarn metadata if present
* add comment about enabling YaRN
Co-authored-by: Xuan-Son Nguyen <redacted>
---------
Co-authored-by: Xuan-Son Nguyen <redacted>
Johannes Gäßler [Tue, 6 May 2025 06:36:46 +0000 (08:36 +0200)]
CUDA: fix --split-mode row for MMQ (#13323)
compilade [Tue, 6 May 2025 02:27:31 +0000 (22:27 -0400)]
gguf-py : avoid requiring pyside6 for other scripts (#13036)
- gguf-py : remove gguf-py/gguf/scripts/__init__.py because it's not needed
Implicit namespaces have been supported since Python 3.3 (https://peps.python.org/pep-0420/),
and the entrypoints in pyproject.toml can refer directly to the main functions.
Johannes Gäßler [Mon, 5 May 2025 20:32:13 +0000 (22:32 +0200)]
CUDA: fix logic for clearing padding with -ngl 0 (#13320)
oobabooga [Mon, 5 May 2025 20:12:19 +0000 (17:12 -0300)]
sampling : Integrate Top-nσ into main sampling chain (and add it to the server) (#13264)
* sampling: add Top-nσ sampler to `llama-server` and sampler ordering
* revert: sampler ordering
* revert: VS' crappy auto-formatting
* revert: VS' crappy auto-formatting pt.2
* revert: my crappy eyesight...
* sampling: add XTC to Top-nσ sampler chain
* sampling: add Dyna. Temp. to Top-nσ sampler chain
* sampling: actually remove Top-nσ from sampler (oops)
* Integrate top_n_sigma into main sampler chain
* Define COMMON_SAMPLER_TYPE_TOP_N_SIGMA
* Formatting
* Lint
* Exit early in the sampler if nsigma < 0
---------
Co-authored-by: CasualAutopsy <redacted>
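For reference, the top-nσ rule itself: keep only candidates whose logit lies within n standard deviations of the maximum logit. A standalone sketch of that rule (not the actual llama.cpp sampler code), folding in the related fixes above: -infinity entries are excluded from the statistics, and n <= 0 or a single candidate is a no-op:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    void top_n_sigma(std::vector<float> & logits, float n) {
        if (n <= 0.0f || logits.size() <= 1) {
            return; // no-op cases
        }
        float max_l = -INFINITY, sum = 0.0f;
        size_t cnt = 0;
        for (float l : logits) {
            if (l == -INFINITY) continue; // already masked by earlier samplers
            max_l = std::max(max_l, l);
            sum += l;
            cnt++;
        }
        if (cnt == 0) return;
        const float mean = sum / cnt;
        float var = 0.0f;
        for (float l : logits) {
            if (l == -INFINITY) continue;
            var += (l - mean) * (l - mean);
        }
        const float sigma = std::sqrt(var / cnt);
        for (float & l : logits) {
            if (l < max_l - n * sigma) {
                l = -INFINITY; // mask the tail below the threshold
            }
        }
    }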
igardev [Mon, 5 May 2025 14:03:31 +0000 (17:03 +0300)]
server : Webui - change setText command from parent window to also send the message. (#13309)
* setText command from parent window for llama-vscode now sends the message automatically.
* Upgrade package versions to fix vulnerabilities with "npm audit fix" command.
* Fix code formatting.
* Add index.html.gz changes.
* Revert "Upgrade packages versions to fix vulnerabilities with "npm audit fix" command."
This reverts commit
67687b7fda8a293724ba92ea30bb151677406bc8 .
* easier approach
* add setTimeout
---------
Co-authored-by: igardev <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Xuan-Son Nguyen [Mon, 5 May 2025 14:02:55 +0000 (16:02 +0200)]
mtmd : rename llava directory to mtmd (#13311)
* mv llava to mtmd
* change ref everywhere
Xuan-Son Nguyen [Mon, 5 May 2025 10:54:44 +0000 (12:54 +0200)]
clip : fix confused naming ffn_up and ffn_down (#13290)
* clip : fix confused naming ffn_up and ffn_down
* rm ffn_i/o/g naming
* rename n_embd, n_ff
* small fix
* no check n_ff
Sigbjørn Skjæret [Mon, 5 May 2025 10:34:26 +0000 (12:34 +0200)]
convert : bailingmoe : set yarn metadata if present (#13312)
Akarshan Biswas [Mon, 5 May 2025 08:09:10 +0000 (13:39 +0530)]
SYCL: Disable mul_mat kernels for noncontiguous tensor b (#13308)
ggml-ci
Xuan-Son Nguyen [Sun, 4 May 2025 21:43:42 +0000 (23:43 +0200)]
mtmd : add C public API (#13184)
* init
* wip
* working version
* add mtmd::bitmaps
* add test target
* rm redundant define
* test: mtmd_input_chunks_free
* rm outdated comment
* fix merging issue
* explicitly create mtmd::input_chunks
* mtmd_input_chunk_copy
* add clone()
* add const to various places
* add warning about breaking changes
* helper: use mtmd_image_tokens_get_n_pos
Diego Devesa [Sun, 4 May 2025 19:25:43 +0000 (21:25 +0200)]
rpc : use backend registry, support dl backends (#13304)
Aaron Teo [Sun, 4 May 2025 17:49:12 +0000 (01:49 +0800)]
ggml : activate s390x simd for Q3_K (#13301)
Signed-off-by: Aaron Teo <redacted>
Diego Devesa [Sun, 4 May 2025 15:05:20 +0000 (17:05 +0200)]
llava/mtmd : fixes to fully support dl backends (#13303)
Diego Devesa [Sun, 4 May 2025 12:20:49 +0000 (14:20 +0200)]
llama : build windows releases with dl backends (#13220)
Johannes Gäßler [Sun, 4 May 2025 12:16:39 +0000 (14:16 +0200)]
CUDA: fix race condition in MMQ stream-k fixup (#13299)
Johannes Gäßler [Sun, 4 May 2025 11:58:38 +0000 (13:58 +0200)]
CUDA: fix race condition in MMQ ids_dst (#13294)
Jeff Bolz [Sun, 4 May 2025 05:17:16 +0000 (00:17 -0500)]
vulkan: Additional type support for unary, binary, and copy (#13266)
Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.
Johannes Gäßler [Sat, 3 May 2025 22:50:37 +0000 (00:50 +0200)]
imatrix: fix oob writes if src1 is not contiguous (#13286)
Xuan-Son Nguyen [Sat, 3 May 2025 18:07:54 +0000 (20:07 +0200)]
clip : revert the change of BOI/EOI token for GLM-edge (⚠️ breaking change) (#13259)
ymcki [Sat, 3 May 2025 15:39:51 +0000 (23:39 +0800)]
llama : Llama-3_1-Nemotron-Ultra-253B-v1 support (#12843)
Diego Devesa [Fri, 2 May 2025 18:27:13 +0000 (20:27 +0200)]
llama : move end-user examples to tools directory (#13249)
* llama : move end-user examples to tools directory
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Georgi Gerganov [Fri, 2 May 2025 17:54:30 +0000 (20:54 +0300)]
sync : ggml (#13268)
* vulkan : kernels for depthwise 2D convolution (CONV_2D_DW) (ggml/1204)
* vulkan : add kernels for depthwise 2d convolution (OP_CONV_2D_DW)
* review: remove src_x/y < 0 checks; add performance tests
* sync : ggml
ggml-ci
* vulkan : fix lint (#0)
---------
Co-authored-by: Acly <redacted>
Georgi Gerganov [Fri, 2 May 2025 17:54:13 +0000 (20:54 +0300)]
context : fix reorder logic (#13267)
ggml-ci
shalinib-ibm [Fri, 2 May 2025 16:53:12 +0000 (22:23 +0530)]
ggml : Enable MMA for BF16 in llamafile_sgemm (#13148)
This patch upstreams llamafile's CPU matrix multiplication kernels for ppc64le, using MMA builtins for the BF16 data type.
The change results in 9x-40x gains in total speed S t/s (i.e. all tokens / total time) across various batch sizes, tested with the llama-batched-bench benchmark.
The patch was tested with the Meta-Llama-3-8B and Mistral-7B models (BF16 models generated with llama-quantize from the corresponding FP32 models) on an IBM POWER10 machine.
Signed-off-by: Shalini Salomi Bodapati <redacted>
Jared Van Bortel [Fri, 2 May 2025 15:42:30 +0000 (11:42 -0400)]
llama-model : support Qwen2 embedding models and pooling_mode_lasttoken (#13245)
Jared Van Bortel [Fri, 2 May 2025 15:41:54 +0000 (11:41 -0400)]
convert : use correct context length for nomic-embed-text-v2 (#13216)
Xuan-Son Nguyen [Fri, 2 May 2025 15:17:15 +0000 (17:17 +0200)]
convert : converting mmproj for Qwen2/2.5VL from convert_hf_to_gguf (#13209)
* wip
* qwen2.5vl ok
* vision: fix models missing "text_config"
* add test
* fix test repo name
* fix 32B model
* Revert "fix 32B model"
This reverts commit 651752f1ae25fe8a01c1e57c18cf2eca80b2774e.
* clarify about 32B
* rm qwen surgery script
* update llava/readme
* move V_ENC_EMBD_PATCH handling to Qwen2VLVisionModel
Georgi Gerganov [Fri, 2 May 2025 14:48:36 +0000 (17:48 +0300)]
kv-cache : separate recurrent vs non-recurrent impl (#12799)
* kv-cache : separate recurrent vs non-recurrent impl (wip)
ggml-ci
* kv-cache : init -> constructor + add llama_memory_params
ggml-ci
* kv-cache : fix callback reference
ggml-ci
* context : llama_kv_cache -> llama_memory_i
ggml-ci
* context : move memory creation logic to model
ggml-ci
* llama : remove reference of memory during encode
ggml-ci
* kv-cache : hide padding details in the implementation
ggml-ci
* kv-cache : add ubatch_next()
ggml-ci
* context : simplify sbatch logic
ggml-ci
* kv-cache : hide defrag logic in the implementation
ggml-ci
* context : hide kv cache details in implementation
ggml-ci
* build : fix
ggml-ci
* cont : another fix
ggml-ci
* kv-cache : simplify interface (wip)
ggml-ci
* kv-cache : use separate KV cell structs for unified/recurrent
ggml-ci
* kv-cache : clean-up
ggml-ci
* model : better llama_model::create_model() signature
ggml-ci
* kv-cache : fix recurrent seq_rm()
ggml-ci
* kv-cache : replace `struct callbacks` with `llama_model &`
ggml-ci
* kv-cache : replace `struct graph_params` with `llama_context &`
ggml-ci
* kv-cache : fix offload check
ggml-ci
* context : avoid passing unique_ptr
ggml-ci
* kv-cache : avoid using the backends from the llama_context
ref #13113
ggml-ci
* kv-cache : more consistent debug logs [no ci]
* kv-cache : do not pass the full llama_context for kv graphs
ggml-ci
* kv-cache : remove comment
* kv-cache : ggml_rope_ext_inplace -> ggml_rope_ext
ggml-ci
* kv-cache : fix recurrent multi-user case
ggml-ci
* memory : remove comments [no ci]
Sigbjørn Skjæret [Fri, 2 May 2025 10:44:24 +0000 (12:44 +0200)]
llama : orion rope type is neox (#13261)
Sigbjørn Skjæret [Fri, 2 May 2025 10:40:56 +0000 (12:40 +0200)]
llama : plamo rope type is neox (#13260)
piDack [Fri, 2 May 2025 09:06:09 +0000 (17:06 +0800)]
llama-chat : reset glmedge chat template (#13253)
* reset glmedge chat template
* fix glmedge chat template
Shakil Ahmed [Fri, 2 May 2025 08:20:27 +0000 (14:20 +0600)]
mtmd-cli : fix out_of_range when input image path is empty (#13244)
* fix out_of_range error to keep the chat loop running
* Update examples/llava/mtmd-cli.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* mtmd-cli : load image right away
* add a new line for readability
* rm printf
* Update examples/llava/mtmd-cli.cpp
Co-authored-by: Sigbjørn Skjæret <redacted>
* Update examples/llava/mtmd-cli.cpp
---------
Co-authored-by: Sigbjørn Skjæret <redacted>
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Georgi Gerganov [Fri, 2 May 2025 06:48:31 +0000 (09:48 +0300)]
server : add cache reuse card link to help (#13230)
* server : add cache reuse card link to help
* args : use short url
Xuan-Son Nguyen [Fri, 2 May 2025 06:45:10 +0000 (08:45 +0200)]
convert : explicitly disable trust_remote_code for AutoConfig (#13246)
bandoti [Thu, 1 May 2025 22:06:39 +0000 (19:06 -0300)]
ci: fix cross-compile sync issues (#12804)
Justin Santa Barbara [Thu, 1 May 2025 21:32:11 +0000 (17:32 -0400)]
rpc : avoid uninitialized memory in serialize_tensor (#13210)
Zero out the name and padding buffers.
Jesse Gross [Thu, 1 May 2025 20:46:10 +0000 (13:46 -0700)]
ggml: Don't assert fail when tensor data changes (#13222)
The following scenario will cause an assertion failure in the graph
allocator:
- Build and allocate a graph containing a tensor with a non-NULL data
pointer
- Build and allocate a new graph where that data is NULL
Result:
ggml-alloc.c:819: GGML_ASSERT(talloc->buffer_id >= 0) failed
This happens during revalidation: based on the current graph, the
allocator assumes the memory was previously allocated, but in reality
the previous graph was different. In this situation, we should do a
full reallocation pass.
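A hedged sketch of that failing sequence against the public ggml-alloc API (names as declared in ggml-alloc.h / ggml-backend.h); graph1 and graph2 stand in for caller-built graphs in which the same tensor's data pointer changes from non-NULL to NULL:

    #include "ggml-alloc.h"
    #include "ggml-backend.h"

    void realloc_across_graphs(struct ggml_cgraph * graph1, struct ggml_cgraph * graph2) {
        ggml_gallocr_t galloc = ggml_gallocr_new(ggml_backend_cpu_buffer_type());

        // graph1: the tensor has a pre-assigned (non-NULL) data pointer,
        // so the allocator records it as externally allocated
        ggml_gallocr_alloc_graph(galloc, graph1);

        // graph2: the same tensor now has data == NULL; previously the
        // revalidation pass hit GGML_ASSERT(talloc->buffer_id >= 0) here,
        // with this fix the mismatch triggers a full reallocation instead
        ggml_gallocr_alloc_graph(galloc, graph2);

        ggml_gallocr_free(galloc);
    }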
Diego Devesa [Thu, 1 May 2025 19:48:08 +0000 (21:48 +0200)]
build : fix build info on windows (#13239)
* build : fix build info on windows
* fix cuda host compiler msg
Loïc Carrère [Thu, 1 May 2025 19:32:21 +0000 (21:32 +0200)]
clip : (minicpmv) Re-enable upscaling of images smaller than the CLIP image size (#13237)
matteo [Thu, 1 May 2025 19:16:38 +0000 (21:16 +0200)]
llama-chat : update GLM4 chat template (#13238)
* update GLM4 chat template
* Update chat template
Co-authored-by: Xuan-Son Nguyen <redacted>
---------
Co-authored-by: Xuan-Son Nguyen <redacted>
Jeff Bolz [Thu, 1 May 2025 18:49:39 +0000 (13:49 -0500)]
vulkan: Add bfloat16 support (#12554)
* vulkan: Add bfloat16 support
This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16.
The extension is required for coopmat multiply support, but matrix-vector
multiply trivially promotes bf16 to fp32 and doesn't require the extension.
The copy/get_rows shaders also don't require the extension.
It's probably possible to fall back to non-coopmat and promote to fp32 when
the extension isn't supported, but this change doesn't do that.
The coopmat support also requires a glslc that supports the extension, which
currently requires a custom build.
* vulkan: Support bf16 tensors without the bf16 extension or coopmat support
Compile a variant of the scalar mul_mm shader that will promote the bf16
values to float, and use that when either the bf16 extension or the coopmat
extensions aren't available.
* vulkan: bfloat16 fixes (really works without bfloat16 support now)
* vulkan: fix spirv-val failure and reenable -O
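The "trivial" promotion works because bfloat16 is simply the upper 16 bits of an IEEE-754 float32. A host-side C++ illustration of the widening (the real conversion happens inside the Vulkan shaders, not in code like this):

    #include <cstdint>
    #include <cstring>

    // widen bf16 -> f32: shift the 16 stored bits into the high half of a
    // float32 and leave the low mantissa bits zero
    float bf16_to_f32(uint16_t h) {
        const uint32_t bits = (uint32_t) h << 16;
        float f;
        std::memcpy(&f, &bits, sizeof(f));
        return f;
    }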
Jeff Bolz [Thu, 1 May 2025 18:19:31 +0000 (13:19 -0500)]
vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (#13191)
* vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader
Johannes Gäßler [Thu, 1 May 2025 18:18:56 +0000 (20:18 +0200)]
test: non-cont. b in test-backend-ops -o MUL_MAT (#13187)
Georgi Gerganov [Thu, 1 May 2025 14:07:13 +0000 (17:07 +0300)]
sync : ggml
ggml-ci
Daniel Bevenius [Thu, 1 May 2025 08:05:24 +0000 (10:05 +0200)]
whisper : add check that target name exists (whisper/3103)
This commit adds a check to make sure that the target exists before
trying to add compile options to ignore warnings when using MSVC.
The motivation is that the build currently breaks depending on the
CMake options provided. With this fix it should be possible to build
even if the targets are not actually available.
Refs: https://github.com/ggml-org/whisper.cpp/pull/3090#issuecomment-2842760104
Daniel Bevenius [Tue, 29 Apr 2025 13:47:55 +0000 (15:47 +0200)]
ggml : suppress Windows compiler warnings (whisper/3075)
* whisper: suppress Windows compiler warnings
This commit disables compiler warnings on Windows when using MSVC.
The motivation for these changes is that some compilers, for example
Windows MSVC, generate warnings for these conversions, and there are
quite a few of them. This makes it difficult to spot newly introduced
warnings, and it is also a problem for users/embedders of ggml, who
have to separate these warnings from their own.
* squash! whisper: suppress Windows compiler warnings
Move ggml-related warnings into ggml. This commit also fixes the
indentation and adds a missing whitespace to the if statement.
Xuan-Son Nguyen [Thu, 1 May 2025 15:05:42 +0000 (17:05 +0200)]
mtmd : add **vision** support for Mistral Small 3.1 (#13231)
* convert ok
* load ok, missing patch merger
* ah sheet it works
* update llava/readme
* add test
* fix test
Xuan-Son Nguyen [Thu, 1 May 2025 08:23:25 +0000 (10:23 +0200)]
arg : remove CURLINFO_EFFECTIVE_METHOD (#13228)
Jared Van Bortel [Thu, 1 May 2025 07:09:41 +0000 (03:09 -0400)]
llama-model : fix the reported size class for nomic-embed-text-v2-moe (#13223)
Georgi Gerganov [Thu, 1 May 2025 06:59:02 +0000 (09:59 +0300)]
sync : ggml
Diego Devesa [Wed, 30 Apr 2025 13:20:40 +0000 (15:20 +0200)]
ggml : fix ggml_gallocr_ptr type (ggml/1205)
Georgi Gerganov [Thu, 24 Apr 2025 15:59:06 +0000 (18:59 +0300)]
cuda : fix unused variable compile warning (whisper/0)
ggml-ci
Johannes Gäßler [Wed, 30 Apr 2025 21:12:59 +0000 (23:12 +0200)]
CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199)
Xuan-Son Nguyen [Wed, 30 Apr 2025 20:29:15 +0000 (22:29 +0200)]
arg : -hf do not fail if url mismatch (#13219)
* arg : -hf do not fail if url mismatch
* do not return if cannot parse metadata json
ddh0 [Wed, 30 Apr 2025 20:28:43 +0000 (15:28 -0500)]
fix typo: `n_ctx_pre_seq` -> `n_ctx_per_seq` (#13221)
Xuan-Son Nguyen [Wed, 30 Apr 2025 14:56:24 +0000 (16:56 +0200)]
convert : improve model arch handling (#13122)
* convert : improve model arch handling
* use AutoConfig
* rm trust_remote_code
* Update convert_hf_to_gguf.py
* fix self.block_count for vision
* fix NomicBertModel
Tatsuya Tanaka [Wed, 30 Apr 2025 13:25:20 +0000 (22:25 +0900)]
llava : remove duplicate include (#13207)
Olivier Chafik [Wed, 30 Apr 2025 12:52:35 +0000 (13:52 +0100)]
common : add -jf / --json-schema-file flag (#12011)
Jeff Bolz [Wed, 30 Apr 2025 12:38:37 +0000 (07:38 -0500)]
vulkan: use uint array index to avoid glslang bug (#13193)
shalinib-ibm [Wed, 30 Apr 2025 11:17:08 +0000 (16:47 +0530)]
ggml : fix ppc64le build (#13176)
The build fails with a compilation error on PowerPC; this patch fixes it.
Tested with unit tests run via
cmake --build <build_dir> && cd <build_dir> && make test
Signed-off-by: Shalini Salomi Bodapati <redacted>
Xuan-Son Nguyen [Wed, 30 Apr 2025 11:06:15 +0000 (13:06 +0200)]
convert : correct typo image_mean --> image_std (#13208)
Aaron Teo [Wed, 30 Apr 2025 09:47:35 +0000 (17:47 +0800)]
feat(ggml-cpu): enable z17 compile (#13182)
z17 compilation requires GCC 15.1.0 or later
Signed-off-by: Aaron Teo <redacted>
Xuan-Son Nguyen [Wed, 30 Apr 2025 08:46:32 +0000 (10:46 +0200)]
arg : allow using -hf offline (#13202)
* arg : allow using -hf offline
* add more comments in code [no ci]
Xuan-Son Nguyen [Wed, 30 Apr 2025 08:44:07 +0000 (10:44 +0200)]
docker : do not build tests (#13204)
* docker : do not build tests
* include "ggml-cpu.h"
xiaofei [Wed, 30 Apr 2025 06:29:22 +0000 (14:29 +0800)]
rpc : fix cache directory initialization (#13188)
Signed-off-by: xiaofei <redacted>
Johannes Gäßler [Tue, 29 Apr 2025 21:32:04 +0000 (23:32 +0200)]
scripts: n_depth for compare-llama-bench [no ci] (#13201)
matteo [Tue, 29 Apr 2025 18:33:10 +0000 (20:33 +0200)]
server : Prefilling assistant message in openai compatible API (#13174)
* Prefilling assistant message in openai compatible API
* fixed indentation
* fixed code convention
* simplify method usage
* no more than one assistant message at end of messages
* merge checks into prefill code
* Update examples/server/utils.hpp
---------
Co-authored-by: matteo <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
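The gist of the feature: when the messages array ends with an assistant message, the server uses that content to seed the model's reply instead of opening a new turn. A simplified sketch with hypothetical types (not the actual utils.hpp code):

    #include <string>
    #include <vector>

    struct chat_msg {
        std::string role;
        std::string content;
    };

    // pop a trailing assistant message and return its content as the prefill
    std::string extract_prefill(std::vector<chat_msg> & messages) {
        std::string prefill;
        if (!messages.empty() && messages.back().role == "assistant") {
            prefill = messages.back().content; // generation continues from here
            messages.pop_back();               // not rendered as a completed turn
        }
        return prefill;
    }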
Georgi Gerganov [Tue, 29 Apr 2025 17:22:57 +0000 (20:22 +0300)]
sampling : when top-k <= 0 -> noop (#13173)
ggml-ci
Alberto Cabrera Pérez [Tue, 29 Apr 2025 15:24:36 +0000 (16:24 +0100)]
llama-bench: fixed size of fields to correctly map to values (#13183)
Johannes Gäßler [Tue, 29 Apr 2025 14:00:27 +0000 (16:00 +0200)]
CUDA: fix non-cont. inputs for batched mat mul (#13155)
Sigbjørn Skjæret [Tue, 29 Apr 2025 11:25:53 +0000 (13:25 +0200)]
llama : llm_type order by size (#13177)
Xuan-Son Nguyen [Tue, 29 Apr 2025 09:47:04 +0000 (11:47 +0200)]
mtmd : add qwen2vl and qwen2.5vl (#13141)
* llava : add clip_n_output_tokens, deprecate clip_n_patches
* mtmd : add qwen2vl and qwen2.5vl
* decode_embd_batch::set_position_...
* working version
* deprecate llama-qwen2vl-cli
* correct order W, H of clip_embd_nbytes_by_img
* edit existing line in hot topics
Sigbjørn Skjæret [Tue, 29 Apr 2025 09:00:31 +0000 (11:00 +0200)]
llama : set qwen3 model type sizes (#13175)
Xuan-Son Nguyen [Tue, 29 Apr 2025 06:45:49 +0000 (08:45 +0200)]
llama-graph : fix text position for mrope (#13159)
* llama-graph : fix text position for mrope
* fix typo
* explicitly set 4th dim in the loop
AT [Mon, 28 Apr 2025 19:52:15 +0000 (15:52 -0400)]
model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466)
* Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture
- Adds MoE-based embedding model supporting multilingual embeddings.
- Selects architecture variant based on hyperparameter detection (MoE layers).
- Removes unnecessary subclass initialization checks for clarity.
https://www.nomic.ai/blog/posts/nomic-embed-text-v2
Co-authored-by: Jared Van Bortel <redacted>
* fix tokenizer
* don't rename this tensor
---------
Co-authored-by: Jared Van Bortel <redacted>