git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/log
Aaron Teo [Fri, 15 Aug 2025 13:11:22 +0000 (21:11 +0800)]
ggml: initial IBM zDNN backend (llama/14975)
* ggml-zdnn: initial backend impl
Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: temp change z17 to arch15
Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix build bugs
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tensor->extra logging check
Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add layout name mapping, ztensor information
Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: separate logging into its own line
Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add shape comparison
Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: add ggml_tensor shape log
Signed-off-by: Aaron Teo <redacted>
ggml-zdnn: fix incorrect shape logging
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add output buffer check
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: run compute and store into tensor->extra
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add set_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more loggers
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update set_tensor logging to check only for matmul
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: last working matmul version
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add comments to prevent accidentally deleting lines
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: support op out_prod
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update op out_prod to use tensor->extra
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite the backend implementation
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix new impl
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler warnings and bugfixes
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: test ztensor finding in init_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: implement at least 1 op to test
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: assign tensor->extra to buffer
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add check for view tensors to prevent init_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rework init_tensor to create new buffers
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to std vector instead of array
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch buffers back and set to arbitrary number
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl init_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update supports_op matmul matrix
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix incorrect ztensor shape, reduce memory padding
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: impl matmul
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix compiler error missing type
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing data transform call
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias init_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: tighten memory usage, change string allocation
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias ztensor and data free
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add bias data transform
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more debug info for extra buffer transform
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logger to check if mat mul ops go through set_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: activate bias transform in matmul
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move weights transform into mulmat
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add more safeguards in matmul
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sequencing of transforms
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bugfix transform ztensor vs origtensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: figure out why sigtrap is happening
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix sigsegv
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move everything back to local declaration
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: move bias data to local also
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring back working matmul
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: rewrite into mre
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing vector import in header
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to fix sigsegv
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing load tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix invalid ztensor buffer release
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add logging to debug free buffer
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: remove free_buffer debug info
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add parmblkformat detections
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add nnpa installed detection
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add zdnn_init call for static libs
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add init_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing invalid buffer
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: switch to using deque to fix pointer deref problem
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add weights logging to check
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt to use unique ptr
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add tensor to pre_tfm_desc logging
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add inputs logging
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable op_none initialisation for testing
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix missing return from init_tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: load ztensors in cgraph exec
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: work on moving output ztensor as well
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable logging and breakpoints for full test
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at manually changing the layout
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at using default nhwc format instead
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable global load ztensor for now
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix erroneous output load tensor
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: add guards to prevent loading ztensor if transformed
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code cleanup
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: bring load ztensor back to init routine
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: code clean up
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix ztensor deallocation abort
stabilise ggml <-> zdnn api
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up matmul selection
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: clean up project structure
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: update documentation, prepare for upstream
Signed-off-by: Aaron Teo <redacted>
* chore: add codeowners
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: disable batched matmul
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: attempt at fixing tensor views during matmul
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: deny all view tensors directly
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix pr comments
Signed-off-by: Aaron Teo <redacted>
* docs: update ops docs for zdnn
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: redo test-backend-ops for ops.md
Signed-off-by: Aaron Teo <redacted>
* ggml-zdnn: fix typo in build-s390x.md
Signed-off-by: Aaron Teo <redacted>
* codeowners: remove taronaeo for now
Signed-off-by: Aaron Teo <redacted>
* Revert "codeowners: remove taronaeo for now"
This reverts commit 411ea4ed78d08778967bd0bd33a6538cfcbe082f.
* ggml-zdnn: remove unused ggml_zdnn macro
Signed-off-by: Aaron Teo <redacted>
---------
Signed-off-by: Aaron Teo <redacted>
Johannes Gäßler [Thu, 14 Aug 2025 21:21:24 +0000 (23:21 +0200)]
CUDA: fix negative KV_max values in FA (llama/15321)
uvos [Thu, 14 Aug 2025 14:23:56 +0000 (16:23 +0200)]
HIP: Cleanup hipification header (llama/15285)
add explicit conversion operator to support older versions of rocm
Switch over to hip_bf16 from legacy hip_bfloat16
Simplify RDNA3 define
Lower the switchover to the new hipblas api to rocm 6.5, as this version is used for the rocm 7.0 previews
---------
Co-authored-by: Johannes Gäßler <redacted>
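For illustration, the "explicit conversion operator" fix follows the pattern below — a minimal C++ sketch with a hypothetical bf16 wrapper, not the actual HIP header code:

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// Hypothetical bf16 wrapper; the real code uses hip_bf16 from the HIP headers.
struct bf16_t {
    uint16_t bits;

    // Explicit, so older toolchains never pick it up in unintended
    // implicit conversions.
    explicit operator float() const {
        uint32_t u = (uint32_t)bits << 16;  // bf16 is the top 16 bits of a float
        float f;
        std::memcpy(&f, &u, sizeof(f));
        return f;
    }
};

int main() {
    bf16_t x{0x3F80};               // 1.0f in bf16
    std::printf("%f\n", (float)x);  // explicit cast required
}
```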
Jeff Bolz [Thu, 14 Aug 2025 13:38:10 +0000 (08:38 -0500)]
vulkan: perf_logger improvements (llama/15246)
* vulkan: perf_logger improvements
- Account for batch dimension in flops calculation.
- Fix how "_VEC" is detected for mat_mul_id.
- Fix "n" dimension for mat_mul_id (in case of broadcasting).
- Include a->type in name.
* use <=mul_mat_vec_max_cols rather than ==1
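Accounting for the batch dimension amounts to multiplying the usual 2*M*N*K count by the batch dims; a hedged sketch (helper name is illustrative, not the perf_logger code):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical: flops for a batched mat_mul, counting the batch dimensions
// (ne2, ne3) that the logger previously ignored.
static uint64_t matmul_flops(uint64_t m, uint64_t n, uint64_t k,
                             uint64_t ne2, uint64_t ne3) {
    return 2ull * m * n * k * ne2 * ne3;  // 1 mul + 1 add per MAC
}

int main() {
    std::printf("%llu\n",
        (unsigned long long)matmul_flops(4096, 512, 4096, 8, 1));
}
```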
Jason Ni [Thu, 14 Aug 2025 11:17:51 +0000 (19:17 +0800)]
ggml: fix ggml_conv_1d_dw bug (ggml/1323)
* ggml: fix ggml_conv_1d_dw bug
* Fixed conv1d_dw weight tensor dimension.
Sigbjørn Skjæret [Thu, 14 Aug 2025 10:22:07 +0000 (12:22 +0200)]
cuda : fix GGML_CUDA_GRAPHS=OFF (llama/15300)
* fix USE_CUDA_GRAPH=OFF
ggml-ci
* check capture status
* completely disable capturing check instead
Jonathan Graehl [Thu, 14 Aug 2025 10:03:57 +0000 (03:03 -0700)]
finetune: SGD optimizer, more CLI args (llama/13873)
* examples/finetune -opt SGD (stochastic gradient descent) memory opt
add unit tested GGML_OPT_OPTIMIZER_SGD to ggml - avoids allocating
m, v tensors.
support finetune.cpp arg -opt SGD (or sgd). (default adamw as before)
llama 3.2-1b-F32 result: observed 11gb gpu ram (41 sec/epoch)
when using SGD instead of 19gb (55 sec/epoch) using adamw.
(wikipedia 100 lines finetune)
(using the same GPU memory, adamw can only do 512 batch/context before OOM, reaching:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00
SGD is superior, though it converges slower, with a max of 1728 batch/context before OOM (esp. see the better validation perf):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)
note: when finetuning long enough (or w/ enough -lr),
validation accuracy *eventually* drops ('catastrophic forgetting')
-lr-half (halflife) option is useful for SGD to avoid oscillation or
super slow underdamped learning (makes setting -lr more forgiving).
The terminal -lr for now is set by -lr-halvings, i.e. if you want at most
1/8 the initial -lr you set -lr-halvings 3.
note: objective loss not directly comparable between adamw, sgd? -
check perplexity or accuracy or consider relative improvements
for convergence
new finetune args -wd 1e-9 to enable weight decay in sgd or adamw,
and max -epochs N (default 2 as before)
cache (1 - wd*alpha) in 'adamw' opt struct -
no noticeable perf benefit, disabled (still done
for new SGD though)
since opt. memory is pre-allocated, the ggml_opt_get_optimizer_params
would probably be able to change between SGD and AdamW with each epoch
but would need to use adamw for the first (unconfirmed - no cmdline arg
to set such a policy yet)
test-opt checks adamw as before and now sgd (except for a few disabled
tests for sgd only; probably just needs logging values and adding
alternate reference values); tolerance on the 'regression'
test is broader for sgd (so we don't need many more epochs)
* Vulkan: Implement GGML_OP_OPT_STEP_SGD
* tests: Fix OPT_STEP_SGD test-backend-ops
* SGD op param store weight-decay and not 1-alpha*wd
* minor + cosmetic changes
* fix vulkan sgd
* try CI fix
---------
Co-authored-by: 0cc4m <redacted>
Co-authored-by: Johannes Gäßler <redacted>
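The SGD path avoids AdamW's m and v moment tensors entirely; below is a minimal C++ sketch of the update with decoupled weight decay and the halflife-style lr decay described above — names are illustrative, not the finetune.cpp API:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Plain SGD step: no m/v moment tensors, unlike AdamW.
static void sgd_step(std::vector<float> & w, const std::vector<float> & g,
                     float lr, float wd) {
    for (size_t i = 0; i < w.size(); ++i) {
        w[i] = w[i] * (1.0f - lr * wd) - lr * g[i];  // decay, then gradient step
    }
}

// -lr-halvings N: decay lr so it reaches initial_lr / 2^N by the last epoch.
static float lr_at_epoch(float lr0, int epoch, int epochs, int halvings) {
    const float t = epochs > 1 ? (float)epoch / (float)(epochs - 1) : 1.0f;
    return lr0 * std::pow(0.5f, (float)halvings * t);
}

int main() {
    std::vector<float> w = {1.0f, -2.0f}, g = {0.1f, -0.1f};
    sgd_step(w, g, lr_at_epoch(1e-3f, 0, 2, 3), 1e-9f);
    std::printf("%f %f\n", w[0], w[1]);
}
```

With -lr-halvings 3 the terminal lr is 1/8 of the initial value, matching the note above.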
uvos [Wed, 13 Aug 2025 18:44:30 +0000 (20:44 +0200)]
HIP: bump requirement to rocm 6.1 (llama/15296)
Judd [Wed, 13 Aug 2025 10:45:15 +0000 (18:45 +0800)]
ggml : update `ggml_rope_multi` (llama/12665)
* update `rope_multi`:
1. add `ggml_rope_multi_inplace`;
2. use `GGML_MROPE_SECTIONS` instead of 4.
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Wed, 13 Aug 2025 08:09:39 +0000 (11:09 +0300)]
ggml : repack block_iq4_nlx8 (llama/14904)
ggml-ci
Oliver Simons [Wed, 13 Aug 2025 08:04:46 +0000 (10:04 +0200)]
CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (llama/15132)
* Factor out `reduce_rows_f32` from common.cuh
This increases iteration cycle speed by not having to recompile
every kernel all the time
* Hide memory-latency by loop unrolling in reduce_rows_f32
* Further optimizations to `reduce_rows_f32`
1. Increase threadblock size to better hide latency of memory requests.
As a consequence of bigger threadblocks, do 2-step summation, using
shared memory to communicate results between invocations
2. Use sum_temp array to reduce waits on sum
3. Adjust num_unroll to reflect bigger threadblock
4. Improve default block_dims, increase support for more block_dims
* Add perf tests for `reduce_rows_f32` kernel
* Add heuristic to toggle 128/512 threads based on sm count
The break-even point was the minimum of the following multiples.
| GPU model | Break-even nrows (multiple of SM count) |
| ----------- | ----------- |
| RTX 4000 SFF ADA | 2.0x |
| RTX 6000 ADA | 2.5x |
| RTX PRO 6000 Blackwell Max-Q | 3.04x |
| RTX PRO 4500 Blackwell | 3.15x |
* Ensure perf gains also for small ncols and large nrows
Alternative to this, one could have also made the number of unrollings
template-able, but that would require compiling the kernel multiple
times, increasing binary size unnecessarily
* Modify perf and unit-tests
* Apply auto-formatting by clang
* Fix CI build failure
See https://github.com/ggml-org/llama.cpp/actions/runs/16798370266/job/47573716079?pr=15132#step:7:486
Building with VS generator worked though.
* Remove sm_count property from `ggml_backend_cuda_context`
Requested by @JohannesGaessler, and should fix remaining CI issues as a
side-effect
* Add CUB-based implementation for GGML_OP_MEAN
Currently this branch is only executed for nrows==1
* Add heuristics to execute CUB branch only when it brings perf
Heuristics were determined on the following HW:
* RTX 4000 SFF ADA
* RTX 6000 ADA
* RTX PRO 6000 Blackwell Max-Q
* RTX PRO 4500 Blackwell
* Add unit-test for CUB-based mean
Tests should run with CUDA Graphs enabled per default on NVGPUs
* Rename `USE_CUB` to `GGML_CUDA_USE_CUB`
Suggested by @JohannesGaessler
* Unindent Preprocessor directives
See https://github.com/ggml-org/llama.cpp/pull/15132#discussion_r2269213506
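The 128/512-thread toggle can be read as a simple work-per-SM test; a sketch with an assumed break-even constant (the measured values above range from 2.0x to 3.15x):

```cpp
#include <cstdio>

// Hypothetical sketch: keep the small block size while rows are few relative
// to the number of SMs, switch to the large one once every SM has enough work.
static int reduce_rows_block_dim(int nrows, int sm_count) {
    const float kBreakEven = 3.0f;  // assumed; measured values ranged 2.0x-3.15x
    return (float)nrows < kBreakEven * (float)sm_count ? 128 : 512;
}

int main() {
    std::printf("%d\n", reduce_rows_block_dim(100, 132));   // few rows -> 128
    std::printf("%d\n", reduce_rows_block_dim(10000, 132)); // many rows -> 512
}
```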
Tak-RS [Wed, 13 Aug 2025 05:54:30 +0000 (14:54 +0900)]
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (llama/15188)
* ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others). Fixes #15055
* ggml-rpc: rename RPC_IO_CHUNK->MAX_CHUNK_SIZE, use std::min() for cap, switch to GGML_LOG_ERROR, handle 0-length send/recv
* rpc: drop n==0 special case in send_data(); retry in loop per review
* rpc: remove trailing whitespace in send_data()
---------
Co-authored-by: Shinnosuke Takagi <redacted>
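The fix amounts to capping each send()/recv() call and looping; a POSIX-flavored C++ sketch, with an assumed MAX_CHUNK_SIZE value rather than the one used in ggml-rpc:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Send in bounded chunks: a single huge send() can fail with EINVAL on
// macOS and some other platforms.
static bool send_data(int sockfd, const void * data, size_t size) {
    static const size_t MAX_CHUNK_SIZE = 1u << 23;  // assumed cap, 8 MiB
    size_t sent = 0;
    while (sent < size) {
        const size_t n = std::min(size - sent, MAX_CHUNK_SIZE);
        const ssize_t r = send(sockfd, (const uint8_t *)data + sent, n, 0);
        if (r < 0) {
            return false;  // retry/EINTR handling omitted in this sketch
        }
        sent += (size_t)r;  // a short send just continues the loop
    }
    return true;  // size == 0 falls through: no special case needed
}
```

recv_data would mirror the same loop; a zero-length transfer simply falls through, matching the dropped n==0 special case.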
uvos [Tue, 12 Aug 2025 20:15:12 +0000 (22:15 +0200)]
HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (llama/15273)
Romain Biessy [Tue, 12 Aug 2025 11:58:22 +0000 (13:58 +0200)]
sycl: Fix and disable more configurations of mul_mat (llama/15151)
* sycl: Fix and disable more configurations of mul_mat
* Disable more configurations
rmatif [Tue, 12 Aug 2025 09:42:41 +0000 (11:42 +0200)]
opencl: allow mixed f16/f32 `add` (llama/15140)
Aman Gupta [Tue, 12 Aug 2025 09:21:45 +0000 (17:21 +0800)]
CUDA cmake: add `-lineinfo` for easier debug (llama/15260)
Chenguang Li [Tue, 12 Aug 2025 08:12:13 +0000 (16:12 +0800)]
CANN: GGML_OP_CPY optimization (llama/15070)
Signed-off-by: noemotiovon <redacted>
R0CKSTAR [Tue, 12 Aug 2025 02:02:51 +0000 (10:02 +0800)]
musa: fix failures in test-backend-ops for mul_mat_id op (llama/15236)
* musa: fix failures in test-backend-ops for mul_mat_id op
Signed-off-by: Xiaodong Ye <redacted>
* Address review comments
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
hipudding [Mon, 11 Aug 2025 14:50:31 +0000 (22:50 +0800)]
CANN: Add broadcast for softmax and FA (llama/15208)
* refactor softmax
* fix fa
* fix mask shape
* format
* add comments
* Remove whitespace
Charles Xu [Mon, 11 Aug 2025 07:59:26 +0000 (09:59 +0200)]
kleidiai: fix unsigned overflow bug (llama/15150)
* kleidiai: fix unsigned overflow bug
* address review comments
David Zhao [Sat, 9 Aug 2025 18:29:43 +0000 (13:29 -0500)]
cuda: refactored ssm_scan and use CUB (llama/13291)
* cuda: refactored ssm_scan to use CUB
* fixed compilation error when not using CUB
* assign L to constant and use size_t instead of int
* deduplicated functions
* change min blocks per mp to 1
* Use cub load and store warp transpose
* suppress clang warning
Aman Gupta [Sat, 9 Aug 2025 12:00:24 +0000 (20:00 +0800)]
CUDA: add attention sinks for tile and wmma (llama/15178)
* CUDA: add attention sinks for tile and wmma
* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
compilade [Fri, 8 Aug 2025 21:48:26 +0000 (17:48 -0400)]
gguf-py : add Numpy MXFP4 de/quantization support (llama/15111)
* gguf-py : add MXFP4 de/quantization support
* ggml-quants : handle zero amax for MXFP4
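For reference, OCP MXFP4 packs 32 FP4 (E2M1) values per block with one shared E8M0 scale; a C++ dequantization sketch (nibble order and edge-case handling are assumptions, not the gguf-py code):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// The 16 FP4 (E2M1) code points: sign bit + {0, 0.5, 1, 1.5, 2, 3, 4, 6}.
static const float kE2M1[16] = {
     0.0f,  0.5f,  1.0f,  1.5f,  2.0f,  3.0f,  4.0f,  6.0f,
    -0.0f, -0.5f, -1.0f, -1.5f, -2.0f, -3.0f, -4.0f, -6.0f,
};

// One MXFP4 block: 32 elements = 16 packed bytes + one E8M0 scale byte.
static void dequant_mxfp4_block(const uint8_t packed[16], uint8_t e8m0,
                                float out[32]) {
    const float scale = std::ldexp(1.0f, (int)e8m0 - 127);  // 2^(e-127)
    for (int i = 0; i < 16; ++i) {
        out[2*i + 0] = kE2M1[packed[i] & 0x0F] * scale;  // low nibble first (assumed)
        out[2*i + 1] = kE2M1[packed[i] >>   4] * scale;
    }
}

int main() {
    uint8_t packed[16] = {0x21};  // first byte: codes 1 (0.5) and 2 (1.0)
    float out[32];
    dequant_mxfp4_block(packed, 127, out);  // scale = 2^0 = 1
    std::printf("%f %f\n", out[0], out[1]); // 0.5 1.0
}
```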
AN Long [Fri, 8 Aug 2025 12:37:22 +0000 (21:37 +0900)]
ggml : fix field name when creating a new ggml_backend (llama/14944)
Johannes Gäßler [Fri, 8 Aug 2025 06:19:58 +0000 (08:19 +0200)]
CUDA: attention sinks for mma FlashAttention (llama/15157)
lhez [Fri, 8 Aug 2025 04:47:03 +0000 (13:47 +0900)]
opencl: support sink in `soft_max` (attn sinks) (llama/15152)
Jeff Bolz [Thu, 7 Aug 2025 20:44:20 +0000 (15:44 -0500)]
vulkan: support fattn sinks (llama/15126)
Jeff Bolz [Thu, 7 Aug 2025 20:07:11 +0000 (15:07 -0500)]
vulkan: Add env var to disable host visible vidmem (llama/15109)
uvos [Thu, 7 Aug 2025 14:44:14 +0000 (16:44 +0200)]
HIP: add cmake option to enable compiler output of kernel resource usage metrics (llama/15103)
Christian Kastner [Thu, 7 Aug 2025 11:45:41 +0000 (13:45 +0200)]
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (llama/15094)
Any available libraries are found and loaded dynamically at runtime.
Johannes Gäßler [Thu, 7 Aug 2025 08:53:21 +0000 (10:53 +0200)]
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (llama/15131)
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
rmatif [Wed, 6 Aug 2025 21:17:51 +0000 (23:17 +0200)]
fix profiling crash (llama/15072)
lhez [Wed, 6 Aug 2025 19:12:17 +0000 (04:12 +0900)]
opencl: add `swiglu_oai` and `add_id` (llama/15121)
* opencl: add `swiglu-oai`
* opencl: add `add_id`
* opencl: add missing `add_id.cl`
Diego Devesa [Wed, 6 Aug 2025 12:37:35 +0000 (05:37 -0700)]
ggml : fix fallback to CPU for unsupported ops (llama/15118)
Chenguang Li [Wed, 6 Aug 2025 06:12:42 +0000 (14:12 +0800)]
CANN: add support for ACL Graph (llama/15065)
* feat(cann): add optional support for ACL Graph execution
This commit adds support for executing ggml computational graphs using
Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be
enabled at compile time using the CMake option:
-DUSE_CANN_GRAPH=ON
By default, ACL graph execution is **disabled**, and the fallback path
uses node-by-node execution.
Key additions:
- CMake option to toggle graph mode
- Graph capture and execution logic using the ACL graph API
- Tensor property matching to determine whether graph update is required
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS
is unset or invalid
This prepares the backend for performance improvements in repetitive graph
execution scenarios on Ascend devices.
Signed-off-by: noemotiovon <redacted>
* Fix review comments
Signed-off-by: noemotiovon <redacted>
* rename USE_CANN_GRAPH to USE_ACL_GRAPH
Signed-off-by: noemotiovon <redacted>
* fix typo
Signed-off-by: noemotiovon <redacted>
---------
Signed-off-by: noemotiovon <redacted>
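The capture-or-replay decision described above reduces to the control flow below; all names are hypothetical and the real backend drives the ACL graph API:

```cpp
#include <cstdio>

// Toy sketch of the capture-or-replay flow; names are hypothetical.
struct graph_cache {
    bool have_graph  = false;  // a graph was captured earlier
    bool props_match = false;  // tensor properties identical to the capture
};

static void execute(graph_cache & cache, bool use_acl_graph) {
    if (!use_acl_graph) {
        std::puts("fallback: node-by-node execution");
        return;
    }
    if (cache.have_graph && cache.props_match) {
        std::puts("replay captured ACL graph");      // fast path
    } else {
        std::puts("capture new ACL graph, then run");
        cache.have_graph  = true;
        cache.props_match = true;
    }
}

int main() {
    graph_cache cache;
    execute(cache, true);  // first call captures
    execute(cache, true);  // repeated graphs replay without update
}
```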
Georgi Gerganov [Tue, 5 Aug 2025 19:10:36 +0000 (22:10 +0300)]
llama : add gpt-oss (llama/15091)
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (llama/7)
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (llama/1)
* llama : add attn sinks
* ggml : add attn sinks
* cuda : add attn sinks
* vulkan : add support for sinks in softmax
remove unnecessary return
* ggml : add fused swiglu_oai op (llama/11)
* ggml : add fused swiglu_oai op
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <redacted>
* update CUDA impl
* cont : metal impl
* add vulkan impl
* test-backend-ops : more test cases, clean up
* llama : remove unfused impl
* remove extra lines
---------
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: slaren <redacted>
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
ggml : use e8m0 conversion instead of powf
Co-authored-by: Diego Devesa <redacted>
change kvalues_mxfp4 table to match e2m1 (llama/6)
metal : remove quantization for now (not used)
cuda : fix disabled CUDA graphs due to ffn moe bias
vulkan : add support for mxfp4
cont : add cm2 dequant
* ggml : add ggml_add_id (llama/13)
* ggml : add ggml_add_id
* add cuda impl
* llama : add weight support check for add_id
* perf opt
* add vulkan impl
* rename cuda files
* add metal impl
* allow in-place ggml_add_id
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
ggml-ci
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
ggml-ci
* cleanup
ggml-ci
* sycl : fix supports_op for MXFP4
ggml-ci
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
ggml-ci
* fix hip build
ggml-ci
* cuda : add mxfp4 dequantization support for cuBLAS
ggml-ci
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
ggml-ci
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
---------
Co-authored-by: Xuan Son Nguyen <redacted>
Co-authored-by: slaren <redacted>
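The "e8m0 conversion instead of powf" change can be illustrated with the standard bit trick: since E8M0 stores only an exponent, writing it straight into the exponent field of an IEEE-754 float yields 2^(e-127) without a powf call. A sketch (edge cases like e=0 and the 0xFF NaN encoding would need separate handling):

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>
#include <cstdio>

// E8M0 stores only an 8-bit exponent: value = 2^(e - 127).
// Instead of calling powf, place e directly into the exponent field of an
// IEEE-754 float (bits 23..30), which yields the same power of two.
static float e8m0_to_float(uint8_t e) {
    const uint32_t bits = (uint32_t)e << 23;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

int main() {
    for (int e : {125, 127, 130}) {
        std::printf("e=%d -> %g (powf: %g)\n",
                    e, e8m0_to_float((uint8_t)e), std::pow(2.0f, e - 127));
    }
}
```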
Romain Biessy [Tue, 5 Aug 2025 16:39:55 +0000 (18:39 +0200)]
sycl: fix mul_mat selection (llama/15092)
Christian Kastner [Mon, 4 Aug 2025 19:29:14 +0000 (21:29 +0200)]
cmake: Add GGML_BACKEND_DIR option (llama/15074)
* cmake: Add GGML_BACKEND_DIR option
This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.
* Fix phrasing
Jeff Bolz [Mon, 4 Aug 2025 05:09:19 +0000 (00:09 -0500)]
vulkan: fix build when using glslang that does not support coopmat2 (llama/15062)
Jeff Bolz [Sun, 3 Aug 2025 12:23:57 +0000 (07:23 -0500)]
vulkan: Use coopmat2 for conv2d (llama/14982)
lhez [Sat, 2 Aug 2025 17:51:18 +0000 (10:51 -0700)]
opencl: fix adreno compiler detection logic (llama/15029)
Johannes Gäßler [Sat, 2 Aug 2025 14:37:08 +0000 (16:37 +0200)]
CUDA: use mma FA kernel for gqa > 4 on RTX 4000 (llama/15035)
leejet [Sat, 2 Aug 2025 14:15:36 +0000 (22:15 +0800)]
cuda: make im2col a little faster (llama/15025)
Georgi Gerganov [Sat, 2 Aug 2025 14:13:05 +0000 (17:13 +0300)]
cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1 (llama/15038)
* cuda, sycl : fix batched gemm when ne02 == 1 && ne03 > 1
ggml-ci
* cont : fix cont types
ggml-ci
* cont : adopt variable names and comment from the other branch
Jeff Bolz [Sat, 2 Aug 2025 09:21:37 +0000 (04:21 -0500)]
vulkan: coopmat2 mul_mat optimizations (llama/14934)
- Increase tile size for k-quants, to match non-k-quants
- Choose more carefully between large and medium tiles, considering how it
interacts with split_k
- Allow larger/non-power of two split_k, and make the splits a multiple of 256
- Use split_k==3 when >1/2 and <=2/3 of the SMs would have been used
Jeff Bolz [Sat, 2 Aug 2025 08:48:30 +0000 (03:48 -0500)]
vulkan: Support ne[3]>1 in noncontig matrix-vector multiply (llama/15015)
Jeff Bolz [Sat, 2 Aug 2025 07:57:04 +0000 (02:57 -0500)]
vulkan: optimizations for direct convolution (llama/14933)
* vulkan: optimizations for direct convolution
- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math/stores work for parts of the tile that are OOB.
- Apply fastdiv opt.
- Disable shuffles for NV.
* Three tiles sizes for CONV_2D, and a heuristic to choose
* reallow collectives for pre-Turing
* make SHMEM_PAD a spec constant
* fixes for intel perf - no shmem padding, placeholder shader core count
* shader variants with/without unrolling
* 0cc4m's fixes for AMD perf
Co-authored-by: 0cc4m <redacted>
---------
Co-authored-by: 0cc4m <redacted>
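The "16B padding" for shared-memory bank conflicts is the classic padded-stride trick; a generic sketch of the indexing, not the Vulkan shader:

```cpp
#include <cstdio>

// Classic shared-memory padding: with a 32-bank memory and a tile width that
// is a multiple of the bank count, same-column accesses all hit one bank.
// Padding each row by a few elements (16 bytes of floats here) staggers them.
constexpr int TILE_W    = 32;
constexpr int SHMEM_PAD = 4;   // 4 floats = 16B, as in the commit

static int padded_index(int row, int col) {
    return row * (TILE_W + SHMEM_PAD) + col;
}

int main() {
    // Without padding, tile[0][0] and tile[1][0] map to the same bank;
    // with padding their bank indices differ by SHMEM_PAD.
    std::printf("%d %d\n", padded_index(0, 0) % 32, padded_index(1, 0) % 32);
}
```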
Johannes Gäßler [Fri, 1 Aug 2025 18:47:32 +0000 (20:47 +0200)]
CUDA: fix MMQ nwarps for AMD with warp_size==32 (llama/15014)
lhez [Fri, 1 Aug 2025 11:15:44 +0000 (04:15 -0700)]
opencl: add f16 for `add`, `sub`, `mul`, `div` (llama/14984)
Srihari-mcw [Fri, 1 Aug 2025 06:20:33 +0000 (11:50 +0530)]
ggml : Q2k interleaving implementation - x86/x64 SIMD (llama/14373)
* Initial Q2_K Block Interleaving Implementation
* Addressed review comments and clean up of the code
* Post rebase fixes
* Initial CI/CD fixes
* Update declarations in arch-fallback.h
* Changes for GEMV Q2_K in arch-fallback.h
* Enable repacking only on AVX-512 machines
* Update comments in repack.cpp
* Address q2k comments
---------
Co-authored-by: Manogna-Sree <redacted>
diannao [Fri, 1 Aug 2025 02:02:34 +0000 (10:02 +0800)]
docker : add cann build pipeline (llama/14591)
* docker: add cann build pipeline
* docker: add cann build pipeline
* docker: fix cann devops
* cann : fix multi card hccl
* Update ggml/src/ggml-cann/ggml-cann.cpp
Co-authored-by: Xuan-Son Nguyen <redacted>
* Update ggml-cann.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Xuan-Son Nguyen <redacted>
Ruben Ortlam [Thu, 31 Jul 2025 15:46:54 +0000 (17:46 +0200)]
Vulkan: Fix minor debug mode issues (llama/14899)
* vulkan: fix debug mode issues
* vulkan: remove broken check_results GGML_OP_SET_ROWS support
hipudding [Thu, 31 Jul 2025 11:47:20 +0000 (19:47 +0800)]
CANN: Improve loading efficiency after converting weights to NZ format. (llama/14985)
* CANN: Improve loading efficiency after converting weights to NZ format.
* CANN: fix typo
lhez [Wed, 30 Jul 2025 21:56:55 +0000 (14:56 -0700)]
opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (llama/14809)
uvos [Wed, 30 Jul 2025 15:38:06 +0000 (17:38 +0200)]
HIP: enable mfma mmq on gfx908 and gfx90a for select datatypes and shapes (llama/14949)
Johannes Gäßler [Wed, 30 Jul 2025 13:46:13 +0000 (15:46 +0200)]
CUDA: skip masked KV slices for all FA kernels (llama/14924)
uvos [Tue, 29 Jul 2025 18:23:04 +0000 (20:23 +0200)]
HIP: remove the use of __HIP_PLATFORM_AMD__, explicitly support only AMD targets (llama/14945)
uvos [Tue, 29 Jul 2025 15:44:30 +0000 (17:44 +0200)]
HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. (llama/14930)
This is useful for testing for regressions on GCN with CDNA hardware.
With GGML_HIP_MMQ_MFMA=Off and GGML_CUDA_FORCE_MMQ=On we can conveniently test the GCN code path on CDNA. As CDNA is essentially GCN renamed, with MFMA and limited-use ACC registers added, this provides a good alternative for regression testing when GCN hardware is not available.
uvos [Tue, 29 Jul 2025 15:43:43 +0000 (17:43 +0200)]
HIP: Ignore unsupported unroll transformation in fattn-vec (llama/14931)
llvm with the amdgcn target does not support unrolling loops with conditional break statements when those statements cannot be resolved at compile time. As in other places in GGML, let's simply ignore this warning.
hipudding [Tue, 29 Jul 2025 14:36:43 +0000 (22:36 +0800)]
CANN: Add ggml_set_rows (llama/14943)
Sigbjørn Skjæret [Tue, 29 Jul 2025 12:22:03 +0000 (14:22 +0200)]
cuda : add softcap fusion (llama/14907)
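Softcapping squashes logits as cap * tanh(x / cap); fusing it saves the intermediate scale/tanh/scale ops. A scalar sketch of the math:

```cpp
#include <cmath>
#include <cstdio>

// Softcap: smoothly clamps x into (-cap, cap) while staying ~linear near 0.
static float softcap(float x, float cap) {
    return cap * std::tanh(x / cap);
}

int main() {
    for (float x : {1.0f, 30.0f, 1000.0f}) {
        std::printf("softcap(%g, 30) = %g\n", x, softcap(x, 30.0f));
    }
}
```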
Aman Gupta [Tue, 29 Jul 2025 06:45:18 +0000 (14:45 +0800)]
CUDA: add roll (llama/14919)
* CUDA: add roll
* Make everything const, use __restrict__
xctan [Mon, 28 Jul 2025 15:40:24 +0000 (23:40 +0800)]
ggml-cpu : deduplicate scalar implementations (llama/14897)
* remove redundant code in riscv
* remove redundant code in arm
* remove redundant code in loongarch
* remove redundant code in ppc
* remove redundant code in s390
* remove redundant code in wasm
* remove redundant code in x86
* remove fallback headers
* fix x86 ggml_vec_dot_q8_0_q8_0
Akarshan Biswas [Mon, 28 Jul 2025 15:02:15 +0000 (20:32 +0530)]
SYCL: Add set_rows support for quantized types (llama/14883)
* SYCL: Add set_rows support for quantized types
This commit adds support for GGML_OP_SET_ROWS operation for various
quantized tensor types (Q8_0, Q5_1, Q5_0, Q4_1, Q4_0, IQ4_NL) and BF16
type in the SYCL backend.
The quantization/dequantization copy kernels were moved from cpy.cpp
to cpy.hpp to make them available for set_rows.cpp.
This addresses part of the TODOs mentioned in the code.
* Use get_global_linear_id() instead
ggml-ci
* Fix formatting
ggml-ci
* Use const for ne11 and size_t variables in set_rows_sycl_q
ggml-ci
* Increase block size for q kernel to 256
ggml-ci
* Cleanup imports
* Add float.h to cpy.hpp
Johannes Gäßler [Mon, 28 Jul 2025 12:30:22 +0000 (14:30 +0200)]
CUDA: fix pointer incrementation in FA (llama/14916)
Alberto Cabrera Pérez [Mon, 28 Jul 2025 10:05:53 +0000 (11:05 +0100)]
sycl: refactor quantization to q8_1 (llama/14815)
* sycl: quantization to q8_1 refactor
* Refactored src1 copy logic in op_mul_mat
Kai Pastor [Wed, 30 Jul 2025 12:53:16 +0000 (14:53 +0200)]
cmake : Fix BLAS link interface (ggml/1316)
Kai Pastor [Wed, 30 Jul 2025 12:52:26 +0000 (14:52 +0200)]
vulkan : fix 32-bit builds (ggml/1313)
The pipeline member can be cast to VkPipeline.
This is a VkPipeline_T* on 64 bit but a uint64_t on 32 bit.
Cf. VK_DEFINE_NON_DISPATCHABLE_HANDLE documentation.
Georgi Gerganov [Mon, 18 Aug 2025 16:31:13 +0000 (19:31 +0300)]
scripts : update sync scripts
Daniel Bevenius [Fri, 15 Aug 2025 12:54:23 +0000 (14:54 +0200)]
node : add win platform check for require path (#3363)
This commit adds a check for the platform in use and adjusts the path to
the addon.node shared library.
The motivation for this change is that on Windows the addon.node library
is built into build\bin\Release, while on Linux it is built into build/Release.
Resolves: https://github.com/ggml-org/whisper.cpp/issues/3360
ustas [Wed, 13 Aug 2025 17:30:45 +0000 (14:30 -0300)]
ci : update main-cuda.Dockerfile (#3371)
* Update main-cuda.Dockerfile
Bump CUDA to 13.0.0 and exclude the `compute_50` arch from build because it was deprecated and now throws an error.
* Add quotes in main-cuda.Dockerfile
Dw9 [Tue, 12 Aug 2025 10:58:52 +0000 (18:58 +0800)]
whisper : fixed crash in GPU device selection on multi-GPU systems (#3372)
Georgi Gerganov [Sun, 10 Aug 2025 10:00:17 +0000 (13:00 +0300)]
wasm : change ggml model host to HF (#3369)
Adam Debono [Thu, 7 Aug 2025 02:37:45 +0000 (12:37 +1000)]
ruby : Add ruby binding for max_len (#3365)
* add ruby binding for max_len
* add test, update param numbers
Daniel Bevenius [Sat, 2 Aug 2025 05:03:04 +0000 (07:03 +0200)]
stream.wasm : add language selection support (#3354)
* stream.wasm : add language selection support
This commit adds support for selecting the language in the stream.wasm
example. This includes adding the `base` model, which supports
multilingual transcription, and allowing the user to select a language
from a dropdown menu in the HTML interface.
The motivation for this is that it allows users to transcribe audio in
various languages.
Refs: https://github.com/ggml-org/whisper.cpp/issues/3347
* squash! stream.wasm : add language selection support
Remove strdup() for language in stream.wasm and update button text for
base (should not be "base.en" but just "base").
Georgi Gerganov [Wed, 30 Jul 2025 18:54:58 +0000 (21:54 +0300)]
whisper : reset conv scheduler when CoreML is used (#3350)
ggml-ci
Georgi Gerganov [Wed, 30 Jul 2025 13:08:57 +0000 (16:08 +0300)]
ggml : remove old kompute, cann (skip) (#3349)
ggml-ci
Georgi Gerganov [Mon, 28 Jul 2025 07:09:47 +0000 (10:09 +0300)]
talk-llama : sync llama.cpp
Georgi Gerganov [Mon, 28 Jul 2025 05:43:53 +0000 (08:43 +0300)]
sync : ggml
ggml-ci
Erik Scholz [Sun, 27 Jul 2025 10:04:33 +0000 (12:04 +0200)]
vulkan : add fp16 support for the conv_2d kernel (llama/14872)
* add f16 to conv_2d testing
* weaken conv2d test error threshold
Jeff Bolz [Sun, 27 Jul 2025 09:05:34 +0000 (04:05 -0500)]
vulkan: skip empty set_rows to avoid invalid API usage (llama/14860)
deepsek [Sat, 26 Jul 2025 22:28:14 +0000 (18:28 -0400)]
HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)
This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-k are only enabled on CDNA3 for now, as they fail to outperform blas in all cases on the other devices.
Blas is currently only consistently outperformed on CDNA3 due to issues in the amd-provided blas libraries.
This commit also improves the awareness of MMQ towards different warp sizes and, as a side effect, improves the performance of all quant formats besides q4_0 and q4_1, which regress slightly, on GCN gpus.
hipudding [Sat, 26 Jul 2025 09:56:18 +0000 (17:56 +0800)]
CANN: Implement GLU ops (llama/14884)
Implement REGLU, GEGLU, SWIGLU ops according to #14158
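All three ops share the gated structure out = act(gate) * up and differ only in the activation; a scalar C++ reference of the math (tanh-approximation GELU assumed), not the CANN kernels:

```cpp
#include <cmath>
#include <cstdio>

// Gated Linear Units: out = act(gate) * up, differing only in the activation.
static float reglu (float g, float u) { return std::fmax(g, 0.0f) * u; }        // ReLU gate
static float swiglu(float g, float u) { return g / (1.0f + std::exp(-g)) * u; }  // SiLU gate
static float geglu (float g, float u) {                                          // GELU gate
    const float c = 0.7978845608f;  // sqrt(2/pi), tanh approximation
    return 0.5f * g * (1.0f + std::tanh(c * (g + 0.044715f * g * g * g))) * u;
}

int main() {
    std::printf("%f %f %f\n", reglu(1.f, 2.f), swiglu(1.f, 2.f), geglu(1.f, 2.f));
}
```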
R0CKSTAR [Sat, 26 Jul 2025 02:36:02 +0000 (10:36 +0800)]
musa: fix build warnings (unused variable) (llama/14869)
Signed-off-by: Xiaodong Ye <redacted>
Aaron Teo [Fri, 25 Jul 2025 17:09:03 +0000 (01:09 +0800)]
ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)
* docs: update s390x document for sentencepiece
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit e086c5e3a7ab3463d8e0906efcfa39352db0a48d)
* docs: update huggingface links + reword
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 8410b085ea8c46e22be38266147a1e94757ef108)
* ggml-cpu: disable ggml-nnpa compile flag by default
fixes #14877
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit 412f4c7c88894b8f55846b4719c76892a23cfe09)
* docs: update s390x build docs to reflect nnpa disable
Signed-off-by: Aaron Teo <redacted>
(cherry picked from commit c1eeae1d0c2edc74ab9fbeff2707b0d357cf0b4d)
---------
Signed-off-by: Aaron Teo <redacted>
Gabe Goodhart [Fri, 25 Jul 2025 16:47:39 +0000 (10:47 -0600)]
metal: SSM_SCAN performance (llama/14743)
* feat: Add s_off as a parameter in the args struct
This may not be necessary, but it more closely mirrors the CUDA kernel
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state
This is a first attempt at optimizing the metal kernel. The changes here
are:
- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y
computation
When tested with G4 tiny preview, this shows roughly a 3x speedup on
prefill and 15% speedup on decode.
Signed-off-by: Gabe Goodhart <redacted>
* fix: Update logic to correctly do the multi-layer parallel sum
Signed-off-by: Gabe Goodhart <redacted>
* fix: Correctly size the shared memory buffer and assert expected size relationships
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Compute block offsets once rather than once per token
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use local variable for state recursion
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Use a secondary simd_sum instead of a for loop
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Add assertion and comment about relationship between simd size and num simd groups
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallelize of d_state for mamba-1
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* feat: Parallel sum in SSM_CONV
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
* Revert "feat: Parallel sum in SSM_CONV"
After discussion with @compilade, the size of the parallelism here is
not worth the cost in complexity or overhead of the parallel for.
https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357
This reverts commit 16bc059660c1c59e566628201c0ca2c20c9f4bc3.
Signed-off-by: Gabe Goodhart <redacted>
* refactor: Simplify shared memory sizing
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <redacted>
Co-Authored-By: Georgi Gerganov <redacted>
---------
Signed-off-by: Gabe Goodhart <redacted>
Co-authored-by: Georgi Gerganov <redacted>
lhez [Fri, 25 Jul 2025 15:12:13 +0000 (08:12 -0700)]
opencl: add fused `rms_norm_mul` (llama/14841)
* opencl: add fused `rms_norm` + `mul`
* opencl: improve workgroup size for `rms_norm_mul`
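The fused op computes the RMS normalization and the elementwise multiply in one pass instead of materializing the normalized intermediate; a scalar C++ reference of the math, not the OpenCL kernel:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Fused rms_norm + mul: y[i] = (x[i] / rms(x)) * w[i], one pass over the row.
static void rms_norm_mul(const std::vector<float> & x,
                         const std::vector<float> & w,
                         std::vector<float> & y, float eps = 1e-6f) {
    float ss = 0.0f;
    for (float v : x) ss += v * v;                      // sum of squares
    const float inv_rms = 1.0f / std::sqrt(ss / x.size() + eps);
    for (size_t i = 0; i < x.size(); ++i) {
        y[i] = x[i] * inv_rms * w[i];                   // normalize and scale
    }
}

int main() {
    std::vector<float> x = {1, 2, 3, 4}, w = {1, 1, 1, 1}, y(4);
    rms_norm_mul(x, w, y);
    std::printf("%f %f %f %f\n", y[0], y[1], y[2], y[3]);
}
```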
Oliver Simons [Fri, 25 Jul 2025 11:29:57 +0000 (13:29 +0200)]
ggml : remove invalid portPos specifiers from dot files (llama/14838)
Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):
> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".
I tested locally that it falls back to the default portPos specifier if an
invalid portPos is specified. As a consequence, we can remove the
associated code.
Chris Rohlf [Fri, 25 Jul 2025 10:17:02 +0000 (06:17 -0400)]
rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868)
Diego Devesa [Fri, 25 Jul 2025 08:07:26 +0000 (01:07 -0700)]
sched : fix multiple evaluations of the same graph with pipeline parallelism (llama/14855)
ggml-ci
R0CKSTAR [Thu, 24 Jul 2025 19:05:37 +0000 (03:05 +0800)]
musa: upgrade musa sdk to rc4.2.0 (llama/14498)
* musa: apply mublas API changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: update musa version to 4.2.0
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore MUSA graph settings in CMakeLists.txt
Signed-off-by: Xiaodong Ye <redacted>
* musa: disable mudnnMemcpyAsync by default
Signed-off-by: Xiaodong Ye <redacted>
* musa: switch back to non-mudnn images
Signed-off-by: Xiaodong Ye <redacted>
* minor changes
Signed-off-by: Xiaodong Ye <redacted>
* musa: restore rc in docker image tag
Signed-off-by: Xiaodong Ye <redacted>
---------
Signed-off-by: Xiaodong Ye <redacted>
Kai Pastor [Thu, 24 Jul 2025 17:58:02 +0000 (19:58 +0200)]
cmake : Indent ggml-config.cmake (ggml/1310)
Alberto Cabrera Pérez [Thu, 24 Jul 2025 10:09:57 +0000 (11:09 +0100)]
sycl: fixed semantics of block offset calculation (llama/14814)
Georgi Gerganov [Thu, 24 Jul 2025 07:24:05 +0000 (10:24 +0300)]
metal : fix fusion across different encoders (llama/14849)
* metal : fix fusion across different encoders
ggml-ci
* cont : add assertion
ggml-ci
Donghyeon Jeong [Thu, 24 Jul 2025 04:50:41 +0000 (13:50 +0900)]
sycl: fix undefined variable in work group size check (llama/14843)
Johannes Gäßler [Wed, 23 Jul 2025 19:43:25 +0000 (21:43 +0200)]
CUDA: fix overflow in FA, tune performance (llama/14840)
Johannes Gäßler [Wed, 23 Jul 2025 16:22:30 +0000 (18:22 +0200)]
CUDA: fix compilation with GGML_CUDA_F16 (llama/14837)
Johannes Gäßler [Wed, 23 Jul 2025 10:35:53 +0000 (12:35 +0200)]
CUDA: fix quantized KV cache + multiple sequences (llama/14822)
* CUDA: fix quantized KV cache + multiple sequences
* Update ggml/src/ggml-cuda/fattn-common.cuh
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
lixing-star [Wed, 23 Jul 2025 06:39:51 +0000 (14:39 +0800)]
ggml: fix loongarch quantize_row_q8_1 error (llama/14827)
chen fan [Wed, 23 Jul 2025 03:58:00 +0000 (11:58 +0800)]
CANN: weight format to NZ for Ascend310P3 (llama/14407)
* weight format to nz for 310p
* remove quant weight format to nz
* clean code
* fix
* make the conditions for converting weights to NZ format consistent
* clean code