author     Gabe Goodhart <redacted>
           Thu, 28 Aug 2025 20:27:36 +0000 (15:27 -0500)
committer  GitHub <redacted>
           Thu, 28 Aug 2025 20:27:36 +0000 (15:27 -0500)
commit     a8bca68f727844e7dcf24a956003b3c2039ea563
tree       fd756fa7f5c8c61d7c93a82b39716b0d2e8d033d
parent     c97dc093912ad014f6d22743ede0d4d7fd82365a
fix: Compute the full sum in llama-eval-callback, not just the sum of printed values (#15637)

This makes it much easier to compare results between llama.cpp and transformers!

https://github.com/ggml-org/llama.cpp/issues/15409
Branch: gabe-l-hart/nvidia-nemotron-nano-15409

Signed-off-by: Gabe Goodhart <redacted>
examples/eval-callback/eval-callback.cpp
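A minimal sketch of the idea behind the fix, for illustration only: accumulate the sum over every element of the tensor handed to the eval callback, not just over the small slice of values that gets printed. It assumes only the public ggml API (the tensor fields ne/nb/type and ggml_fp16_to_fp32 from ggml.h); the helper name tensor_full_sum is hypothetical, and the actual change in eval-callback.cpp may be structured differently.

    #include <cstdint>

    #include "ggml.h"

    // Hypothetical helper (not the literal patch): walk every element of a tensor
    // using its strides and accumulate the sum, regardless of how many values the
    // callback chooses to print.
    static double tensor_full_sum(const struct ggml_tensor * t, const uint8_t * data) {
        double sum = 0.0;
        for (int64_t i3 = 0; i3 < t->ne[3]; i3++) {
            for (int64_t i2 = 0; i2 < t->ne[2]; i2++) {
                for (int64_t i1 = 0; i1 < t->ne[1]; i1++) {
                    for (int64_t i0 = 0; i0 < t->ne[0]; i0++) {
                        const uint8_t * p = data + i0*t->nb[0] + i1*t->nb[1] + i2*t->nb[2] + i3*t->nb[3];
                        float v = 0.0f;
                        switch (t->type) {
                            case GGML_TYPE_F32: v = *(const float *) p;                          break;
                            case GGML_TYPE_F16: v = ggml_fp16_to_fp32(*(const ggml_fp16_t *) p); break;
                            default: /* quantized types would need dequantization first */       break;
                        }
                        sum += (double) v;
                    }
                }
            }
        }
        return sum;
    }

Summing the full tensor gives one checksum-style value per graph node, which can be compared directly against the corresponding tensor's sum in a transformers run instead of a sum that depends on how many values were printed.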