server : use common_token_to_piece instead of common_detokenize (#11740)
author    Daniel Bevenius <redacted>
          Tue, 11 Feb 2025 13:06:45 +0000 (14:06 +0100)
committer GitHub <redacted>
          Tue, 11 Feb 2025 13:06:45 +0000 (14:06 +0100)
commita18f481f99962638092d6f1c98b1d34d3e3256de
treed831393b48c3edd4c1ff5e9fa0efec10976913e2
parentb9ab0a4d0b2ed19effec130921d05fb5c30b68c5
server : use common_token_to_piece instead of common_detokenize (#11740)

* server : use common_token_to_piece instead of common_detokenize

This commit replaces the call to common_detokenize with
common_token_to_piece in the populate_token_probs function.

The motivation for this change is to avoid an issue where
common_detokenize would strip the word-boundary character from a token's
piece, which caused a regression in the server-generated token
probabilities.

Resolves: https://github.com/ggerganov/llama.cpp/issues/11728

* squash! server : use common_token_to_piece instead of common_detokenize

Use common_token_to_piece for post_sampling_probs as well.
examples/server/server.cpp