gguf-py : bump sentencepiece version (#19319)
author Alex Trotta <redacted>
Fri, 6 Feb 2026 20:05:19 +0000 (15:05 -0500)
committer GitHub <redacted>
Fri, 6 Feb 2026 20:05:19 +0000 (21:05 +0100)
commit 3228e7728789e0456d0458ce38d20d0b1d60a9aa
tree 8fe918a12374fb10d7169741104a8c4592981a44
parent 7fbd36c50c1a439a485486729faf20b47a0e6d8c

* gguf-py: Bump sentencepiece version

A newer version has been out for a while that addresses the issues mentioned in https://github.com/ggml-org/llama.cpp/pull/14200. There is a long chain of reasons I would like this change, but the short version is that it allows people who use both `sentencepiece` and `gguf` to take advantage of these fixes. On conda-forge, the current constraint effectively locks the version, since conda-forge has no notion of optional dependencies.

Regardless, I don't think this should be too controversial.
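For context, a version bump of this kind is a one-line change to the dependency constraint in `gguf-py/pyproject.toml`. The sketch below is illustrative only: the bound shown is a hypothetical example, not the exact value from this commit.

```toml
# Hypothetical sketch of a sentencepiece constraint bump in gguf-py/pyproject.toml.
# The actual old/new bounds in this commit may differ.
[tool.poetry.dependencies]
sentencepiece = ">=0.1.98,<1.0.0"  # illustrative range allowing the newer release
```

Loosening the upper bound like this lets environments that also install `sentencepiece` directly (or via conda-forge, where optional dependencies cannot be expressed) resolve to the newer release.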

* review feedback
gguf-py/pyproject.toml
pyproject.toml
requirements/requirements-convert_legacy_llama.txt