llama : fix BPE pre-tokenization (#6920)
author Georgi Gerganov <redacted>
Mon, 29 Apr 2024 13:58:41 +0000 (16:58 +0300)
committer GitHub <redacted>
Mon, 29 Apr 2024 13:58:41 +0000 (16:58 +0300)
commit f4ab2a41476600a98067a9474ea8f9e6db41bcfa
tree 4e840ec5b4243ed43906a576e396995e3d9dbc21
parent 3f167476b11efa7ab08f6cacdeb8cab0935c1249

* merged the changes from the deepseek models to the main branch

* Moved regex patterns to unicode.cpp and updated unicode.h

* Moved header files

* Resolved issues

* added and refactored unicode_regex_split and related functions
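  A rough sketch of what regex-based pre-tokenization does before BPE merging. This is illustrative only: the real `unicode_regex_split` lives in unicode.cpp and operates on codepoint categories, and the real GPT-2 pattern uses `\p{L}`/`\p{N}` Unicode properties, which are approximated here with ASCII classes so the standard `re` module can run it:

  ```python
  import re

  # Simplified GPT-2-style pre-tokenization pattern (hypothetical sketch;
  # the real pattern uses Unicode property classes, not ASCII ranges).
  GPT2_LIKE = re.compile(
      r"'s|'t|'re|'ve|'m|'ll|'d"   # common English contractions
      r"| ?[A-Za-z]+"              # runs of letters, optionally space-prefixed
      r"| ?[0-9]+"                 # runs of digits
      r"| ?[^\sA-Za-z0-9]+"        # runs of punctuation/symbols
      r"|\s+(?!\S)"                # trailing whitespace
      r"|\s+"                      # other whitespace
  )

  def pretokenize(text: str) -> list[str]:
      """Split text into pre-tokens; BPE merges then run inside each piece."""
      return GPT2_LIKE.findall(text)
  ```

  Getting this split exactly right per model is the point of the fix: two tokenizers with identical vocabularies still disagree if they cut the text differently before merging.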

* Updated/merged the deepseek coder PR

* Refactored code

* Adding unicode regex mappings

* Adding unicode regex function

* Added needed functionality, testing remains

* Fixed issues

* Fixed issue with gpt2 regex custom preprocessor

* unicode : fix? unicode_wstring_to_utf8

* lint : fix whitespaces

* tests : add tokenizer tests for numbers

* unicode : remove redundant headers

* tests : remove and rename tokenizer test scripts

* tests : add sample usage

* gguf-py : reader prints warnings on duplicate keys
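  The idea behind the reader change, sketched in minimal form (the function name and first-value-wins policy here are illustrative, not the actual gguf-py API):

  ```python
  import warnings

  def read_kv_pairs(pairs):
      """Collect GGUF key/value metadata, warning on duplicate keys.
      Sketch only: keeps the first value; the real reader may differ."""
      fields = {}
      for key, value in pairs:
          if key in fields:
              warnings.warn(f"Duplicate key {key!r} in GGUF file; keeping first value")
              continue
          fields[key] = value
      return fields
  ```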

* llama : towards llama3 tokenization support (wip)

* unicode : shot in the dark to fix tests on Windows

* unicode : first try custom implementations

* convert : add "tokenizer.ggml.pre" GGUF KV (wip)

* llama : use new pre-tokenizer type

* convert : fix pre-tokenizer type writing

* lint : fix

* make : add test-tokenizer-0-llama-v3

* wip

* models : add llama v3 vocab file

* llama : adapt punctuation regex + add llama 3 regex

* minor

* unicode : set BOM

* unicode : always use std::wregex

* unicode : support \p{N}, \p{L} and \p{P} natively
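  `std::regex` has no `\p{…}` Unicode property classes, so the C++ side maps codepoints to their general categories itself (via tables in unicode-data.cpp). A sketch of the same idea in Python, using `unicodedata` in place of precomputed tables:

  ```python
  import unicodedata

  # Shorthand classes used in tokenizer regexes -> Unicode general-category
  # prefixes. Sketch: the C++ code uses lookup tables rather than a
  # per-codepoint library call.
  CATEGORY_PREFIX = {
      "L": "L",  # letters     (Lu, Ll, Lt, Lm, Lo)
      "N": "N",  # numbers     (Nd, Nl, No)
      "P": "P",  # punctuation (Pc, Pd, Ps, Pe, Pi, Pf, Po)
  }

  def matches_class(ch: str, cls: str) -> bool:
      """Emulate \\p{L}, \\p{N}, \\p{P} for a single character."""
      return unicodedata.category(ch).startswith(CATEGORY_PREFIX[cls])
  ```

  With a predicate like this, a `\p{N}`-style class can be expanded into explicit codepoint ranges that `std::regex` (or `std::wregex`) does understand.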

* unicode : try fix windows

* unicode : category support via std::regex

* unicode : clean-up

* unicode : simplify

* convert : add convert-hf-to-gguf-update.py

ggml-ci
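  The update script identifies which pre-tokenizer a model needs by hashing the token ids its Hugging Face tokenizer produces for a fixed probe string, then mapping that hash to a `tokenizer.ggml.pre` value. A hedged sketch of the mechanism; the probe string, hashes, and table entries below are placeholders, not the real ones:

  ```python
  import hashlib

  def pretokenizer_hash(token_ids) -> str:
      """Hash the token ids produced for a fixed probe string, so the
      converter can recognize a model's pre-tokenizer behavior."""
      return hashlib.sha256(str(token_ids).encode()).hexdigest()

  # Hypothetical hash -> "tokenizer.ggml.pre" mapping (real values are
  # generated into convert-hf-to-gguf.py by the update script):
  KNOWN_PRE = {
      # "<hash-for-llama-bpe>": "llama-bpe",
      # "<hash-for-falcon>": "falcon",
  }

  def detect_pre(token_ids) -> str:
      return KNOWN_PRE.get(pretokenizer_hash(token_ids), "unknown")
  ```

  Hashing the *output* rather than inspecting tokenizer config files sidesteps the problem that different models describe the same pre-tokenization in incompatible ways.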

* lint : update

* convert : add falcon

ggml-ci

* unicode : normalize signatures

* lint : fix

* lint : fix

* convert : remove unused functions

* convert : add comments

* convert : exercise contractions

ggml-ci

* lint : fix

* cmake : refactor test targets

* tests : refactor vocab tests

ggml-ci

* tests : add more vocabs and tests

ggml-ci

* unicode : cleanup

* scripts : ignore new update script in check-requirements.sh

* models : add phi-3, mpt, gpt-2, starcoder

* tests : disable obsolete

ggml-ci

* tests : use faster bpe test

ggml-ci

* llama : more prominent warning for old BPE models

* tests : disable test-tokenizer-1-bpe due to slowness

ggml-ci

---------

Co-authored-by: Jaggzh <redacted>
Co-authored-by: Kazim Abrar Mahi <redacted>
64 files changed:
.github/workflows/python-lint.yml
.gitignore
Makefile
common/common.cpp
common/common.h
convert-hf-to-gguf-update.py [new file with mode: 0644]
convert-hf-to-gguf.py
convert-llama-ggml-to-gguf.py
convert-persimmon-to-gguf.py
gguf-py/gguf/constants.py
gguf-py/gguf/gguf_reader.py
gguf-py/gguf/gguf_writer.py
llama.cpp
llama.h
models/ggml-vocab-bert-bge.gguf [new file with mode: 0644]
models/ggml-vocab-bert-bge.gguf.inp [new file with mode: 0644]
models/ggml-vocab-bert-bge.gguf.out [new file with mode: 0644]
models/ggml-vocab-deepseek-coder.gguf [new file with mode: 0644]
models/ggml-vocab-deepseek-coder.gguf.inp [new file with mode: 0644]
models/ggml-vocab-deepseek-coder.gguf.out [new file with mode: 0644]
models/ggml-vocab-deepseek-llm.gguf [new file with mode: 0644]
models/ggml-vocab-deepseek-llm.gguf.inp [new file with mode: 0644]
models/ggml-vocab-deepseek-llm.gguf.out [new file with mode: 0644]
models/ggml-vocab-falcon.gguf
models/ggml-vocab-falcon.gguf.inp [new file with mode: 0644]
models/ggml-vocab-falcon.gguf.out [new file with mode: 0644]
models/ggml-vocab-gpt-2.gguf [new file with mode: 0644]
models/ggml-vocab-gpt-2.gguf.inp [new file with mode: 0644]
models/ggml-vocab-gpt-2.gguf.out [new file with mode: 0644]
models/ggml-vocab-llama-bpe.gguf [new file with mode: 0644]
models/ggml-vocab-llama-bpe.gguf.inp [new file with mode: 0644]
models/ggml-vocab-llama-bpe.gguf.out [new file with mode: 0644]
models/ggml-vocab-llama-spm.gguf [new file with mode: 0644]
models/ggml-vocab-llama-spm.gguf.inp [new file with mode: 0644]
models/ggml-vocab-llama-spm.gguf.out [new file with mode: 0644]
models/ggml-vocab-llama.gguf [deleted file]
models/ggml-vocab-mpt.gguf
models/ggml-vocab-mpt.gguf.inp [new file with mode: 0644]
models/ggml-vocab-mpt.gguf.out [new file with mode: 0644]
models/ggml-vocab-phi-3.gguf [new file with mode: 0644]
models/ggml-vocab-phi-3.gguf.inp [new file with mode: 0644]
models/ggml-vocab-phi-3.gguf.out [new file with mode: 0644]
models/ggml-vocab-stablelm-3b-4e1t.gguf [deleted file]
models/ggml-vocab-stablelm.gguf [new file with mode: 0644]
models/ggml-vocab-starcoder.gguf
models/ggml-vocab-starcoder.gguf.inp [new file with mode: 0644]
models/ggml-vocab-starcoder.gguf.out [new file with mode: 0644]
requirements.txt
requirements/requirements-convert-hf-to-gguf-update.txt [new file with mode: 0644]
scripts/check-requirements.sh
tests/CMakeLists.txt
tests/test-tokenizer-0-bpe.py [new file with mode: 0644]
tests/test-tokenizer-0-falcon.cpp [deleted file]
tests/test-tokenizer-0-falcon.py [deleted file]
tests/test-tokenizer-0-llama.cpp [deleted file]
tests/test-tokenizer-0-llama.py [deleted file]
tests/test-tokenizer-0-spm.py [new file with mode: 0644]
tests/test-tokenizer-0.cpp [new file with mode: 0644]
tests/test-tokenizer-1-llama.cpp [deleted file]
tests/test-tokenizer-1-spm.cpp [new file with mode: 0644]
unicode-data.cpp
unicode-data.h
unicode.cpp
unicode.h