Add support for encoder-only T5 models (#8900)
author    fairydreaming <redacted>
          Sat, 10 Aug 2024 09:43:26 +0000 (11:43 +0200)
committer GitHub <redacted>
          Sat, 10 Aug 2024 09:43:26 +0000 (11:43 +0200)
commit    7c3f55c10051c634546247387c5c359c9d499360
tree      39f7fe944edaea347881b4eb9b62c9c8c23d7ef6
parent    911b437f228e75aa3d235acec21bfddd23ecce2f

* gguf-py : add T5ENCODER model architecture

* common : call llama_decode() during warmup only if the model has a decoder

* convert-hf : add T5EncoderModel

* llama : add llama_model_has_decoder() API function

* llama : split build_t5() into build_t5_encoder() and build_t5_decoder()

* llama : add support for LLM_ARCH_T5ENCODER

* llama-embedding : add support for LLAMA_POOLING_TYPE_NONE

* llama-embedding : add support for encoder-only models

---------

Co-authored-by: Stanisław Szymczyk <redacted>
common/common.cpp
convert_hf_to_gguf.py
examples/embedding/embedding.cpp
gguf-py/gguf/constants.py
include/llama.h
src/llama.cpp