Daniel Bevenius [Wed, 21 Feb 2024 13:36:57 +0000 (14:36 +0100)]
llava : add --skip-unknown to 1.6 convert.py (#5632)
This commit adds the `--skip-unknown` option to the convert.py script
and removes the saving of the updated checkpoints, to avoid modifying
files that may have been checked out from a model repository.
The motivation for this change is that the same was done for 1.5
in Commit fc0c8d286a533363a9a663510b62af85ffad58b3 ("llava :
update surgery script to not remove tensors"), and it makes the examples
more consistent.
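A minimal sketch of what a `--skip-unknown` style flag can look like in a conversion script; the names and structure here are illustrative, not the actual convert.py code:

```python
# Illustrative sketch only, not the real convert.py implementation.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="convert a model to GGUF")
    parser.add_argument("--skip-unknown", action="store_true",
                        help="skip tensors with unrecognized names instead of aborting")
    return parser.parse_args()

def map_tensors(tensors, name_map, skip_unknown):
    # Yield (mapped_name, data) pairs; unknown names either abort or are skipped.
    for name, data in tensors.items():
        if name not in name_map:
            if skip_unknown:
                print(f"skipping unknown tensor: {name}")
                continue
            raise ValueError(f"unknown tensor {name} (pass --skip-unknown to ignore)")
        yield name_map[name], data
```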
Kawrakow [Wed, 21 Feb 2024 09:39:52 +0000 (11:39 +0200)]
IQ4_NL: 4-bit non-linear quants with blocks of 32 (#5590)
* iq4_nl: squash commits for easier rebase
* Basics (quantize, dequantize)
* CUDA dequantize and dot product
* Slightly faster CUDA dot product (120 t/s)
* Switch to 6-bit scales
* Scalar dot product
* AVX2 dot product
* ARM_NEON dot product
* Works on metal, but still slow
* Slightly better Metal dot product
* Another small Metal improvement
* Metal dot product is getting there
* Faster CUDA dot product
* Add 1/8 ffn_down layers as Q5_K when no imatrix has been provided
* Report the actual bpw
* Add _xs mix that is 4.05 bpw for non-MoE models
* Remove IQ4_XS for now, slightly adjust kvalues_iq4nl
* AVX2 dot product uses Q8_0 instead of Q8_K
* Add to test-backend-ops
* Minor fix
* Also use Q5_K for attn_output in MoE models
* Fixes after merging latest master
* Switching to blocks of 32
* AVX2 for blocks of 32
* Scalar dot product for blocks of 32
* ARM_NEON dot product for blocks of 32
* Metal kernels for blocks of 32
* Slightly faster Metal kernels
* iq4_nl: Fix after merging with master
* iq4_nl: another fix after merging with master
* Use IQ4_NL instead of Q4_K when using k-quants is not possible
* Fix typo that made several tests fail
* It was a ggml_vdotq call that was missed inside the brackets
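For context, a conceptual sketch of the IQ4_NL idea in Python: each block of 32 weights stores one scale plus sixteen packed bytes of 4-bit indices into a fixed non-linear codebook. The codebook values below are reproduced from kvalues_iq4nl as it appeared in ggml around this change (check ggml-quants.c for the authoritative table), and the scale handling is deliberately simplified, so treat this as an illustration rather than the actual kernels:

```python
import numpy as np

# Non-linear 4-bit codebook; reproduced for illustration.
KVALUES = np.array([-127, -104, -83, -65, -49, -35, -22, -10,
                    1, 13, 25, 38, 53, 69, 89, 113], dtype=np.float32)

def quantize_iq4_nl_block(x):
    """Quantize one block of 32 floats to (scale, 16 packed index bytes)."""
    assert x.size == 32
    # Simplified scale: map the largest-magnitude weight near the codebook
    # extremes. The real kernels search for a better scale and store FP16.
    amax = float(np.abs(x).max())
    scale = amax / 127.0 if amax > 0 else 1.0
    # Nearest codebook entry for each scaled weight -> 4-bit index.
    idx = np.abs(x[:, None] / scale - KVALUES[None, :]).argmin(axis=1)
    # Pack two 4-bit indices per byte.
    packed = (idx[0::2] | (idx[1::2] << 4)).astype(np.uint8)
    return scale, packed

def dequantize_iq4_nl_block(scale, packed):
    idx = np.empty(32, dtype=np.int64)
    idx[0::2] = packed & 0x0F
    idx[1::2] = packed >> 4
    return scale * KVALUES[idx]
```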
Daniel Bevenius [Tue, 20 Feb 2024 17:30:27 +0000 (18:30 +0100)]
llava : add explicit instructions for llava-1.6 (#5611)
This commit adds explicit instructions to the README.md of the llava
example for how to convert a llava-1.6 model and run it using llava-cli.
The motivation for this is that having explicit instructions, similar to
the existing 1.5 instructions, will make it easier for users to try this out.
Daniel Bevenius [Mon, 19 Feb 2024 08:31:59 +0000 (09:31 +0100)]
llava : avoid changing the original BakLLaVA model (#5577)
This is a follow-up of Commit fc0c8d286a533363a9a663510b62af85ffad58b3
("llava : update surgery script to not remove tensors") but this time
the change is to the BakLLaVA-specific part of the surgery script.
I've been able to test this using SkunkworksAI/BakLLaVA-1 and it works
as expected using the instructions in README.md.
Jared Van Bortel [Sun, 18 Feb 2024 21:21:52 +0000 (16:21 -0500)]
build : pass all warning flags to nvcc via -Xcompiler (#5570)
* build : pass all warning flags to nvcc via -Xcompiler
* make : fix apparent mis-merge from #3952
* make : fix incorrect GF_CC_VER for CUDA host compiler
Daniel Bevenius [Sun, 18 Feb 2024 16:19:23 +0000 (17:19 +0100)]
llava : update surgery script to not remove tensors (#5536)
This commit updates the surgery script to not remove the tensors from the
model file. For this to work the `--skip-unknown` flag is added as an
argument to the convert.py script in README.md.
The motivation for this change is that the surgery script currently
removes the projector tensors from the model file. If the model was
checked out from a repository, the model file will have been modified
and will have to be checked out again to undo this. If this can be
avoided I think it would be preferable.
I did not perform this change for BakLLaVA models as I am not sure
how that part works.
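The non-destructive approach can be sketched as follows; this is an illustration of the idea with made-up names, not the actual surgery script:

```python
# Sketch: copy the projector tensors to their own file and leave the
# original checkpoint untouched; convert.py's --skip-unknown then ignores
# them during conversion.
import torch

def extract_projector(checkpoint_path, output_path, marker="mm_projector"):
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    projector = {k: v for k, v in checkpoint.items() if marker in k}
    # The old behavior also deleted these keys and re-saved the checkpoint,
    # modifying a possibly checked-out file; now only the copy is written.
    torch.save(projector, output_path)
```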
Co-authored-by: Jared Van Bortel <redacted>
* Update READMEs with info about numa flags, change INTERLEAVE strategy name to DISTRIBUTE everywhere, implement the improved distribution strategy from @rankaiyx, fix a spelling mistake and un-merge some bad merges
* split numa init out from llama_backend_init and created llama_numa_init. Updated all code paths and samples
* Fix up some boolean vs enum comparisons
* Added #ifdefs for non-Linux OS that don't have cpu_set_t datatype
* Update ggml.h
Align enum values
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml.c
Remove whitespace
Co-authored-by: Georgi Gerganov <redacted>
* Update ggml.c
align parameters
Co-authored-by: Georgi Gerganov <redacted>
* Update examples/server/server.cpp
remove whitespace and align brace
Co-authored-by: Georgi Gerganov <redacted>
* Update common/common.cpp
Remove whitespace and align brace
Co-authored-by: Georgi Gerganov <redacted>
* unified ggml_numa_strategy enum and fixed text alignment in server.cpp example
* Update ggml.c
simplified return for platforms without NUMA support
Co-authored-by: Jared Van Bortel <redacted>
* removed redundant else from cli argument processing of --numa
* whitespace
---------
Co-authored-by: root <redacted>
Co-authored-by: Jared Van Bortel <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: Jared Van Bortel <redacted>
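As an aside, the DISTRIBUTE strategy can be sketched in a few lines. This Python illustration only shows the assignment policy, spreading threads round-robin across NUMA nodes instead of packing them onto one node; the real code pins threads on Linux using cpu_set_t and is not Python:

```python
def distribute_cpus(numa_nodes, n_threads):
    """numa_nodes: list of CPU-id lists, one per node.
    Returns one CPU id per thread, alternating across nodes."""
    assignment = []
    cursors = [0] * len(numa_nodes)
    node = 0
    while len(assignment) < n_threads:
        cpus = numa_nodes[node]
        assignment.append(cpus[cursors[node] % len(cpus)])
        cursors[node] += 1
        node = (node + 1) % len(numa_nodes)
    return assignment

# e.g. two nodes with 4 CPUs each, 6 threads:
# distribute_cpus([[0, 1, 2, 3], [4, 5, 6, 7]], 6) -> [0, 4, 1, 5, 2, 6]
```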
Daniel Bevenius [Fri, 16 Feb 2024 09:24:39 +0000 (10:24 +0100)]
llava : fix clip-model-is-vision flag in README.md (#5509)
* llava: fix clip-model-is-vision flag in README.md
This commit fixes the flag `--clip_model_is_vision` in README.md, which
does not match the actual flag:
```console
$ python convert-image-encoder-to-gguf.py --help
...
--clip-model-is-vision
The clip model is a pure vision model
(ShareGPT4V vision extract for example)
```
Signed-off-by: Daniel Bevenius <redacted>
* llava: update link to vit config in README.md
Signed-off-by: Daniel Bevenius <redacted>
---------
John [Wed, 14 Feb 2024 07:38:35 +0000 (08:38 +0100)]
llava : support v1.6 (#5267)
* Create llava-survery-v2.py
* Update convert-image-encoder-to-gguf.py
* Update convert-image-encoder-to-gguf.py
* Rename llava-survery-v2.py to llava-surgery-v2.py
* Update convert-image-encoder-to-gguf.py
will now search for projector
* Update convert-image-encoder-to-gguf.py
whoops
* Update llava-surgery-v2.py
* Clip: Bugfix for normalization (it did not load the 3 std and mean values; see the normalization sketch after this entry)
Clip: bicubic resize function
Clip: added save-to-bmp/pil for debugging and conversion from/to 32/8 images
Clip: added normalization with FP16 precision simulation (image tensors match the HF implementation, can be switched off, only used for llava-1.6)
Clip: added newline tensor, mergetype kv, image-grid kv, new resize-pad function with resolution from gridpoints
Clip: clip_image_preprocess now returns a float * vector instead of float, so both llava-1.5 and 1.6 are supported
llava: added ggml cpu graph for embedding patching, added preliminary spatial_unpad support, added a lot of comments that need to be cleaned up when all is final
convert-image-encoder: fixed image-grid flattening
* whitespace corrections
* ws
* Tensors are now properly permuted.
Before, the embeddings were inserted 1:1; now they are split into the 24x24 patches as in the reference implementation.
* ws
* added verbose_prompt support into cli
added stopwords for llava-1.6 into cli
* moved llava functions to llava.cpp, made clip.h C compatible API, replaced vector style functions with pointers, added a debug define to remove functions from compilation while not needed
* ws
* convert : skip unknown tensors (need for LLaVA)
* llava : update readme
* llava : fix compile warnings
* llava : style
* convert : add --skip-unknown CLI arg
* server : remove clip structs
* bugfix for non-llava-1.6 models
It should now work with llava-1.5 as well
* clip : minor code rearrange
* llava : update readme a bit
---------
Co-authored-by: John <redacted>
Co-authored-by: Georgi Gerganov <redacted>
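The normalization bugfix mentioned above boils down to applying all three per-channel mean/std pairs. A minimal sketch, using the widely published OpenAI CLIP constants as illustrative defaults (the actual code reads these values from the model's metadata):

```python
import numpy as np

# Per-channel normalization constants; illustrative defaults only.
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
CLIP_STD  = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def normalize_image(img_hwc_uint8):
    """img: (H, W, 3) uint8 -> (H, W, 3) float32, normalized per channel."""
    x = img_hwc_uint8.astype(np.float32) / 255.0
    return (x - CLIP_MEAN) / CLIP_STD  # broadcasts over the channel axis
```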
This commit renames the feed-forward tensors w1, w2 and w3 to ffn_gate,
ffn_down and ffn_up respectively.
The motivation for this change is to make it easier to understand the
purpose of the tensors. This also seems to be in line with the names
used in the llama_layer struct in llama.cpp.
Signed-off-by: Daniel Bevenius <redacted>
* train-text-from-scratch: rename ff tensors
This commit renames the feed-forward tensors w1, w2 and w3 to ffn_gate,
ffn_down and ffn_up respectively.
The motivation for this change is to make it easier to understand the
purpose of the tensors. This also seems to be in line with the names
used in the llama_layer struct in llama.cpp.
Signed-off-by: Daniel Bevenius <redacted>
---------
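The rename itself is a simple name mapping, per the w1/w2/w3 correspondence stated in the commit message; the helper below is a hypothetical illustration, not code from the commit:

```python
# w1 -> ffn_gate, w2 -> ffn_down, w3 -> ffn_up, as described above.
FF_TENSOR_RENAMES = {
    "w1": "ffn_gate",
    "w2": "ffn_down",
    "w3": "ffn_up",
}

def rename_ff_tensor(name: str) -> str:
    """Map e.g. 'layers.0.w1.weight' to 'layers.0.ffn_gate.weight'."""
    parts = name.split(".")
    return ".".join(FF_TENSOR_RENAMES.get(p, p) for p in parts)
```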
Daniel Bevenius [Mon, 12 Feb 2024 08:38:44 +0000 (09:38 +0100)]
llava : remove prog parameter from ArgumentParser (#5457)
* llava: remove prog parameter from ArgumentParser
This commit removes the `prog` parameter from `ArgumentParser`
so that it uses the default value which is the name of the script.
The motivation for this change is that currently the usage output looks
like this:
```console
$ python examples/llava/convert-image-encoder-to-gguf.py --help
usage: convert_hf_to_gguf.py [-h] ...
```
And with this change it will look like this:
```console
$ python examples/llava/convert-image-encoder-to-gguf.py --help
usage: convert-image-encoder-to-gguf.py [-h] ...
```
Signed-off-by: Daniel Bevenius <redacted>
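A minimal sketch of the before/after: a hard-coded prog can drift out of sync with the file name, while the default is derived from sys.argv[0]:

```python
import argparse

# Before: an explicit prog that no longer matches the script's file name.
parser_before = argparse.ArgumentParser(prog="convert_hf_to_gguf.py")

# After: no prog, so argparse derives it from sys.argv[0] (the script name),
# and the usage line stays correct even if the file is renamed.
parser_after = argparse.ArgumentParser()
```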
* ci: add W503 to flake8 ignore list
This commit adds W503 to the ignore list for flake8. This is done to
avoid the following error:
W503 line break before binary operator
Signed-off-by: Daniel Bevenius <redacted>
---------
Douglas Hanley [Sun, 11 Feb 2024 16:21:38 +0000 (10:21 -0600)]
Add support for BERT embedding models (#5423)
* BERT model graph construction (build_bert)
* WordPiece tokenizer (llm_tokenize_wpm), sketched after this entry
* Add flag for non-causal attention models
* Allow for models that only output embeddings
* Support conversion of BERT models to GGUF
* Based on prior work by @xyzhang626 and @skeskinen
---------
Co-authored-by: Jared Van Bortel <redacted>
Co-authored-by: Jared Van Bortel <redacted>
Co-authored-by: Georgi Gerganov <redacted>
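A minimal sketch of the greedy longest-match-first WordPiece scheme that llm_tokenize_wpm implements; the tiny vocabulary in the usage example is made up:

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]", max_chars=100):
    """Greedy longest-match-first WordPiece tokenization of a single word."""
    if len(word) > max_chars:
        return [unk]
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        cur = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # continuation pieces get a ## prefix
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:
            return [unk]  # no piece matched: the whole word is unknown
        tokens.append(cur)
        start = end
    return tokens

# e.g. wordpiece_tokenize("unaffable", {"un", "##aff", "##able"})
# -> ["un", "##aff", "##able"]
```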