llava : add requirements.txt and update README.md (#5428)
author    Daniel Bevenius <redacted>
          Fri, 9 Feb 2024 13:00:59 +0000 (14:00 +0100)
committer GitHub <redacted>
          Fri, 9 Feb 2024 13:00:59 +0000 (15:00 +0200)
* llava: add requirements.txt and update README.md

This commit adds a `requirements.txt` file to the `examples/llava`
directory. It lists the Python packages required to run the scripts in
that directory.

The motivation for this is to make it easier for users to run the
scripts in `examples/llava`, and to avoid them running into
missing-package errors when the required packages are not installed on
their system.
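
For example, a typical setup could look like the following (a minimal
sketch; the virtual environment step is optional and not part of this
change):

```sh
# optionally isolate the packages in a virtual environment
python3 -m venv venv
. venv/bin/activate

# install the packages used by the llava example scripts
pip install -r examples/llava/requirements.txt
```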

Signed-off-by: Daniel Bevenius <redacted>
* llava: fix typo in llava-surgery.py output

Signed-off-by: Daniel Bevenius <redacted>
---------

Signed-off-by: Daniel Bevenius <redacted>
examples/llava/README.md
examples/llava/llava-surgery.py
examples/llava/requirements.txt [new file with mode: 0644]

index 721d5e61397552684f1eb5eb144b0329301bc6cf..19f1a50a235d77dc0757c749abfe36e7bfca589d 100644 (file)
@@ -29,19 +29,25 @@ git clone https://huggingface.co/liuhaotian/llava-v1.5-7b
 git clone https://huggingface.co/openai/clip-vit-large-patch14-336
 ```
 
-2. Use `llava-surgery.py` to split the LLaVA model into its LLaMA and multimodal projector constituents:
+2. Install the required Python packages:
+
+```sh
+pip install -r examples/llava/requirements.txt
+```
+
+3. Use `llava-surgery.py` to split the LLaVA model into its LLaMA and multimodal projector constituents:
 
 ```sh
 python ./examples/llava/llava-surgery.py -m ../llava-v1.5-7b
 ```
 
-3. Use `convert-image-encoder-to-gguf.py` to convert the LLaVA image encoder to GGUF:
+4. Use `convert-image-encoder-to-gguf.py` to convert the LLaVA image encoder to GGUF:
 
 ```sh
 python ./examples/llava/convert-image-encoder-to-gguf.py -m ../clip-vit-large-patch14-336 --llava-projector ../llava-v1.5-7b/llava.projector --output-dir ../llava-v1.5-7b
 ```
 
-4. Use `convert.py` to convert the LLaMA part of LLaVA to GGUF:
+5. Use `convert.py` to convert the LLaMA part of LLaVA to GGUF:
 
 ```sh
 python ./convert.py ../llava-v1.5-7b
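
After step 5 the model directory contains both the LLaMA GGUF file and
the multimodal projector GGUF file. A hedged usage sketch (the
`llava-cli` invocation and output file names are assumptions based on
the conversion scripts' defaults, not part of this diff):

```sh
# run the converted model against an image; adjust file names as needed
./llava-cli -m ../llava-v1.5-7b/ggml-model-f16.gguf \
    --mmproj ../llava-v1.5-7b/mmproj-model-f16.gguf \
    --image path/to/an/image.jpg
```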
index 515f6b58d47f5f4bd7d271501206b54e973af259..0a61efdfe14d16b3b2caefe178196c139b4b1802 100644 (file)
@@ -42,5 +42,5 @@ if len(clip_tensors) > 0:
 torch.save(checkpoint, path)
 
 print("Done!")
-print(f"Now you can convert {args.model} to a regular LLaMA GGUF file.")
+print(f"Now you can convert {args.model} to a regular LLaMA GGUF file.")
 print(f"Also, use {args.model}/llava.projector to prepare a llava-encoder.gguf file.")
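
The projector file named in this output is what the README's step 4
consumes via `--llava-projector`. A quick sanity check after running
the surgery script (a sketch, assuming the `../llava-v1.5-7b` model
path used in the README):

```sh
# the surgery script saves the projector tensors next to the checkpoint
ls -lh ../llava-v1.5-7b/llava.projector
```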
diff --git a/examples/llava/requirements.txt b/examples/llava/requirements.txt
new file mode 100644 (file)
index 0000000..f80f727
--- /dev/null
@@ -0,0 +1,3 @@
+-r ../../requirements/requirements-convert.txt
+pillow~=10.2.0
+torch~=2.1.1
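
The `~=` operator is pip's compatible-release specifier (PEP 440): it
pins the release series while still allowing newer patch versions. The
two pins above are equivalent to the explicit bounds below:

```sh
# pillow~=10.2.0 and torch~=2.1.1 expand to these ranges
pip install "pillow>=10.2.0,<10.3.0"
pip install "torch>=2.1.1,<2.2.0"
```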