author    Olivier Chafik <redacted>
Wed, 12 Jun 2024 23:41:52 +0000 (00:41 +0100)
committer GitHub <redacted>
Wed, 12 Jun 2024 23:41:52 +0000 (00:41 +0100)
commit    1c641e6aac5c18b964e7b32d9dbbb4bf5301d0d7
tree      616348dac8e67d80a03a81847ce9ee4bb7e19d49
parent    963552903f51043ee947a8deeaaa7ec00bc3f1a4
`build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)

* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew

* server: update refs -> llama-server

gitignore llama-server

* server: simplify nix package

* main: update refs -> llama

fix examples/main ref

* main/server: fix targets

* update more names

* Update build.yml

* rm accidentally checked in bins

* update straggling refs

* Update .gitignore

* Update server-llm.sh

* main: target name -> llama-cli

* Prefix all example bins w/ llama-

* fix main refs

* rename {main->llama}-cmake-pkg binary

* prefix more cmake targets w/ llama-

* add/fix gbnf-validator subfolder to cmake

* sort cmake example subdirs

* rm bin files

* fix llama-lookup-* Makefile rules

* gitignore /llama-*

* rename Dockerfiles

* rename llama|main -> llama-cli; consistent RPM bin prefixes

* fix some missing -cli suffixes

* rename dockerfile w/ llama-cli

* rename(make): llama-baby-llama

* update dockerfile refs

* more llama-cli(.exe)

* fix test-eval-callback

* rename: llama-cli-cmake-pkg(.exe)

* address gbnf-validator unused fread warning (switched to C++ / ifstream; see the sketch below)

* add two missing llama- prefixes

* Updating docs for eval-callback binary to use new `llama-` prefix.

* Updating a few lingering doc references for rename of main to llama-cli

* Updating `run-with-preset.py` to use new binary names.
Updating docs around `perplexity` binary rename.

* Updating documentation references for lookup-merge and export-lora

* Updating two small `main` references missed earlier in the finetune docs.

* Update apps.nix

* update grammar/README.md w/ new llama-* names

* update llama-rpc-server bin name + doc

* Revert "update llama-rpc-server bin name + doc"

This reverts commit e474ef1df481fd8936cd7d098e3065d7de378930.

* add hot topic notice to README.md

* Update README.md

* Update README.md

* rename gguf-split & quantize bins refs in **/tests.sh

---------

Co-authored-by: HanClinto <redacted>
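
The gbnf-validator item above mentions replacing a `fread` call with C++ iostreams to silence an unused-result warning. Below is a minimal sketch of that pattern, reading a whole file through `std::ifstream`; the helper name `read_file` and the surrounding `main()` are illustrative assumptions, not the code from this commit.

```cpp
// Illustrative only: read a file via std::ifstream instead of fread,
// avoiding the "ignoring return value of 'fread'" warning.
// The helper name read_file is an assumption, not code from this commit.
#include <fstream>
#include <sstream>
#include <string>
#include <iostream>

static bool read_file(const std::string & path, std::string & out) {
    std::ifstream in(path, std::ios::binary);
    if (!in) {
        return false;
    }
    std::ostringstream ss;
    ss << in.rdbuf();  // pull the whole file into the buffer
    out = ss.str();
    return true;
}

int main(int argc, char ** argv) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <grammar-file>\n";
        return 1;
    }
    std::string contents;
    if (!read_file(argv[1], contents)) {
        std::cerr << "failed to read " << argv[1] << "\n";
        return 1;
    }
    std::cout << "read " << contents.size() << " bytes\n";
    return 0;
}
```

Reading through `rdbuf()` leaves the stream state to be checked directly, so there is no return value to discard, unlike `fread`.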
138 files changed:
.devops/cloud-v-pipeline
.devops/llama-cli-cuda.Dockerfile [new file with mode: 0644]
.devops/llama-cli-intel.Dockerfile [new file with mode: 0644]
.devops/llama-cli-rocm.Dockerfile [new file with mode: 0644]
.devops/llama-cli-vulkan.Dockerfile [new file with mode: 0644]
.devops/llama-cli.Dockerfile [new file with mode: 0644]
.devops/llama-cpp-clblast.srpm.spec
.devops/llama-cpp-cuda.srpm.spec
.devops/llama-cpp.srpm.spec
.devops/llama-server-cuda.Dockerfile [new file with mode: 0644]
.devops/llama-server-intel.Dockerfile [new file with mode: 0644]
.devops/llama-server-rocm.Dockerfile [new file with mode: 0644]
.devops/llama-server-vulkan.Dockerfile [new file with mode: 0644]
.devops/llama-server.Dockerfile [new file with mode: 0644]
.devops/main-cuda.Dockerfile [deleted file]
.devops/main-intel.Dockerfile [deleted file]
.devops/main-rocm.Dockerfile [deleted file]
.devops/main-vulkan.Dockerfile [deleted file]
.devops/main.Dockerfile [deleted file]
.devops/nix/apps.nix
.devops/nix/package.nix
.devops/server-cuda.Dockerfile [deleted file]
.devops/server-intel.Dockerfile [deleted file]
.devops/server-rocm.Dockerfile [deleted file]
.devops/server-vulkan.Dockerfile [deleted file]
.devops/server.Dockerfile [deleted file]
.devops/tools.sh
.dockerignore
.github/ISSUE_TEMPLATE/01-bug-low.yml
.github/ISSUE_TEMPLATE/02-bug-medium.yml
.github/ISSUE_TEMPLATE/03-bug-high.yml
.github/ISSUE_TEMPLATE/04-bug-critical.yml
.github/workflows/bench.yml
.github/workflows/build.yml
.github/workflows/docker.yml
.github/workflows/server.yml
.gitignore
Makefile
README-sycl.md
README.md
ci/run.sh
docs/HOWTO-add-model.md
docs/token_generation_performance_tips.md
examples/CMakeLists.txt
examples/Miku.sh
examples/baby-llama/CMakeLists.txt
examples/base-translate.sh
examples/batched-bench/CMakeLists.txt
examples/batched-bench/README.md
examples/batched.swift/Makefile
examples/batched.swift/Package.swift
examples/batched.swift/README.md
examples/batched/CMakeLists.txt
examples/batched/README.md
examples/benchmark/CMakeLists.txt
examples/chat-13B.sh
examples/chat-persistent.sh
examples/chat-vicuna.sh
examples/chat.sh
examples/convert-llama2c-to-ggml/CMakeLists.txt
examples/convert-llama2c-to-ggml/README.md
examples/embedding/CMakeLists.txt
examples/embedding/README.md
examples/eval-callback/CMakeLists.txt
examples/eval-callback/README.md
examples/export-lora/CMakeLists.txt
examples/export-lora/README.md
examples/finetune/CMakeLists.txt
examples/finetune/README.md
examples/finetune/finetune.sh
examples/gbnf-validator/CMakeLists.txt
examples/gbnf-validator/gbnf-validator.cpp
examples/gguf-split/CMakeLists.txt
examples/gguf-split/tests.sh
examples/gguf/CMakeLists.txt
examples/gritlm/CMakeLists.txt
examples/gritlm/README.md
examples/imatrix/CMakeLists.txt
examples/imatrix/README.md
examples/infill/CMakeLists.txt
examples/infill/README.md
examples/jeopardy/jeopardy.sh
examples/json-schema-pydantic-example.py
examples/json_schema_to_grammar.py
examples/llama-bench/README.md
examples/llava/CMakeLists.txt
examples/llava/MobileVLM-README.md
examples/llava/README.md
examples/llava/android/adb_run.sh
examples/lookahead/CMakeLists.txt
examples/lookup/CMakeLists.txt
examples/lookup/lookup-merge.cpp
examples/main-cmake-pkg/CMakeLists.txt
examples/main-cmake-pkg/README.md
examples/main/CMakeLists.txt
examples/main/README.md
examples/parallel/CMakeLists.txt
examples/passkey/CMakeLists.txt
examples/passkey/README.md
examples/perplexity/CMakeLists.txt
examples/perplexity/perplexity.cpp
examples/quantize-stats/CMakeLists.txt
examples/quantize/CMakeLists.txt
examples/quantize/tests.sh
examples/reason-act.sh
examples/retrieval/CMakeLists.txt
examples/retrieval/README.md
examples/rpc/README.md
examples/save-load-state/CMakeLists.txt
examples/server-llama2-13B.sh
examples/server/CMakeLists.txt
examples/server/README.md
examples/server/bench/README.md
examples/server/bench/bench.py
examples/server/public_simplechat/readme.md
examples/server/tests/README.md
examples/server/tests/features/steps/steps.py
examples/simple/CMakeLists.txt
examples/speculative/CMakeLists.txt
examples/sycl/CMakeLists.txt
examples/sycl/README.md
examples/sycl/run-llama2.sh
examples/tokenize/CMakeLists.txt
examples/train-text-from-scratch/CMakeLists.txt
examples/train-text-from-scratch/README.md
flake.nix
grammars/README.md
pocs/vdot/CMakeLists.txt
scripts/get-hellaswag.sh
scripts/get-wikitext-103.sh
scripts/get-wikitext-2.sh
scripts/get-winogrande.sh
scripts/hf.sh
scripts/pod-llama.sh
scripts/qnt-all.sh
scripts/run-all-ppl.sh
scripts/run-with-preset.py
scripts/server-llm.sh