git.djapps.eu Git - pkg/ggml/sources/ggml/commitdiff
readme : simplify
author Georgi Gerganov <redacted>
Mon, 16 Mar 2026 12:54:53 +0000 (14:54 +0200)
committer GitHub <redacted>
Mon, 16 Mar 2026 12:54:53 +0000 (14:54 +0200)
README.md

index aaa7162c46ca10af7caac792a97f2523474a3ae9..455c812587c1ccc9a745a09e3968a87cb0486df1 100644 (file)
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # ggml
 
-[Roadmap](https://github.com/users/ggerganov/projects/7) / [Manifesto](https://github.com/ggerganov/llama.cpp/discussions/205)
+[Manifesto](https://github.com/ggerganov/llama.cpp/discussions/205)
 
 Tensor library for machine learning
 
@@ -44,64 +44,6 @@ cmake --build . --config Release -j 8
 
 For more information, check out the corresponding programs in the [examples](examples) folder.
 
-## Using CUDA
-
-```bash
-# fix the path to point to your CUDA compiler
-cmake -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.1/bin/nvcc ..
-```
-
-## Using hipBLAS
-
-```bash
-cmake -DCMAKE_C_COMPILER="$(hipconfig -l)/clang" -DCMAKE_CXX_COMPILER="$(hipconfig -l)/clang++" -DGGML_HIP=ON
-```
-
-## Using SYCL
-
-```bash
-# linux
-source /opt/intel/oneapi/setvars.sh
-cmake -G "Ninja" -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL=ON ..
-
-# windows
-"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
-cmake -G "Ninja" -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DGGML_SYCL=ON ..
-```
-
-## Compiling for Android
-
-Download and unzip the NDK from the [download page](https://developer.android.com/ndk/downloads). Set the `NDK_ROOT_PATH` environment variable, or pass the absolute path to the NDK directly via `CMAKE_ANDROID_NDK` in the command below.
-
-```bash
-cmake .. \
-   -DCMAKE_SYSTEM_NAME=Android \
-   -DCMAKE_SYSTEM_VERSION=33 \
-   -DCMAKE_ANDROID_ARCH_ABI=arm64-v8a \
-   -DCMAKE_ANDROID_NDK=$NDK_ROOT_PATH \
-   -DCMAKE_ANDROID_STL_TYPE=c++_shared
-```
-
-```bash
-# create the directories (idempotent)
-adb shell 'mkdir -p /data/local/tmp/bin'
-adb shell 'mkdir -p /data/local/tmp/models'
-
-# push the compiled binaries to the folder
-adb push bin/* /data/local/tmp/bin/
-
-# push the ggml library
-adb push src/libggml.so /data/local/tmp/
-
-# push model files
-adb push models/gpt-2-117M/ggml-model.bin /data/local/tmp/models/
-
-adb shell
-cd /data/local/tmp
-export LD_LIBRARY_PATH=/data/local/tmp
-./bin/gpt-2-backend -m models/ggml-model.bin -p "this is an example"
-```
-
 ## Resources
 
 - [Introduction to ggml](https://huggingface.co/blog/introduction-to-ggml)