add blog link (#6222)
author Neo Zhang Jianyu <redacted>
Fri, 22 Mar 2024 07:19:37 +0000 (15:19 +0800)
committer GitHub <redacted>
Fri, 22 Mar 2024 07:19:37 +0000 (15:19 +0800)
README-sycl.md

index 501b9d481a026c5e901032e5c609791f0c9762a1..cbf14f2da7c07f0dc496562d3596459c75f5036e 100644
@@ -29,6 +29,7 @@ For Intel CPUs, we recommend using llama.cpp for x86 (Intel MKL build).
 ## News
 
 - 2024.3
+  - A blog post has been published: **Run LLM on all Intel GPUs Using llama.cpp**: [intel.com](https://www.intel.com/content/www/us/en/developer/articles/technical/run-llm-on-all-gpus-using-llama-cpp-artical.html) or [medium.com](https://medium.com/@jianyu_neo/run-llm-on-all-intel-gpus-using-llama-cpp-fd2e2dcbd9bd).
   - A new baseline is ready: [tag b2437](https://github.com/ggerganov/llama.cpp/tree/b2437).
   - Multiple cards are supported via **--split-mode**: [none|layer]; [row] is not supported yet (under development).
   - The main GPU can be assigned with **--main-gpu**, replacing $GGML_SYCL_DEVICE (see the usage sketch below).
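
As context for the two flags above, here is a minimal usage sketch, not part of the commit itself. It assumes a SYCL build of llama.cpp around tag b2437 with the `main` example binary under `./build/bin/`; the model path and prompt are placeholders:

```sh
# Split work across all detected Intel GPUs, assigning layers per card.
# Model path and prompt are placeholders, not from the commit.
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 --split-mode layer

# Run on a single card; --main-gpu selects which one,
# replacing the old $GGML_SYCL_DEVICE environment variable.
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 --split-mode none --main-gpu 0
```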