git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commitdiff
docs: Fix intel documentation link (#20040)
author Mickael Desgranges <redacted>
Tue, 3 Mar 2026 13:50:00 +0000 (14:50 +0100)
committer GitHub <redacted>
Tue, 3 Mar 2026 13:50:00 +0000 (21:50 +0800)
docs/build.md

index fd447424c78534364a9501a1da5600a6305e69e9..e6f572c77f3caaaeae8f571734bdc8d6d7f3ddce 100644 (file)
@@ -108,7 +108,7 @@ Building through oneAPI compilers will make avx_vnni instruction set available f
 - Using oneAPI docker image:
  If you do not want to source the environment vars and install oneAPI manually, you can also build the code using the Intel docker container: [oneAPI-basekit](https://hub.docker.com/r/intel/oneapi-basekit). Then, you can use the commands given above.
 
-Check [Optimizing and Running LLaMA2 on Intel® CPU](https://www.intel.com/content/www/us/en/content-details/791610/optimizing-and-running-llama2-on-intel-cpu.html) for more information.
+Check [Optimizing and Running LLaMA2 on Intel® CPU](https://builders.intel.com/solutionslibrary/optimizing-and-running-llama2-on-intel-cpu) for more information.
 
 ### Other BLAS libraries
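
The docker bullet in the hunk above says the project can be built inside the oneAPI base-kit container instead of installing oneAPI locally. A minimal sketch of what that might look like, assuming the image from the linked Docker Hub page and the `GGML_SYCL`/`icx`/`icpx` settings used elsewhere in llama.cpp's SYCL build instructions (not shown in this diff):

```shell
# Sketch: build llama.cpp inside the intel/oneapi-basekit container,
# mounting the checked-out source tree at /src. The cmake flags are
# assumptions taken from the SYCL build docs, not from this commit.
docker run --rm -v "$PWD":/src -w /src intel/oneapi-basekit:latest \
  bash -c 'cmake -B build -DGGML_SYCL=ON \
             -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx && \
           cmake --build build --config Release -j'
```

Running the build inside the container keeps the oneAPI toolchain isolated from the host; only the source directory is shared via the bind mount.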