Support multiple GPUs (split mode) on SYCL backend (#5806)
author    Neo Zhang Jianyu <redacted>
          Sat, 2 Mar 2024 11:49:30 +0000 (19:49 +0800)
committer GitHub <redacted>
          Sat, 2 Mar 2024 11:49:30 +0000 (19:49 +0800)
commit    715641391dda1ff9762dc5d99d9a30acce99f2c6
tree      e57b359034b61f8d3ea4de372c2c3c0ec885c943
parent    9bf297a02bfbd474e51912409a470dd797e2fe13
Support multiple GPUs (split mode) on SYCL backend (#5806)

* support multiple cards: split-mode - layer|row (see the usage sketch after the changed-file list)

* remove warning

* rebase with master, support two new OPs, disable the -sm=row feature, fix unit test

* update news

* fix merge error

* update according to review comments

README-sycl.md
common/common.cpp
examples/llama-bench/llama-bench.cpp
examples/sycl/ls-sycl-device.cpp
examples/sycl/run-llama2.sh
ggml-sycl.cpp
ggml-sycl.h
llama.cpp
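
A minimal usage sketch of the new split mode on the SYCL backend (not part of the commit; the binary locations, model path, prompt, and layer count are placeholders, and the ZES_ENABLE_SYSMAN=1 export follows the SYCL README's recommendation for free-memory detection across multiple GPUs):

# list the SYCL devices the backend can see (built from examples/sycl/ls-sycl-device.cpp)
./build/bin/ls-sycl-device

# offload 33 layers and split them layer-by-layer across the detected GPUs;
# per this commit, the -sm=row path is disabled on SYCL, so layer is the supported split mode
ZES_ENABLE_SYSMAN=1 ./build/bin/main \
    -m models/llama-2-7b.Q4_0.gguf \
    -p "Hello, my name is" \
    -n 64 -ngl 33 -sm layer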