llama.cpp: fix warning message (#11839)
author    Oleksandr Kuvshynov <redacted>
          Thu, 13 Feb 2025 06:25:34 +0000 (01:25 -0500)
committer GitHub <redacted>
          Thu, 13 Feb 2025 06:25:34 +0000 (08:25 +0200)
commit    e4376270d971cff7992bdb6c5412a739195b1459
tree      7ef1c1b17fe5d2dbd35da22c8afa4d7b12f222c2
parent    3e693197724c31d53a9b69018c2f1bd0b93ebab2
llama.cpp: fix warning message (#11839)

There was a typo-like error that would print the same number twice when a
request is received with n_predict greater than the server-side configuration.

Before the fix:
```
slot launch_slot_: id  0 | task 0 | n_predict = 4096 exceeds server configuration, setting to 4096
```

After the fix:
```
slot launch_slot_: id  0 | task 0 | n_predict = 8192 exceeds server configuration, setting to 4096
```
examples/server/server.cpp
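
The diff itself is not shown above, so the following is a minimal, self-contained C++ sketch of the bug pattern the message describes, not the verbatim patch: passing the already-clamped value into both format placeholders prints the same number twice, while passing the requested value and the server limit separately produces the corrected warning. The variable names (`requested_n_predict`, `server_n_predict`) are illustrative and are not the actual identifiers used in examples/server/server.cpp.
```
#include <algorithm>
#include <cstdio>

int main() {
    const int server_n_predict    = 4096;  // server-side limit (illustrative name)
    const int requested_n_predict = 8192;  // value from the incoming request (illustrative name)

    // Clamp the requested value to the server configuration.
    const int n_predict = std::min(requested_n_predict, server_n_predict);

    // Buggy pattern: the clamped value fills both placeholders, so the
    // warning reads "n_predict = 4096 exceeds ..., setting to 4096".
    std::printf("n_predict = %d exceeds server configuration, setting to %d\n",
                n_predict, n_predict);

    // Fixed pattern: report the requested value first, then the limit,
    // producing "n_predict = 8192 exceeds ..., setting to 4096".
    std::printf("n_predict = %d exceeds server configuration, setting to %d\n",
                requested_n_predict, server_n_predict);

    return 0;
}
```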