This commit removes `n_threads` from the `llama_decode_internal`
function's doc comment, as the parameter no longer exists.
It looks like this parameter was removed in commit
16bc66d9479edd5ee12ec734973554d4493c5dfa ("llama.cpp : split
llama_context_params into model and context params").
Signed-off-by: Daniel Bevenius <redacted>
//
// - lctx: llama context
// - batch: batch to evaluate
-// - n_threads: number of threads to use
//
// return 0 on success
// return positive int on warning