llama : set n_outputs to 1 to avoid 0 outputs mean-pooling (#15791)
* llama : set n_outputs to 1 to avoid 0 outputs mean-pooling
This commit modifies the llama_context constructor to set n_outputs to 1.

The motivation is that when using pooling, and specifically mean pooling,
for embeddings, having n_outputs set to 0 can lead to the following error:
```console
$ build/bin/llama-embedding -m models/nomic-embed-text-1.5-Q4_K_M.gguf \
--pooling mean -p "Hello, how are you?"
...
llama_context: CPU output buffer size = 0.12 MiB
/home/danbev/work/ai/llama.cpp/ggml/src/ggml.c:3023: GGML_ASSERT(ggml_can_mul_mat(a, b)) failed
0x0000743c96d107e3 in __GI___wait4 (pid=292978, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
warning: 30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory
30 in ../sysdeps/unix/sysv/linux/wait4.c
196 waitpid(child_pid, NULL, 0);
230 ggml_print_backtrace();
3023 GGML_ASSERT(ggml_can_mul_mat(a, b));
1823 cur = ggml_mul_mat(ctx0, ggml_cont(ctx0, ggml_transpose(ctx0, inp)), inp_mean);
18983 llm->build_pooling(cls, cls_b, cls_out, cls_out_b);
1399 auto * gf = model.build_graph(gparams);
292 auto * gf = graph_reserve(1, n_seqs, n_outputs, mctx.get(), true);
2329 auto * ctx = new llama_context(*model, params);
913 llama_context * lctx = llama_init_from_model(model, cparams);
105 common_init_result llama_init = common_init_from_params(params);
[Inferior 1 (process 292976) detached]
Aborted (core dumped)
```
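For reference, a minimal sketch of the change, assuming the reservation
happens in the llama_context constructor as the backtrace above indicates;
this paraphrases the surrounding code rather than reproducing the exact
upstream diff:
```c++
// llama-context.cpp (sketch): the worst-case graphs are reserved in the
// constructor before any tokens have been decoded, so n_outputs would
// otherwise still be 0. With mean pooling, a zero-output reservation makes
// inp_mean an empty tensor and ggml_mul_mat asserts, so reserve with at
// least one output instead.
n_outputs = 1;

auto * gf = graph_reserve(1, n_seqs, n_outputs, mctx.get(), true);
```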
Co-authored-by: Georgi Gerganov <redacted>
* add comment about not reserving graphs with zero outputs
* add assert in graph_reserve to ensure n_outputs >= 1
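A minimal sketch of that assert; the graph_reserve signature below is
inferred from the call site in the backtrace and may not match the upstream
declaration exactly:
```c++
// llama-context.cpp (sketch): reserving a graph with zero outputs is not
// valid here: with mean pooling it yields an empty inp_mean tensor and
// trips GGML_ASSERT(ggml_can_mul_mat(a, b)) deep inside ggml.c.
ggml_cgraph * llama_context::graph_reserve(
        uint32_t n_tokens, uint32_t n_seqs, uint32_t n_outputs,
        llama_memory_context_i * mctx, bool split_only) {
    GGML_ASSERT(n_outputs >= 1);

    // ... build and reserve the worst-case graph as before ...
}
```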