git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
llama : default sampling changes + greedy update (#9897)
author: Georgi Gerganov <redacted>
Mon, 21 Oct 2024 06:46:40 +0000 (09:46 +0300)
committer: GitHub <redacted>
Mon, 21 Oct 2024 06:46:40 +0000 (09:46 +0300)
commit: 55e47786e373c90fc7803e718e3e1dd6d53c3db6
tree: defc4984ec706d598676d4c2103e712706e4467f
parent: bc219750845a59166d79f0d4ee3da1993b369b8a

* llama : deprecate softmax sampler + fix dist sampler

ggml-ci
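As a rough illustration of what a "dist" sampler does, the sketch below converts logits to probabilities via softmax and draws a token index from the resulting distribution. This is a hypothetical standalone example, not llama.cpp's actual `llama_sampler_dist` implementation; the function names `softmax` and `sample_dist` are made up for this sketch.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Numerically stable softmax: subtract the max logit before exponentiating.
static std::vector<float> softmax(const std::vector<float> & logits) {
    const float max_l = *std::max_element(logits.begin(), logits.end());
    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - max_l);
        sum += probs[i];
    }
    for (auto & p : probs) {
        p /= sum;
    }
    return probs;
}

// Sample a token index proportionally to its softmax probability.
static size_t sample_dist(const std::vector<float> & logits, std::mt19937 & rng) {
    const std::vector<float> probs = softmax(logits);
    std::discrete_distribution<size_t> dist(probs.begin(), probs.end());
    return dist(rng);
}
```

Because the distribution sampler already normalizes internally, a separate softmax stage in the chain becomes redundant, which is consistent with deprecating the standalone softmax sampler.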

* tests : replace macros with functions

ggml-ci

* sampling : change temperature sampler logic

For t <= 0.0f, keep the max logit intact and set the rest to -inf

* cont : no need for special "greedy" logic

top-k == 1 is the same
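The t <= 0.0f rule above can be sketched as follows: keep the maximum logit intact and push every other logit to -inf, so the later softmax + sampling step deterministically picks the argmax, the same outcome as greedy sampling or top-k with k == 1. This is a minimal illustrative sketch (the helper name `apply_temp` is made up here), not the actual llama.cpp sampler code.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// For temp <= 0.0f: keep the max logit, set the rest to -INFINITY
// (deterministic argmax). For temp > 0.0f: standard temperature scaling.
static void apply_temp(std::vector<float> & logits, float temp) {
    if (temp <= 0.0f) {
        const size_t i_max = static_cast<size_t>(std::distance(
            logits.begin(),
            std::max_element(logits.begin(), logits.end())));
        for (size_t i = 0; i < logits.size(); ++i) {
            if (i != i_max) {
                logits[i] = -INFINITY;
            }
        }
        return;
    }
    for (auto & l : logits) {
        l /= temp;
    }
}
```

With the logits masked this way, any downstream probabilistic sampler collapses to greedy decoding, which is why no special "greedy" code path is needed.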

* tests : init prob correctly

* llama : handle temp <= 0.0 in the temp_ext sampler too

ggml-ci

* cont : avoid extra loop in temperature sampler for sub-zero temp

ggml-ci
common/sampling.cpp
examples/llama.swiftui/llama.cpp.swift/LibLlama.swift
examples/save-load-state/save-load-state.cpp
examples/speculative/speculative.cpp
include/llama.h
src/llama-sampling.cpp
tests/test-sampling.cpp