This bug has been present since v1.1.0.
In effect, previously transcribed text was not being used to condition subsequent
transcriptions, which likely reduces transcription quality significantly.
Likely related to #419
```diff
 prompt.clear();
 // if we have already generated some text, use it as a prompt to condition the next generation
-if (!prompt_past.empty() && t_cur > 0.5f) {
+if (!prompt_past.empty() && t_cur < 0.5f) {
     int n_take = std::min(std::min(params.n_max_text_ctx, whisper_n_text_ctx(ctx)/2), int(prompt_past.size()));
     prompt = { whisper_token_prev(ctx) };
```
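For illustration, here is a minimal standalone sketch of the conditioning logic with the corrected comparison. The helper name `build_prompt`, the plain-`int` token type, and the parameter names are assumptions made for this example only; they are not part of the whisper.cpp API.

```cpp
// Hypothetical standalone sketch (not the actual whisper.cpp code):
// keep at most half of the text context worth of previously decoded tokens
// and prepend them, after a "previous segment" marker token, to the prompt
// used for the next segment.
#include <algorithm>
#include <vector>

std::vector<int> build_prompt(const std::vector<int> & prompt_past,
                              float t_cur,          // current sampling temperature
                              int   n_max_text_ctx, // user-configured cap
                              int   n_text_ctx,     // model's text context size
                              int   token_prev)     // "previous segment" marker token
{
    std::vector<int> prompt;

    // only condition on past text at low temperature (the corrected check)
    if (!prompt_past.empty() && t_cur < 0.5f) {
        const int n_take = std::min(std::min(n_max_text_ctx, n_text_ctx/2),
                                    int(prompt_past.size()));

        prompt.push_back(token_prev);
        prompt.insert(prompt.end(), prompt_past.end() - n_take, prompt_past.end());
    }

    return prompt;
}
```

With the old `t_cur > 0.5f` comparison, this branch was effectively skipped at the default low sampling temperature, so the prompt stayed empty and the past transcript never conditioned the next segment.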