From: City
Date: Fri, 25 Apr 2025 12:38:34 +0000 (+0200)
Subject: Force FP32 compute in GLM4 FFN Down (#13101)
X-Git-Tag: upstream/0.0.5318~128
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=558a764713468f26f5a163d25a22100c9a04a48f;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Force FP32 compute in GLM4 FFN Down (#13101)

* Force FP32 compute in cuBLAS GEMM

* Revert "Force FP32 compute in cuBLAS GEMM"

This reverts commit 6efd872732159ab88ee7b3c1d77ba5ebc83079bd.

* Force F32 compute in GLM4 ffn down

* Edit comment to clarify issue

Co-authored-by: Johannes Gäßler

---------

Co-authored-by: Johannes Gäßler
---

diff --git a/src/llama-graph.cpp b/src/llama-graph.cpp
index a85e9728..b52e3f62 100644
--- a/src/llama-graph.cpp
+++ b/src/llama-graph.cpp
@@ -803,6 +803,10 @@ ggml_tensor * llm_graph_context::build_ffn(
 
     if (down) {
         cur = build_lora_mm(down, cur);
+        if (arch == LLM_ARCH_GLM4) {
+            // GLM4 seems to have numerical issues with half-precision accumulators
+            ggml_mul_mat_set_prec(cur, GGML_PREC_F32);
+        }
     }
 
     if (down_b) {