CANN: support flash attention for head dim not multiple of 16, fix ALiBi slope offset (llama/20031)
- Allow FLASH_ATTN_EXT when the head dimension D is not a multiple of 16 by
  padding Q/K/V to D_padded = GGML_PAD(D, 16), running FusedInferAttentionScoreV2
  on the padded tensors, and then slicing the output back to D
  (ggml-cann.cpp + aclnn_ops.cpp); see the padding sketch after this list.
- Fix the offset of the second slope segment in aclnn_get_slope: use
  ggml_type_size(dtype) instead of sizeof(float) so the ALiBi slopes are correct
  when dtype is F16 (e.g. GQA with 48 heads); this fixes a buffer overflow and
  large numerical errors in those cases. See the offset sketch after this list.
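A minimal standalone sketch of the pad-then-slice idea, assuming a row-major
[n_tokens, D] layout and hypothetical buffer names; the fused attention call
itself is elided, and the GGML_PAD macro here only mirrors the rounding behavior
of the ggml.h macro:

```cpp
// Pad-then-slice sketch (hypothetical example, not the actual ggml-cann code).
#include <cstdio>
#include <cstring>
#include <vector>

// Round x up to the next multiple of n (same behavior as ggml.h's GGML_PAD).
#define GGML_PAD(x, n) (((x) + (n) - 1) / (n) * (n))

int main() {
    const int D        = 72;               // head dim, not a multiple of 16
    const int D_padded = GGML_PAD(D, 16);  // 80
    const int n_tokens = 4;

    // Pad: copy each D-wide row of Q (same for K/V) into a zero-initialized
    // D_padded-wide buffer; the zero columns do not change the attention result.
    std::vector<float> q(n_tokens * D, 1.0f);
    std::vector<float> q_pad(n_tokens * D_padded, 0.0f);
    for (int t = 0; t < n_tokens; ++t) {
        std::memcpy(&q_pad[t * D_padded], &q[t * D], D * sizeof(float));
    }

    // ... run FusedInferAttentionScoreV2 on the padded Q/K/V here ...

    // Slice: keep only the first D columns of each padded output row.
    std::vector<float> out_pad(n_tokens * D_padded, 2.0f); // stand-in for kernel output
    std::vector<float> out(n_tokens * D);
    for (int t = 0; t < n_tokens; ++t) {
        std::memcpy(&out[t * D], &out_pad[t * D_padded], D * sizeof(float));
    }

    std::printf("D = %d -> D_padded = %d\n", D, D_padded);
    return 0;
}
```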
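A sketch of the offset arithmetic behind the slope fix, with hypothetical names
(get_slope_parts, slope_buf, n_first); only ggml_type and ggml_type_size come
from ggml.h, and the real logic lives in aclnn_ops.cpp:

```cpp
// Offset sketch (hypothetical example, not the actual aclnn_get_slope code).
// ALiBi computes the slopes as two geometric sequences when n_head is not a
// power of two, so the second sequence is written starting n_first elements
// into the slope buffer.
#include <cstddef>
#include "ggml.h"   // ggml_type, ggml_type_size

void get_slope_parts(void * slope_buf, int n_first, ggml_type dtype,
                     void ** first_part, void ** second_part) {
    *first_part = slope_buf;
    // Before the fix the byte offset was n_first * sizeof(float); with an F16
    // slope buffer that lands on the wrong element and, for enough heads, past
    // the end of the buffer. Scaling by the actual element size fixes it.
    *second_part = (char *) slope_buf + (size_t) n_first * ggml_type_size(dtype);
}
```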