- Various other examples are available in the [examples](examples) folder
The tensor operators are heavily optimized for Apple silicon CPUs. Depending on the computation size, Arm Neon SIMD
-instrisics or CBLAS Accelerate framework routines are used. The latter are especially effective for bigger sizes since
+intrinsics or CBLAS Accelerate framework routines are used. The latter are especially effective for bigger sizes since
the Accelerate framework utilizes the special-purpose AMX coprocessor available in modern Apple products.
## Quick start
} while (0)
#define BYTESWAP_TENSOR(t) \
do { \
- byteswap_tensor(tensor); \
+ byteswap_tensor(t); \
} while (0)
#else
#define BYTESWAP_VALUE(d) do {} while (0)
struct whisper_sequence {
std::vector<whisper_token_data> tokens;
- // the accumulated transcription in the current interation (used to truncate the tokens array)
+ // the accumulated transcription in the current iteration (used to truncate the tokens array)
int result_len;
double sum_logprobs_all; // the sum of the log probabilities of the tokens
void * user_data);
// Parameters for the whisper_full() function
- // If you chnage the order or add new parameters, make sure to update the default values in whisper.cpp:
+ // If you change the order or add new parameters, make sure to update the default values in whisper.cpp:
// whisper_full_default_params()
struct whisper_full_params {
enum whisper_sampling_strategy strategy;