git.djapps.eu Git - pkg/ggml/sources/llama.cpp/log
anzz1 [Wed, 29 Mar 2023 13:20:07 +0000 (16:20 +0300)]
Fix GCC warning about binary literal (#595)
0b10101010 -> 0xAA /* 0b10101010 */
anzz1 [Wed, 29 Mar 2023 13:19:29 +0000 (16:19 +0300)]
Fix typo in llama.h (#593)
anzz1 [Tue, 28 Mar 2023 19:44:29 +0000 (22:44 +0300)]
Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)
* Enable Fused-Multiply-Add (FMA) instructions on MSVC
__FMA__ macro does not exist in MSVC
* Enable F16C/CVT16 vector extensions on MSVC
__F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512
* MSVC cvt intrinsics
* Add __SSE3__ macro for MSVC too because why not
even though it's not currently used for anything when AVX is defined
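A minimal sketch of the workaround this commit describes, assuming it sits near ggml's other architecture checks: MSVC never defines the GCC/Clang-style feature macros, but FMA, F16C, and SSE3 are all implied once AVX2 is enabled, so the macros can be defined manually.

```cpp
// Define the GCC/Clang-style feature macros on MSVC, where the
// instruction sets are implied by AVX2 but the macros do not exist.
#if defined(_MSC_VER) && defined(__AVX2__)
    #ifndef __FMA__
    #define __FMA__
    #endif
    #ifndef __F16C__
    #define __F16C__
    #endif
    #ifndef __SSE3__
    #define __SSE3__
    #endif
#endif
```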
anzz1 [Tue, 28 Mar 2023 19:43:25 +0000 (22:43 +0300)]
CI: fix subdirectory path globbing (#546)
- Changes in subdirectories will now be detected properly
- (Windows-MSVC) AVX512 tests temporarily disabled
anzz1 [Tue, 28 Mar 2023 18:23:09 +0000 (21:23 +0300)]
llama : fix linkage with mingw (#551)
* Revert 7e53955 (#542)
Still needs to be fixed properly
* Fix linking on mingw32
slaren [Tue, 28 Mar 2023 18:06:03 +0000 (20:06 +0200)]
ggml : add AVX2 implementation of quantize_row_q4_1 (#515)
* Add AVX2 implementation of quantize_row_q4_1
* Actually use AVX2
* Make quantize_row_q4_1 static
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Georgi Gerganov <redacted>
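As a reference for what the AVX2 path accelerates, here is a scalar sketch of the q4_1 scheme: 32 floats per block stored as a scale, a minimum, and 32 packed 4-bit quants. Struct and field names are illustrative, not necessarily the merged ones.

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative q4_1 block: d = (max - min) / 15, m = min.
struct block_q4_1 {
    float   d;
    float   m;
    uint8_t qs[16]; // 32 x 4-bit quants, two per byte
};

// Scalar reference; k must be a multiple of 32.
void quantize_row_q4_1_ref(const float * x, block_q4_1 * y, int k) {
    for (int i = 0; i < k/32; i++, x += 32) {
        float lo = x[0], hi = x[0];
        for (int j = 1; j < 32; j++) {
            lo = std::min(lo, x[j]);
            hi = std::max(hi, x[j]);
        }
        const float d  = (hi - lo)/15.0f;
        const float id = d != 0.0f ? 1.0f/d : 0.0f;
        y[i].d = d;
        y[i].m = lo;
        for (int j = 0; j < 16; j++) {
            const int q0 = std::min(15, (int)((x[2*j + 0] - lo)*id + 0.5f));
            const int q1 = std::min(15, (int)((x[2*j + 1] - lo)*id + 0.5f));
            y[i].qs[j] = (uint8_t)(q0 | (q1 << 4));
        }
    }
}
```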
thement [Tue, 28 Mar 2023 17:55:42 +0000 (19:55 +0200)]
py : add temporary script to convert old ggml files to newer version (#539)
Co-authored-by: Jakub Horak <redacted>
Tai Duc Nguyen [Tue, 28 Mar 2023 17:51:29 +0000 (13:51 -0400)]
py : add capabiliy to convert from ggml back to torch or hf format for further consumption/training/finetuning (#403)
Stephan Walter [Tue, 28 Mar 2023 17:13:01 +0000 (17:13 +0000)]
ggml : refactor quantized processing functions (#509)
* Refactor quantized processing functions
* ggml : minor
---------
Co-authored-by: Georgi Gerganov <redacted>
DooWoong Lee (David) [Tue, 28 Mar 2023 17:02:34 +0000 (02:02 +0900)]
py : removed unused `model` variable and verified that the code functions correctly with the `vocab_only` setting, and that it works as expected with the reduced memory usage after deleting the no-longer-needed variable (#547)
Georgi Gerganov [Tue, 28 Mar 2023 17:01:09 +0000 (20:01 +0300)]
ci : make ctest verbose, hopefully we see what is wrong with the sanitizer
Georgi Gerganov [Tue, 28 Mar 2023 16:51:55 +0000 (19:51 +0300)]
tests : free llama context at the end of the test
Stephan Walter [Tue, 28 Mar 2023 16:48:20 +0000 (16:48 +0000)]
all : be more strict about converting float to double (#458)
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
---------
Co-authored-by: Georgi Gerganov <redacted>
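A one-function illustration of the kind of change this entails, using SiLU since the entry mentions it (the real call sites are spread across ggml and the examples): double literals and double-precision libm calls silently promote float expressions to double.

```cpp
#include <cmath>

// Before: x / (1.0 + exp(-x)) promotes everything to double.
// After: a float literal plus expf() keeps the computation in single
// precision; -Wdouble-promotion can then flag regressions.
float silu_ref(float x) {
    return x/(1.0f + expf(-x));
}
```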
Jed Fox [Tue, 28 Mar 2023 16:39:01 +0000 (11:39 -0500)]
deploy : add a Package.swift for SwiftPM support (#393)
* Add a Package.swift for SwiftPM support
* Swap from exclusions to allowlist
Stephan Walter [Tue, 28 Mar 2023 15:56:03 +0000 (15:56 +0000)]
ggml : introduce structs for the q4 data blocks (#356)
* Introduce structs for the q4 data blocks
* ggml : rename quant struct variables + fix ARM_NEON
---------
Co-authored-by: Georgi Gerganov <redacted>
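A sketch of the struct-per-block layout this introduces, replacing raw pointer arithmetic over the quantized buffers; field names are approximate.

```cpp
#include <cstdint>

#define QK 32 // weights per quantization block

typedef struct {
    float   d;        // scale
    uint8_t qs[QK/2]; // 4-bit quants, two per byte
} block_q4_0;

typedef struct {
    float   d;        // scale
    float   m;        // minimum
    uint8_t qs[QK/2]; // 4-bit quants, two per byte
} block_q4_1;
```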
Georgi Gerganov [Tue, 28 Mar 2023 15:34:35 +0000 (18:34 +0300)]
gitignore : add "embedding"
dotpy314 [Tue, 28 Mar 2023 15:06:28 +0000 (23:06 +0800)]
Check the existence of f16_model_path_base in quantize.py (#574)
Co-authored-by: Jincheng Miao <redacted>
slaren [Tue, 28 Mar 2023 14:26:55 +0000 (16:26 +0200)]
Fix usage of F16C intrinsics in AVX code (#563)
* Fix usage of F16C intrinsics in AVX code when F16C is not defined
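A hedged sketch of the fix: the AVX path was using F16C conversion intrinsics unconditionally, so the half-to-float load needs a scalar fallback when __F16C__ is not defined. GGML_FP16_TO_FP32 is ggml's scalar conversion helper and is assumed to be in scope.

```cpp
#include <cstdint>
#include <immintrin.h>

// Load 8 fp16 values into an AVX register; scalar fallback when the
// compiler does not provide the F16C intrinsics.
static inline __m256 load_fp16x8(const uint16_t * x) {
#if defined(__F16C__)
    return _mm256_cvtph_ps(_mm_loadu_si128((const __m128i *) x));
#else
    float tmp[8];
    for (int j = 0; j < 8; j++) {
        tmp[j] = GGML_FP16_TO_FP32(x[j]);
    }
    return _mm256_loadu_ps(tmp);
#endif
}
```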
anzz1 [Tue, 28 Mar 2023 14:09:55 +0000 (17:09 +0300)]
main.cpp fixes, refactoring (#571)
- main: entering an empty line passes back control without new input in interactive/instruct modes
- instruct mode: keep prompt fix
- instruct mode: duplicate instruct prompt fix
- refactor: move common console code from main->common
RJ Adriaansen [Tue, 28 Mar 2023 06:11:09 +0000 (08:11 +0200)]
Add embedding example to Makefile (#540)
Marco Matthies [Mon, 27 Mar 2023 04:55:26 +0000 (06:55 +0200)]
Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)
Erik Scholz [Sun, 26 Mar 2023 15:48:40 +0000 (17:48 +0200)]
ci: add debug build to sanitizer build matrix (#527)
Stephan Walter [Sun, 26 Mar 2023 15:34:02 +0000 (15:34 +0000)]
Fix undefined variables in debug build, remove unused variables (#531)
Juan Calderon-Perez [Sun, 26 Mar 2023 14:48:42 +0000 (10:48 -0400)]
Add support for linux/arm64 platform during Docker Builds (#514)
* Add support for linux/arm64 platform
* Add platform to versioned builds
Stephan Walter [Sun, 26 Mar 2023 13:14:01 +0000 (13:14 +0000)]
Update README and comments for standalone perplexity tool (#525)
anzz1 [Sun, 26 Mar 2023 13:06:10 +0000 (16:06 +0300)]
[main] fix infinite generation (-n == -1) (#523)
Georgi Gerganov [Sun, 26 Mar 2023 07:20:49 +0000 (10:20 +0300)]
Add logo to README.md
Harald Fernengel [Sun, 26 Mar 2023 05:25:46 +0000 (07:25 +0200)]
Exit from interactive mode if input stream is bad (#491)
Allow exiting the interactive prompt also with CTRL-D on Unix and CTRL-Z
on Windows.
anzz1 [Sat, 25 Mar 2023 22:13:28 +0000 (00:13 +0200)]
CI: Run other sanitizer builds even if one fails (#511)
Applies only to sanitizer builds, so they won't be cancelled.
jp-x-g [Sat, 25 Mar 2023 21:53:55 +0000 (14:53 -0700)]
Clarify console output in convert-pth-to-ggml.py (#512)
"Processing part 1 of 3" instead of "Processing part 0"
anzz1 [Sat, 25 Mar 2023 21:38:11 +0000 (23:38 +0200)]
CMake / CI additions (#497)
* CMake: Add AVX512 option
* CI: Add AVX/AVX512 builds (Windows)
(AVX512 tests can only be run when the worker happens to support it, building works anyway)
* CMake: Fix sanitizer linkage ( merged #468 )
* CI: Add sanitizer builds (Ubuntu)
* CI: Fix release tagging
(change @zendesk/action-create-release to @anzz1/action-create-release until upstream PR Added commitish as input zendesk/action-create-release#32 is merged)
anzz1 [Sat, 25 Mar 2023 20:29:22 +0000 (22:29 +0200)]
(Windows) Set console to UTF-8 on init (#420)
Sets the console codepage to 65001 (CP_UTF8) on start for both input and output; this should fix problems with UTF-8 characters.
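The Win32 calls involved, sketched below (the helper name and placement are assumptions):

```cpp
#if defined(_WIN32)
#include <windows.h>

// Switch both console input and output to code page 65001 (CP_UTF8).
static void console_set_utf8(void) {
    SetConsoleCP(CP_UTF8);
    SetConsoleOutputCP(CP_UTF8);
}
#endif
```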
Georgi Gerganov [Sat, 25 Mar 2023 19:53:39 +0000 (21:53 +0200)]
Fix colors enabling on WIN32
Georgi Gerganov [Sat, 25 Mar 2023 19:51:41 +0000 (21:51 +0200)]
If n_predict == -1, generate forever
Georgi Gerganov [Sat, 25 Mar 2023 19:36:22 +0000 (21:36 +0200)]
Infinite generation via context swapping (#71)
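A hedged sketch of the swapping logic, following the shape of the code in main at the time (names approximate): when the next batch would overflow the context, keep the first n_keep tokens and re-feed the most recent half of the evicted history, so generation can continue indefinitely.

```cpp
#include <vector>

static void swap_context(std::vector<int> & embd,              // tokens queued for eval
                         const std::vector<int> & last_tokens, // ring of the last n_ctx tokens
                         int n_ctx, int n_keep, int & n_past) {
    if (n_past + (int) embd.size() <= n_ctx) {
        return; // still fits in the context window
    }
    const int n_left = n_past - n_keep;
    n_past = n_keep;
    // re-insert half of the evicted tokens ahead of the queued batch
    embd.insert(embd.begin(),
                last_tokens.end() - n_left/2 - (int) embd.size(),
                last_tokens.end() - (int) embd.size());
}
```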
Georgi Gerganov [Sat, 25 Mar 2023 18:51:14 +0000 (20:51 +0200)]
Cleanup STL headers + fix embedding examples + minor stuff
Georgi Gerganov [Sat, 25 Mar 2023 18:36:52 +0000 (20:36 +0200)]
Move chat scripts into "./examples"
slaren [Sat, 25 Mar 2023 18:31:48 +0000 (19:31 +0100)]
Add AVX2 implementation of dequantize_row_q4_1 (#505)
Georgi Gerganov [Sat, 25 Mar 2023 18:26:40 +0000 (20:26 +0200)]
Overhaul the examples structure
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"
Hope I didn't break something!
Georgi Gerganov [Sat, 25 Mar 2023 17:47:21 +0000 (19:47 +0200)]
Retire the ggml_mul_mat() branch for transposed src0 (#500)
* Retire the ggml_mul_mat() for transposed src0
- It can always be made contiguous with ggml_cpy()
- The code is now simplified
- The results are deterministic with respect to the number of threads
* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)
* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON
* Fix dequantization - forgot to interleave the quants
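What retiring the branch means for callers, sketched against the ggml API of the time: a transposed operand is made contiguous with ggml_cpy() into a fresh tensor before ggml_mul_mat() sees it.

```cpp
// Call sequence only; ctx0, src0, src1 come from the caller.
// struct ggml_tensor * t    = ggml_transpose(ctx0, src0);
// struct ggml_tensor * cont = ggml_cpy(ctx0, t,
//         ggml_new_tensor_2d(ctx0, GGML_TYPE_F32, t->ne[0], t->ne[1]));
// struct ggml_tensor * out  = ggml_mul_mat(ctx0, cont, src1);
```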
Georgi Gerganov [Sat, 25 Mar 2023 15:16:50 +0000 (17:16 +0200)]
Disable prompt verbosity by default and add option to enable (#480)
slaren [Sat, 25 Mar 2023 15:06:49 +0000 (16:06 +0100)]
Add AVX2 implementation of dequantize_row_q4_0 (#467)
Georgi Gerganov [Sat, 25 Mar 2023 15:03:10 +0000 (17:03 +0200)]
Don't interfere with BLAS for large prompts by running only 1 thread
Georgi Gerganov [Sat, 25 Mar 2023 14:47:59 +0000 (16:47 +0200)]
Add longer DAN prompt for testing big batch numbers
slaren [Sat, 25 Mar 2023 14:34:23 +0000 (15:34 +0100)]
Add timings for the prompt evaluation (#478)
Georgi Gerganov [Sat, 25 Mar 2023 14:30:32 +0000 (16:30 +0200)]
Remove obsolete information from README
Georgi Gerganov [Sat, 25 Mar 2023 14:22:05 +0000 (16:22 +0200)]
Remove obsolete assert and fix compiler warning
Georgi Gerganov [Sat, 25 Mar 2023 14:09:54 +0000 (16:09 +0200)]
Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS
anzz1 [Sat, 25 Mar 2023 12:42:09 +0000 (14:42 +0200)]
bounds checking for input prefix (#492)
anzz1 [Sat, 25 Mar 2023 12:03:19 +0000 (14:03 +0200)]
feat: '--in-prefix STRING' option (#426)
Prefix user inputs with a string
Jed Fox [Sat, 25 Mar 2023 05:26:28 +0000 (01:26 -0400)]
Add support for file load progress reporting callbacks (#434)
* File load progress reporting
* Move llama_progress_handler into llama_context_params
* Renames
* Use seekg to find file size instead
* More correct load progress
* Call progress callback more frequently
* Fix typo
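Hypothetical usage after this change; the field names follow the C API of the time but should be treated as assumptions.

```cpp
#include <cstdio>

// Progress handler: receives a value in [0, 1] during model load.
static void on_load_progress(float progress, void * user_data) {
    (void) user_data;
    fprintf(stderr, "\rloading model: %5.1f%%", progress*100.0f);
}

// ...at context creation:
// llama_context_params params = llama_context_default_params();
// params.progress_callback           = on_load_progress;
// params.progress_callback_user_data = NULL;
```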
Doomsdayrs [Sat, 25 Mar 2023 05:21:24 +0000 (01:21 -0400)]
Add missing struct annotation (#483)
`llama_sample_top_p_top_k` was missing the struct annotation on line 126.
This causes a compiler issue when being parsed by the Kotlin C interop generator.
This commit fixes the above issue by adding the struct annotation.
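The shape of the fix, with the parameter list elided (the full signature is in llama.h):

```cpp
// before -- valid C++, but not valid C, and opaque to the Kotlin
// C-interop parser:
//     llama_token llama_sample_top_p_top_k(llama_context * ctx, /* ... */);
// after -- the struct keyword makes it a well-formed C declaration:
//     llama_token llama_sample_top_p_top_k(struct llama_context * ctx, /* ... */);
```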
Chris Kuehl [Sat, 25 Mar 2023 04:38:14 +0000 (23:38 -0500)]
Fix crash for 65B model with pre-allocated memory (#485)
Georgi Gerganov [Fri, 24 Mar 2023 21:47:06 +0000 (23:47 +0200)]
Disable BLAS altogether - the bug is not just for quantized mat mul
Georgi Gerganov [Fri, 24 Mar 2023 21:39:17 +0000 (23:39 +0200)]
Disable BLAS branch in mul_mat - seems there is a bug
Georgi Gerganov [Fri, 24 Mar 2023 21:17:58 +0000 (23:17 +0200)]
Immediately start processing the prompt before user input has been provided (#476)
Georgi Gerganov [Fri, 24 Mar 2023 21:17:37 +0000 (23:17 +0200)]
Reduce memory usage and allocate enough memory for largest context (#473)
* Reduce memory usage and allocate enough memory for large contexts
* Simpler scratch buffer usage
* Reenable BLAS for quantized mul_mat
* Fix number of layers in 30B and 65B
* Fix KV cache size for F32
Georgi Gerganov [Fri, 24 Mar 2023 16:23:56 +0000 (18:23 +0200)]
Temporarily bump the memory buffer size - hopefully fixes issues from 483bab2e
Gary Mulder [Fri, 24 Mar 2023 15:23:09 +0000 (15:23 +0000)]
Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
rabidcopy [Fri, 24 Mar 2023 15:22:39 +0000 (10:22 -0500)]
fix instruct mode (#445)
Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
Georgi Gerganov [Fri, 24 Mar 2023 15:21:01 +0000 (17:21 +0200)]
Properly free llama_context on failure
Cameron Kaiser [Fri, 24 Mar 2023 15:19:26 +0000 (08:19 -0700)]
additional optimizations for POWER9 (#454)
comex [Fri, 24 Mar 2023 15:19:05 +0000 (08:19 -0700)]
Support calling mlock() on loaded model data on Linux and macOS (#453)
* Support calling mlock() on loaded model data on Linux and macOS
This is enabled by a new --mlock command line option.
Using mlock() disables swapping and memory compression for the model
data. Doing so can be useful on systems where the model takes up a
large fraction of system RAM. In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.
Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.
In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.
* Update llama.cpp
---------
Co-authored-by: Georgi Gerganov <redacted>
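A minimal sketch of the POSIX side, assuming the model buffer is already mapped or allocated; the helper name is illustrative.

```cpp
#include <cstddef>
#include <cstdio>
#include <sys/mman.h>

// Pin the model data in RAM so the OS can neither swap nor compress it.
// Fails (typically EPERM/ENOMEM) if RLIMIT_MEMLOCK is too low.
static bool lock_model_memory(void * addr, size_t size) {
    if (mlock(addr, size) != 0) {
        perror("mlock");
        return false;
    }
    return true;
}
```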
Luciano [Fri, 24 Mar 2023 15:05:13 +0000 (08:05 -0700)]
Add embedding mode with arg flag. Currently working (#282)
* working but ugly
* add arg flag, not working on embedding mode
* typo
* Working! Thanks to @nullhook
* make params argument instead of hardcoded boolean. remove useless time check
* start doing the instructions but not finished. This probably doesn't compile
* Embeddings extraction support
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Fri, 24 Mar 2023 07:13:35 +0000 (09:13 +0200)]
Add link to Roadmap discussion
Georgi Gerganov [Fri, 24 Mar 2023 04:22:28 +0000 (06:22 +0200)]
Revert "Fix memory allocation issues and seg faults"
This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df.
Will provide the correct fix later
Georgi Gerganov [Thu, 23 Mar 2023 22:11:53 +0000 (00:11 +0200)]
Fix memory allocation issues and seg faults
Georgi Gerganov [Thu, 23 Mar 2023 21:22:01 +0000 (23:22 +0200)]
Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Should make results reproducible for different numbers of threads and batch sizes
Jed Fox [Thu, 23 Mar 2023 20:42:52 +0000 (16:42 -0400)]
Fix quantize script not finding models in parent directory (#428)
Georgi Gerganov [Thu, 23 Mar 2023 20:39:44 +0000 (22:39 +0200)]
Remove obsolete command from Docker script
Georgi Gerganov [Thu, 23 Mar 2023 20:32:02 +0000 (22:32 +0200)]
Obsolete
rabidcopy [Thu, 23 Mar 2023 20:22:47 +0000 (15:22 -0500)]
Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)
* Improve interactive mode's coherence after EOS
Aims to improve coherence and the ability to resume the interactive session when control is given back to the user after an end-of-text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.
* Make newline token a constant
* dynamically determine newline token
* relocate previous newline token const
* cleanup whitespace
* print a new line on end of text in interactive
this may need to be looked into further when not using a reverse prompt
* only print manual newline with reverse prompt
fix formatting of reverse prompts so they don't end up at the end of the current line while not introducing unnecessary new lines otherwise
* alternate approach to replace end of text tokens
* Inject the reverse prompt again after eos in interactive mode
* tokenize reverse prompt when needed
makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330
* tokenize and inject only first reverse prompt
thanks to tjohnman
* tokenize first reverse prompt once
* add newline token
* add newline token
* tokenize/inject reverse prompt for refactor
this doesn't seem right though
* tokenize nothing for antiprompt if no reverse
* Update main.cpp
* Update main.cpp
* tokenize and inject reverse prompt as needed
this doesn't seem to work if the reverse prompt is tokenized outside earlier on
* not needed
* remove newline token
* remove newline token
* tokenize newline token
* add space to comment
* Update main.cpp
Co-authored-by: Georgi Gerganov <redacted>
---------
Co-authored-by: Slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
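A hedged sketch of the substitution described above. The token ids follow the LLaMA vocabulary (2 = end-of-text, 13 = "\n", which explains the entry's "token 13"); everything else here is illustrative, not the exact merged code.

```cpp
#include <vector>

static void replace_eos_with_newline(std::vector<int> & embd,
                                     std::vector<int> & embd_inp,
                                     const std::vector<int> & reverse_prompt_tokens) {
    const int TOKEN_EOS     = 2;
    const int TOKEN_NEWLINE = 13;
    if (!embd.empty() && embd.back() == TOKEN_EOS) {
        // keep the context coherent instead of flushing it on EOS
        embd.back() = TOKEN_NEWLINE;
        // re-inject the (pre-tokenized) first reverse prompt so the
        // user regains control
        embd_inp.insert(embd_inp.end(),
                        reverse_prompt_tokens.begin(),
                        reverse_prompt_tokens.end());
    }
}
```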
Timmy Knight [Thu, 23 Mar 2023 20:18:13 +0000 (10:18 -1000)]
Fix GPTQ converter (#423)
* Fix GPTQ converter
* Fix comment
---------
Co-authored-by: Georgi Gerganov <redacted>
nusu-github [Thu, 23 Mar 2023 20:16:48 +0000 (05:16 +0900)]
Generate library with CMake (#430)
* Generate library with CMake
Add a BUILD_SHARED_LIBS option to allow the llama library to be generated.
* Turn ON PIC when BUILD_SHARED_LIBS is ON
anzz1 [Thu, 23 Mar 2023 17:54:28 +0000 (19:54 +0200)]
Command line args bounds checking (#424)
* command line args bounds checking
* unknown and invalid param exit codes 0 -> 1
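An illustration of the bounds checking added here: options that consume a value must verify the value exists before reading argv[++i]. The helper name is hypothetical.

```cpp
#include <cstdio>
#include <cstdlib>

// Returns the value following argv[i], or exits with code 1 if it is
// missing -- the commit also changes these failure paths from exit(0)
// to exit(1).
static const char * require_arg_value(int argc, char ** argv, int & i) {
    if (++i >= argc) {
        fprintf(stderr, "error: missing argument for %s\n", argv[i - 1]);
        exit(1);
    }
    return argv[i];
}
```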
Ben Siraphob [Wed, 22 Mar 2023 05:37:02 +0000 (00:37 -0500)]
Fix Nix build
Stephan Walter [Thu, 23 Mar 2023 14:15:48 +0000 (14:15 +0000)]
Revert "Delete SHA256SUMS for now" (#429)
* Revert "Delete SHA256SUMS for now (#416)"
This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.
* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README
---------
Co-authored-by: Pavol Rusnak <redacted>
Kerfuffle [Thu, 23 Mar 2023 11:41:32 +0000 (05:41 -0600)]
Fix Makefile echo escape codes (by removing them). (#418)
Gary Mulder [Thu, 23 Mar 2023 11:30:40 +0000 (11:30 +0000)]
Move model section from issue template to README.md (#421)
* Update custom.md
* Removed Model section as it is better placed in README.md
* Updates to README.md model section
* Inserted text that was removed from issue template about obtaining models from FB and links to papers describing the various models
* Removed IPFS download links for the Alpaca 7B models, as these look to be in the old data format and probably shouldn't be directly linked to anyway
* Updated the perplexity section to point at Perplexity scores #406 discussion
anzz1 [Thu, 23 Mar 2023 10:26:19 +0000 (12:26 +0200)]
Delete SHA256SUMS for now (#416)
Delete this for now to avoid confusion since it contains some wrong checksums from the old tokenizer format
Re-add after #374 is resolved
Georgi Gerganov [Thu, 23 Mar 2023 08:46:58 +0000 (10:46 +0200)]
Adjust repetition penalty ..
Georgi Gerganov [Thu, 23 Mar 2023 07:48:51 +0000 (09:48 +0200)]
Add link to recent podcast about whisper.cpp and llama.cpp
anzz1 [Thu, 23 Mar 2023 02:20:34 +0000 (04:20 +0200)]
CI: CMake: Separate build and test steps (#376)
* CI: Separate Build and Test steps (CMake)
* CI: Make sure build passes before running tests (CMake)
* CI: Standardise step id names
tjohnman [Thu, 23 Mar 2023 00:30:23 +0000 (01:30 +0100)]
Fix instruct mode broken by PR #354 (#409)
Co-authored-by: Johnman <redacted>
Gary Mulder [Wed, 22 Mar 2023 19:06:18 +0000 (19:06 +0000)]
Update issue template so people will use it (#404)
Stephan Walter [Wed, 22 Mar 2023 17:29:06 +0000 (17:29 +0000)]
Deduplicate q4 quantization functions (#383)
* Deduplicate q4 quantization functions
* Use const; add basic test
* Re-enable quantization test
* Disable AVX2 flags in CI
---------
Co-authored-by: Georgi Gerganov <redacted>
Valentyn Bezshapkin [Wed, 22 Mar 2023 17:20:25 +0000 (18:20 +0100)]
fix: add POSIX functionality for Linux compilation (#51)
* fix: add POSIX functionality for Linux compilation
* fix: older standard for compatibility
tjohnman [Wed, 22 Mar 2023 17:16:35 +0000 (18:16 +0100)]
Don't force immediate interactive without `-i` (#354)
* Don't force immediate interactive without -i
Sometimes we might want to use a reverse prompt but we want to let the
model generate tokens right after the initial prompt. So we don't force
user input mode if the -i flag wasn't specified and instead let it run
until we encounter the reverse prompt.
This gives us some more flexibility, since it doesn't force the user to
enter a newline if they want to let the model generate text right after
the initial prompt and only be asked for input if the reverse prompt is
encountered.
The `--interactive-first` flag is reintroduced to force the old
behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it
can be specified more than once).
* Update help output.
---------
Co-authored-by: Johnman <redacted>
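A sketch of the flag semantics this entry describes; member names are illustrative, not the exact ones in the params struct.

```cpp
#include <string>
#include <vector>

struct gpt_params_sketch {
    bool interactive       = false; // -i (also implied by -r)
    bool interactive_first = false; // --interactive-first
    std::vector<std::string> antiprompt; // -r, may be given more than once
};

// Only --interactive-first forces waiting for input before generation;
// otherwise the model runs until a reverse prompt is encountered.
static bool should_wait_for_input(const gpt_params_sketch & p) {
    return p.interactive && p.interactive_first;
}
```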
Erik Scholz [Wed, 22 Mar 2023 16:37:10 +0000 (17:37 +0100)]
cmake: make llama an actual library (#392)
Erik Scholz [Wed, 22 Mar 2023 16:09:38 +0000 (17:09 +0100)]
fix perplexity after c-api refactor (#390)
* preallocate a buffer of fitting size for tokenization (utils.cpp)
* don't create a new std::string (especially here, where it's usually large)
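The preallocation pattern described, sketched against the C API of the time (llama_token and llama_tokenize are assumed from llama.h; the raw call returns the token count and writes into a caller-provided buffer):

```cpp
#include <string>
#include <vector>

// Reserve a worst-case buffer up front instead of growing piecemeal;
// one token per byte plus the optional BOS is a safe upper bound here.
static std::vector<llama_token> tokenize(struct llama_context * ctx,
                                         const std::string & text, bool add_bos) {
    std::vector<llama_token> res(text.size() + (int) add_bos);
    const int n = llama_tokenize(ctx, text.c_str(), res.data(), (int) res.size(), add_bos);
    res.resize(n);
    return res;
}
```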
Gary Linscott [Wed, 22 Mar 2023 15:53:54 +0000 (08:53 -0700)]
Add details on perplexity to README.md (#395)
Yusuf Kağan Hanoğlu [Wed, 22 Mar 2023 08:55:45 +0000 (11:55 +0300)]
Add missing header for memcpy (#386)
fixed: memcpy is not defined
Georgi Gerganov [Wed, 22 Mar 2023 05:47:15 +0000 (07:47 +0200)]
When seed <= 0 - use the clock to generate one
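The pattern in question, sketched (the helper name is illustrative):

```cpp
#include <ctime>

// A non-positive seed is replaced with the current time, so every run
// gets a fresh seed unless one is supplied explicitly.
static int resolve_seed(int seed) {
    return seed > 0 ? seed : (int) time(NULL);
}
```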
Georgi Gerganov [Wed, 22 Mar 2023 05:45:00 +0000 (07:45 +0200)]
Init llama_context_params properly from CLI (#370)
Georgi Gerganov [Wed, 22 Mar 2023 05:34:02 +0000 (07:34 +0200)]
Remove temporary notice and update hot topics
Georgi Gerganov [Wed, 22 Mar 2023 05:32:36 +0000 (07:32 +0200)]
Introduce C-style API (#370)
* Major refactoring - introduce C-style API
* Clean up
* Add <cassert>
* Add <iterator>
* Add <algorithm> ....
* Fix timing reporting and accumulation
* Measure eval time only for single-token calls
* Change llama_tokenize return meaning
Gary Mulder [Mon, 20 Mar 2023 20:14:06 +0000 (20:14 +0000)]
Add SHA256SUMS file and instructions to README how to obtain and verify the downloads
Hashes created using:
sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS
anzz1 [Tue, 21 Mar 2023 21:49:24 +0000 (23:49 +0200)]
Fix bin dir for win ci
Erik Scholz [Tue, 21 Mar 2023 21:34:25 +0000 (22:34 +0100)]
specify build type for ctest on windows (#371)
Georgi Gerganov [Tue, 21 Mar 2023 20:57:35 +0000 (22:57 +0200)]
Add notice about pending change