git.djapps.eu Git - pkg/ggml/sources/ggml/log
Jiahao Li [Tue, 11 Jul 2023 18:12:57 +0000 (02:12 +0800)]
ggml : use a single kernel for CUDA mul op (#373)
Borislav Stanimirov [Tue, 11 Jul 2023 18:10:40 +0000 (21:10 +0300)]
tests : ifdef for #pragma GCC (#370)
Georgi Gerganov [Tue, 11 Jul 2023 18:10:14 +0000 (21:10 +0300)]
readme : add link to ggllm.cpp repo (close #361)
Jiahao Li [Tue, 11 Jul 2023 18:06:05 +0000 (02:06 +0800)]
ggml : broadcast ggml_add() for F32 (#359)
* Support broadcast add for fp32
* Use single kernel for broadcast add
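The broadcast add described above can be sketched in plain C: a bias vector of length ne0 is added to every row of an F32 matrix in one flat loop, mirroring the "single kernel" approach (illustrative stand-in, not the actual ggml/CUDA kernel; names are hypothetical).

```c
#include <stddef.h>

/* Broadcast-add a bias vector of length ne0 across each of ne1 rows
   of a contiguous F32 matrix, as a single flat loop. */
static void add_broadcast_f32(float *dst, const float *src,
                              const float *bias, size_t ne0, size_t ne1) {
    for (size_t i = 0; i < ne0 * ne1; ++i) {
        dst[i] = src[i] + bias[i % ne0]; /* bias index wraps once per row */
    }
}
```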
Jiahao Li [Tue, 11 Jul 2023 17:58:02 +0000 (01:58 +0800)]
ggml : support norm op on CUDA (#364)
Tom Jobbins [Tue, 11 Jul 2023 17:54:36 +0000 (18:54 +0100)]
starcoder : fix for ggml-model.bin being saved in wrong directory + use argparse (#363)
* Fix for starcoder ggml-model.bin being saved in wrong directory. Modernise by using argparse.
* Make sure output directory exists
goerch [Tue, 11 Jul 2023 17:39:54 +0000 (19:39 +0200)]
ggml : fix ggml_set_xxx (#354)
This brings `ggml_set_i32`/`ggml_set_f32` in line with `ggml_set_i32_1d`/`ggml_set_f32_1d`.
Daulet Zhanguzin [Tue, 11 Jul 2023 17:26:22 +0000 (10:26 -0700)]
ggml : fix Alibi implementation (#351)
* correct Alibi implementation
* update f16 too
Georgi Gerganov [Tue, 11 Jul 2023 16:36:52 +0000 (19:36 +0300)]
ggml : sync llama.cpp (fix for #341)
Georgi Gerganov [Mon, 10 Jul 2023 19:05:13 +0000 (22:05 +0300)]
ggml : fix docs about element access (close #348)
the-crypt-keeper [Mon, 10 Jul 2023 18:41:58 +0000 (14:41 -0400)]
starcoder : add <|end_of_turn|> token handling in order to support openchat/opencoderplus (#343)
* Add <|end_of_turn|> token handling to support openchat/opencoderplus
* The opencoder EOT occurs inside the prompt, so we should only break if the model actually generated it
---------
Co-authored-by: Mike <redacted>
Sam Spilsbury [Mon, 10 Jul 2023 18:40:29 +0000 (21:40 +0300)]
pkg-config : fix typo in includedir (#367)
Georgi Gerganov [Mon, 10 Jul 2023 18:40:05 +0000 (21:40 +0300)]
ggml : sync llama.cpp (changes to ggml_graph_compute() API) (#368)
Georgi Gerganov [Thu, 6 Jul 2023 16:41:18 +0000 (19:41 +0300)]
ggml : minor indentation
Borislav Stanimirov [Thu, 6 Jul 2023 07:24:39 +0000 (10:24 +0300)]
ggml : restore GGML_RESTRICT (#350)
Georgi Gerganov [Wed, 5 Jul 2023 17:38:55 +0000 (20:38 +0300)]
Georgi Gerganov [Wed, 5 Jul 2023 17:38:20 +0000 (20:38 +0300)]
tests : sync from llama.cpp and disable some obsolete tests
Georgi Gerganov [Wed, 5 Jul 2023 17:14:13 +0000 (20:14 +0300)]
ggml : sync llama.cpp (generalize quantize_fns + CUDA improvements)
Andrei [Tue, 4 Jul 2023 19:53:42 +0000 (15:53 -0400)]
cmake : fix public header path for submodules (#342)
Georgi Gerganov [Tue, 4 Jul 2023 17:27:19 +0000 (20:27 +0300)]
whisper : fix wrong variable name from previous commit
Sam Spilsbury [Tue, 4 Jul 2023 17:35:13 +0000 (20:35 +0300)]
build : add pkg-config file (#335)
This makes it easier for other library consumers to find
the library and link to it.
Fixes #334
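A pkg-config file of the kind this commit adds looks roughly as follows (paths and version here are placeholders, not the file actually shipped by the build):

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: ggml
Description: Tensor library for machine learning
Version: 0.0.0
Libs: -L${libdir} -lggml
Cflags: -I${includedir}
```

Consumers can then resolve compile and link flags with `pkg-config --cflags --libs ggml`.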
Sam Spilsbury [Tue, 4 Jul 2023 17:34:28 +0000 (20:34 +0300)]
cmake : install the header file to ggml/ggml.h (#333)
Fixes #332
Georgi Gerganov [Tue, 4 Jul 2023 17:24:22 +0000 (20:24 +0300)]
whisper : sync whisper.cpp (tinydiarize + OpenVINO)
Sam Spilsbury [Tue, 4 Jul 2023 13:30:21 +0000 (16:30 +0300)]
readme : add link to ggml-gobject (#336)
This also enables bindings to Python (through pygi), gjs, vala, csharp, etc. However, `ggml-gobject`'s main purpose is to make the library a bit more friendly to the desktop platform, e.g. by providing asynchronous operation, a DBus service, etc.
Jakob Frick [Tue, 4 Jul 2023 13:26:57 +0000 (14:26 +0100)]
dolly : update error print behavior (#337)
Borislav Stanimirov [Tue, 4 Jul 2023 13:26:29 +0000 (16:26 +0300)]
dolly : disable interactive_port on Windows (#339)
Jakob Frick [Sun, 2 Jul 2023 18:48:02 +0000 (14:48 -0400)]
dolly : add interactive prompt and port mode (#319)
* update basic function to execute prompt
* try to factor out prediction loop
* update code
* update prompt things
* only render at the end
* add basic server port
* refactor
* fix client file descriptor
* undo common.h style changes
* undo style changes to main.cpp
* fix check for interactive port
Georgi Gerganov [Sun, 2 Jul 2023 18:41:23 +0000 (21:41 +0300)]
examples : remove whitespace
Hirochika Matsumoto [Sun, 2 Jul 2023 16:47:47 +0000 (01:47 +0900)]
examples : use GGML_FILE_MAGIC where possible (#323)
sjinzh [Sun, 2 Jul 2023 16:36:53 +0000 (00:36 +0800)]
zig : add test code using zig (#315)
* update build.zig
* zig : add tests by zig
* zig : add test code using zig
Hugo Rosenkranz-Costa [Sun, 2 Jul 2023 16:05:24 +0000 (18:05 +0200)]
mpt : convert model weights part by part to save memory (#314)
* mpt : update conversion script to load model weights part by part
* mpt : add usage README
Borislav Stanimirov [Sun, 2 Jul 2023 15:54:16 +0000 (18:54 +0300)]
ggml : generalize interface for 1d and 2d convolutions (#313)
* conv_1d wip
* conv_1d opt
* conv_1d done
* conv_1d improve alias func name
* conv_2d wip
* conv size to separate func
* conv2d done
---------
Co-authored-by: Georgi Gerganov <redacted>
Georgi Gerganov [Sun, 2 Jul 2023 15:53:42 +0000 (18:53 +0300)]
ggml : disable ggml_rope_back for ChatGLM
Georgi Gerganov [Sun, 2 Jul 2023 15:33:41 +0000 (18:33 +0300)]
ggml : remove tensor ptr from export for now (close #267)
Not used for now
Georgi Gerganov [Sun, 2 Jul 2023 15:26:26 +0000 (18:26 +0300)]
ggml : fix enum order for TANH (#316)
PAB [Sun, 2 Jul 2023 15:25:37 +0000 (17:25 +0200)]
ggml : add `ELU`, `TANH`, `ARGMAX` (#316)
* add: `elu` activation
* add: `tanh` activation
* add: `argmax`
* ggml : rearrange ops - put "tanh" after "step"
---------
Co-authored-by: Georgi Gerganov <redacted>
goerch [Sun, 2 Jul 2023 15:13:23 +0000 (17:13 +0200)]
ggml : add GGML_TENSOR_LOCALS helper macros (#309)
* [WIP] ref #292
* Further code reduction
* ggml : minor style fixes
* ggml : hide op locals in source file
---------
Co-authored-by: Georgi Gerganov <redacted>
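The helper-macro idea can be sketched as follows: one macro expands to a set of local variables copied from a tensor's fields, so each op kernel does not repeat the same boilerplate. The `toy_` names below are illustrative, not the actual ggml macro.

```c
#include <stddef.h>
#include <stdint.h>

struct toy_tensor { int64_t ne[4]; size_t nb[4]; };

/* Declare locals <prefix>ne0, <prefix>ne1, <prefix>nb0, <prefix>nb1
   from a tensor's dimension and stride arrays via token pasting. */
#define TOY_TENSOR_LOCALS(t, prefix)            \
    const int64_t prefix##ne0 = (t)->ne[0];     \
    const int64_t prefix##ne1 = (t)->ne[1];     \
    const size_t  prefix##nb0 = (t)->nb[0];     \
    const size_t  prefix##nb1 = (t)->nb[1];

static int64_t toy_row_elems(const struct toy_tensor *t) {
    TOY_TENSOR_LOCALS(t, src0_);
    (void)src0_ne1; (void)src0_nb0; (void)src0_nb1; /* unused in this example */
    return src0_ne0;
}
```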
the-crypt-keeper [Sun, 2 Jul 2023 14:52:52 +0000 (10:52 -0400)]
starcoder : add repeat penalty (#311)
* implement repeat penalty processing for starcoder
* show effective parameters at starcoder startup
---------
Co-authored-by: Mike Ravkine <redacted>
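The repeat-penalty step works roughly like this (a minimal sketch of the common CTRL-style scheme; names are hypothetical, not the starcoder example's actual code): positive logits of recently generated tokens are divided by the penalty, negative ones multiplied, making repeats less likely either way.

```c
#include <stddef.h>

static void apply_repeat_penalty(float *logits, size_t n_vocab,
                                 const int *last_tokens, size_t n_last,
                                 float penalty) {
    for (size_t i = 0; i < n_last; ++i) {
        int tok = last_tokens[i];
        if (tok < 0 || (size_t)tok >= n_vocab) continue;
        if (logits[tok] > 0.0f) logits[tok] /= penalty; /* dampen likely repeat */
        else                    logits[tok] *= penalty; /* push further down   */
    }
}
```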
Georgi Gerganov [Sun, 2 Jul 2023 14:33:57 +0000 (17:33 +0300)]
ggml : sync latest llama.cpp (ggml_task_type changes + GPU backends)
sjinzh [Mon, 26 Jun 2023 20:48:31 +0000 (04:48 +0800)]
zig : add tests by zig (#307)
* update build.zig
* zig : add tests by zig
Jiahao Li [Mon, 26 Jun 2023 20:47:31 +0000 (04:47 +0800)]
ggml : support ChatGLM-style RoPE (#305)
Georgi Gerganov [Mon, 26 Jun 2023 20:26:37 +0000 (23:26 +0300)]
ggml : increase max name size to 48
Georgi Gerganov [Mon, 26 Jun 2023 18:10:24 +0000 (21:10 +0300)]
ggml : sync llama.cpp (NUMA + thread improvements + k-quants)
Playdev [Sun, 25 Jun 2023 13:50:39 +0000 (22:50 +0900)]
py : add requirements.txt (#201)
* Add requirements.txt
* Fix README.md files
M. Yusuf Sarıgöz [Sun, 25 Jun 2023 13:45:34 +0000 (16:45 +0300)]
readme : add link to CLIP example (#298)
Georgi Gerganov [Sun, 25 Jun 2023 13:39:57 +0000 (16:39 +0300)]
ggml : fix invalid src0 dereference
Georgi Gerganov [Sun, 25 Jun 2023 13:38:17 +0000 (16:38 +0300)]
ggml : remove _GNU_SOURCE
ref : https://github.com/ggerganov/whisper.cpp/pull/1027
sjinzh [Sun, 25 Jun 2023 13:36:09 +0000 (21:36 +0800)]
zig : update build.zig (#296)
Georgi Gerganov [Sun, 25 Jun 2023 13:09:34 +0000 (16:09 +0300)]
readme : add roadmap + manifesto
M. Yusuf Sarıgöz [Sun, 25 Jun 2023 12:59:24 +0000 (15:59 +0300)]
ggml : do not round up the conv 2D row size (#274)
Georgi Gerganov [Sun, 25 Jun 2023 12:38:55 +0000 (15:38 +0300)]
whisper : fix ifdef
Georgi Gerganov [Sun, 25 Jun 2023 12:37:02 +0000 (15:37 +0300)]
opencl : remove ggml-opencl.c
Georgi Gerganov [Sun, 25 Jun 2023 12:35:05 +0000 (15:35 +0300)]
whisper : sync latest whisper.cpp
Georgi Gerganov [Sun, 25 Jun 2023 11:31:01 +0000 (14:31 +0300)]
whisper : sync latest whisper.cpp
Georgi Gerganov [Sun, 25 Jun 2023 11:20:41 +0000 (14:20 +0300)]
common : fix trailing whitespace
Georgi Gerganov [Sun, 25 Jun 2023 11:19:47 +0000 (14:19 +0300)]
whisper : sync latest whisper.cpp
Georgi Gerganov [Sun, 25 Jun 2023 10:07:18 +0000 (13:07 +0300)]
readme : add encodec.cpp link
Georgi Gerganov [Sat, 24 Jun 2023 17:58:42 +0000 (20:58 +0300)]
readme : add BioGPT example link
LoganDark [Sat, 24 Jun 2023 17:47:53 +0000 (10:47 -0700)]
ggml : add custom mapping functions (#264)
* Add custom mapping functions
The current mapping functions are basically jokes, add some real
ones. These ones get access to the actual tensor structs so they
can do things like
- Know the dimensions they are operating on
- Work with tensors with more than 2 dimensions, or transposed
- Operate on two differently sized tensors (like matmul)
- Use their own thread pool that does a better job than ggml does.
Among other things ...
* fix ordering mistake
* ggml : custom operators support scratch buffers
---------
Co-authored-by: Georgi Gerganov <redacted>
sjinzh [Sat, 24 Jun 2023 17:03:13 +0000 (01:03 +0800)]
zig : add zig build system support (#279)
* add zig build system support
* add zig build system support
Georgi Gerganov [Sat, 24 Jun 2023 16:39:32 +0000 (19:39 +0300)]
tests : allow to set threads to test-grad0
Borislav Stanimirov [Sat, 24 Jun 2023 16:11:35 +0000 (19:11 +0300)]
build : fix compilation errors and warnings when building with MSVC (#275)
Borislav Stanimirov [Sat, 24 Jun 2023 16:06:13 +0000 (19:06 +0300)]
tests : increase stack size for test1 when building with MSVC (#277)
Georgi Gerganov [Sat, 24 Jun 2023 16:03:09 +0000 (19:03 +0300)]
tests : use LBFGS optimizer instead of ADAM (close #276)
ADAM seems to behave differently since the recent training changes.
Need to see how to make it work again for test2 - probably some
parameters need to be adjusted
AmbientL [Sat, 24 Jun 2023 15:31:38 +0000 (15:31 +0000)]
ggml : more verbose memory allocation failure (#270)
AmbientL [Sat, 24 Jun 2023 15:30:23 +0000 (15:30 +0000)]
starcoder : add special tokens for fill-in-the-middle task (#269)
Georgi Gerganov [Sat, 24 Jun 2023 15:27:46 +0000 (18:27 +0300)]
ggml : sync llama.cpp (tensor names)
Georgi Gerganov [Mon, 19 Jun 2023 18:28:16 +0000 (21:28 +0300)]
ci : reduce GGML_NLOOP to 3
Georgi Gerganov [Mon, 19 Jun 2023 17:43:19 +0000 (20:43 +0300)]
tests : sync test-grad0 from llama.cpp
Georgi Gerganov [Mon, 19 Jun 2023 17:43:12 +0000 (20:43 +0300)]
ggml : fix bug in LBFGS optimizer
Georgi Gerganov [Mon, 19 Jun 2023 17:35:08 +0000 (20:35 +0300)]
ggml : sync latest llama.cpp
Ebey Abraham [Sun, 18 Jun 2023 10:33:38 +0000 (11:33 +0100)]
gpt-2 : fix typo (#261)
Co-authored-by: Ebey Abraham <redacted>
Avi Lumelsky [Sun, 18 Jun 2023 10:32:09 +0000 (13:32 +0300)]
whisper : removed duplicate lines in convert-pt-to-ggml.py (#256)
Deleted 2 lines applying `.astype(float32)` conversion to the model weights (no real impact, just cleaner code)
Lukas Möller [Sun, 18 Jun 2023 08:34:21 +0000 (10:34 +0200)]
replit : update inference code to match reference (#218)
* Update replit inference code to match reference
* Add qntvr printf
Adam Tazi [Sun, 18 Jun 2023 08:15:58 +0000 (01:15 -0700)]
ci : introduce GitHub Actions CI workflow (#247)
* Introduce GitHub Actions CI workflow for the ggml repo
This commit integrates a GitHub Actions CI workflow that compiles and tests the codebase on both Ubuntu 22.04 and macOS 12 Monterey. The workflow is triggered on pull requests against the main branch and on every push to the main branch.
To accommodate the resource constraints of the GitHub-hosted runners, a `GGML_NITER` environment variable is introduced, allowing tests to run within a reasonable time frame. `test-grad0.c` is modified to use this variable instead of `GGML_NLOOP`.
The workflow file includes:
- A build strategy for both Ubuntu and macOS.
- An environment setup with variables `GGML_NLOOP` and `GGML_NITER`.
- A step to limit the number of threads used by `test2.c` for efficient execution.
- A typical build process with steps for environment creation, CMake configuration, building, and verbose testing with a timeout.
* main to master
Tanmay [Sun, 18 Jun 2023 08:09:48 +0000 (13:39 +0530)]
ggml : convert interleaved addressing to sequential addressing for reduce functions (#117)
* Convert interleaved addressing to sequential addressing for REDUCE
* update addressing on new archs
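The addressing change can be illustrated in plain C (a serial stand-in for the GPU/SIMD kernel, not the actual CUDA code): with sequential addressing, at each step lane i adds lane i + stride and the stride halves, so the active lanes stay contiguous instead of interleaved, which avoids bank conflicts on real hardware.

```c
#include <stddef.h>

/* Tree reduction with sequential addressing; n is assumed a power of two. */
static float reduce_sum_sequential(float *buf, size_t n) {
    for (size_t stride = n / 2; stride > 0; stride /= 2) {
        for (size_t i = 0; i < stride; ++i) {
            buf[i] += buf[i + stride]; /* contiguous active range [0, stride) */
        }
    }
    return buf[0];
}
```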
Ravindra Marella [Sun, 18 Jun 2023 07:54:59 +0000 (13:24 +0530)]
examples : fix c++ standard errors and pedantic warnings (#239)
Cristiano Calcagno [Sun, 18 Jun 2023 07:45:11 +0000 (09:45 +0200)]
ggml : fix minor resource leak reported by static analysis (#237)
Ravindra Marella [Sun, 18 Jun 2023 07:37:09 +0000 (13:07 +0530)]
starcoder : add support for starchat special tokens (#246)
* starcoder : add support for starchat special tokens
* examples : fix `gpt_tokenize()` for special tokens
LoganDark [Fri, 16 Jun 2023 19:39:09 +0000 (12:39 -0700)]
ggml : return input tensor in ggml_set_name (#262)
this is SO USEFUL for debugging. in order to find any cgraph node,
I can wrap it in ggml_set_name and set a conditional breakpoint.
but I can only wrap existing code if this returns its input.
otherwise the barrier becomes annoyingly high (have to move a
bunch of code around to add a name to a tensor)
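The pattern this commit enables can be sketched with a toy type (illustrative, not the ggml API): because the setter returns its argument, it can wrap an expression in place without restructuring the surrounding code.

```c
#include <stddef.h>

struct toy_node { const char *name; };

/* Returning the input lets callers write
   use(toy_set_name(make_node(), "attn_scores")) inline. */
static struct toy_node *toy_set_name(struct toy_node *t, const char *name) {
    t->name = name;
    return t;
}
```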
LoganDark [Fri, 16 Jun 2023 19:17:30 +0000 (12:17 -0700)]
ggml : fix ggml_clamp (#263)
This unconditionally failed before
M. Yusuf Sarıgöz [Fri, 16 Jun 2023 17:36:46 +0000 (20:36 +0300)]
ggml : add quick GELU (#254)
* Implement Quick GELU
* Revert "Implement Quick GELU"
This reverts commit ff220cc1f91a184f195d19b17ed4c352cc72a6f0.
* Tidy up ggml.h
* Respect the style of ggml
* Fix: Fix minor typo
* Rename `quick_gelu` -> `gelu_quick`
Andrei [Thu, 8 Jun 2023 18:51:39 +0000 (14:51 -0400)]
cmake : export all symbols on windows when building shared library (#234)
Currently, building ggml on Windows as a shared library does not export all symbols by default.
LoganDark [Wed, 7 Jun 2023 16:16:19 +0000 (09:16 -0700)]
ggml : correct off-by-one bounds check in ggml_compute_forward_set_f32 (#229)
without this fix you will be unable to set a zero-length tensor to the end of another tensor
this sounds stupid, but is used in my testing
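The boundary condition at issue can be sketched as follows (hypothetical helper, not the ggml code): a write of `len` elements starting at `offset` into a buffer of `size` elements is valid when `offset + len <= size`, so a zero-length write at `offset == size` must be accepted. An off-by-one check using `<` instead of `<=` rejects exactly that case.

```c
#include <stddef.h>

/* Overflow-safe inclusive bounds check: valid iff offset + len <= size. */
static int range_in_bounds(size_t offset, size_t len, size_t size) {
    return offset <= size && len <= size - offset;
}
```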
klosax [Wed, 7 Jun 2023 16:15:50 +0000 (18:15 +0200)]
gpt-neox : fix ctx size calculation (#228)
Georgi Gerganov [Wed, 7 Jun 2023 16:14:50 +0000 (19:14 +0300)]
ggml : fix ggml_clamp thresholds being read as ints instead of floats (#221)
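The bug class fixed here can be illustrated with a toy parameter buffer (illustrative, not the actual ggml storage layout): op parameters stored as raw float bits must be read back with a float-typed copy; reading the same bytes as an integer value yields garbage thresholds.

```c
#include <stdint.h>
#include <string.h>

/* Store and retrieve a float in an int32_t parameter slot via memcpy,
   the well-defined way to type-pun in C. */
static void write_param_float(int32_t *params, int i, float v) {
    memcpy(&params[i], &v, sizeof(v));
}

static float read_param_as_float(const int32_t *params, int i) {
    float v;
    memcpy(&v, &params[i], sizeof(v));
    return v;
}
```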
Jiahao Li [Wed, 7 Jun 2023 16:14:27 +0000 (00:14 +0800)]
ggml : add inplace ops api in header file (#219)
Georgi Gerganov [Fri, 2 Jun 2023 12:46:59 +0000 (15:46 +0300)]
ggml : add ggml_conv_2d_sk_p0(), ggml_win_part(), ggml_win_unpart()
Georgi Gerganov [Tue, 30 May 2023 10:49:08 +0000 (13:49 +0300)]
ggml : fix ggml op conv_1d enum names
Georgi Gerganov [Tue, 30 May 2023 10:19:55 +0000 (13:19 +0300)]
ggml : better conv_1d naming
Georgi Gerganov [Tue, 30 May 2023 07:18:31 +0000 (10:18 +0300)]
ggml : rename conv_1d ops to reflect half-padding used
Georgi Gerganov [Tue, 30 May 2023 07:03:30 +0000 (10:03 +0300)]
ggml : fix compiler warnings for printf
Georgi Gerganov [Mon, 29 May 2023 18:14:52 +0000 (21:14 +0300)]
mnist : remove redundant stuff + rename ctx0
Eldar Yusupov [Mon, 29 May 2023 16:55:13 +0000 (19:55 +0300)]
mnist : add missing header (#213)
Eldar Yusupov [Mon, 29 May 2023 16:47:57 +0000 (19:47 +0300)]
common : fix compilation on Linux (#212)
Georgi Gerganov [Mon, 29 May 2023 16:28:07 +0000 (19:28 +0300)]
ggml : cgraph export/import/eval example + GPU support (#108)
* ggml : cgraph export brainstorming
* mnist : code style
* mnist : minor
* ggml : initial cgraph export
* ggml : initial graph import (wip)
* ggml : import op args correctly
* ggml : add ggml_get_tensor_by_name()
* mnist : add compute graph evaluation on CPU example
* ggml : add ggml_tensor_overhead()
* ggml : rename new functions to ggml_cgraph_...
* mnist : add Metal inference skeleton (WIP)
* mnist : working on the Metal pipeline (WIP)
* mnist : prepare the Metal encoder (WIP)
* mnist : first Metal kernel for F32 ADD
* mnist : looks like MTLHeap does not work
* mnist : initial full pass of MNIST on the GPU (not verified)
* mnist : minor cleanup
* mnist : full GPU inference works
* mnist : use custom soft_max kernel since MPSMatrixSoftMax is bugged
* mnist : use constant for soft_max instead of hardcoded 10
* mnist : check multiple predictions (Metal)
* mnist : minor
* ggml : move cgraph import / export to ggml
* mnist : remove common dependencies
* mnist : fix soft_max threadgroup size
* mnist : init no_alloc member
* ggml : improve "get tensor" API
Tyé singwa [Sun, 28 May 2023 17:41:11 +0000 (20:41 +0300)]
fix : fix ggml_alibi (#204)
Skyler Celestinian-Sterling [Sun, 28 May 2023 10:45:30 +0000 (03:45 -0700)]
readme : add "development" (#203)
You are welcome lol
apcameron [Sat, 27 May 2023 13:48:33 +0000 (14:48 +0100)]
ggml : add CLBLAST support (#197)
Enable support for the RISCV architecture
This addresses https://github.com/ggerganov/ggml/issues/129
Georgi Gerganov [Sat, 27 May 2023 13:20:24 +0000 (16:20 +0300)]
cuda : sync latest llama.cpp (control DMMV X/Y sizes)