+++ /dev/null
-name: Low Severity Bugs
-description: Used to report low severity bugs in llama.cpp (e.g. cosmetic issues, non critical UI glitches)
-title: "Bug: "
-labels: ["bug-unconfirmed", "low severity"]
-body:
- - type: markdown
- attributes:
- value: |
- Thanks for taking the time to fill out this bug report!
- Please include information about your system, the steps to reproduce the bug,
- and the version of llama.cpp that you are using.
- If possible, please provide a minimal code example that reproduces the bug.
- - type: textarea
- id: what-happened
- attributes:
- label: What happened?
- description: Also tell us, what did you expect to happen?
- placeholder: Tell us what you see!
- validations:
- required: true
- - type: textarea
- id: version
- attributes:
- label: Name and Version
- description: Which executable and which version of our software are you running? (use `--version` to get a version string)
- placeholder: |
- $./llama-cli --version
- version: 2999 (42b4109e)
- built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
- validations:
- required: true
- - type: dropdown
- id: operating-system
- attributes:
- label: What operating system are you seeing the problem on?
- multiple: true
- options:
- - Linux
- - Mac
- - Windows
- - BSD
- - Other? (Please let us know in description)
- validations:
- required: false
- - type: textarea
- id: logs
- attributes:
- label: Relevant log output
- description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
- render: shell
--- /dev/null
+name: Bug (compilation)
+description: Something goes wrong when trying to compile llama.cpp.
+title: "Compile bug: "
+labels: ["bug-unconfirmed", "compilation"]
+body:
+ - type: markdown
+ attributes:
+ value: >
+ Thanks for taking the time to fill out this bug report!
+ This issue template is intended for bug reports where the compilation of llama.cpp fails.
+ Before opening an issue, please confirm that the compilation still fails with `-DGGML_CCACHE=OFF`.
+        If the compilation succeeds with ccache disabled, you should be able to fix the issue permanently
+        by clearing `~/.cache/ccache` (on Linux).
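+
+        For example, a minimal check, assuming a CMake build (adjust to your own setup):
+
+            cmake -B build -DGGML_CCACHE=OFF
+            cmake --build build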
+ - type: textarea
+ id: commit
+ attributes:
+ label: Git commit
+ description: Which commit are you trying to compile?
+ placeholder: |
+ $git rev-parse HEAD
+ 84a07a17b1b08cf2b9747c633a2372782848a27f
+ validations:
+ required: true
+ - type: dropdown
+ id: operating-system
+ attributes:
+ label: Which operating systems do you know to be affected?
+ multiple: true
+ options:
+ - Linux
+ - Mac
+ - Windows
+ - BSD
+ - Other? (Please let us know in description)
+ validations:
+ required: true
+ - type: dropdown
+ id: backends
+ attributes:
+ label: GGML backends
+ description: Which GGML backends do you know to be affected?
+ options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
+ multiple: true
+ - type: textarea
+ id: steps_to_reproduce
+ attributes:
+ label: Steps to Reproduce
+ description: >
+ Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        If you can narrow down the bug to specific compile flags, we would very much appreciate that information.
+ placeholder: >
+ Here are the exact commands that I used: ...
+ validations:
+ required: true
+ - type: textarea
+ id: first_bad_commit
+ attributes:
+ label: First Bad Commit
+ description: >
+        If the bug was not present in an earlier version, when did it start appearing?
+        If possible, please run a `git bisect` and identify the exact commit that introduced the bug.
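+
+        A minimal bisect sketch (the good commit hash is hypothetical, use one you know worked):
+
+            git bisect start
+            git bisect bad HEAD
+            git bisect good <known-good-commit>
+            # rebuild and test at each step, then mark it with: git bisect good (or bad)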
+ validations:
+ required: false
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: >
+ Please copy and paste any relevant log output, including the command that you entered and any generated text.
+ This will be automatically formatted into code, so no need for backticks.
+ render: shell
+ validations:
+ required: true
--- /dev/null
+name: Bug (model use)
+description: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
+title: "Eval bug: "
+labels: ["bug-unconfirmed", "model evaluation"]
+body:
+ - type: markdown
+ attributes:
+ value: >
+ Thanks for taking the time to fill out this bug report!
+ This issue template is intended for bug reports where the model evaluation results
+ (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
+ If you encountered the issue while using an external UI (e.g. ollama),
+ please reproduce your issue using one of the examples/binaries in this repository.
+ The `llama-cli` binary can be used for simple and reproducible model inference.
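+
+        For example, a minimal reproducible invocation (a sketch, substitute your own model path; `-s` pins the seed):
+
+            ./llama-cli -m model.gguf -p "Hello" -n 64 -s 42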
+ - type: textarea
+ id: version
+ attributes:
+ label: Name and Version
+ description: Which version of our software are you running? (use `--version` to get a version string)
+ placeholder: |
+ $./llama-cli --version
+ version: 2999 (42b4109e)
+ built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
+ validations:
+ required: true
+ - type: dropdown
+ id: operating-system
+ attributes:
+ label: Which operating systems do you know to be affected?
+ multiple: true
+ options:
+ - Linux
+ - Mac
+ - Windows
+ - BSD
+ - Other? (Please let us know in description)
+ validations:
+ required: true
+ - type: dropdown
+ id: backends
+ attributes:
+ label: GGML backends
+ description: Which GGML backends do you know to be affected?
+ options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
+ multiple: true
+ - type: textarea
+ id: hardware
+ attributes:
+ label: Hardware
+ description: Which CPUs/GPUs are you using?
+ placeholder: >
+ e.g. Ryzen 5950X + 2x RTX 4090
+ validations:
+ required: true
+ - type: textarea
+ id: model
+ attributes:
+ label: Model
+ description: >
+ Which model at which quantization were you using when encountering the bug?
+        If you downloaded a GGUF file from Hugging Face, please provide a link.
+ placeholder: >
+ e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+ validations:
+ required: false
+ - type: textarea
+ id: steps_to_reproduce
+ attributes:
+ label: Steps to Reproduce
+ description: >
+ Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+        we would very much appreciate that information.
+ placeholder: >
+ e.g. when I run llama-cli with -ngl 99 I get garbled outputs.
+ When I use -ngl 0 it works correctly.
+ Here are the exact commands that I used: ...
+ validations:
+ required: true
+ - type: textarea
+ id: first_bad_commit
+ attributes:
+ label: First Bad Commit
+ description: >
+        If the bug was not present in an earlier version, when did it start appearing?
+        If possible, please run a `git bisect` and identify the exact commit that introduced the bug.
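+
+        A minimal bisect sketch (the good commit hash is hypothetical, use one you know worked):
+
+            git bisect start
+            git bisect bad HEAD
+            git bisect good <known-good-commit>
+            # rebuild and test at each step, then mark it with: git bisect good (or bad)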
+ validations:
+ required: false
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: >
+ Please copy and paste any relevant log output, including the command that you entered and any generated text.
+ This will be automatically formatted into code, so no need for backticks.
+ render: shell
+ validations:
+ required: true
--- /dev/null
+name: Bug (misc.)
+description: Something is not working the way it should (and it's not covered by any of the above cases).
+title: "Misc. bug: "
+labels: ["bug-unconfirmed"]
+body:
+ - type: markdown
+ attributes:
+ value: >
+ Thanks for taking the time to fill out this bug report!
+ This issue template is intended for miscellaneous bugs that don't fit into any other category.
+ If you encountered the issue while using an external UI (e.g. ollama),
+ please reproduce your issue using one of the examples/binaries in this repository.
+ - type: textarea
+ id: version
+ attributes:
+ label: Name and Version
+ description: Which version of our software are you running? (use `--version` to get a version string)
+ placeholder: |
+ $./llama-cli --version
+ version: 2999 (42b4109e)
+ built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
+ validations:
+ required: true
+ - type: dropdown
+ id: operating-system
+ attributes:
+ label: Which operating systems do you know to be affected?
+ multiple: true
+ options:
+ - Linux
+ - Mac
+ - Windows
+ - BSD
+ - Other? (Please let us know in description)
+ validations:
+ required: true
+ - type: dropdown
+ id: module
+ attributes:
+ label: Which llama.cpp modules do you know to be affected?
+ multiple: true
+ options:
+ - libllama (core library)
+ - llama-cli
+ - llama-server
+ - llama-bench
+ - llama-quantize
+ - Python/Bash scripts
+ - Other (Please specify in the next section)
+ validations:
+ required: true
+ - type: textarea
+ id: steps_to_reproduce
+ attributes:
+ label: Steps to Reproduce
+ description: >
+ Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+ validations:
+ required: true
+ - type: textarea
+ id: first_bad_commit
+ attributes:
+ label: First Bad Commit
+ description: >
+        If the bug was not present in an earlier version, when did it start appearing?
+        If possible, please run a `git bisect` and identify the exact commit that introduced the bug.
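+
+        A minimal bisect sketch (the good commit hash is hypothetical, use one you know worked):
+
+            git bisect start
+            git bisect bad HEAD
+            git bisect good <known-good-commit>
+            # rebuild and test at each step, then mark it with: git bisect good (or bad)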
+ validations:
+ required: false
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: >
+ Please copy and paste any relevant log output, including the command that you entered and any generated text.
+ This will be automatically formatted into code, so no need for backticks.
+ render: shell
+ validations:
+ required: true
+++ /dev/null
-name: Medium Severity Bug
-description: Used to report medium severity bugs in llama.cpp (e.g. Malfunctioning Features but generally still useable)
-title: "Bug: "
-labels: ["bug-unconfirmed", "medium severity"]
-body:
- - type: markdown
- attributes:
- value: |
- Thanks for taking the time to fill out this bug report!
- Please include information about your system, the steps to reproduce the bug,
- and the version of llama.cpp that you are using.
- If possible, please provide a minimal code example that reproduces the bug.
- - type: textarea
- id: what-happened
- attributes:
- label: What happened?
- description: Also tell us, what did you expect to happen?
- placeholder: Tell us what you see!
- validations:
- required: true
- - type: textarea
- id: version
- attributes:
- label: Name and Version
- description: Which executable and which version of our software are you running? (use `--version` to get a version string)
- placeholder: |
- $./llama-cli --version
- version: 2999 (42b4109e)
- built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
- validations:
- required: true
- - type: dropdown
- id: operating-system
- attributes:
- label: What operating system are you seeing the problem on?
- multiple: true
- options:
- - Linux
- - Mac
- - Windows
- - BSD
- - Other? (Please let us know in description)
- validations:
- required: false
- - type: textarea
- id: logs
- attributes:
- label: Relevant log output
- description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
- render: shell
--- /dev/null
+name: Enhancement
+description: Used to request enhancements for llama.cpp.
+title: "Feature Request: "
+labels: ["enhancement"]
+body:
+ - type: markdown
+ attributes:
+ value: |
+        [Please post your idea first in Discussions if there is not yet a consensus for this enhancement request. This will help keep this issue tracker focused on enhancements that the community has agreed need to be implemented.](https://github.com/ggerganov/llama.cpp/discussions/categories/ideas)
+
+ - type: checkboxes
+ id: prerequisites
+ attributes:
+ label: Prerequisites
+ description: Please confirm the following before submitting your enhancement request.
+ options:
+ - label: I am running the latest code. Mention the version if possible as well.
+ required: true
+ - label: I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md).
+ required: true
+ - label: I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
+ required: true
+ - label: I reviewed the [Discussions](https://github.com/ggerganov/llama.cpp/discussions), and have a new and useful enhancement to share.
+ required: true
+
+ - type: textarea
+ id: feature-description
+ attributes:
+ label: Feature Description
+ description: Please provide a detailed written description of what you were trying to do, and what you expected `llama.cpp` to do as an enhancement.
+ placeholder: Detailed description of the enhancement
+ validations:
+ required: true
+
+ - type: textarea
+ id: motivation
+ attributes:
+ label: Motivation
+ description: Please provide a detailed written description of reasons why this feature is necessary and how it is useful to `llama.cpp` users.
+ placeholder: Explanation of why this feature is needed and its benefits
+ validations:
+ required: true
+
+ - type: textarea
+ id: possible-implementation
+ attributes:
+ label: Possible Implementation
+ description: If you have an idea as to how it can be implemented, please write a detailed description. Feel free to give links to external sources or share visuals that might be helpful to understand the details better.
+ placeholder: Detailed description of potential implementation
+ validations:
+ required: false
+++ /dev/null
-name: High Severity Bug
-description: Used to report high severity bugs in llama.cpp (e.g. Malfunctioning features hindering important common workflow)
-title: "Bug: "
-labels: ["bug-unconfirmed", "high severity"]
-body:
- - type: markdown
- attributes:
- value: |
- Thanks for taking the time to fill out this bug report!
- Please include information about your system, the steps to reproduce the bug,
- and the version of llama.cpp that you are using.
- If possible, please provide a minimal code example that reproduces the bug.
- - type: textarea
- id: what-happened
- attributes:
- label: What happened?
- description: Also tell us, what did you expect to happen?
- placeholder: Tell us what you see!
- validations:
- required: true
- - type: textarea
- id: version
- attributes:
- label: Name and Version
- description: Which executable and which version of our software are you running? (use `--version` to get a version string)
- placeholder: |
- $./llama-cli --version
- version: 2999 (42b4109e)
- built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
- validations:
- required: true
- - type: dropdown
- id: operating-system
- attributes:
- label: What operating system are you seeing the problem on?
- multiple: true
- options:
- - Linux
- - Mac
- - Windows
- - BSD
- - Other? (Please let us know in description)
- validations:
- required: false
- - type: textarea
- id: logs
- attributes:
- label: Relevant log output
- description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
- render: shell
--- /dev/null
+name: Research
+description: Track a new technical research area.
+title: "Research: "
+labels: ["research 🔬"]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Don't forget to check for any [duplicate research issue tickets](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aopen+is%3Aissue+label%3A%22research+%F0%9F%94%AC%22)
+
+ - type: checkboxes
+ id: research-stage
+ attributes:
+ label: Research Stage
+      description: Track the general state of this research ticket.
+ options:
+ - label: Background Research (Let's try to avoid reinventing the wheel)
+        - label: Hypothesis Formed (How do you think this will work and what will its effect be?)
+ - label: Strategy / Implementation Forming
+ - label: Analysis of results
+ - label: Debrief / Documentation (So people in the future can learn from us)
+
+ - type: textarea
+ id: background
+ attributes:
+      label: Existing literature and research
+      description: What's the current state of the art, and what's the motivation for this research?
+
+ - type: textarea
+ id: hypothesis
+ attributes:
+ label: Hypothesis
+      description: How do you think this will work, and what will its effect be?
+
+ - type: textarea
+ id: implementation
+ attributes:
+ label: Implementation
+ description: Got an approach? e.g. a PR ready to go?
+
+ - type: textarea
+ id: analysis
+ attributes:
+ label: Analysis
+ description: How does the proposed implementation behave?
+
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
+ render: shell
+++ /dev/null
-name: Critical Severity Bug
-description: Used to report critical severity bugs in llama.cpp (e.g. Crashing, Corrupted, Dataloss)
-title: "Bug: "
-labels: ["bug-unconfirmed", "critical severity"]
-body:
- - type: markdown
- attributes:
- value: |
- Thanks for taking the time to fill out this bug report!
- Please include information about your system, the steps to reproduce the bug,
- and the version of llama.cpp that you are using.
- If possible, please provide a minimal code example that reproduces the bug.
- - type: textarea
- id: what-happened
- attributes:
- label: What happened?
- description: Also tell us, what did you expect to happen?
- placeholder: Tell us what you see!
- validations:
- required: true
- - type: textarea
- id: version
- attributes:
- label: Name and Version
- description: Which executable and which version of our software are you running? (use `--version` to get a version string)
- placeholder: |
- $./llama-cli --version
- version: 2999 (42b4109e)
- built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
- validations:
- required: true
- - type: dropdown
- id: operating-system
- attributes:
- label: What operating system are you seeing the problem on?
- multiple: true
- options:
- - Linux
- - Mac
- - Windows
- - BSD
- - Other? (Please let us know in description)
- validations:
- required: false
- - type: textarea
- id: logs
- attributes:
- label: Relevant log output
- description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
- render: shell
--- /dev/null
+name: Refactor (Maintainers)
+description: Used to track refactoring opportunities.
+title: "Refactor: "
+labels: ["refactor"]
+body:
+ - type: markdown
+ attributes:
+ value: |
+        Don't forget to [check for existing refactor issue tickets](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aopen+is%3Aissue+label%3Arefactoring) in case it's already covered.
+        You may also want to check [open pull requests with the refactor label](https://github.com/ggerganov/llama.cpp/pulls?q=is%3Aopen+is%3Apr+label%3Arefactoring) for duplicates.
+
+ - type: textarea
+ id: background-description
+ attributes:
+ label: Background Description
+ description: Please provide a detailed written description of the pain points you are trying to solve.
+      placeholder: Detailed description of your motivation for requesting this refactor
+ validations:
+ required: true
+
+ - type: textarea
+ id: possible-approaches
+ attributes:
+ label: Possible Refactor Approaches
+      description: If you have ideas about possible approaches to solving this problem, describe them here. You may want to format them as a todo list.
+ placeholder: Your idea of possible refactoring opportunity/approaches
+ validations:
+ required: false
+++ /dev/null
-name: Enhancement
-description: Used to request enhancements for llama.cpp
-title: "Feature Request: "
-labels: ["enhancement"]
-body:
- - type: markdown
- attributes:
- value: |
- [Please post your idea first in Discussion if there is not yet a consensus for this enhancement request. This will help to keep this issue tracker focused on enhancements that the community has agreed needs to be implemented.](https://github.com/ggerganov/llama.cpp/discussions/categories/ideas)
-
- - type: checkboxes
- id: prerequisites
- attributes:
- label: Prerequisites
- description: Please confirm the following before submitting your enhancement request.
- options:
- - label: I am running the latest code. Mention the version if possible as well.
- required: true
- - label: I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md).
- required: true
- - label: I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- required: true
- - label: I reviewed the [Discussions](https://github.com/ggerganov/llama.cpp/discussions), and have a new and useful enhancement to share.
- required: true
-
- - type: textarea
- id: feature-description
- attributes:
- label: Feature Description
- description: Please provide a detailed written description of what you were trying to do, and what you expected `llama.cpp` to do as an enhancement.
- placeholder: Detailed description of the enhancement
- validations:
- required: true
-
- - type: textarea
- id: motivation
- attributes:
- label: Motivation
- description: Please provide a detailed written description of reasons why this feature is necessary and how it is useful to `llama.cpp` users.
- placeholder: Explanation of why this feature is needed and its benefits
- validations:
- required: true
-
- - type: textarea
- id: possible-implementation
- attributes:
- label: Possible Implementation
- description: If you have an idea as to how it can be implemented, please write a detailed description. Feel free to give links to external sources or share visuals that might be helpful to understand the details better.
- placeholder: Detailed description of potential implementation
- validations:
- required: false
+++ /dev/null
-name: Research
-description: Track new technical research area
-title: "Research: "
-labels: ["research 🔬"]
-body:
- - type: markdown
- attributes:
- value: |
- Don't forget to check for any [duplicate research issue tickets](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aopen+is%3Aissue+label%3A%22research+%F0%9F%94%AC%22)
-
- - type: checkboxes
- id: research-stage
- attributes:
- label: Research Stage
- description: Track general state of this research ticket
- options:
- - label: Background Research (Let's try to avoid reinventing the wheel)
- - label: Hypothesis Formed (How do you think this will work and it's effect?)
- - label: Strategy / Implementation Forming
- - label: Analysis of results
- - label: Debrief / Documentation (So people in the future can learn from us)
-
- - type: textarea
- id: background
- attributes:
- label: Previous existing literature and research
- description: Whats the current state of the art and whats the motivation for this research?
-
- - type: textarea
- id: hypothesis
- attributes:
- label: Hypothesis
- description: How do you think this will work and it's effect?
-
- - type: textarea
- id: implementation
- attributes:
- label: Implementation
- description: Got an approach? e.g. a PR ready to go?
-
- - type: textarea
- id: analysis
- attributes:
- label: Analysis
- description: How does the proposed implementation behave?
-
- - type: textarea
- id: logs
- attributes:
- label: Relevant log output
- description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
- render: shell
+++ /dev/null
-name: Refactor (Maintainers)
-description: Used to track refactoring opportunities
-title: "Refactor: "
-labels: ["refactor"]
-body:
- - type: markdown
- attributes:
- value: |
- Don't forget to [check for existing refactor issue tickets](https://github.com/ggerganov/llama.cpp/issues?q=is%3Aopen+is%3Aissue+label%3Arefactoring) in case it's already covered.
- Also you may want to check [Pull request refactor label as well](https://github.com/ggerganov/llama.cpp/pulls?q=is%3Aopen+is%3Apr+label%3Arefactoring) for duplicates too.
-
- - type: textarea
- id: background-description
- attributes:
- label: Background Description
- description: Please provide a detailed written description of the pain points you are trying to solve.
- placeholder: Detailed description behind your motivation to request refactor
- validations:
- required: true
-
- - type: textarea
- id: possible-approaches
- attributes:
- label: Possible Refactor Approaches
- description: If you have some idea of possible approaches to solve this problem. You may want to make it a todo list.
- placeholder: Your idea of possible refactoring opportunity/approaches
- validations:
- required: false