28 changes: 13 additions & 15 deletions .github/workflows/ci.yml
@@ -25,20 +25,21 @@
persist-credentials: false
sparse-checkout: action.yml

- name: Inference request
- name: Setup Terraform
uses: hashicorp/setup-terraform@v3

Check warning — Code scanning / CodeQL: Unpinned tag for a non-immutable Action in workflow (Medium). The 'CI' workflow step uses 'hashicorp/setup-terraform' with ref 'v3', not a pinned commit hash.
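The warning above can be resolved by pinning the Action to a full commit SHA instead of a mutable tag — a sketch, where the SHA below is a placeholder for whichever commit the `v3` tag currently points to:

```yml
- name: Setup Terraform
  uses: hashicorp/setup-terraform@<full-40-character-commit-sha> # v3
```

Pinning to a SHA ensures the workflow always runs the exact code that was reviewed, even if the tag is later moved.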

- name: Run Terraform
id: terraform
continue-on-error: true
run: terraform plan

- name: AI inference request
id: prompt
uses: ./
with:
payload: |
model: openai/gpt-4.1-mini
messages:
- role: system
content: You are a helpful assistant
- role: user
content: What is the capital of France
max_tokens: 100
temperature: 0.9
top_p: 0.9
system-prompt: You are a helpful DevOps assistant and expert at debugging Terraform errors.
user-prompt: Troubleshoot the following Terraform output: ${{ steps.terraform.outputs.stderr }}
show-payload: true

- name: Echo outputs
run: |
@@ -49,7 +50,4 @@
echo "${{ steps.prompt.outputs.response-file }}"
echo "response-file contents:"
cat "${{ steps.prompt.outputs.response-file }}" | jq
echo "payload:"
echo "${{ steps.prompt.outputs.payload }}"
cat "${{ steps.prompt.outputs.response-file }}"
134 changes: 96 additions & 38 deletions README.md
@@ -1,19 +1,21 @@
[![GitHub license](https://img.shields.io/github/license/op5dev/ai-inference-request?logo=apache&label=License)](LICENSE "Apache License 2.0.")
[![GitHub release tag](https://img.shields.io/github/v/release/op5dev/ai-inference-request?logo=semanticrelease&label=Release)](https://github.com/op5dev/ai-inference-request/releases "View all releases.")
[![GitHub license](https://img.shields.io/github/license/op5dev/prompt-ai?logo=apache&label=License)](LICENSE "Apache License 2.0.")
[![GitHub release tag](https://img.shields.io/github/v/release/op5dev/prompt-ai?logo=semanticrelease&label=Release)](https://github.com/op5dev/prompt-ai/releases "View all releases.")
*
[![GitHub repository stargazers](https://img.shields.io/github/stars/op5dev/ai-inference-request)](https://github.com/op5dev/ai-inference-request "Become a stargazer.")
[![GitHub repository stargazers](https://img.shields.io/github/stars/op5dev/prompt-ai)](https://github.com/op5dev/prompt-ai "Become a stargazer.")

# AI Inference Request via GitHub Action
# Prompt GitHub AI Models via GitHub Action

> [!TIP]
> [AI inference request](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.") GitHub Models via this [GitHub Action](https://github.com/marketplace/actions/ai-inference-request-via-github-action "GitHub Actions marketplace.").
> Prompt GitHub AI Models using the [inference request](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.") API via this GitHub Action.

</br>

## Usage Examples

[Compare available AI models](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task "Comparison of AI models for GitHub.") to choose the best one for your use-case.

### Summarize GitHub Issues

```yml
on:
issues:
@@ -30,18 +32,13 @@ jobs:
steps:
- name: Summarize issue
id: prompt
uses: op5dev/ai-inference-request@v2
uses: op5dev/prompt-ai@v2
with:
payload: |
model: openai/gpt-4.1-mini
messages:
- role: system
content: You are a helpful assistant running within GitHub CI.
- role: user
content: Concisely summarize this GitHub issue titled ${{ github.event.issue.title }}: ${{ github.event.issue.body }}
max_tokens: 100
temperature: 0.9
top_p: 0.9
user-prompt: |
Concisely summarize the GitHub issue
with title '${{ github.event.issue.title }}'
and body: ${{ github.event.issue.body }}
max-tokens: 250

- name: Comment summary
run: gh issue comment $NUMBER --body "$SUMMARY"
@@ -51,31 +48,92 @@ jobs:
SUMMARY: ${{ steps.prompt.outputs.response }}
```

### Troubleshoot Terraform Deployments

```yml
on:
pull_request:
push:
branches: main

jobs:
provision:
runs-on: ubuntu-latest

permissions:
actions: read
checks: write
contents: read
pull-requests: write
models: read

steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Setup Terraform
uses: hashicorp/setup-terraform@v3

- name: Provision Terraform
id: provision
uses: op5dev/tf-via-pr@v13
with:
working-directory: env/dev
command: ${{ github.event_name == 'push' && 'apply' || 'plan' }}

- name: Troubleshoot Terraform
if: failure()
uses: op5dev/prompt-ai@v2
with:
model: openai/gpt-4.1-mini
system-prompt: You are a helpful DevOps assistant and expert at debugging Terraform errors.
user-prompt: Troubleshoot the following Terraform output: ${{ steps.provision.outputs.result }}
max-tokens: 500
temperature: 0.7
top-p: 0.9
```

</br>

## Inputs

Either `payload` or `payload-file` is required with at least `model` and `messages` parameters, per [documentation](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.").

| Type | Name | Description |
| ------ | -------------------- | ----------------------------------------------------------------------------------------------------------- |
| Data | `payload` | Body parameters of the inference request in YAML format.</br>Example: `model…` |
| Data | `payload-file` | Path to a file containing the body parameters of the inference request.</br>Example: `./payload.{json,yml}` |
| Config | `show-payload` | Whether to show the payload in the logs.</br>Default: `true` |
| Config | `show-response` | Whether to show the response content in the logs.</br>Default: `true` |
| Admin | `github-api-version` | GitHub API version.</br>Default: `2022-11-28` |
| Admin | `github-token` | GitHub token.</br>Default: `github.token` |
| Admin | `org` | Organization for request attribution.</br>Example: `github.repository_owner` |
The only required input is `user-prompt`; every other parameter can be tuned per [documentation](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.").

| Type | Name | Description |
| ---------- | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Common | `model` | Model ID to use for the inference request.</br>(e.g., `openai/gpt-4.1-mini`) |
| Common | `system-prompt` | Prompt associated with the `system` role.</br>(e.g., `You are a helpful software engineering assistant`) |
| Common | `user-prompt` | Prompt associated with the `user` role.</br>(e.g., `List best practices for workflows with GitHub Actions`) |
| Common | `max-tokens` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max-tokens` cannot exceed the model's context length.</br>(e.g., `100`) |
| Common | `temperature` | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic.</br>(e.g., range is `[0, 1]`) |
| Common | `top-p` | An alternative to sampling with temperature called nucleus sampling. This value causes the model to consider the results of tokens with the provided probability mass.</br>(e.g., range is `[0, 1]`) |
| Additional | `frequency-penalty` | A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text.</br>(e.g., range is `[-2, 2]`) |
| Additional | `modalities` | The modalities that the model is allowed to use for the chat completions response.</br>(e.g., from `text` and `audio`) |
| Additional | `org` | Organization to which the request is to be attributed.</br>(e.g., `github.repository_owner`) |
| Additional | `presence-penalty` | A value that influences the probability of generated tokens appearing based on their existing presence in generated text.</br>(e.g., range is `[-2, 2]`) |
| Additional | `seed` | If specified, the system will make a best effort to sample deterministically such that repeated requests with the same seed and parameters should return the same result.</br>(e.g., `123456789`) |
| Additional | `stop` | A collection of textual sequences that will end completion generation.</br>(e.g., `["\n\n", "END"]`) |
| Additional | `stream` | A value indicating whether chat completions should be streamed for this request.</br>(e.g., `false`) |
| Additional | `stream-include-usage` | Whether to include usage information in the response.</br>(e.g., `false`) |
| Additional | `tool-choice` | If specified, the model will configure which of the provided tools it can use for the chat completions response.</br>(e.g., `auto`, `required`, or `none`) |
| Payload | `payload` | Body parameters of the inference request in JSON format.</br>(e.g., `{"model"…`) |
| Payload | `payload-file` | Path to a JSON file containing the body parameters of the inference request.</br>(e.g., `./payload.json`) |
| Payload | `show-payload` | Whether to show the body parameters in the workflow log.</br>(e.g., `false`) |
| Payload | `show-response` | Whether to show the response content in the workflow log.</br>(e.g., `true`) |
| GitHub | `github-api-version` | GitHub API version.</br>(e.g., `2022-11-28`) |
| GitHub | `github-token` | GitHub token for authorization.</br>(e.g., `github.token`) |
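
As an alternative to the individual inputs above, the body parameters can be supplied wholesale via `payload` — a minimal sketch, assuming the JSON mirrors the inference request schema linked above:

```yml
- name: AI inference request
  uses: op5dev/prompt-ai@v2
  with:
    payload: |
      {
        "model": "openai/gpt-4.1-mini",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ],
        "max_tokens": 100
      }
```

The same JSON can be checked into the repository and referenced via `payload-file` instead.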

</br>

## Outputs

| Name | Description |
| --------------- | -------------------------------------------------------- |
| `response` | Response content from the inference request. |
| `response-file` | File path containing the complete, raw response. |
| `payload` | Body parameters of the inference request in JSON format. |
Due to GitHub's API limitations, the `response` content is truncated to 262,144 (2^18) characters, so the complete, raw response is saved to `response-file`.

| Name | Description |
| --------------- | --------------------------------------------------------------- |
| `response` | Response content from the inference request. |
| `response-file` | File path containing the complete, raw response in JSON format. |
| `payload` | Body parameters of the inference request in JSON format. |
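
The saved file can be post-processed with `jq` — a sketch, assuming the raw response follows the OpenAI-style chat completion shape (verify against your own `response-file`):

```shell
# Stand-in for "${{ steps.prompt.outputs.response-file }}" in a real workflow.
printf '%s' '{"choices":[{"message":{"content":"Paris"}}]}' > response.json

# Extract just the assistant's message content from the raw JSON response.
jq -r '.choices[0].message.content' response.json
```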

</br>

@@ -91,21 +149,21 @@ View [security policy and reporting instructions](SECURITY.md).

## Changelog

View [all notable changes](https://github.com/op5dev/ai-inference-request/releases "Releases.") to this project in [Keep a Changelog](https://keepachangelog.com "Keep a Changelog.") format, which adheres to [Semantic Versioning](https://semver.org "Semantic Versioning.").
View [all notable changes](https://github.com/op5dev/prompt-ai/releases "Releases.") to this project in [Keep a Changelog](https://keepachangelog.com "Keep a Changelog.") format, which adheres to [Semantic Versioning](https://semver.org "Semantic Versioning.").

> [!TIP]
>
> All forms of **contribution are very welcome** and deeply appreciated for fostering open-source projects.
>
> - [Create a PR](https://github.com/op5dev/ai-inference-request/pulls "Create a pull request.") to contribute changes you'd like to see.
> - [Raise an issue](https://github.com/op5dev/ai-inference-request/issues "Raise an issue.") to propose changes or report unexpected behavior.
> - [Open a discussion](https://github.com/op5dev/ai-inference-request/discussions "Open a discussion.") to discuss broader topics or questions.
> - [Become a stargazer](https://github.com/op5dev/ai-inference-request/stargazers "Become a stargazer.") if you find this project useful.
> - [Create a PR](https://github.com/op5dev/prompt-ai/pulls "Create a pull request.") to contribute changes you'd like to see.
> - [Raise an issue](https://github.com/op5dev/prompt-ai/issues "Raise an issue.") to propose changes or report unexpected behavior.
> - [Open a discussion](https://github.com/op5dev/prompt-ai/discussions "Open a discussion.") to discuss broader topics or questions.
> - [Become a stargazer](https://github.com/op5dev/prompt-ai/stargazers "Become a stargazer.") if you find this project useful.

</br>

## License

- This project is licensed under the **permissive** [Apache License 2.0](LICENSE "Apache License 2.0.").
- All works herein are my own, shared of my own volition, and [contributors](https://github.com/op5dev/ai-inference-request/graphs/contributors "Contributors.").
- All works herein are my own, shared of my own volition, and [contributors](https://github.com/op5dev/prompt-ai/graphs/contributors "Contributors.").
- Copyright 2016-present [Rishav Dhar](https://rdhar.dev "Rishav Dhar's profile.") — All wrongs reserved.
2 changes: 1 addition & 1 deletion SECURITY.md
@@ -17,4 +17,4 @@ Integrating security in your CI/CD pipeline is critical to practicing DevSecOps.

## Reporting a Vulnerability

You must never report security-related issues, vulnerabilities, or bugs containing sensitive information to the issue tracker or elsewhere in public. Instead, sensitive bugs must be sent by email to <security@OP5.dev> or reported via [Security Advisory](https://github.com/op5dev/ai-inference-request/security/advisories/new "Create a new security advisory.").
You must never report security-related issues, vulnerabilities, or bugs containing sensitive information to the issue tracker or elsewhere in public. Instead, sensitive bugs must be sent by email to <security@OP5.dev> or reported via [Security Advisory](https://github.com/op5dev/prompt-ai/security/advisories/new "Create a new security advisory.").