diff --git a/.github/workflows/pypi-publish.yml b/.github/workflows/pypi-publish.yml
index f9a0615..9d0c742 100644
--- a/.github/workflows/pypi-publish.yml
+++ b/.github/workflows/pypi-publish.yml
@@ -3,7 +3,7 @@ on:
push:
branches:
- main
- tags: ['**']
+ tags: ['**']
jobs:
pypi-build:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags')
diff --git a/README.md b/README.md
index f659968..5de1912 100644
--- a/README.md
+++ b/README.md
@@ -30,6 +30,10 @@ Install with
```bash
uv pip install avise
```
+ or
+ ```bash
+ uv tool install avise
+ ```
### 2. Run a model
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index 7d2507d..f590c9a 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst
@@ -1,113 +1,122 @@
Installation
=================================
-Currently, AVISE can be installed by cloning the repository and installing required dependencies. After installation,
-Connector configuration files found in `avise/configs/connector/` need to be configured with details of the target model API endpoint.
+### Prerequisites
-The guide below assumes using `Ollama `__ to run models.
+- Python 3.10+
+- Docker (for running models locally with Ollama)
-Prerequisites
-~~~~~~~~~~~~~
+### 1. Install AVISE
-- Python 3.10+
-- Docker (for Ollama backend)
-- pip
+Install with
+- **pip:**
+ ```bash
+ pip install avise
+ ```
-1. Clone the Repository
-~~~~~~~~~~~~~~~~~~~~~~~
+- **uv:**
-.. code:: bash
+ ```bash
+ uv pip install avise
+ ```
+ or
+ ```bash
+ uv tool install avise
+ ```
- git clone https://github.com/ouspg/AVISE.git
+### 2. Run a model
-.. code:: bash
+You can use AVISE to evaluate any model accessible via an API by configuring a Connector. In this guide, we
+assume you are using the Ollama Docker container to run a language model. If you wish to evaluate models deployed in other ways, see
+the [Full Documentation](https://avise.readthedocs.io) and the template connector configuration files in the `AVISE/avise/configs/connector/languagemodel/` directory of this repository.
- cd AVISE
+#### Running a language model locally with Docker & Ollama
-2. Set Up Python Environment
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Clone this repository to your local machine with:
-* Create Virtual Environment
+```bash
+git clone https://github.com/ouspg/AVISE.git
+```
-.. code:: bash
+- Create the Ollama Docker container
+ - for **GPU** accelerated inference with:
+ ```bash
+ docker compose -f AVISE/docker/ollama/docker-compose.yml up -d
+ ```
+ - or for **CPU** inference with:
+ ```bash
+ docker compose -f AVISE/docker/ollama/docker-compose-cpu.yml up -d
+ ```
- python -m venv venv
+- Pull an Ollama model to evaluate into the container with:
+ ```bash
+ docker exec -it avise-ollama ollama pull <model_name>
+ ```
-* Activate Virtual Environment
+### 3. Evaluate the model with a Security Evaluation Test (SET)
- * On Linux & Mac:
+#### Basic usage
- .. code:: bash
+```bash
+avise --SET <SET_name> --connectorconf <connector_config> [options]
+```
- source venv/bin/activate
+For example, you can run the `prompt_injection` SET on the model pulled to the Ollama Docker container with:
- * On Windows:
+```bash
+avise --SET prompt_injection --connectorconf ollama_lm --target <model_name>
+```
- .. code:: bash
+To list the available SETs, run the command:
+```bash
+avise --SET-list
+```
- source venv/Scripts/activate
-* Install dependencies
+## Advanced usage
-.. code:: bash
+### Configuring Connectors
- pip install -r requirements.txt
+You can create your own connector configuration files, or if you cloned the AVISE repository, you can modify the existing connector configuration files in `AVISE/avise/configs/connector/languagemodel/`.
-3. Set Up by using Ollama Backend with Docker
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For example, you can edit the default Ollama Connector configuration file `AVISE/avise/configs/connector/languagemodel/ollama.json`, and insert the name of an Ollama model you have pulled to be used as a target by default:
-**GPU Version:**
+```json
+{
+ "target_model": {
+ "connector": "ollama-lm",
+ "type": "language_model",
+ "name": "",
+ "api_url": "http://localhost:11434", // Ollama default
+ "api_key": null
+ }
+}
+```
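If you prefer to generate a connector configuration programmatically, a minimal sketch is below. It mirrors the field names from the example above; the output file name `my_ollama_connector.json` and the model name `phi3:latest` are placeholders, not anything AVISE requires.

```python
import json
from pathlib import Path

# Connector configuration mirroring the example above.
# "phi3:latest" is a placeholder -- use any model you have pulled.
config = {
    "target_model": {
        "connector": "ollama-lm",
        "type": "language_model",
        "name": "phi3:latest",
        "api_url": "http://localhost:11434",
        "api_key": None,
    }
}

path = Path("my_ollama_connector.json")
path.write_text(json.dumps(config, indent=2))

# Sanity-check that the file round-trips and the required fields are set.
loaded = json.loads(path.read_text())
assert loaded["target_model"]["connector"] == "ollama-lm"
assert loaded["target_model"]["name"], "insert the name of a pulled model"
```

A file produced this way is plain JSON (no comments), so it can be passed directly to `--connectorconf` by path.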
+If you want to use custom configuration files for SETs and/or Connectors, you can do so by giving the paths to the configuration files with `--SETconf` and `--connectorconf` arguments:
-.. code:: bash
+```bash
+avise --SET prompt_injection --SETconf AVISE/avise/configs/SET/languagemodel/single_turn/prompt_injection_mini.json --connectorconf AVISE/avise/configs/connector/languagemodel/ollama.json
+```
- docker-compose -f docker/ollama/docker-compose.yml up -d
+### Required Arguments
-**CPU-only Version:**
+| Argument | Description |
+|----------|-------------|
+| `--SET`, `-s` | Security Evaluation Test to run (e.g., `prompt_injection`, `context_test`) |
+| `--connectorconf`, `-c` | Path to Connector configuration JSON (accepts predefined connector configuration aliases: `ollama_lm`, `openai_lm`, `genericrest_lm`) |
-.. code:: bash
- docker-compose -f docker/ollama/docker-compose-cpu.yml up -d
+### Optional Arguments
-4. Pull Models
-~~~~~~~~~~~~~~
-
-After Ollama is running, pull the models you want to test:
-
-.. code:: bash
-
- docker exec -it ollama ollama pull MODEL_NAME
-
-5. Configure Connectors
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Edit ``avise/configs/connector/ollama.json``:
-
-.. code:: json
-
- {
- "target_model": {
- "connector": "ollama-lm",
- "type": "language_model",
- "name": "phi3:latest", //ADD NAME OF THE OLLAMA MODEL TO TEST HERE
- "api_url": "http://localhost:11434", //Ollama default
- "api_key": null
- }
- }
-
-Basic usage example
----------------------
-
-AVISE uses preconfigured paths for SET and Connector configuration JSON files, if the paths are not given as CLI arguments:
-
-.. code:: bash
-
- python -m avise --SET prompt_injection --connectorconf ollama
-
-Advanced usage example
------------------------
-
-If you wish to use custom SET and Connector configuration files, you can give them with the ``--connectorconf`` and ``--SETconf`` CLI arguments:
-
-.. code:: bash
-
- python -m avise --SET prompt_injection --connectorconf avise/configs/connector/ollama.json --SETconf avise/configs/set/prompt_injection_mini.json
+| Argument | Description |
+|----------|-------------|
+| `--SETconf` | Path to SET configuration JSON file. If not given, uses preconfigured paths for SET config JSON files. |
+| `--target`, `-t` | Name of the target model/system to evaluate. Overrides target name from connector configuration file. |
+| `--format`, `-f` | Report format: `json`, `html`, `md` |
+| `--runs`, `-r` | How many times each SET is executed |
+| `--output` | Custom output file path |
+| `--reports-dir` | Base directory for reports (default: `avise-reports/`) |
+| `--SET-list` | List available Security Evaluation Tests |
+| `--connector-list` | List available Connectors |
+| `--verbose`, `-v` | Enable verbose logging |
+| `--version`, `-V` | Print version |
diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst
index df616c8..8802957 100644
--- a/docs/source/quickstart.rst
+++ b/docs/source/quickstart.rst
@@ -5,140 +5,124 @@ The guide below assumes using `Ollama `__ to run models. Co
or any model accessible through a REST API.
-Prerequisites
-~~~~~~~~~~~~~
+## Quickstart for evaluating Language Models
-- Python 3.10+
-- Docker (for Ollama backend)
-- pip
+### Prerequisites
-1. Clone the Repository
-~~~~~~~~~~~~~~~~~~~~~~~
+- Python 3.10+
+- Docker (for running models locally with Ollama)
-.. code:: bash
+### 1. Install AVISE
- git clone https://github.com/ouspg/AVISE.git
+Install with
+- **pip:**
+ ```bash
+ pip install avise
+ ```
-.. code:: bash
+- **uv:**
- cd AVISE
+ ```bash
+ uv pip install avise
+ ```
+ or
+ ```bash
+ uv tool install avise
+ ```
-2. Set Up Python Environment
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### 2. Run a model
-* Create Virtual Environment
+You can use AVISE to evaluate any model accessible via an API by configuring a Connector. In this Quickstart, we
+assume you are using the Ollama Docker container to run a language model. If you wish to evaluate models deployed in other ways, see
+the [Full Documentation](https://avise.readthedocs.io) and the template connector configuration files in the `AVISE/avise/configs/connector/languagemodel/` directory of this repository.
-.. code:: bash
+#### Running a language model locally with Docker & Ollama
- python -m venv venv
+- Clone this repository to your local machine with:
-* Activate Virtual Environment
+```bash
+git clone https://github.com/ouspg/AVISE.git
+```
- * On Linux & Mac:
+- Create the Ollama Docker container
+ - for **GPU** accelerated inference with:
+ ```bash
+ docker compose -f AVISE/docker/ollama/docker-compose.yml up -d
+ ```
+ - or for **CPU** inference with:
+ ```bash
+ docker compose -f AVISE/docker/ollama/docker-compose-cpu.yml up -d
+ ```
- .. code:: bash
+- Pull an Ollama model to evaluate into the container with:
+ ```bash
+ docker exec -it avise-ollama ollama pull <model_name>
+ ```
- source venv/bin/activate
+### 3. Evaluate the model with a Security Evaluation Test (SET)
- * On Windows:
+#### Basic usage
- .. code:: bash
+```bash
+avise --SET <SET_name> --connectorconf <connector_config> [options]
+```
- source venv/Scripts/activate
+For example, you can run the `prompt_injection` SET on the model pulled to the Ollama Docker container with:
-* Install dependencies
+```bash
+avise --SET prompt_injection --connectorconf ollama_lm --target <model_name>
+```
-.. code:: bash
+To list the available SETs, run the command:
+```bash
+avise --SET-list
+```
- pip install -r requirements.txt
-3. Set Up by using Ollama Backend with Docker
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+## Advanced usage
-**GPU Version:**
+### Configuring Connectors
-.. code:: bash
+You can create your own connector configuration files, or if you cloned the AVISE repository, you can modify the existing connector configuration files in `AVISE/avise/configs/connector/languagemodel/`.
- docker-compose -f docker/ollama/docker-compose.yml up -d
+For example, you can edit the default Ollama Connector configuration file `AVISE/avise/configs/connector/languagemodel/ollama.json`, and insert the name of an Ollama model you have pulled to be used as a target by default:
-**CPU-only Version:**
+```json
+{
+ "target_model": {
+ "connector": "ollama-lm",
+ "type": "language_model",
+ "name": "",
+ "api_url": "http://localhost:11434", // Ollama default
+ "api_key": null
+ }
+}
+```
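To see what these fields map onto, here is a rough sketch (not AVISE's actual connector code) of the HTTP request that an Ollama language-model connector would build from this configuration, based on Ollama's public `/api/generate` endpoint; the model name `llama3:latest` is a placeholder for a model you have pulled.

```python
import json

# Connector configuration as in the example above;
# "llama3:latest" is a placeholder model name.
config = {
    "target_model": {
        "connector": "ollama-lm",
        "type": "language_model",
        "name": "llama3:latest",
        "api_url": "http://localhost:11434",
        "api_key": None,
    }
}

target = config["target_model"]
# Ollama's generate endpoint takes a JSON body with "model" and "prompt";
# "stream": False requests a single response instead of a token stream.
url = f"{target['api_url']}/api/generate"
payload = {"model": target["name"], "prompt": "Hello!", "stream": False}

# A connector would now POST this, e.g. requests.post(url, json=payload).
print(url)
print(json.dumps(payload))
```

In other words, `api_url` is the base URL of the Ollama server and `name` selects which pulled model answers the prompts sent during a SET.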
+If you want to use custom configuration files for SETs and/or Connectors, you can do so by giving the paths to the configuration files with `--SETconf` and `--connectorconf` arguments:
-.. code:: bash
+```bash
+avise --SET prompt_injection --SETconf AVISE/avise/configs/SET/languagemodel/single_turn/prompt_injection_mini.json --connectorconf AVISE/avise/configs/connector/languagemodel/ollama.json
+```
- docker-compose -f docker/ollama/docker-compose-cpu.yml up -d
+### Required Arguments
-4. Pull Models
-~~~~~~~~~~~~~~
+| Argument | Description |
+|----------|-------------|
+| `--SET`, `-s` | Security Evaluation Test to run (e.g., `prompt_injection`, `context_test`) |
+| `--connectorconf`, `-c` | Path to Connector configuration JSON (accepts predefined connector configuration aliases: `ollama_lm`, `openai_lm`, `genericrest_lm`) |
-After Ollama is running, pull the models you want to test:
-.. code:: bash
+### Optional Arguments
- docker exec -it ollama ollama pull MODEL_NAME
-
-5. Configure Connectors
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Edit ``avise/configs/connector/ollama.json``:
-
-.. code:: json
-
- {
- "target_model": {
- "connector": "ollama-lm",
- "type": "language_model",
- "name": "phi3:latest", //ADD NAME OF THE OLLAMA MODEL TO TEST HERE
- "api_url": "http://localhost:11434", //Ollama default
- "api_key": null
- }
- }
-
-Basic usage example
----------------------
-
-AVISE uses preconfigured paths for SET and Connector configuration JSON files, if the paths are not given as CLI arguments:
-
-.. code:: bash
-
- python -m avise --SET prompt_injection --connectorconf ollama
-
-Advanced usage example
------------------------
-
-If you wish to use custom SET and Connector configuration files, you can give them with the ``--connectorconf`` and ``--SETconf`` CLI arguments:
-
-.. code:: bash
-
- python -m avise --SET prompt_injection --connectorconf avise/configs/connector/ollama.json --SETconf avise/configs/set/prompt_injection_mini.json
-
-Required Arguments
-~~~~~~~~~~~~~~~~~~
-
-+------------------------------+---------------------------------------+
-| Argument | Description |
-+==============================+=======================================+
-| ``--SET`` | Security Evaluation Test to run |
-| | (e.g., ``prompt_injection``, |
-| | ``context_test``) |
-+------------------------------+---------------------------------------+
-| ``--connectorconf`` | Path to Connector configuration JSON |
-| | (Accepts preconfigured connector |
-| | configuration paths: ``ollama``, |
-| | ``openai``, ``genericrest``) |
-+------------------------------+---------------------------------------+
-
-
-Optional Arguments
-~~~~~~~~~~~~~~~~~~
-
-==================== ==================================================
-Argument Description
-==================== ==================================================
-``--SETconf`` Path to SET configuration JSON file
-``--format``, ``-f`` Report format: ``json``, ``html``, ``md``
-``--output`` Custom output file path
-``--reports-dir`` Base directory for reports (default: ``reports/``)
-``--list`` List available tests and formats
-``-verbose`` Enable verbose logging
-``-version`` Print version
-==================== ==================================================
+| Argument | Description |
+|----------|-------------|
+| `--SETconf` | Path to SET configuration JSON file. If not given, uses preconfigured paths for SET config JSON files. |
+| `--target`, `-t` | Name of the target model/system to evaluate. Overrides target name from connector configuration file. |
+| `--format`, `-f` | Report format: `json`, `html`, `md` |
+| `--runs`, `-r` | How many times each SET is executed |
+| `--output` | Custom output file path |
+| `--reports-dir` | Base directory for reports (default: `avise-reports/`) |
+| `--SET-list` | List available Security Evaluation Tests |
+| `--connector-list` | List available Connectors |
+| `--verbose`, `-v` | Enable verbose logging |
+| `--version`, `-V` | Print version |