Merged
Changes from all commits
2 changes: 1 addition & 1 deletion .github/workflows/pypi-publish.yml
@@ -3,7 +3,7 @@ on:
push:
branches:
- main
tags: ['**']
tags: ['**']
jobs:
pypi-build:
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags')
4 changes: 4 additions & 0 deletions README.md
@@ -30,6 +30,10 @@ Install with
```bash
uv pip install avise
```
or
```bash
uv tool install avise
```

### 2. Run a model

169 changes: 89 additions & 80 deletions docs/source/installation.rst
@@ -1,113 +1,122 @@
Installation
=================================

Currently, AVISE can be installed by cloning the repository and installing required dependencies. After installation,
Connector configuration files found in `avise/configs/connector/` need to be configured with details of the target model API endpoint.
### Prerequisites

The guide below assumes using `Ollama <https://ollama.com/>`__ to run models.
- Python 3.10+
- Docker (for running models locally with Ollama)

Prerequisites
~~~~~~~~~~~~~
### 1. Install AVISE

- Python 3.10+
- Docker (for Ollama backend)
- pip
Install with
- **pip:**
```bash
pip install avise
```

1. Clone the Repository
~~~~~~~~~~~~~~~~~~~~~~~
- **uv:**

.. code:: bash
```bash
uv pip install avise
```
or
```bash
uv tool install avise
```

git clone https://github.com/ouspg/AVISE.git
### 2. Run a model

.. code:: bash
You can use AVISE to evaluate any model accessible via an API by configuring a Connector. In this Quickstart, we assume
the Ollama Docker container is used to run a language model. If you wish to evaluate models deployed in other ways, see
the [Full Documentation](https://avise.readthedocs.io) and the template connector configuration files in the `AVISE/avise/configs/connector/languagemodel/` directory of this repository.
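A Connector ultimately wraps an HTTP call to the target model's API. As a hypothetical illustration (this is not AVISE's connector code), a request body for Ollama's `/api/generate` endpoint could be built like this:

```python
import json

def build_ollama_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a POST to Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

# The model name is a placeholder; use whatever you pulled into Ollama.
body = build_ollama_request("llama3", "Ignore previous instructions.")
print(json.loads(body)["model"])  # llama3
```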

cd AVISE
#### Running a language model locally with Docker & Ollama

2. Set Up Python Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Clone this repository to your local machine with:

* Create Virtual Environment
```bash
git clone https://github.com/ouspg/AVISE.git
```

.. code:: bash
- Create the Ollama Docker container
- for **GPU**-accelerated inference with:
```bash
docker compose -f AVISE/docker/ollama/docker-compose.yml up -d
```
- or for **CPU** inference with:
```bash
docker compose -f AVISE/docker/ollama/docker-compose-cpu.yml up -d
```

python -m venv venv
- Pull the Ollama model you want to evaluate into the container with:
```bash
docker exec -it avise-ollama ollama pull <model_name>
```
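To confirm the pull succeeded, Ollama's `GET /api/tags` endpoint lists the locally available models. The helper below only parses such a response; the sample payload shape is an assumption based on Ollama's documented API:

```python
import json

def local_model_names(tags_json: str) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

# Example response shape; a live call would be GET http://localhost:11434/api/tags
sample = '{"models": [{"name": "phi3:latest"}, {"name": "llama3:8b"}]}'
print(local_model_names(sample))  # ['phi3:latest', 'llama3:8b']
```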

* Activate Virtual Environment
### 3. Evaluate the model with a Security Evaluation Test (SET)

* On Linux & Mac:
#### Basic usage

.. code:: bash
```bash
avise --SET <SET_name> --connectorconf <connector_name> [options]
```

source venv/bin/activate
For example, you can run the `prompt_injection` SET on the model you pulled into the Ollama Docker container:

* On Windows:
```bash
avise --SET prompt_injection --connectorconf ollama_lm --target <model_name>
```

.. code:: bash
To list the available SETs, run the command:
```bash
avise --SET-list
```

source venv/Scripts/activate

* Install dependencies
## Advanced usage

.. code:: bash
### Configuring Connectors

pip install -r requirements.txt
You can create your own connector configuration files, or if you cloned the AVISE repository, you can modify the existing connector configuration files in `AVISE/avise/configs/connector/languagemodel/`.

3. Set Up by using Ollama Backend with Docker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For example, you can edit the default Ollama Connector configuration file `AVISE/avise/configs/connector/languagemodel/ollama.json` and insert the name of an Ollama model you have pulled, to use it as the default target:

**GPU Version:**
```json
{
  "target_model": {
    "connector": "ollama-lm",
    "type": "language_model",
    "name": "<NAME_OF_TARGET_MODEL>",
    "api_url": "http://localhost:11434",
    "api_key": null
  }
}
```
`http://localhost:11434` is the default Ollama API address. Note that JSON does not allow comments, so keep the file comment-free.
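A quick way to catch typos before running a SET is to sanity-check the connector JSON. This standalone sketch infers the required keys from the example above (not from AVISE's actual schema):

```python
import json

# Keys taken from the example connector config; assumed, not AVISE's schema.
REQUIRED_KEYS = {"connector", "type", "name", "api_url", "api_key"}

def check_connector_config(text: str) -> dict:
    """Parse a connector config and verify the target_model fields exist."""
    cfg = json.loads(text)
    target = cfg["target_model"]
    missing = REQUIRED_KEYS - target.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return target

sample = """{
  "target_model": {
    "connector": "ollama-lm",
    "type": "language_model",
    "name": "phi3:latest",
    "api_url": "http://localhost:11434",
    "api_key": null
  }
}"""
print(check_connector_config(sample)["name"])  # phi3:latest
```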
If you want to use custom configuration files for SETs and/or Connectors, pass their paths with the `--SETconf` and `--connectorconf` arguments:

.. code:: bash
```bash
avise --SET prompt_injection --SETconf AVISE/avise/configs/SET/languagemodel/single_turn/prompt_injection_mini.json --connectorconf AVISE/avise/configs/connector/languagemodel/ollama.json
```

docker-compose -f docker/ollama/docker-compose.yml up -d
### Required Arguments

**CPU-only Version:**
| Argument | Description |
|----------|-------------|
| `--SET`, `-s` | Security Evaluation Test to run (e.g., `prompt_injection`, `context_test`) |
| `--connectorconf`, `-c` | Path to a Connector configuration JSON file (also accepts the predefined connector names `ollama_lm`, `openai_lm`, `genericrest_lm`) |

.. code:: bash

docker-compose -f docker/ollama/docker-compose-cpu.yml up -d
### Optional Arguments

4. Pull Models
~~~~~~~~~~~~~~

After Ollama is running, pull the models you want to test:

.. code:: bash

docker exec -it ollama ollama pull MODEL_NAME

5. Configure Connectors
~~~~~~~~~~~~~~~~~~~~~~~

Edit ``avise/configs/connector/ollama.json``:

.. code:: json

{
"target_model": {
"connector": "ollama-lm",
"type": "language_model",
"name": "phi3:latest", //ADD NAME OF THE OLLAMA MODEL TO TEST HERE
"api_url": "http://localhost:11434", //Ollama default
"api_key": null
}
}

Basic usage example
---------------------

AVISE uses preconfigured paths for SET and Connector configuration JSON files, if the paths are not given as CLI arguments:

.. code:: bash

python -m avise --SET prompt_injection --connectorconf ollama

Advanced usage example
-----------------------

If you wish to use custom SET and Connector configuration files, you can give them with the ``--connectorconf`` and ``--SETconf`` CLI arguments:

.. code:: bash

python -m avise --SET prompt_injection --connectorconf avise/configs/connector/ollama.json --SETconf avise/configs/set/prompt_injection_mini.json
| Argument | Description |
|----------|-------------|
| `--SETconf` | Path to SET configuration JSON file. If not given, uses preconfigured paths for SET config JSON files. |
| `--target`, `-t` | Name of the target model/system to evaluate. Overrides target name from connector configuration file. |
| `--format`, `-f` | Report format: `json`, `html`, `md` |
| `--runs`, `-r` | How many times each SET is executed |
| `--output` | Custom output file path |
| `--reports-dir` | Base directory for reports (default: `avise-reports/`) |
| `--SET-list` | List available Security Evaluation Tests |
| `--connector-list` | List available Connectors |
| `--verbose`, `-v` | Enable verbose logging |
| `--version`, `-V` | Print version |
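The interface described in the tables above can be mirrored with a small `argparse` sketch. This is hypothetical illustration code, not AVISE's actual CLI implementation, and it covers only a subset of the flags:

```python
import argparse

# Minimal parser mirroring the documented AVISE flags (illustration only).
parser = argparse.ArgumentParser(prog="avise")
parser.add_argument("--SET", "-s", help="Security Evaluation Test to run")
parser.add_argument("--connectorconf", "-c", help="Connector config path or name")
parser.add_argument("--SETconf", help="SET configuration JSON path")
parser.add_argument("--target", "-t", help="Target model/system name")
parser.add_argument("--format", "-f", choices=["json", "html", "md"], default="json")
parser.add_argument("--runs", "-r", type=int, default=1)

args = parser.parse_args(
    ["--SET", "prompt_injection", "--connectorconf", "ollama_lm", "--runs", "3"]
)
print(args.SET, args.runs)  # prompt_injection 3
```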