diff --git a/demos/integration_with_OpenWebUI/README.md b/demos/integration_with_OpenWebUI/README.md
index a60e6bbd86..380ba4c644 100644
--- a/demos/integration_with_OpenWebUI/README.md
+++ b/demos/integration_with_OpenWebUI/README.md
@@ -363,7 +363,7 @@ mcpo --port 9000 -- python -m mcp_weather_server
 
 ## Using Web Search
 
-### Step 1: Configure WebSearch
+### Step 1: Configure Web Search
 
 1. Go to **Admin Panel** → **Settings** → **Web Search**
 2. Enable **Web Search**
@@ -474,19 +474,22 @@ Next, download and add to config model for transcription:
 :::{tab-item} Windows
 :sync: Windows
 ```bat
-ovms.exe --pull --source_model OpenVINO/whisper-base-fp16-ov --model_repository_path models --task speech2text
+ovms.exe --pull --source_model OpenVINO/whisper-base-fp16-ov --model_repository_path models --task speech2text --target_device GPU
 ovms.exe --add_to_config --config_path models\config.json --model_path OpenVINO\whisper-base-fp16-ov --model_name OpenVINO/whisper-base-fp16-ov
 ```
 :::
 :::{tab-item} Linux (using Docker)
 :sync: Linux
 ```bash
-docker run --rm -u $(id -u):$(id -g) -v $PWD/models:/models openvino/model_server:2026.1-gpu --pull --source_model OpenVINO/whisper-base-fp16-ov --model_repository_path /models --task speech2text
+docker run --rm -u $(id -u):$(id -g) --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -v $PWD/models:/models openvino/model_server:2026.1-gpu --pull --source_model OpenVINO/whisper-base-fp16-ov --model_repository_path /models --task speech2text --target_device GPU
 docker run --rm -u $(id -u):$(id -g) -v $PWD/models:/models openvino/model_server:2026.1-gpu --add_to_config --config_path /models/config.json --model_path OpenVINO/whisper-base-fp16-ov --model_name OpenVINO/whisper-base-fp16-ov
 ```
 :::
 ::::
 
+> **Note:** The Whisper model family (except Whisper-large) can also be deployed on NPU or CPU by changing the `--target_device` parameter.
+
+
 ### Step 2: Audio Settings
 
 1. Go to **Admin Panel** → **Settings** → **Audio**
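
Reviewer note on the patch above: the `--group-add=$(stat -c "%g" /dev/dri/render* | head -n 1)` addition grants the container's non-root user membership in the group that owns the GPU render node, which is what lets OVMS open the device. A minimal sketch of what that substitution evaluates to (the `RENDER_GID` variable name and the GID `0` fallback for machines without `/dev/dri` are illustrative additions, not part of the patch):

```shell
# Derive the group ID (GID) owning the first GPU render node,
# e.g. /dev/dri/renderD128; this is the value docker receives
# via --group-add in the patched command.
# Fall back to 0 on machines without /dev/dri so the sketch still
# runs; the real docker command would simply fail there.
RENDER_GID=$(stat -c "%g" /dev/dri/render* 2>/dev/null | head -n 1)
RENDER_GID=${RENDER_GID:-0}
echo "$RENDER_GID"
```

On a typical Intel GPU host this prints the numeric GID of the `render` group (often `110` or similar, but distribution-dependent), which is why the patch queries it at run time instead of hard-coding a number.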