diff --git a/private-ai-services-container/introduction/images/arch.png b/private-ai-services-container/introduction/images/arch.png
new file mode 100644
index 000000000..2c2612c1d
Binary files /dev/null and b/private-ai-services-container/introduction/images/arch.png differ
diff --git a/private-ai-services-container/introduction/introduction.md b/private-ai-services-container/introduction/introduction.md
new file mode 100644
index 000000000..0709f5556
--- /dev/null
+++ b/private-ai-services-container/introduction/introduction.md
@@ -0,0 +1,59 @@
+# Introduction
+
+## About this Workshop
+
+Welcome to **Build multimodal AI Vector Search using Oracle Private AI Service Container**.
+
+Oracle Private AI Services Container brings modern model inference inside your own environment: you get a local model endpoint without sending your text or images to a public AI service.
+
+Its core value is control with flexibility:
+- Keep **data inside your network** and security boundary
+- Update model services **without changing core database** deployment
+- **Reuse one model endpoint** across notebooks, SQL flows, and apps
+- **Support multimodal patterns**, such as image and text embeddings in one solution
+
+Use each path for a different job:
+- **In-database embeddings (`provider=database`)** fit SQL-first workflows with minimal moving parts.
+- **Private AI Services Container (`provider=privateai`)** fits teams that need model agility, multimodal use cases, or shared model serving across tools.
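+
+Both paths go through the same `DBMS_VECTOR.UTL_TO_EMBEDDING` call and differ only in the JSON parameters you pass. A minimal sketch of the two parameter payloads (the model names and URL are illustrative):
+
+```python
+import json
+
+# provider=database: use an ONNX model stored inside Oracle AI Database
+db_params = json.dumps({'provider': 'database', 'model': 'ALL_MINILM_L12_V2'})
+
+# provider=privateai: call the Private AI Services Container REST endpoint
+pai_params = json.dumps({
+    'provider': 'privateai',
+    'url': 'http://privateai:8080/v1/embeddings',
+    'model': 'all-minilm-l12-v2',
+})
+
+print(json.loads(db_params)['provider'], json.loads(pai_params)['provider'])
+```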
+
+Compared with public embedding APIs, a private container is often the stronger enterprise choice:
+- Sensitive data does not leave your environment
+- Latency and cost are more predictable on local network paths
+- Development is less exposed to external quotas, endpoint drift, and service outages
+
+In the following labs you will work not only with in-database embedding but **specifically** with the Oracle Private AI Services Container to:
+- discover the models available in the Oracle Private AI Services Container
+- generate embeddings both with ONNX models stored in Oracle AI Database and via the API endpoint provided by the Oracle Private AI Services Container
+- store vectors in Oracle AI Database
+- run cosine similarity search
+- build a simple image search app that uses multimodal embedding models
+
+Estimated Workshop Time: 90 minutes
+
+### Architecture at a Glance
+
+- `jupyterlab` runs Python notebooks.
+- `privateai` serves embedding models at `http://privateai:8080` on the container network.
+- `aidbfree` stores documents and vectors.
+
+![Workshop architecture](images/arch.png "Workshop architecture")
+
+
+### Objectives
+
+In this workshop, you will:
+- Validate the runtime services required for the lab
+- Generate embeddings with both database-stored ONNX models and Oracle Private AI Services Container
+- Perform semantic similarity search in Oracle AI Database 26ai
+- Build a simple image app that uses multimodal embeddings for similarity search
+
+
+## Learn More
+
+- [Oracle Private AI Services Container User Guide](https://docs.oracle.com/en/database/oracle/oracle-database/26/prvai/oracle-private-ai-services-container.html)
+- [Private AI Services Container API Reference](https://docs.oracle.com/en/database/oracle/oracle-database/26/prvai/private-ai-services-container-api-reference.html)
+- [DBMS_VECTOR UTL_TO_EMBEDDING](https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/utl_to_embedding-and-utl_to_embeddings-dbms_vector.html)
+
+## Acknowledgements
+- **Author** - Oracle LiveLabs Team
+- **Last Updated By/Date** - Oracle LiveLabs Team, March 2026
diff --git a/private-ai-services-container/lab1-verify-environment/lab1-verify-environment.md b/private-ai-services-container/lab1-verify-environment/lab1-verify-environment.md
new file mode 100644
index 000000000..d59d0b655
--- /dev/null
+++ b/private-ai-services-container/lab1-verify-environment/lab1-verify-environment.md
@@ -0,0 +1,57 @@
+# Lab 1: Verify the Runtime Environment
+
+## Introduction
+
+In this lab you verify that the Oracle Private AI Services Container API endpoint is reachable on the network.
+You will also learn how to list the models available in the container.
+All checks are executed from a **JupyterLab Terminal**.
+
+Estimated Time: 10 minutes
+
+### Objectives
+
+In this lab, you will:
+- Verify internal container DNS resolution from JupyterLab
+- Validate Private AI Services Container health
+- List the models deployed in the container
+
+### Prerequisites
+
+This lab assumes:
+- You can open a terminal in JupyterLab (`File` -> `New` -> `Terminal`)
+
+## Task 1: Verify Internal Hostname Resolution
+
+1. In JupyterLab, open a new terminal.
+
+2. Verify that runtime service names resolve:
+
+ ```bash
+ getent hosts privateai aidbfree ords
+ ```
+
+ Expected: one IP entry for each service name.
+
+## Task 2: Validate Private AI REST Endpoints and List Available Models
+
+1. Health endpoint:
+
+ ```bash
+ curl -sS -i http://privateai:8080/health
+ ```
+
+ Expected: HTTP `200`.
+
+2. List deployed models:
+
+ ```bash
+ curl -sS http://privateai:8080/v1/models | jq .
+ ```
+
+ Expected: JSON payload with a `data` array of model IDs.
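+
+    To inspect the same response in Python instead of `jq`, the payload can be parsed like this (the sample below only mimics the shape of the `/v1/models` response; the model ID is illustrative):
+
+    ```python
+    import json
+
+    # a sample shaped like the /v1/models response
+    sample = '{"data": [{"id": "all-minilm-l12-v2", "modelCapabilities": ["TEXT_EMBEDDINGS"]}]}'
+    ids = [m['id'] for m in json.loads(sample)['data']]
+    print(ids)
+    ```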
+
+
+## Acknowledgements
+- **Author** - Oracle LiveLabs Team
+- **Last Updated By/Date** - Oracle LiveLabs Team, March 2026
diff --git a/private-ai-services-container/lab2-db-model-embeddings/files/database-model-embeddings.ipynb b/private-ai-services-container/lab2-db-model-embeddings/files/database-model-embeddings.ipynb
new file mode 100644
index 000000000..7d27b72e3
--- /dev/null
+++ b/private-ai-services-container/lab2-db-model-embeddings/files/database-model-embeddings.ipynb
@@ -0,0 +1,259 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Notebook B: Database-Stored Model Embeddings\n",
+ "\n",
+ "This notebook demonstrates vector search using embedding models already stored in Oracle AI Database (for example `ALL_MINILM_L12_V2`).\n",
+ "\n",
+ "Flow:\n",
+ "- discover available database models\n",
+ "- generate embeddings with `provider=database`\n",
+ "- store vectors and run similarity search\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import json\n",
+ "import re\n",
+ "import oracledb\n",
+ "from dotenv import dotenv_values\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1) Load DB configuration from `/home/.env`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ENV_PATH = os.getenv('LAB_ENV_FILE', '/home/.env')\n",
+ "env = dotenv_values(ENV_PATH) if os.path.exists(ENV_PATH) else {}\n",
+ "\n",
+ "DB_USER = os.getenv('DB_USER') or env.get('USERNAME') or 'ADMIN'\n",
+ "DB_PASSWORD = os.getenv('DB_PASSWORD') or env.get('DBPASSWORD')\n",
+ "DB_HOST = os.getenv('DB_HOST', 'aidbfree')\n",
+ "DB_PORT = os.getenv('DB_PORT', '1521')\n",
+ "DB_SERVICE = os.getenv('DB_SERVICE', 'FREEPDB1')\n",
+ "DB_DSN = os.getenv('DB_DSN', f'{DB_HOST}:{DB_PORT}/{DB_SERVICE}')\n",
+ "PREFERRED_DB_MODEL = os.getenv('DB_EMBED_MODEL', 'ALL_MINILM_L12_V2').upper()\n",
+ "\n",
+ "print('ENV file:', ENV_PATH if os.path.exists(ENV_PATH) else 'not found')\n",
+ "print('DB user:', DB_USER)\n",
+ "print('DB dsn :', DB_DSN)\n",
+ "print('Preferred DB model:', PREFERRED_DB_MODEL)\n",
+ "\n",
+ "if not DB_PASSWORD:\n",
+ " raise ValueError('DB password not found. Set DB_PASSWORD or DBPASSWORD in /home/.env')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2) Connect and discover stored models"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "conn = oracledb.connect(user=DB_USER, password=DB_PASSWORD, dsn=DB_DSN)\n",
+ "cur = conn.cursor()\n",
+ "\n",
+ "cur.execute('select user from dual')\n",
+ "print('Connected as:', cur.fetchone()[0])\n",
+ "\n",
+ "cur.execute('''\n",
+ " SELECT model_name\n",
+ " FROM user_mining_models\n",
+ " ORDER BY model_name\n",
+ "''')\n",
+ "models = [row[0] for row in cur.fetchall()]\n",
+ "print('Models in USER_MINING_MODELS:', models)\n",
+ "\n",
+ "if not models:\n",
+ " raise RuntimeError('No models found in USER_MINING_MODELS. Provision an ONNX model first.')\n",
+ "\n",
+ "MODEL_NAME = PREFERRED_DB_MODEL if PREFERRED_DB_MODEL in models else models[0]\n",
+ "print('Selected DB model:', MODEL_NAME)\n",
+ "\n",
+ "if not re.match(r'^[A-Z][A-Z0-9_$#]*$', MODEL_NAME):\n",
+ " raise ValueError(f'Unsafe model identifier: {MODEL_NAME}')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3) Determine embedding dimension from the DB model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "db_params = json.dumps({\n",
+ " 'provider': 'database',\n",
+ " 'model': MODEL_NAME,\n",
+ "})\n",
+ "\n",
+ "cur.execute('''\n",
+ " SELECT VECTOR_DIMENSION_COUNT(\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:txt, JSON(:params))\n",
+ " )\n",
+ " FROM dual\n",
+ "''', {'txt': 'dimension probe', 'params': db_params})\n",
+ "\n",
+ "EMBEDDING_DIM = int(cur.fetchone()[0])\n",
+ "print('Embedding dimension:', EMBEDDING_DIM)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4) Create table and store in-database embeddings"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "TABLE_NAME = 'PRIVATEAI_DOCS_DBMODEL'\n",
+ "\n",
+ "try:\n",
+ " cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')\n",
+ "except oracledb.DatabaseError:\n",
+ " pass\n",
+ "\n",
+ "cur.execute(f'''\n",
+ " CREATE TABLE {TABLE_NAME} (\n",
+ " doc_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n",
+ " title VARCHAR2(200) NOT NULL,\n",
+ " content CLOB NOT NULL,\n",
+ " embedding VECTOR({EMBEDDING_DIM}, FLOAT32)\n",
+ " )\n",
+ "''')\n",
+ "\n",
+ "docs = [\n",
+ " ('Database Model Path', 'Embeddings can be generated directly in Oracle AI Database with a stored ONNX model.'),\n",
+ " ('Operational Simplicity', 'Keeping model inference in-database can simplify architecture and governance.'),\n",
+ " ('Vector Search SQL', 'Use VECTOR_DISTANCE with COSINE to rank semantic similarity results.'),\n",
+ " ('Model Governance', 'Database-stored models can be versioned and controlled with DB privileges.'),\n",
+ "]\n",
+ "\n",
+ "inserted = 0\n",
+ "for title, content in docs:\n",
+ " cur.execute(f'''\n",
+ " INSERT INTO {TABLE_NAME} (title, content, embedding)\n",
+ " VALUES (\n",
+ " :title,\n",
+ " :content,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:content, JSON(:params))\n",
+ " )\n",
+ " ''', {'title': title, 'content': content, 'params': db_params})\n",
+ " inserted += 1\n",
+ "\n",
+ "conn.commit()\n",
+ "print('Inserted rows:', inserted)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5) Similarity search with the database model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "query_text = 'Which approach keeps embedding generation inside Oracle Database?'\n",
+ "\n",
+ "sql = f'''\n",
+ "SELECT\n",
+ " title,\n",
+ " ROUND(1 - VECTOR_DISTANCE(\n",
+ " embedding,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),\n",
+ " COSINE\n",
+ " ), 4) AS similarity,\n",
+ " SUBSTR(content, 1, 120) AS preview\n",
+ "FROM {TABLE_NAME}\n",
+ "ORDER BY VECTOR_DISTANCE(\n",
+ " embedding,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),\n",
+ " COSINE\n",
+ ")\n",
+ "FETCH FIRST 3 ROWS ONLY\n",
+ "'''\n",
+ "\n",
+ "cur.execute(sql, {'query_text': query_text, 'params': db_params})\n",
+ "rows = cur.fetchall()\n",
+ "\n",
+ "print('Query:', query_text)\n",
+ "for idx, (title, score, preview) in enumerate(rows, 1):\n",
+ " print(f'{idx}. {title} | score={score}')\n",
+ " print(f' {preview}')\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional cleanup\n",
+ "# cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')\n",
+ "# conn.commit()\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "cur.close()\n",
+ "conn.close()\n",
+ "print('Connection closed.')\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python",
+ "version": "3.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
\ No newline at end of file
diff --git a/private-ai-services-container/lab2-db-model-embeddings/files/private-ai-notebooks.zip b/private-ai-services-container/lab2-db-model-embeddings/files/private-ai-notebooks.zip
new file mode 100644
index 000000000..c7c75a56d
Binary files /dev/null and b/private-ai-services-container/lab2-db-model-embeddings/files/private-ai-notebooks.zip differ
diff --git a/private-ai-services-container/lab2-db-model-embeddings/files/privateai-container-embeddings.ipynb b/private-ai-services-container/lab2-db-model-embeddings/files/privateai-container-embeddings.ipynb
new file mode 100644
index 000000000..48c344ad2
--- /dev/null
+++ b/private-ai-services-container/lab2-db-model-embeddings/files/privateai-container-embeddings.ipynb
@@ -0,0 +1,304 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Notebook A: Private AI Container Embeddings\n",
+ "\n",
+ "This notebook demonstrates vector search using Oracle Private AI Services Container (`privateai`) for embedding generation.\n",
+ "\n",
+ "Flow:\n",
+ "- call Private AI REST APIs (`/health`, `/v1/models`, `/v1/embeddings`)\n",
+ "- store vectors in Oracle AI Database\n",
+ "- run cosine similarity search\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import json\n",
+ "import requests\n",
+ "import oracledb\n",
+ "from dotenv import dotenv_values\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1) Load configuration from `/home/.env`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ENV_PATH = os.getenv('LAB_ENV_FILE', '/home/.env')\n",
+ "env = dotenv_values(ENV_PATH) if os.path.exists(ENV_PATH) else {}\n",
+ "\n",
+ "DB_USER = os.getenv('DB_USER') or env.get('USERNAME') or 'ADMIN'\n",
+ "DB_PASSWORD = os.getenv('DB_PASSWORD') or env.get('DBPASSWORD')\n",
+ "DB_HOST = os.getenv('DB_HOST', 'aidbfree')\n",
+ "DB_PORT = os.getenv('DB_PORT', '1521')\n",
+ "DB_SERVICE = os.getenv('DB_SERVICE', 'FREEPDB1')\n",
+ "DB_DSN = os.getenv('DB_DSN', f'{DB_HOST}:{DB_PORT}/{DB_SERVICE}')\n",
+ "\n",
+ "PRIVATEAI_BASE_URL = os.getenv('PRIVATEAI_BASE_URL', 'http://privateai:8080').rstrip('/')\n",
+ "PREFERRED_MODEL = os.getenv('PRIVATEAI_MODEL', 'all-minilm-l12-v2')\n",
+ "\n",
+ "print('ENV file:', ENV_PATH if os.path.exists(ENV_PATH) else 'not found')\n",
+ "print('DB user:', DB_USER)\n",
+ "print('DB dsn :', DB_DSN)\n",
+ "print('Private AI URL:', PRIVATEAI_BASE_URL)\n",
+ "print('Preferred model:', PREFERRED_MODEL)\n",
+ "\n",
+ "if not DB_PASSWORD:\n",
+ " raise ValueError('DB password not found. Set DB_PASSWORD or DBPASSWORD in /home/.env')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2) Validate Private AI and choose a text-embedding model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "health = requests.get(f'{PRIVATEAI_BASE_URL}/health', timeout=20)\n",
+ "print('Health status:', health.status_code)\n",
+ "health.raise_for_status()\n",
+ "\n",
+ "models_resp = requests.get(f'{PRIVATEAI_BASE_URL}/v1/models', timeout=20)\n",
+ "models_resp.raise_for_status()\n",
+ "models = models_resp.json().get('data', [])\n",
+ "\n",
+ "if not models:\n",
+ " raise RuntimeError('No models returned by /v1/models')\n",
+ "\n",
+ "def norm(s):\n",
+ " return (s or '').strip().lower()\n",
+ "\n",
+ "text_models = []\n",
+ "for model in models:\n",
+ " mid = model.get('id')\n",
+ " caps = [c.upper() for c in model.get('modelCapabilities', [])]\n",
+ " print(' -', mid, '|', ','.join(caps))\n",
+ " if mid and any(c in ('TEXT_EMBEDDINGS', 'EMBEDDINGS') for c in caps):\n",
+ " text_models.append(mid)\n",
+ "\n",
+ "if not text_models:\n",
+ " raise RuntimeError('No TEXT_EMBEDDINGS-capable models found')\n",
+ "\n",
+ "MODEL_ID = next((m for m in text_models if norm(m) == norm(PREFERRED_MODEL)), None)\n",
+ "if MODEL_ID is None:\n",
+ " MODEL_ID = next((m for m in text_models if norm(m) == 'all-minilm-l12-v2'), text_models[0])\n",
+ "\n",
+ "print()\n",
+ "print('Selected model:', MODEL_ID)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3) Direct `/v1/embeddings` test"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "payload = {\n",
+ " 'model': MODEL_ID,\n",
+ " 'input': [\n",
+ " 'Oracle AI Database supports vector similarity search.',\n",
+ " 'Private AI Services Container runs embedding models close to your data.'\n",
+ " ],\n",
+ "}\n",
+ "\n",
+ "resp = requests.post(f'{PRIVATEAI_BASE_URL}/v1/embeddings', json=payload, timeout=60)\n",
+ "if not resp.ok:\n",
+ " print('Status :', resp.status_code)\n",
+ " print('Body :', resp.text[:1500])\n",
+ "resp.raise_for_status()\n",
+ "\n",
+ "embed_json = resp.json()\n",
+ "first_vec = embed_json['data'][0]['embedding']\n",
+ "EMBEDDING_DIM = len(first_vec)\n",
+ "\n",
+ "print('Returned vectors:', len(embed_json.get('data', [])))\n",
+ "print('Embedding dimension:', EMBEDDING_DIM)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4) Connect to Oracle Database"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "conn = oracledb.connect(user=DB_USER, password=DB_PASSWORD, dsn=DB_DSN)\n",
+ "cur = conn.cursor()\n",
+ "\n",
+ "cur.execute('select user from dual')\n",
+ "print('Connected as:', cur.fetchone()[0])\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5) Create table and store Private AI embeddings"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "TABLE_NAME = 'PRIVATEAI_DOCS_CONTAINER'\n",
+ "\n",
+ "try:\n",
+ " cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')\n",
+ "except oracledb.DatabaseError:\n",
+ " pass\n",
+ "\n",
+ "cur.execute(f'''\n",
+ " CREATE TABLE {TABLE_NAME} (\n",
+ " doc_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n",
+ " title VARCHAR2(200) NOT NULL,\n",
+ " content CLOB NOT NULL,\n",
+ " embedding VECTOR({EMBEDDING_DIM}, FLOAT32)\n",
+ " )\n",
+ "''')\n",
+ "\n",
+ "docs = [\n",
+ " ('Why Private AI Container', 'Private AI keeps inference local and exposes embedding APIs by container endpoint.'),\n",
+ " ('Why Vector Search', 'Vector search compares semantic meaning and not only exact keyword matches.'),\n",
+ " ('JupyterLab Delivery', 'Notebooks are a practical way to prototype retrieval and embedding pipelines quickly.'),\n",
+ " ('RAG Workflow', 'Chunk, embed, store, and retrieve relevant context at question time.'),\n",
+ "]\n",
+ "\n",
+ "embed_params = json.dumps({\n",
+ " 'provider': 'privateai',\n",
+ " 'url': f'{PRIVATEAI_BASE_URL}/v1/embeddings',\n",
+ " 'host': 'local',\n",
+ " 'model': MODEL_ID,\n",
+ "})\n",
+ "\n",
+ "inserted = 0\n",
+ "for title, content in docs:\n",
+ " cur.execute(f'''\n",
+ " INSERT INTO {TABLE_NAME} (title, content, embedding)\n",
+ " VALUES (\n",
+ " :title,\n",
+ " :content,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:content, JSON(:params))\n",
+ " )\n",
+ " ''', {'title': title, 'content': content, 'params': embed_params})\n",
+ " inserted += 1\n",
+ "\n",
+ "conn.commit()\n",
+ "print('Inserted rows:', inserted)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 6) Similarity search"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "query_text = 'How can I run embeddings locally and use them for semantic search?'\n",
+ "\n",
+ "sql = f'''\n",
+ "SELECT\n",
+ " title,\n",
+ " ROUND(1 - VECTOR_DISTANCE(\n",
+ " embedding,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),\n",
+ " COSINE\n",
+ " ), 4) AS similarity,\n",
+ " SUBSTR(content, 1, 120) AS preview\n",
+ "FROM {TABLE_NAME}\n",
+ "ORDER BY VECTOR_DISTANCE(\n",
+ " embedding,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),\n",
+ " COSINE\n",
+ ")\n",
+ "FETCH FIRST 3 ROWS ONLY\n",
+ "'''\n",
+ "\n",
+ "cur.execute(sql, {'query_text': query_text, 'params': embed_params})\n",
+ "rows = cur.fetchall()\n",
+ "\n",
+ "print('Query:', query_text)\n",
+ "for idx, (title, score, preview) in enumerate(rows, 1):\n",
+ " print(f'{idx}. {title} | score={score}')\n",
+ " print(f' {preview}')\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional cleanup\n",
+ "# cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')\n",
+ "# conn.commit()\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "cur.close()\n",
+ "conn.close()\n",
+ "print('Connection closed.')\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python",
+ "version": "3.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
\ No newline at end of file
diff --git a/private-ai-services-container/lab2-db-model-embeddings/lab2-db-model-embeddings.md b/private-ai-services-container/lab2-db-model-embeddings/lab2-db-model-embeddings.md
new file mode 100644
index 000000000..3144c3cc8
--- /dev/null
+++ b/private-ai-services-container/lab2-db-model-embeddings/lab2-db-model-embeddings.md
@@ -0,0 +1,233 @@
+# Lab 2: Vector Search with ONNX Model Stored in Oracle Database
+
+## Introduction
+
+In this lab you run vector search using an embedding model that is already stored in Oracle AI Database (for example `ALL_MINILM_L12_V2`).
+
+
+Estimated Time: 20 minutes
+
+### Objectives
+
+In this lab, you will:
+- Discover available embedding models in `USER_MINING_MODELS`
+- Generate embeddings with `DBMS_VECTOR.UTL_TO_EMBEDDING` using `provider=database`
+- Store vectors in a `VECTOR` column
+- Run cosine similarity search in SQL
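+
+The similarity score reported later is cosine similarity; `VECTOR_DISTANCE` with the `COSINE` metric returns cosine distance, which is why the SQL converts it with `1 - distance`. A minimal sketch of the underlying calculation:
+
+```python
+# cosine similarity of two vectors; VECTOR_DISTANCE(..., COSINE) returns
+# the cosine distance, i.e. 1 minus this value
+def cosine_similarity(a, b):
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = sum(x * x for x in a) ** 0.5
+    norm_b = sum(y * y for y in b) ** 0.5
+    return dot / (norm_a * norm_b)
+
+print(round(cosine_similarity([1.0, 0.0], [1.0, 1.0]), 4))  # -> 0.7071
+```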
+
+### Prerequisites
+
+This lab assumes:
+- You completed Lab 1
+
+## Task 1: Download the Jupyter Notebooks
+
+1. Open a new Terminal in JupyterLab and execute the following:
+
+    ```bash
+    mkdir -p "$HOME/notebooks"
+    curl -fsSL -o /tmp/private-ai-notebooks.zip "https://c4u04.objectstorage.us-ashburn-1.oci.customer-oci.com/p/EcTjWk2IuZPZeNnD_fYMcgUhdNDIDA6rt9gaFj_WZMiL7VvxPBNMY60837hu5hga/n/c4u04/b/livelabsfiles/o/ai-ml-library/private-ai-notebooks.zip"
+    unzip -o /tmp/private-ai-notebooks.zip -d "$HOME/notebooks"
+    rm -f /tmp/private-ai-notebooks.zip
+    ```
+
+    >Note: The commands above download a zip file and extract its contents into a new `notebooks` folder in your home directory.
+
+## Task 2: Open the Notebook
+
+1. From the sidebar, double-click the `notebooks` folder and then open the notebook (if you do not see the folder, click the refresh icon):
+
+ ```
+ database-model-embeddings.ipynb
+ ```
+
+    The following tasks and instructions are also available in the notebook, so from here on you can continue working in the Jupyter notebook.
+
+## Task 3: Import Libraries and Load Configuration
+
+Run this cell:
+
+```python
+import os
+import json
+import re
+import oracledb
+from dotenv import dotenv_values
+```
+
+Next, load the database configuration:
+
+```python
+ENV_PATH = os.getenv('LAB_ENV_FILE', '/home/.env')
+env = dotenv_values(ENV_PATH) if os.path.exists(ENV_PATH) else {}
+
+DB_USER = os.getenv('DB_USER') or env.get('USERNAME') or 'ADMIN'
+DB_PASSWORD = os.getenv('DB_PASSWORD') or env.get('DBPASSWORD')
+DB_HOST = os.getenv('DB_HOST', 'aidbfree')
+DB_PORT = os.getenv('DB_PORT', '1521')
+DB_SERVICE = os.getenv('DB_SERVICE', 'FREEPDB1')
+DB_DSN = os.getenv('DB_DSN', f'{DB_HOST}:{DB_PORT}/{DB_SERVICE}')
+PREFERRED_DB_MODEL = os.getenv('DB_EMBED_MODEL', 'ALL_MINILM_L12_V2').upper()
+
+print('ENV file:', ENV_PATH if os.path.exists(ENV_PATH) else 'not found')
+print('DB user:', DB_USER)
+print('DB dsn :', DB_DSN)
+print('Preferred DB model:', PREFERRED_DB_MODEL)
+
+if not DB_PASSWORD:
+ raise ValueError('DB password not found. Set DB_PASSWORD or DBPASSWORD in /home/.env')
+```
+
+## Task 4: Connect and Discover Stored Models
+
+Run this cell:
+
+```python
+conn = oracledb.connect(user=DB_USER, password=DB_PASSWORD, dsn=DB_DSN)
+cur = conn.cursor()
+
+cur.execute('select user from dual')
+print('Connected as:', cur.fetchone()[0])
+
+cur.execute('''
+ SELECT model_name
+ FROM user_mining_models
+ ORDER BY model_name
+''')
+models = [row[0] for row in cur.fetchall()]
+print('Models in USER_MINING_MODELS:', models)
+
+if not models:
+ raise RuntimeError('No models found in USER_MINING_MODELS. Provision an ONNX model first.')
+
+MODEL_NAME = PREFERRED_DB_MODEL if PREFERRED_DB_MODEL in models else models[0]
+print('Selected DB model:', MODEL_NAME)
+
+if not re.match(r'^[A-Z][A-Z0-9_$#]*$', MODEL_NAME):
+ raise ValueError(f'Unsafe model identifier: {MODEL_NAME}')
+```
+
+## Task 5: Determine Embedding Dimension
+
+Run this cell:
+
+```python
+db_params = json.dumps({
+ 'provider': 'database',
+ 'model': MODEL_NAME,
+})
+
+cur.execute('''
+ SELECT VECTOR_DIMENSION_COUNT(
+ DBMS_VECTOR.UTL_TO_EMBEDDING(:txt, JSON(:params))
+ )
+ FROM dual
+''', {'txt': 'dimension probe', 'params': db_params})
+
+EMBEDDING_DIM = int(cur.fetchone()[0])
+print('Embedding dimension:', EMBEDDING_DIM)
+```
+
+## Task 6: Create Table and Store Embeddings
+
+Run this cell:
+
+```python
+TABLE_NAME = 'PRIVATEAI_DOCS_DBMODEL'
+
+try:
+ cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')
+except oracledb.DatabaseError:
+ pass
+
+cur.execute(f'''
+ CREATE TABLE {TABLE_NAME} (
+ doc_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+ title VARCHAR2(200) NOT NULL,
+ content CLOB NOT NULL,
+ embedding VECTOR({EMBEDDING_DIM}, FLOAT32)
+ )
+''')
+
+docs = [
+ ('Database Model Path', 'Embeddings can be generated directly in Oracle AI Database with a stored ONNX model.'),
+ ('Operational Simplicity', 'Keeping model inference in-database can simplify architecture and governance.'),
+ ('Vector Search SQL', 'Use VECTOR_DISTANCE with COSINE to rank semantic similarity results.'),
+ ('Model Governance', 'Database-stored models can be versioned and controlled with DB privileges.'),
+]
+
+inserted = 0
+for title, content in docs:
+ cur.execute(f'''
+ INSERT INTO {TABLE_NAME} (title, content, embedding)
+ VALUES (
+ :title,
+ :content,
+ DBMS_VECTOR.UTL_TO_EMBEDDING(:content, JSON(:params))
+ )
+ ''', {'title': title, 'content': content, 'params': db_params})
+ inserted += 1
+
+conn.commit()
+print('Inserted rows:', inserted)
+```
+
+## Task 7: Run Similarity Search
+
+Run this cell:
+
+```python
+query_text = 'Which approach keeps embedding generation inside Oracle Database?'
+
+sql = f'''
+SELECT
+ title,
+ ROUND(1 - VECTOR_DISTANCE(
+ embedding,
+ DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),
+ COSINE
+ ), 4) AS similarity,
+ SUBSTR(content, 1, 120) AS preview
+FROM {TABLE_NAME}
+ORDER BY VECTOR_DISTANCE(
+ embedding,
+ DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),
+ COSINE
+)
+FETCH FIRST 3 ROWS ONLY
+'''
+
+cur.execute(sql, {'query_text': query_text, 'params': db_params})
+rows = cur.fetchall()
+
+print('Query:', query_text)
+for idx, (title, score, preview) in enumerate(rows, 1):
+ print(f'{idx}. {title} | score={score}')
+ print(f' {preview}')
+```
+
+## Task 8: Optional Cleanup
+
+Run optional cleanup:
+
+```python
+# cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')
+# conn.commit()
+```
+
+Close the connection:
+
+```python
+cur.close()
+conn.close()
+print('Connection closed.')
+```
+
+## Learn More
+
+- [DBMS_VECTOR UTL_TO_EMBEDDING](https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/utl_to_embedding-and-utl_to_embeddings-dbms_vector.html)
+
+## Acknowledgements
+- **Author** - Oracle LiveLabs Team
+- **Last Updated By/Date** - Oracle LiveLabs Team, March 2026
diff --git a/private-ai-services-container/lab3-privateai-container-embeddings/files/privateai-container-embeddings.ipynb b/private-ai-services-container/lab3-privateai-container-embeddings/files/privateai-container-embeddings.ipynb
new file mode 100644
index 000000000..48c344ad2
--- /dev/null
+++ b/private-ai-services-container/lab3-privateai-container-embeddings/files/privateai-container-embeddings.ipynb
@@ -0,0 +1,304 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Notebook A: Private AI Container Embeddings\n",
+ "\n",
+ "This notebook demonstrates vector search using Oracle Private AI Services Container (`privateai`) for embedding generation.\n",
+ "\n",
+ "Flow:\n",
+ "- call Private AI REST APIs (`/health`, `/v1/models`, `/v1/embeddings`)\n",
+ "- store vectors in Oracle AI Database\n",
+ "- run cosine similarity search\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import json\n",
+ "import requests\n",
+ "import oracledb\n",
+ "from dotenv import dotenv_values\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1) Load configuration from `/home/.env`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ENV_PATH = os.getenv('LAB_ENV_FILE', '/home/.env')\n",
+ "env = dotenv_values(ENV_PATH) if os.path.exists(ENV_PATH) else {}\n",
+ "\n",
+ "DB_USER = os.getenv('DB_USER') or env.get('USERNAME') or 'ADMIN'\n",
+ "DB_PASSWORD = os.getenv('DB_PASSWORD') or env.get('DBPASSWORD')\n",
+ "DB_HOST = os.getenv('DB_HOST', 'aidbfree')\n",
+ "DB_PORT = os.getenv('DB_PORT', '1521')\n",
+ "DB_SERVICE = os.getenv('DB_SERVICE', 'FREEPDB1')\n",
+ "DB_DSN = os.getenv('DB_DSN', f'{DB_HOST}:{DB_PORT}/{DB_SERVICE}')\n",
+ "\n",
+ "PRIVATEAI_BASE_URL = os.getenv('PRIVATEAI_BASE_URL', 'http://privateai:8080').rstrip('/')\n",
+ "PREFERRED_MODEL = os.getenv('PRIVATEAI_MODEL', 'all-minilm-l12-v2')\n",
+ "\n",
+ "print('ENV file:', ENV_PATH if os.path.exists(ENV_PATH) else 'not found')\n",
+ "print('DB user:', DB_USER)\n",
+ "print('DB dsn :', DB_DSN)\n",
+ "print('Private AI URL:', PRIVATEAI_BASE_URL)\n",
+ "print('Preferred model:', PREFERRED_MODEL)\n",
+ "\n",
+ "if not DB_PASSWORD:\n",
+ " raise ValueError('DB password not found. Set DB_PASSWORD or DBPASSWORD in /home/.env')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2) Validate Private AI and choose a text-embedding model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "health = requests.get(f'{PRIVATEAI_BASE_URL}/health', timeout=20)\n",
+ "print('Health status:', health.status_code)\n",
+ "health.raise_for_status()\n",
+ "\n",
+ "models_resp = requests.get(f'{PRIVATEAI_BASE_URL}/v1/models', timeout=20)\n",
+ "models_resp.raise_for_status()\n",
+ "models = models_resp.json().get('data', [])\n",
+ "\n",
+ "if not models:\n",
+ " raise RuntimeError('No models returned by /v1/models')\n",
+ "\n",
+ "def norm(s):\n",
+ " return (s or '').strip().lower()\n",
+ "\n",
+ "text_models = []\n",
+ "for model in models:\n",
+ " mid = model.get('id')\n",
+ " caps = [c.upper() for c in model.get('modelCapabilities', [])]\n",
+ " print(' -', mid, '|', ','.join(caps))\n",
+ " if mid and any(c in ('TEXT_EMBEDDINGS', 'EMBEDDINGS') for c in caps):\n",
+ " text_models.append(mid)\n",
+ "\n",
+ "if not text_models:\n",
+ " raise RuntimeError('No TEXT_EMBEDDINGS-capable models found')\n",
+ "\n",
+ "MODEL_ID = next((m for m in text_models if norm(m) == norm(PREFERRED_MODEL)), None)\n",
+ "if MODEL_ID is None:\n",
+ " MODEL_ID = next((m for m in text_models if norm(m) == 'all-minilm-l12-v2'), text_models[0])\n",
+ "\n",
+ "print()\n",
+ "print('Selected model:', MODEL_ID)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3) Direct `/v1/embeddings` test"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "payload = {\n",
+ " 'model': MODEL_ID,\n",
+ " 'input': [\n",
+ " 'Oracle AI Database supports vector similarity search.',\n",
+ " 'Private AI Services Container runs embedding models close to your data.'\n",
+ " ],\n",
+ "}\n",
+ "\n",
+ "resp = requests.post(f'{PRIVATEAI_BASE_URL}/v1/embeddings', json=payload, timeout=60)\n",
+ "if not resp.ok:\n",
+ " print('Status :', resp.status_code)\n",
+ " print('Body :', resp.text[:1500])\n",
+ "resp.raise_for_status()\n",
+ "\n",
+ "embed_json = resp.json()\n",
+ "first_vec = embed_json['data'][0]['embedding']\n",
+ "EMBEDDING_DIM = len(first_vec)\n",
+ "\n",
+ "print('Returned vectors:', len(embed_json.get('data', [])))\n",
+ "print('Embedding dimension:', EMBEDDING_DIM)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4) Connect to Oracle Database"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "conn = oracledb.connect(user=DB_USER, password=DB_PASSWORD, dsn=DB_DSN)\n",
+ "cur = conn.cursor()\n",
+ "\n",
+ "cur.execute('select user from dual')\n",
+ "print('Connected as:', cur.fetchone()[0])\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 5) Create table and store Private AI embeddings"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "TABLE_NAME = 'PRIVATEAI_DOCS_CONTAINER'\n",
+ "\n",
+ "try:\n",
+ " cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')\n",
+ "except oracledb.DatabaseError:\n",
+ " pass\n",
+ "\n",
+ "cur.execute(f'''\n",
+ " CREATE TABLE {TABLE_NAME} (\n",
+ " doc_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,\n",
+ " title VARCHAR2(200) NOT NULL,\n",
+ " content CLOB NOT NULL,\n",
+ " embedding VECTOR({EMBEDDING_DIM}, FLOAT32)\n",
+ " )\n",
+ "''')\n",
+ "\n",
+ "docs = [\n",
+ " ('Why Private AI Container', 'Private AI keeps inference local and exposes embedding APIs by container endpoint.'),\n",
+ " ('Why Vector Search', 'Vector search compares semantic meaning and not only exact keyword matches.'),\n",
+ " ('JupyterLab Delivery', 'Notebooks are a practical way to prototype retrieval and embedding pipelines quickly.'),\n",
+ " ('RAG Workflow', 'Chunk, embed, store, and retrieve relevant context at question time.'),\n",
+ "]\n",
+ "\n",
+ "embed_params = json.dumps({\n",
+ " 'provider': 'privateai',\n",
+ " 'url': f'{PRIVATEAI_BASE_URL}/v1/embeddings',\n",
+ " 'host': 'local',\n",
+ " 'model': MODEL_ID,\n",
+ "})\n",
+ "\n",
+ "inserted = 0\n",
+ "for title, content in docs:\n",
+ " cur.execute(f'''\n",
+ " INSERT INTO {TABLE_NAME} (title, content, embedding)\n",
+ " VALUES (\n",
+ " :title,\n",
+ " :content,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:content, JSON(:params))\n",
+ " )\n",
+ " ''', {'title': title, 'content': content, 'params': embed_params})\n",
+ " inserted += 1\n",
+ "\n",
+ "conn.commit()\n",
+ "print('Inserted rows:', inserted)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 6) Similarity search"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "query_text = 'How can I run embeddings locally and use them for semantic search?'\n",
+ "\n",
+ "sql = f'''\n",
+ "SELECT\n",
+ " title,\n",
+ " ROUND(1 - VECTOR_DISTANCE(\n",
+ " embedding,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),\n",
+ " COSINE\n",
+ " ), 4) AS similarity,\n",
+ " SUBSTR(content, 1, 120) AS preview\n",
+ "FROM {TABLE_NAME}\n",
+ "ORDER BY VECTOR_DISTANCE(\n",
+ " embedding,\n",
+ " DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),\n",
+ " COSINE\n",
+ ")\n",
+ "FETCH FIRST 3 ROWS ONLY\n",
+ "'''\n",
+ "\n",
+ "cur.execute(sql, {'query_text': query_text, 'params': embed_params})\n",
+ "rows = cur.fetchall()\n",
+ "\n",
+ "print('Query:', query_text)\n",
+ "for idx, (title, score, preview) in enumerate(rows, 1):\n",
+ " print(f'{idx}. {title} | score={score}')\n",
+ " print(f' {preview}')\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional cleanup\n",
+ "# cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')\n",
+ "# conn.commit()\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "cur.close()\n",
+ "conn.close()\n",
+ "print('Connection closed.')\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python",
+ "version": "3.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
\ No newline at end of file
diff --git a/private-ai-services-container/lab3-privateai-container-embeddings/lab3-privateai-container-embeddings.md b/private-ai-services-container/lab3-privateai-container-embeddings/lab3-privateai-container-embeddings.md
new file mode 100644
index 000000000..57f0f4360
--- /dev/null
+++ b/private-ai-services-container/lab3-privateai-container-embeddings/lab3-privateai-container-embeddings.md
@@ -0,0 +1,282 @@
+# Lab 3: Vector Search with Oracle Private AI Services Container
+
+## Introduction
+
+In this lab you run vector search using Oracle Private AI Services Container (`privateai`) for embedding generation.
+
+This lab mirrors the notebook:
+
+```text
+lab3-privateai-container-embeddings/files/privateai-container-embeddings.ipynb
+```
+
+Estimated Time: 25 minutes
+
+### Objectives
+
+In this lab, you will:
+- Validate Private AI APIs (`/health`, `/v1/models`, `/v1/embeddings`)
+- Select a text-embedding model dynamically
+- Store vectors in Oracle AI Database using `provider=privateai`
+- Run cosine similarity search in SQL
+
+### Prerequisites
+
+This lab assumes:
+- You completed Lab 1
+- JupyterLab is available
+- `/home/.env` contains your DB credentials
+- Private AI container is reachable as `http://privateai:8080`
+
+## Task 1: Download the Jupyter Notebooks
+
+1. Open a new Terminal in JupyterLab and execute the following command. If you have already downloaded the notebooks, you can skip this step.
+
+ ```bash
+
+ /bin/bash -c 'set -euo pipefail; mkdir -p "$HOME/notebooks"; curl -fsSL -o /tmp/private-ai-notebooks.zip "https://c4u04.objectstorage.us-ashburn-1.oci.customer-oci.com/p/EcTjWk2IuZPZeNnD_fYMcgUhdNDIDA6rt9gaFj_WZMiL7VvxPBNMY60837hu5hga/n/c4u04/b/livelabsfiles/o/ai-ml-library/private-ai-notebooks.zip"; unzip -o /tmp/private-ai-notebooks.zip -d "$HOME/notebooks"; rm -f /tmp/private-ai-notebooks.zip'
+
+ ```
+
+    >Note: The command downloads a zip file and extracts its contents into a new `notebooks` folder.
+
+## Task 2: Open the Notebook
+
+1. From the sidebar, double-click the `notebooks` folder, then open the notebook (if you do not see the folder, click refresh):
+
+ ```
+ privateai-container-embeddings.ipynb
+ ```
+
+    The following tasks and instructions are also available in the notebook, so you can continue working directly in the Jupyter notebook from here on.
+
+
+## Task 3: Import Python Libraries
+
+Run this cell:
+
+```python
+import os
+import json
+import requests
+import oracledb
+from dotenv import dotenv_values
+```
+
+## Task 4: Load Configuration
+
+Run this cell:
+
+```python
+ENV_PATH = os.getenv('LAB_ENV_FILE', '/home/.env')
+env = dotenv_values(ENV_PATH) if os.path.exists(ENV_PATH) else {}
+
+DB_USER = os.getenv('DB_USER') or env.get('USERNAME') or 'ADMIN'
+DB_PASSWORD = os.getenv('DB_PASSWORD') or env.get('DBPASSWORD')
+DB_HOST = os.getenv('DB_HOST', 'aidbfree')
+DB_PORT = os.getenv('DB_PORT', '1521')
+DB_SERVICE = os.getenv('DB_SERVICE', 'FREEPDB1')
+DB_DSN = os.getenv('DB_DSN', f'{DB_HOST}:{DB_PORT}/{DB_SERVICE}')
+
+PRIVATEAI_BASE_URL = os.getenv('PRIVATEAI_BASE_URL', 'http://privateai:8080').rstrip('/')
+PREFERRED_MODEL = os.getenv('PRIVATEAI_MODEL', 'all-minilm-l12-v2')
+
+print('ENV file:', ENV_PATH if os.path.exists(ENV_PATH) else 'not found')
+print('DB user:', DB_USER)
+print('DB dsn :', DB_DSN)
+print('Private AI URL:', PRIVATEAI_BASE_URL)
+print('Preferred model:', PREFERRED_MODEL)
+
+if not DB_PASSWORD:
+ raise ValueError('DB password not found. Set DB_PASSWORD or DBPASSWORD in /home/.env')
+```
+
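The configuration cell resolves each setting with a fallback chain: process environment variable first, then the parsed `.env` file, then a hard-coded default. A toy illustration of that precedence (the values below are hypothetical, not from the lab environment):

```python
import os

# Stand-in for dotenv_values(ENV_PATH): pretend the .env file defines USERNAME.
env = {'USERNAME': 'LAB_USER'}

# Make sure no process-level override exists, so the .env value wins.
os.environ.pop('DB_USER', None)

db_user = os.getenv('DB_USER') or env.get('USERNAME') or 'ADMIN'
print(db_user)  # LAB_USER
```

If `DB_USER` were set in the process environment it would take priority, and `'ADMIN'` is only used when both sources are empty.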
+## Task 5: Validate Private AI and Choose Model
+
+Run this cell:
+
+```python
+health = requests.get(f'{PRIVATEAI_BASE_URL}/health', timeout=20)
+print('Health status:', health.status_code)
+health.raise_for_status()
+
+models_resp = requests.get(f'{PRIVATEAI_BASE_URL}/v1/models', timeout=20)
+models_resp.raise_for_status()
+models = models_resp.json().get('data', [])
+
+if not models:
+ raise RuntimeError('No models returned by /v1/models')
+
+def norm(s):
+ return (s or '').strip().lower()
+
+text_models = []
+for model in models:
+ mid = model.get('id')
+ caps = [c.upper() for c in model.get('modelCapabilities', [])]
+ print(' -', mid, '|', ','.join(caps))
+ if mid and any(c in ('TEXT_EMBEDDINGS', 'EMBEDDINGS') for c in caps):
+ text_models.append(mid)
+
+if not text_models:
+ raise RuntimeError('No TEXT_EMBEDDINGS-capable models found')
+
+MODEL_ID = next((m for m in text_models if norm(m) == norm(PREFERRED_MODEL)), None)
+if MODEL_ID is None:
+ MODEL_ID = next((m for m in text_models if norm(m) == 'all-minilm-l12-v2'), text_models[0])
+
+print()
+print('Selected model:', MODEL_ID)
+```
+
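For reference, here is a minimal sketch of the `/v1/models` response shape the selection logic above consumes. The model ids and capability names below are illustrative placeholders, not live output from your container:

```python
# Hypothetical /v1/models 'data' entries, shaped the way the cell above
# expects: each model carries an 'id' and a 'modelCapabilities' list.
models = [
    {'id': 'all-minilm-l12-v2', 'modelCapabilities': ['TEXT_EMBEDDINGS']},
    {'id': 'clip-vit-base-patch32-img', 'modelCapabilities': ['IMAGE_EMBEDDINGS']},
]

# Same filter as the notebook: keep only text-embedding-capable models.
text_models = [
    m['id']
    for m in models
    if any(c.upper() in ('TEXT_EMBEDDINGS', 'EMBEDDINGS')
           for c in m.get('modelCapabilities', []))
]
print(text_models)  # ['all-minilm-l12-v2']
```

The image-capable CLIP model is filtered out here; Lab 4 uses those models through a separate code path.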
+## Task 6: Call `/v1/embeddings` Directly
+
+Run this cell:
+
+```python
+payload = {
+ 'model': MODEL_ID,
+ 'input': [
+ 'Oracle AI Database supports vector similarity search.',
+ 'Private AI Services Container runs embedding models close to your data.'
+ ],
+}
+
+resp = requests.post(f'{PRIVATEAI_BASE_URL}/v1/embeddings', json=payload, timeout=60)
+if not resp.ok:
+ print('Status :', resp.status_code)
+ print('Body :', resp.text[:1500])
+resp.raise_for_status()
+
+embed_json = resp.json()
+first_vec = embed_json['data'][0]['embedding']
+EMBEDDING_DIM = len(first_vec)
+
+print('Returned vectors:', len(embed_json.get('data', [])))
+print('Embedding dimension:', EMBEDDING_DIM)
+```
+
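And a minimal sketch of the `/v1/embeddings` response shape the cell parses. The numbers are made up and truncated; a real model returns vectors of its full dimension (for example, 384 for `all-minilm-l12-v2`):

```python
# Hypothetical response body: one entry in 'data' per input string,
# each carrying its embedding vector.
embed_json = {
    'data': [
        {'index': 0, 'embedding': [0.12, -0.05, 0.33]},
        {'index': 1, 'embedding': [0.01, 0.27, -0.18]},
    ]
}

first_vec = embed_json['data'][0]['embedding']
print(len(embed_json['data']), len(first_vec))  # 2 3
```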
+## Task 7: Connect to Oracle Database
+
+Run this cell:
+
+```python
+conn = oracledb.connect(user=DB_USER, password=DB_PASSWORD, dsn=DB_DSN)
+cur = conn.cursor()
+
+cur.execute('select user from dual')
+print('Connected as:', cur.fetchone()[0])
+```
+
+## Task 8: Create Table and Store Private AI Embeddings
+
+Run this cell:
+
+```python
+TABLE_NAME = 'PRIVATEAI_DOCS_CONTAINER'
+
+try:
+ cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')
+except oracledb.DatabaseError:
+ pass
+
+cur.execute(f'''
+ CREATE TABLE {TABLE_NAME} (
+ doc_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+ title VARCHAR2(200) NOT NULL,
+ content CLOB NOT NULL,
+ embedding VECTOR({EMBEDDING_DIM}, FLOAT32)
+ )
+''')
+
+docs = [
+ ('Why Private AI Container', 'Private AI keeps inference local and exposes embedding APIs by container endpoint.'),
+ ('Why Vector Search', 'Vector search compares semantic meaning and not only exact keyword matches.'),
+ ('JupyterLab Delivery', 'Notebooks are a practical way to prototype retrieval and embedding pipelines quickly.'),
+ ('RAG Workflow', 'Chunk, embed, store, and retrieve relevant context at question time.'),
+]
+
+embed_params = json.dumps({
+ 'provider': 'privateai',
+ 'url': f'{PRIVATEAI_BASE_URL}/v1/embeddings',
+ 'host': 'local',
+ 'model': MODEL_ID,
+})
+
+inserted = 0
+for title, content in docs:
+ cur.execute(f'''
+ INSERT INTO {TABLE_NAME} (title, content, embedding)
+ VALUES (
+ :title,
+ :content,
+ DBMS_VECTOR.UTL_TO_EMBEDDING(:content, JSON(:params))
+ )
+ ''', {'title': title, 'content': content, 'params': embed_params})
+ inserted += 1
+
+conn.commit()
+print('Inserted rows:', inserted)
+```
+
+## Task 9: Run Similarity Search
+
+Run this cell:
+
+```python
+query_text = 'How can I run embeddings locally and use them for semantic search?'
+
+sql = f'''
+SELECT
+ title,
+ ROUND(1 - VECTOR_DISTANCE(
+ embedding,
+ DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),
+ COSINE
+ ), 4) AS similarity,
+ SUBSTR(content, 1, 120) AS preview
+FROM {TABLE_NAME}
+ORDER BY VECTOR_DISTANCE(
+ embedding,
+ DBMS_VECTOR.UTL_TO_EMBEDDING(:query_text, JSON(:params)),
+ COSINE
+)
+FETCH FIRST 3 ROWS ONLY
+'''
+
+cur.execute(sql, {'query_text': query_text, 'params': embed_params})
+rows = cur.fetchall()
+
+print('Query:', query_text)
+for idx, (title, score, preview) in enumerate(rows, 1):
+ print(f'{idx}. {title} | score={score}')
+ print(f' {preview}')
+```
+
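The SQL above turns cosine distance into a similarity score with `1 - VECTOR_DISTANCE(..., COSINE)`. A small self-contained sketch of that ranking logic, using made-up vectors instead of database calls:

```python
import math

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity, as VECTOR_DISTANCE(..., COSINE).
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.hypot(*a) * math.hypot(*b))

docs = {
    'doc_a': [0.9, 0.1, 0.0],   # points nearly the same way as the query
    'doc_b': [0.1, 0.9, 0.1],   # points a different way
}
query = [1.0, 0.0, 0.0]

# ORDER BY distance ascending == most similar first.
ranked = sorted(docs, key=lambda name: cosine_distance(docs[name], query))
print(ranked[0])  # doc_a
```

Identical vectors have distance 0 (similarity 1), which is why the query text ranks its closest documents first.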
+## Task 10: Optional Cleanup
+
+Run optional cleanup:
+
+```python
+# cur.execute(f'DROP TABLE {TABLE_NAME} PURGE')
+# conn.commit()
+```
+
+Close the connection:
+
+```python
+cur.close()
+conn.close()
+print('Connection closed.')
+```
+
+> **Troubleshooting:** If you see `ORA-20002` with `ORA-29273` or `ORA-24247`, the DB user is missing outbound network ACL permissions for HTTP calls to Private AI.
+
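A DBA can grant those permissions with `DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE`. The snippet below only prints a candidate PL/SQL block; the host, port, and principal are assumptions matching this workshop's setup, so adjust them before running the statement as a privileged user:

```python
# Sketch of the ACL grant a DBA could run in FREEPDB1. Host 'privateai',
# port 8080, and principal 'ADMIN' are this workshop's defaults (assumptions).
plsql = """
BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host       => 'privateai',
    lower_port => 8080,
    upper_port => 8080,
    ace        => xs$ace_type(
      privilege_list => xs$name_list('http'),
      principal_name => 'ADMIN',
      principal_type => xs_acl.ptype_db));
END;
/"""
print(plsql.strip())
```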
+## Learn More
+
+- [Oracle Private AI Services Container User Guide](https://docs.oracle.com/en/database/oracle/oracle-database/26/prvai/oracle-private-ai-services-container.html)
+- [Private AI Services Container API Reference](https://docs.oracle.com/en/database/oracle/oracle-database/26/prvai/private-ai-services-container-api-reference.html)
+
+## Acknowledgements
+- **Author** - Oracle LiveLabs Team
+- **Last Updated By/Date** - Oracle LiveLabs Team, March 2026
diff --git a/private-ai-services-container/lab4-flask-image-search/files/app.py b/private-ai-services-container/lab4-flask-image-search/files/app.py
new file mode 100644
index 000000000..8b1595b56
--- /dev/null
+++ b/private-ai-services-container/lab4-flask-image-search/files/app.py
@@ -0,0 +1,268 @@
+import base64
+import json
+import mimetypes
+import os
+from pathlib import Path
+
+import oracledb
+import requests
+from flask import Flask, flash, redirect, render_template, request, url_for
+
+APP = Flask(__name__)
+APP.secret_key = os.getenv("FLASK_SECRET", "privateai-image-search-lab")
+
+LAB_ROOT = Path(os.getenv("LAB_ROOT", str(Path.home() / "image-search-lab")))
+DEFAULT_IMAGE_ROOT = LAB_ROOT / "data" / "pdimagearchive"
+
+PRIVATEAI_BASE_URL = os.getenv("PRIVATEAI_BASE_URL", "http://privateai:8080").rstrip("/")
+PRIVATEAI_EMBED_URL = f"{PRIVATEAI_BASE_URL}/v1/embeddings"
+IMAGE_MODEL_ID = os.getenv("IMAGE_MODEL_ID", "clip-vit-base-patch32-img")
+TEXT_MODEL_ID = os.getenv("TEXT_MODEL_ID", "clip-vit-base-patch32-txt")
+
+VECTOR_DIM = 512
+TOP_K = 10
+TABLE_NAME = "IMAGE_LIBRARY"
+DB_USER = "ADMIN"
+DB_DSN = "aidbfree:1521/FREEPDB1"
+
+
+def _load_env():
+ db_password = (os.getenv("DBPASSWORD") or "").strip()
+ if not db_password:
+ raise RuntimeError("Database password not found in env variable DBPASSWORD.")
+
+ return DB_USER, DB_DSN, db_password, "DBPASSWORD env"
+
+
+def _get_connection():
+ user, dsn, password, _ = _load_env()
+ return oracledb.connect(user=user, password=password, dsn=dsn)
+
+
+def _ensure_table():
+ with _get_connection() as conn:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT COUNT(*)
+ FROM user_tables
+ WHERE table_name = :table_name
+ """,
+ table_name=TABLE_NAME,
+ )
+ exists = cur.fetchone()[0] == 1
+
+ if not exists:
+ cur.execute(
+ f"""
+ CREATE TABLE {TABLE_NAME} (
+ image_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+ filename VARCHAR2(500) UNIQUE NOT NULL,
+ category VARCHAR2(120),
+ mime_type VARCHAR2(120),
+ embedding VECTOR({VECTOR_DIM}, FLOAT32),
+ image_data BLOB,
+ loaded_at TIMESTAMP DEFAULT SYSTIMESTAMP
+ )
+ """
+ )
+ conn.commit()
+
+
+def _collect_images(root: Path):
+ if not root.exists():
+ return []
+ patterns = ("*.jpg", "*.jpeg", "*.png", "*.webp")
+ files = []
+ for pattern in patterns:
+ files.extend(root.rglob(pattern))
+ files.extend(root.rglob(pattern.upper()))
+ files = sorted({p.resolve() for p in files if p.is_file()})
+ return files
+
+
+def _embed_with_privateai(model: str, input_payload: str):
+ response = requests.post(
+ PRIVATEAI_EMBED_URL,
+ json={"model": model, "input": input_payload},
+ timeout=120,
+ )
+ if not response.ok:
+ raise RuntimeError(
+ f"Private AI embedding call failed ({response.status_code}): {response.text[:800]}"
+ )
+ body = response.json()
+ data = body.get("data", [])
+ if not data:
+ raise RuntimeError("Private AI embedding response had no vectors.")
+ return data[0]["embedding"]
+
+
+def _load_images_into_db(image_root: Path):
+ _ensure_table()
+ images = _collect_images(image_root)
+ if not images:
+ raise RuntimeError(
+ f"No images found under {image_root}. Download and unzip the image archive first."
+ )
+
+ inserted = 0
+ skipped = 0
+
+ with _get_connection() as conn:
+ with conn.cursor() as cur:
+ for image_path in images:
+ relative_name = str(image_path.relative_to(image_root))
+
+ cur.execute(
+ f"SELECT COUNT(*) FROM {TABLE_NAME} WHERE filename = :filename",
+ filename=relative_name,
+ )
+ if cur.fetchone()[0] > 0:
+ skipped += 1
+ continue
+
+ image_bytes = image_path.read_bytes()
+ image_b64 = base64.b64encode(image_bytes).decode("ascii")
+ vector = _embed_with_privateai(IMAGE_MODEL_ID, image_b64)
+ vector_json = json.dumps(vector)
+
+ mime_type = mimetypes.guess_type(str(image_path))[0] or "application/octet-stream"
+ category = image_path.parent.name
+
+ sql_insert = (
+ "INSERT INTO"
+ + " "
+ + TABLE_NAME
+ + " (filename, category, mime_type, embedding, image_data) "
+ + "VALUES (:filename, :category, :mime_type, TO_VECTOR(:vector_json), :image_data)"
+ )
+ cur.execute(
+ sql_insert,
+ {
+ "filename": relative_name,
+ "category": category,
+ "mime_type": mime_type,
+ "vector_json": vector_json,
+ "image_data": image_bytes,
+ },
+ )
+ inserted += 1
+
+ conn.commit()
+
+ return inserted, skipped, len(images)
+
+
+def _blob_to_bytes(value):
+ if value is None:
+ return b""
+ if hasattr(value, "read"):
+ return value.read()
+ return bytes(value)
+
+
+def _search_images(query_text: str, top_k: int = TOP_K):
+ query_vector = _embed_with_privateai(TEXT_MODEL_ID, query_text)
+ query_vector_json = json.dumps(query_vector)
+
+ sql = f"""
+ SELECT image_id,
+ filename,
+ category,
+ mime_type,
+ image_data,
+ ROUND(1 - VECTOR_DISTANCE(embedding, TO_VECTOR(:query_vector), COSINE), 4) AS similarity
+ FROM {TABLE_NAME}
+ ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(:query_vector), COSINE)
+ FETCH FIRST {top_k} ROWS ONLY
+ """
+
+ results = []
+ with _get_connection() as conn:
+ with conn.cursor() as cur:
+ cur.execute(sql, query_vector=query_vector_json)
+ for image_id, filename, category, mime_type, image_data, similarity in cur:
+ raw_bytes = _blob_to_bytes(image_data)
+ data_uri = (
+ f"data:{mime_type or 'image/jpeg'};base64,"
+ f"{base64.b64encode(raw_bytes).decode('ascii')}"
+ )
+ results.append(
+ {
+ "image_id": image_id,
+ "filename": filename,
+ "category": category,
+ "similarity": similarity,
+ "data_uri": data_uri,
+ }
+ )
+ return results
+
+
+def _table_count():
+ _ensure_table()
+ with _get_connection() as conn:
+ with conn.cursor() as cur:
+ cur.execute(f"SELECT COUNT(*) FROM {TABLE_NAME}")
+ return int(cur.fetchone()[0])
+
+
+@APP.route("/", methods=["GET", "POST"])
+def index():
+ _ensure_table()
+
+ query = ""
+ results = []
+ image_root = request.form.get("image_root", str(DEFAULT_IMAGE_ROOT))
+
+ if request.method == "POST":
+ action = request.form.get("action")
+
+ if action == "load":
+ try:
+ inserted, skipped, total = _load_images_into_db(Path(image_root).expanduser())
+ flash(
+ f"Image load complete: inserted={inserted}, skipped={skipped}, discovered={total}.",
+ "success",
+ )
+ except Exception as exc:
+ flash(f"Image load failed: {exc}", "error")
+ return redirect(url_for("index"))
+
+ if action == "search":
+ query = (request.form.get("query") or "").strip()
+ if not query:
+ flash("Enter a search query.", "error")
+ else:
+ try:
+ results = _search_images(query, top_k=TOP_K)
+ if not results:
+ flash("No images found. Load the image archive first.", "error")
+ except Exception as exc:
+ flash(f"Search failed: {exc}", "error")
+
+ return render_template(
+ "index.html",
+ query=query,
+ results=results,
+ total_images=_table_count(),
+ default_image_root=str(DEFAULT_IMAGE_ROOT),
+ image_model_id=IMAGE_MODEL_ID,
+ text_model_id=TEXT_MODEL_ID,
+ )
+
+
+if __name__ == "__main__":
+ LAB_ROOT.mkdir(parents=True, exist_ok=True)
+ db_user, db_dsn, _, password_source = _load_env()
+ print(
+ "DB config:",
+ f"user={db_user}",
+ f"dsn={db_dsn}",
+ f"password_source={password_source}",
+ )
+ _ensure_table()
+ print("Starting Flask image search app on port 5500")
+    print("Open in browser: http://<your-host>:5500/")
+ APP.run(host="0.0.0.0", port=5500, debug=True)
diff --git a/private-ai-services-container/lab4-flask-image-search/files/cleanup_db.py b/private-ai-services-container/lab4-flask-image-search/files/cleanup_db.py
new file mode 100644
index 000000000..ab3ddc64c
--- /dev/null
+++ b/private-ai-services-container/lab4-flask-image-search/files/cleanup_db.py
@@ -0,0 +1,46 @@
+import sys
+
+import oracledb
+
+from app import TABLE_NAME, _get_connection
+
+
+def _table_exists(cur):
+ cur.execute(
+ """
+ SELECT COUNT(*)
+ FROM user_tables
+ WHERE table_name = :table_name
+ """,
+ table_name=TABLE_NAME,
+ )
+ return cur.fetchone()[0] == 1
+
+
+def main():
+ with _get_connection() as conn:
+ with conn.cursor() as cur:
+ if not _table_exists(cur):
+ print(f"No cleanup needed. Table {TABLE_NAME} does not exist.")
+ return 0
+
+ cur.execute(f"SELECT COUNT(*) FROM {TABLE_NAME}")
+ row_count = cur.fetchone()[0]
+
+ try:
+ cur.execute(f"DROP TABLE {TABLE_NAME} PURGE")
+ except oracledb.DatabaseError as exc:
+ error_obj = exc.args[0]
+ if getattr(error_obj, "code", None) == 942:
+ print(f"No cleanup needed. Table {TABLE_NAME} does not exist.")
+ return 0
+ raise
+
+ conn.commit()
+
+ print(f"Dropped table {TABLE_NAME}. Removed {row_count} rows.")
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/private-ai-services-container/lab4-flask-image-search/files/templates/index.html b/private-ai-services-container/lab4-flask-image-search/files/templates/index.html
new file mode 100644
index 000000000..d546e1b6b
--- /dev/null
+++ b/private-ai-services-container/lab4-flask-image-search/files/templates/index.html
@@ -0,0 +1,184 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+  <meta charset="utf-8">
+  <title>Private AI CLIP Image Search</title>
+</head>
+<body>
+  <h1>Private AI CLIP Image Search</h1>
+  <p>Image embeddings model: <code>{{ image_model_id }}</code></p>
+  <p>Text embeddings model: <code>{{ text_model_id }}</code></p>
+  <p>Total indexed images: {{ total_images }}</p>
+  <p>Top-K results: 10</p>
+
+  {% with messages = get_flashed_messages(with_categories=true) %}
+    {% if messages %}
+      {% for category, msg in messages %}
+        <p class="flash {{ category }}">{{ msg }}</p>
+      {% endfor %}
+    {% endif %}
+  {% endwith %}
+
+  <h2>1) Load Image Library into Database</h2>
+  <form method="post">
+    <input type="hidden" name="action" value="load">
+    <label>Image root:
+      <input type="text" name="image_root" value="{{ default_image_root }}" size="60">
+    </label>
+    <button type="submit">Load Images</button>
+  </form>
+
+  <h2>2) Search by Text</h2>
+  <form method="post">
+    <input type="hidden" name="action" value="search">
+    <input type="text" name="query" value="{{ query }}" size="60" placeholder="Describe the image you want">
+    <button type="submit">Search</button>
+  </form>
+
+  {% if results %}
+    <h2>Search Results</h2>
+    <p>Query: {{ query }}</p>
+    {% for r in results %}
+      <figure>
+        <img src="{{ r.data_uri }}" alt="{{ r.filename }}" width="240">
+        <figcaption>
+          <p>{{ r.filename }}</p>
+          <p>Category: {{ r.category }}</p>
+          <p>Similarity: {{ r.similarity }}</p>
+        </figcaption>
+      </figure>
+    {% endfor %}
+  {% endif %}
+</body>
+</html>
diff --git a/private-ai-services-container/lab4-flask-image-search/lab4-flask-image-search.md b/private-ai-services-container/lab4-flask-image-search/lab4-flask-image-search.md
new file mode 100644
index 000000000..9ea4b1f82
--- /dev/null
+++ b/private-ai-services-container/lab4-flask-image-search/lab4-flask-image-search.md
@@ -0,0 +1,648 @@
+# Lab 4: Image Search App with CLIP Models using Oracle Private AI Services Container
+
+## Introduction
+
+In this lab you build a small Flask application that performs semantic image search.
+
+The workflow is:
+- Download and unzip an image archive with `wget`
+- Embed each image with `clip-vit-base-patch32-img`
+- Embed user text queries with `clip-vit-base-patch32-txt`
+- Return and display the top 10 most similar images from Oracle AI Database
+
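Both embedding calls use the same `/v1/embeddings` API from Lab 3, differing only in model id and input encoding: images are sent as a base64 string, queries as plain text. A sketch of the two request bodies (the endpoint and model ids match this lab's configuration; the image bytes are a stand-in):

```python
import base64

# Stand-in for real image file bytes read from disk.
image_bytes = b'fake-image-bytes'

# Image request: base64-encoded bytes to the CLIP image model.
image_payload = {
    'model': 'clip-vit-base-patch32-img',
    'input': base64.b64encode(image_bytes).decode('ascii'),
}

# Query request: plain text to the CLIP text model.
text_payload = {
    'model': 'clip-vit-base-patch32-txt',
    'input': 'a sailing ship at sunset',
}

print(image_payload['model'], text_payload['model'])
```

Because both CLIP encoders map into the same 512-dimensional space, a text query vector can be compared directly against stored image vectors.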
+Estimated Time: 30 minutes
+
+### Objectives
+
+In this lab, you will:
+- Build and run a Flask web app in JupyterLab
+- Load an image dataset into Oracle AI Database
+- Generate image embeddings using Oracle Private AI Services Container CLIP image model
+- Generate query embeddings using CLIP text model
+- Perform top-10 cosine similarity search and render image results
+
+### Prerequisites
+
+This lab assumes:
+- You completed Labs 1-3
+- JupyterLab is available
+- `privateai` and `aidbfree` are reachable from JupyterLab
+- The `ADMIN` database user can connect to `aidbfree:1521/FREEPDB1`
+
+## Task 1: Prepare Project and Download Image Archive
+
+1. Open **JupyterLab Terminal** and run:
+
+ ```bash
+ mkdir -p ~/image-search-lab/data
+ cd ~/image-search-lab/data
+ wget -O pdimagearchive.zip "https://c4u04.objectstorage.us-ashburn-1.oci.customer-oci.com/p/EcTjWk2IuZPZeNnD_fYMcgUhdNDIDA6rt9gaFj_WZMiL7VvxPBNMY60837hu5hga/n/c4u04/b/livelabsfiles/o/ai-ml-library/pdimagearchive.zip"
+ unzip -o pdimagearchive.zip
+ ```
+
+ Verify images exist:
+
+ ```bash
+ find ~/image-search-lab/data/pdimagearchive -type f \( -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.png" \) | head -n 5
+ ```
+
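Optionally, you can also count images per category folder from Python. This sketch assumes the archive layout created above (one subfolder per category) and simply reports no matches if the folder is missing:

```python
from collections import Counter
from pathlib import Path

root = Path.home() / 'image-search-lab' / 'data' / 'pdimagearchive'
exts = {'.jpg', '.jpeg', '.png', '.webp'}

# Each image's parent folder name is treated as its category,
# mirroring how app.py assigns the 'category' column.
counts = Counter(
    p.parent.name
    for p in (root.rglob('*') if root.exists() else [])
    if p.is_file() and p.suffix.lower() in exts
)
print(counts.most_common(5))
```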
+## Task 2: Create Flask App Files
+
+1. Create project folders:
+
+ ```bash
+ mkdir -p ~/image-search-lab/templates
+ ```
+
+ Create `~/image-search-lab/app.py` with the following code:
+
+ ```python
+
+ import base64
+ import json
+ import mimetypes
+ import os
+ from pathlib import Path
+
+ import oracledb
+ import requests
+ from flask import Flask, flash, redirect, render_template, request, url_for
+
+ APP = Flask(__name__)
+ APP.secret_key = os.getenv("FLASK_SECRET", "privateai-image-search-lab")
+
+ LAB_ROOT = Path(os.getenv("LAB_ROOT", str(Path.home() / "image-search-lab")))
+ DEFAULT_IMAGE_ROOT = LAB_ROOT / "data" / "pdimagearchive"
+
+ PRIVATEAI_BASE_URL = os.getenv("PRIVATEAI_BASE_URL", "http://privateai:8080").rstrip("/")
+ PRIVATEAI_EMBED_URL = f"{PRIVATEAI_BASE_URL}/v1/embeddings"
+ IMAGE_MODEL_ID = os.getenv("IMAGE_MODEL_ID", "clip-vit-base-patch32-img")
+ TEXT_MODEL_ID = os.getenv("TEXT_MODEL_ID", "clip-vit-base-patch32-txt")
+
+ VECTOR_DIM = 512
+ TOP_K = 10
+ TABLE_NAME = "IMAGE_LIBRARY"
+ DB_USER = "ADMIN"
+ DB_DSN = "aidbfree:1521/FREEPDB1"
+
+
+ def _load_env():
+ db_password = (os.getenv("DBPASSWORD") or "").strip()
+ if not db_password:
+ raise RuntimeError("Database password not found in env variable DBPASSWORD.")
+
+ return DB_USER, DB_DSN, db_password, "DBPASSWORD env"
+
+
+ def _get_connection():
+ user, dsn, password, _ = _load_env()
+ return oracledb.connect(user=user, password=password, dsn=dsn)
+
+
+ def _ensure_table():
+ with _get_connection() as conn:
+ with conn.cursor() as cur:
+ cur.execute(
+ """
+ SELECT COUNT(*)
+ FROM user_tables
+ WHERE table_name = :table_name
+ """,
+ table_name=TABLE_NAME,
+ )
+ exists = cur.fetchone()[0] == 1
+
+ if not exists:
+ cur.execute(
+ f"""
+ CREATE TABLE {TABLE_NAME} (
+ image_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+ filename VARCHAR2(500) UNIQUE NOT NULL,
+ category VARCHAR2(120),
+ mime_type VARCHAR2(120),
+ embedding VECTOR({VECTOR_DIM}, FLOAT32),
+ image_data BLOB,
+ loaded_at TIMESTAMP DEFAULT SYSTIMESTAMP
+ )
+ """
+ )
+ conn.commit()
+
+
+ def _collect_images(root: Path):
+ if not root.exists():
+ return []
+ patterns = ("*.jpg", "*.jpeg", "*.png", "*.webp")
+ files = []
+ for pattern in patterns:
+ files.extend(root.rglob(pattern))
+ files.extend(root.rglob(pattern.upper()))
+ files = sorted({p.resolve() for p in files if p.is_file()})
+ return files
+
+
+ def _embed_with_privateai(model: str, input_payload: str):
+ response = requests.post(
+ PRIVATEAI_EMBED_URL,
+ json={"model": model, "input": input_payload},
+ timeout=120,
+ )
+ if not response.ok:
+ raise RuntimeError(
+ f"Private AI embedding call failed ({response.status_code}): {response.text[:800]}"
+ )
+ body = response.json()
+ data = body.get("data", [])
+ if not data:
+ raise RuntimeError("Private AI embedding response had no vectors.")
+ return data[0]["embedding"]
+
+
+ def _load_images_into_db(image_root: Path):
+ _ensure_table()
+ images = _collect_images(image_root)
+ if not images:
+ raise RuntimeError(
+ f"No images found under {image_root}. Download and unzip the image archive first."
+ )
+
+ inserted = 0
+ skipped = 0
+
+ with _get_connection() as conn:
+ with conn.cursor() as cur:
+ for image_path in images:
+ relative_name = str(image_path.relative_to(image_root))
+
+ cur.execute(
+ f"SELECT COUNT(*) FROM {TABLE_NAME} WHERE filename = :filename",
+ filename=relative_name,
+ )
+ if cur.fetchone()[0] > 0:
+ skipped += 1
+ continue
+
+ image_bytes = image_path.read_bytes()
+ image_b64 = base64.b64encode(image_bytes).decode("ascii")
+ vector = _embed_with_privateai(IMAGE_MODEL_ID, image_b64)
+ vector_json = json.dumps(vector)
+
+ mime_type = mimetypes.guess_type(str(image_path))[0] or "application/octet-stream"
+ category = image_path.parent.name
+
+ sql_insert = (
+ "INSERT INTO"
+ + " "
+ + TABLE_NAME
+ + " (filename, category, mime_type, embedding, image_data) "
+ + "VALUES (:filename, :category, :mime_type, TO_VECTOR(:vector_json), :image_data)"
+ )
+ cur.execute(
+ sql_insert,
+ {
+ "filename": relative_name,
+ "category": category,
+ "mime_type": mime_type,
+ "vector_json": vector_json,
+ "image_data": image_bytes,
+ },
+ )
+ inserted += 1
+
+ conn.commit()
+
+ return inserted, skipped, len(images)
+
+
+ def _blob_to_bytes(value):
+ if value is None:
+ return b""
+ if hasattr(value, "read"):
+ return value.read()
+ return bytes(value)
+
+
+ def _search_images(query_text: str, top_k: int = TOP_K):
+        query_vector = _embed_with_privateai(TEXT_MODEL_ID, query_text)
+        query_vector_json = json.dumps(query_vector)
+
+        sql = f"""
+            SELECT image_id,
+                   filename,
+                   category,
+                   mime_type,
+                   image_data,
+                   ROUND(1 - VECTOR_DISTANCE(embedding, TO_VECTOR(:query_vector), COSINE), 4) AS similarity
+            FROM {TABLE_NAME}
+            ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(:query_vector), COSINE)
+            FETCH FIRST {top_k} ROWS ONLY
+        """
+
+        results = []
+        with _get_connection() as conn:
+            with conn.cursor() as cur:
+                cur.execute(sql, query_vector=query_vector_json)
+                for image_id, filename, category, mime_type, image_data, similarity in cur:
+                    raw_bytes = _blob_to_bytes(image_data)
+                    data_uri = (
+                        f"data:{mime_type or 'image/jpeg'};base64,"
+                        f"{base64.b64encode(raw_bytes).decode('ascii')}"
+                    )
+                    results.append(
+                        {
+                            "image_id": image_id,
+                            "filename": filename,
+                            "category": category,
+                            "similarity": similarity,
+                            "data_uri": data_uri,
+                        }
+                    )
+        return results
+
+
+    def _table_count():
+        _ensure_table()
+        with _get_connection() as conn:
+            with conn.cursor() as cur:
+                cur.execute(f"SELECT COUNT(*) FROM {TABLE_NAME}")
+                return int(cur.fetchone()[0])
+
+
+    @APP.route("/", methods=["GET", "POST"])
+    def index():
+        _ensure_table()
+
+        query = ""
+        results = []
+        image_root = request.form.get("image_root", str(DEFAULT_IMAGE_ROOT))
+
+        if request.method == "POST":
+            action = request.form.get("action")
+
+            if action == "load":
+                try:
+                    inserted, skipped, total = _load_images_into_db(Path(image_root).expanduser())
+                    flash(
+                        f"Image load complete: inserted={inserted}, skipped={skipped}, discovered={total}.",
+                        "success",
+                    )
+                except Exception as exc:
+                    flash(f"Image load failed: {exc}", "error")
+                return redirect(url_for("index"))
+
+            if action == "search":
+                query = (request.form.get("query") or "").strip()
+                if not query:
+                    flash("Enter a search query.", "error")
+                else:
+                    try:
+                        results = _search_images(query, top_k=TOP_K)
+                        if not results:
+                            flash("No images found. Load the image archive first.", "error")
+                    except Exception as exc:
+                        flash(f"Search failed: {exc}", "error")
+
+        return render_template(
+            "index.html",
+            query=query,
+            results=results,
+            total_images=_table_count(),
+            default_image_root=str(DEFAULT_IMAGE_ROOT),
+            image_model_id=IMAGE_MODEL_ID,
+            text_model_id=TEXT_MODEL_ID,
+        )
+
+
+    if __name__ == "__main__":
+        LAB_ROOT.mkdir(parents=True, exist_ok=True)
+        db_user, db_dsn, _, password_source = _load_env()
+        print(
+            "DB config:",
+            f"user={db_user}",
+            f"dsn={db_dsn}",
+            f"password_source={password_source}",
+        )
+        _ensure_table()
+        print("Starting Flask image search app on port 5500")
+        print("Open in browser: http://<your-vm-ip>:5500/")
+        APP.run(host="0.0.0.0", port=5500, debug=True)
+
+    ```
+
+2. Create `~/image-search-lab/templates/index.html` with the following code:
+
+    ```html
+    <!DOCTYPE html>
+    <html lang="en">
+    <head>
+        <meta charset="utf-8">
+        <meta name="viewport" content="width=device-width, initial-scale=1">
+        <title>Private AI CLIP Image Search</title>
+        <style>
+            body { font-family: Arial, sans-serif; margin: 2rem; background: #f5f5f5; }
+            .card { background: #fff; border-radius: 8px; padding: 1rem 1.5rem; margin-bottom: 1.5rem; }
+            .flash.error { color: #b00020; }
+            .flash.success { color: #1b5e20; }
+            .results { display: flex; flex-wrap: wrap; gap: 1rem; }
+            .result { background: #fff; border-radius: 8px; padding: 0.75rem; width: 220px; }
+            .result img { width: 100%; height: 160px; object-fit: cover; border-radius: 4px; }
+        </style>
+    </head>
+    <body>
+        <h1>Private AI CLIP Image Search</h1>
+        <p>Image embeddings model: <strong>{{ image_model_id }}</strong></p>
+        <p>Text embeddings model: <strong>{{ text_model_id }}</strong></p>
+        <p>Total indexed images: <strong>{{ total_images }}</strong></p>
+        <p>Top-K results: <strong>10</strong></p>
+
+        {% with messages = get_flashed_messages(with_categories=true) %}
+        {% if messages %}
+            {% for category, msg in messages %}
+            <p class="flash {{ category }}">{{ msg }}</p>
+            {% endfor %}
+        {% endif %}
+        {% endwith %}
+
+        <div class="card">
+            <h2>1) Load Image Library into Database</h2>
+            <form method="post">
+                <input type="hidden" name="action" value="load">
+                <input type="text" name="image_root" value="{{ default_image_root }}" size="50">
+                <button type="submit">Load + Embed Images</button>
+            </form>
+        </div>
+
+        <div class="card">
+            <h2>2) Search by Text</h2>
+            <form method="post">
+                <input type="hidden" name="action" value="search">
+                <input type="text" name="query" value="{{ query }}" size="50" placeholder="e.g. ship in rough sea">
+                <button type="submit">Search Top 10</button>
+            </form>
+        </div>
+
+        {% if results %}
+        <div class="card">
+            <h2>Search Results</h2>
+            <p>Query: <strong>{{ query }}</strong></p>
+            <div class="results">
+                {% for r in results %}
+                <div class="result">
+                    <img src="{{ r.data_uri }}" alt="{{ r.filename }}">
+                    <p>{{ r.filename }}</p>
+                    <p>Category: {{ r.category }}</p>
+                    <p>Similarity: {{ r.similarity }}</p>
+                </div>
+                {% endfor %}
+            </div>
+        </div>
+        {% endif %}
+    </body>
+    </html>
+    ```
+
+## Task 3: Start the Flask App
+
+1. Run:
+
+ ```bash
+ cd ~/image-search-lab
+ python app.py
+ ```
+
+2. Expected output includes:
+
+    ```text
+    Starting Flask image search app on port 5500
+    Open in browser: http://<your-vm-ip>:5500/
+    ```
+
+## Task 4: Open the Web UI
+
+1. Open the Flask app directly:
+
+    ```text
+    http://<your-vm-ip>:5500/
+    ```
+
+## Task 5: Load Images and Build Embeddings
+
+1. In the app:
+
+    1. Keep the default image root (`~/image-search-lab/data/pdimagearchive`) or adjust it if needed.
+    2. Click **Load + Embed Images**.
+    3. Wait for the completion message with the inserted and skipped counts.
+
+This step generates image embeddings with:
+- `clip-vit-base-patch32-img`
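+
+The app wraps the actual call to the container inside its `_embed_with_privateai` helper. As an illustration only, an OpenAI-style embeddings request carrying an image as base64 might be built like this; the request body shape is an assumption for sketching purposes, not the container's documented API:
+
+```python
+import base64
+
+
+def build_image_embedding_request(model_id, image_bytes):
+    """Hypothetical OpenAI-style embeddings request body with the image as base64."""
+    return {
+        "model": model_id,
+        "input": [base64.b64encode(image_bytes).decode("ascii")],
+    }
+
+
+# Tiny stand-in payload; a real call would read the image file's bytes.
+req = build_image_embedding_request("clip-vit-base-patch32-img", b"\x89PNG")
+```
+
+The only detail the lab depends on is that the image model receives pixel data while the text model receives the query string; both return vectors in the same embedding space.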
+
+## Task 6: Search with Text
+
+1. Enter a query such as:
+ - `ship in rough sea`
+ - `old maps and navigation`
+ - `harbor with boats`
+
+2. Click **Search Top 10**.
+
+The app embeds the query using:
+- `clip-vit-base-patch32-txt`
+
+The app then retrieves and displays the 10 most similar images, ranked by cosine similarity.
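+
+The `similarity` column in the results is plain cosine similarity, i.e. `1 - VECTOR_DISTANCE(a, b, COSINE)`. A quick way to sanity-check a score outside the database:
+
+```python
+import math
+
+
+def cosine_similarity(a, b):
+    """Cosine similarity, matching 1 - VECTOR_DISTANCE(a, b, COSINE)."""
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = math.sqrt(sum(x * x for x in a))
+    norm_b = math.sqrt(sum(y * y for y in b))
+    return dot / (norm_a * norm_b)
+
+
+score = round(cosine_similarity([1.0, 0.0], [1.0, 1.0]), 4)  # 0.7071
+```
+
+A score of 1.0 means identical direction in embedding space; values near 0 mean the query and image are unrelated.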
+
+## Task 7: Clean Up Database Objects
+
+1. Create `~/image-search-lab/cleanup_db.py` with:
+
+    ```python
+    import sys
+
+    import oracledb
+
+    from app import TABLE_NAME, _get_connection
+
+
+    def _table_exists(cur):
+        cur.execute(
+            """
+            SELECT COUNT(*)
+            FROM user_tables
+            WHERE table_name = :table_name
+            """,
+            table_name=TABLE_NAME,
+        )
+        return cur.fetchone()[0] == 1
+
+
+    def main():
+        with _get_connection() as conn:
+            with conn.cursor() as cur:
+                if not _table_exists(cur):
+                    print(f"No cleanup needed. Table {TABLE_NAME} does not exist.")
+                    return 0
+
+                cur.execute(f"SELECT COUNT(*) FROM {TABLE_NAME}")
+                row_count = cur.fetchone()[0]
+
+                try:
+                    cur.execute(f"DROP TABLE {TABLE_NAME} PURGE")
+                except oracledb.DatabaseError as exc:
+                    error_obj = exc.args[0]
+                    if getattr(error_obj, "code", None) == 942:
+                        print(f"No cleanup needed. Table {TABLE_NAME} does not exist.")
+                        return 0
+                    raise
+
+                conn.commit()
+
+        print(f"Dropped table {TABLE_NAME}. Removed {row_count} rows.")
+        return 0
+
+
+    if __name__ == "__main__":
+        sys.exit(main())
+    ```
+
+2. Run cleanup:
+
+ ```bash
+ cd ~/image-search-lab
+ python cleanup_db.py
+ ```
+
+3. Expected output:
+
+    ```text
+    Dropped table IMAGE_LIBRARY. Removed <row_count> rows.
+    ```
+
+## Learn More
+
+- [Oracle Private AI Services Container User Guide](https://docs.oracle.com/en/database/oracle/oracle-database/26/prvai/oracle-private-ai-services-container.html)
+- [Private AI Services Container API Reference](https://docs.oracle.com/en/database/oracle/oracle-database/26/prvai/private-ai-services-container-api-reference.html)
+- [Oracle AI Vector Search](https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/)
+
+## Acknowledgements
+- **Author** - Oracle LiveLabs Team
+- **Last Updated By/Date** - Oracle LiveLabs Team, March 2026
diff --git a/private-ai-services-container/lab5-optional-next-steps/lab5-optional-next-steps.md b/private-ai-services-container/lab5-optional-next-steps/lab5-optional-next-steps.md
new file mode 100644
index 000000000..e57c0cd87
--- /dev/null
+++ b/private-ai-services-container/lab5-optional-next-steps/lab5-optional-next-steps.md
@@ -0,0 +1,34 @@
+# Lab 5: Optional Next Steps
+
+## Introduction
+
+This lab captures practical extensions you can build from the notebook and Flask app baseline.
+
+Estimated Time: 10 minutes
+
+### Option 1: Add PDF Chunking
+
+- Add a `DOCUMENTS` and `DOC_CHUNKS` pipeline.
+- Use `DBMS_VECTOR.UTL_TO_CHUNKS` and then `UTL_TO_EMBEDDING` with provider `privateai`.
+- Store chunk vectors and search across real document corpora.
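+
+As a rough illustration of what chunking produces (the real `UTL_TO_CHUNKS` runs server-side and has its own parameters; the sizes below are arbitrary), a fixed-size splitter with overlap looks like this:
+
+```python
+def chunk_text(text, max_chars=50, overlap=10):
+    """Split text into fixed-size chunks, each overlapping the previous by `overlap` chars."""
+    chunks = []
+    start = 0
+    while start < len(text):
+        chunks.append(text[start:start + max_chars])
+        if start + max_chars >= len(text):
+            break
+        start += max_chars - overlap
+    return chunks
+
+
+chunks = chunk_text("abcdefghij" * 12, max_chars=50, overlap=10)
+```
+
+Each chunk would then be embedded individually and stored as one `DOC_CHUNKS` row, so retrieval can point back to the exact passage.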
+
+### Option 2: Expose Search Through ORDS
+
+- Create a stored procedure that accepts query text.
+- Return top-k rows from vector search SQL.
+- Publish endpoint through ORDS under your chosen base path.
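+
+From the client side, an ORDS handler like this is just an HTTP GET. A minimal URL-builder sketch; the `/ords/<schema>/imagesearch/search` path and parameter names are assumptions you would choose when publishing the module, not a published API:
+
+```python
+from urllib.parse import urlencode
+
+
+def build_search_url(ords_base, module_path, query, top_k=10):
+    """Build a GET URL for a hypothetical ORDS search handler."""
+    return f"{ords_base}/{module_path}/search?" + urlencode({"q": query, "top_k": top_k})
+
+
+url = build_search_url("http://localhost:8080/ords/vector_user", "imagesearch", "harbor with boats")
+```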
+
+### Option 3: Compare Models
+
+- List the available models from `/v1/models`.
+- Regenerate vectors with different model IDs.
+- Compare retrieval quality and latency.
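+
+A tiny harness for the latency half of the comparison; `embed_fns` maps each model ID to whatever callable wraps your endpoint (the callables here are stand-ins, not part of the lab code):
+
+```python
+import time
+
+
+def mean_latency(embed_fn, queries):
+    """Average seconds per call for one embedding function."""
+    start = time.perf_counter()
+    for q in queries:
+        embed_fn(q)
+    return (time.perf_counter() - start) / len(queries)
+
+
+def compare_models(embed_fns, queries):
+    """embed_fns: {model_id: callable}; returns {model_id: mean seconds per query}."""
+    return {model_id: mean_latency(fn, queries) for model_id, fn in embed_fns.items()}
+```
+
+Retrieval quality needs a separate judgment step, for example re-running a fixed query set per model and inspecting the top-k hits side by side.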
+
+### Objectives
+
+In this lab, you will:
+* Extend the pipeline with PDF chunking and chunk embeddings
+* Expose vector search as a REST endpoint through ORDS
+* Compare retrieval quality and latency across embedding models
+
+## Acknowledgements
+- **Author** - Oracle LiveLabs Team
+- **Last Updated By/Date** - Oracle LiveLabs Team, March 2026
diff --git a/private-ai-services-container/workshops/tenancy/index.html b/private-ai-services-container/workshops/tenancy/index.html
new file mode 100644
index 000000000..07a87d1e3
--- /dev/null
+++ b/private-ai-services-container/workshops/tenancy/index.html
@@ -0,0 +1,63 @@
+
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/private-ai-services-container/workshops/tenancy/manifest.json b/private-ai-services-container/workshops/tenancy/manifest.json
new file mode 100644
index 000000000..850f042cb
--- /dev/null
+++ b/private-ai-services-container/workshops/tenancy/manifest.json
@@ -0,0 +1,36 @@
+{
+ "workshoptitle": "Build multimodal AI Vector Search using Oracle Private AI Service Container",
+ "help": "livelabs-help-db_us@oracle.com",
+ "tutorials": [
+ {
+ "title": "Introduction",
+ "description": "Overview of the workshop architecture and outcomes.",
+ "filename": "../../introduction/introduction.md"
+ },
+ {
+ "title": "Lab 1: Get started with Private AI Services Container",
+ "description": "Validate that JupyterLab, Private AI Services Container, Oracle AI Database, and ORDS are reachable.",
+ "filename": "../../lab1-verify-environment/lab1-verify-environment.md"
+ },
+ {
+ "title": "Lab 2: Vector Search with ONNX Model Stored in Oracle Database",
+ "description": "Run semantic search with a model stored in Oracle AI Database using provider=database.",
+ "filename": "../../lab2-db-model-embeddings/lab2-db-model-embeddings.md"
+ },
+ {
+ "title": "Lab 3: Vector Search with Oracle Private AI Services Container",
+ "description": "Run semantic search with embeddings generated by Oracle Private AI Services Container.",
+ "filename": "../../lab3-privateai-container-embeddings/lab3-privateai-container-embeddings.md"
+ },
+ {
+ "title": "Lab 4: Flask Image Search with CLIP Models",
+ "description": "Build a Flask app that embeds images and text with CLIP models in Private AI and returns top-10 similar images.",
+ "filename": "../../lab4-flask-image-search/lab4-flask-image-search.md"
+ },
+ {
+ "title": "Need Help?",
+ "description": "Solutions to common problems and directions for receiving live help.",
+ "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-livelabs.md"
+ }
+ ]
+}