feat(examples): add QDP tutorial notebook for Google Colab #1068
Open
SuyashParmar wants to merge 40 commits into apache:main (base: main) from SuyashParmar:feat/colab-examples-clean
+261
−1
Changes from all commits (40 commits)
91b1c33  docs: release version 0.5 and configure latest versioning
c6805e2  Merge branch 'main' of https://github.com/SuyashParmar/mahout
5e51074  Delete website/versioned_docs/version-0.5/.gitignore (SuyashParmar)
4a0bde5  Delete website/versioned_sidebars/version-0.5-sidebars.json (SuyashParmar)
238f886  fix(website): populate version-0.5 docs to fix build
461791e  fix(ci): handle missing asf-site branch in deployment workflow
9015c88  Delete website/versioned_docs/version-0.5/qumat-gap-analysis-for-pqc.md (SuyashParmar)
c98125d  Delete .github/workflows/website.yml (SuyashParmar)
cddc1fe  chore(website): remove version-0.5 docs content but keep config
e160fff  revert: restore original website.yml workflow (no changes)
7efa897  fix(website): update sync script to populate version-0.5 docs during …
02fc257  feat(docs): add proper version 0.5 snapshot and update labels (PR rev…
53cc242  fix(docs): remove .gitignore from version-0.5 to allow committing docs
22eba60  Merge branch 'apache:main' into main (SuyashParmar)
c5de1e4  Merge branch 'apache:main' into main (SuyashParmar)
7b1d30d  fix(docs): resolve broken relative links in documentation
c983341  Revert "fix(docs): resolve broken relative links in documentation"
308cf8b  Merge branch 'apache:main' into main (SuyashParmar)
b15180e  Merge branch 'apache:main' into main (SuyashParmar)
f6d9eac  Merge branch 'apache:main' into main (SuyashParmar)
d8ddbbb  feat(examples): add QDP tutorial notebook for Google Colab
8560e22  fix(examples): use direct import from qumat.qdp without try-except
4f87a84  feat(examples): add PennyLane training loop integration to tutorial
32b7826  fix(examples): ensure float64 weights/labels in QML training loop to …
1d2bfe3  fix(examples): remove all try-except blocks for cleaner tutorial code
fafa8ce  fix(examples): restore robust Colab notebook setup with pinned depend…
e75e03d  Update notebook-testing.yml (SuyashParmar)
6807b0a  Merge branch 'main' into feat/colab-examples-clean (ryankert01)
297e4a4  Handle missing QDP extension in tutorial notebook
aca855b  Remove try/except from QDP notebook init flow
1d3605a  Align tutorial install flow with docs and keep PennyLane tensors on GPU
7a9f674  Install QDP into active Colab kernel in tutorial setup
27e9e5b  Fix Colab QDP install by installing local qdp package explicitly
5ea364e  Simplify QDP tutorial flow by removing conditional guards
c0fc587  Skip GPU QDP benchmark notebooks in notebook CI
79ca122  Merge branch 'apache:main' into main (SuyashParmar)
819d450  Move QDP tutorial notebook to examples and polish educational text
cc767bb  Merge remote-tracking branch 'origin/main' into feat/colab-examples-c…
067c9ca  Merge branch 'main' into feat/colab-examples-clean (SuyashParmar)
9339cf1  Merge branch 'main' into feat/colab-examples-clean (ryankert01)
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below.
@@ -0,0 +1,259 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "intro_header"
      },
      "source": [
        "# QDP Tutorial: GPU-Accelerated Quantum Data Preparation\n",
        "\n",
        "This notebook introduces **QDP** (Quantum Data Preparation), which accelerates the encoding of classical data into quantum states on the GPU.\n",
        "\n",
        "**What you will do in this tutorial:**\n",
        "1. Set up a Colab GPU environment for QDP.\n",
        "2. Initialize `QdpEngine` and run a basic amplitude-encoding example.\n",
        "3. Integrate QDP output with a small PennyLane + PyTorch training loop.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "setup_phase"
      },
      "source": [
        "## 1. Environment Setup\n",
        "\n",
        "The setup below installs Rust (needed to build the native extension), clones Mahout, and installs QDP into the active Colab kernel.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "gpu_check",
        "outputId": "gpu_check_out"
      },
      "outputs": [],
      "source": [
        "# Check for an NVIDIA GPU\n",
        "!nvidia-smi"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "install_deps"
      },
      "outputs": [],
      "source": [
        "# 1. Install the Rust toolchain (required to compile QDP)\n",
        "!curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y\n",
        "import os\n",
        "os.environ['PATH'] += \":/root/.cargo/bin\"\n",
        "\n",
        "# 2. Install uv and clone the Mahout repository\n",
        "!pip install uv\n",
        "!git clone https://github.com/apache/mahout.git\n",
        "\n",
        "# 3. Install the local packages into the active Colab kernel\n",
        "%cd /content/mahout\n",
        "!uv pip install --system -e .\n",
        "!uv pip install --system -e qdp/qdp-python\n",
        "\n",
        "# 4. Notebook dependencies\n",
        "!uv pip install --system torch numpy pennylane \"cloudpickle>=3.0.0\" \"antlr4-python3-runtime==4.9.*\"\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "basic_usage_header"
      },
      "source": [
        "## 2. Basic Usage\n",
        "\n",
        "Next, we initialize `QdpEngine` and encode a simple sample into a quantum state tensor.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "init_engine"
      },
      "outputs": [],
      "source": [
        "import torch\n",
        "import numpy as np\n",
        "from qumat.qdp import QdpEngine\n",
        "\n",
        "print(\"Imported QdpEngine from qumat.qdp\")\n",
        "\n",
        "# Initialize the engine on GPU 0\n",
        "engine = QdpEngine(0)\n",
        "print(\"QDP Engine initialized successfully on GPU 0\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "simple_encode"
      },
      "outputs": [],
      "source": [
        "# Example 1: Encode a simple Python list\n",
        "data = [0.5, 0.5, 0.5, 0.5]\n",
        "n_qubits = 2\n",
        "\n",
        "# Encode using amplitude encoding:\n",
        "# 4 values fill the state of 2 qubits (2^2 = 4 amplitudes)\n",
        "qtensor = engine.encode(data, n_qubits, \"amplitude\")\n",
        "\n",
        "# Convert to a PyTorch tensor (zero-copy via DLPack)\n",
        "torch_tensor = torch.from_dlpack(qtensor)\n",
        "\n",
        "print(f\"Quantum state shape: {torch_tensor.shape}\")\n",
        "print(f\"Quantum state data:\\n{torch_tensor}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "real_world_integration"
      },
      "source": [
        "## 3. Real-World Integration: PennyLane Training Loop\n",
        "\n",
        "QDP is most useful when its encoded states feed directly into a quantum ML workflow.\n",
        "\n",
        "In this section we will:\n",
        "1. Generate synthetic classification data.\n",
        "2. Use **QDP** to encode the data into quantum states on the GPU.\n",
        "3. Feed those states into a **PennyLane** quantum model.\n",
        "4. Train end-to-end with PyTorch.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pennylane_setup"
      },
      "outputs": [],
      "source": [
        "import pennylane as qml\n",
        "import torch.nn as nn\n",
        "import torch.optim as optim\n",
        "\n",
        "# Configuration\n",
        "n_qubits = 4\n",
        "batch_size = 32\n",
        "n_features = 1 << n_qubits  # Amplitude encoding: 2^n features\n",
        "learning_rate = 0.1\n",
        "epochs = 5\n",
        "\n",
        "# 1. Create a PennyLane device\n",
        "# Use 'default.qubit' (CPU) or 'lightning.gpu' (if installed) for simulation.\n",
        "# QDP handles the heavy lifting of state preparation on the GPU first.\n",
        "dev = qml.device(\"default.qubit\", wires=n_qubits)\n",
        "\n",
        "# 2. Define the QNode (quantum circuit)\n",
        "# It takes a pre-computed state vector as input\n",
        "@qml.qnode(dev, interface=\"torch\")\n",
        "def qnn_circuit(inputs, weights):\n",
        "    # Initialize the qubit register with the state from QDP\n",
        "    qml.StatePrep(inputs, wires=range(n_qubits))\n",
        "\n",
        "    # Trainable variational layers\n",
        "    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))\n",
        "\n",
        "    # Measure expectation values\n",
        "    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]\n",
        "\n",
        "print(\"PennyLane QNode defined successfully.\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "training_loop"
      },
      "outputs": [],
      "source": [
        "# 3. Data preparation (synthetic)\n",
        "# Generate random features and binary labels\n",
        "input_data = np.random.rand(batch_size, n_features).astype(np.float64)\n",
        "\n",
        "# Important: use float64 throughout to avoid a dtype mismatch (Float vs Double)\n",
        "labels = torch.randint(0, 2, (batch_size,)).to(torch.float64)\n",
        "\n",
        "# 4. QDP encoding (the acceleration step)\n",
        "print(\"Encoding data on GPU with QDP...\")\n",
        "qtensor_batch = engine.encode(input_data, n_qubits, \"amplitude\")\n",
        "# Convert to a PyTorch tensor (stays on the GPU)\n",
        "train_states_gpu = torch.from_dlpack(qtensor_batch)\n",
        "\n",
        "# 5. Define the trainable parameters, optimizer, and loss for the QNode\n",
        "weight_shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)\n",
        "# Initialize weights as float64 to match the input precision\n",
        "weights = torch.nn.Parameter(torch.rand(weight_shape, dtype=torch.float64))\n",
        "optimizer = optim.Adam([weights], lr=learning_rate)\n",
        "loss_fn = nn.MSELoss()  # Simple MSE for demonstration\n",
        "\n",
        "print(f\"Starting training for {epochs} epochs...\")\n",
        "\n",
        "# 6. Training loop\n",
        "for epoch in range(epochs):\n",
        "    optimizer.zero_grad()\n",
        "\n",
        "    # Forward pass: feed QDP states into the PennyLane circuit.\n",
        "    # Sum the Z-expectation values to get one prediction per sample\n",
        "    predictions = torch.stack([torch.sum(torch.stack(qnn_circuit(state, weights))) for state in train_states_gpu])\n",
        "\n",
        "    # Map predictions into (0, 1) with a sigmoid for the dummy classification target\n",
        "    predictions = torch.sigmoid(predictions)\n",
        "\n",
        "    loss = loss_fn(predictions, labels)\n",
        "    loss.backward()\n",
        "    optimizer.step()\n",
        "\n",
        "    print(f\"Epoch {epoch + 1}/{epochs} | Loss: {loss.item():.4f}\")\n",
        "\n",
        "print(\"Training complete!\")\n"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.12"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
we don't need this change right now.