diff --git a/.azdo/publish.yml b/.azdo/publish.yml index 51b0f728..cd2bcc4b 100644 --- a/.azdo/publish.yml +++ b/.azdo/publish.yml @@ -129,7 +129,7 @@ extends: # Always use internal feed for dependency resolution (avoids firewall issues on 1ES pool) $env:UV_INDEX_URL = "https://build:$($env:SYSTEM_ACCESSTOKEN)@pkgs.dev.azure.com/DomoreexpGithub/Github_Pipelines/_packaging/TeamsSDKPreviews/pypi/simple/" Write-Host "Using authenticated Azure Artifacts feed" - uv pip install -e packages/common -e packages/api -e packages/cards -e packages/apps -e packages/botbuilder -e packages/graph -e packages/ai -e packages/openai -e packages/mcpplugin -e packages/a2aprotocol -e packages/devtools + uv pip install -e packages/common -e packages/api -e packages/cards -e packages/apps -e packages/botbuilder -e packages/devtools -e packages/graph -e packages/ai -e packages/openai -e packages/mcpplugin -e packages/a2aprotocol env: SYSTEM_ACCESSTOKEN: $(System.AccessToken) target: diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 81f46a24..f45e5829 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -74,14 +74,12 @@ Microsoft Teams SDK for Python is a comprehensive SDK for building Microsoft Tea ### Basic Teams App Validation 1. Navigate to test app: `cd examples/echo` 2. Start the app: `python src/main.py` -3. **Expected output**: App starts on ports 3978 and 3979 with logs: +3. **Expected output**: App starts on port 3978 with logs: ``` [INFO] Teams app started successfully - [INFO] @teams/app.DevToolsPlugin listening on port 3979 πŸš€ ``` 4. **Test endpoints**: - Health check: `curl http://localhost:3978/` (returns `{"status":"healthy","port":3978}`) - - DevTools UI: `curl http://localhost:3979/devtools` (returns HTML page) 5. 
Stop with Ctrl+C ### Required Pre-commit Validation @@ -101,7 +99,6 @@ pyright # Type checking validation - **microsoft-teams-api**: Teams API client - **microsoft-teams-cards**: Adaptive cards support - **microsoft-teams-common**: Shared utilities -- **microsoft-teams-devtools**: Development and debugging tools - **microsoft-teams-graph**: Microsoft Graph integration - **microsoft-teams-openai**: OpenAI integration - **microsoft-teams-mcpplugin**: MCP protocol integration @@ -126,13 +123,11 @@ Available test apps for development and validation: 1. **Run commands with UV** (recommended): Use `uv run pytest packages/[package-name]` or **activate virtual environment**: `source .venv/bin/activate` 2. **Run affected tests**: `pytest packages/[package-name]` for specific package (or `uv run pytest packages/[package-name]`) 3. **Validate with test app**: Use `examples/echo` for basic functionality validation (starts a blocking server process) -4. **Check DevTools web app**: Access http://localhost:3979/devtools when app is running ### Debugging and Development -- **DevTools Web App**: Available at port 3979 when running any test app - **Logging**: Apps provide structured logging for debugging - **Hot reload**: No hot reload - restart apps after changes -- **Port conflicts**: Default ports are 3978 (main) and 3979 (devtools) +- **Port conflicts**: Default port is 3978 (main) ### CI/CD Integration The CI pipeline (`.github/workflows/ci.yml`) runs: diff --git a/.github/scripts/analyze_issue.py b/.github/scripts/analyze_issue.py index c3bb98e6..a5f292b6 100644 --- a/.github/scripts/analyze_issue.py +++ b/.github/scripts/analyze_issue.py @@ -36,7 +36,6 @@ - cards: Adaptive cards - ai: AI/function calling utilities - botbuilder: Bot Framework integration plugin -- devtools: Development tools plugin - mcpplugin: MCP server plugin - a2aprotocol: A2A protocol plugin - graph: Microsoft Graph integration diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 
70db7cce..feae05dd 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -6,10 +6,12 @@ on: pull_request: paths: - packages/** + - examples/** push: branches: ["main"] paths: - packages/** + - examples/** # Declare default permissions as read only. permissions: read-all diff --git a/CLAUDE.md b/CLAUDE.md index 756ef062..b9a9e31b 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -44,7 +44,6 @@ All packages live in `packages/`, each with `src/microsoft_teams//` lay | `cards` | Adaptive cards | | `ai` | AI/function calling utilities | | `botbuilder` | Bot Framework integration plugin | -| `devtools` | Development tools plugin | | `mcpplugin` | MCP server plugin | | `a2aprotocol` | A2A protocol plugin | | `graph` | Microsoft Graph integration | diff --git a/README.md b/README.md index 9bf9eb0f..08546465 100644 --- a/README.md +++ b/README.md @@ -62,7 +62,6 @@ A comprehensive SDK for building Microsoft Teams applications, bots, and AI agen - [`microsoft-teams-api`](./packages/api/README.md) - [`microsoft-teams-cards`](./packages/cards/README.md) - [`microsoft-teams-common`](./packages/common/README.md) -- [`microsoft-teams-devtools`](./packages/devtools/README.md) - [`microsoft-teams-graph`](./packages/graph/README.md) - [`microsoft-teams-openai`](./packages/openai/README.md) - [`microsoft-teams-botbuilder`](./packages/botbuilder/README.md) diff --git a/RELEASE.md b/RELEASE.md index 26dc224f..9e468b51 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -108,7 +108,7 @@ The [publish pipeline](https://dev.azure.com/DomoreexpGithub/Github_Pipelines/_b - **Public** β€” signs packages via ESRP and publishes to PyPI. Requires approval via the `teams-sdk-publish` ADO environment before the ESRP release proceeds. 5. Pipeline runs: Build > Test > Publish -> **Note:** The `devtools` package is excluded from publishing. The pipeline filters out packages matching the `ExcludePackageFolders` variable. 
Prerelease versions are tagged `next` on PyPI; stable versions are tagged `latest`. +> **Note:** The pipeline filters out packages matching the `ExcludePackageFolders` variable. Prerelease versions are tagged `next` on PyPI; stable versions are tagged `latest`. #### Installing Published Packages diff --git a/examples/a2a-test/README.md b/examples/a2a-test/README.md index 7ea66ada..5a5506d1 100644 --- a/examples/a2a-test/README.md +++ b/examples/a2a-test/README.md @@ -1,14 +1,117 @@ -# Sample: A2A Client and Server -a sample showcasing an a2a server / client +# Sample: Two Teams bots relaying questions via A2A + Adaptive Cards +Two symmetric Teams bots, Alice and Bob, each backed by an LLM agent. The user DMs one of them; the LLM decides whether to answer directly or forward the question to the other bot over the A2A protocol. -Open up devtools for the client, and send a message: +This sample demonstrates: + +- **LLM-driven peer routing** β€” each bot's agent reads the other's A2A `AgentCard.description` (fetched lazily via `A2ACardResolver`) and uses that to decide whether to forward. +- **Human-in-the-loop via Adaptive Cards** β€” when a peer asks, the answering bot pushes an ask-card to its human operator's 1:1; the operator types a reply and submits. +- **Async reply, folded back into chat** β€” the answer comes back over A2A and is delivered both as a reply card and as a `[peer update]` note injected into the user's LLM session, so the next turn's model sees it as context. + +## Flow + +```mermaid +sequenceDiagram + actor UA as User-A + participant A as Alice (LLM agent) + participant B as Bob (LLM agent) + actor OB as Operator-B + + UA->>A: "how do I scale my postgres database?" + Note over A: LLM reads Bob's AgentCard ("backend & infra"), picks tool:
send_to_peer("bob", "how do I scale my postgres database?")
stash awaiting_reply[qid] = User-A conv + A->>B: A2A ask {qid, question, sender, reply_url=Alice} + A-->>UA: streamed reply ("Asked Bob, will let you know…") + Note over B: validate reply_url ∈ allowlist
stash inbound_asks[qid] = {reply_url, sender, question} + B->>OB: push ask card + OB->>B: submit "use read replicas + pgbouncer" (carries qid) + Note over B: pop inbound_asks[qid] β†’ trusted reply_url + B->>A: A2A reply {qid, answer, responder} + Note over A: pop awaiting_reply[qid]
inject "[peer update] Bob replied: 'use read replicas + pgbouncer'…" into User-A's session + A->>UA: push reply card +``` + +## Files + +**Entry points** β€” start here. +- `src/bot_a.py` β€” Alice. Teams `/api/messages` and A2A `/a2a` share port **3978**. Edit the `DESCRIPTION` constant to set Alice's expertise; this becomes her A2A AgentCard description that Bob's LLM reads to decide when to forward. +- `src/bot_b.py` β€” Bob. Same layout on port **3979**. Same `DESCRIPTION` knob for Bob. + +**LLM agent** +- `src/agent.py` β€” `BotAgent` builds the `agent_framework` `Agent`, lazily fetches peer A2A cards via `A2ACardResolver`, and exposes `get_agent()`, `session_for(conv_id)`, and `record_peer_reply(...)` for the bot file to use. + +**A2A layer** +- `src/a2a_executor.py` β€” A2A server dispatch: `ask` β†’ validate `reply_url`, stash, push card to operator; `reply` β†’ push card to the original user and call `on_peer_reply`. +- `src/a2a_server.py` β€” `make_a2a_app(..., allowed_peer_urls=..., on_peer_reply=...)` wraps the executor in `A2AStarletteApplication`. +- `src/a2a_client.py` β€” `send_a2a(peer_url, data)` one-shot sender, plus `is_allowed_peer(url, allowed)` for origin-based peer URL validation. +- `src/messages.py` β€” `AskMessage` / `ReplyMessage` Pydantic models with a `kind` discriminator. + +**Cards & state** +- `src/cards.py` β€” `ask_card(sender, question, qid)` (submit carries only qid), `reply_card(...)`. +- `src/state.py` β€” `BotState` (operator conversation, outbound asks awaiting a reply, inbound asks awaiting an operator). + +## Operator model + +Each bot remembers the last **1:1** Teams conversation that messaged it (`state.operator_conv_id`). Incoming asks are pushed into that conversation. + +## Peer authorization + +The `reply_url` check in `is_allowed_peer` is a **demo-only** stand-in for authorization: a peer is trusted because its URL matches a configured origin. 
Production A2A should verify the caller's identity with a bearer token signed by an IdP or mTLS, not a self-declared URL. + +## Configuration + +Create `.env` in `examples/a2a-test/`: + +```dotenv +# Shared β€” your Microsoft tenant +TENANT_ID= + +# Azure OpenAI β€” used by both bots' LLM +AZURE_OPENAI_API_KEY= +AZURE_OPENAI_ENDPOINT= +AZURE_OPENAI_MODEL= + +# Bot A (Alice) β€” Teams app registration +BOT_A_CLIENT_ID= +BOT_A_CLIENT_SECRET= + +# Bot B (Bob) β€” Teams app registration +BOT_B_CLIENT_ID= +BOT_B_CLIENT_SECRET= + +# Optional β€” ports and A2A peer URLs (defaults shown) +# BOT_A_HOST=localhost +# BOT_A_PORT=3978 +# BOB_A2A_URL=http://localhost:3979/a2a/ +# BOT_B_HOST=localhost +# BOT_B_PORT=3979 +# ALICE_A2A_URL=http://localhost:3978/a2a/ +``` + +Each bot needs its **own** Teams app registration so DMs route to the right bot. + +## Run + +Two terminals from `examples/a2a-test/`: + +```bash +uv run python src/bot_a.py # Alice β€” Teams + A2A on 3978 +uv run python src/bot_b.py # Bob β€” Teams + A2A on 3979 +``` + +> ⚠ **DM each bot once before relaying.** The operator's conversation id is captured from the first Teams message the bot receives. If a peer ask arrives before its target has been DM'd, the target will log `no operator conversation; ask not pushed` and the card won't appear anywhere. + +### Try it + +With both bots DM'd at least once, try this transcript against Alice: ``` -C: What's the weather like? -S: Could you please specify the location for which you'd like to know the weather? -C: London -S: The weather in London is sunny -C: What's the weather like in Tokyo? -S: The weather in Tokyo is sunny -``` \ No newline at end of file +You β†’ Alice: how do I scale my postgres database? +Alice β†’ You: Asked Bob, will let you know… + (Bob's operator gets an ask card, types "use read replicas + pgbouncer", submits) +Alice β†’ You: [reply card] Bob says: use read replicas + pgbouncer +...... conversation ... 
+You β†’ Alice: how do I scale my postgres database again, i forgot ? +Alice β†’ You: ... Bob’s short recommendation earlier was: read replicas + PgBouncer. ... +``` + +The bots are symmetric β€” DM Bob with a UX question (e.g. "What's the best way to design a website?") and the same flow runs the other way (Bob's LLM forwards to Alice). diff --git a/examples/a2a-test/pyproject.toml b/examples/a2a-test/pyproject.toml index 9c97c02d..94c863f4 100644 --- a/examples/a2a-test/pyproject.toml +++ b/examples/a2a-test/pyproject.toml @@ -1,21 +1,22 @@ [project] -name = "a2a" +name = "a2a-test" version = "0.1.0" -description = "sample showcasing a2a server and client" +description = "Two Teams bots talking to each other via the official a2a-sdk, exchanging Adaptive Cards" readme = "README.md" requires-python = ">=3.12,<3.15" dependencies = [ "dotenv>=0.9.9", "microsoft-teams-apps", - "microsoft-teams-a2a", - "microsoft-teams-ai", - "microsoft-teams-openai", "microsoft-teams-common", + "microsoft-teams-cards", + "a2a-sdk[core,http-server]>=0.3.7", + "agent-framework-core", + "agent-framework-openai", + "uvicorn>=0.30", + "httpx>=0.27", ] [tool.uv.sources] microsoft-teams-apps = { workspace = true } -microsoft-teams-ai = { workspace = true } -microsoft-teams-a2a = { workspace = true } -microsoft-teams-openai = { workspace = true } microsoft-teams-common = { workspace = true } +microsoft-teams-cards = { workspace = true } diff --git a/examples/a2a-test/src/a2a_client.py b/examples/a2a-test/src/a2a_client.py new file mode 100644 index 00000000..ae43faf6 --- /dev/null +++ b/examples/a2a-test/src/a2a_client.py @@ -0,0 +1,64 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. 
+""" + +import uuid +from typing import Any +from urllib.parse import urlsplit + +import httpx +from a2a.client import A2ACardResolver, ClientConfig, ClientFactory +from a2a.types import DataPart, Message, Part, Role + +# Outbound A2A helpers + peer URL allowlist check. + +_DEFAULT_PORTS = {"http": 80, "https": 443} + + +def _origin(url: str) -> tuple[str, str, int] | None: + try: + parts = urlsplit(url) + except ValueError: + return None + scheme = parts.scheme.lower() + host = (parts.hostname or "").lower() + if not scheme or not host: + return None + port = parts.port if parts.port is not None else _DEFAULT_PORTS.get(scheme, 0) + return (scheme, host, port) + + +def is_allowed_peer(url: str, allowed: list[str]) -> bool: + # Demo-only stand-in for authorization: we trust a peer because its + # reply_url matches a configured origin. Production A2A should verify the + # caller's identity (e.g. bearer token signed by an IdP, or mTLS) rather + # than trusting a self-declared URL. Match by scheme/host/port so a + # trailing slash or default port doesn't flip a valid peer to invalid. + target = _origin(url) + if target is None: + return False + for candidate in allowed: + candidate_origin = _origin(candidate) + if candidate_origin is not None and candidate_origin == target: + return True + return False + + +async def send_a2a(peer_url: str, data: dict[str, Any]) -> None: + # Resolve the peer's agent card, build an a2a-sdk client, and fire a + # single DataPart-carrying message. We drain the response stream + # without reading it β€” the peer only sends an `ack`; any "real" + # answer comes later as a separate inbound A2A call back to us. 
+ async with httpx.AsyncClient(timeout=60.0) as http_client: + peer_card = await A2ACardResolver(httpx_client=http_client, base_url=peer_url).get_agent_card() + factory = ClientFactory(ClientConfig(httpx_client=http_client, streaming=True)) + client = factory.create(peer_card) + + request = Message( + message_id=str(uuid.uuid4()), + role=Role.user, + parts=[Part(root=DataPart(data=data))], + ) + async for _ in client.send_message(request): + pass diff --git a/examples/a2a-test/src/a2a_executor.py b/examples/a2a-test/src/a2a_executor.py new file mode 100644 index 00000000..75e42fa6 --- /dev/null +++ b/examples/a2a-test/src/a2a_executor.py @@ -0,0 +1,138 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +import logging +import uuid +from typing import Callable, Optional + +from a2a.server.agent_execution.agent_executor import AgentExecutor +from a2a.server.agent_execution.context import RequestContext +from a2a.server.events.event_queue import EventQueue +from a2a.types import ( + DataPart, + Message, + Part, + Role, + TaskState, + TaskStatus, + TaskStatusUpdateEvent, +) +from a2a_client import is_allowed_peer +from cards import ask_card, reply_card +from messages import A2AMessage, A2AMessageAdapter, AskMessage, ReplyMessage +from microsoft_teams.apps import App +from pydantic import ValidationError +from state import BotState + +OnPeerReply = Callable[[str, str, str, str], None] +# (user_conv_id, responder, question, answer) -> None + +# A2A server-side dispatch. Reads the incoming `DataPart`, branches on +# `data.kind` (`ask` vs `reply`), updates `BotState`, and builds the Teams +# card locally from the payload. + +logger = logging.getLogger(__name__) + + +def parse_a2a_message(message: Optional[Message]) -> Optional[A2AMessage]: + # A2A messages can carry multiple parts; this sample only uses one + # DataPart per message. 
Validate it against the discriminated union so + # the executor never handles a raw dict. + if message is None: + return None + for part in message.parts: + if isinstance(part.root, DataPart): + try: + return A2AMessageAdapter.validate_python(part.root.data) + except ValidationError as e: + logger.warning("invalid a2a message: %s", e) + return None + return None + + +class AskReplyExecutor(AgentExecutor): + def __init__( + self, + teams_app: App, + state: BotState, + allowed_peer_urls: list[str], + on_peer_reply: Optional[OnPeerReply] = None, + ) -> None: + self._teams_app = teams_app + self._state = state + self._allowed_peer_urls = allowed_peer_urls + self._on_peer_reply = on_peer_reply + + async def execute(self, context: RequestContext, event_queue: EventQueue) -> None: + task_id = context.task_id or str(uuid.uuid4()) + context_id = context.context_id or str(uuid.uuid4()) + message = parse_a2a_message(context.message) + + if isinstance(message, AskMessage): + await self._on_ask(message) + elif isinstance(message, ReplyMessage): + await self._on_reply(message) + + # A2A tasks need a terminal status event to close out. Our "real" + # response (if any) flows later as a separate inbound A2A message + # from the peer, so we just ack this one and finish. + ack = Message( + message_id=str(uuid.uuid4()), + role=Role.agent, + parts=[Part(root=DataPart(data={"kind": "ack"}))], + ) + await event_queue.enqueue_event( + TaskStatusUpdateEvent( + task_id=task_id, + context_id=context_id, + status=TaskStatus(state=TaskState.completed, message=ack), + final=True, + ) + ) + + async def cancel(self, context: RequestContext, event_queue: EventQueue) -> None: + await event_queue.enqueue_event( + TaskStatusUpdateEvent( + task_id=context.task_id or str(uuid.uuid4()), + context_id=context.context_id or str(uuid.uuid4()), + status=TaskStatus(state=TaskState.canceled), + final=True, + ) + ) + + async def _on_ask(self, msg: AskMessage) -> None: + # Peer is asking us a question. 
Stash routing by qid and push the + # ask card to our operator. + logger.info("[%s] received ask qid=%s from %s", self._state.name, msg.qid, msg.sender) + conv_id = self._state.operator_conv_id + if not conv_id: + logger.warning("[%s] no operator conversation; ask not pushed", self._state.name) + return + # Validate reply_url before we stash it or push a card tied to it. + if not is_allowed_peer(msg.reply_url, self._allowed_peer_urls): + logger.warning( + "[%s] rejecting ask qid=%s: reply_url %r not in allowlist", self._state.name, msg.qid, msg.reply_url + ) + return + self._state.inbound_asks[msg.qid] = { + "reply_url": msg.reply_url, + "sender": msg.sender, + "question": msg.question, + } + await self._teams_app.send(conv_id, ask_card(sender=msg.sender, question=msg.question, qid=msg.qid)) + + async def _on_reply(self, msg: ReplyMessage) -> None: + # Peer is answering a question we originally sent. Push the reply + # card to the user who asked, and let the caller (the LLM agent) + # know so it can fold the reply into the user's chat session. + pending = self._state.awaiting_reply.pop(msg.qid, None) + logger.info("[%s] received reply qid=%s", self._state.name, msg.qid) + if not pending: + logger.warning("[%s] no awaiting conversation for qid=%s", self._state.name, msg.qid) + return + card = reply_card(responder=msg.responder, question=pending["question"], answer=msg.answer, qid=msg.qid) + await self._teams_app.send(pending["conv_id"], card) + if self._on_peer_reply is not None: + self._on_peer_reply(pending["conv_id"], msg.responder, pending["question"], msg.answer) diff --git a/examples/a2a-test/src/a2a_server.py b/examples/a2a-test/src/a2a_server.py new file mode 100644 index 00000000..baf64b52 --- /dev/null +++ b/examples/a2a-test/src/a2a_server.py @@ -0,0 +1,50 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. 
+""" + +from typing import Optional + +from a2a.server.apps.jsonrpc.starlette_app import A2AStarletteApplication +from a2a.server.request_handlers.default_request_handler import DefaultRequestHandler +from a2a.server.tasks.inmemory_task_store import InMemoryTaskStore +from a2a.types import AgentCapabilities, AgentCard, AgentSkill +from a2a_executor import AskReplyExecutor, OnPeerReply +from microsoft_teams.apps import App +from state import BotState + + +def make_a2a_app( + *, + teams_app: App, + state: BotState, + description: str, + skill: str, + url: str, + allowed_peer_urls: list[str], + on_peer_reply: Optional[OnPeerReply] = None, +) -> A2AStarletteApplication: + # Builds the A2A server for this bot: an AgentCard advertising the + # skill at `url`, plus a request handler wired to AskReplyExecutor + # which dispatches incoming asks/replies into the Teams app. + agent_card = AgentCard( + name=state.name, + description=description, + url=url, + version="1.0.0", + protocol_version="0.3.0", + default_input_modes=["text"], + default_output_modes=["text"], + capabilities=AgentCapabilities(streaming=True), + skills=[AgentSkill(id=skill, name=skill, description=description, tags=[skill])], + ) + handler = DefaultRequestHandler( + agent_executor=AskReplyExecutor( + teams_app=teams_app, + state=state, + allowed_peer_urls=allowed_peer_urls, + on_peer_reply=on_peer_reply, + ), + task_store=InMemoryTaskStore(), + ) + return A2AStarletteApplication(agent_card=agent_card, http_handler=handler) diff --git a/examples/a2a-test/src/agent.py b/examples/a2a-test/src/agent.py new file mode 100644 index 00000000..4950830d --- /dev/null +++ b/examples/a2a-test/src/agent.py @@ -0,0 +1,172 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. 
+""" + +import logging +import uuid +from contextvars import ContextVar +from os import getenv +from typing import Annotated + +import httpx +from a2a.client import A2ACardResolver +from a2a.types import AgentCard +from a2a_client import send_a2a +from agent_framework import Agent, AgentSession, Message, tool +from agent_framework._sessions import InMemoryHistoryProvider +from agent_framework.openai import OpenAIChatClient +from dotenv import find_dotenv, load_dotenv +from messages import AskMessage +from pydantic import Field +from state import BotState + +# LLM-driven routing for the Teams bot. The agent has one tool, +# `send_to_peer`, whose system prompt advertises peers using the +# `description` field from each peer's A2A AgentCard (fetched lazily). + +load_dotenv(find_dotenv(usecwd=True)) +logger = logging.getLogger(__name__) + + +# Per-turn context the tool reads to know which Teams conversation it's +# serving. Set in handle_message before `agent.run(...)`. +current_user_conv_id: ContextVar[str] = ContextVar("current_user_conv_id") + + +def _require_env(name: str) -> str: + value = getenv(name) + if not value: + raise ValueError(f"Required environment variable {name!r} is not set.") + return value + + +class BotAgent: + def __init__( + self, + self_name: str, + self_a2a_url: str, + peers: dict[str, str], # peer_name -> A2A URL + state: BotState, + ) -> None: + self._self_name = self_name + self._self_a2a_url = self_a2a_url + self._peers = peers + self._state = state + self._peer_cards: dict[str, AgentCard] = {} + self._sessions: dict[str, AgentSession] = {} + self._client = OpenAIChatClient( + model=_require_env("AZURE_OPENAI_MODEL"), + azure_endpoint=_require_env("AZURE_OPENAI_ENDPOINT"), + api_key=_require_env("AZURE_OPENAI_API_KEY"), + ) + + async def _refresh_peer_cards(self) -> None: + # Lazy fetch β€” each turn, fill in any peers we couldn't reach yet. + # Once cached, never refetched (peers don't change descriptions + # at runtime in this sample). 
+ missing = [name for name in self._peers if name not in self._peer_cards] + if not missing: + return + async with httpx.AsyncClient(timeout=10.0) as http: + for name in missing: + try: + card = await A2ACardResolver(http, self._peers[name]).get_agent_card() + self._peer_cards[name] = card + logger.info("[%s] resolved peer card: %s", self._self_name, name) + except Exception as e: + logger.warning("[%s] could not resolve peer card %s: %s", self._self_name, name, e) + + def _format_peers(self) -> str: + lines: list[str] = [] + for name in self._peers: + card = self._peer_cards.get(name) + if card is None: + lines.append(f"- {name}: (peer card not yet reachable; ask cautiously)") + continue + skills = "; ".join(s.description or s.name for s in (card.skills or [])) + entry = f"- {name}: {card.description}" + if skills: + entry += f" Skills: {skills}." + lines.append(entry) + return "\n".join(lines) + + def _build_agent(self) -> Agent: + peer_names = ", ".join(self._peers) + + @tool + async def send_to_peer( + peer: Annotated[str, Field(description=f"Peer to ask. Must be one of: {peer_names}.")], + question: Annotated[str, Field(description="The natural-language question to send to the peer.")], + ) -> str: + """Forward a question to a peer agent over A2A. + + Use this when the user's question fits a peer's expertise (per their description) better than + your own. The reply arrives asynchronously (a human operator answers on the peer's side), so + this call only *queues* the question and returns immediately. + """ + peer_url = self._peers.get(peer) + if peer_url is None: + return f"Unknown peer {peer!r}. Known peers: {peer_names}." + qid = str(uuid.uuid4()) + try: + user_conv_id = current_user_conv_id.get() + except LookupError: + logger.warning("send_to_peer called outside a turn; dropping") + return "Internal error: no active conversation." 
+ self._state.awaiting_reply[qid] = {"conv_id": user_conv_id, "question": question} + msg = AskMessage(qid=qid, question=question, sender=self._self_name, reply_url=self._self_a2a_url) + await send_a2a(peer_url, msg.model_dump()) + return f"Queued question to {peer} (qid {qid[:8]}). Their reply will arrive separately." + + instructions = f"""You are {self._self_name}, a Teams bot assistant. + +You should forward questions to peer agents when their expertise fits better than your own. +Peers: +{self._format_peers()} + +Guidelines: +- If a peer is a good fit for the user's question, call `send_to_peer` with that peer's name. +- Otherwise, answer the user directly. +- When a peer reply arrives later, you'll see a "[peer update]" note in the conversation; reference it naturally. +- Keep replies short and conversational.""" + + # Force local in-memory history. OpenAIChatClient defaults to + # service-side history (Responses API). + return Agent( + client=self._client, + instructions=instructions, + tools=[send_to_peer], + context_providers=[InMemoryHistoryProvider()], + ) + + async def get_agent(self) -> Agent: + # Per-turn: ensure peer cards are loaded, then build a fresh Agent + # so its instructions reflect the latest cached descriptions. + # Agent construction is cheap (config wrapper); the LLM client and + # session caches are reused across turns. + await self._refresh_peer_cards() + return self._build_agent() + + def session_for(self, conv_id: str) -> AgentSession: + session = self._sessions.get(conv_id) + if session is None: + session = AgentSession() + self._sessions[conv_id] = session + return session + + def record_peer_reply(self, user_conv_id: str, responder: str, question: str, answer: str) -> None: + # Append a note to the user's session so the next LLM turn sees it + # as context. Uses "user" role because most providers accept arbitrary + # user-role context mid-conversation, while multiple system messages + # are sometimes rejected. 
+ session = self._sessions.get(user_conv_id) + if session is None: + logger.warning( + "[%s] no session for user_conv_id=%s; peer reply not recorded", self._self_name, user_conv_id + ) + return + note = f"[peer update] {responder} replied: {answer!r} (to your earlier question: {question!r})." + store = session.state.setdefault(InMemoryHistoryProvider.DEFAULT_SOURCE_ID, {}) + messages: list[Message] = store.setdefault("messages", []) + messages.append(Message("user", [note])) diff --git a/examples/a2a-test/src/bot_a.py b/examples/a2a-test/src/bot_a.py new file mode 100644 index 00000000..ef6b1b59 --- /dev/null +++ b/examples/a2a-test/src/bot_a.py @@ -0,0 +1,124 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +import asyncio +import logging +from os import getenv + +import uvicorn +from a2a_client import send_a2a +from a2a_server import make_a2a_app +from agent import BotAgent, current_user_conv_id +from cards import ASK_REPLY_ACTION +from dotenv import find_dotenv, load_dotenv +from fastapi import FastAPI +from messages import ReplyMessage +from microsoft_teams.api import AdaptiveCardInvokeActivity, MessageActivity +from microsoft_teams.api.models.adaptive_card import AdaptiveCardActionMessageResponse +from microsoft_teams.api.models.invoke_response import AdaptiveCardInvokeResponse +from microsoft_teams.apps import ActivityContext, App, FastAPIAdapter +from microsoft_teams.common import ConsoleFormatter +from state import BotState + +# Bot A (Alice) β€” Teams bot + A2A server sharing one FastAPI + uvicorn. +# Teams `/api/messages` and the A2A routes (mounted at `/a2a`) live on the +# same HTTP server, so Alice only occupies port 3978. +# +# - Inbound user message β†’ agent.run() streams reply; the agent may call the +# `send_to_peer` tool, which queues an A2A ask to Bob. +# - When Bob's operator answers, Bob sends a reply over A2A. 
Alice's executor +# pushes a reply card to the user *and* injects a "[peer update]" note into +# the user's session so the next LLM turn knows about it. +# - When Alice receives an ask from Bob, her executor pushes an ask card into +# Alice's current operator's 1:1 conversation. The operator fills it in and +# submits β†’ Alice's card-action handler sends the reply back over A2A. + +load_dotenv(find_dotenv(usecwd=True)) + +NAME = "Alice" +HOST = getenv("BOT_A_HOST", "localhost") +PORT = int(getenv("BOT_A_PORT", "3978")) +SELF_A2A_URL = f"http://{HOST}:{PORT}/a2a/" +BOB_URL = getenv("BOB_A2A_URL", "http://localhost:3979/a2a/") +ALLOWED_PEER_URLS = [BOB_URL] + +logging.getLogger().setLevel(logging.INFO) +_handler = logging.StreamHandler() +_handler.setFormatter(ConsoleFormatter()) +logging.getLogger().addHandler(_handler) +logger = logging.getLogger(__name__) + +fastapi_app = FastAPI() +app = App( + http_server_adapter=FastAPIAdapter(app=fastapi_app), + client_id=getenv("BOT_A_CLIENT_ID"), + client_secret=getenv("BOT_A_CLIENT_SECRET"), + tenant_id=getenv("TENANT_ID"), +) +state = BotState(name=NAME) +bot_agent = BotAgent(self_name=NAME, self_a2a_url=SELF_A2A_URL, peers={"bob": BOB_URL}, state=state) + +# Description goes into Alice's A2A AgentCard. Peers' LLMs read it to +# decide whether to forward a question. Tweak to match your scenario. +DESCRIPTION = "Alice β€” a Teams bot whose human operator answers design and UX questions." + + +@app.on_message +async def handle_message(ctx: ActivityContext[MessageActivity]): + text = (ctx.activity.text or "").strip() + conv_id = ctx.activity.conversation.id + # Only 1:1 conversations become the operator channel for inbound asks. 
+ if ctx.activity.conversation.conversation_type == "personal": + state.operator_conv_id = conv_id + + agent = await bot_agent.get_agent() + session = bot_agent.session_for(conv_id) + current_user_conv_id.set(conv_id) + async for chunk in agent.run(text, session=session, stream=True): + if chunk.text: + ctx.stream.emit(chunk.text) + + +# Operator clicked Send reply on an ask card we'd pushed them. Look up the +# original peer by qid and forward the answer back over A2A. +@app.on_card_action_execute(ASK_REPLY_ACTION) +async def handle_reply_submit(ctx: ActivityContext[AdaptiveCardInvokeActivity]) -> AdaptiveCardInvokeResponse: + d = ctx.activity.value.action.data + qid = d.get("qid", "") + answer_text = d.get("answer", "") + + pending = state.inbound_asks.pop(qid, None) + if pending is None: + logger.warning("[%s] no pending inbound ask for qid=%s", NAME, qid) + return AdaptiveCardActionMessageResponse(value="Reply not sent: no matching ask.") + + reply_url = pending["reply_url"] + sender = pending["sender"] + + reply = ReplyMessage(qid=qid, answer=answer_text, responder=NAME) + await send_a2a(reply_url, reply.model_dump()) + return AdaptiveCardActionMessageResponse(value=f"Reply sent back to {sender}.") + + +async def main() -> None: + # Mount the A2A Starlette sub-app on the shared FastAPI instance so the + # Teams `/api/messages` endpoint and A2A routes are served by one uvicorn. 
+ a2a_app = make_a2a_app( + teams_app=app, + state=state, + description=DESCRIPTION, + skill="ask_reply", + url=SELF_A2A_URL, + allowed_peer_urls=ALLOWED_PEER_URLS, + on_peer_reply=bot_agent.record_peer_reply, + ) + fastapi_app.mount("/a2a", a2a_app.build()) + await app.initialize() + server = uvicorn.Server(uvicorn.Config(fastapi_app, host=HOST, port=PORT, log_level="info")) + await server.serve() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/examples/a2a-test/src/bot_b.py b/examples/a2a-test/src/bot_b.py new file mode 100644 index 00000000..6a3fb7a2 --- /dev/null +++ b/examples/a2a-test/src/bot_b.py @@ -0,0 +1,113 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +import asyncio +import logging +from os import getenv + +import uvicorn +from a2a_client import send_a2a +from a2a_server import make_a2a_app +from agent import BotAgent, current_user_conv_id +from cards import ASK_REPLY_ACTION +from dotenv import find_dotenv, load_dotenv +from fastapi import FastAPI +from messages import ReplyMessage +from microsoft_teams.api import AdaptiveCardInvokeActivity, MessageActivity +from microsoft_teams.api.models.adaptive_card import AdaptiveCardActionMessageResponse +from microsoft_teams.api.models.invoke_response import AdaptiveCardInvokeResponse +from microsoft_teams.apps import ActivityContext, App, FastAPIAdapter +from microsoft_teams.common import ConsoleFormatter +from state import BotState + +# Bot B (Bob) β€” symmetric with Bot A. See bot_a.py for the flow description. 
+ +load_dotenv(find_dotenv(usecwd=True)) + +NAME = "Bob" +HOST = getenv("BOT_B_HOST", "localhost") +PORT = int(getenv("BOT_B_PORT", "3979")) +SELF_A2A_URL = f"http://{HOST}:{PORT}/a2a/" +ALICE_URL = getenv("ALICE_A2A_URL", "http://localhost:3978/a2a/") +ALLOWED_PEER_URLS = [ALICE_URL] + +logging.getLogger().setLevel(logging.INFO) +_handler = logging.StreamHandler() +_handler.setFormatter(ConsoleFormatter()) +logging.getLogger().addHandler(_handler) +logger = logging.getLogger(__name__) + +fastapi_app = FastAPI() +app = App( + http_server_adapter=FastAPIAdapter(app=fastapi_app), + client_id=getenv("BOT_B_CLIENT_ID"), + client_secret=getenv("BOT_B_CLIENT_SECRET"), + tenant_id=getenv("TENANT_ID"), +) +state = BotState(name=NAME) +bot_agent = BotAgent(self_name=NAME, self_a2a_url=SELF_A2A_URL, peers={"alice": ALICE_URL}, state=state) + +# Description goes into Bob's A2A AgentCard. Peers' LLMs read it to +# decide whether to forward a question. Tweak to match your scenario. +DESCRIPTION = "Bob β€” a Teams bot whose human operator answers backend and infrastructure questions." + + +@app.on_message +async def handle_message(ctx: ActivityContext[MessageActivity]): + text = (ctx.activity.text or "").strip() + conv_id = ctx.activity.conversation.id + # Only 1:1 conversations become the operator channel for inbound asks. + if ctx.activity.conversation.conversation_type == "personal": + state.operator_conv_id = conv_id + + agent = await bot_agent.get_agent() + session = bot_agent.session_for(conv_id) + current_user_conv_id.set(conv_id) + async for chunk in agent.run(text, session=session, stream=True): + if chunk.text: + ctx.stream.emit(chunk.text) + + +# Operator clicked Send reply on an ask card we'd pushed them. Look up the +# original peer by qid and forward the answer back over A2A. 
+@app.on_card_action_execute(ASK_REPLY_ACTION) +async def handle_reply_submit(ctx: ActivityContext[AdaptiveCardInvokeActivity]) -> AdaptiveCardInvokeResponse: + d = ctx.activity.value.action.data + qid = d.get("qid", "") + answer_text = d.get("answer", "") + + pending = state.inbound_asks.pop(qid, None) + if pending is None: + logger.warning("[%s] no pending inbound ask for qid=%s", NAME, qid) + return AdaptiveCardActionMessageResponse(value="Reply not sent: no matching ask.") + + reply_url = pending["reply_url"] + sender = pending["sender"] + + reply = ReplyMessage(qid=qid, answer=answer_text, responder=NAME) + await send_a2a(reply_url, reply.model_dump()) + return AdaptiveCardActionMessageResponse(value=f"Reply sent back to {sender}.") + + +async def main() -> None: + # Mount the A2A Starlette sub-app on the shared FastAPI instance so the + # Teams `/api/messages` endpoint and A2A routes are served by one uvicorn. + a2a_app = make_a2a_app( + teams_app=app, + state=state, + description=DESCRIPTION, + skill="ask_reply", + url=SELF_A2A_URL, + allowed_peer_urls=ALLOWED_PEER_URLS, + on_peer_reply=bot_agent.record_peer_reply, + ) + fastapi_app.mount("/a2a", a2a_app.build()) + await app.initialize() + server = uvicorn.Server(uvicorn.Config(fastapi_app, host=HOST, port=PORT, log_level="info")) + await server.serve() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/examples/a2a-test/src/cards.py b/examples/a2a-test/src/cards.py new file mode 100644 index 00000000..25bea54f --- /dev/null +++ b/examples/a2a-test/src/cards.py @@ -0,0 +1,54 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +from microsoft_teams.cards import ( + ActionSet, + AdaptiveCard, + ExecuteAction, + SubmitData, + TextBlock, +) +from microsoft_teams.cards.core import TextInput + +# Adaptive Card builders. 
The ask card's submit returns only `qid`; the bot +# resolves reply routing from server state (card data is client-tamperable). +ASK_REPLY_ACTION = "ask_reply" + + +def ask_card(sender: str, question: str, qid: str) -> AdaptiveCard: + # Shown to the operator of the receiving bot. Operator types an answer + # and clicks Send reply, which fires an Action.Execute back to the bot. + return AdaptiveCard( + schema="http://adaptivecards.io/schemas/adaptive-card.json", + version="1.4", + body=[ + TextBlock(text=f"From {sender}", weight="Bolder", size="Medium"), + TextBlock(text=question, wrap=True), + TextInput(id="answer").with_label("Your answer").with_placeholder("Type here…"), + ActionSet( + actions=[ + ExecuteAction(title="Send reply") + .with_data(SubmitData(action=ASK_REPLY_ACTION, data={"qid": qid})) + .with_associated_inputs("auto") + ] + ), + TextBlock(text=f"qid: {qid}", is_subtle=True, size="Small"), + ], + ) + + +def reply_card(responder: str, question: str, answer: str, qid: str) -> AdaptiveCard: + # Shown to the user who originally asked, once the peer's operator has + # answered. Display-only β€” no submit action. + return AdaptiveCard( + schema="http://adaptivecards.io/schemas/adaptive-card.json", + version="1.4", + body=[ + TextBlock(text=f"{responder} replies", weight="Bolder", size="Medium"), + TextBlock(text=f"You asked: {question}", is_subtle=True, wrap=True), + TextBlock(text=answer, wrap=True), + TextBlock(text=f"qid: {qid}", is_subtle=True, size="Small"), + ], + ) diff --git a/examples/a2a-test/src/main.py b/examples/a2a-test/src/main.py deleted file mode 100644 index 05c6ecf6..00000000 --- a/examples/a2a-test/src/main.py +++ /dev/null @@ -1,234 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. 
-""" - -import asyncio -import logging -import re -import uuid -from os import getenv -from typing import List, Union, cast - -from a2a.types import AgentCapabilities, AgentCard, AgentSkill, Message, Part, Role, TextPart -from microsoft_teams.a2a import ( - A2AClientPlugin, - A2AMessageEvent, - A2AMessageEventKey, - A2APlugin, - A2APluginOptions, - A2APluginUseParams, - BuildMessageForAgentMetadata, - BuildMessageFromAgentMetadata, - FunctionMetadata, -) -from microsoft_teams.ai import ChatPrompt, Function, ModelMessage -from microsoft_teams.api import MessageActivity, TypingActivityInput -from microsoft_teams.apps import ActivityContext, App, PluginBase -from microsoft_teams.common import ConsoleFormatter -from microsoft_teams.devtools import DevToolsPlugin -from microsoft_teams.openai.completions_model import OpenAICompletionsAIModel -from pydantic import BaseModel - -PORT = getenv("PORT", "4000") - -# Setup logging -logging.getLogger().setLevel(logging.DEBUG) -stream_handler = logging.StreamHandler() -stream_handler.setFormatter(ConsoleFormatter()) -logging.getLogger().addHandler(stream_handler) -logger = logging.getLogger(__name__) - - -# Setup AI -def get_required_env(key: str) -> str: - value = getenv(key) - if not value: - raise ValueError(f"Required environment variable {key} is not set") - return value - - -AZURE_OPENAI_MODEL = get_required_env("AZURE_OPENAI_MODEL") -completions_model = OpenAICompletionsAIModel(model=AZURE_OPENAI_MODEL) - -# Setup A2A Client Plugin -client_plugin = A2AClientPlugin() -# Specify the connection details for the agent we want to use -client_plugin.on_use_plugin( - A2APluginUseParams( - key="my-weather-agent", base_url=f"http://localhost:{PORT}/a2a", card_url=".well-known/agent-card.json" - ) -) -prompt = ChatPrompt( - model=completions_model, - plugins=[client_plugin], -) - - -def build_function_metadata(card: AgentCard) -> FunctionMetadata: - return FunctionMetadata( - name=f"ask{re.sub(r'\s+', '', card.name)}", - 
description=f"Ask {card.name} about {card.description or 'anything'}", - ) - - -def build_message_for_agent(data: BuildMessageForAgentMetadata) -> Union[Message, str]: - # Return a string - will be automatically wrapped in a Message - return f"[To {data.card.name}]: {data.input}" - - # Uncomment the following block to return a full Message object - # message = Message( - - -# kind='message', -# message_id=str(uuid4()), -# role=Role('user'), -# parts=[Part(root=TextPart(kind='text', text=f"[To {data.card.name}]: {data.input}"))], -# metadata={"source": "chat-prompt", **(data.metadata if data.metadata else {})} -# ) -# return message - - -def build_message_from_agent_response(data: BuildMessageFromAgentMetadata) -> str: - if isinstance(data.response, Message): - text_parts: List[str] = [] - for part in data.response.parts: - if getattr(part.root, "kind", None) == "text": - text_part = cast(TextPart, part.root) - text_parts.append(text_part.text) - return f"{data.card.name} says: {' '.join(text_parts)}" - return f"{data.card.name} sent a non-text response." 
- - -## Advanced A2AClientPlugin -advanced_plugin = A2AClientPlugin( - # Custom function metadata builder - build_function_metadata=build_function_metadata, - # Custom message builder - can return either Message or string - build_message_for_agent=build_message_for_agent, - # Custom response processor - build_message_from_agent_response=build_message_from_agent_response, -) -advanced_plugin.on_use_plugin( - A2APluginUseParams( - key="my-weather-agent", base_url=f"http://localhost:{PORT}/a2a", card_url=".well-known/agent-card.json" - ) -) -advanced_prompt = ChatPrompt(model=completions_model, plugins=[advanced_plugin]) - -# A2A Server Agent Card -agent_card = AgentCard( - name="weather_agent", - description="An agent that can tell you the weather", - url=f"http://localhost:{PORT}/a2a/", - version="0.0.1", - protocol_version="0.3.0", - capabilities=AgentCapabilities(), - default_input_modes=[], - default_output_modes=[], - skills=[ - AgentSkill( - # Expose various skills that this agent can perform - id="get_weather", - name="Get Weather", - description="Get the weather for a given location", - tags=["weather", "get", "location"], - examples=[ - # Give concrete examples on how to contact the agent - "Get the weather for London", - "What is the weather", - "What's the weather in Tokyo?", - "How is the current temperature in San Francisco?", - ], - ), - ], -) - - -# Define the parameter for A2AServer function -class LocationParams(BaseModel): - location: str - "The location to get the weather for" - - -# Setup the A2A Server Plugin -plugins: List[PluginBase] = [A2APlugin(A2APluginOptions(agent_card=agent_card)), DevToolsPlugin()] -app = App(plugins=plugins) - - -# A2A Server Event Handler -async def my_event_handler(user_message: str) -> Union[Message, str]: - logger.info(f"Received message: {user_message}") - tool_location = None - - async def location_handler(params: LocationParams) -> str: - nonlocal tool_location - tool_location = params.location - return f"The 
weather in {params.location} is sunny" - - result = ( - await ChatPrompt(model=completions_model) - .with_function( - Function( - name="location", - description="The location to get the weather for", - parameter_schema=LocationParams, - handler=location_handler, - ) - ) - .send(user_message, instructions="You are a weather agent that can tell you the weather for a given location") - ) - - if not tool_location: - return Message( - kind="message", - message_id=str(uuid.uuid4()), - role=Role("agent"), - parts=[Part(root=TextPart(kind="text", text="Please provide a location"))], - ) - else: - return result.response.content if result.response.content else "No weather information available." - - -# A2A Server Message Event Handler -@app.event(A2AMessageEventKey) -async def handle_a2a_message(message: A2AMessageEvent) -> None: - request_context = message.get("request_context") - respond = message.get("respond") - - logger.info(f"Received message: {request_context.message}") - - if request_context.message: - text_input = None - for part in request_context.message.parts: - if getattr(part.root, "kind", None) == "text": - text_part = cast(TextPart, part.root) - text_input = text_part.text - break - if not text_input: - await respond("My agent currently only supports text input") - return - - result = await my_event_handler(text_input) - await respond(result) - - -async def handler(message: str) -> ModelMessage: - # Now we can send the message to the prompt and it will decide if - # the a2a agent should be used or not and also manages contacting the agent - result = await prompt.send(message) - return result.response - - -# A2A Client Message Handler -@app.on_message -async def handle_message(ctx: ActivityContext[MessageActivity]): - await ctx.reply(TypingActivityInput()) - - result = await handler(ctx.activity.text) - if result.content: - await ctx.send(result.content) - - -if __name__ == "__main__": - asyncio.run(app.start()) diff --git a/examples/a2a-test/src/messages.py 
b/examples/a2a-test/src/messages.py new file mode 100644 index 00000000..d8013141 --- /dev/null +++ b/examples/a2a-test/src/messages.py @@ -0,0 +1,30 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +from typing import Annotated, Literal, Union + +from pydantic import BaseModel, Field, TypeAdapter + +# A2A message shapes exchanged between Alice and Bob. `kind` discriminates +# between an outbound question (`ask`) and the peer's answer (`reply`). + + +class AskMessage(BaseModel): + kind: Literal["ask"] = "ask" + qid: str + question: str + sender: str + reply_url: str + + +class ReplyMessage(BaseModel): + kind: Literal["reply"] = "reply" + qid: str + answer: str + responder: str + + +A2AMessage = Annotated[Union[AskMessage, ReplyMessage], Field(discriminator="kind")] +A2AMessageAdapter: TypeAdapter[A2AMessage] = TypeAdapter(A2AMessage) diff --git a/examples/a2a-test/src/state.py b/examples/a2a-test/src/state.py new file mode 100644 index 00000000..800fd87c --- /dev/null +++ b/examples/a2a-test/src/state.py @@ -0,0 +1,27 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +from dataclasses import dataclass, field +from typing import Any, Optional + + +@dataclass +class BotState: + """In-memory state for one bot. Single-process; fine for a sample.""" + + name: str + # Conversation id of the last Teams user to DM this bot β€” the human + # operator who will answer incoming A2A asks. + operator_conv_id: Optional[str] = None + + # Asks this bot initiated, keyed by qid. Each value: {conv_id, question}. + # When the peer's reply lands, we pop the qid and push the reply card into + # the stashed conversation. + awaiting_reply: dict[str, dict[str, Any]] = field(default_factory=dict[str, dict[str, Any]]) + + # Inbound asks waiting for an operator reply, keyed by qid. + # Value: {reply_url, sender, question}. 
Populated at A2A ingress; + # the reply handler resolves routing from here. + inbound_asks: dict[str, dict[str, Any]] = field(default_factory=dict[str, dict[str, Any]]) diff --git a/examples/ai-agentframework/README.md b/examples/ai-agentframework/README.md new file mode 100644 index 00000000..c0b48197 --- /dev/null +++ b/examples/ai-agentframework/README.md @@ -0,0 +1,92 @@ +> [!CAUTION] +> This project is in public preview. We'll do our best to maintain compatibility, but there may be breaking changes in upcoming releases. + +# Teams AI Agent (agent-framework) + +A Teams bot powered by [agent-framework](https://github.com/microsoft/agent-framework) and Azure OpenAI. Supports streaming responses, inline citations from MCP search results, per-conversation memory, and an Adaptive Card local tool alongside remote MCP servers. + +## Features + +- **Streaming responses** β€” text streams token-by-token into Teams as the model generates it +- **Citations** β€” sources from MCP search tools are attached as clickable references in the reply +- **Conversation memory** β€” each conversation maintains its own session so the agent remembers context across turns +- **AI-generated label + custom feedback** β€” replies include the Teams "AI-generated" label and thumbs up/down feedback buttons; clicking a reaction opens a custom Adaptive Card form for additional feedback +- **Welcome card tool** β€” a local `@tool` the agent calls to greet users with an Adaptive Card +- **MCP tools** β€” remote tool servers: Microsoft Learn docs search + +## Prerequisites + +- Python >= 3.12, < 3.15 +- UV >= 0.8.11 +- An Azure OpenAI resource with a deployed model +- A Teams bot registration (App ID + password) + +## Setup + +Create a `.env` file in `examples/ai-agentframework/`: + +```env +# Azure OpenAI +AZURE_OPENAI_ENDPOINT=https://.openai.azure.com +AZURE_OPENAI_MODEL= +AZURE_OPENAI_API_KEY= + +# Teams bot credentials +CLIENT_ID= +TENANT_ID= +CLIENT_SECRET= +``` + +`AZURE_OPENAI_MODEL` is 
the **deployment name** of your model, not the base model name. + +### Using a Service Principal for Azure OpenAI instead of an API key + +`agent.py` authenticates to Azure OpenAI with `AZURE_OPENAI_API_KEY`. If you'd rather use the bot's Service Principal, swap `api_key` for a `ClientSecretCredential`: + +```python +from azure.identity import ClientSecretCredential + +client = OpenAIChatClient( + model=getenv("AZURE_OPENAI_MODEL"), + azure_endpoint=getenv("AZURE_OPENAI_ENDPOINT"), + credential=ClientSecretCredential( + tenant_id=getenv("TENANT_ID"), + client_id=getenv("CLIENT_ID"), + client_secret=getenv("CLIENT_SECRET"), + ), +) +``` + +Then drop `AZURE_OPENAI_API_KEY` from `.env` and grant the Service Principal the **Azure AI User** role on the Azure OpenAI resource. + +### Teams bot registration + +Follow the standard Teams bot setup to get a bot App ID and password, and configure the messaging endpoint to point at this bot (e.g. via [Dev Tunnels](https://learn.microsoft.com/azure/developer/dev-tunnels/overview) for local development). + +## Running + +```bash +cd examples/ai-agentframework +uv run src/main.py +``` + +## Example interactions + +Once the bot is running in a Teams chat, try: + +- `Hi! My name is Alex!` β€” agent calls `send_welcome_card` and greets you with an Adaptive Card +- `How do I stream in teams.py?` β€” searches Microsoft Learn docs (MCP) with inline citations +- `How do I send a proactive message in teams.py?` β€” searches Microsoft Learn docs (MCP) with inline citations + +## Architecture + +``` +main.py β€” Teams App, message handler, streaming, citations, card attachment +agent.py β€” Agent setup, OpenAIChatClient, AgentMiddleware +local_tools.py β€” @tool functions (welcome card) +mcp_tools.py β€” MCP server declarations (remote tool servers) +``` + +`main.py` streams every response with `agent.run(..., stream=True)`. Citations collected during tool calls, and any cards queued by local tools, are attached to the final activity. 
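The `[N]`-marker filtering mentioned above can be sketched roughly like this (a standalone illustration with simplified names and data shapes, not the sample's exact API):

```python
import re


def referenced_citations(reply_text: str, citations: dict[str, dict]) -> list[dict]:
    """Keep only the citations whose [N] marker actually appears in the reply text."""
    used = {int(n) for n in re.findall(r"\[(\d+)\]", reply_text)}
    return [c for c in citations.values() if c["position"] in used]


# Example: only position 1 is referenced in the text, so position 2 is dropped.
cites = {
    "https://a.example": {"position": 1, "title": "A"},
    "https://b.example": {"position": 2, "title": "B"},
}
print(referenced_citations("See docs [1].", cites))  # β†’ [{'position': 1, 'title': 'A'}]
```

The same idea appears in the sample's `_attach_citations`: unreferenced sources are never attached, so the Teams reply only lists citations the model actually used.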
+ +`AgentMiddleware` intercepts every tool call to log it and extract citation URLs from MCP search results. Citations are filtered to only those the model actually referenced with `[N]` markers before being attached to the Teams reply. diff --git a/examples/ai-agentframework/pyproject.toml b/examples/ai-agentframework/pyproject.toml new file mode 100644 index 00000000..90159a84 --- /dev/null +++ b/examples/ai-agentframework/pyproject.toml @@ -0,0 +1,15 @@ +[project] +name = "ai-agentframework" +version = "0.1.0" +description = "Microsoft Teams bot using the agent-framework library" +readme = "README.md" +requires-python = ">=3.12,<3.15" +dependencies = [ + "dotenv>=0.9.9", + "microsoft-teams-apps", + "agent-framework-core", + "agent-framework-openai" +] + +[tool.uv.sources] +microsoft-teams-apps = { workspace = true } \ No newline at end of file diff --git a/examples/ai-agentframework/src/agent.py b/examples/ai-agentframework/src/agent.py new file mode 100644 index 00000000..a6bbfff0 --- /dev/null +++ b/examples/ai-agentframework/src/agent.py @@ -0,0 +1,97 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +import json +import logging +from collections.abc import Awaitable, Callable +from os import getenv +from typing import Any, cast + +from agent_framework import Agent, FunctionInvocationContext, FunctionMiddleware +from agent_framework.openai import OpenAIChatClient +from dotenv import find_dotenv, load_dotenv +from local_tools import tools as local_tools +from mcp_tools import mcp_tools + +load_dotenv(find_dotenv(usecwd=True)) + +logger = logging.getLogger(__name__) +logger.setLevel(logging.INFO) + + +class AgentMiddleware(FunctionMiddleware): + """Logs every tool call and extracts MCP citations from results. + + citations is reset at the start of each message turn and populated as tools run. + Only URLs from results matching { results: [{ contentUrl, title, content }] } are collected. 
+ """ + + citations: dict[str, Any] + + async def process(self, context: FunctionInvocationContext, call_next: Callable[[], Awaitable[None]]) -> None: + logger.info("tool call: %s(%s)", context.function.name, context.arguments) + await call_next() + result: Any = context.result + if isinstance(result, list): + blocks = cast("list[Any]", result) + result = " ".join(str(c.text) for c in blocks if getattr(c, "text", None)) + logger.info("tool result: %s -> %s", context.function.name, result) + + try: + parsed = json.loads(result) + except (json.JSONDecodeError, TypeError) as e: + logger.debug("citation extraction skipped for %s: %s", context.function.name, e) + return + if not isinstance(parsed, dict): + return + parsed = cast("dict[str, Any]", parsed) + + for item in cast("list[dict[str, Any]]", parsed.get("results", [])): + url = item.get("contentUrl") or item.get("link") + if not url: + continue + entry = self.citations.setdefault( + url, + { + "position": len(self.citations) + 1, + "url": url, + "title": item.get("title") or "", + "snippet": (item.get("content") or item.get("description") or "")[:160], + }, + ) + item["citation"] = f"[{entry['position']}]" + context.result = json.dumps(parsed) + + +def _require_env(name: str) -> str: + value = getenv(name) + if not value: + raise ValueError(f"Required environment variable {name!r} is not set.") + return value + + +client = OpenAIChatClient( + model=_require_env("AZURE_OPENAI_MODEL"), + azure_endpoint=_require_env("AZURE_OPENAI_ENDPOINT"), + api_key=_require_env("AZURE_OPENAI_API_KEY"), +) + +INSTRUCTIONS = """\ +You are a helpful Teams assistant with access to local tools and remote MCP servers. + +Always greet new users with a welcome card. + +When you use information from a search tool, cite your sources inline using the "citation" value \ +provided in each result (e.g. [1], [2]). +Do not add a references or sources list at the end of your response β€” citations are displayed separately in the UI. 
+""" + +tool_logger = AgentMiddleware() +agent = Agent( + client=client, + instructions=INSTRUCTIONS, + tools=[*local_tools, *mcp_tools], + middleware=[tool_logger], +) diff --git a/examples/ai-agentframework/src/local_tools.py b/examples/ai-agentframework/src/local_tools.py new file mode 100644 index 00000000..8f559c62 --- /dev/null +++ b/examples/ai-agentframework/src/local_tools.py @@ -0,0 +1,44 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +from contextvars import ContextVar +from typing import Annotated + +from agent_framework import tool +from microsoft_teams.cards import AdaptiveCard, Fact, FactSet, TextBlock +from pydantic import Field + +# Per-turn card bucket. main.py sets a fresh list at the start of each handler so concurrent turns +# don't clobber each other. The tool appends into whichever list is active in its context. +pending_cards: ContextVar[list[AdaptiveCard] | None] = ContextVar("pending_cards", default=None) + + +@tool +async def send_welcome_card( + greeting: Annotated[str, Field(description="The greeting message for the user. eg Hello, John! or Welcome!")], +) -> str: + """Attach a welcome card with a capabilities overview.""" + cards = pending_cards.get() + if cards is None: + return "No active turn context; card could not be attached." + card = AdaptiveCard(version="1.5").with_body( + [ + TextBlock(text=f"{greeting} Here are some things I can do:", size="Large", weight="Bolder", wrap=True), + FactSet( + facts=[ + Fact(title="Docs", value="Microsoft Learn search with citations"), + Fact(title="Streaming", value="Token-by-token replies"), + Fact(title="Memory", value="Per-conversation context"), + Fact(title="Feedback", value="Thumbs up/down with a follow-up form"), + ] + ), + ] + ) + + cards.append(card) + return "Card attached." 
+ + +tools = [send_welcome_card] diff --git a/examples/ai-agentframework/src/main.py b/examples/ai-agentframework/src/main.py new file mode 100644 index 00000000..e102ea09 --- /dev/null +++ b/examples/ai-agentframework/src/main.py @@ -0,0 +1,133 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +import asyncio +import logging +import re +from os import getenv + +from agent import agent, tool_logger +from agent_framework import AgentSession +from local_tools import pending_cards +from microsoft_teams.api import ( + AdaptiveCardAttachment, + CardAction, + CardActionType, + CardTaskModuleTaskInfo, + CitationAppearance, + MessageActivity, + MessageActivityInput, + MessageFetchTaskInvokeActivity, + MessageSubmitActionInvokeActivity, + SuggestedActions, + TaskModuleContinueResponse, + TaskModuleInvokeResponse, + card_attachment, +) +from microsoft_teams.apps import ActivityContext, App +from microsoft_teams.cards import AdaptiveCard, SubmitAction, TextBlock, TextInput + +logging.basicConfig(level=getenv("LOG_LEVEL", "INFO").upper()) +logger = logging.getLogger(__name__) + +# App is the Teams bot host for this example. +app = App() + +# Per-conversation sessions preserve message history across turns. 
+_sessions: dict[str, AgentSession] = {} + +_SUGGESTED_PROMPTS = [ + CardAction(type=CardActionType.IM_BACK, title="How do I stream in teams.py?", value="How do I stream in teams.py?"), + CardAction( + type=CardActionType.IM_BACK, + title="How do I create an Adaptive Card in teams.py?", + value="How do I create an Adaptive Card in teams.py?", + ), +] + + +@app.on_message +async def handle_message(ctx: ActivityContext[MessageActivity]): + conversation_id = ctx.activity.conversation.id + if conversation_id not in _sessions: + _sessions[conversation_id] = agent.create_session() + + text = ctx.activity.text or "" + tool_logger.citations = {} + cards: list[AdaptiveCard] = [] + pending_cards.set(cards) + + full_text = "" + async for chunk in agent.run(text, session=_sessions[conversation_id], stream=True): + if chunk.text: + ctx.stream.emit(chunk.text) + full_text += chunk.text + + reply = _build_reply(full_text, cards, ctx) + ctx.stream.emit(reply) + + +def _build_reply( + full_text: str, cards: list[AdaptiveCard], ctx: ActivityContext[MessageActivity] +) -> MessageActivityInput: + # add_ai_generated() adds the "AI-generated" label; add_feedback() enables thumbs up/down. 
+ reply = MessageActivityInput().add_ai_generated().add_feedback(mode="custom") + for card in cards: + reply.add_card(card) + _attach_citations(reply, full_text) + reply.with_suggested_actions(SuggestedActions(to=[ctx.activity.from_.id], actions=_SUGGESTED_PROMPTS)) + return reply + + +def _attach_citations(reply: MessageActivityInput, full_text: str) -> None: + """Attach citations from tool_logger that were referenced in the reply text.""" + used_positions = {int(n) for n in re.findall(r"\[(\d+)\]", full_text)} + for annotation in tool_logger.citations.values(): + pos = annotation["position"] + if pos in used_positions: + reply.add_citation( + position=pos, + appearance=CitationAppearance( + name=annotation.get("title") or f"Source {pos}", + abstract=annotation.get("snippet") or "No description available.", + url=annotation.get("url"), + ), + ) + + +@app.on_message_fetch_task +async def handle_feedback_fetch_task( + ctx: ActivityContext[MessageFetchTaskInvokeActivity], +) -> TaskModuleInvokeResponse: + reaction = ctx.activity.value.data.action_value.reaction + card = ( + AdaptiveCard(version="1.4") + .with_body( + [ + TextBlock(text=f"You clicked {reaction}. 
Tell us more:"), + TextInput(id="feedbackText", placeholder="Enter your feedback here...", is_multiline=True), + ] + ) + .with_actions([SubmitAction(title="Submit")]) + ) + return TaskModuleInvokeResponse( + task=TaskModuleContinueResponse( + value=CardTaskModuleTaskInfo( + title="Feedback", + card=card_attachment(AdaptiveCardAttachment(content=card)), + ) + ) + ) + + +@app.on_message_submit_feedback +async def handle_feedback(ctx: ActivityContext[MessageSubmitActionInvokeActivity]): + reaction = ctx.activity.value.action_value.reaction + feedback = ctx.activity.value.action_value.feedback + logger.info("feedback: %s | %s", reaction, feedback) + + +if __name__ == "__main__": + asyncio.run(app.start()) diff --git a/examples/ai-agentframework/src/mcp_tools.py b/examples/ai-agentframework/src/mcp_tools.py new file mode 100644 index 00000000..3a7de94b --- /dev/null +++ b/examples/ai-agentframework/src/mcp_tools.py @@ -0,0 +1,10 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +from agent_framework import MCPStreamableHTTPTool + +mcp_tools = [ + MCPStreamableHTTPTool(name="MSLearn", url="https://learn.microsoft.com/api/mcp"), +] diff --git a/examples/ai-test/README.md b/examples/ai-test/README.md deleted file mode 100644 index 42c72dbf..00000000 --- a/examples/ai-test/README.md +++ /dev/null @@ -1,101 +0,0 @@ -# Sample: AI - -A sample demonstrating various AI capabilities in the Python Teams SDK. - -## Prerequisites - -- Python 3.12 or later -- UV package manager -- An Microsoft 365 development account. If you don't have one, you can get one for free by signing up for the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program). - -## Setup - -1. Install dependencies: - -```bash -uv sync -``` - -2. 
Set up your `.env` file with your Azure OpenAI API key (or OpenAI API Key): - -```bash -AZURE_OPENAI_API_KEY= -AZURE_OPENAI_ENDPOINT= -AZURE_OPENAI_MODEL= -AZURE_OPENAI_API_VERSION= - -# Alternatively, set the OpenAI API key: -OPENAI_API_KEY= -``` - -## Run - -```bash -# Activate virtual environment -source .venv/bin/activate # On macOS/Linux -# .venv\Scripts\Activate # On Windows - -# Run the AI test -python examples/ai-test/src/main.py -``` - -## Usage - -From Teams, DevTools, or your test client, use any of the following commands to trigger specific scenarios: - -| Scenario | Usage | Description | -| ---------------------- | --------------------------------------- | --------------------------------------------------------- | -| Simple LLM check | `hi` | Basic ChatPrompt with pirate personality | -| Function calling | `pokemon <name>` | Single function calling - searches Pokemon via PokeAPI | -| Multi-Function calling | `weather` | Multiple function calling - gets location then weather | -| Streaming | `stream <query>` | Shows streaming responses in verbose language | -| Citations | `citations` | Demonstrates citation functionality with position markers | -| Model switching | `model completions` / `model responses` | Switch between Chat Completions and Responses API models | -| Plugin stats | `plugin` | Shows AI plugin function call statistics | -| Memory management | `memory clear` | Clears conversation memory | -| Feedback collection | `feedback demo` | Demonstrates message feedback with like/dislike buttons | -| Feedback statistics | `feedback stats <message_id>` | Shows feedback summary for a specific message | -| Stateful interactions | `<any message>` | Shows persistent conversation memory across interactions | - -## Features Demonstrated - -### Core AI Functionality - -- **ChatPrompt** - Basic LLM interaction patterns with optional persistent memory -- **String instructions** - Simple instruction passing (vs SystemMessage objects) -- **Model switching** - Runtime switching between different AI
models - -### Function Calling - -- **Single functions** - Pokemon search with real API integration -- **Multiple functions** - Location detection followed by weather lookup -- **Error handling** - Proper exception handling for API failures - -### Advanced Features - -- **Streaming responses** - Real-time response streaming with group/1:1 handling -- **Memory management** - Per-conversation memory with manual clearing -- **Custom plugins** - AI plugin system with lifecycle hooks -- **Citations** - Position-based citations with proper formatting -- **Feedback collection** - Message feedback with like/dislike reactions and text feedback - -### Best Practices - -- **AI-generated indicators** - All AI responses marked appropriately -- **Modular handlers** - Clean separation of concerns across files -- **Pattern matching** - Uses `app.on_message_pattern` for command routing -- **Type safety** - Full pyright compliance with proper typing - -## Architecture - -The sample follows a modular architecture: - -- `main.py` - Main application with pattern-based message handlers -- `handlers/` - Separate modules for different AI functionality: - - `function_calling.py` - Pokemon and weather function implementations - - `memory_management.py` - Stateful conversation handling - - `citations.py` - Citation demo functionality - - `plugins.py` - Custom AI plugin implementation - - `feedback_management.py` - Message feedback collection and storage - -This structure mirrors the TypeScript AI test implementation for consistency across language implementations. 
diff --git a/examples/ai-test/pyproject.toml b/examples/ai-test/pyproject.toml deleted file mode 100644 index c2ebda30..00000000 --- a/examples/ai-test/pyproject.toml +++ /dev/null @@ -1,17 +0,0 @@ -[project] -name = "ai-test" -version = "0.1.0" -description = "test the ai interactions" -readme = "README.md" -requires-python = ">=3.12,<3.15" -dependencies = [ - "dotenv>=0.9.9", - "microsoft-teams-ai", - "microsoft-teams-apps", - "microsoft-teams-openai", -] - -[tool.uv.sources] -microsoft-teams-apps = { workspace = true } -microsoft-teams-ai = { workspace = true } -microsoft-teams-openai = { workspace = true } diff --git a/examples/ai-test/src/handlers/__init__.py b/examples/ai-test/src/handlers/__init__.py deleted file mode 100644 index a0798a90..00000000 --- a/examples/ai-test/src/handlers/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. -""" - -from .citations import handle_citations_demo -from .function_calling import handle_multiple_functions, handle_pokemon_search -from .memory_management import handle_stateful_conversation -from .plugins import LoggingAIPlugin - -__all__ = [ - "handle_pokemon_search", - "handle_multiple_functions", - "handle_stateful_conversation", - "handle_citations_demo", - "LoggingAIPlugin", -] diff --git a/examples/ai-test/src/handlers/citations.py b/examples/ai-test/src/handlers/citations.py deleted file mode 100644 index e794c407..00000000 --- a/examples/ai-test/src/handlers/citations.py +++ /dev/null @@ -1,27 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. 
-""" - -from microsoft_teams.api import CitationAppearance, MessageActivity, MessageActivityInput -from microsoft_teams.apps import ActivityContext - - -async def handle_citations_demo(ctx: ActivityContext[MessageActivity]) -> None: - """Demo citations functionality as shown in docs""" - cited_docs = [ - {"title": "Weather Documentation", "content": "Weather data shows sunny conditions across the region"}, - {"title": "Pokemon Database", "content": "Comprehensive database of Pokemon characteristics and abilities"}, - {"title": "AI Development Guide", "content": "Best practices for integrating AI into Teams applications"}, - ] - - response_text = ( - "Here's some information with citations [1] about weather patterns," - "[2] Pokemon data, and [3] AI development best practices." - ) - - message_activity = MessageActivityInput(text=response_text).add_ai_generated() - for i, doc in enumerate(cited_docs): - message_activity.add_citation(i + 1, CitationAppearance(name=doc["title"], abstract=doc["content"])) - - await ctx.send(message_activity) diff --git a/examples/ai-test/src/handlers/feedback_management.py b/examples/ai-test/src/handlers/feedback_management.py deleted file mode 100644 index d980116c..00000000 --- a/examples/ai-test/src/handlers/feedback_management.py +++ /dev/null @@ -1,106 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. 
-""" - -import json -import logging -from dataclasses import dataclass, field -from typing import Any, Dict, List - -from microsoft_teams.api.activities.invoke.message.submit_action import MessageSubmitActionInvokeActivity -from microsoft_teams.apps import ActivityContext - - -@dataclass -class StoredFeedback: - """Data structure for storing feedback information""" - - message_id: str - likes: int = 0 - dislikes: int = 0 - feedbacks: List[Dict[str, Any]] = field(default_factory=lambda: []) - - -# Global storage for feedback (in production, use proper storage) -stored_feedback_by_message_id: Dict[str, StoredFeedback] = {} - - -def initialize_feedback_storage(message_id: str) -> StoredFeedback: - """Initialize feedback storage for a message""" - feedback = StoredFeedback(message_id=message_id) - stored_feedback_by_message_id[message_id] = feedback - return feedback - - -def get_feedback_storage(message_id: str) -> StoredFeedback | None: - """Get feedback storage for a message""" - return stored_feedback_by_message_id.get(message_id) - - -async def handle_feedback_submission(ctx: ActivityContext[MessageSubmitActionInvokeActivity]) -> None: - """Handle feedback submission event""" - activity = ctx.activity - logger = logging.getLogger(__name__) - - # Extract feedback data from activity value - if not hasattr(activity, "value") or not activity.value: - logger.warning(f"No value found in activity {activity.id}") - return - - # Type-safe access to activity value - invoke_value = activity.value - assert invoke_value.action_name == "feedback" - feedback_str = invoke_value.action_value.feedback - reaction = invoke_value.action_value.reaction - feedback_json: Dict[str, Any] = json.loads(feedback_str) - # { 'feedbackText': 'the ai response was great!' 
} - - if not activity.reply_to_id: - logger.warning(f"No replyToId found for messageId {activity.id}") - return - - existing_feedback = get_feedback_storage(activity.reply_to_id) - - if not existing_feedback: - new_feedback = StoredFeedback(message_id=activity.reply_to_id) - stored_feedback_by_message_id[activity.reply_to_id] = new_feedback - existing_feedback = new_feedback - - # Update feedback counts and store text feedback - likes_increment = 1 if reaction == "like" else 0 - dislikes_increment = 1 if reaction == "dislike" else 0 - - updated_feedback = StoredFeedback( - message_id=existing_feedback.message_id, - likes=existing_feedback.likes + likes_increment, - dislikes=existing_feedback.dislikes + dislikes_increment, - feedbacks=[*existing_feedback.feedbacks, feedback_json], - ) - - stored_feedback_by_message_id[activity.reply_to_id] = updated_feedback - - # Send confirmation response - feedback_text: str = feedback_json.get("feedbackText", "") - reaction_text: str = f" and {reaction}" if reaction else "" - text_part: str = f" with comment: '{feedback_text}'" if feedback_text else "" - - await ctx.reply(f"βœ… Thank you for your feedback{reaction_text}{text_part}!") - - -def get_feedback_summary(message_id: str) -> str: - """Get a summary of feedback for a message""" - feedback = get_feedback_storage(message_id) - if not feedback: - return "No feedback collected yet." - - total_reactions = feedback.likes + feedback.dislikes - comments_count = len([f for f in feedback.feedbacks if f.get("feedbackText")]) - - summary_parts: List[str] = [] - if total_reactions > 0: - summary_parts.append(f"πŸ‘ {feedback.likes} likes, πŸ‘Ž {feedback.dislikes} dislikes") - if comments_count > 0: - summary_parts.append(f"πŸ’¬ {comments_count} comments") - - return " | ".join(summary_parts) if summary_parts else "No feedback collected yet." 
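The storage pattern in `feedback_management.py` above — one record per `reply_to_id`, rebuilt immutably on each reaction — stands on its own without the SDK. A stdlib-only sketch (the names `Feedback`, `record`, `summary` are illustrative):

```python
from dataclasses import dataclass

# Framework-free sketch of per-message feedback aggregation.
# As in the sample, this is in-memory only; production code would
# back the store with a database or other durable storage.


@dataclass(frozen=True)
class Feedback:
    message_id: str
    likes: int = 0
    dislikes: int = 0
    comments: tuple[str, ...] = ()


store: dict[str, Feedback] = {}


def record(message_id: str, reaction: str, comment: str = "") -> Feedback:
    """Fold one reaction into the message's record, replacing it immutably."""
    prev = store.get(message_id, Feedback(message_id=message_id))
    updated = Feedback(
        message_id=message_id,
        likes=prev.likes + (reaction == "like"),
        dislikes=prev.dislikes + (reaction == "dislike"),
        comments=prev.comments + ((comment,) if comment else ()),
    )
    store[message_id] = updated
    return updated


def summary(message_id: str) -> str:
    fb = store.get(message_id)
    if fb is None or (fb.likes + fb.dislikes + len(fb.comments)) == 0:
        return "No feedback collected yet."
    parts = [f"{fb.likes} likes, {fb.dislikes} dislikes"]
    if fb.comments:
        parts.append(f"{len(fb.comments)} comments")
    return " | ".join(parts)
```

Rebuilding the record rather than mutating it in place mirrors the sample's `StoredFeedback` handling and keeps concurrent readers from seeing a half-updated count.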
diff --git a/examples/ai-test/src/handlers/function_calling.py b/examples/ai-test/src/handlers/function_calling.py deleted file mode 100644 index 251c6998..00000000 --- a/examples/ai-test/src/handlers/function_calling.py +++ /dev/null @@ -1,136 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. -""" - -import random -from typing import Any, Dict - -import aiohttp -from microsoft_teams.ai import ChatPrompt, Function -from microsoft_teams.ai.ai_model import AIModel -from microsoft_teams.api import MessageActivity, MessageActivityInput -from microsoft_teams.apps import ActivityContext -from pydantic import BaseModel - - -class SearchPokemonParams(BaseModel): - pokemon_name: str - """The name of the pokemon.""" - - -class GetLocationParams(BaseModel): - """No parameters needed for location""" - - pass - - -class GetWeatherParams(BaseModel): - location: str - """The location to get weather for""" - - -async def pokemon_search_handler(params: SearchPokemonParams) -> str: - """Search for Pokemon using PokeAPI - matches documentation example""" - try: - async with aiohttp.ClientSession() as session: - async with session.get(f"https://pokeapi.co/api/v2/pokemon/{params.pokemon_name.lower()}") as response: - if response.status != 200: - raise ValueError(f"Pokemon '{params.pokemon_name}' not found") - - data = await response.json() - - result_data = { - "name": data["name"], - "height": data["height"], - "weight": data["weight"], - "types": [type_info["type"]["name"] for type_info in data["types"]], - } - - return f"Pokemon {result_data['name']}: height={result_data['height']}, weight={result_data['weight']}, types={', '.join(result_data['types'])}" # noqa: E501 - except Exception as e: - raise ValueError(f"Error searching for Pokemon: {str(e)}") from e - - -async def handle_pokemon_search(model: AIModel, ctx: ActivityContext[MessageActivity]) -> None: - """Handle single function calling - Pokemon search""" - prompt = 
ChatPrompt(model) - prompt.with_function( - Function( - name="pokemon_search", - description="Search for pokemon information including height, weight, and types", - parameter_schema=SearchPokemonParams, - handler=pokemon_search_handler, - ) - ) - - chat_result = await prompt.send( - input=ctx.activity.text, instructions="You are a helpful assistant that can look up Pokemon for the user." - ) - - if chat_result.response.content: - message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() - await ctx.send(message) - else: - await ctx.reply("Sorry I could not find that pokemon") - - -def get_location_handler(params: GetLocationParams) -> str: - """Get user location (mock)""" - locations = ["Seattle", "San Francisco", "New York"] - location = random.choice(locations) - return location - - -def get_weather_handler(params: BaseModel) -> str: - """Get weather for location (mock)""" - weather_by_location: Dict[str, Dict[str, Any]] = { - "Seattle": {"temperature": 65, "condition": "sunny"}, - "San Francisco": {"temperature": 60, "condition": "foggy"}, - "New York": {"temperature": 75, "condition": "rainy"}, - } - - location = getattr(params, "location") # noqa - weather = weather_by_location.get(location) - if not weather: - return "Sorry, I could not find the weather for that location" - - return f"The weather in {location} is {weather['condition']} with a temperature of {weather['temperature']}Β°F" - - -async def handle_multiple_functions(model: AIModel, ctx: ActivityContext[MessageActivity]) -> None: - """Handle multiple function calling - location then weather""" - prompt = ChatPrompt(model) - - prompt.with_function( - Function( - name="get_user_location", - description="Gets the location of the user", - parameter_schema=GetLocationParams, - handler=get_location_handler, - ) - ).with_function( - name="weather_search", - description="Search for weather at a specific location", - parameter_schema={ - "title": "GetWeatherParams", - "type": 
"object", - "properties": {"location": {"title": "Location", "type": "string"}}, - "required": ["location"], - }, - handler=get_weather_handler, - ) - - chat_result = await prompt.send( - input=ctx.activity.text, - instructions=( - "You are a helpful assistant that can help the user get the weather." - "First get their location, then get the weather for that location." - ), - ) - - if chat_result.response.content: - message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() - await ctx.send(message) - else: - await ctx.reply("Sorry I could not figure it out") diff --git a/examples/ai-test/src/handlers/memory_management.py b/examples/ai-test/src/handlers/memory_management.py deleted file mode 100644 index 8da509ac..00000000 --- a/examples/ai-test/src/handlers/memory_management.py +++ /dev/null @@ -1,58 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. -""" - -from microsoft_teams.ai import ChatPrompt, ListMemory -from microsoft_teams.ai.ai_model import AIModel -from microsoft_teams.api import MessageActivity, MessageActivityInput -from microsoft_teams.apps import ActivityContext - -# Simple in-memory store for conversation histories -# In your application, it may be a good idea to use a more -# persistent store backed by a database or other storage solution -conversation_store: dict[str, ListMemory] = {} - - -def get_or_create_conversation_memory(conversation_id: str) -> ListMemory: - """Get or create conversation memory for a specific conversation""" - if conversation_id not in conversation_store: - conversation_store[conversation_id] = ListMemory() - return conversation_store[conversation_id] - - -async def handle_stateful_conversation(model: AIModel, ctx: ActivityContext[MessageActivity]) -> None: - """Example of stateful conversation handler that maintains conversation history""" - print(f"Received message: {ctx.activity.text}") - - # Retrieve existing conversation memory or 
initialize new one - memory = get_or_create_conversation_memory(ctx.activity.conversation.id) - - # Get existing messages for logging - existing_messages = await memory.get_all() - print(f"Existing messages before sending to prompt: {len(existing_messages)} messages") - - # Create prompt with conversation-specific memory - prompt = ChatPrompt(model, memory=memory) - - chat_result = await prompt.send( - input=ctx.activity.text, instructions="You are a helpful assistant that remembers our previous conversation." - ) - - if chat_result.response.content: - message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() - await ctx.send(message) - else: - await ctx.reply("I did not generate a response.") - - # Log final message count - final_messages = await memory.get_all() - print(f"Messages after sending to prompt: {len(final_messages)} messages") - - -async def clear_conversation_memory(conversation_id: str) -> None: - """Clear memory for a specific conversation""" - if conversation_id in conversation_store: - memory = conversation_store[conversation_id] - await memory.set_all([]) - print(f"Cleared memory for conversation {conversation_id}") diff --git a/examples/ai-test/src/handlers/plugins.py b/examples/ai-test/src/handlers/plugins.py deleted file mode 100644 index 224aa59b..00000000 --- a/examples/ai-test/src/handlers/plugins.py +++ /dev/null @@ -1,33 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. 
-""" - -from typing import Optional - -from microsoft_teams.ai.message import Message -from microsoft_teams.ai.plugin import BaseAIPlugin -from pydantic import BaseModel - - -class LoggingAIPlugin(BaseAIPlugin): - """Custom AI Plugin for logging and tracking function calls""" - - def __init__(self): - super().__init__("logging_plugin") - self.function_calls: list[str] = [] - - async def on_before_function_call(self, function_name: str, args: Optional[BaseModel] = None) -> None: - print(f"[PLUGIN] About to call function: {function_name} with args: {args}") - self.function_calls.append(function_name) - - async def on_after_function_call( - self, function_name: str, result: str, args: Optional[BaseModel] = None - ) -> str | None: - print(f"[PLUGIN] Function {function_name} returned: {result}") - return f"{result} (verified by logging plugin)" - - async def on_before_send(self, input: Message) -> Message | None: - if hasattr(input, "content") and input.content: - print(f"[PLUGIN] Processing input: {input.content[:50]}...") - return None diff --git a/examples/ai-test/src/main.py b/examples/ai-test/src/main.py deleted file mode 100644 index 7b5d67d7..00000000 --- a/examples/ai-test/src/main.py +++ /dev/null @@ -1,209 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License. 
-""" - -import asyncio -import logging -import re -from os import getenv - -from dotenv import find_dotenv, load_dotenv -from handlers import ( - LoggingAIPlugin, - handle_citations_demo, - handle_multiple_functions, - handle_pokemon_search, - handle_stateful_conversation, -) -from handlers.feedback_management import ( - get_feedback_summary, - handle_feedback_submission, - initialize_feedback_storage, -) -from handlers.memory_management import clear_conversation_memory -from microsoft_teams.ai import ChatPrompt -from microsoft_teams.api import MessageActivity, MessageActivityInput -from microsoft_teams.api.activities.invoke.message.submit_action import MessageSubmitActionInvokeActivity -from microsoft_teams.apps import ActivityContext, App -from microsoft_teams.devtools import DevToolsPlugin -from microsoft_teams.openai import OpenAICompletionsAIModel, OpenAIResponsesAIModel - -load_dotenv(find_dotenv(usecwd=True)) - -logging.basicConfig(level=getenv("LOG_LEVEL", "WARNING").upper()) - - -def get_required_env(key: str) -> str: - value = getenv(key) - if not value: - raise ValueError(f"Required environment variable {key} is not set") - return value - - -AZURE_OPENAI_MODEL = get_required_env("AZURE_OPENAI_MODEL") - -# Global plugin instance for tracking -plugin_instance = LoggingAIPlugin() - -app = App(plugins=[DevToolsPlugin()]) - -# Models for different AI approaches -completions_model = OpenAICompletionsAIModel( - model=AZURE_OPENAI_MODEL, -) - -responses_model = OpenAIResponsesAIModel( - model=AZURE_OPENAI_MODEL, - stateful=True, -) - -# Global state -current_model = completions_model - - -# Simple chat handler (like TypeScript "hi" example) -@app.on_message_pattern(re.compile(r"^hi$", re.IGNORECASE)) -async def handle_simple_chat(ctx: ActivityContext[MessageActivity]): - """Handle 'hi' message with simple AI response""" - prompt = ChatPrompt(completions_model) - chat_result = await prompt.send( - input=ctx.activity.text, instructions="You are a friendly 
assistant who talks like a pirate" - ) - - if chat_result.response.content: - message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() - await ctx.send(message) - - -# Command handlers (like TypeScript command pattern) -@app.on_message_pattern(re.compile(r"^pokemon\s+(.+)", re.IGNORECASE)) -async def handle_pokemon_command(ctx: ActivityContext[MessageActivity]): - """Handle 'pokemon ' command""" - match = re.match(r"^pokemon\s+(.+)", ctx.activity.text, re.IGNORECASE) - if match: - pokemon_name = match.group(1).strip() - ctx.activity.text = pokemon_name # Update activity text for handler - await handle_pokemon_search(current_model, ctx) - - -@app.on_message_pattern(re.compile(r"^weather\b", re.IGNORECASE)) -async def handle_weather_command(ctx: ActivityContext[MessageActivity]): - """Handle 'weather' command with multiple functions""" - await handle_multiple_functions(current_model, ctx) - - -# Streaming handler (like TypeScript streaming example) -@app.on_message_pattern(re.compile(r"^stream\s+(.+)", re.IGNORECASE)) -async def handle_streaming(ctx: ActivityContext[MessageActivity]): - """Handle 'stream ' command""" - match = re.match(r"^stream\s+(.+)", ctx.activity.text, re.IGNORECASE) - if match: - query = match.group(1).strip() - - prompt = ChatPrompt(current_model) - chat_result = await prompt.send( - input=query, - instructions="You are a friendly assistant who responds in extremely verbose language", - on_chunk=lambda chunk: ctx.stream.emit(chunk) if hasattr(ctx, "stream") else None, - ) - - if hasattr(ctx.activity.conversation, "is_group") and ctx.activity.conversation.is_group: - # Group chat - send final response - if chat_result.response.content: - message = MessageActivityInput(text=chat_result.response.content).add_ai_generated() - await ctx.send(message) - else: - # 1:1 chat - streaming handled above - if hasattr(ctx, "stream"): - ctx.stream.emit(MessageActivityInput().add_ai_generated()) - - -# Utility commands 
-@app.on_message_pattern(re.compile(r"^citations?\b", re.IGNORECASE)) -async def handle_citations_command(ctx: ActivityContext[MessageActivity]): - """Handle 'citations' command""" - await handle_citations_demo(ctx) - - -@app.on_message_pattern(re.compile(r"^model\s*(.*)$", re.IGNORECASE)) -async def handle_model_switch(ctx: ActivityContext[MessageActivity]): - """Handle model switching""" - global current_model - - match = re.match(r"^model\s*(.*)$", ctx.activity.text, re.IGNORECASE) - if match: - model_name = match.group(1).strip().lower() - if "completion" in model_name: - current_model = completions_model - await ctx.reply("πŸ”„ Switched to **Chat Completions** model") - elif "response" in model_name: - current_model = responses_model - await ctx.reply("πŸ”„ Switched to **Responses API** model") - else: - await ctx.reply( - f"πŸ“‹ Current model: **{'completions' if current_model == completions_model else 'responses'}**" - ) - - -@app.on_message_pattern(re.compile(r"^plugin\b", re.IGNORECASE)) -async def handle_plugin_stats(ctx: ActivityContext[MessageActivity]): - """Handle 'plugin stats' command""" - await ctx.reply( - f"πŸ”Œ Plugin function calls so far: {', '.join(plugin_instance.function_calls) if plugin_instance.function_calls else 'None'}" # noqa E501 - ) - - -@app.on_message_pattern(re.compile(r"^memory\s+clear\b", re.IGNORECASE)) -async def handle_memory_clear(ctx: ActivityContext[MessageActivity]): - """Handle 'memory clear' command""" - await clear_conversation_memory(ctx.activity.conversation.id) - await ctx.reply("🧠 Memory cleared!") - - -# Feedback demonstration -@app.on_message_pattern(re.compile(r"^feedback\s+demo\b", re.IGNORECASE)) -async def handle_feedback_demo(ctx: ActivityContext[MessageActivity]): - """Handle 'feedback demo' command to demonstrate feedback collection""" - prompt = ChatPrompt(current_model) - chat_result = await prompt.send( - input="Tell me a short joke", instructions="You are a comedian. Keep responses brief and funny." 
- ) - - if chat_result.response.content: - # Create message with feedback enabled and initialize storage - message = MessageActivityInput(text=chat_result.response.content).add_ai_generated().add_feedback() - sent_message = await ctx.send(message) - - # Initialize feedback storage for this message - if sent_message and hasattr(sent_message, "id"): - initialize_feedback_storage(sent_message.id) - await ctx.reply(f"πŸ’‘ Feedback enabled! Try reacting or providing feedback. Message ID: {sent_message.id}") - - -@app.on_message_pattern(re.compile(r"^feedback\s+stats\s+(.+)", re.IGNORECASE)) -async def handle_feedback_stats(ctx: ActivityContext[MessageActivity]): - """Handle 'feedback stats ' command""" - match = re.match(r"^feedback\s+stats\s+(.+)", ctx.activity.text, re.IGNORECASE) - if match: - message_id = match.group(1).strip() - summary = get_feedback_summary(message_id) - await ctx.reply(f"πŸ“Š Feedback for message {message_id}: {summary}") - - -# Handle feedback submission events (like TypeScript message.submit.feedback) -@app.on_message_submit_feedback -async def handle_message_feedback(ctx: ActivityContext[MessageSubmitActionInvokeActivity]): - """Handle feedback submission events""" - await handle_feedback_submission(ctx) - - -# Fallback stateful conversation handler (like TypeScript fallback) -@app.on_message -async def handle_fallback(ctx: ActivityContext[MessageActivity]): - """Fallback handler for stateful conversation""" - await handle_stateful_conversation(current_model, ctx) - - -if __name__ == "__main__": - asyncio.run(app.start()) diff --git a/examples/botbuilder/pyproject.toml b/examples/botbuilder/pyproject.toml index d0805a26..7d0dfe01 100644 --- a/examples/botbuilder/pyproject.toml +++ b/examples/botbuilder/pyproject.toml @@ -8,11 +8,9 @@ dependencies = [ "dotenv>=0.9.9", "botbuilder-core>=4.14.0", "microsoft-teams-apps", - "microsoft-teams-devtools", "microsoft-teams-botbuilder" ] [tool.uv.sources] microsoft-teams-apps = { workspace = true } 
-microsoft-teams-devtools = { workspace = true } microsoft-teams-botbuilder = { workspace = true } diff --git a/examples/botbuilder/src/main.py b/examples/botbuilder/src/main.py index e5cd8bad..7433d1c1 100644 --- a/examples/botbuilder/src/main.py +++ b/examples/botbuilder/src/main.py @@ -19,7 +19,6 @@ from microsoft_teams.api import MessageActivity from microsoft_teams.apps import ActivityContext, App from microsoft_teams.botbuilder import BotBuilderPlugin -from microsoft_teams.devtools import DevToolsPlugin config = DefaultConfig() adapter = CloudAdapter(ConfigurationBotFrameworkAuthentication(config)) @@ -57,7 +56,6 @@ async def on_error(context: TurnContext, error: Exception): # This is the Bot Framework handler handler=EchoBot(), ), - DevToolsPlugin(), ] ) diff --git a/examples/cards/src/main.py b/examples/cards/src/main.py index 236f8c65..11297172 100644 --- a/examples/cards/src/main.py +++ b/examples/cards/src/main.py @@ -151,9 +151,7 @@ def create_profile_card_input_validation() -> AdaptiveCard: TextInput(id="location").with_label("Location"), ActionSet( actions=[ - ExecuteAction(title="Save") - .with_data(SubmitData("save_profile")) - .with_associated_inputs("auto") + ExecuteAction(title="Save").with_data(SubmitData("save_profile")).with_associated_inputs("auto") ] ), ], diff --git a/examples/echo/pyproject.toml b/examples/echo/pyproject.toml index 99046719..a8f25fd5 100644 --- a/examples/echo/pyproject.toml +++ b/examples/echo/pyproject.toml @@ -8,7 +8,6 @@ dependencies = [ "dotenv>=0.9.9", "microsoft-teams-apps", "microsoft-teams-api", - "microsoft-teams-devtools" ] [tool.uv.sources] diff --git a/examples/echo/src/main.py b/examples/echo/src/main.py index 8fadbf06..58eee1e6 100644 --- a/examples/echo/src/main.py +++ b/examples/echo/src/main.py @@ -9,9 +9,8 @@ from microsoft_teams.api import MessageActivity from microsoft_teams.api.activities.typing import TypingActivityInput from microsoft_teams.apps import ActivityContext, App -from 
microsoft_teams.devtools import DevToolsPlugin -app = App(plugins=[DevToolsPlugin()]) +app = App() @app.on_message_pattern(re.compile(r"hello|hi|greetings")) diff --git a/examples/mcp-client/README.md b/examples/mcp-client/README.md deleted file mode 100644 index 90cf7f2f..00000000 --- a/examples/mcp-client/README.md +++ /dev/null @@ -1,12 +0,0 @@ -# Sample: MCP Client - - -### Available Commands - -| Command | Description | Example Usage | -|---------|-------------|---------------| -| `agent <message>` | Use stateful ChatPrompt with MCP tools | `agent What's the weather like?` | -| `prompt <message>` | Use stateless ChatPrompt with MCP tools | `prompt Find information about Python` | -| `mcp info` | Show connected MCP servers and usage | `mcp info` | -| `<any message>` | Fallback to ChatPrompt with MCP tools | `Hello, can you help me?` | - diff --git a/examples/mcp-client/pyproject.toml b/examples/mcp-client/pyproject.toml deleted file mode 100644 index a7ab652a..00000000 --- a/examples/mcp-client/pyproject.toml +++ /dev/null @@ -1,18 +0,0 @@ -[project] -name = "mcp-client" -version = "0.1.0" -description = "a test to test out mcp client and server" -readme = "README.md" -requires-python = ">=3.12,<3.15" -dependencies = [ - "dotenv>=0.9.9", - "microsoft-teams-ai", - "microsoft-teams-common", - "microsoft-teams-openai", - "microsoft-teams-devtools", - "microsoft-teams-mcpplugin" -] - -[tool.uv.sources] -microsoft-teams-ai = { workspace = true } -microsoft-teams-common = { workspace = true } diff --git a/examples/mcp-client/src/main.py b/examples/mcp-client/src/main.py deleted file mode 100644 index d3e8631a..00000000 --- a/examples/mcp-client/src/main.py +++ /dev/null @@ -1,166 +0,0 @@ -""" -Copyright (c) Microsoft Corporation. All rights reserved. -Licensed under the MIT License.
-""" - -import asyncio -import re -from os import getenv - -from dotenv import find_dotenv, load_dotenv -from microsoft_teams.ai import ChatPrompt, ListMemory -from microsoft_teams.api import MessageActivity, MessageActivityInput, TypingActivityInput -from microsoft_teams.apps import ActivityContext, App -from microsoft_teams.devtools import DevToolsPlugin -from microsoft_teams.mcpplugin import McpClientPlugin, McpClientPluginParams -from microsoft_teams.openai import OpenAICompletionsAIModel, OpenAIResponsesAIModel - -load_dotenv(find_dotenv(usecwd=True)) - -app = App(plugins=[DevToolsPlugin()]) - - -def get_required_env(key: str) -> str: - value = getenv(key) - if not value: - raise ValueError(f"Required environment variable {key} is not set") - return value - - -AZURE_OPENAI_MODEL = get_required_env("AZURE_OPENAI_MODEL") - - -# GitHub PAT for MCP server (optional) -def get_optional_env(key: str) -> str | None: - return getenv(key) - - -# This example uses a PersonalAccessToken, but you may get -# the user's oauth token as well by getting them to sign in -# and then using app.sign_in to get their token. 
-GITHUB_PAT = get_optional_env("GITHUB_PAT") - -# Set up AI models -completions_model = OpenAICompletionsAIModel(model=AZURE_OPENAI_MODEL) -responses_model = OpenAIResponsesAIModel(model=AZURE_OPENAI_MODEL, stateful=True) - -# Configure MCP Client Plugin with multiple remote servers (as shown in docs) -mcp_plugin = McpClientPlugin() - -# Add multiple MCP servers to demonstrate the concept from documentation -mcp_plugin.use_mcp_server("https://learn.microsoft.com/api/mcp") - -# Add GitHub MCP server with authentication headers (demonstrates header functionality) -if GITHUB_PAT: - mcp_plugin.use_mcp_server( - "https://api.githubcopilot.com/mcp/", McpClientPluginParams(headers={"Authorization": f"Bearer {GITHUB_PAT}"}) - ) - print("βœ… GitHub MCP server configured with authentication") -else: - print("⚠️ GITHUB_PAT not found - GitHub MCP server not configured") - print(" Set GITHUB_PAT environment variable to enable GitHub MCP integration") -# Example of additional servers (commented out - would need actual working endpoints): -# mcp_plugin.use_mcp_server("https://example.com/mcp/weather") -# mcp_plugin.use_mcp_server("https://example.com/mcp/pokemon") - -# Memory for stateful conversations -chat_memory = ListMemory() - -# ChatPrompt using Responses API with MCP tools (stateful) -responses_prompt = ChatPrompt(responses_model, memory=chat_memory, plugins=[mcp_plugin]) - -# ChatPrompt with MCP tools (demonstrating docs example) -chat_prompt = ChatPrompt(completions_model, plugins=[mcp_plugin]) - - -# Pattern-based handlers to demonstrate different MCP usage patterns - - -@app.on_message_pattern(re.compile(r"^agent\s+(.+)", re.IGNORECASE)) -async def handle_agent_chat(ctx: ActivityContext[MessageActivity]): - """Handle 'agent ' command using ChatPrompt with MCP tools (stateful)""" - match = re.match(r"^agent\s+(.+)", ctx.activity.text, re.IGNORECASE) - if match: - query = match.group(1).strip() - - print(f"[AGENT] Processing: {query}") - await 
ctx.send(TypingActivityInput()) - - # Use ChatPrompt with MCP tools (stateful conversation) - result = await responses_prompt.send(query) - if result.response.content: - message = MessageActivityInput(text=result.response.content).add_ai_generated() - await ctx.send(message) - - -@app.on_message_pattern(re.compile(r"^prompt\s+(.+)", re.IGNORECASE)) -async def handle_prompt_chat(ctx: ActivityContext[MessageActivity]): - """Handle 'prompt ' command using ChatPrompt with MCP tools (stateless)""" - match = re.match(r"^prompt\s+(.+)", ctx.activity.text, re.IGNORECASE) - if match: - query = match.group(1).strip() - - print(f"[PROMPT] Processing: {query}") - await ctx.send(TypingActivityInput()) - - # Use ChatPrompt with MCP tools (demonstrates docs pattern) - result = await chat_prompt.send( - input=query, - instructions=( - "You are a helpful assistant with access to remote MCP tools.Use them to help answer questions." - ), - ) - - if result.response.content: - message = MessageActivityInput(text=result.response.content).add_ai_generated() - await ctx.send(message) - - -@app.on_message_pattern(re.compile(r"^mcp\s+info", re.IGNORECASE)) -async def handle_mcp_info(ctx: ActivityContext[MessageActivity]): - """Handle 'mcp info' command to show available MCP servers and tools""" - # Build server list dynamically based on what's configured - servers_info = "**Connected MCP Servers:**\n" - servers_info += "β€’ `https://learn.microsoft.com/api/mcp` - Microsoft Learn API\n" - - if GITHUB_PAT: - servers_info += "β€’ `https://api.githubcopilot.com/mcp/` - GitHub Copilot API (authenticated)\n" - else: - servers_info += "β€’ GitHub MCP server (not configured - set GITHUB_PAT env var)\n" - - info_text = ( - "πŸ”— **MCP Client Information**\n\n" - f"{servers_info}\n" - "**Authentication Demo:**\n" - "β€’ GitHub server uses Bearer token authentication via headers\n" - "β€’ Example: `headers={'Authorization': f'Bearer {GITHUB_PAT}'}`\n\n" - "**Usage Patterns:**\n" - "β€’ `agent ` - Use 
stateful Agent with MCP tools\n" - "β€’ `prompt ` - Use stateless ChatPrompt with MCP tools\n" - "β€’ `mcp info` - Show this information\n\n" - "**How it works:**\n" - "1. MCP Client connects to remote servers via SSE protocol\n" - "2. Headers (like Authorization) are passed with each request\n" - "3. Remote tools are loaded and integrated with ChatPrompt/Agent\n" - "4. LLM can call remote tools as needed to answer your questions" - ) - await ctx.reply(info_text) - - -# Fallback handler for general chat (uses ChatPrompt by default) -@app.on_message -async def handle_fallback_message(ctx: ActivityContext[MessageActivity]): - """Fallback handler using ChatPrompt with MCP tools""" - print(f"[FALLBACK] Message received: {ctx.activity.text}") - print(f"[FALLBACK] From: {ctx.activity.from_}") - await ctx.send(TypingActivityInput()) - - # Use ChatPrompt with MCP tools for general conversation - result = await responses_prompt.send(ctx.activity.text) - if result.response.content: - message = MessageActivityInput(text=result.response.content).add_ai_generated() - await ctx.send(message) - - -if __name__ == "__main__": - asyncio.run(app.start()) diff --git a/examples/proactive-messaging/src/main.py b/examples/proactive-messaging/src/main.py index 006723a9..a8f886aa 100644 --- a/examples/proactive-messaging/src/main.py +++ b/examples/proactive-messaging/src/main.py @@ -20,6 +20,7 @@ import argparse import asyncio +from microsoft_teams.api import MessageActivityInput from microsoft_teams.apps import App from microsoft_teams.cards import ActionSet, AdaptiveCard, OpenUrlAction, TextBlock @@ -68,6 +69,30 @@ async def send_proactive_card(app: App, conversation_id: str) -> None: print(f"βœ“ Card sent successfully! Activity ID: {result.id}") +async def send_and_update_proactive_message(app: App, conversation_id: str) -> None: + """ + Send a message proactively and then update it proactively. 
+ + Args: + app: The initialized App instance + conversation_id: The Teams conversation ID + """ + # First, send a message proactively + original_text = "Status: Pending... (sent proactively without a running server)" + print(f"Sending message to update: {original_text}") + result = await app.send(conversation_id, original_text) + activity_id = result.id + print(f"βœ“ Original message sent! Activity ID: {activity_id}") + + # Wait so the user can see the original + await asyncio.sleep(3) + + # Now update the same message proactively using the ConversationClient + updated = MessageActivityInput(text="Status: Complete βœ… (updated proactively without a running server)") + await app.api.conversations.activities(conversation_id).update(activity_id, updated) + print(f"βœ“ Message updated successfully! Activity ID: {activity_id}") + + async def main(): """ Main function demonstrating proactive messaging. @@ -103,6 +128,12 @@ async def main(): # Example 2: Send an Adaptive Card await send_proactive_card(app, args.conversation_id) + # Wait a bit between messages + await asyncio.sleep(2) + + # Example 3: Update an existing message + await send_and_update_proactive_message(app, args.conversation_id) + print("\nβœ“ All proactive messages sent successfully!") diff --git a/examples/stream/src/main.py b/examples/stream/src/main.py index 3b13f128..b4d6911d 100644 --- a/examples/stream/src/main.py +++ b/examples/stream/src/main.py @@ -6,7 +6,7 @@ import asyncio from random import random -from microsoft_teams.api import MessageActivity +from microsoft_teams.api import CardAction, CardActionType, MessageActivity, MessageActivityInput, SuggestedActions from microsoft_teams.apps import ActivityContext, App app = App() @@ -35,11 +35,23 @@ async def handle_message(ctx: ActivityContext[MessageActivity]): # Stream messages with delays using ctx.stream.emit for message in STREAM_MESSAGES: - # Add some randomness to timing await asyncio.sleep(random()) - ctx.stream.emit(message) + # Add 
suggested actions to the final message + ctx.stream.emit( + MessageActivityInput().with_suggested_actions( + SuggestedActions( + to=[ctx.activity.from_.id], + actions=[ + CardAction(type=CardActionType.IM_BACK, title="Run again", value="Run again"), + CardAction(type=CardActionType.IM_BACK, title="Show status", value="Show status"), + CardAction(type=CardActionType.IM_BACK, title="Help", value="Help"), + ], + ) + ) + ) + if __name__ == "__main__": asyncio.run(app.start()) diff --git a/examples/tab/Web/package-lock.json b/examples/tab/Web/package-lock.json index a6810287..f95c84cc 100644 --- a/examples/tab/Web/package-lock.json +++ b/examples/tab/Web/package-lock.json @@ -2099,9 +2099,9 @@ } }, "node_modules/postcss": { - "version": "8.5.6", - "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz", - "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==", + "version": "8.5.10", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.10.tgz", + "integrity": "sha512-pMMHxBOZKFU6HgAZ4eyGnwXF/EvPGGqUr0MnZ5+99485wwW41kW91A4LOGxSHhgugZmSChL5AlElNdwlNgcnLQ==", "dev": true, "funding": [ { @@ -2117,6 +2117,7 @@ "url": "https://github.com/sponsors/ai" } ], + "license": "MIT", "dependencies": { "nanoid": "^3.3.11", "picocolors": "^1.1.1", diff --git a/examples/tab/pyproject.toml b/examples/tab/pyproject.toml index e201add5..213754ed 100644 --- a/examples/tab/pyproject.toml +++ b/examples/tab/pyproject.toml @@ -8,7 +8,6 @@ dependencies = [ "dotenv>=0.9.9", "microsoft-teams-apps", "microsoft-teams-api", - "microsoft-teams-devtools" ] [tool.uv.sources] diff --git a/examples/tab/src/main.py b/examples/tab/src/main.py index 982658a9..ffa789eb 100644 --- a/examples/tab/src/main.py +++ b/examples/tab/src/main.py @@ -8,9 +8,8 @@ from typing import Any from microsoft_teams.apps import App, FunctionContext -from microsoft_teams.devtools import DevToolsPlugin -app = App(plugins=[DevToolsPlugin()]) +app = App() 
app.tab("test", str(Path("Web/dist").resolve())) diff --git a/examples/threading/README.md b/examples/threading/README.md new file mode 100644 index 00000000..33d817ce --- /dev/null +++ b/examples/threading/README.md @@ -0,0 +1,35 @@ +# Example: Threading + +A bot that demonstrates reactive and proactive threading in Microsoft Teams channels. + +## Commands + +| Command | Behavior | +|---------|----------| +| `test reply` | `ctx.reply()` β€” reactive threaded reply with visual quote | +| `test send` | `ctx.send()` β€” reactive send to same thread, no quote | +| `test proactive` | `app.reply()` β€” proactive threaded reply | +| `test manual` | `to_threaded_conversation_id()` + `app.send()` β€” advanced manual control | +| `help` | Shows available commands | + +## Notes + +- `test reply` and `test send` work in all scopes (1:1, group chat, channels) +- `test proactive` constructs a threaded conversation ID and sends to that thread +- `test manual` does the same using `to_threaded_conversation_id()` + `app.send()` directly +- `test proactive` and `test manual` may return a service error in conversation types that do not currently support threading (e.g. 
meetings) + +## Run + +```bash +uv run python src/main.py +``` + +## Environment Variables + +Create a `.env` file: + +``` +CLIENT_ID= +CLIENT_SECRET= +``` diff --git a/examples/threading/pyproject.toml b/examples/threading/pyproject.toml new file mode 100644 index 00000000..f2806b56 --- /dev/null +++ b/examples/threading/pyproject.toml @@ -0,0 +1,13 @@ +[project] +name = "threading" +version = "0.1.0" +description = "Threading test bot" +requires-python = ">=3.12,<3.15" +dependencies = [ + "dotenv>=0.9.9", + "microsoft-teams-apps", + "microsoft-teams-api" +] + +[tool.uv.sources] +microsoft-teams-apps = { workspace = true } diff --git a/examples/threading/src/main.py b/examples/threading/src/main.py new file mode 100644 index 00000000..302dcc63 --- /dev/null +++ b/examples/threading/src/main.py @@ -0,0 +1,76 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + +import asyncio + +from microsoft_teams.api import MessageActivity +from microsoft_teams.api.activities.typing import TypingActivityInput +from microsoft_teams.apps import ActivityContext, App, to_threaded_conversation_id + +app = App() + + +@app.on_message +async def handle_message(ctx: ActivityContext[MessageActivity]): + """Handle message activities.""" + await ctx.reply(TypingActivityInput()) + + text = (ctx.activity.text or "").lower() + conversation_id = ctx.conversation_ref.conversation.id + message_id = ctx.activity.id + + # When inside a thread, conversation_id ends with ";messageid=" followed by the thread root ID. + # Extract the root ID for threading; for top-level messages, use activity.id.
+ parts = conversation_id.split(";messageid=") + thread_root_id = parts[1] if len(parts) > 1 else message_id + + # ============================================ + # context.reply() β€” reactive threaded reply + # ============================================ + if "test reply" in text: + await ctx.reply("This is a threaded reply to your message.") + return + + # ============================================ + # context.send() β€” reactive send to same thread + # ============================================ + if "test send" in text: + await ctx.send("This is sent to the same thread, without quoting.") + return + + # ============================================ + # app.reply() β€” proactive threaded reply + # ============================================ + if "test proactive" in text: + await app.reply(conversation_id, thread_root_id, "This is a proactive threaded reply using app.reply().") + return + + # ============================================ + # to_threaded_conversation_id() + app.send() β€” advanced manual control + # ============================================ + if "test manual" in text: + thread_id = to_threaded_conversation_id(conversation_id, thread_root_id) + await app.send(thread_id, "This was sent using to_threaded_conversation_id() + app.send() for manual control.") + return + + # ============================================ + # Help / Default + # ============================================ + if "help" in text: + await ctx.reply( + "**Threading Test Bot**\n\n" + "**Commands:**\n" + "- `test reply` - ctx.reply() reactive threaded reply\n" + "- `test send` - ctx.send() to same thread without quoting\n" + "- `test proactive` - app.reply() proactive threaded reply\n" + "- `test manual` - to_threaded_conversation_id() + app.send() for advanced control" + ) + return + + await ctx.send('Say "help" for available commands.') + + +if __name__ == "__main__": + asyncio.run(app.start()) diff --git a/packages/a2aprotocol/README.md b/packages/a2aprotocol/README.md 
index 80ae6c97..3b746880 100644 --- a/packages/a2aprotocol/README.md +++ b/packages/a2aprotocol/README.md @@ -1,5 +1,8 @@ # Microsoft Teams A2A +> [!WARNING] +> **Deprecated** β€” This package was originally in preview, but we have decided to stop maintaining it before General Availability. We recommend using the official [A2A Python SDK](https://github.com/a2aproject/a2a-python) instead, which provides better long-term support for Agent-to-Agent protocol integrations. +

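The README deprecation notice above is also enforced at runtime: importing the package emits a `FutureWarning` (see the `__init__.py` hunk later in this diff). A minimal standalone sketch of that pattern, for consumers who want to capture or silence the warning (the function name here is illustrative, not an SDK API):

```python
import warnings


def import_deprecated_package() -> str:
    """Stand-in for a deprecated package's __init__: warn once at import time."""
    warnings.warn(
        "this-package is deprecated and will no longer be maintained.",
        FutureWarning,
        stacklevel=2,  # attribute the warning to the importer, not this module
    )
    return "loaded"


# Callers can record (or filter) the warning explicitly:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # make sure the warning is not suppressed
    result = import_deprecated_package()

assert result == "loaded"
assert any(issubclass(w.category, FutureWarning) for w in caught)
```

Running with `python -W error::FutureWarning` turns the warning into a hard failure, which is a convenient way to surface lingering dependencies on the deprecated packages in CI.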
diff --git a/packages/a2aprotocol/pyproject.toml b/packages/a2aprotocol/pyproject.toml index 5281cb85..4daa6689 100644 --- a/packages/a2aprotocol/pyproject.toml +++ b/packages/a2aprotocol/pyproject.toml @@ -8,6 +8,9 @@ requires-python = ">=3.12,<3.15" repository = "https://github.com/microsoft/teams.py" keywords = ["microsoft", "teams", "ai", "bot", "agents"] license = "MIT" +classifiers = [ + "Development Status :: 7 - Inactive", +] dependencies = [ "a2a-sdk[core,http-server]>=0.3.7", "microsoft-teams-ai", diff --git a/packages/a2aprotocol/src/microsoft_teams/a2a/__init__.py b/packages/a2aprotocol/src/microsoft_teams/a2a/__init__.py index be841c65..d3adaa10 100644 --- a/packages/a2aprotocol/src/microsoft_teams/a2a/__init__.py +++ b/packages/a2aprotocol/src/microsoft_teams/a2a/__init__.py @@ -4,6 +4,7 @@ """ import logging +import warnings from . import chat_prompt, server from .chat_prompt import * # noqa: F403 @@ -11,6 +12,13 @@ logging.getLogger(__name__).addHandler(logging.NullHandler()) +warnings.warn( + "microsoft-teams-a2a is deprecated and will no longer be maintained. " + "Use the official A2A Python SDK instead: https://github.com/a2aproject/a2a-python", + FutureWarning, + stacklevel=2, +) + # Combine all exports from submodules __all__: list[str] = [] __all__.extend(chat_prompt.__all__) diff --git a/packages/ai/README.md b/packages/ai/README.md index ef4db335..d319fc46 100644 --- a/packages/ai/README.md +++ b/packages/ai/README.md @@ -1,5 +1,8 @@ # Microsoft Teams SDK +> [!WARNING] +> **Deprecated** β€” This package was originally in preview, but we have decided to stop maintaining it before General Availability. We recommend using the [Agent Framework](https://learn.microsoft.com/en-us/agent-framework/overview/?pivots=programming-language-python) instead, which provides better long-term support for building AI-powered Teams applications. +

@@ -15,7 +18,7 @@ AI-powered conversational experiences for Microsoft Teams applications. Provides prompt management, action planning, and model integration for building intelligent Teams bots. -[πŸ“– Documentation](https://microsoft.github.io/teams-sdk/python/in-depth-guides/ai/) +[πŸ“– Documentation](https://learn.microsoft.com/en-us/agent-framework/overview/?pivots=programming-language-python) ## Installation diff --git a/packages/ai/pyproject.toml b/packages/ai/pyproject.toml index ab70727a..ec161a00 100644 --- a/packages/ai/pyproject.toml +++ b/packages/ai/pyproject.toml @@ -8,6 +8,9 @@ requires-python = ">=3.12,<3.15" repository = "https://github.com/microsoft/teams.py" keywords = ["microsoft", "teams", "ai", "bot", "agents"] license = "MIT" +classifiers = [ + "Development Status :: 7 - Inactive", +] dependencies = [ "microsoft-teams-common", ] diff --git a/packages/ai/src/microsoft_teams/ai/__init__.py b/packages/ai/src/microsoft_teams/ai/__init__.py index 7e641d51..b6bdaae8 100644 --- a/packages/ai/src/microsoft_teams/ai/__init__.py +++ b/packages/ai/src/microsoft_teams/ai/__init__.py @@ -4,6 +4,7 @@ """ import logging +import warnings from .ai_model import AIModel from .chat_prompt import ChatPrompt, ChatSendResult @@ -13,6 +14,13 @@ logging.getLogger(__name__).addHandler(logging.NullHandler()) +warnings.warn( + "microsoft-teams-ai is deprecated and will no longer be maintained. 
" + "Use the Agent Framework instead: https://learn.microsoft.com/en-us/agent-framework/overview/?pivots=programming-language-python", + FutureWarning, + stacklevel=2, +) + __all__ = [ "ChatSendResult", "ChatPrompt", diff --git a/packages/api/src/microsoft_teams/api/auth/cloud_environment.py b/packages/api/src/microsoft_teams/api/auth/cloud_environment.py index b86e0138..674364e3 100644 --- a/packages/api/src/microsoft_teams/api/auth/cloud_environment.py +++ b/packages/api/src/microsoft_teams/api/auth/cloud_environment.py @@ -29,6 +29,8 @@ class CloudEnvironment: """The token issuer for Bot Framework tokens (e.g. "https://api.botframework.com").""" graph_scope: str """The Microsoft Graph token scope (e.g. "https://graph.microsoft.com/.default").""" + allowed_service_urls: tuple[str, ...] = () + """Allowed service URL hostnames for this cloud environment.""" PUBLIC = CloudEnvironment( @@ -39,6 +41,11 @@ class CloudEnvironment: openid_metadata_url="https://login.botframework.com/v1/.well-known/openidconfiguration", token_issuer="https://api.botframework.com", graph_scope="https://graph.microsoft.com/.default", + allowed_service_urls=( + "smba.trafficmanager.net", + "smba.onyx.prod.teams.trafficmanager.net", + "smba.infra.gcc.teams.microsoft.com", + ), ) """Microsoft public (commercial) cloud.""" @@ -50,6 +57,7 @@ class CloudEnvironment: openid_metadata_url="https://login.botframework.azure.us/v1/.well-known/openidconfiguration", token_issuer="https://api.botframework.us", graph_scope="https://graph.microsoft.us/.default", + allowed_service_urls=("smba.infra.gov.teams.microsoft.us",), ) """US Government Community Cloud High (GCCH).""" @@ -61,6 +69,7 @@ class CloudEnvironment: openid_metadata_url="https://login.botframework.azure.us/v1/.well-known/openidconfiguration", token_issuer="https://api.botframework.us", graph_scope="https://dod-graph.microsoft.us/.default", + allowed_service_urls=("smba.infra.dod.teams.microsoft.us",), ) """US Government Department of Defense 
(DoD).""" @@ -72,6 +81,7 @@ class CloudEnvironment: openid_metadata_url="https://login.botframework.azure.cn/v1/.well-known/openidconfiguration", token_issuer="https://api.botframework.azure.cn", graph_scope="https://microsoftgraph.chinacloudapi.cn/.default", + allowed_service_urls=("frontend.botapi.msg.infra.teams.microsoftonline.cn",), ) """China cloud (21Vianet).""" @@ -90,9 +100,7 @@ def from_name(name: str) -> CloudEnvironment: """ env = _CLOUD_ENVIRONMENTS.get(name.lower()) if env is None: - raise ValueError( - f"Unknown cloud environment: '{name}'. Valid values are: Public, USGov, USGovDoD, China." - ) + raise ValueError(f"Unknown cloud environment: '{name}'. Valid values are: Public, USGov, USGovDoD, China.") return env diff --git a/packages/api/src/microsoft_teams/api/clients/api_client_settings.py b/packages/api/src/microsoft_teams/api/clients/api_client_settings.py index 9f39c511..ea43f906 100644 --- a/packages/api/src/microsoft_teams/api/clients/api_client_settings.py +++ b/packages/api/src/microsoft_teams/api/clients/api_client_settings.py @@ -44,6 +44,4 @@ def merge_api_client_settings( if api_client_settings and api_client_settings.oauth_url: return api_client_settings - return ApiClientSettings( - oauth_url=env_oauth_url or cloud.token_service_url - ) + return ApiClientSettings(oauth_url=env_oauth_url or cloud.token_service_url) diff --git a/packages/apps/src/microsoft_teams/apps/__init__.py b/packages/apps/src/microsoft_teams/apps/__init__.py index f5aa4080..54ad5b92 100644 --- a/packages/apps/src/microsoft_teams/apps/__init__.py +++ b/packages/apps/src/microsoft_teams/apps/__init__.py @@ -15,6 +15,7 @@ from .options import AppOptions from .plugins import * # noqa: F401, F403 from .routing import ActivityContext +from .utils.thread import to_threaded_conversation_id logging.getLogger(__name__).addHandler(logging.NullHandler()) @@ -27,6 +28,7 @@ "FastAPIAdapter", "HttpStream", "ActivityContext", + "to_threaded_conversation_id", ] 
__all__.extend(auth.__all__) __all__.extend(events.__all__) diff --git a/packages/apps/src/microsoft_teams/apps/app.py b/packages/apps/src/microsoft_teams/apps/app.py index cf720a85..aaff47b2 100644 --- a/packages/apps/src/microsoft_teams/apps/app.py +++ b/packages/apps/src/microsoft_teams/apps/app.py @@ -23,6 +23,7 @@ FederatedIdentityCredentials, ManagedIdentityCredentials, MessageActivityInput, + SentActivity, TokenCredentials, TokenProtocol, ) @@ -59,6 +60,7 @@ from .routing.activity_context import ActivityContext from .token_manager import TokenManager from .utils import create_graph_client +from .utils.thread import to_threaded_conversation_id version = importlib.metadata.version("microsoft-teams-apps") @@ -168,6 +170,7 @@ def __init__(self, **options: Unpack[AppOptions]): self.credentials.tenant_id, application_id_uri=self.options.application_id_uri, cloud=self.cloud, + additional_allowed_domains=self.options.additional_allowed_domains, ) @property @@ -212,7 +215,12 @@ async def initialize(self) -> None: # Initialize HttpServer (JWT validation + messaging endpoint route) self.server.on_request = self._process_activity_event - self.server.initialize(credentials=self.credentials, skip_auth=self.options.skip_auth, cloud=self.cloud) + self.server.initialize( + credentials=self.credentials, + skip_auth=self.options.skip_auth, + additional_allowed_domains=self.options.additional_allowed_domains, + cloud=self.cloud, + ) self._initialized = True logger.info("Teams app initialized successfully") @@ -292,7 +300,12 @@ async def stop(self) -> None: raise async def send(self, conversation_id: str, activity: str | ActivityParams | AdaptiveCard): - """Send an activity proactively.""" + """Send an activity proactively to a conversation. + + Sends to the exact conversation ID provided. For channel threads, + the conversation ID must include ``;messageid=`` - use :func:`to_threaded_conversation_id` + to construct it, or use :meth:`reply` which handles this automatically. 
+ """ if not self._initialized: raise ValueError("app not initialized - call app.initialize() or app.start() first") @@ -304,7 +317,7 @@ async def send(self, conversation_id: str, activity: str | ActivityParams | Adap channel_id="msteams", service_url=self.api.service_url, bot=Account(id=self.id), - conversation=ConversationAccount(id=conversation_id, conversation_type="personal"), + conversation=ConversationAccount(id=conversation_id), ) if isinstance(activity, str): @@ -316,6 +329,50 @@ async def send(self, conversation_id: str, activity: str | ActivityParams | Adap return await self.activity_sender.send(activity, conversation_ref) + @overload + async def reply( + self, + conversation_id: str, + message_id: str, + activity: str | ActivityParams | AdaptiveCard, + ) -> SentActivity: ... + + @overload + async def reply( + self, + conversation_id: str, + message_id: str | ActivityParams | AdaptiveCard, + ) -> SentActivity: ... + + async def reply( # type: ignore[reportInconsistentOverload] + self, + conversation_id: str, + message_id: str | ActivityParams | AdaptiveCard = "", + activity: str | ActivityParams | AdaptiveCard | None = None, + ) -> SentActivity: + """Send an activity proactively to a conversation, optionally as a threaded reply. + + **3-arg form** ``reply(conversation_id, message_id, activity)``: + Constructs a threaded conversation ID via :func:`to_threaded_conversation_id` + and sends to that thread. The service determines whether threading is + supported for the given conversation type. + + **2-arg form** ``reply(conversation_id, activity)``: + Sends to the exact conversation ID provided - threaded if it contains + ``;messageid=``, flat otherwise. 
+ + Args: + conversation_id: The conversation ID + message_id: The thread root message ID (3-arg form) or the activity (2-arg form) + activity: The activity to send (only in 3-arg form) + """ + if activity is not None: + if not isinstance(message_id, str): + raise TypeError("message_id must be a string when activity is provided") + return await self.send(to_threaded_conversation_id(conversation_id, message_id), activity) + + return await self.send(conversation_id, message_id) + def use(self, middleware: Callable[[ActivityContext[ActivityBase]], Awaitable[None]]) -> None: """Add middleware to run on all activities.""" self.router.add_handler(lambda _: True, middleware) diff --git a/packages/apps/src/microsoft_teams/apps/auth/token_validator.py b/packages/apps/src/microsoft_teams/apps/auth/token_validator.py index ad04dfca..ba77c8e5 100644 --- a/packages/apps/src/microsoft_teams/apps/auth/token_validator.py +++ b/packages/apps/src/microsoft_teams/apps/auth/token_validator.py @@ -9,6 +9,7 @@ import re from dataclasses import dataclass from typing import Any, Dict, List, Optional +from urllib.parse import urlparse import jwt from microsoft_teams.api.auth.cloud_environment import PUBLIC, CloudEnvironment @@ -18,6 +19,37 @@ logger = logging.getLogger(__name__) +def is_allowed_service_url( + service_url: str, + cloud: CloudEnvironment, + additional_domains: Optional[List[str]] = None, +) -> bool: + """Validate that a service URL hostname is allowed. + + Checks against the cloud environment's allowed service URLs, + plus any additional domains provided by the caller. + Localhost is always allowed for local development. 
+ """ + try: + parsed = urlparse(service_url) + hostname = (parsed.hostname or "").lower() + + if hostname in ("localhost", "127.0.0.1"): + return True + + if parsed.scheme != "https": + return False + + allowed = [d.lower() for d in [*cloud.allowed_service_urls, *(additional_domains or [])]] + if "*" in allowed: + return True + + return hostname in allowed + except Exception: # pragma: no cover + logger.error("Failed to parse service URL for validation: %s", service_url) + return False + + @dataclass class JwtValidationOptions: """Configuration for JWT validation.""" @@ -41,14 +73,28 @@ class TokenValidator: JWT token validator using PyJWKClient for simplified validation. """ - def __init__(self, jwt_validation_options: JwtValidationOptions): + def __init__( + self, + jwt_validation_options: JwtValidationOptions, + cloud: Optional[CloudEnvironment] = None, + additional_allowed_domains: Optional[List[str]] = None, + ): """ Initialize the token validator. Args: jwt_validation_options: Configuration for JWT validation + cloud: Optional cloud environment for service URL validation + additional_allowed_domains: Additional service URL hostnames accepted beyond the cloud + preset. Entries must be bare hostnames matched exactly (case-insensitive) β€” wildcard + patterns like ``"*.example.com"``, URL suffixes, or full URLs are NOT supported. + Pass ``["*"]`` as the sole wildcard to accept any hostname. """ self.options = jwt_validation_options + self.cloud = cloud or PUBLIC + self.additional_allowed_domains = ( + list(additional_allowed_domains) if additional_allowed_domains is not None else None + ) self._jwks_client = jwt.PyJWKClient(jwt_validation_options.jwks_uri) @staticmethod @@ -62,6 +108,7 @@ def for_service( app_id: str, service_url: Optional[str] = None, cloud: Optional[CloudEnvironment] = None, + additional_allowed_domains: Optional[List[str]] = None, ) -> TokenValidator: """Create a validator for Bot Framework service tokens. 
@@ -71,6 +118,10 @@ def for_service( app_id: The bot's Microsoft App ID (used for audience validation) service_url: Optional service URL to validate against token claims cloud: Optional cloud environment for sovereign cloud support + additional_allowed_domains: Additional service URL hostnames accepted beyond the cloud + preset. Entries must be bare hostnames matched exactly (case-insensitive) β€” wildcard + patterns like ``"*.example.com"``, URL suffixes, or full URLs are NOT supported. + Pass ``["*"]`` as the sole wildcard to accept any hostname. """ env = cloud or PUBLIC jwks_keys_uri = re.sub(r"/openidconfiguration$", "/keys", env.openid_metadata_url) @@ -81,7 +132,7 @@ def for_service( jwks_uri=jwks_keys_uri, service_url=service_url, ) - return cls(options) + return cls(options, cloud=env, additional_allowed_domains=additional_allowed_domains) @classmethod def for_entra( @@ -91,6 +142,7 @@ def for_entra( scope: Optional[str] = None, application_id_uri: Optional[str] = None, cloud: Optional[CloudEnvironment] = None, + additional_allowed_domains: Optional[List[str]] = None, ) -> TokenValidator: """Create a validator for Entra ID tokens. @@ -101,11 +153,20 @@ def for_entra( application_id_uri: Optional Application ID URI from Azure portal. Matches webApplicationInfo.resource in the app manifest. cloud: Optional cloud environment for sovereign cloud support + additional_allowed_domains: Additional service URL hostnames accepted beyond the cloud + preset. Entries must be bare hostnames matched exactly (case-insensitive) β€” wildcard + patterns like ``"*.example.com"``, URL suffixes, or full URLs are NOT supported. + Pass ``["*"]`` as the sole wildcard to accept any hostname. """ env = cloud or PUBLIC valid_issuers: List[str] = [] if tenant_id: valid_issuers.append(f"{env.login_endpoint}/{tenant_id}/v2.0") + else: + logger.warning( + "No tenant_id provided for Entra token validation. " + "Issuer validation will be skipped, accepting tokens from any tenant." 
+ ) tenant_id = tenant_id or "common" valid_audiences = cls._default_audiences(app_id) if application_id_uri: @@ -116,7 +177,7 @@ def for_entra( jwks_uri=f"{env.login_endpoint}/{tenant_id}/discovery/v2.0/keys", scope=scope, ) - return cls(options) + return cls(options, cloud=env, additional_allowed_domains=additional_allowed_domains) async def validate_token( self, raw_token: str, service_url: Optional[str] = None, scope: Optional[str] = None @@ -159,10 +220,17 @@ async def validate_token( leeway=JWT_LEEWAY_SECONDS, ) - # Optional service URL validation - expected_service_url = service_url or self.options.service_url - if expected_service_url: - self._validate_service_url(payload, expected_service_url) + # Validate service URL against allowed domains + effective_service_url = service_url or self.options.service_url + if effective_service_url and not is_allowed_service_url( + effective_service_url, self.cloud, self.additional_allowed_domains + ): + logger.error(f"Rejected service URL: {effective_service_url}") + raise jwt.InvalidTokenError("Service URL is not from an allowed domain") + + # Optional service URL claim validation + if effective_service_url: + self._validate_service_url(payload, effective_service_url) required_scope = scope or self.options.scope if required_scope: @@ -205,7 +273,7 @@ def _validate_scope(self, payload: Dict[str, Any], required_scope: str) -> None: payload: The decoded JWT payload required_scope: The scope required to be present in the token """ - scopes = payload.get("scp", "") or "" - if required_scope not in scopes: + scope_set = set((payload.get("scp", "") or "").split()) + if required_scope not in scope_set: logger.error(f"Token missing required scope: {required_scope}") raise jwt.InvalidTokenError(f"Token missing required scope: {required_scope}") diff --git a/packages/apps/src/microsoft_teams/apps/http/http_server.py b/packages/apps/src/microsoft_teams/apps/http/http_server.py index 0d695db2..9d0e2888 100644 --- 
a/packages/apps/src/microsoft_teams/apps/http/http_server.py +++ b/packages/apps/src/microsoft_teams/apps/http/http_server.py @@ -8,11 +8,12 @@ from typing import Any, Awaitable, Callable, Dict, Optional, cast from microsoft_teams.api import Credentials, InvokeResponse, TokenProtocol -from microsoft_teams.api.auth.cloud_environment import CloudEnvironment +from microsoft_teams.api.auth.cloud_environment import PUBLIC, CloudEnvironment from microsoft_teams.api.auth.json_web_token import JsonWebToken from pydantic import BaseModel from ..auth import TokenValidator +from ..auth.token_validator import is_allowed_service_url from ..events import ActivityEvent, CoreActivity from .adapter import HttpRequest, HttpResponse, HttpServerAdapter @@ -36,6 +37,8 @@ def __init__(self, adapter: HttpServerAdapter, messaging_endpoint: str = "/api/m self._on_request: Optional[Callable[[ActivityEvent], Awaitable[InvokeResponse[Any]]]] = None self._token_validator: Optional[TokenValidator] = None self._skip_auth: bool = False + self._additional_allowed_domains: Optional[list[str]] = None + self._cloud: CloudEnvironment = PUBLIC self._initialized: bool = False @property @@ -61,6 +64,7 @@ def initialize( self, credentials: Optional[Credentials] = None, skip_auth: bool = False, + additional_allowed_domains: Optional[list[str]] = None, cloud: Optional[CloudEnvironment] = None, ) -> None: """ @@ -69,16 +73,26 @@ Args: credentials: App credentials for JWT validation. skip_auth: Whether to skip JWT validation. + additional_allowed_domains: Additional service URL hostnames, matched exactly (case-insensitive); pass ``["*"]`` to allow any hostname. cloud: Optional cloud environment for sovereign cloud support. 
""" if self._initialized: return self._skip_auth = skip_auth + self._additional_allowed_domains = additional_allowed_domains + self._cloud = cloud or PUBLIC + + if "*" in (additional_allowed_domains or []): + logger.warning("Service URL validation is disabled via wildcard in additional_allowed_domains") app_id = getattr(credentials, "client_id", None) if credentials else None if app_id and not skip_auth: - self._token_validator = TokenValidator.for_service(app_id, cloud=cloud) + self._token_validator = TokenValidator.for_service( + app_id, + cloud=self._cloud, + additional_allowed_domains=self._additional_allowed_domains, + ) logger.debug("JWT validation enabled for %s", self._messaging_endpoint) self._adapter.register_route("POST", self._messaging_endpoint, self.handle_request) @@ -123,6 +137,11 @@ async def handle_request(self, request: HttpRequest) -> HttpResponse: ), ) + # Validate service URL against allowed domains + if service_url and not is_allowed_service_url(service_url, self._cloud, self._additional_allowed_domains): + logger.warning(f"Rejected service URL: {service_url}") + return HttpResponse(status=403, body={"error": "Service URL not allowed"}) + core_activity = CoreActivity.model_validate(body) activity_type = core_activity.type or "unknown" activity_id = core_activity.id or "unknown" diff --git a/packages/apps/src/microsoft_teams/apps/http_stream.py b/packages/apps/src/microsoft_teams/apps/http_stream.py index 8ee66185..5aa515ee 100644 --- a/packages/apps/src/microsoft_teams/apps/http_stream.py +++ b/packages/apps/src/microsoft_teams/apps/http_stream.py @@ -6,15 +6,13 @@ import asyncio import logging from collections import deque -from typing import Awaitable, Callable, List, Optional, Union +from typing import Awaitable, Callable, Optional, Union from httpx import HTTPStatusError from microsoft_teams.api import ( ApiClient, - Attachment, ChannelData, ConversationReference, - Entity, MessageActivityInput, SentActivity, TypingActivityInput, @@ -71,9 
+69,8 @@ def _reset_state(self) -> None: self._index = 1 self._id: Optional[str] = None self._text: str = "" - self._attachments: List[Attachment] = [] self._channel_data: ChannelData = ChannelData() - self._entities: List[Entity] = [] + self._final_activity: Optional[MessageActivityInput] = None self._queue: deque[Union[MessageActivityInput, TypingActivityInput, str]] = deque() @property @@ -167,14 +164,19 @@ async def close(self) -> Optional[SentActivity]: logger.warning("Timeout while waiting for _id to be set and queue to be empty, cannot close stream") return None - if self._text == "" and self._attachments == []: - logger.warning("no text or attachments to send, cannot close stream") + has_content = ( + self._text != "" + or (self._final_activity and self._final_activity.attachments) + or (self._final_activity and self._final_activity.suggested_actions) + ) + if not has_content: + logger.warning("no text, attachments, or suggested actions to send, cannot close stream") return None - # Build final message + # Build final message from the last emitted MessageActivityInput (last wins) assert self._id is not None, "ID should be set by this point" - activity = MessageActivityInput(text=self._text).with_id(self._id).with_channel_data(self._channel_data) - activity.add_attachments(*self._attachments).add_entities(*self._entities).add_stream_final() + activity = self._final_activity or MessageActivityInput() + activity.with_text(self._text).with_id(self._id).with_channel_data(self._channel_data).add_stream_final() res = await retry(lambda: self._send(activity), options=RetryOptions()) @@ -205,7 +207,7 @@ async def _flush(self) -> None: self._timeout.cancel() self._timeout = None - informative_updates: List[TypingActivityInput] = [] + informative_updates: list[TypingActivityInput] = [] start_length = len(self._queue) while self._queue: @@ -213,8 +215,7 @@ async def _flush(self) -> None: if isinstance(activity, MessageActivityInput): self._text += activity.text or "" - 
self._attachments.extend(activity.attachments or []) - self._entities.extend(activity.entities or []) + self._final_activity = activity if isinstance(activity, (MessageActivityInput, TypingActivityInput)) and activity.channel_data: merged = {**self._channel_data.model_dump(), **activity.channel_data.model_dump()} self._channel_data = ChannelData(**merged) diff --git a/packages/apps/src/microsoft_teams/apps/options.py b/packages/apps/src/microsoft_teams/apps/options.py index f0e58845..17825055 100644 --- a/packages/apps/src/microsoft_teams/apps/options.py +++ b/packages/apps/src/microsoft_teams/apps/options.py @@ -46,6 +46,8 @@ class AppOptions(TypedDict, total=False): storage: Optional[Storage[str, Any]] plugins: Optional[List[PluginBase]] skip_auth: Optional[bool] + additional_allowed_domains: Optional[List[str]] + """Additional allowed service URL hostnames beyond the built-in defaults.""" # HTTP adapter http_server_adapter: Optional[HttpServerAdapter] @@ -86,6 +88,8 @@ class InternalAppOptions: # Fields with defaults skip_auth: bool = False + additional_allowed_domains: Optional[List[str]] = None + """Additional allowed service URL hostnames beyond the built-in defaults.""" default_connection_name: str = "graph" """The OAuth connection name to use for authentication.""" plugins: List[PluginBase] = field(default_factory=lambda: []) diff --git a/packages/apps/src/microsoft_teams/apps/routing/activity_context.py b/packages/apps/src/microsoft_teams/apps/routing/activity_context.py index 5dfb9296..d0ae3cac 100644 --- a/packages/apps/src/microsoft_teams/apps/routing/activity_context.py +++ b/packages/apps/src/microsoft_teams/apps/routing/activity_context.py @@ -161,12 +161,15 @@ async def send( message: str | ActivityParams | AdaptiveCard, conversation_ref: Optional[ConversationReference] = None, ) -> SentActivity: - """ - Send a message to the conversation. + """Send a message in the current conversation without quoting. + + In channels, sends to the current thread. 
In scopes that do not + support threading (group chat, meetings), sends as a normal message. + To send with a visual quote of the inbound message, use :meth:`reply`. Args: message: The message to send, can be a string, ActivityParams, or AdaptiveCard - conversation_ref: Optional conversation reference to override the current conversation reference + conversation_ref: Optional conversation reference to send to a different conversation or thread """ if isinstance(message, str): activity = MessageActivityInput(text=message) @@ -180,7 +183,12 @@ return res async def reply(self, input: str | ActivityParams) -> SentActivity: - """Send a reply to the activity.""" + """Send a message in the current conversation with a visual quote of the inbound message. + + In channels, sends a quoted reply to the current thread. + In other scopes, sends a quoted reply as a normal message. + To send without quoting, use :meth:`send`. + """ activity = MessageActivityInput(text=input) if isinstance(input, str) else input if isinstance(activity, MessageActivityInput): block_quote = self._build_block_quote_for_activity() diff --git a/packages/apps/src/microsoft_teams/apps/utils/__init__.py b/packages/apps/src/microsoft_teams/apps/utils/__init__.py index 64c314f5..511041a5 100644 --- a/packages/apps/src/microsoft_teams/apps/utils/__init__.py +++ b/packages/apps/src/microsoft_teams/apps/utils/__init__.py @@ -6,5 +6,6 @@ from .activity_utils import extract_tenant_id from .graph import create_graph_client from .retry import RetryOptions, retry +from .thread import to_threaded_conversation_id -__all__ = ["create_graph_client", "extract_tenant_id", "retry", "RetryOptions"] +__all__ = ["create_graph_client", "extract_tenant_id", "retry", "RetryOptions", "to_threaded_conversation_id"] diff --git a/packages/apps/src/microsoft_teams/apps/utils/thread.py b/packages/apps/src/microsoft_teams/apps/utils/thread.py new file mode 100644 index 00000000..79c03073 --- /dev/null +++ 
b/packages/apps/src/microsoft_teams/apps/utils/thread.py @@ -0,0 +1,27 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. +""" + + +def to_threaded_conversation_id(conversation_id: str, message_id: str) -> str: + """Construct a threaded conversation ID by appending `;messageid={message_id}` + to the conversation ID. This is the format APX uses to route messages + to a specific thread. + + Args: + conversation_id: The conversation to thread into (e.g. `19:abc@thread.skype`) + message_id: The thread root message ID (must be a non-zero numeric string) + + Returns: + The threaded conversation ID (e.g. `19:abc@thread.skype;messageid=123`) + """ + if not conversation_id: + raise ValueError("conversation_id must be a non-empty string") + + if not message_id or not message_id.isdigit() or message_id == "0": + raise ValueError(f'Invalid message_id "{message_id}": must be a non-zero numeric value') + + # Strip any existing ;messageid= suffix (mirrors APX's NormalizeConversationId) + base_id = conversation_id.split(";")[0] + return f"{base_id};messageid={message_id}" diff --git a/packages/apps/tests/test_app.py b/packages/apps/tests/test_app.py index e1072918..e78bcf0e 100644 --- a/packages/apps/tests/test_app.py +++ b/packages/apps/tests/test_app.py @@ -762,3 +762,64 @@ async def test_proactive_targeted_with_explicit_recipient_succeeds(self, mock_st app.activity_sender.send.assert_called_once() assert result.id == "sent-activity-id" + + +class TestAppReply: + """Test cases for App.reply() method.""" + + @pytest.fixture(scope="function") + def started_app(self): + options = AppOptions(client_id="test-client-id", client_secret="test-secret") + app = App(**options) + app._initialized = True + app.activity_sender.send = AsyncMock( + return_value=SentActivity(id="sent-activity-id", activity_params=MessageActivityInput(text="sent")) + ) + return app + + @pytest.mark.asyncio + async def 
test_reply_with_three_args_constructs_threaded_id(self, started_app): + await started_app.reply("19:abc@thread.skype", "1680000000000", "Hello thread") + + started_app.activity_sender.send.assert_called_once() + _, ref = started_app.activity_sender.send.call_args[0] + assert ref.conversation.id == "19:abc@thread.skype;messageid=1680000000000" + + @pytest.mark.asyncio + async def test_reply_with_two_args_passes_conversation_id_as_is(self, started_app): + await started_app.reply("19:abc@thread.skype", "Hello flat") + + started_app.activity_sender.send.assert_called_once() + _, ref = started_app.activity_sender.send.call_args[0] + assert ref.conversation.id == "19:abc@thread.skype" + + @pytest.mark.asyncio + async def test_reply_with_pre_constructed_threaded_id(self, started_app): + await started_app.reply("19:abc@thread.skype;messageid=123", "Hello") + + started_app.activity_sender.send.assert_called_once() + _, ref = started_app.activity_sender.send.call_args[0] + assert ref.conversation.id == "19:abc@thread.skype;messageid=123" + + @pytest.mark.asyncio + async def test_reply_with_invalid_message_id_raises(self, started_app): + with pytest.raises(ValueError, match="Invalid message_id"): + await started_app.reply("19:abc@thread.skype", "not-a-number", "Hello") + + @pytest.mark.asyncio + async def test_reply_raises_when_not_initialized(self): + options = AppOptions(client_id="test-client-id", client_secret="test-secret") + app = App(**options) + + with pytest.raises(ValueError, match="app not initialized"): + await app.reply("conv-id", "Hello") + + +class TestMergeAppOptions: + def test_merge_with_defaults(self): + from microsoft_teams.apps.options import merge_app_options_with_defaults + + result = merge_app_options_with_defaults(client_id="test-id") + assert result["client_id"] == "test-id" + assert result["skip_auth"] is False + assert result["default_connection_name"] == "graph" diff --git a/packages/apps/tests/test_http_server.py 
b/packages/apps/tests/test_http_server.py index bba4bcd8..03ed93e1 100644 --- a/packages/apps/tests/test_http_server.py +++ b/packages/apps/tests/test_http_server.py @@ -39,6 +39,18 @@ def test_init(self, server, mock_adapter): assert server.adapter is mock_adapter assert server.on_request is None + def test_initialize_idempotent(self, server, mock_adapter): + """Test that initialize can be called multiple times safely.""" + server.initialize(skip_auth=True) + server.initialize(skip_auth=True) + # Should only register route once + assert mock_adapter.register_route.call_count == 1 + + def test_invalid_messaging_endpoint_raises(self, mock_adapter): + """Test that invalid messaging endpoint raises ValueError.""" + with pytest.raises(ValueError, match="must be a non-empty path"): + HttpServer(mock_adapter, messaging_endpoint="no-slash") + def test_messaging_endpoint_default(self, server): """Test default messaging endpoint.""" assert server.messaging_endpoint == "/api/messages" @@ -130,6 +142,127 @@ async def test_handle_activity_no_handler(self, server): assert result["status"] == 500 + @pytest.mark.asyncio + async def test_rejects_missing_bearer_token(self, server, mock_adapter): + """Test that auth-enabled server rejects requests without Bearer token.""" + from unittest.mock import MagicMock + + creds = MagicMock() + creds.client_id = "test-app-id" + server.initialize(credentials=creds) + + request = HttpRequest( + body={"type": "message", "id": "test-123"}, + headers={"authorization": "Basic invalid"}, + ) + + result = await server.handle_request(request) + assert result["status"] == 401 + + @pytest.mark.asyncio + async def test_rejects_invalid_jwt(self, server, mock_adapter): + """Test that auth-enabled server rejects invalid JWT tokens.""" + from unittest.mock import MagicMock + + creds = MagicMock() + creds.client_id = "test-app-id" + server.initialize(credentials=creds) + + request = HttpRequest( + body={"type": "message", "id": "test-123"}, + 
headers={"authorization": "Bearer invalid.jwt.token"}, + ) + + result = await server.handle_request(request) + assert result["status"] == 401 + + @pytest.mark.asyncio + async def test_rejects_non_allowed_service_url(self, server): + """Test that requests with non-allowed serviceUrl are rejected.""" + server.initialize(skip_auth=True) + + request = HttpRequest( + body={ + "type": "message", + "id": "test-123", + "text": "Test", + "serviceUrl": "https://evil.com/steal", + }, + headers={}, + ) + + result = await server.handle_request(request) + assert result["status"] == 403 + + @pytest.mark.asyncio + async def test_accepts_allowed_service_url(self, server): + """Test that requests with allowed serviceUrl pass validation.""" + + async def mock_handler(event): + return InvokeResponse(status=200, body=cast(ConfigResponse, {})) + + server.on_request = mock_handler + server.initialize(skip_auth=True) + + request = HttpRequest( + body={ + "type": "message", + "id": "test-123", + "text": "Test", + "serviceUrl": "https://smba.trafficmanager.net/teams/", + }, + headers={}, + ) + + result = await server.handle_request(request) + assert result["status"] == 200 + + @pytest.mark.asyncio + async def test_allows_any_service_url_with_wildcard_domain(self, mock_adapter): + """Test that additional_allowed_domains=["*"] allows any serviceUrl.""" + server = HttpServer(mock_adapter) + + async def mock_handler(event): + return InvokeResponse(status=200, body=cast(ConfigResponse, {})) + + server.on_request = mock_handler + server.initialize(skip_auth=True, additional_allowed_domains=["*"]) + + request = HttpRequest( + body={ + "type": "message", + "id": "test-123", + "text": "Test", + "serviceUrl": "https://evil.com/steal", + }, + headers={}, + ) + + result = await server.handle_request(request) + assert result["status"] == 200 + + def test_initialize_forwards_allowlist_to_token_validator(self, mock_adapter): + """With skip_auth=False and credentials, the allowlist must reach 
TokenValidator. + + Regression: HttpServer.initialize() previously constructed TokenValidator.for_service() + without passing additional_allowed_domains, so the token-validation service-URL check + ignored user-configured domains. + """ + server = HttpServer(mock_adapter) + credentials = MagicMock(client_id="test-app-id") + + with patch("microsoft_teams.apps.http.http_server.TokenValidator.for_service") as mock_for_service: + mock_for_service.return_value = MagicMock() + server.initialize( + credentials=credentials, + skip_auth=False, + additional_allowed_domains=["canary.botapi.skype.com"], + ) + + mock_for_service.assert_called_once() + _, kwargs = mock_for_service.call_args + assert kwargs.get("additional_allowed_domains") == ["canary.botapi.skype.com"] + class TestFastAPIAdapter: """Test cases for FastAPIAdapter.""" diff --git a/packages/apps/tests/test_http_stream.py b/packages/apps/tests/test_http_stream.py index 9a0f0a46..64d7b124 100644 --- a/packages/apps/tests/test_http_stream.py +++ b/packages/apps/tests/test_http_stream.py @@ -12,9 +12,13 @@ from microsoft_teams.api import ( Account, ApiClient, + CardAction, + CardActionType, ConversationAccount, ConversationReference, + MessageActivityInput, SentActivity, + SuggestedActions, TypingActivityInput, ) from microsoft_teams.apps import HttpStream @@ -327,3 +331,98 @@ async def test_close_returns_none_when_canceled(self, mock_api_client, conversat result = await stream.close() assert result is None + + @pytest.mark.asyncio + async def test_final_activity_last_wins(self, mock_api_client, conversation_reference, patch_loop_call_later): + """When multiple MessageActivityInputs are emitted, the last one's non-text fields are used.""" + loop = asyncio.get_running_loop() + patcher, scheduled = patch_loop_call_later(loop) + + update_call_count = 0 + original_create = mock_api_client.conversations.activities().create + + async def mock_send(activity): + nonlocal update_call_count + if ( + hasattr(activity, "id") + 
and activity.id + and not any(e.type == "streaminfo" for e in (activity.entities or [])) + ): + update_call_count += 1 + return SentActivity(id=activity.id, activity_params=activity) + return await original_create(activity) + + mock_api_client.conversations.activities().create = mock_send + mock_api_client.conversations.activities().update = mock_send + + with patcher: + stream = HttpStream(mock_api_client, conversation_reference) + + early_actions = SuggestedActions( + to=[], + actions=[CardAction(type=CardActionType.IM_BACK, title="Early", value="early")], + ) + late_actions = SuggestedActions( + to=[], + actions=[CardAction(type=CardActionType.IM_BACK, title="Late", value="late")], + ) + + stream.emit("Hello ") + stream.emit(MessageActivityInput(text="world").with_suggested_actions(early_actions)) + stream.emit(MessageActivityInput().add_ai_generated().with_suggested_actions(late_actions)) + await asyncio.sleep(0) + await self._run_scheduled_flushes(scheduled) + + result = await stream.close() + assert result is not None + # The final activity should use the last emitted MessageActivityInput's suggested actions + assert result.activity_params.suggested_actions == late_actions + # Text should be accumulated from all emits + assert result.activity_params.text == "Hello world" + + @pytest.mark.asyncio + async def test_suggested_actions_on_final_message( + self, mock_api_client, conversation_reference, patch_loop_call_later + ): + """Suggested actions emitted mid-stream appear on the final close() message.""" + loop = asyncio.get_running_loop() + patcher, scheduled = patch_loop_call_later(loop) + + update_call_count = 0 + original_create = mock_api_client.conversations.activities().create + + async def mock_send(activity): + nonlocal update_call_count + if ( + hasattr(activity, "id") + and activity.id + and not any(e.type == "streaminfo" for e in (activity.entities or [])) + ): + update_call_count += 1 + return SentActivity(id=activity.id, activity_params=activity) 
+ return await original_create(activity) + + mock_api_client.conversations.activities().create = mock_send + mock_api_client.conversations.activities().update = mock_send + + with patcher: + stream = HttpStream(mock_api_client, conversation_reference) + + actions = SuggestedActions( + to=[], + actions=[ + CardAction(type=CardActionType.IM_BACK, title="Option A", value="a"), + CardAction(type=CardActionType.IM_BACK, title="Option B", value="b"), + ], + ) + + stream.emit("Streaming content...") + stream.emit(MessageActivityInput().with_suggested_actions(actions)) + await asyncio.sleep(0) + await self._run_scheduled_flushes(scheduled) + + result = await stream.close() + assert result is not None + assert result.activity_params.suggested_actions is not None + assert len(result.activity_params.suggested_actions.actions) == 2 + assert result.activity_params.suggested_actions.actions[0].title == "Option A" diff --git a/packages/apps/tests/test_thread.py b/packages/apps/tests/test_thread.py new file mode 100644 index 00000000..fca09dce --- /dev/null +++ b/packages/apps/tests/test_thread.py @@ -0,0 +1,45 @@ +""" +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. 
+""" + +import pytest +from microsoft_teams.apps.utils.thread import to_threaded_conversation_id + + +class TestToThreadedConversationId: + def test_constructs_threaded_conversation_id(self): + result = to_threaded_conversation_id("19:abc@thread.skype", "1680000000000") + assert result == "19:abc@thread.skype;messageid=1680000000000" + + def test_works_with_different_conversation_id_formats(self): + result = to_threaded_conversation_id("19:meeting_abc@thread.v2", "999") + assert result == "19:meeting_abc@thread.v2;messageid=999" + + def test_raises_on_empty_conversation_id(self): + with pytest.raises(ValueError, match="conversation_id must be a non-empty string"): + to_threaded_conversation_id("", "123") + + def test_raises_on_empty_message_id(self): + with pytest.raises(ValueError, match="Invalid message_id"): + to_threaded_conversation_id("19:abc@thread.skype", "") + + def test_raises_on_zero_message_id(self): + with pytest.raises(ValueError, match="Invalid message_id"): + to_threaded_conversation_id("19:abc@thread.skype", "0") + + def test_raises_on_non_numeric_message_id(self): + with pytest.raises(ValueError, match="Invalid message_id"): + to_threaded_conversation_id("19:abc@thread.skype", "abc") + + def test_raises_on_negative_message_id(self): + with pytest.raises(ValueError, match="Invalid message_id"): + to_threaded_conversation_id("19:abc@thread.skype", "-1") + + def test_raises_on_decimal_message_id(self): + with pytest.raises(ValueError, match="Invalid message_id"): + to_threaded_conversation_id("19:abc@thread.skype", "1.5") + + def test_strips_existing_messageid_and_replaces_with_thread_root(self): + result = to_threaded_conversation_id("19:abc@thread.skype;messageid=111", "222") + assert result == "19:abc@thread.skype;messageid=222" diff --git a/packages/apps/tests/test_token_validator.py b/packages/apps/tests/test_token_validator.py index b7534012..0ce5a5bc 100644 --- a/packages/apps/tests/test_token_validator.py +++ 
b/packages/apps/tests/test_token_validator.py @@ -7,6 +7,7 @@ import jwt import pytest +from microsoft_teams.api.auth.cloud_environment import PUBLIC, US_GOV from microsoft_teams.apps.auth.token_validator import TokenValidator # pyright: basic @@ -336,3 +337,252 @@ async def test_validate_entra_token_invalid_audience(self, validator_entra, mock ): with pytest.raises(jwt.InvalidTokenError): await validator_entra.validate_token(token) + + # --- Finding 4: Scope validation uses exact match, not substring --- + + @pytest.mark.asyncio + async def test_scope_validation_rejects_substring_match(self, mock_jwks_client): + """Scope 'User.Read' should NOT match 'User.ReadBasic.All' (substring).""" + validator = TokenValidator.for_entra(app_id="test-app-id", tenant_id="test-tenant-id", scope="User.Read") + validator._jwks_client = mock_jwks_client + payload = { + "iss": "https://login.microsoftonline.com/test-tenant-id/v2.0", + "aud": "test-app-id", + "scp": "User.ReadBasic.All", + "exp": 9999999999, + "iat": 1000000000, + } + + with patch("jwt.decode", return_value=payload): + with pytest.raises(jwt.InvalidTokenError, match="Token missing required scope: User.Read"): + await validator.validate_token("valid.jwt.token") + + @pytest.mark.asyncio + async def test_scope_validation_accepts_exact_match_among_multiple(self, mock_jwks_client): + """Scope 'User.Read' should match when present among multiple scopes.""" + validator = TokenValidator.for_entra(app_id="test-app-id", tenant_id="test-tenant-id", scope="User.Read") + validator._jwks_client = mock_jwks_client + payload = { + "iss": "https://login.microsoftonline.com/test-tenant-id/v2.0", + "aud": "test-app-id", + "scp": "Mail.Read User.Read Files.ReadWrite", + "exp": 9999999999, + "iat": 1000000000, + } + + with patch("jwt.decode", return_value=payload): + result = await validator.validate_token("valid.jwt.token") + assert result["scp"] == "Mail.Read User.Read Files.ReadWrite" + + # --- Finding 10: Issuer validation bypass --- 
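The substring pitfall these scope tests guard against is easy to reproduce in isolation (illustrative snippet, separate from the diff):

```python
scp_claim = "User.ReadBasic.All"

# Naive substring check: false positive, because "User.Read" is a
# prefix of "User.ReadBasic.All".
print("User.Read" in scp_claim)  # True

# Exact match against the whitespace-split scope set: no false positive.
scope_set = set((scp_claim or "").split())
print("User.Read" in scope_set)  # False

# A token that genuinely carries the scope still passes.
print("User.Read" in set("Mail.Read User.Read Files.ReadWrite".split()))  # True
```

This is why the validator splits the `scp` claim on whitespace before membership testing rather than using `in` on the raw string.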
+ + def test_for_entra_without_tenant_id_logs_warning(self, caplog): + """Creating Entra validator without tenant_id should log a warning.""" + import logging + + with caplog.at_level(logging.WARNING): + validator = TokenValidator.for_entra(app_id="test-app-id", tenant_id=None) + assert validator.options.valid_issuers == [] + assert "Issuer validation will be skipped" in caplog.text + + # --- Finding 1: Service URL domain allowlist --- + + @pytest.mark.asyncio + async def test_service_url_rejects_botframework_by_default(self, validator, mock_jwks_client, valid_payload): + """botframework.com should be rejected by default (non-Teams channel).""" + validator._jwks_client = mock_jwks_client + with patch("jwt.decode", return_value=valid_payload): + with pytest.raises(jwt.InvalidTokenError, match="is not from an allowed domain"): + await validator.validate_token("valid.jwt.token", "https://webchat.botframework.com") + + @pytest.mark.asyncio + async def test_service_url_rejects_non_allowed_domain(self, validator, mock_jwks_client, valid_payload): + """Service URL from unknown domain should be rejected.""" + validator._jwks_client = mock_jwks_client + with patch("jwt.decode", return_value=valid_payload): + with pytest.raises(jwt.InvalidTokenError, match="is not from an allowed domain"): + await validator.validate_token("valid.jwt.token", "https://evil.com/api") + + @pytest.mark.asyncio + async def test_service_url_accepts_cloud_preset_fqdn(self, validator, mock_jwks_client): + """Service URL from cloud preset should be accepted.""" + payload = { + "iss": "https://api.botframework.com", + "aud": "test-app-id", + "serviceurl": "https://smba.trafficmanager.net/amer/", + "exp": 9999999999, + "iat": 1000000000, + } + validator._jwks_client = mock_jwks_client + with patch("jwt.decode", return_value=payload): + result = await validator.validate_token("valid.jwt.token", "https://smba.trafficmanager.net/amer/") + assert result["serviceurl"] == 
"https://smba.trafficmanager.net/amer/" + + @pytest.mark.asyncio + async def test_service_url_accepts_localhost(self, validator, mock_jwks_client): + """Localhost service URL should be accepted for development.""" + payload = { + "iss": "https://api.botframework.com", + "aud": "test-app-id", + "serviceurl": "http://localhost:3978", + "exp": 9999999999, + "iat": 1000000000, + } + validator._jwks_client = mock_jwks_client + with patch("jwt.decode", return_value=payload): + result = await validator.validate_token("valid.jwt.token", "http://localhost:3978") + assert result["serviceurl"] == "http://localhost:3978" + + @pytest.mark.asyncio + async def test_service_url_accepts_gov_cloud_with_us_gov_preset(self, mock_jwks_client): + """US Government cloud service URL should be accepted with US_GOV cloud.""" + validator = TokenValidator.for_service("test-app-id", cloud=US_GOV) + payload = { + "iss": "https://api.botframework.us", + "aud": "test-app-id", + "serviceurl": "https://smba.infra.gov.teams.microsoft.us/", + "exp": 9999999999, + "iat": 1000000000, + } + validator._jwks_client = mock_jwks_client + with patch("jwt.decode", return_value=payload): + result = await validator.validate_token("valid.jwt.token", "https://smba.infra.gov.teams.microsoft.us/") + assert isinstance(result, dict) + + @pytest.mark.asyncio + async def test_service_url_rejects_spoofed_suffix(self, validator, mock_jwks_client, valid_payload): + """Domain containing allowed suffix as substring should be rejected.""" + validator._jwks_client = mock_jwks_client + with patch("jwt.decode", return_value=valid_payload): + with pytest.raises(jwt.InvalidTokenError, match="is not from an allowed domain"): + await validator.validate_token("valid.jwt.token", "https://botframework.com.evil.com") + + @pytest.mark.asyncio + async def test_service_url_rejects_attacker_trafficmanager(self, validator, mock_jwks_client, valid_payload): + """Attacker-controlled trafficmanager subdomain should be rejected.""" + 
+        validator._jwks_client = mock_jwks_client
+        with patch("jwt.decode", return_value=valid_payload):
+            with pytest.raises(jwt.InvalidTokenError, match="is not from an allowed domain"):
+                await validator.validate_token("valid.jwt.token", "https://attacker.trafficmanager.net")
+
+    @pytest.mark.asyncio
+    async def test_service_url_accepts_smba_onyx_trafficmanager(self, validator, mock_jwks_client):
+        """smba.onyx.prod.teams.trafficmanager.net should be accepted."""
+        payload = {
+            "iss": "https://api.botframework.com",
+            "aud": "test-app-id",
+            "serviceurl": "https://smba.onyx.prod.teams.trafficmanager.net",
+            "exp": 9999999999,
+            "iat": 1000000000,
+        }
+        validator._jwks_client = mock_jwks_client
+        with patch("jwt.decode", return_value=payload):
+            result = await validator.validate_token(
+                "valid.jwt.token", "https://smba.onyx.prod.teams.trafficmanager.net"
+            )
+        assert isinstance(result, dict)
+
+    def test_is_allowed_service_url_invalid_url(self):
+        """Invalid URL should return False."""
+        from microsoft_teams.apps.auth.token_validator import is_allowed_service_url
+
+        assert is_allowed_service_url("not-a-url", PUBLIC) is False
+
+    def test_is_allowed_service_url_empty(self):
+        """Empty string should return False (no hostname to match)."""
+        from microsoft_teams.apps.auth.token_validator import is_allowed_service_url
+
+        assert is_allowed_service_url("", PUBLIC) is False
+
+    def test_is_allowed_service_url_with_additional_domains(self):
+        """Additional domains should be accepted."""
+        from microsoft_teams.apps.auth.token_validator import is_allowed_service_url
+
+        assert is_allowed_service_url("https://api.custom.com", PUBLIC, ["api.custom.com"]) is True
+        assert is_allowed_service_url("https://api.custom.com", PUBLIC) is False
+
+    def test_is_allowed_service_url_wildcard(self):
+        """Wildcard '*' should accept any domain."""
+        from microsoft_teams.apps.auth.token_validator import is_allowed_service_url
+
+        assert is_allowed_service_url("https://anything.example.com", PUBLIC, ["*"]) is True
+
+    # ----- additional_allowed_domains plumbing through validate_token -----
+
+    @pytest.mark.asyncio
+    async def test_validate_token_honors_additional_allowed_domains(self, mock_jwks_client):
+        """validate_token must accept a service URL listed in additional_allowed_domains.
+
+        Regression: for_service previously dropped the allowlist, so non-default channels
+        (canary, custom) were rejected even when the user configured them.
+        """
+        validator = TokenValidator.for_service("test-app-id", additional_allowed_domains=["canary.botapi.skype.com"])
+        validator._jwks_client = mock_jwks_client
+        payload = {
+            "iss": "https://api.botframework.com",
+            "aud": "test-app-id",
+            "serviceurl": "https://canary.botapi.skype.com/amer",
+            "exp": 9999999999,
+            "iat": 1000000000,
+        }
+        with patch("jwt.decode", return_value=payload):
+            result = await validator.validate_token("valid.jwt.token", "https://canary.botapi.skype.com/amer")
+        assert isinstance(result, dict)
+
+    @pytest.mark.asyncio
+    async def test_validate_token_rejects_when_domain_not_in_allowlist(self, mock_jwks_client):
+        """Sanity check: without additional_allowed_domains, canary is rejected."""
+        validator = TokenValidator.for_service("test-app-id")
+        validator._jwks_client = mock_jwks_client
+        payload = {
+            "iss": "https://api.botframework.com",
+            "aud": "test-app-id",
+            "serviceurl": "https://canary.botapi.skype.com/amer",
+            "exp": 9999999999,
+            "iat": 1000000000,
+        }
+        with patch("jwt.decode", return_value=payload):
+            with pytest.raises(jwt.InvalidTokenError, match="Service URL is not from an allowed domain"):
+                await validator.validate_token("valid.jwt.token", "https://canary.botapi.skype.com/amer")
+
+    @pytest.mark.asyncio
+    async def test_validate_token_wildcard_allows_arbitrary_domain(self, mock_jwks_client):
+        """additional_allowed_domains=['*'] must disable the allowlist check at validate_token level."""
+        validator = TokenValidator.for_service("test-app-id", additional_allowed_domains=["*"])
+        validator._jwks_client = mock_jwks_client
+        payload = {
+            "iss": "https://api.botframework.com",
+            "aud": "test-app-id",
+            "serviceurl": "https://anything.example.com/",
+            "exp": 9999999999,
+            "iat": 1000000000,
+        }
+        with patch("jwt.decode", return_value=payload):
+            result = await validator.validate_token("valid.jwt.token", "https://anything.example.com/")
+        assert isinstance(result, dict)
+
+    def test_for_service_stores_additional_allowed_domains(self):
+        """Factory must surface the allowlist on the instance so validate_token can use it."""
+        validator = TokenValidator.for_service(
+            "test-app-id", additional_allowed_domains=["a.example.com", "b.example.com"]
+        )
+        assert validator.additional_allowed_domains == ["a.example.com", "b.example.com"]
+
+    def test_for_entra_stores_additional_allowed_domains(self):
+        """Same plumbing check for the Entra factory."""
+        validator = TokenValidator.for_entra(
+            app_id="test-app-id",
+            tenant_id="test-tenant-id",
+            additional_allowed_domains=["custom.example.com"],
+        )
+        assert validator.additional_allowed_domains == ["custom.example.com"]
+
+    def test_init_copies_additional_allowed_domains(self):
+        """Mutating the caller's list after construction must not change validator behavior."""
+        caller_list = ["a.example.com"]
+        validator = TokenValidator.for_service("test-app-id", additional_allowed_domains=caller_list)
+
+        caller_list.append("b.example.com")
+
+        assert validator.additional_allowed_domains == ["a.example.com"]
diff --git a/packages/devtools/README.md b/packages/devtools/README.md
index b53145a1..1d0734e0 100644
--- a/packages/devtools/README.md
+++ b/packages/devtools/README.md
@@ -1,5 +1,5 @@
-> [!CAUTION]
-> This project is in public preview. We'll do our best to maintain compatibility, but there may be breaking changes in upcoming releases.
+> [!WARNING]
+> **Deprecated** — This package was originally in preview, but we have decided to stop maintaining it before General Availability. We recommend testing with Microsoft Teams directly, or with the [Agents Playground](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/debug-your-agents-playground).
 
 # Microsoft Teams DevTools
diff --git a/packages/devtools/pyproject.toml b/packages/devtools/pyproject.toml
index 121f973d..af32b41a 100644
--- a/packages/devtools/pyproject.toml
+++ b/packages/devtools/pyproject.toml
@@ -8,6 +8,9 @@ requires-python = ">=3.12,<3.15"
 repository = "https://github.com/microsoft/teams.py"
 keywords = ["microsoft", "teams", "ai", "bot", "agents"]
 license = "MIT"
+classifiers = [
+    "Development Status :: 7 - Inactive",
+]
 dependencies = [
     "uvicorn[standard]>=0.34.3",
     "fastapi>=0.115.13",
diff --git a/packages/devtools/src/microsoft_teams/devtools/__init__.py b/packages/devtools/src/microsoft_teams/devtools/__init__.py
index e41d9d78..cda8aaf4 100644
--- a/packages/devtools/src/microsoft_teams/devtools/__init__.py
+++ b/packages/devtools/src/microsoft_teams/devtools/__init__.py
@@ -4,10 +4,19 @@
 """
 
 import logging
+import warnings
 
 from .devtools_plugin import DevToolsPlugin
 from .page import Page
 
 logging.getLogger(__name__).addHandler(logging.NullHandler())
 
+warnings.warn(
+    "microsoft-teams-devtools is deprecated and will no longer be maintained. "
+    "We recommend testing with Microsoft Teams directly, or with the Agents Playground: "
+    "https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/debug-your-agents-playground",
+    FutureWarning,
+    stacklevel=2,
+)
+
 __all__: list[str] = ["DevToolsPlugin", "Page"]
diff --git a/packages/devtools/src/microsoft_teams/devtools/devtools_plugin.py b/packages/devtools/src/microsoft_teams/devtools/devtools_plugin.py
index ff75aaf7..20153088 100644
--- a/packages/devtools/src/microsoft_teams/devtools/devtools_plugin.py
+++ b/packages/devtools/src/microsoft_teams/devtools/devtools_plugin.py
@@ -114,6 +114,15 @@ def on_stopped_callback(self, callback: Optional[Callable[[], Awaitable[None]]])
         self._on_stopped_callback = callback
 
     async def on_init(self) -> None:
+        python_env = os.environ.get("PYTHON_ENV", "").lower()
+        node_env = os.environ.get("NODE_ENV", "").lower()
+        if python_env == "production" or node_env == "production":
+            raise RuntimeError(
+                "Devtools plugin cannot be used in production environments "
+                "(PYTHON_ENV=production or NODE_ENV=production). "
+                "Remove the devtools plugin from your app configuration."
+            )
+
         logger.warning("⚠️ Devtools is not secure and should not be used in production environments ⚠️")
 
     async def on_start(self, event: PluginStartEvent) -> None:
diff --git a/packages/devtools/tests/test_devtools_plugin.py b/packages/devtools/tests/test_devtools_plugin.py
new file mode 100644
index 00000000..6ac81dd2
--- /dev/null
+++ b/packages/devtools/tests/test_devtools_plugin.py
@@ -0,0 +1,85 @@
+"""
+Copyright (c) Microsoft Corporation. All rights reserved.
+Licensed under the MIT License.
+""" + +import pytest +from microsoft_teams.devtools.devtools_plugin import DevToolsPlugin + +# pyright: basic + + +class TestDevToolsPluginEnvironmentGuard: + """Test that DevTools refuses to start in production environments.""" + + @pytest.fixture(autouse=True) + def _clear_env(self, monkeypatch): + """Clear environment variables before each test.""" + monkeypatch.delenv("PYTHON_ENV", raising=False) + monkeypatch.delenv("NODE_ENV", raising=False) + + @pytest.mark.asyncio + async def test_raises_when_python_env_is_production(self, monkeypatch): + monkeypatch.setenv("PYTHON_ENV", "production") + plugin = DevToolsPlugin() + + with pytest.raises(RuntimeError, match="cannot be used in production"): + await plugin.on_init() + + @pytest.mark.asyncio + async def test_raises_when_node_env_is_production(self, monkeypatch): + monkeypatch.setenv("NODE_ENV", "production") + plugin = DevToolsPlugin() + + with pytest.raises(RuntimeError, match="cannot be used in production"): + await plugin.on_init() + + @pytest.mark.asyncio + async def test_does_not_raise_in_development(self, monkeypatch): + monkeypatch.setenv("NODE_ENV", "development") + plugin = DevToolsPlugin() + + # Should not raise β€” just logs a warning + await plugin.on_init() + + @pytest.mark.asyncio + async def test_does_not_raise_when_env_not_set(self): + plugin = DevToolsPlugin() + + # Should not raise when no env var is set + await plugin.on_init() + + +class TestDevToolsPluginInit: + """Test DevToolsPlugin initialization and properties.""" + + def test_init_defaults(self): + plugin = DevToolsPlugin() + assert plugin.pages == [] + assert plugin.sockets == {} + assert plugin.pending == {} + assert plugin.on_ready_callback is None + assert plugin.on_stopped_callback is None + + def test_callback_setters(self): + plugin = DevToolsPlugin() + + async def ready(): + pass + + async def stopped(): + pass + + plugin.on_ready_callback = ready + plugin.on_stopped_callback = stopped + assert plugin.on_ready_callback is ready 
+ assert plugin.on_stopped_callback is stopped + + def test_add_page(self): + from microsoft_teams.devtools.page import Page + + plugin = DevToolsPlugin() + page = Page(name="test", display_name="Test Page", url="/test") + plugin.add_page(page) + assert len(plugin.pages) == 1 + assert plugin.pages[0].name == "test" diff --git a/packages/mcpplugin/README.md b/packages/mcpplugin/README.md index 10ca3c8e..36814404 100644 --- a/packages/mcpplugin/README.md +++ b/packages/mcpplugin/README.md @@ -1,5 +1,8 @@ # Microsoft Teams MCP Plugin +> [!WARNING] +> **Deprecated** β€” This package was originally in preview, but we have decided to stop maintaining it before General Availability. We recommend using the official [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk) instead, which provides better long-term support for Model Context Protocol integrations. +

diff --git a/packages/mcpplugin/pyproject.toml b/packages/mcpplugin/pyproject.toml
index 1e07579c..072e1328 100644
--- a/packages/mcpplugin/pyproject.toml
+++ b/packages/mcpplugin/pyproject.toml
@@ -8,6 +8,9 @@ requires-python = ">=3.12,<3.15"
 repository = "https://github.com/microsoft/teams.py"
 keywords = ["microsoft", "teams", "ai", "bot", "agents"]
 license = "MIT"
+classifiers = [
+    "Development Status :: 7 - Inactive",
+]
 dependencies = [
     "mcp>=1.13.1",
     "microsoft-teams-common",
diff --git a/packages/mcpplugin/src/microsoft_teams/mcpplugin/__init__.py b/packages/mcpplugin/src/microsoft_teams/mcpplugin/__init__.py
index 378035b2..e7f880c2 100644
--- a/packages/mcpplugin/src/microsoft_teams/mcpplugin/__init__.py
+++ b/packages/mcpplugin/src/microsoft_teams/mcpplugin/__init__.py
@@ -4,6 +4,7 @@
 """
 
 import logging
+import warnings
 
 from . import models
 from .ai_plugin import McpClientPlugin, McpClientPluginParams, McpToolDetails
@@ -12,5 +13,12 @@
 
 logging.getLogger(__name__).addHandler(logging.NullHandler())
 
+warnings.warn(
+    "microsoft-teams-mcpplugin is deprecated and will no longer be maintained. "
+    "Use the official MCP Python SDK instead: https://github.com/modelcontextprotocol/python-sdk",
+    FutureWarning,
+    stacklevel=2,
+)
+
 __all__: list[str] = ["McpClientPlugin", "McpClientPluginParams", "McpToolDetails", "McpServerPlugin"]
 __all__.extend(models.__all__)
diff --git a/packages/openai/README.md b/packages/openai/README.md
index a02f6c18..32bfa3b9 100644
--- a/packages/openai/README.md
+++ b/packages/openai/README.md
@@ -1,5 +1,8 @@
 # Microsoft Teams OpenAI
+> [!WARNING]
+> **Deprecated** — This package was originally in preview, but we have decided to stop maintaining it before General Availability. We recommend using the official [OpenAI Python SDK](https://github.com/openai/openai-python) instead, which provides better long-term support for OpenAI integrations.
+

diff --git a/packages/openai/pyproject.toml b/packages/openai/pyproject.toml index 064a2974..fe161742 100644 --- a/packages/openai/pyproject.toml +++ b/packages/openai/pyproject.toml @@ -8,6 +8,9 @@ requires-python = ">=3.12,<3.15" repository = "https://github.com/microsoft/teams.py" keywords = ["microsoft", "teams", "ai", "bot", "agents"] license = "MIT" +classifiers = [ + "Development Status :: 7 - Inactive", +] dependencies = [ "microsoft-teams-ai", "microsoft-teams-common", diff --git a/packages/openai/src/microsoft_teams/openai/__init__.py b/packages/openai/src/microsoft_teams/openai/__init__.py index da6e7607..2a41e63a 100644 --- a/packages/openai/src/microsoft_teams/openai/__init__.py +++ b/packages/openai/src/microsoft_teams/openai/__init__.py @@ -4,10 +4,18 @@ """ import logging +import warnings from .completions_model import OpenAICompletionsAIModel from .responses_chat_model import OpenAIResponsesAIModel logging.getLogger(__name__).addHandler(logging.NullHandler()) +warnings.warn( + "microsoft-teams-openai is deprecated and will no longer be maintained. 
" + "Use the official OpenAI Python SDK instead: https://github.com/openai/openai-python", + FutureWarning, + stacklevel=2, +) + __all__ = ["OpenAICompletionsAIModel", "OpenAIResponsesAIModel"] diff --git a/pyproject.toml b/pyproject.toml index edcfa220..9058081f 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -3,13 +3,13 @@ "microsoft-teams-api" = { workspace = true } "microsoft-teams-common" = { workspace = true } "microsoft-teams-cards" = { workspace = true } -"microsoft-teams-devtools" = { workspace = true } "microsoft-teams-graph" = { workspace = true } "microsoft-teams-ai" = { workspace = true } "microsoft-teams-openai" = { workspace = true } "microsoft-teams-mcpplugin" = { workspace = true } "microsoft-teams-a2a" = { workspace = true } "microsoft-teams-botbuilder" = { workspace = true } +"microsoft-teams-devtools" = { workspace = true } "hatch-teams-build" = { path = "tools/hatch-teams-build" } [tool.uv.workspace] diff --git a/pyrightconfig.json b/pyrightconfig.json index bcbde077..32abd7b4 100644 --- a/pyrightconfig.json +++ b/pyrightconfig.json @@ -12,12 +12,12 @@ "packages/cards/src", "packages/apps/src", "packages/graph/src", - "packages/devtools/src", "packages/ai/src", "packages/openai/src", "packages/mcpplugin/src", "packages/a2aprotocol/src", - "packages/botbuilder/src" + "packages/botbuilder/src", + "packages/devtools/src" ], "typeCheckingMode": "strict", "executionEnvironments": [ @@ -27,9 +27,9 @@ "reportIncompatibleMethodOverride": "none" }, { - "root": "packages/botbuilder/src", - "reportMissingTypeStubs": "none", - "reportUnknownMemberType": "none" + "root": "packages/botbuilder/src", + "reportMissingTypeStubs": "none", + "reportUnknownMemberType": "none" }, { "root": "examples/botbuilder/src", @@ -37,4 +37,4 @@ "reportUnknownMemberType": "none" } ] -} +} \ No newline at end of file diff --git a/uv.lock b/uv.lock index 9ccdd318..80f09d55 100644 --- a/uv.lock +++ b/uv.lock @@ -2,21 +2,21 @@ version = 1 revision = 3 requires-python = ">=3.12, 
<3.15" resolution-markers = [ - "python_full_version >= '3.13'", + "python_full_version >= '3.14'", + "python_full_version == '3.13.*'", "python_full_version < '3.13'", ] [manifest] members = [ - "a2a", - "ai-test", + "a2a-test", + "ai-agentframework", "botbuilder", "cards", "dialogs", "echo", "graph", "http-adapters", - "mcp-client", "mcp-server", "meetings", "message-extensions", @@ -37,6 +37,7 @@ members = [ "stream", "tab", "targeted-messages", + "threading", ] [manifest.dependency-groups] @@ -53,68 +54,102 @@ dev = [ release = [{ name = "hatch-teams-build", directory = "tools/hatch-teams-build" }] [[package]] -name = "a2a" +name = "a2a-sdk" +version = "0.3.9" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "google-api-core" }, + { name = "httpx" }, + { name = "httpx-sse" }, + { name = "protobuf" }, + { name = "pydantic" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/65/0b/80671e784f61b55ac4c340d125d121ba91eba58ad7ba0f03b53b3831cd32/a2a_sdk-0.3.9.tar.gz", hash = "sha256:1dff7b5b1cab0b221519d0faed50176e200a1a87a8de8b64308d876505cc7c77", size = 224528, upload-time = "2025-10-15T17:35:28.299Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/34/ee/53b2da6d2768b136f996b8c6ab00ebcc44852f9a33816a64deaca6b279fe/a2a_sdk-0.3.9-py3-none-any.whl", hash = "sha256:7ed03a915bae98def46ea0313786da0a7a488346c3dc8af88407bb0b2a763926", size = 139027, upload-time = "2025-10-15T17:35:26.628Z" }, +] + +[package.optional-dependencies] +http-server = [ + { name = "fastapi" }, + { name = "sse-starlette" }, + { name = "starlette" }, +] + +[[package]] +name = "a2a-test" version = "0.1.0" source = { virtual = "examples/a2a-test" } dependencies = [ + { name = "a2a-sdk", extra = ["http-server"] }, + { name = "agent-framework-core" }, + { name = "agent-framework-openai" }, { name = "dotenv" }, - { name = "microsoft-teams-a2a" }, - { name = "microsoft-teams-ai" }, + { name = "httpx" }, { name = "microsoft-teams-apps" }, + { name 
= "microsoft-teams-cards" }, { name = "microsoft-teams-common" }, - { name = "microsoft-teams-openai" }, + { name = "uvicorn" }, ] [package.metadata] requires-dist = [ + { name = "a2a-sdk", extras = ["core", "http-server"], specifier = ">=0.3.7" }, + { name = "agent-framework-core" }, + { name = "agent-framework-openai" }, { name = "dotenv", specifier = ">=0.9.9" }, - { name = "microsoft-teams-a2a", editable = "packages/a2aprotocol" }, - { name = "microsoft-teams-ai", editable = "packages/ai" }, + { name = "httpx", specifier = ">=0.27" }, { name = "microsoft-teams-apps", editable = "packages/apps" }, + { name = "microsoft-teams-cards", editable = "packages/cards" }, { name = "microsoft-teams-common", editable = "packages/common" }, - { name = "microsoft-teams-openai", editable = "packages/openai" }, + { name = "uvicorn", specifier = ">=0.30" }, ] [[package]] -name = "a2a-sdk" -version = "0.3.9" +name = "agent-framework-core" +version = "1.1.0" source = { registry = "https://pypi.org/simple" } dependencies = [ - { name = "google-api-core" }, - { name = "httpx" }, - { name = "httpx-sse" }, - { name = "protobuf" }, + { name = "opentelemetry-api" }, { name = "pydantic" }, + { name = "python-dotenv" }, + { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/65/0b/80671e784f61b55ac4c340d125d121ba91eba58ad7ba0f03b53b3831cd32/a2a_sdk-0.3.9.tar.gz", hash = "sha256:1dff7b5b1cab0b221519d0faed50176e200a1a87a8de8b64308d876505cc7c77", size = 224528, upload-time = "2025-10-15T17:35:28.299Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f7/97/7f429dec279cd156c4a08ab18f07263ef57134ece340e598a2c4b0d2e149/agent_framework_core-1.1.0.tar.gz", hash = "sha256:e399a9aad7fa6757b3b10a11c8b16a7229f908823720bc12996dc11e8e4d9d78", size = 290690, upload-time = "2026-04-21T06:20:14.195Z" } wheels = [ - { url = 
"https://files.pythonhosted.org/packages/34/ee/53b2da6d2768b136f996b8c6ab00ebcc44852f9a33816a64deaca6b279fe/a2a_sdk-0.3.9-py3-none-any.whl", hash = "sha256:7ed03a915bae98def46ea0313786da0a7a488346c3dc8af88407bb0b2a763926", size = 139027, upload-time = "2025-10-15T17:35:26.628Z" }, + { url = "https://files.pythonhosted.org/packages/1a/98/0628a25bfda7d270dc01b6f3a4a65f9b7d14f017978a237fd8ff858a2fca/agent_framework_core-1.1.0-py3-none-any.whl", hash = "sha256:0778ae7735403ac28216cd8ae26486e3d71f42641b9c4026dcb6f4a9885632e8", size = 329185, upload-time = "2026-04-21T06:29:26.077Z" }, ] -[package.optional-dependencies] -http-server = [ - { name = "fastapi" }, - { name = "sse-starlette" }, - { name = "starlette" }, +[[package]] +name = "agent-framework-openai" +version = "1.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "agent-framework-core" }, + { name = "openai" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/3d/47/6f1f5ae20a0a763e63723f615bb63837173dcf69b817ee66f1829e137a71/agent_framework_openai-1.1.0.tar.gz", hash = "sha256:b34d05f40fb76e3acd79ea9b59db96f4d1268702b9b353b95d78dd6a05e586fd", size = 45378, upload-time = "2026-04-21T06:20:04.278Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/1c/58/1878b9ac4db703f40c687129c0bccdbf88c94d557b64f0172f3c4e954558/agent_framework_openai-1.1.0-py3-none-any.whl", hash = "sha256:8fbcdb87fbc3fb6aa6f3d781a61a2c48fbc308d2ee8165454f94e53e026a787b", size = 50280, upload-time = "2026-04-21T06:20:11.864Z" }, ] [[package]] -name = "ai-test" +name = "ai-agentframework" version = "0.1.0" -source = { virtual = "examples/ai-test" } +source = { virtual = "examples/ai-agentframework" } dependencies = [ + { name = "agent-framework-core" }, + { name = "agent-framework-openai" }, { name = "dotenv" }, - { name = "microsoft-teams-ai" }, { name = "microsoft-teams-apps" }, - { name = "microsoft-teams-openai" }, ] [package.metadata] requires-dist = [ + { name = 
"agent-framework-core" }, + { name = "agent-framework-openai" }, { name = "dotenv", specifier = ">=0.9.9" }, - { name = "microsoft-teams-ai", editable = "packages/ai" }, { name = "microsoft-teams-apps", editable = "packages/apps" }, - { name = "microsoft-teams-openai", editable = "packages/openai" }, ] [[package]] @@ -292,14 +327,14 @@ wheels = [ [[package]] name = "authlib" -version = "1.6.9" +version = "1.6.11" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cryptography" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/af/98/00d3dd826d46959ad8e32af2dbb2398868fd9fd0683c26e56d0789bd0e68/authlib-1.6.9.tar.gz", hash = "sha256:d8f2421e7e5980cc1ddb4e32d3f5fa659cfaf60d8eaf3281ebed192e4ab74f04", size = 165134, upload-time = "2026-03-02T07:44:01.998Z" } +sdist = { url = "https://files.pythonhosted.org/packages/28/10/b325d58ffe86815b399334a101e63bc6fa4e1953921cb23703b48a0a0220/authlib-1.6.11.tar.gz", hash = "sha256:64db35b9b01aeccb4715a6c9a6613a06f2bd7be2ab9d2eb89edd1dfc7580a38f", size = 165359, upload-time = "2026-04-16T07:22:50.279Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/53/23/b65f568ed0c22f1efacb744d2db1a33c8068f384b8c9b482b52ebdbc3ef6/authlib-1.6.9-py2.py3-none-any.whl", hash = "sha256:f08b4c14e08f0861dc18a32357b33fbcfd2ea86cfe3fe149484b4d764c4a0ac3", size = 244197, upload-time = "2026-03-02T07:44:00.307Z" }, + { url = "https://files.pythonhosted.org/packages/57/2f/55fca558f925a51db046e5b929deb317ddb05afed74b22d89f4eca578980/authlib-1.6.11-py2.py3-none-any.whl", hash = "sha256:c8687a9a26451c51a34a06fa17bb97cb15bba46a6a626755e2d7f50da8bff3e3", size = 244469, upload-time = "2026-04-16T07:22:48.413Z" }, ] [[package]] @@ -361,7 +396,6 @@ dependencies = [ { name = "dotenv" }, { name = "microsoft-teams-apps" }, { name = "microsoft-teams-botbuilder" }, - { name = "microsoft-teams-devtools" }, ] [package.metadata] @@ -370,7 +404,6 @@ requires-dist = [ { name = "dotenv", specifier = ">=0.9.9" }, { name = 
"microsoft-teams-apps", editable = "packages/apps" }, { name = "microsoft-teams-botbuilder", editable = "packages/botbuilder" }, - { name = "microsoft-teams-devtools", editable = "packages/devtools" }, ] [[package]] @@ -904,7 +937,6 @@ dependencies = [ { name = "dotenv" }, { name = "microsoft-teams-api" }, { name = "microsoft-teams-apps" }, - { name = "microsoft-teams-devtools" }, ] [package.metadata] @@ -912,7 +944,6 @@ requires-dist = [ { name = "dotenv", specifier = ">=0.9.9" }, { name = "microsoft-teams-api", editable = "packages/api" }, { name = "microsoft-teams-apps", editable = "packages/apps" }, - { name = "microsoft-teams-devtools", editable = "packages/devtools" }, ] [[package]] @@ -1413,50 +1444,74 @@ wheels = [ [[package]] name = "jiter" -version = "0.11.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/9d/c0/a3bb4cc13aced219dd18191ea66e874266bd8aa7b96744e495e1c733aa2d/jiter-0.11.0.tar.gz", hash = "sha256:1d9637eaf8c1d6a63d6562f2a6e5ab3af946c66037eb1b894e8fad75422266e4", size = 167094, upload-time = "2025-09-15T09:20:38.212Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/ba/b5/3009b112b8f673e568ef79af9863d8309a15f0a8cdcc06ed6092051f377e/jiter-0.11.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:2fb7b377688cc3850bbe5c192a6bd493562a0bc50cbc8b047316428fbae00ada", size = 305510, upload-time = "2025-09-15T09:19:25.893Z" }, - { url = "https://files.pythonhosted.org/packages/fe/82/15514244e03b9e71e086bbe2a6de3e4616b48f07d5f834200c873956fb8c/jiter-0.11.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a1b7cbe3f25bd0d8abb468ba4302a5d45617ee61b2a7a638f63fee1dc086be99", size = 316521, upload-time = "2025-09-15T09:19:27.525Z" }, - { url = "https://files.pythonhosted.org/packages/92/94/7a2e905f40ad2d6d660e00b68d818f9e29fb87ffe82774f06191e93cbe4a/jiter-0.11.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:c0a7f0ec81d5b7588c5cade1eb1925b91436ae6726dc2df2348524aeabad5de6", size = 338214, upload-time = "2025-09-15T09:19:28.727Z" }, - { url = "https://files.pythonhosted.org/packages/a8/9c/5791ed5bdc76f12110158d3316a7a3ec0b1413d018b41c5ed399549d3ad5/jiter-0.11.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:07630bb46ea2a6b9c6ed986c6e17e35b26148cce2c535454b26ee3f0e8dcaba1", size = 361280, upload-time = "2025-09-15T09:19:30.013Z" }, - { url = "https://files.pythonhosted.org/packages/d4/7f/b7d82d77ff0d2cb06424141000176b53a9e6b16a1125525bb51ea4990c2e/jiter-0.11.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7764f27d28cd4a9cbc61704dfcd80c903ce3aad106a37902d3270cd6673d17f4", size = 487895, upload-time = "2025-09-15T09:19:31.424Z" }, - { url = "https://files.pythonhosted.org/packages/42/44/10a1475d46f1fc1fd5cc2e82c58e7bca0ce5852208e0fa5df2f949353321/jiter-0.11.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1d4a6c4a737d486f77f842aeb22807edecb4a9417e6700c7b981e16d34ba7c72", size = 378421, upload-time = "2025-09-15T09:19:32.746Z" }, - { url = "https://files.pythonhosted.org/packages/9a/5f/0dc34563d8164d31d07bc09d141d3da08157a68dcd1f9b886fa4e917805b/jiter-0.11.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cf408d2a0abd919b60de8c2e7bc5eeab72d4dafd18784152acc7c9adc3291591", size = 347932, upload-time = "2025-09-15T09:19:34.612Z" }, - { url = "https://files.pythonhosted.org/packages/f7/de/b68f32a4fcb7b4a682b37c73a0e5dae32180140cd1caf11aef6ad40ddbf2/jiter-0.11.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:cdef53eda7d18e799625023e1e250dbc18fbc275153039b873ec74d7e8883e09", size = 386959, upload-time = "2025-09-15T09:19:35.994Z" }, - { url = "https://files.pythonhosted.org/packages/76/0a/c08c92e713b6e28972a846a81ce374883dac2f78ec6f39a0dad9f2339c3a/jiter-0.11.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = 
"sha256:53933a38ef7b551dd9c7f1064f9d7bb235bb3168d0fa5f14f0798d1b7ea0d9c5", size = 517187, upload-time = "2025-09-15T09:19:37.426Z" }, - { url = "https://files.pythonhosted.org/packages/89/b5/4a283bec43b15aad54fcae18d951f06a2ec3f78db5708d3b59a48e9c3fbd/jiter-0.11.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:11840d2324c9ab5162fc1abba23bc922124fedcff0d7b7f85fffa291e2f69206", size = 509461, upload-time = "2025-09-15T09:19:38.761Z" }, - { url = "https://files.pythonhosted.org/packages/34/a5/f8bad793010534ea73c985caaeef8cc22dfb1fedb15220ecdf15c623c07a/jiter-0.11.0-cp312-cp312-win32.whl", hash = "sha256:4f01a744d24a5f2bb4a11657a1b27b61dc038ae2e674621a74020406e08f749b", size = 206664, upload-time = "2025-09-15T09:19:40.096Z" }, - { url = "https://files.pythonhosted.org/packages/ed/42/5823ec2b1469395a160b4bf5f14326b4a098f3b6898fbd327366789fa5d3/jiter-0.11.0-cp312-cp312-win_amd64.whl", hash = "sha256:29fff31190ab3a26de026da2f187814f4b9c6695361e20a9ac2123e4d4378a4c", size = 203520, upload-time = "2025-09-15T09:19:41.798Z" }, - { url = "https://files.pythonhosted.org/packages/97/c4/d530e514d0f4f29b2b68145e7b389cbc7cac7f9c8c23df43b04d3d10fa3e/jiter-0.11.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:4441a91b80a80249f9a6452c14b2c24708f139f64de959943dfeaa6cb915e8eb", size = 305021, upload-time = "2025-09-15T09:19:43.523Z" }, - { url = "https://files.pythonhosted.org/packages/7a/77/796a19c567c5734cbfc736a6f987affc0d5f240af8e12063c0fb93990ffa/jiter-0.11.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:ff85fc6d2a431251ad82dbd1ea953affb5a60376b62e7d6809c5cd058bb39471", size = 314384, upload-time = "2025-09-15T09:19:44.849Z" }, - { url = "https://files.pythonhosted.org/packages/14/9c/824334de0b037b91b6f3fa9fe5a191c83977c7ec4abe17795d3cb6d174cf/jiter-0.11.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5e86126d64706fd28dfc46f910d496923c6f95b395138c02d0e252947f452bd", size = 337389, upload-time = "2025-09-15T09:19:46.094Z" }, - { 
url = "https://files.pythonhosted.org/packages/a2/95/ed4feab69e6cf9b2176ea29d4ef9d01a01db210a3a2c8a31a44ecdc68c38/jiter-0.11.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4ad8bd82165961867a10f52010590ce0b7a8c53da5ddd8bbb62fef68c181b921", size = 360519, upload-time = "2025-09-15T09:19:47.494Z" }, - { url = "https://files.pythonhosted.org/packages/b5/0c/2ad00f38d3e583caba3909d95b7da1c3a7cd82c0aa81ff4317a8016fb581/jiter-0.11.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b42c2cd74273455ce439fd9528db0c6e84b5623cb74572305bdd9f2f2961d3df", size = 487198, upload-time = "2025-09-15T09:19:49.116Z" }, - { url = "https://files.pythonhosted.org/packages/ea/8b/919b64cf3499b79bdfba6036da7b0cac5d62d5c75a28fb45bad7819e22f0/jiter-0.11.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f0062dab98172dd0599fcdbf90214d0dcde070b1ff38a00cc1b90e111f071982", size = 377835, upload-time = "2025-09-15T09:19:50.468Z" }, - { url = "https://files.pythonhosted.org/packages/29/7f/8ebe15b6e0a8026b0d286c083b553779b4dd63db35b43a3f171b544de91d/jiter-0.11.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb948402821bc76d1f6ef0f9e19b816f9b09f8577844ba7140f0b6afe994bc64", size = 347655, upload-time = "2025-09-15T09:19:51.726Z" }, - { url = "https://files.pythonhosted.org/packages/8e/64/332127cef7e94ac75719dda07b9a472af6158ba819088d87f17f3226a769/jiter-0.11.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:25a5b1110cca7329fd0daf5060faa1234be5c11e988948e4f1a1923b6a457fe1", size = 386135, upload-time = "2025-09-15T09:19:53.075Z" }, - { url = "https://files.pythonhosted.org/packages/20/c8/557b63527442f84c14774159948262a9d4fabb0d61166f11568f22fc60d2/jiter-0.11.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:bf11807e802a214daf6c485037778843fadd3e2ec29377ae17e0706ec1a25758", size = 516063, upload-time = "2025-09-15T09:19:54.447Z" }, - { url = 
"https://files.pythonhosted.org/packages/86/13/4164c819df4a43cdc8047f9a42880f0ceef5afeb22e8b9675c0528ebdccd/jiter-0.11.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:dbb57da40631c267861dd0090461222060960012d70fd6e4c799b0f62d0ba166", size = 508139, upload-time = "2025-09-15T09:19:55.764Z" }, - { url = "https://files.pythonhosted.org/packages/fa/70/6e06929b401b331d41ddb4afb9f91cd1168218e3371972f0afa51c9f3c31/jiter-0.11.0-cp313-cp313-win32.whl", hash = "sha256:8e36924dad32c48d3c5e188d169e71dc6e84d6cb8dedefea089de5739d1d2f80", size = 206369, upload-time = "2025-09-15T09:19:57.048Z" }, - { url = "https://files.pythonhosted.org/packages/f4/0d/8185b8e15de6dce24f6afae63380e16377dd75686d56007baa4f29723ea1/jiter-0.11.0-cp313-cp313-win_amd64.whl", hash = "sha256:452d13e4fd59698408087235259cebe67d9d49173b4dacb3e8d35ce4acf385d6", size = 202538, upload-time = "2025-09-15T09:19:58.35Z" }, - { url = "https://files.pythonhosted.org/packages/13/3a/d61707803260d59520721fa326babfae25e9573a88d8b7b9cb54c5423a59/jiter-0.11.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:089f9df9f69532d1339e83142438668f52c97cd22ee2d1195551c2b1a9e6cf33", size = 313737, upload-time = "2025-09-15T09:19:59.638Z" }, - { url = "https://files.pythonhosted.org/packages/cd/cc/c9f0eec5d00f2a1da89f6bdfac12b8afdf8d5ad974184863c75060026457/jiter-0.11.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29ed1fe69a8c69bf0f2a962d8d706c7b89b50f1332cd6b9fbda014f60bd03a03", size = 346183, upload-time = "2025-09-15T09:20:01.442Z" }, - { url = "https://files.pythonhosted.org/packages/a6/87/fc632776344e7aabbab05a95a0075476f418c5d29ab0f2eec672b7a1f0ac/jiter-0.11.0-cp313-cp313t-win_amd64.whl", hash = "sha256:a4d71d7ea6ea8786291423fe209acf6f8d398a0759d03e7f24094acb8ab686ba", size = 204225, upload-time = "2025-09-15T09:20:03.102Z" }, - { url = 
"https://files.pythonhosted.org/packages/ee/3b/e7f45be7d3969bdf2e3cd4b816a7a1d272507cd0edd2d6dc4b07514f2d9a/jiter-0.11.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:9a6dff27eca70930bdbe4cbb7c1a4ba8526e13b63dc808c0670083d2d51a4a72", size = 304414, upload-time = "2025-09-15T09:20:04.357Z" }, - { url = "https://files.pythonhosted.org/packages/06/32/13e8e0d152631fcc1907ceb4943711471be70496d14888ec6e92034e2caf/jiter-0.11.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:b1ae2a7593a62132c7d4c2abbee80bbbb94fdc6d157e2c6cc966250c564ef774", size = 314223, upload-time = "2025-09-15T09:20:05.631Z" }, - { url = "https://files.pythonhosted.org/packages/0c/7e/abedd5b5a20ca083f778d96bba0d2366567fcecb0e6e34ff42640d5d7a18/jiter-0.11.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7b13a431dba4b059e9e43019d3022346d009baf5066c24dcdea321a303cde9f0", size = 337306, upload-time = "2025-09-15T09:20:06.917Z" }, - { url = "https://files.pythonhosted.org/packages/ac/e2/30d59bdc1204c86aa975ec72c48c482fee6633120ee9c3ab755e4dfefea8/jiter-0.11.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:af62e84ca3889604ebb645df3b0a3f3bcf6b92babbff642bd214616f57abb93a", size = 360565, upload-time = "2025-09-15T09:20:08.283Z" }, - { url = "https://files.pythonhosted.org/packages/fe/88/567288e0d2ed9fa8f7a3b425fdaf2cb82b998633c24fe0d98f5417321aa8/jiter-0.11.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c6f3b32bb723246e6b351aecace52aba78adb8eeb4b2391630322dc30ff6c773", size = 486465, upload-time = "2025-09-15T09:20:09.613Z" }, - { url = "https://files.pythonhosted.org/packages/18/6e/7b72d09273214cadd15970e91dd5ed9634bee605176107db21e1e4205eb1/jiter-0.11.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:adcab442f4a099a358a7f562eaa54ed6456fb866e922c6545a717be51dbed7d7", size = 377581, upload-time = "2025-09-15T09:20:10.884Z" }, - { url = 
"https://files.pythonhosted.org/packages/58/52/4db456319f9d14deed325f70102577492e9d7e87cf7097bda9769a1fcacb/jiter-0.11.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9967c2ab338ee2b2c0102fd379ec2693c496abf71ffd47e4d791d1f593b68e2", size = 347102, upload-time = "2025-09-15T09:20:12.175Z" }, - { url = "https://files.pythonhosted.org/packages/ce/b4/433d5703c38b26083aec7a733eb5be96f9c6085d0e270a87ca6482cbf049/jiter-0.11.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e7d0bed3b187af8b47a981d9742ddfc1d9b252a7235471ad6078e7e4e5fe75c2", size = 386477, upload-time = "2025-09-15T09:20:13.428Z" }, - { url = "https://files.pythonhosted.org/packages/c8/7a/a60bfd9c55b55b07c5c441c5085f06420b6d493ce9db28d069cc5b45d9f3/jiter-0.11.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:f6fe0283e903ebc55f1a6cc569b8c1f3bf4abd026fed85e3ff8598a9e6f982f0", size = 516004, upload-time = "2025-09-15T09:20:14.848Z" }, - { url = "https://files.pythonhosted.org/packages/2e/46/f8363e5ecc179b4ed0ca6cb0a6d3bfc266078578c71ff30642ea2ce2f203/jiter-0.11.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:4ee5821e3d66606b29ae5b497230b304f1376f38137d69e35f8d2bd5f310ff73", size = 507855, upload-time = "2025-09-15T09:20:16.176Z" }, - { url = "https://files.pythonhosted.org/packages/90/33/396083357d51d7ff0f9805852c288af47480d30dd31d8abc74909b020761/jiter-0.11.0-cp314-cp314-win32.whl", hash = "sha256:c2d13ba7567ca8799f17c76ed56b1d49be30df996eb7fa33e46b62800562a5e2", size = 205802, upload-time = "2025-09-15T09:20:17.661Z" }, - { url = "https://files.pythonhosted.org/packages/e7/ab/eb06ca556b2551d41de7d03bf2ee24285fa3d0c58c5f8d95c64c9c3281b1/jiter-0.11.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:fb4790497369d134a07fc763cc88888c46f734abdd66f9fdf7865038bf3a8f40", size = 313405, upload-time = "2025-09-15T09:20:18.918Z" }, - { url = 
"https://files.pythonhosted.org/packages/af/22/7ab7b4ec3a1c1f03aef376af11d23b05abcca3fb31fbca1e7557053b1ba2/jiter-0.11.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6e2bbf24f16ba5ad4441a9845e40e4ea0cb9eed00e76ba94050664ef53ef4406", size = 347102, upload-time = "2025-09-15T09:20:20.16Z" }, +version = "0.14.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/6e/c1/0cddc6eb17d4c53a99840953f95dd3accdc5cfc7a337b0e9b26476276be9/jiter-0.14.0.tar.gz", hash = "sha256:e8a39e66dac7153cf3f964a12aad515afa8d74938ec5cc0018adcdae5367c79e", size = 165725, upload-time = "2026-04-10T14:28:42.01Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/5a/68/7390a418f10897da93b158f2d5a8bd0bcd73a0f9ec3bb36917085bb759ef/jiter-0.14.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:2fb2ce3a7bc331256dfb14cefc34832366bb28a9aca81deaf43bbf2a5659e607", size = 316295, upload-time = "2026-04-10T14:26:24.887Z" }, + { url = "https://files.pythonhosted.org/packages/60/a0/5854ac00ff63551c52c6c89534ec6aba4b93474e7924d64e860b1c94165b/jiter-0.14.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5252a7ca23785cef5d02d4ece6077a1b556a410c591b379f82091c3001e14844", size = 315898, upload-time = "2026-04-10T14:26:26.601Z" }, + { url = "https://files.pythonhosted.org/packages/41/a1/4f44832650a16b18e8391f1bf1d6ca4909bc738351826bcc198bba4357f4/jiter-0.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c409578cbd77c338975670ada777add4efd53379667edf0aceea730cabede6fb", size = 343730, upload-time = "2026-04-10T14:26:28.326Z" }, + { url = "https://files.pythonhosted.org/packages/48/64/a329e9d469f86307203594b1707e11ae51c3348d03bfd514a5f997870012/jiter-0.14.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7ede4331a1899d604463369c730dbb961ffdc5312bc7f16c41c2896415b1304a", size = 370102, upload-time = "2026-04-10T14:26:30.089Z" }, + { url = 
"https://files.pythonhosted.org/packages/94/c1/5e3dfc59635aa4d4c7bd20a820ac1d09b8ed851568356802cf1c08edb3cf/jiter-0.14.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:92cd8b6025981a041f5310430310b55b25ca593972c16407af8837d3d7d2ca01", size = 461335, upload-time = "2026-04-10T14:26:31.911Z" }, + { url = "https://files.pythonhosted.org/packages/e3/1b/dd157009dbc058f7b00108f545ccb72a2d56461395c4fc7b9cfdccb00af4/jiter-0.14.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:351bf6eda4e3a7ceb876377840c702e9a3e4ecc4624dbfb2d6463c67ae52637d", size = 378536, upload-time = "2026-04-10T14:26:33.595Z" }, + { url = "https://files.pythonhosted.org/packages/91/78/256013667b7c10b8834f8e6e54cd3e562d4c6e34227a1596addccc05e38c/jiter-0.14.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c1dcfbeb93d9ecd9ca128bbf8910120367777973fa193fb9a39c31237d8df165", size = 353859, upload-time = "2026-04-10T14:26:35.098Z" }, + { url = "https://files.pythonhosted.org/packages/de/d9/137d65ade9093a409fe80955ce60b12bb753722c986467aeda47faf450ad/jiter-0.14.0-cp312-cp312-manylinux_2_31_riscv64.whl", hash = "sha256:ae039aaef8de3f8157ecc1fdd4d85043ac4f57538c245a0afaecb8321ec951c3", size = 357626, upload-time = "2026-04-10T14:26:36.685Z" }, + { url = "https://files.pythonhosted.org/packages/2e/48/76750835b87029342727c1a268bea8878ab988caf81ee4e7b880900eeb5a/jiter-0.14.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7d9d51eb96c82a9652933bd769fe6de66877d6eb2b2440e281f2938c51b5643e", size = 393172, upload-time = "2026-04-10T14:26:38.097Z" }, + { url = "https://files.pythonhosted.org/packages/a6/60/456c4e81d5c8045279aefe60e9e483be08793828800a4e64add8fdde7f2a/jiter-0.14.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:d824ca4148b705970bf4e120924a212fdfca9859a73e42bd7889a63a4ea6bb98", size = 520300, upload-time = "2026-04-10T14:26:39.532Z" }, + { url = 
"https://files.pythonhosted.org/packages/a8/9f/2020e0984c235f678dced38fe4eec3058cf528e6af36ebf969b410305941/jiter-0.14.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:ff3a6465b3a0f54b1a430f45c3c0ba7d61ceb45cbc3e33f9e1a7f638d690baf3", size = 553059, upload-time = "2026-04-10T14:26:40.991Z" }, + { url = "https://files.pythonhosted.org/packages/ef/32/e2d298e1a22a4bbe6062136d1c7192db7dba003a6975e51d9a9eecabc4c2/jiter-0.14.0-cp312-cp312-win32.whl", hash = "sha256:5dec7c0a3e98d2a3f8a2e67382d0d7c3ac60c69103a4b271da889b4e8bb1e129", size = 206030, upload-time = "2026-04-10T14:26:42.517Z" }, + { url = "https://files.pythonhosted.org/packages/36/ac/96369141b3d8a4a8e4590e983085efe1c436f35c0cda940dd76d942e3e40/jiter-0.14.0-cp312-cp312-win_amd64.whl", hash = "sha256:fc7e37b4b8bc7e80a63ad6cfa5fc11fab27dbfea4cc4ae644b1ab3f273dc348f", size = 201603, upload-time = "2026-04-10T14:26:44.328Z" }, + { url = "https://files.pythonhosted.org/packages/01/c3/75d847f264647017d7e3052bbcc8b1e24b95fa139c320c5f5066fa7a0bdd/jiter-0.14.0-cp312-cp312-win_arm64.whl", hash = "sha256:ee4a72f12847ef29b072aee9ad5474041ab2924106bdca9fcf5d7d965853e057", size = 191525, upload-time = "2026-04-10T14:26:46Z" }, + { url = "https://files.pythonhosted.org/packages/97/2a/09f70020898507a89279659a1afe3364d57fc1b2c89949081975d135f6f5/jiter-0.14.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:af72f204cf4d44258e5b4c1745130ac45ddab0e71a06333b01de660ab4187a94", size = 315502, upload-time = "2026-04-10T14:26:47.697Z" }, + { url = "https://files.pythonhosted.org/packages/d6/be/080c96a45cd74f9fce5db4fd68510b88087fb37ffe2541ff73c12db92535/jiter-0.14.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4b77da71f6e819be5fbcec11a453fde5b1d0267ef6ed487e2a392fd8e14e4e3a", size = 314870, upload-time = "2026-04-10T14:26:49.149Z" }, + { url = 
"https://files.pythonhosted.org/packages/7d/5e/2d0fee155826a968a832cc32438de5e2a193292c8721ca70d0b53e58245b/jiter-0.14.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f4ea612fe8b84b8b04e51d0e78029ecf3466348e25973f953de6e6a59aa4c1", size = 343406, upload-time = "2026-04-10T14:26:50.762Z" }, + { url = "https://files.pythonhosted.org/packages/70/af/bf9ee0d3a4f8dc0d679fc1337f874fe60cdbf841ebbb304b374e1c9aaceb/jiter-0.14.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:62fe2451f8fcc0240261e6a4df18ecbcd58327857e61e625b2393ea3b468aac9", size = 369415, upload-time = "2026-04-10T14:26:52.188Z" }, + { url = "https://files.pythonhosted.org/packages/0f/83/8e8561eadba31f4d3948a5b712fb0447ec71c3560b57a855449e7b8ddc98/jiter-0.14.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6112f26f5afc75bcb475787d29da3aa92f9d09c7858f632f4be6ffe607be82e9", size = 461456, upload-time = "2026-04-10T14:26:53.611Z" }, + { url = "https://files.pythonhosted.org/packages/f6/c9/c5299e826a5fe6108d172b344033f61c69b1bb979dd8d9ddd4278a160971/jiter-0.14.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:215a6cb8fb7dc702aa35d475cc00ddc7f970e5c0b1417fb4b4ac5d82fa2a29db", size = 378488, upload-time = "2026-04-10T14:26:55.211Z" }, + { url = "https://files.pythonhosted.org/packages/5d/37/c16d9d15c0a471b8644b1abe3c82668092a707d9bedcf076f24ff2e380cd/jiter-0.14.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc4ab96a30fb3cb2c7e0cd33f7616c8860da5f5674438988a54ac717caccdbaa", size = 353242, upload-time = "2026-04-10T14:26:56.705Z" }, + { url = "https://files.pythonhosted.org/packages/58/ea/8050cb0dc654e728e1bfacbc0c640772f2181af5dedd13ae70145743a439/jiter-0.14.0-cp313-cp313-manylinux_2_31_riscv64.whl", hash = "sha256:3a99c1387b1f2928f799a9de899193484d66206a50e98233b6b088a7f0c1edb2", size = 356823, upload-time = "2026-04-10T14:26:58.281Z" }, + { url = 
"https://files.pythonhosted.org/packages/b0/3b/cf71506d270e5f84d97326bf220e47aed9b95e9a4a060758fb07772170ab/jiter-0.14.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ab18d11074485438695f8d34a1b6da61db9754248f96d51341956607a8f39985", size = 392564, upload-time = "2026-04-10T14:27:00.018Z" }, + { url = "https://files.pythonhosted.org/packages/b0/cc/8c6c74a3efb5bd671bfd14f51e8a73375464ca914b1551bc3b40e26ac2c9/jiter-0.14.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:801028dcfc26ac0895e4964cbc0fd62c73be9fd4a7d7b1aaf6e5790033a719b7", size = 520322, upload-time = "2026-04-10T14:27:01.664Z" }, + { url = "https://files.pythonhosted.org/packages/41/24/68d7b883ec959884ddf00d019b2e0e82ba81b167e1253684fa90519ce33c/jiter-0.14.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ad425b087aafb4a1c7e1e98a279200743b9aaf30c3e0ba723aec93f061bd9bc8", size = 552619, upload-time = "2026-04-10T14:27:03.316Z" }, + { url = "https://files.pythonhosted.org/packages/b6/89/b1a0985223bbf3150ff9e8f46f98fc9360c1de94f48abe271bbe1b465682/jiter-0.14.0-cp313-cp313-win32.whl", hash = "sha256:882bcb9b334318e233950b8be366fe5f92c86b66a7e449e76975dfd6d776a01f", size = 205699, upload-time = "2026-04-10T14:27:04.662Z" }, + { url = "https://files.pythonhosted.org/packages/4c/19/3f339a5a7f14a11730e67f6be34f9d5105751d547b615ef593fa122a5ded/jiter-0.14.0-cp313-cp313-win_amd64.whl", hash = "sha256:9b8c571a5dba09b98bd3462b5a53f27209a5cbbe85670391692ede71974e979f", size = 201323, upload-time = "2026-04-10T14:27:06.139Z" }, + { url = "https://files.pythonhosted.org/packages/50/56/752dd89c84be0e022a8ea3720bcfa0a8431db79a962578544812ce061739/jiter-0.14.0-cp313-cp313-win_arm64.whl", hash = "sha256:34f19dcc35cb1abe7c369b3756babf8c7f04595c0807a848df8f26ef8298ef92", size = 191099, upload-time = "2026-04-10T14:27:07.564Z" }, + { url = 
"https://files.pythonhosted.org/packages/91/28/292916f354f25a1fe8cf2c918d1415c699a4a659ae00be0430e1c5d9ffea/jiter-0.14.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e89bcd7d426a75bb4952c696b267075790d854a07aad4c9894551a82c5b574ab", size = 320880, upload-time = "2026-04-10T14:27:09.326Z" }, + { url = "https://files.pythonhosted.org/packages/ad/c7/b002a7d8b8957ac3d469bd59c18ef4b1595a5216ae0de639a287b9816023/jiter-0.14.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7b25beaa0d4447ea8c7ae0c18c688905d34840d7d0b937f2f7bdd52162c98a40", size = 346563, upload-time = "2026-04-10T14:27:11.287Z" }, + { url = "https://files.pythonhosted.org/packages/f9/3b/f8d07580d8706021d255a6356b8fab13ee4c869412995550ce6ed4ddf97d/jiter-0.14.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:651a8758dd413c51e3b7f6557cdc6921faf70b14106f45f969f091f5cda990ea", size = 357928, upload-time = "2026-04-10T14:27:12.729Z" }, + { url = "https://files.pythonhosted.org/packages/47/5b/ac1a974da29e35507230383110ffec59998b290a8732585d04e19a9eb5ba/jiter-0.14.0-cp313-cp313t-win_amd64.whl", hash = "sha256:e1a7eead856a5038a8d291f1447176ab0b525c77a279a058121b5fccee257f6f", size = 203519, upload-time = "2026-04-10T14:27:14.125Z" }, + { url = "https://files.pythonhosted.org/packages/96/6d/9fc8433d667d2454271378a79747d8c76c10b51b482b454e6190e511f244/jiter-0.14.0-cp313-cp313t-win_arm64.whl", hash = "sha256:2e692633a12cda97e352fdcd1c4acc971b1c28707e1e33aeef782b0cbf051975", size = 190113, upload-time = "2026-04-10T14:27:16.638Z" }, + { url = "https://files.pythonhosted.org/packages/4f/1e/354ed92461b165bd581f9ef5150971a572c873ec3b68a916d5aa91da3cc2/jiter-0.14.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:6f396837fc7577871ca8c12edaf239ed9ccef3bbe39904ae9b8b63ce0a48b140", size = 315277, upload-time = "2026-04-10T14:27:18.109Z" }, + { url = 
"https://files.pythonhosted.org/packages/a6/95/8c7c7028aa8636ac21b7a55faef3e34215e6ed0cbf5ae58258427f621aa3/jiter-0.14.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:a4d50ea3d8ba4176f79754333bd35f1bbcd28e91adc13eb9b7ca91bc52a6cef9", size = 315923, upload-time = "2026-04-10T14:27:19.603Z" }, + { url = "https://files.pythonhosted.org/packages/47/40/e2a852a44c4a089f2681a16611b7ce113224a80fd8504c46d78491b47220/jiter-0.14.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce17f8a050447d1b4153bda4fb7d26e6a9e74eb4f4a41913f30934c5075bf615", size = 344943, upload-time = "2026-04-10T14:27:21.262Z" }, + { url = "https://files.pythonhosted.org/packages/fc/1f/670f92adee1e9895eac41e8a4d623b6da68c4d46249d8b556b60b63f949e/jiter-0.14.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f4f1c4b125e1652aefbc2e2c1617b60a160ab789d180e3d423c41439e5f32850", size = 369725, upload-time = "2026-04-10T14:27:22.766Z" }, + { url = "https://files.pythonhosted.org/packages/01/2f/541c9ba567d05de1c4874a0f8f8c5e3fd78e2b874266623da9a775cf46e0/jiter-0.14.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:be808176a6a3a14321d18c603f2d40741858a7c4fc982f83232842689fe86dd9", size = 461210, upload-time = "2026-04-10T14:27:24.315Z" }, + { url = "https://files.pythonhosted.org/packages/ce/a9/c31cbec09627e0d5de7aeaec7690dba03e090caa808fefd8133137cf45bc/jiter-0.14.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:26679d58ba816f88c3849306dd58cb863a90a1cf352cdd4ef67e30ccf8a77994", size = 380002, upload-time = "2026-04-10T14:27:26.155Z" }, + { url = "https://files.pythonhosted.org/packages/50/02/3c05c1666c41904a2f607475a73e7a4763d1cbde2d18229c4f85b22dc253/jiter-0.14.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80381f5a19af8fa9aef743f080e34f6b25ebd89656475f8cf0470ec6157052aa", size = 354678, upload-time = "2026-04-10T14:27:27.701Z" }, + { url = 
"https://files.pythonhosted.org/packages/7d/97/e15b33545c2b13518f560d695f974b9891b311641bdcf178d63177e8801e/jiter-0.14.0-cp314-cp314-manylinux_2_31_riscv64.whl", hash = "sha256:004df5fdb8ecbd6d99f3227df18ba1a259254c4359736a2e6f036c944e02d7c5", size = 358920, upload-time = "2026-04-10T14:27:29.256Z" }, + { url = "https://files.pythonhosted.org/packages/ad/d2/8b1461def6b96ba44530df20d07ef7a1c7da22f3f9bf1727e2d611077bf1/jiter-0.14.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:cff5708f7ed0fa098f2b53446c6fa74c48469118e5cd7497b4f1cd569ab06928", size = 394512, upload-time = "2026-04-10T14:27:31.344Z" }, + { url = "https://files.pythonhosted.org/packages/e3/88/837566dd6ed6e452e8d3205355afd484ce44b2533edfa4ed73a298ea893e/jiter-0.14.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:2492e5f06c36a976d25c7cc347a60e26d5470178d44cde1b9b75e60b4e519f28", size = 521120, upload-time = "2026-04-10T14:27:33.299Z" }, + { url = "https://files.pythonhosted.org/packages/89/6b/b00b45c4d1b4c031777fe161d620b755b5b02cdade1e316dcb46e4471d63/jiter-0.14.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:7609cfbe3a03d37bfdbf5052012d5a879e72b83168a363deae7b3a26564d57de", size = 553668, upload-time = "2026-04-10T14:27:34.868Z" }, + { url = "https://files.pythonhosted.org/packages/ad/d8/6fe5b42011d19397433d345716eac16728ac241862a2aac9c91923c7509a/jiter-0.14.0-cp314-cp314-win32.whl", hash = "sha256:7282342d32e357543565286b6450378c3cd402eea333fc1ebe146f1fabb306fc", size = 207001, upload-time = "2026-04-10T14:27:36.455Z" }, + { url = "https://files.pythonhosted.org/packages/e5/43/5c2e08da1efad5e410f0eaaabeadd954812612c33fbbd8fd5328b489139d/jiter-0.14.0-cp314-cp314-win_amd64.whl", hash = "sha256:bd77945f38866a448e73b0b7637366afa814d4617790ecd88a18ca74377e6c02", size = 202187, upload-time = "2026-04-10T14:27:38Z" }, + { url = 
"https://files.pythonhosted.org/packages/aa/1f/6e39ac0b4cdfa23e606af5b245df5f9adaa76f35e0c5096790da430ca506/jiter-0.14.0-cp314-cp314-win_arm64.whl", hash = "sha256:f2d4c61da0821ee42e0cdf5489da60a6d074306313a377c2b35af464955a3611", size = 192257, upload-time = "2026-04-10T14:27:39.504Z" }, + { url = "https://files.pythonhosted.org/packages/05/57/7dbc0ffbbb5176a27e3518716608aa464aee2e2887dc938f0b900a120449/jiter-0.14.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1bf7ff85517dd2f20a5750081d2b75083c1b269cf75afc7511bdf1f9548beb3b", size = 323441, upload-time = "2026-04-10T14:27:41.039Z" }, + { url = "https://files.pythonhosted.org/packages/83/6e/7b3314398d8983f06b557aa21b670511ec72d3b79a68ee5e4d9bff972286/jiter-0.14.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c8ef8791c3e78d6c6b157c6d360fbb5c715bebb8113bc6a9303c5caff012754a", size = 348109, upload-time = "2026-04-10T14:27:42.552Z" }, + { url = "https://files.pythonhosted.org/packages/ae/4f/8dc674bcd7db6dba566de73c08c763c337058baff1dbeb34567045b27cdc/jiter-0.14.0-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e74663b8b10da1fe0f4e4703fd7980d24ad17174b6bb35d8498d6e3ebce2ae6a", size = 368328, upload-time = "2026-04-10T14:27:44.574Z" }, + { url = "https://files.pythonhosted.org/packages/3b/5f/188e09a1f20906f98bbdec44ed820e19f4e8eb8aff88b9d1a5a497587ff3/jiter-0.14.0-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1aca29ba52913f78362ec9c2da62f22cdc4c3083313403f90c15460979b84d9b", size = 463301, upload-time = "2026-04-10T14:27:46.717Z" }, + { url = "https://files.pythonhosted.org/packages/ac/f0/19046ef965ed8f349e8554775bb12ff4352f443fbe12b95d31f575891256/jiter-0.14.0-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8b39b7d87a952b79949af5fef44d2544e58c21a28da7f1bae3ef166455c61746", size = 378891, upload-time = "2026-04-10T14:27:48.32Z" }, + { url = 
"https://files.pythonhosted.org/packages/c4/c3/da43bd8431ee175695777ee78cf0e93eacbb47393ff493f18c45231b427d/jiter-0.14.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d918a68b26e9fab068c2b5453577ef04943ab2807b9a6275df2a812599a310", size = 360749, upload-time = "2026-04-10T14:27:49.88Z" }, + { url = "https://files.pythonhosted.org/packages/72/26/e054771be889707c6161dbdec9c23d33a9ec70945395d70f07cfea1e9a6f/jiter-0.14.0-cp314-cp314t-manylinux_2_31_riscv64.whl", hash = "sha256:b08997c35aee1201c1a5361466a8fb9162d03ae7bf6568df70b6c859f1e654a4", size = 358526, upload-time = "2026-04-10T14:27:51.504Z" }, + { url = "https://files.pythonhosted.org/packages/c3/0f/7bea65ea2a6d91f2bf989ff11a18136644392bf2b0497a1fa50934c30a9c/jiter-0.14.0-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:260bf7ca20704d58d41f669e5e9fe7fe2fa72901a6b324e79056f5d52e9c9be2", size = 393926, upload-time = "2026-04-10T14:27:53.368Z" }, + { url = "https://files.pythonhosted.org/packages/3c/a1/b1ff7d70deef61ac0b7c6c2f12d2ace950cdeecb4fdc94500a0926802857/jiter-0.14.0-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:37826e3df29e60f30a382f9294348d0238ef127f4b5d7f5f8da78b5b9e050560", size = 521052, upload-time = "2026-04-10T14:27:55.058Z" }, + { url = "https://files.pythonhosted.org/packages/0b/7b/3b0649983cbaf15eda26a414b5b1982e910c67bd6f7b1b490f3cfc76896a/jiter-0.14.0-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:645be49c46f2900937ba0eaf871ad5183c96858c0af74b6becc7f4e367e36e06", size = 553716, upload-time = "2026-04-10T14:27:57.269Z" }, + { url = "https://files.pythonhosted.org/packages/97/f8/33d78c83bd93ae0c0af05293a6660f88a1977caef39a6d72a84afab94ce0/jiter-0.14.0-cp314-cp314t-win32.whl", hash = "sha256:2f7877ed45118de283786178eceaf877110abacd04fde31efff3940ae9672674", size = 207957, upload-time = "2026-04-10T14:27:59.285Z" }, + { url = 
"https://files.pythonhosted.org/packages/d6/ac/2b760516c03e2227826d1f7025d89bf6bf6357a28fe75c2a2800873c50bf/jiter-0.14.0-cp314-cp314t-win_amd64.whl", hash = "sha256:14c0cb10337c49f5eafe8e7364daca5e29a020ea03580b8f8e6c597fed4e1588", size = 204690, upload-time = "2026-04-10T14:28:00.962Z" }, + { url = "https://files.pythonhosted.org/packages/dc/2e/a44c20c58aeed0355f2d326969a181696aeb551a25195f47563908a815be/jiter-0.14.0-cp314-cp314t-win_arm64.whl", hash = "sha256:5419d4aa2024961da9fe12a9cfe7484996735dca99e8e090b5c88595ef1951ff", size = 191338, upload-time = "2026-04-10T14:28:02.853Z" }, + { url = "https://files.pythonhosted.org/packages/21/42/9042c3f3019de4adcb8c16591c325ec7255beea9fcd33a42a43f3b0b1000/jiter-0.14.0-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:fbd9e482663ca9d005d051330e4d2d8150bb208a209409c10f7e7dfdf7c49da9", size = 308810, upload-time = "2026-04-10T14:28:34.673Z" }, + { url = "https://files.pythonhosted.org/packages/60/cf/a7e19b308bd86bb04776803b1f01a5f9a287a4c55205f4708827ee487fbf/jiter-0.14.0-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:33a20d838b91ef376b3a56896d5b04e725c7df5bc4864cc6569cf046a8d73b6d", size = 308443, upload-time = "2026-04-10T14:28:36.658Z" }, + { url = "https://files.pythonhosted.org/packages/ca/44/e26ede3f0caeff93f222559cb0cc4ca68579f07d009d7b6010c5b586f9b1/jiter-0.14.0-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:432c4db5255d86a259efde91e55cb4c8d18c0521d844c9e2e7efcce3899fb016", size = 343039, upload-time = "2026-04-10T14:28:38.356Z" }, + { url = "https://files.pythonhosted.org/packages/da/e9/1f9ada30cef7b05e74bb06f52127e7a724976c225f46adb65c37b1dadfb6/jiter-0.14.0-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:67f00d94b281174144d6532a04b66a12cb866cbdc47c3af3bfe2973677f9861a", size = 349613, upload-time = "2026-04-10T14:28:40.066Z" }, ] [[package]] @@ -1636,29 +1691,6 @@ 
wheels = [ { url = "https://files.pythonhosted.org/packages/fd/d9/eaa1f80170d2b7c5ba23f3b59f766f3a0bb41155fbc32a69adfa1adaaef9/mcp-1.26.0-py3-none-any.whl", hash = "sha256:904a21c33c25aa98ddbeb47273033c435e595bbacfdb177f4bd87f6dceebe1ca", size = 233615, upload-time = "2026-01-24T19:40:30.652Z" }, ] -[[package]] -name = "mcp-client" -version = "0.1.0" -source = { virtual = "examples/mcp-client" } -dependencies = [ - { name = "dotenv" }, - { name = "microsoft-teams-ai" }, - { name = "microsoft-teams-common" }, - { name = "microsoft-teams-devtools" }, - { name = "microsoft-teams-mcpplugin" }, - { name = "microsoft-teams-openai" }, -] - -[package.metadata] -requires-dist = [ - { name = "dotenv", specifier = ">=0.9.9" }, - { name = "microsoft-teams-ai", editable = "packages/ai" }, - { name = "microsoft-teams-common", editable = "packages/common" }, - { name = "microsoft-teams-devtools", editable = "packages/devtools" }, - { name = "microsoft-teams-mcpplugin", editable = "packages/mcpplugin" }, - { name = "microsoft-teams-openai", editable = "packages/openai" }, -] - [[package]] name = "mcp-server" version = "0.1.0" @@ -2314,7 +2346,7 @@ wheels = [ [[package]] name = "openai" -version = "2.4.0" +version = "2.31.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "anyio" }, @@ -2326,9 +2358,9 @@ dependencies = [ { name = "tqdm" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/ef/24/d0b0f088c39fa75a73292aef24d7436fbc537f12bf6026c6a69a1bfdae6e/openai-2.4.0.tar.gz", hash = "sha256:97860859172b637ffb308433c207a371d4683586ed2b24b360cb4c08cf377d01", size = 591541, upload-time = "2025-10-16T15:14:05.163Z" } +sdist = { url = "https://files.pythonhosted.org/packages/94/fe/64b3d035780b3188f86c4f6f1bc202e7bb74757ef028802112273b9dcacf/openai-2.31.0.tar.gz", hash = "sha256:43ca59a88fc973ad1848d86b98d7fac207e265ebbd1828b5e4bdfc85f79427a5", size = 684772, upload-time = "2026-04-08T21:01:41.797Z" } wheels = [ - { 
url = "https://files.pythonhosted.org/packages/d8/f6/68a8bbb62c001e5580de6b89372ddab03c3484ac3e251c298a30da094f5e/openai-2.4.0-py3-none-any.whl", hash = "sha256:5099f4fbfa80e7e5785ba52402c580eadba21e6172c85df05455676605ad150f", size = 1003092, upload-time = "2025-10-16T15:14:02.826Z" }, + { url = "https://files.pythonhosted.org/packages/66/bc/a8f7c3aa03452fedbb9af8be83e959adba96a6b4a35e416faffcc959c568/openai-2.31.0-py3-none-any.whl", hash = "sha256:44e1344d87e56a493d649b17e2fac519d1368cbb0745f59f1957c4c26de50a0a", size = 1153479, upload-time = "2026-04-08T21:01:39.217Z" }, ] [[package]] @@ -2345,42 +2377,42 @@ wheels = [ [[package]] name = "opentelemetry-api" -version = "1.38.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "importlib-metadata" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/08/d8/0f354c375628e048bd0570645b310797299754730079853095bf000fba69/opentelemetry_api-1.38.0.tar.gz", hash = "sha256:f4c193b5e8acb0912b06ac5b16321908dd0843d75049c091487322284a3eea12", size = 65242, upload-time = "2025-10-16T08:35:50.25Z" } +sdist = { url = "https://files.pythonhosted.org/packages/47/8e/3778a7e87801d994869a9396b9fc2a289e5f9be91ff54a27d41eace494b0/opentelemetry_api-1.41.0.tar.gz", hash = "sha256:9421d911326ec12dee8bc933f7839090cad7a3f13fcfb0f9e82f8174dc003c09", size = 71416, upload-time = "2026-04-09T14:38:34.544Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/ae/a2/d86e01c28300bd41bab8f18afd613676e2bd63515417b77636fc1add426f/opentelemetry_api-1.38.0-py3-none-any.whl", hash = "sha256:2891b0197f47124454ab9f0cf58f3be33faca394457ac3e09daba13ff50aa582", size = 65947, upload-time = "2025-10-16T08:35:30.23Z" }, + { url = "https://files.pythonhosted.org/packages/58/ee/99ab786653b3bda9c37ade7e24a7b607a1b1f696063172768417539d876d/opentelemetry_api-1.41.0-py3-none-any.whl", hash = "sha256:0e77c806e6a89c9e4f8d372034622f3e1418a11bdbe1c80a50b3d3397ad0fa4f", size 
= 69007, upload-time = "2026-04-09T14:38:11.833Z" }, ] [[package]] name = "opentelemetry-sdk" -version = "1.38.0" +version = "1.41.0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, { name = "opentelemetry-semantic-conventions" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/85/cb/f0eee1445161faf4c9af3ba7b848cc22a50a3d3e2515051ad8628c35ff80/opentelemetry_sdk-1.38.0.tar.gz", hash = "sha256:93df5d4d871ed09cb4272305be4d996236eedb232253e3ab864c8620f051cebe", size = 171942, upload-time = "2025-10-16T08:36:02.257Z" } +sdist = { url = "https://files.pythonhosted.org/packages/f8/0e/a586df1186f9f56b5a0879d52653effc40357b8e88fc50fe300038c3c08b/opentelemetry_sdk-1.41.0.tar.gz", hash = "sha256:7bddf3961131b318fc2d158947971a8e37e38b1cd23470cfb72b624e7cc108bd", size = 230181, upload-time = "2026-04-09T14:38:47.225Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/2f/2e/e93777a95d7d9c40d270a371392b6d6f1ff170c2a3cb32d6176741b5b723/opentelemetry_sdk-1.38.0-py3-none-any.whl", hash = "sha256:1c66af6564ecc1553d72d811a01df063ff097cdc82ce188da9951f93b8d10f6b", size = 132349, upload-time = "2025-10-16T08:35:46.995Z" }, + { url = "https://files.pythonhosted.org/packages/2c/13/a7825118208cb32e6a4edcd0a99f925cbef81e77b3b0aedfd9125583c543/opentelemetry_sdk-1.41.0-py3-none-any.whl", hash = "sha256:a596f5687964a3e0d7f8edfdcf5b79cbca9c93c7025ebf5fb00f398a9443b0bd", size = 180214, upload-time = "2026-04-09T14:38:30.657Z" }, ] [[package]] name = "opentelemetry-semantic-conventions" -version = "0.59b0" +version = "0.62b0" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "opentelemetry-api" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/40/bc/8b9ad3802cd8ac6583a4eb7de7e5d7db004e89cb7efe7008f9c8a537ee75/opentelemetry_semantic_conventions-0.59b0.tar.gz", hash = 
"sha256:7a6db3f30d70202d5bf9fa4b69bc866ca6a30437287de6c510fb594878aed6b0", size = 129861, upload-time = "2025-10-16T08:36:03.346Z" } +sdist = { url = "https://files.pythonhosted.org/packages/a3/b0/c14f723e86c049b7bf8ff431160d982519b97a7be2857ed2247377397a24/opentelemetry_semantic_conventions-0.62b0.tar.gz", hash = "sha256:cbfb3c8fc259575cf68a6e1b94083cc35adc4a6b06e8cf431efa0d62606c0097", size = 145753, upload-time = "2026-04-09T14:38:48.274Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/24/7d/c88d7b15ba8fe5c6b8f93be50fc11795e9fc05386c44afaf6b76fe191f9b/opentelemetry_semantic_conventions-0.59b0-py3-none-any.whl", hash = "sha256:35d3b8833ef97d614136e253c1da9342b4c3c083bbaf29ce31d572a1c3825eed", size = 207954, upload-time = "2025-10-16T08:35:48.054Z" }, + { url = "https://files.pythonhosted.org/packages/58/6c/5e86fa1759a525ef91c2d8b79d668574760ff3f900d114297765eb8786cb/opentelemetry_semantic_conventions-0.62b0-py3-none-any.whl", hash = "sha256:0ddac1ce59eaf1a827d9987ab60d9315fb27aea23304144242d1fcad9e16b489", size = 231619, upload-time = "2026-04-09T14:38:32.394Z" }, ] [[package]] @@ -2782,15 +2814,15 @@ wheels = [ [[package]] name = "pyright" -version = "1.1.406" +version = "1.1.408" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "nodeenv" }, { name = "typing-extensions" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/f7/16/6b4fbdd1fef59a0292cbb99f790b44983e390321eccbc5921b4d161da5d1/pyright-1.1.406.tar.gz", hash = "sha256:c4872bc58c9643dac09e8a2e74d472c62036910b3bd37a32813989ef7576ea2c", size = 4113151, upload-time = "2025-10-02T01:04:45.488Z" } +sdist = { url = "https://files.pythonhosted.org/packages/74/b2/5db700e52554b8f025faa9c3c624c59f1f6c8841ba81ab97641b54322f16/pyright-1.1.408.tar.gz", hash = "sha256:f28f2321f96852fa50b5829ea492f6adb0e6954568d1caa3f3af3a5f555eb684", size = 4400578, upload-time = "2026-01-08T08:07:38.795Z" } wheels = [ - { url = 
"https://files.pythonhosted.org/packages/f6/a2/e309afbb459f50507103793aaef85ca4348b66814c86bc73908bdeb66d12/pyright-1.1.406-py3-none-any.whl", hash = "sha256:1d81fb43c2407bf566e97e57abb01c811973fdb21b2df8df59f870f688bdca71", size = 5980982, upload-time = "2025-10-02T01:04:43.137Z" }, + { url = "https://files.pythonhosted.org/packages/0c/82/a2c93e32800940d9573fb28c346772a14778b84ba7524e691b324620ab89/pyright-1.1.408-py3-none-any.whl", hash = "sha256:090b32865f4fdb1e0e6cd82bf5618480d48eecd2eb2e70f960982a3d9a4c17c1", size = 6399144, upload-time = "2026-01-08T08:07:37.082Z" }, ] [[package]] @@ -2850,11 +2882,11 @@ wheels = [ [[package]] name = "python-dotenv" -version = "1.1.1" +version = "1.2.2" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/f6/b0/4bc07ccd3572a2f9df7e6782f52b0c6c90dcbb803ac4a167702d7d0dfe1e/python_dotenv-1.1.1.tar.gz", hash = "sha256:a8a6399716257f45be6a007360200409fce5cda2661e3dec71d23dc15f6189ab", size = 41978, upload-time = "2025-06-24T04:21:07.341Z" } +sdist = { url = "https://files.pythonhosted.org/packages/82/ed/0301aeeac3e5353ef3d94b6ec08bbcabd04a72018415dcb29e588514bba8/python_dotenv-1.2.2.tar.gz", hash = "sha256:2c371a91fbd7ba082c2c1dc1f8bf89ca22564a087c2c287cd9b662adde799cf3", size = 50135, upload-time = "2026-03-01T16:00:26.196Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/5f/ed/539768cf28c661b5b068d66d96a2f155c4971a5d55684a514c1a0e0dec2f/python_dotenv-1.1.1-py3-none-any.whl", hash = "sha256:31f23644fe2602f88ff55e1f5c79ba497e01224ee7737937930c448e4d0e24dc", size = 20556, upload-time = "2025-06-24T04:21:06.073Z" }, + { url = "https://files.pythonhosted.org/packages/0b/d7/1959b9648791274998a9c3526f6d0ec8fd2233e4d4acce81bbae76b44b2a/python_dotenv-1.2.2-py3-none-any.whl", hash = "sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a", size = 22101, upload-time = "2026-03-01T16:00:25.09Z" }, ] [[package]] @@ -3243,7 +3275,6 @@ dependencies = [ { 
name = "dotenv" }, { name = "microsoft-teams-api" }, { name = "microsoft-teams-apps" }, - { name = "microsoft-teams-devtools" }, ] [package.metadata] @@ -3251,7 +3282,6 @@ requires-dist = [ { name = "dotenv", specifier = ">=0.9.9" }, { name = "microsoft-teams-api", editable = "packages/api" }, { name = "microsoft-teams-apps", editable = "packages/apps" }, - { name = "microsoft-teams-devtools", editable = "packages/devtools" }, ] [[package]] @@ -3280,6 +3310,23 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/a6/a5/c0b6468d3824fe3fde30dbb5e1f687b291608f9473681bbf7dabbf5a87d7/text_unidecode-1.3-py2.py3-none-any.whl", hash = "sha256:1311f10e8b895935241623731c2ba64f4c455287888b18189350b67134a822e8", size = 78154, upload-time = "2019-08-30T21:37:03.543Z" }, ] +[[package]] +name = "threading" +version = "0.1.0" +source = { virtual = "examples/threading" } +dependencies = [ + { name = "dotenv" }, + { name = "microsoft-teams-api" }, + { name = "microsoft-teams-apps" }, +] + +[package.metadata] +requires-dist = [ + { name = "dotenv", specifier = ">=0.9.9" }, + { name = "microsoft-teams-api", editable = "packages/api" }, + { name = "microsoft-teams-apps", editable = "packages/apps" }, +] + [[package]] name = "tqdm" version = "4.67.1" @@ -3375,22 +3422,34 @@ standard = [ [[package]] name = "uvloop" -version = "0.21.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/af/c0/854216d09d33c543f12a44b393c402e89a920b1a0a7dc634c42de91b9cf6/uvloop-0.21.0.tar.gz", hash = "sha256:3bf12b0fda68447806a7ad847bfa591613177275d35b6724b1ee573faa3704e3", size = 2492741, upload-time = "2024-10-14T23:38:35.489Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/8c/4c/03f93178830dc7ce8b4cdee1d36770d2f5ebb6f3d37d354e061eefc73545/uvloop-0.21.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:359ec2c888397b9e592a889c4d72ba3d6befba8b2bb01743f72fffbde663b59c", size = 1471284, upload-time = 
"2024-10-14T23:37:47.833Z" }, - { url = "https://files.pythonhosted.org/packages/43/3e/92c03f4d05e50f09251bd8b2b2b584a2a7f8fe600008bcc4523337abe676/uvloop-0.21.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f7089d2dc73179ce5ac255bdf37c236a9f914b264825fdaacaded6990a7fb4c2", size = 821349, upload-time = "2024-10-14T23:37:50.149Z" }, - { url = "https://files.pythonhosted.org/packages/a6/ef/a02ec5da49909dbbfb1fd205a9a1ac4e88ea92dcae885e7c961847cd51e2/uvloop-0.21.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:baa4dcdbd9ae0a372f2167a207cd98c9f9a1ea1188a8a526431eef2f8116cc8d", size = 4580089, upload-time = "2024-10-14T23:37:51.703Z" }, - { url = "https://files.pythonhosted.org/packages/06/a7/b4e6a19925c900be9f98bec0a75e6e8f79bb53bdeb891916609ab3958967/uvloop-0.21.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:86975dca1c773a2c9864f4c52c5a55631038e387b47eaf56210f873887b6c8dc", size = 4693770, upload-time = "2024-10-14T23:37:54.122Z" }, - { url = "https://files.pythonhosted.org/packages/ce/0c/f07435a18a4b94ce6bd0677d8319cd3de61f3a9eeb1e5f8ab4e8b5edfcb3/uvloop-0.21.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:461d9ae6660fbbafedd07559c6a2e57cd553b34b0065b6550685f6653a98c1cb", size = 4451321, upload-time = "2024-10-14T23:37:55.766Z" }, - { url = "https://files.pythonhosted.org/packages/8f/eb/f7032be105877bcf924709c97b1bf3b90255b4ec251f9340cef912559f28/uvloop-0.21.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:183aef7c8730e54c9a3ee3227464daed66e37ba13040bb3f350bc2ddc040f22f", size = 4659022, upload-time = "2024-10-14T23:37:58.195Z" }, - { url = "https://files.pythonhosted.org/packages/3f/8d/2cbef610ca21539f0f36e2b34da49302029e7c9f09acef0b1c3b5839412b/uvloop-0.21.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:bfd55dfcc2a512316e65f16e503e9e450cab148ef11df4e4e679b5e8253a5281", size = 1468123, upload-time = "2024-10-14T23:38:00.688Z" }, - { url = 
"https://files.pythonhosted.org/packages/93/0d/b0038d5a469f94ed8f2b2fce2434a18396d8fbfb5da85a0a9781ebbdec14/uvloop-0.21.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:787ae31ad8a2856fc4e7c095341cccc7209bd657d0e71ad0dc2ea83c4a6fa8af", size = 819325, upload-time = "2024-10-14T23:38:02.309Z" }, - { url = "https://files.pythonhosted.org/packages/50/94/0a687f39e78c4c1e02e3272c6b2ccdb4e0085fda3b8352fecd0410ccf915/uvloop-0.21.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ee4d4ef48036ff6e5cfffb09dd192c7a5027153948d85b8da7ff705065bacc6", size = 4582806, upload-time = "2024-10-14T23:38:04.711Z" }, - { url = "https://files.pythonhosted.org/packages/d2/19/f5b78616566ea68edd42aacaf645adbf71fbd83fc52281fba555dc27e3f1/uvloop-0.21.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3df876acd7ec037a3d005b3ab85a7e4110422e4d9c1571d4fc89b0fc41b6816", size = 4701068, upload-time = "2024-10-14T23:38:06.385Z" }, - { url = "https://files.pythonhosted.org/packages/47/57/66f061ee118f413cd22a656de622925097170b9380b30091b78ea0c6ea75/uvloop-0.21.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bd53ecc9a0f3d87ab847503c2e1552b690362e005ab54e8a48ba97da3924c0dc", size = 4454428, upload-time = "2024-10-14T23:38:08.416Z" }, - { url = "https://files.pythonhosted.org/packages/63/9a/0962b05b308494e3202d3f794a6e85abe471fe3cafdbcf95c2e8c713aabd/uvloop-0.21.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a5c39f217ab3c663dc699c04cbd50c13813e31d917642d459fdcec07555cc553", size = 4660018, upload-time = "2024-10-14T23:38:10.888Z" }, +version = "0.22.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/06/f0/18d39dbd1971d6d62c4629cc7fa67f74821b0dc1f5a77af43719de7936a7/uvloop-0.22.1.tar.gz", hash = "sha256:6c84bae345b9147082b17371e3dd5d42775bddce91f885499017f4607fdaf39f", size = 2443250, upload-time = "2025-10-16T22:17:19.342Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/3d/ff/7f72e8170be527b4977b033239a83a68d5c881cc4775fca255c677f7ac5d/uvloop-0.22.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:fe94b4564e865d968414598eea1a6de60adba0c040ba4ed05ac1300de402cd42", size = 1359936, upload-time = "2025-10-16T22:16:29.436Z" }, + { url = "https://files.pythonhosted.org/packages/c3/c6/e5d433f88fd54d81ef4be58b2b7b0cea13c442454a1db703a1eea0db1a59/uvloop-0.22.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:51eb9bd88391483410daad430813d982010f9c9c89512321f5b60e2cddbdddd6", size = 752769, upload-time = "2025-10-16T22:16:30.493Z" }, + { url = "https://files.pythonhosted.org/packages/24/68/a6ac446820273e71aa762fa21cdcc09861edd3536ff47c5cd3b7afb10eeb/uvloop-0.22.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:700e674a166ca5778255e0e1dc4e9d79ab2acc57b9171b79e65feba7184b3370", size = 4317413, upload-time = "2025-10-16T22:16:31.644Z" }, + { url = "https://files.pythonhosted.org/packages/5f/6f/e62b4dfc7ad6518e7eff2516f680d02a0f6eb62c0c212e152ca708a0085e/uvloop-0.22.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7b5b1ac819a3f946d3b2ee07f09149578ae76066d70b44df3fa990add49a82e4", size = 4426307, upload-time = "2025-10-16T22:16:32.917Z" }, + { url = "https://files.pythonhosted.org/packages/90/60/97362554ac21e20e81bcef1150cb2a7e4ffdaf8ea1e5b2e8bf7a053caa18/uvloop-0.22.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e047cc068570bac9866237739607d1313b9253c3051ad84738cbb095be0537b2", size = 4131970, upload-time = "2025-10-16T22:16:34.015Z" }, + { url = "https://files.pythonhosted.org/packages/99/39/6b3f7d234ba3964c428a6e40006340f53ba37993f46ed6e111c6e9141d18/uvloop-0.22.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:512fec6815e2dd45161054592441ef76c830eddaad55c8aa30952e6fe1ed07c0", size = 4296343, upload-time = "2025-10-16T22:16:35.149Z" }, + { url = 
"https://files.pythonhosted.org/packages/89/8c/182a2a593195bfd39842ea68ebc084e20c850806117213f5a299dfc513d9/uvloop-0.22.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:561577354eb94200d75aca23fbde86ee11be36b00e52a4eaf8f50fb0c86b7705", size = 1358611, upload-time = "2025-10-16T22:16:36.833Z" }, + { url = "https://files.pythonhosted.org/packages/d2/14/e301ee96a6dc95224b6f1162cd3312f6d1217be3907b79173b06785f2fe7/uvloop-0.22.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:1cdf5192ab3e674ca26da2eada35b288d2fa49fdd0f357a19f0e7c4e7d5077c8", size = 751811, upload-time = "2025-10-16T22:16:38.275Z" }, + { url = "https://files.pythonhosted.org/packages/b7/02/654426ce265ac19e2980bfd9ea6590ca96a56f10c76e63801a2df01c0486/uvloop-0.22.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e2ea3d6190a2968f4a14a23019d3b16870dd2190cd69c8180f7c632d21de68d", size = 4288562, upload-time = "2025-10-16T22:16:39.375Z" }, + { url = "https://files.pythonhosted.org/packages/15/c0/0be24758891ef825f2065cd5db8741aaddabe3e248ee6acc5e8a80f04005/uvloop-0.22.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0530a5fbad9c9e4ee3f2b33b148c6a64d47bbad8000ea63704fa8260f4cf728e", size = 4366890, upload-time = "2025-10-16T22:16:40.547Z" }, + { url = "https://files.pythonhosted.org/packages/d2/53/8369e5219a5855869bcee5f4d317f6da0e2c669aecf0ef7d371e3d084449/uvloop-0.22.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bc5ef13bbc10b5335792360623cc378d52d7e62c2de64660616478c32cd0598e", size = 4119472, upload-time = "2025-10-16T22:16:41.694Z" }, + { url = "https://files.pythonhosted.org/packages/f8/ba/d69adbe699b768f6b29a5eec7b47dd610bd17a69de51b251126a801369ea/uvloop-0.22.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1f38ec5e3f18c8a10ded09742f7fb8de0108796eb673f30ce7762ce1b8550cad", size = 4239051, upload-time = "2025-10-16T22:16:43.224Z" }, + { url = 
"https://files.pythonhosted.org/packages/90/cd/b62bdeaa429758aee8de8b00ac0dd26593a9de93d302bff3d21439e9791d/uvloop-0.22.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:3879b88423ec7e97cd4eba2a443aa26ed4e59b45e6b76aabf13fe2f27023a142", size = 1362067, upload-time = "2025-10-16T22:16:44.503Z" }, + { url = "https://files.pythonhosted.org/packages/0d/f8/a132124dfda0777e489ca86732e85e69afcd1ff7686647000050ba670689/uvloop-0.22.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:4baa86acedf1d62115c1dc6ad1e17134476688f08c6efd8a2ab076e815665c74", size = 752423, upload-time = "2025-10-16T22:16:45.968Z" }, + { url = "https://files.pythonhosted.org/packages/a3/94/94af78c156f88da4b3a733773ad5ba0b164393e357cc4bd0ab2e2677a7d6/uvloop-0.22.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:297c27d8003520596236bdb2335e6b3f649480bd09e00d1e3a99144b691d2a35", size = 4272437, upload-time = "2025-10-16T22:16:47.451Z" }, + { url = "https://files.pythonhosted.org/packages/b5/35/60249e9fd07b32c665192cec7af29e06c7cd96fa1d08b84f012a56a0b38e/uvloop-0.22.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c1955d5a1dd43198244d47664a5858082a3239766a839b2102a269aaff7a4e25", size = 4292101, upload-time = "2025-10-16T22:16:49.318Z" }, + { url = "https://files.pythonhosted.org/packages/02/62/67d382dfcb25d0a98ce73c11ed1a6fba5037a1a1d533dcbb7cab033a2636/uvloop-0.22.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:b31dc2fccbd42adc73bc4e7cdbae4fc5086cf378979e53ca5d0301838c5682c6", size = 4114158, upload-time = "2025-10-16T22:16:50.517Z" }, + { url = "https://files.pythonhosted.org/packages/f0/7a/f1171b4a882a5d13c8b7576f348acfe6074d72eaf52cccef752f748d4a9f/uvloop-0.22.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:93f617675b2d03af4e72a5333ef89450dfaa5321303ede6e67ba9c9d26878079", size = 4177360, upload-time = "2025-10-16T22:16:52.646Z" }, + { url = 
"https://files.pythonhosted.org/packages/79/7b/b01414f31546caf0919da80ad57cbfe24c56b151d12af68cee1b04922ca8/uvloop-0.22.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:37554f70528f60cad66945b885eb01f1bb514f132d92b6eeed1c90fd54ed6289", size = 1454790, upload-time = "2025-10-16T22:16:54.355Z" }, + { url = "https://files.pythonhosted.org/packages/d4/31/0bb232318dd838cad3fa8fb0c68c8b40e1145b32025581975e18b11fab40/uvloop-0.22.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:b76324e2dc033a0b2f435f33eb88ff9913c156ef78e153fb210e03c13da746b3", size = 796783, upload-time = "2025-10-16T22:16:55.906Z" }, + { url = "https://files.pythonhosted.org/packages/42/38/c9b09f3271a7a723a5de69f8e237ab8e7803183131bc57c890db0b6bb872/uvloop-0.22.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:badb4d8e58ee08dad957002027830d5c3b06aea446a6a3744483c2b3b745345c", size = 4647548, upload-time = "2025-10-16T22:16:57.008Z" }, + { url = "https://files.pythonhosted.org/packages/c1/37/945b4ca0ac27e3dc4952642d4c900edd030b3da6c9634875af6e13ae80e5/uvloop-0.22.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b91328c72635f6f9e0282e4a57da7470c7350ab1c9f48546c0f2866205349d21", size = 4467065, upload-time = "2025-10-16T22:16:58.206Z" }, + { url = "https://files.pythonhosted.org/packages/97/cc/48d232f33d60e2e2e0b42f4e73455b146b76ebe216487e862700457fbf3c/uvloop-0.22.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:daf620c2995d193449393d6c62131b3fbd40a63bf7b307a1527856ace637fe88", size = 4328384, upload-time = "2025-10-16T22:16:59.36Z" }, + { url = "https://files.pythonhosted.org/packages/e4/16/c1fd27e9549f3c4baf1dc9c20c456cd2f822dbf8de9f463824b0c0357e06/uvloop-0.22.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6cde23eeda1a25c75b2e07d39970f3374105d5eafbaab2a4482be82f272d5a5e", size = 4296730, upload-time = "2025-10-16T22:17:00.744Z" }, ] [[package]]