From 87fee60e2952477a6e25f42e9af4115468f96428 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 15 Apr 2026 02:46:13 +0000 Subject: [PATCH 01/10] Initial plan From aadb6536c8fad9d7d9d253ce1dc30de1ecd56656 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 15 Apr 2026 02:50:23 +0000 Subject: [PATCH 02/10] Add 'Choose the right .NET AI tool' guidance article and update AI ecosystem docs Agent-Logs-Url: https://github.com/dotnet/docs/sessions/dcb14719-132d-4dad-87e0-bd0d48d895a4 Co-authored-by: gewarren <24882762+gewarren@users.noreply.github.com> --- docs/ai/conceptual/choose-ai-tool.md | 265 +++++++++++++++++++++++++++ docs/ai/dotnet-ai-ecosystem.md | 5 +- docs/ai/overview.md | 4 +- docs/ai/toc.yml | 2 + 4 files changed, 273 insertions(+), 3 deletions(-) create mode 100644 docs/ai/conceptual/choose-ai-tool.md diff --git a/docs/ai/conceptual/choose-ai-tool.md b/docs/ai/conceptual/choose-ai-tool.md new file mode 100644 index 0000000000000..2915e1110594a --- /dev/null +++ b/docs/ai/conceptual/choose-ai-tool.md @@ -0,0 +1,265 @@ +--- +title: Choose the right .NET AI tool +description: Learn which .NET AI technology to use for your scenario, including Microsoft.Extensions.AI, Microsoft Agent Framework, vector stores, data ingestion, MCP, evaluations, Azure AI Foundry, Foundry Local, and Aspire. +ms.date: 04/15/2026 +ms.topic: concept-article +ai-usage: ai-assisted +--- + +# Choose the right .NET AI tool + +The .NET AI ecosystem includes many powerful tools and libraries for different purposes. Picking the right one—or the right combination—makes your application easier to build, test, and maintain. This article helps you understand which tool fits your scenario. + +## Quick reference + +The following table summarizes when to reach for each component: + +| Component | Use it when... | Don't lead with it when... 
| +|-----------|---------------|---------------------------| +| **Microsoft.Extensions.AI (MEAI)** | You need to add AI behavior to an app: chat, summarization, structured outputs, tool calling, embeddings, or streaming. | The hardest problem is ingestion, retrieval, orchestration, or infrastructure. | +| **Evaluations** | You need repeatable quality checks for prompts, models, agents, or AI features—including regressions and side-by-side comparisons. | You haven't yet built the AI behavior you want to measure. | +| **Microsoft.Extensions.DataIngestion (MEDI)** | Your challenge is getting source content into shape for AI: reading, chunking, enriching, and preparing data for grounding or RAG. | You only need a direct model call and no serious data preparation pipeline. | +| **Microsoft.Extensions.VectorData (MEVD)** | You need semantic search, retrieval, embeddings-backed lookup, or RAG over your own data. | You don't need vector retrieval or grounding. | +| **MCP Server** | You want to expose tools, resources, or prompts so external assistants, IDEs, or agents can discover and use them. | The capability is local to one app and ordinary in-process function calling is enough. | +| **MCP Client** | Your app or agent needs to connect to existing MCP servers and consume external tools or resources. | You don't need interoperability beyond your own process boundary. | +| **Microsoft Agent Framework (MAF)** | The system must pursue goals across multiple steps with routing, planning, handoffs, or multiple collaborating agents. | A single LLM call or straightforward tool-calling loop is sufficient. | +| **AI Toolkit** | You want a better developer workflow for trying models, prompts, and evaluations during development. | You need the runtime abstraction or production architecture itself. | +| **Copilot SDK** | You want a pre-built agent harness with tools, context, and automatic tool calling out of the box. 
| You want a blank-slate app stack or full low-level control from MEAI or MAF. |
+| **Azure AI Foundry** | You need managed model hosting, safety, governance, enterprise controls, and a cloud deployment target. | You're only deciding how app code should call a model. |
+| **Foundry Local** | You need local or local-first AI for privacy or compliance reasons, or you want a local experience that broadly aligns with Azure AI Foundry (without expecting a seamless move to the cloud). | Any local runtime meets your needs, or you expect local-to-cloud migration to be a drop-in, identical move. |
+| **Aspire** | The solution spans multiple services and you need orchestration, service discovery, and observability. | The app is still a single service or proof of concept. |
+
+## How to decide
+
+Start by identifying your primary challenge:
+
+- **Adding AI behavior to an app** → Start with [MEAI](#microsoftextensionsai-meai). Add [Evaluations](#evaluations) once you have something worth measuring.
+- **Working with your own data** → If you need to read, chunk, or enrich content first, start with [MEDI](#microsoftextensionsdataingestion-medi). Then use [MEVD](#microsoftextensionsvectordata-mevd) for vector storage and retrieval.
+- **Sharing or consuming capabilities across AI clients** → Build an [MCP Server](#mcp-server) to publish capabilities, or use an [MCP Client](#mcp-client) to consume them.
+- **Building a truly agentic system** → If you want a ready-made harness, use the [Copilot SDK](#copilot-sdk). For multi-step goal pursuit, routing, or handoffs, use [MAF](#microsoft-agent-framework-maf).
+- **Choosing a hosting or execution model** → Use [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) when the solution is a distributed multi-service system.
+- **Improving the developer workflow** → Use [AI Toolkit](#ai-toolkit).
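The separation that these decision points describe can be sketched in code: ingestion prepares data, a vector index stores and retrieves it, and a model call consumes it. The interfaces in the following C# sketch are hypothetical and exist only to mark where each concern lives; they are not APIs from the MEDI, MEVD, or MEAI packages:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical interfaces that only mark where each concern lives.
public interface IDocumentChunker            // ingestion (MEDI's role)
{
    IEnumerable<string> Chunk(string document);
}

public interface IVectorIndex                // storage and retrieval (MEVD's role)
{
    Task AddAsync(string chunk);
    Task<IReadOnlyList<string>> SearchAsync(string query, int top);
}

public interface IModelCaller                // model invocation (MEAI's role)
{
    Task<string> AskAsync(string question, IReadOnlyList<string> context);
}

public class RagPipeline(IDocumentChunker chunker, IVectorIndex index, IModelCaller model)
{
    public async Task IngestAsync(string document)
    {
        // Ingestion output feeds the vector store.
        foreach (string chunk in chunker.Chunk(document))
        {
            await index.AddAsync(chunk);
        }
    }

    public async Task<string> AnswerAsync(string question)
    {
        // Retrieval supplies grounding context; the model call consumes it.
        IReadOnlyList<string> context = await index.SearchAsync(question, top: 3);
        return await model.AskAsync(question, context);
    }
}
```

Keeping these boundaries explicit is what lets you swap one layer (for example, the vector store) without touching the others.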
+ +## Component guidance + +### Microsoft.Extensions.AI (MEAI) + +`Microsoft.Extensions.AI` is the app-facing foundation for adding model-powered behavior to a .NET application. + +**Use MEAI when you want to:** + +- Build chat or conversational UX. +- Stream responses. +- Summarize, extract, or classify content. +- Produce structured outputs. +- Generate or work with embeddings. +- Call tools or functions. + +MEAI gives .NET developers a clean abstraction for model interaction. It fits naturally into dependency injection, configuration, and existing app architectures and is the usual first layer of an AI-enabled .NET application. + +**Important boundary:** MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, that's where MAF matters. + +**Don't lead with MEAI alone when:** + +- The hard problem is preparing enterprise content for AI—add [MEDI](#microsoftextensionsdataingestion-medi). +- The hard problem is retrieval or RAG—add [MEVD](#microsoftextensionsvectordata-mevd). +- Tools must be discoverable across assistants or apps—add [MCP Server](#mcp-server). +- The app needs to consume external MCP capabilities—add [MCP Client](#mcp-client). +- The challenge is multi-step autonomous workflows—add [MAF](#microsoft-agent-framework-maf). +- The main concern is hosting, governance, and enterprise controls—add [Azure AI Foundry](#azure-ai-foundry). + +For more information, see [Microsoft.Extensions.AI overview](../microsoft-extensions-ai.md). + +### Evaluations + +`Microsoft.Extensions.AI.Evaluations` is the quality and regression layer for AI features built with the .NET AI stack. + +**Use Evaluations when you need to answer questions like:** + +- "Is this prompt change actually better?" +- "Did switching models hurt quality or safety?" +- "Did the agent regress on key scenarios?" 
+- "Can you measure behavior before shipping?" + +AI behavior changes easily as prompts, models, and tools evolve. Intuition doesn't scale—Evaluations give teams a repeatable way to compare outputs and catch regressions. + +**Important boundary:** Evaluations aren't the runtime feature itself. They're most valuable once you already have a feature, workflow, or agent worth measuring. + +For more information, see [Microsoft.Extensions.AI.Evaluation libraries](../evaluation/libraries.md). + +### Microsoft.Extensions.DataIngestion (MEDI) + +`Microsoft.Extensions.DataIngestion` is the ingestion and preparation layer for AI-ready data in .NET. + +**Use MEDI when:** + +- You need to read content from files, stores, or enterprise sources. +- You need to chunk documents for retrieval and grounding. +- You need to normalize and enrich content with metadata. +- You're preparing data to feed a vector index or downstream RAG pipeline. + +Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. + +**Important boundary:** MEDI is not the retrieval layer—it comes *before* retrieval. It prepares and shapes the data that MEVD or another store later queries. + +**Don't lead with MEDI when:** + +- The app just needs chat, extraction, summarization, or tool calling over immediate input—start with [MEAI](#microsoftextensionsai-meai). +- Your content is already prepared and the need is semantic lookup—lead with [MEVD](#microsoftextensionsvectordata-mevd). + +For more information, see [Data ingestion for AI apps](data-ingestion.md). + +### Microsoft.Extensions.VectorData (MEVD) + +`Microsoft.Extensions.VectorData` is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. + +**Use MEVD when:** + +- You need semantic search. +- You need embeddings-backed retrieval. +- You're implementing RAG. 
+- You need similarity search or hybrid retrieval patterns. + +MEVD gives .NET applications a consistent way to work with vector stores and helps separate vector storage and retrieval concerns from model invocation concerns. + +**Important boundary:** MEVD isn't the model layer or the ingestion layer. MEDI prepares the data, MEVD stores and retrieves the data, and MEAI uses that retrieved context with the model. + +For more information, see [Vector stores overview](../vector-stores/overview.md). + +### MCP Server + +An MCP Server exposes capabilities—tools, resources, or prompts—over the Model Context Protocol so other assistants, IDEs, and agents can discover and use them through a standard protocol. + +**Use an MCP Server when:** + +- You want to make internal APIs or business actions available to multiple AI clients. +- You want interoperability instead of custom one-off integrations. +- The same capability should be reusable across tools, products, or agent systems. + +An MCP Server turns app capabilities into reusable AI-facing endpoints. It reduces duplicated tool integration work across assistants and creates a cleaner boundary between capability providers and capability consumers. + +**Important boundary:** An MCP Server is about *publishing* capabilities. If the capability only needs to be used inside one app, ordinary in-process function calling is simpler. + +### MCP Client + +An MCP Client is the consumer side of the protocol: it connects to MCP servers and brings their exposed capabilities into an app, assistant, or agent runtime. + +**Use an MCP Client when:** + +- Your app needs to call tools that are already exposed elsewhere through MCP. +- You want an agent or assistant to consume capabilities without custom per-tool plumbing. +- You're composing with an external ecosystem of AI-facing tools. + +**Important boundary:** An MCP Client is about *consuming* capabilities, not publishing them. 
If everything the app needs is local and in-process, ordinary function or tool calling is still simpler. + +For more information, see [Get started with MCP](../get-started-mcp.md). + +### Microsoft Agent Framework (MAF) + +Microsoft Agent Framework is the orchestration layer for systems that are truly agentic: they pursue a goal across multiple steps, make decisions along the way, use tools, and might coordinate multiple agents. + +**Use MAF when the system needs:** + +- Planning or stepwise execution. +- Routing across tools or specialist agents. +- Handoffs between agents or humans. +- Stateful, multi-step workflows that adapt as results come back. + +**Important boundary:** Not every AI feature needs MAF. If a direct MEAI call or a simple tool-calling loop solves the problem, stay simpler. MAF matters when orchestration complexity is the real challenge, not just model access. + +For more information, see [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview). + +### AI Toolkit + +AI Toolkit is a VS Code extension pack for AI development that speeds up experimentation with models, prompts, agents, and evaluations. + +**Use AI Toolkit when teams want:** + +- A model catalog and playground inside VS Code. +- A faster loop for testing prompts and models. +- An agent builder and agent inspector for creating, debugging, and visualizing agents. +- A friendlier workflow for experimenting during development. +- Bulk runs, tracing, or fine-tuning as part of the development loop. + +**Important boundary:** AI Toolkit isn't the core runtime architecture for the production app. It complements MEAI, Evaluations, and Foundry Local. + +### Copilot SDK + +Copilot SDK is a pre-built agent harness and runtime that brings tools, context, and automatic tool calling out of the box. + +**Use Copilot SDK when:** + +- You want more built-in runtime behavior than a blank-slate MEAI app provides. +- You want tools and context wired in quickly. 
+- Automatic tool calling and agent-harness behavior are more valuable than assembling everything manually. +- You want a faster path to a working assistant or agent runtime. + +**Important boundary:** Copilot SDK is more opinionated and pre-wired than MEAI. If the goal is a fully custom app architecture, direct MEAI or MAF composition might be a better fit. + +### Azure AI Foundry + +Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions, with two primary functions: model management and hosted agents. + +**Use Azure AI Foundry when priorities include:** + +- Managed model hosting. +- Hosted agent capabilities with persistent memory and built-in tools. +- Code-free RAG through file upload or grounding services. +- Out-of-the-box evaluations, comparisons, and safety checks. +- Safety filtering and governance controls. +- Enterprise compliance and operational consistency. +- Centralized access to models and cloud deployment infrastructure. + +**Important boundary:** Azure AI Foundry isn't the app-facing programming abstraction—MEAI still plays that role in .NET code. Azure AI Foundry becomes the right lead when the real question is *where* and under what controls the model runs. + +For more information, see the [Azure AI Foundry documentation](/azure/ai-foundry/). + +### Foundry Local + +Foundry Local is a local development and local-first deployment option for teams that need to keep AI workloads close to the machine or environment. + +**Use Foundry Local when:** + +- Local development needs better production parity. +- The organization is local-first or all-local because of privacy, compliance, or data residency requirements. +- You want the local experience to align somewhat with Azure AI Foundry. +- Teams need to experiment locally without sending sensitive data to the cloud. + +**Important boundary:** Foundry Local is about the development and deployment path, not the higher-level app architecture itself. 
Local-to-cloud is not a clean one-to-one move—expect differences in features, hosting model, and operations. + +### Aspire + +.NET Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. + +**Use Aspire when the solution includes:** + +- Separate API, agent, retrieval, and model-facing services. +- Service discovery and distributed configuration. +- Tracing, telemetry, and end-to-end visibility across the system. + +AI systems often stop being "just one app" once retrieval, tools, gateways, and worker services are involved. Aspire helps teams keep those parts understandable and observable, and its visuals make it easier to trace AI flows across services. + +**Important boundary:** Aspire isn't specifically the AI runtime; it's the multi-service application layer around it. It doesn't replace MEAI, MAF, or Azure AI Foundry. + +For more information, see the [.NET Aspire documentation](/dotnet/aspire/). 
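To ground the boundaries above, here's a minimal sketch of the model-calling layer on its own. It assumes the `Microsoft.Extensions.AI` abstractions (`IChatClient`, `ChatMessage`, `ChatRole`) plus a provider package that supplies the `IChatClient` implementation; member names vary across package versions, so treat this as illustrative rather than canonical:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;

public static class SummaryHelper
{
    // 'client' is any IChatClient implementation, typically registered through
    // dependency injection by a provider package (Azure OpenAI, OpenAI, Ollama, ...).
    public static async Task<string> SummarizeAsync(IChatClient client, string document)
    {
        var messages = new List<ChatMessage>
        {
            new(ChatRole.System, "Summarize the user's document in two sentences."),
            new(ChatRole.User, document),
        };

        var response = await client.GetResponseAsync(messages);
        return response.Text;
    }
}
```

Because the method depends only on `IChatClient`, the same code can run against a cloud-hosted model or a local one; that substitutability is the point of leading with MEAI.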
+ +## Common combinations + +Most production AI applications combine several components: + +- **Chat or summarization app**: MEAI + Evaluations +- **RAG application**: MEDI + MEVD + MEAI +- **Multi-agent system**: MEAI + MAF + Aspire +- **Tool interoperability**: MEAI + MCP Server + MCP Client +- **Enterprise cloud app**: MEAI + Azure AI Foundry + Aspire +- **Local-first app**: MEAI + Foundry Local + AI Toolkit (development) + +## Next steps + +- [.NET + AI ecosystem tools and SDKs](../dotnet-ai-ecosystem.md) +- [Microsoft.Extensions.AI overview](../microsoft-extensions-ai.md) +- [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview) +- [Get started with MCP](../get-started-mcp.md) +- [Vector stores overview](../vector-stores/overview.md) +- [Data ingestion for AI apps](data-ingestion.md) +- [Evaluations overview](../evaluation/libraries.md) diff --git a/docs/ai/dotnet-ai-ecosystem.md b/docs/ai/dotnet-ai-ecosystem.md index 84e5d0d2dad37..d202137bb5bb2 100644 --- a/docs/ai/dotnet-ai-ecosystem.md +++ b/docs/ai/dotnet-ai-ecosystem.md @@ -1,7 +1,7 @@ --- title: .NET + AI ecosystem tools and SDKs description: This article provides an overview of the ecosystem of SDKs and tools available to .NET developers integrating AI into their applications. -ms.date: 12/10/2025 +ms.date: 04/15/2026 ms.topic: overview --- @@ -9,6 +9,9 @@ ms.topic: overview The .NET ecosystem provides many powerful tools, libraries, and services to develop AI applications. .NET supports both cloud and local AI model connections, many different SDKs for various AI and vector database services, and other tools to help you build intelligent apps of varying scope and complexity. +> [!TIP] +> Not sure which tool to use? See [Choose the right .NET AI tool](conceptual/choose-ai-tool.md) for guidance on when to use each component. + > [!IMPORTANT] > Not all of the SDKs and services presented in this article are maintained by Microsoft. 
When considering an SDK, make sure to evaluate its quality, licensing, support, and compatibility to ensure they meet your requirements. diff --git a/docs/ai/overview.md b/docs/ai/overview.md index fbf5273042704..8e8c1cf53025f 100644 --- a/docs/ai/overview.md +++ b/docs/ai/overview.md @@ -1,7 +1,7 @@ --- title: Develop .NET apps with AI features description: Learn how you can build .NET applications that include AI features. -ms.date: 12/10/2025 +ms.date: 04/15/2026 ms.topic: overview --- @@ -58,7 +58,7 @@ We recommend the following sequence of tutorials and articles for an introductio | Generate images | [Generate images from text](./quickstarts/text-to-image.md) | | Train your own model | [ML.NET tutorial](https://dotnet.microsoft.com/learn/ml-dotnet/get-started-tutorial/intro) | -Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](./conceptual/how-genai-and-llms-work.md). +Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](./conceptual/how-genai-and-llms-work.md). If you're not sure which .NET AI tool or SDK to use for your scenario, see [Choose the right .NET AI tool](./conceptual/choose-ai-tool.md). 
## Next steps diff --git a/docs/ai/toc.yml b/docs/ai/toc.yml index 8f8d56e517470..b92c5a3937869 100644 --- a/docs/ai/toc.yml +++ b/docs/ai/toc.yml @@ -41,6 +41,8 @@ items: items: - name: How generative AI and LLMs work href: conceptual/how-genai-and-llms-work.md + - name: Choose the right .NET AI tool + href: conceptual/choose-ai-tool.md - name: Agents href: conceptual/agents.md - name: Tokens From efe04a895f224e4dc964d59ade52fcbf20aa0b96 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 15 Apr 2026 02:51:56 +0000 Subject: [PATCH 03/10] Address code review feedback: improve writing style in choose-ai-tool.md Agent-Logs-Url: https://github.com/dotnet/docs/sessions/dcb14719-132d-4dad-87e0-bd0d48d895a4 Co-authored-by: gewarren <24882762+gewarren@users.noreply.github.com> --- docs/ai/conceptual/choose-ai-tool.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/ai/conceptual/choose-ai-tool.md b/docs/ai/conceptual/choose-ai-tool.md index 2915e1110594a..b8a30991a4cfb 100644 --- a/docs/ai/conceptual/choose-ai-tool.md +++ b/docs/ai/conceptual/choose-ai-tool.md @@ -57,7 +57,7 @@ Start by identifying your primary challenge: MEAI gives .NET developers a clean abstraction for model interaction. It fits naturally into dependency injection, configuration, and existing app architectures and is the usual first layer of an AI-enabled .NET application. -**Important boundary:** MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, that's where MAF matters. +**Important boundary:** MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, use [MAF](#microsoft-agent-framework-maf) instead. 
**Don't lead with MEAI alone when:** @@ -100,7 +100,7 @@ For more information, see [Microsoft.Extensions.AI.Evaluation libraries](../eval Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. -**Important boundary:** MEDI is not the retrieval layer—it comes *before* retrieval. It prepares and shapes the data that MEVD or another store later queries. +**Important boundary:** MEDI isn't the retrieval layer—it comes *before* retrieval. It prepares and shapes the data that MEVD or another store later queries. **Don't lead with MEDI when:** @@ -122,7 +122,7 @@ For more information, see [Data ingestion for AI apps](data-ingestion.md). MEVD gives .NET applications a consistent way to work with vector stores and helps separate vector storage and retrieval concerns from model invocation concerns. -**Important boundary:** MEVD isn't the model layer or the ingestion layer. MEDI prepares the data, MEVD stores and retrieves the data, and MEAI uses that retrieved context with the model. +**Important boundary:** MEVD isn't the model layer or the ingestion layer. MEDI prepares the data. MEVD stores and retrieves the data. MEAI uses that retrieved context with the model. For more information, see [Vector stores overview](../vector-stores/overview.md). @@ -138,7 +138,7 @@ An MCP Server exposes capabilities—tools, resources, or prompts—over the Mod An MCP Server turns app capabilities into reusable AI-facing endpoints. It reduces duplicated tool integration work across assistants and creates a cleaner boundary between capability providers and capability consumers. -**Important boundary:** An MCP Server is about *publishing* capabilities. If the capability only needs to be used inside one app, ordinary in-process function calling is simpler. +**Important boundary:** An MCP Server is about *publishing* capabilities. 
If the capability is used only inside one app, ordinary in-process function calling is simpler. ### MCP Client @@ -165,7 +165,7 @@ Microsoft Agent Framework is the orchestration layer for systems that are truly - Handoffs between agents or humans. - Stateful, multi-step workflows that adapt as results come back. -**Important boundary:** Not every AI feature needs MAF. If a direct MEAI call or a simple tool-calling loop solves the problem, stay simpler. MAF matters when orchestration complexity is the real challenge, not just model access. +**Important boundary:** Not every AI feature needs MAF. If a direct MEAI call or a simple tool-calling loop solves the problem, use a simpler approach. MAF matters when orchestration complexity is the real challenge, not just model access. For more information, see [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview). @@ -194,7 +194,7 @@ Copilot SDK is a pre-built agent harness and runtime that brings tools, context, - Automatic tool calling and agent-harness behavior are more valuable than assembling everything manually. - You want a faster path to a working assistant or agent runtime. -**Important boundary:** Copilot SDK is more opinionated and pre-wired than MEAI. If the goal is a fully custom app architecture, direct MEAI or MAF composition might be a better fit. +**Important boundary:** Copilot SDK is more opinionated and pre-wired than MEAI. If the goal is a fully custom app architecture, direct MEAI or MAF composition can be a better fit. ### Azure AI Foundry @@ -210,7 +210,7 @@ Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions - Enterprise compliance and operational consistency. - Centralized access to models and cloud deployment infrastructure. -**Important boundary:** Azure AI Foundry isn't the app-facing programming abstraction—MEAI still plays that role in .NET code. 
Azure AI Foundry becomes the right lead when the real question is *where* and under what controls the model runs. +**Important boundary:** Azure AI Foundry isn't the app-facing programming abstraction—MEAI still plays that role in .NET code. Azure AI Foundry becomes the right lead when the real question is *where* the model runs and under what controls. For more information, see the [Azure AI Foundry documentation](/azure/ai-foundry/). @@ -225,7 +225,7 @@ Foundry Local is a local development and local-first deployment option for teams - You want the local experience to align somewhat with Azure AI Foundry. - Teams need to experiment locally without sending sensitive data to the cloud. -**Important boundary:** Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud is not a clean one-to-one move—expect differences in features, hosting model, and operations. +**Important boundary:** Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud isn't a clean one-to-one move—expect differences in features, hosting model, and operations. 
 ### Aspire

From e8f7ddcd54dc5667a8df136ebddb950453c71f95 Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Wed, 15 Apr 2026 03:07:11 +0000
Subject: [PATCH 04/10] Address review feedback: xref links, AI Toolkit link,
 Aspire naming, add to index.yml

Agent-Logs-Url: https://github.com/dotnet/docs/sessions/e4a789c2-5b6b-4a84-bd20-92983b288814
Co-authored-by: gewarren <24882762+gewarren@users.noreply.github.com>
---
 docs/ai/conceptual/choose-ai-tool.md | 14 ++++++++------
 docs/ai/index.yml                    |  2 ++
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/docs/ai/conceptual/choose-ai-tool.md b/docs/ai/conceptual/choose-ai-tool.md
index b8a30991a4cfb..498f6eadfe5ee 100644
--- a/docs/ai/conceptual/choose-ai-tool.md
+++ b/docs/ai/conceptual/choose-ai-tool.md
@@ -44,7 +44,7 @@ Start by identifying your primary challenge:
 
 ### Microsoft.Extensions.AI (MEAI)
 
-`Microsoft.Extensions.AI` is the app-facing foundation for adding model-powered behavior to a .NET application.
+<xref:Microsoft.Extensions.AI> is the app-facing foundation for adding model-powered behavior to a .NET application.
 
 **Use MEAI when you want to:**
 
@@ -72,7 +72,7 @@ For more information, see [Microsoft.Extensions.AI overview](../microsoft-extens
 
 ### Evaluations
 
-`Microsoft.Extensions.AI.Evaluations` is the quality and regression layer for AI features built with the .NET AI stack.
+<xref:Microsoft.Extensions.AI.Evaluation> is the quality and regression layer for AI features built with the .NET AI stack.
 
 **Use Evaluations when you need to answer questions like:**
 
@@ -89,7 +89,7 @@ For more information, see [Microsoft.Extensions.AI.Evaluation libraries](../eval
 
 ### Microsoft.Extensions.DataIngestion (MEDI)
 
-`Microsoft.Extensions.DataIngestion` is the ingestion and preparation layer for AI-ready data in .NET.
+<xref:Microsoft.Extensions.DataIngestion> is the ingestion and preparation layer for AI-ready data in .NET.
 
 **Use MEDI when:**
 
@@ -111,7 +111,7 @@ For more information, see [Data ingestion for AI apps](data-ingestion.md).
### Microsoft.Extensions.VectorData (MEVD) -`Microsoft.Extensions.VectorData` is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. + is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. **Use MEVD when:** @@ -183,6 +183,8 @@ AI Toolkit is a VS Code extension pack for AI development that speeds up experim **Important boundary:** AI Toolkit isn't the core runtime architecture for the production app. It complements MEAI, Evaluations, and Foundry Local. +For more information, see [AI Toolkit for Visual Studio Code](https://code.visualstudio.com/docs/intelligentapps/overview). + ### Copilot SDK Copilot SDK is a pre-built agent harness and runtime that brings tools, context, and automatic tool calling out of the box. @@ -229,7 +231,7 @@ Foundry Local is a local development and local-first deployment option for teams ### Aspire -.NET Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. +Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. **Use Aspire when the solution includes:** @@ -241,7 +243,7 @@ AI systems often stop being "just one app" once retrieval, tools, gateways, and **Important boundary:** Aspire isn't specifically the AI runtime; it's the multi-service application layer around it. It doesn't replace MEAI, MAF, or Azure AI Foundry. -For more information, see the [.NET Aspire documentation](/dotnet/aspire/). +For more information, see the [Aspire documentation](/dotnet/aspire/). 
## Common combinations diff --git a/docs/ai/index.yml b/docs/ai/index.yml index 393f59396a2fd..28ad720ed7113 100644 --- a/docs/ai/index.yml +++ b/docs/ai/index.yml @@ -43,6 +43,8 @@ landingContent: linkLists: - linkListType: concept links: + - text: Choose the right .NET AI tool + url: conceptual/choose-ai-tool.md - text: How generative AI and LLMs work url: conceptual/how-genai-and-llms-work.md - text: Build agents to automate workflows From c3ae6f2d2edf00e29c0337dd97d276ab35a88450 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 15 Apr 2026 17:45:07 +0000 Subject: [PATCH 05/10] Convert 'How to decide' bullets to table; add Copilot SDK and Foundry Local links Agent-Logs-Url: https://github.com/dotnet/docs/sessions/837d6ea3-2507-4ed2-bcae-2e9cee61572d Co-authored-by: gewarren <24882762+gewarren@users.noreply.github.com> --- docs/ai/conceptual/choose-ai-tool.md | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/docs/ai/conceptual/choose-ai-tool.md b/docs/ai/conceptual/choose-ai-tool.md index 498f6eadfe5ee..77e7116429c32 100644 --- a/docs/ai/conceptual/choose-ai-tool.md +++ b/docs/ai/conceptual/choose-ai-tool.md @@ -31,14 +31,14 @@ The following table summarizes when to reach for each component: ## How to decide -Start by identifying your primary challenge: - -- **Adding AI behavior to an app** → Start with [MEAI](#microsoftextensionsai-meai). Add [Evaluations](#evaluations) once you have something worth measuring. -- **Working with your own data** → If you need to read, chunk, or enrich content first, start with [MEDI](#microsoftextensionsdataingestion-medi). Then use [MEVD](#microsoftextensionsvectordata-mevd) for vector storage and retrieval. -- **Sharing or consuming capabilities across AI clients** → Build an [MCP Server](#mcp-server) to publish capabilities, or use an [MCP Client](#mcp-client) to consume them. 
-- **Building a truly agentic system** → If you want a ready-made harness, use the [Copilot SDK](#copilot-sdk). For multi-step goal pursuit, routing, or handoffs, use [MAF](#microsoft-agent-framework-maf). -- **Choosing a hosting or execution model** → Use [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) when the solution is a distributed multi-service system. -- **Improving the developer workflow** → Use [AI Toolkit](#ai-toolkit). +| If your primary challenge is... | Start with... | +|---------------------------------|---------------| +| **Adding AI behavior to an app** | [MEAI](#microsoftextensionsai-meai). Add [Evaluations](#evaluations) once you have something worth measuring. | +| **Working with your own data** | [MEDI](#microsoftextensionsdataingestion-medi) to read, chunk, or enrich content. Then use [MEVD](#microsoftextensionsvectordata-mevd) for vector storage and retrieval. | +| **Sharing or consuming capabilities across AI clients** | An [MCP Server](#mcp-server) to publish capabilities, or an [MCP Client](#mcp-client) to consume them. | +| **Building a truly agentic system** | [Copilot SDK](#copilot-sdk) for a ready-made harness, or [MAF](#microsoft-agent-framework-maf) for multi-step goal pursuit, routing, or handoffs. | +| **Choosing a hosting or execution model** | [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) for distributed multi-service systems. | +| **Improving the developer workflow** | [AI Toolkit](#ai-toolkit) | ## Component guidance @@ -198,6 +198,8 @@ Copilot SDK is a pre-built agent harness and runtime that brings tools, context, **Important boundary:** Copilot SDK is more opinionated and pre-wired than MEAI. If the goal is a fully custom app architecture, direct MEAI or MAF composition can be a better fit. 
+For more information, see the [Copilot SDK repository](https://github.com/github/copilot-sdk). + ### Azure AI Foundry Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions, with two primary functions: model management and hosted agents. @@ -229,6 +231,8 @@ Foundry Local is a local development and local-first deployment option for teams **Important boundary:** Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud isn't a clean one-to-one move—expect differences in features, hosting model, and operations. +For more information, see the [Foundry Local documentation](/azure/foundry-local/). + ### Aspire Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. From 7076bafb7da88febd4efabf67b5af24f6a0c9bde Mon Sep 17 00:00:00 2001 From: Genevieve Warren <24882762+gewarren@users.noreply.github.com> Date: Wed, 15 Apr 2026 13:21:45 -0700 Subject: [PATCH 06/10] human edits --- docs/ai/conceptual/choose-ai-tool.md | 130 +++++++++++++-------------- 1 file changed, 62 insertions(+), 68 deletions(-) diff --git a/docs/ai/conceptual/choose-ai-tool.md b/docs/ai/conceptual/choose-ai-tool.md index 77e7116429c32..4b4abcc8336d3 100644 --- a/docs/ai/conceptual/choose-ai-tool.md +++ b/docs/ai/conceptual/choose-ai-tool.md @@ -8,7 +8,7 @@ ai-usage: ai-assisted # Choose the right .NET AI tool -The .NET AI ecosystem includes many powerful tools and libraries for different purposes. Picking the right one—or the right combination—makes your application easier to build, test, and maintain. This article helps you understand which tool fits your scenario. +The .NET AI ecosystem includes many powerful tools and libraries for different purposes. This article helps you understand which tools to use in which scenarios. 
## Quick reference @@ -31,24 +31,33 @@ The following table summarizes when to reach for each component: ## How to decide -| If your primary challenge is... | Start with... | -|---------------------------------|---------------| -| **Adding AI behavior to an app** | [MEAI](#microsoftextensionsai-meai). Add [Evaluations](#evaluations) once you have something worth measuring. | -| **Working with your own data** | [MEDI](#microsoftextensionsdataingestion-medi) to read, chunk, or enrich content. Then use [MEVD](#microsoftextensionsvectordata-mevd) for vector storage and retrieval. | -| **Sharing or consuming capabilities across AI clients** | An [MCP Server](#mcp-server) to publish capabilities, or an [MCP Client](#mcp-client) to consume them. | -| **Building a truly agentic system** | [Copilot SDK](#copilot-sdk) for a ready-made harness, or [MAF](#microsoft-agent-framework-maf) for multi-step goal pursuit, routing, or handoffs. | -| **Choosing a hosting or execution model** | [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) for distributed multi-service systems. | -| **Improving the developer workflow** | [AI Toolkit](#ai-toolkit) | +The following table recommends which technology to use based on different objectives. -## Component guidance +| Objective | Technology to use | +|-------------------------------|-------------------| +| **Add AI behavior to an app** | [MEAI](#microsoftextensionsai-meai). Add [Evaluations](#evaluations) once you have something worth measuring. | +| **Work with your own data** | [MEDI](#microsoftextensionsdataingestion-medi) to read, chunk, or enrich content. Then use [MEVD](#microsoftextensionsvectordata-mevd) for vector storage and retrieval. | +| **Share or consume capabilities across AI clients** | An [MCP Server](#mcp-server) to publish capabilities, or an [MCP Client](#mcp-client) to consume them. 
| +| **Build an agentic system** | [Copilot SDK](#copilot-sdk) for a ready-made harness, or [MAF](#microsoft-agent-framework-maf) for multi-step goal pursuit, routing, or handoffs. | +| **Choose a hosting or execution model** | [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) for distributed multi-service systems. | +| **Improve the developer workflow** | [AI Toolkit](#ai-toolkit) | -### Microsoft.Extensions.AI (MEAI) +Most production AI applications combine several components: + +- **Chat or summarization app**: MEAI + Evaluations +- **RAG application**: MEDI + MEVD + MEAI +- **Multi-agent system**: MEAI + MAF + Aspire +- **Tool interoperability**: MEAI + MCP Server + MCP Client +- **Enterprise cloud app**: MEAI + Azure AI Foundry + Aspire +- **Local-first app**: MEAI + Foundry Local + AI Toolkit (development) + +## Microsoft.Extensions.AI (MEAI) is the app-facing foundation for adding model-powered behavior to a .NET application. -**Use MEAI when you want to:** +Use MEAI when you want to: -- Build chat or conversational UX. +- Build a chat or conversational user interface. - Stream responses. - Summarize, extract, or classify content. - Produce structured outputs. @@ -57,9 +66,9 @@ The following table summarizes when to reach for each component: MEAI gives .NET developers a clean abstraction for model interaction. It fits naturally into dependency injection, configuration, and existing app architectures and is the usual first layer of an AI-enabled .NET application. -**Important boundary:** MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, use [MAF](#microsoft-agent-framework-maf) instead. +MEAI alone isn't an agent framework. 
A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, use [MAF](#microsoft-agent-framework-maf) instead. -**Don't lead with MEAI alone when:** +Don't lead with MEAI alone when: - The hard problem is preparing enterprise content for AI—add [MEDI](#microsoftextensionsdataingestion-medi). - The hard problem is retrieval or RAG—add [MEVD](#microsoftextensionsvectordata-mevd). @@ -70,50 +79,46 @@ MEAI gives .NET developers a clean abstraction for model interaction. It fits na For more information, see [Microsoft.Extensions.AI overview](../microsoft-extensions-ai.md). -### Evaluations +## Evaluations - is the quality and regression layer for AI features built with the .NET AI stack. +The [Microsoft.Extensions.AI.Evaluation library](../evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. -**Use Evaluations when you need to answer questions like:** +Use evaluations when you need to answer questions like: - "Is this prompt change actually better?" - "Did switching models hurt quality or safety?" - "Did the agent regress on key scenarios?" - "Can you measure behavior before shipping?" -AI behavior changes easily as prompts, models, and tools evolve. Intuition doesn't scale—Evaluations give teams a repeatable way to compare outputs and catch regressions. - -**Important boundary:** Evaluations aren't the runtime feature itself. They're most valuable once you already have a feature, workflow, or agent worth measuring. +AI behavior changes readily as prompts, models, and tools evolve. The evaluations library gives teams a repeatable way to compare outputs and catch regressions. For more information, see [Microsoft.Extensions.AI.Evaluation libraries](../evaluation/libraries.md).
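The one-shot and streaming MEAI call shapes described in this section can be sketched as follows. This is a minimal illustration, not a documented sample: `GetConfiguredChatClient` is a hypothetical helper standing in for whatever provider adapter (OpenAI, Azure AI Foundry, Ollama, and so on) the app registers, typically through dependency injection.

```csharp
using Microsoft.Extensions.AI;

// IChatClient is MEAI's provider-neutral abstraction for model interaction.
IChatClient client = GetConfiguredChatClient(); // hypothetical helper; normally resolved via DI

// One-shot call: summarize immediate input.
ChatResponse response = await client.GetResponseAsync(
    "Summarize this text in one sentence: <document text>");
Console.WriteLine(response.Text);

// Streaming: surface tokens as they arrive.
await foreach (ChatResponseUpdate update in
    client.GetStreamingResponseAsync("Explain vector embeddings briefly."))
{
    Console.Write(update.Text);
}
```

Because the app depends only on `IChatClient`, swapping the model host is a configuration change rather than a code change.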
-### Microsoft.Extensions.DataIngestion (MEDI) +## Microsoft.Extensions.DataIngestion (MEDI) is the ingestion and preparation layer for AI-ready data in .NET. -**Use MEDI when:** +Use MEDI when: - You need to read content from files, stores, or enterprise sources. - You need to chunk documents for retrieval and grounding. - You need to normalize and enrich content with metadata. - You're preparing data to feed a vector index or downstream RAG pipeline. -Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. - -**Important boundary:** MEDI isn't the retrieval layer—it comes *before* retrieval. It prepares and shapes the data that MEVD or another store later queries. +Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. MEDI prepares and shapes the data that MEVD or another store later queries. -**Don't lead with MEDI when:** +Don't lead with MEDI when: -- The app just needs chat, extraction, summarization, or tool calling over immediate input—start with [MEAI](#microsoftextensionsai-meai). -- Your content is already prepared and the need is semantic lookup—lead with [MEVD](#microsoftextensionsvectordata-mevd). +- The app just needs chat, extraction, summarization, or tool calling over immediate input. Instead, start with [MEAI](#microsoftextensionsai-meai). +- Your content is already prepared and the need is semantic lookup. Instead, lead with [MEVD](#microsoftextensionsvectordata-mevd). For more information, see [Data ingestion for AI apps](data-ingestion.md). -### Microsoft.Extensions.VectorData (MEVD) +## Microsoft.Extensions.VectorData (MEVD) is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. -**Use MEVD when:** +Use MEVD when: - You need semantic search. - You need embeddings-backed retrieval. 
@@ -122,15 +127,15 @@ For more information, see [Data ingestion for AI apps](data-ingestion.md). MEVD gives .NET applications a consistent way to work with vector stores and helps separate vector storage and retrieval concerns from model invocation concerns. -**Important boundary:** MEVD isn't the model layer or the ingestion layer. MEDI prepares the data. MEVD stores and retrieves the data. MEAI uses that retrieved context with the model. +MEDI prepares the data. MEVD stores and retrieves the data. MEAI uses that retrieved context with the model. For more information, see [Vector stores overview](../vector-stores/overview.md). -### MCP Server +## MCP Server -An MCP Server exposes capabilities—tools, resources, or prompts—over the Model Context Protocol so other assistants, IDEs, and agents can discover and use them through a standard protocol. +An MCP Server exposes capabilities such as tools, resources, or prompts over the Model Context Protocol so other assistants, IDEs, and agents can discover and use them through a standard protocol. -**Use an MCP Server when:** +Use an MCP Server when: - You want to make internal APIs or business actions available to multiple AI clients. - You want interoperability instead of custom one-off integrations. @@ -138,42 +143,42 @@ An MCP Server exposes capabilities—tools, resources, or prompts—over the Mod An MCP Server turns app capabilities into reusable AI-facing endpoints. It reduces duplicated tool integration work across assistants and creates a cleaner boundary between capability providers and capability consumers. -**Important boundary:** An MCP Server is about *publishing* capabilities. If the capability is used only inside one app, ordinary in-process function calling is simpler. +An MCP Server is about *publishing* capabilities. If the capability is used only inside one app, ordinary in-process function calling is simpler. 
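To make that boundary concrete, the "ordinary in-process function calling" alternative looks roughly like the following with MEAI's function-invocation middleware. This is a sketch under assumptions: `GetWeather` and `GetConfiguredChatClient` are illustrative stand-ins, not APIs defined in this article.

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

// A local capability used only by this app: no MCP server required.
[Description("Gets the current weather for a city.")]
static string GetWeather(string city) => $"Sunny in {city}"; // stub for illustration

IChatClient client = new ChatClientBuilder(GetConfiguredChatClient()) // hypothetical helper
    .UseFunctionInvocation() // MEAI middleware that executes requested tool calls
    .Build();

var options = new ChatOptions { Tools = [AIFunctionFactory.Create(GetWeather)] };
ChatResponse response = await client.GetResponseAsync("What's the weather in Oslo?", options);
Console.WriteLine(response.Text);
```

An MCP Server becomes worthwhile only when a capability like this must be discoverable by assistants and agents outside the app's own process.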
-### MCP Client +## MCP Client An MCP Client is the consumer side of the protocol: it connects to MCP servers and brings their exposed capabilities into an app, assistant, or agent runtime. -**Use an MCP Client when:** +Use an MCP Client when: - Your app needs to call tools that are already exposed elsewhere through MCP. - You want an agent or assistant to consume capabilities without custom per-tool plumbing. - You're composing with an external ecosystem of AI-facing tools. -**Important boundary:** An MCP Client is about *consuming* capabilities, not publishing them. If everything the app needs is local and in-process, ordinary function or tool calling is still simpler. +An MCP Client is about *consuming* capabilities, not publishing them. If everything the app needs is local and in-process, ordinary function or tool calling is still simpler. For more information, see [Get started with MCP](../get-started-mcp.md). -### Microsoft Agent Framework (MAF) +## Microsoft Agent Framework (MAF) Microsoft Agent Framework is the orchestration layer for systems that are truly agentic: they pursue a goal across multiple steps, make decisions along the way, use tools, and might coordinate multiple agents. -**Use MAF when the system needs:** +Use MAF when the system needs: - Planning or stepwise execution. - Routing across tools or specialist agents. - Handoffs between agents or humans. - Stateful, multi-step workflows that adapt as results come back. -**Important boundary:** Not every AI feature needs MAF. If a direct MEAI call or a simple tool-calling loop solves the problem, use a simpler approach. MAF matters when orchestration complexity is the real challenge, not just model access. +Not every AI feature needs MAF. If a direct MEAI call or a simple tool-calling loop solves the problem, use a simpler approach. MAF matters when orchestration complexity is the real challenge, not just model access. 
For more information, see [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview). -### AI Toolkit +## AI Toolkit AI Toolkit is a VS Code extension pack for AI development that speeds up experimentation with models, prompts, agents, and evaluations. -**Use AI Toolkit when teams want:** +Use AI Toolkit when you want: - A model catalog and playground inside VS Code. - A faster loop for testing prompts and models. @@ -181,30 +186,30 @@ AI Toolkit is a VS Code extension pack for AI development that speeds up experim - A friendlier workflow for experimenting during development. - Bulk runs, tracing, or fine-tuning as part of the development loop. -**Important boundary:** AI Toolkit isn't the core runtime architecture for the production app. It complements MEAI, Evaluations, and Foundry Local. +AI Toolkit isn't the core runtime architecture for the production app. It complements MEAI, Evaluations, and Foundry Local. For more information, see [AI Toolkit for Visual Studio Code](https://code.visualstudio.com/docs/intelligentapps/overview). -### Copilot SDK +## Copilot SDK -Copilot SDK is a pre-built agent harness and runtime that brings tools, context, and automatic tool calling out of the box. +Copilot SDK is a prebuilt agent harness and runtime that brings tools, context, and automatic tool calling out of the box. **Use Copilot SDK when:** -- You want more built-in runtime behavior than a blank-slate MEAI app provides. +- You want more built-in runtime behavior than the blank slate that an MEAI app provides. - You want tools and context wired in quickly. - Automatic tool calling and agent-harness behavior are more valuable than assembling everything manually. - You want a faster path to a working assistant or agent runtime. -**Important boundary:** Copilot SDK is more opinionated and pre-wired than MEAI. If the goal is a fully custom app architecture, direct MEAI or MAF composition can be a better fit. 
+Copilot SDK is more opinionated and prewired than MEAI. If the goal is a fully custom app architecture, direct MEAI or MAF composition can be a better fit. For more information, see the [Copilot SDK repository](https://github.com/github/copilot-sdk). -### Azure AI Foundry +## Azure AI Foundry Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions, with two primary functions: model management and hosted agents. -**Use Azure AI Foundry when priorities include:** +Use Azure AI Foundry when your priorities are: - Managed model hosting. - Hosted agent capabilities with persistent memory and built-in tools. @@ -214,30 +219,30 @@ Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions - Enterprise compliance and operational consistency. - Centralized access to models and cloud deployment infrastructure. -**Important boundary:** Azure AI Foundry isn't the app-facing programming abstraction—MEAI still plays that role in .NET code. Azure AI Foundry becomes the right lead when the real question is *where* the model runs and under what controls. +Azure AI Foundry isn't the app-facing programming abstraction; MEAI still plays that role in .NET code. Azure AI Foundry becomes the right lead when the real question is *where* the model runs and under what controls. For more information, see the [Azure AI Foundry documentation](/azure/ai-foundry/). -### Foundry Local +## Foundry Local Foundry Local is a local development and local-first deployment option for teams that need to keep AI workloads close to the machine or environment. -**Use Foundry Local when:** +Use Foundry Local when: - Local development needs better production parity. - The organization is local-first or all-local because of privacy, compliance, or data residency requirements. - You want the local experience to align somewhat with Azure AI Foundry. - Teams need to experiment locally without sending sensitive data to the cloud. 
-**Important boundary:** Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud isn't a clean one-to-one move—expect differences in features, hosting model, and operations. +Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud isn't a clean one-to-one move, so expect differences in features, hosting model, and operations. For more information, see the [Foundry Local documentation](/azure/foundry-local/). -### Aspire +## Aspire Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. -**Use Aspire when the solution includes:** +Use Aspire when the solution includes: - Separate API, agent, retrieval, and model-facing services. - Service discovery and distributed configuration. @@ -249,17 +254,6 @@ AI systems often stop being "just one app" once retrieval, tools, gateways, and For more information, see the [Aspire documentation](/dotnet/aspire/).
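For orientation, a multi-service layout like the one Aspire targets can be wired in an AppHost along these lines. The project and resource names here are illustrative; in a real solution the `Projects.*` types are generated from project references.

```csharp
// Aspire AppHost (Program.cs): orchestration and service wiring for a distributed AI app.
var builder = DistributedApplication.CreateBuilder(args);

// Illustrative names: an API that calls the model, and a web front end that calls the API.
var chatApi = builder.AddProject<Projects.ChatApi>("chat-api");

builder.AddProject<Projects.ChatWeb>("chat-web")
       .WithReference(chatApi); // service discovery and configuration flow to the consumer

builder.Build().Run();
```

The AI-specific code inside each service still uses MEAI or MAF; Aspire only handles how the services find and observe each other.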
-## Common combinations - -Most production AI applications combine several components: - -- **Chat or summarization app**: MEAI + Evaluations -- **RAG application**: MEDI + MEVD + MEAI -- **Multi-agent system**: MEAI + MAF + Aspire -- **Tool interoperability**: MEAI + MCP Server + MCP Client -- **Enterprise cloud app**: MEAI + Azure AI Foundry + Aspire -- **Local-first app**: MEAI + Foundry Local + AI Toolkit (development) - ## Next steps - [.NET + AI ecosystem tools and SDKs](../dotnet-ai-ecosystem.md) From b8d2a3f694110b4b5686043b51dbdcf943172c7f Mon Sep 17 00:00:00 2001 From: Genevieve Warren <24882762+gewarren@users.noreply.github.com> Date: Wed, 15 Apr 2026 18:25:08 -0700 Subject: [PATCH 07/10] human edits --- .openpublishing.redirection.ai.json | 4 + .../{ai-tools.md => calling-tools.md} | 0 docs/ai/conceptual/choose-ai-tool.md | 265 ------------------ docs/ai/conceptual/data-ingestion.md | 32 +-- docs/ai/conceptual/medi-library.md | 38 +++ docs/ai/conceptual/mevd-library.md | 30 ++ docs/ai/dotnet-ai-ecosystem.md | 147 ++++++---- docs/ai/index.yml | 2 - docs/ai/overview.md | 2 +- docs/ai/toc.yml | 36 +-- docs/ai/vector-stores/overview.md | 19 +- 11 files changed, 192 insertions(+), 383 deletions(-) rename docs/ai/conceptual/{ai-tools.md => calling-tools.md} (100%) delete mode 100644 docs/ai/conceptual/choose-ai-tool.md create mode 100644 docs/ai/conceptual/medi-library.md create mode 100644 docs/ai/conceptual/mevd-library.md diff --git a/.openpublishing.redirection.ai.json b/.openpublishing.redirection.ai.json index edf20ef4ba148..0d717d29c46bb 100644 --- a/.openpublishing.redirection.ai.json +++ b/.openpublishing.redirection.ai.json @@ -13,6 +13,10 @@ "redirect_url": "/dotnet/ai/microsoft-extensions-ai", "redirect_document_id": true }, + { + "source_path_from_root": "/docs/ai/conceptual/ai-tools.md", + "redirect_url": "/dotnet/ai/conceptual/calling-tools" + }, { "source_path_from_root": "/docs/ai/conceptual/evaluation-libraries.md", "redirect_url": 
"/dotnet/ai/evaluation/libraries", diff --git a/docs/ai/conceptual/ai-tools.md b/docs/ai/conceptual/calling-tools.md similarity index 100% rename from docs/ai/conceptual/ai-tools.md rename to docs/ai/conceptual/calling-tools.md diff --git a/docs/ai/conceptual/choose-ai-tool.md b/docs/ai/conceptual/choose-ai-tool.md deleted file mode 100644 index 4b4abcc8336d3..0000000000000 --- a/docs/ai/conceptual/choose-ai-tool.md +++ /dev/null @@ -1,265 +0,0 @@ ---- -title: Choose the right .NET AI tool -description: Learn which .NET AI technology to use for your scenario, including Microsoft.Extensions.AI, Microsoft Agent Framework, vector stores, data ingestion, MCP, evaluations, Azure AI Foundry, Foundry Local, and Aspire. -ms.date: 04/15/2026 -ms.topic: concept-article -ai-usage: ai-assisted ---- - -# Choose the right .NET AI tool - -The .NET AI ecosystem includes many powerful tools and libraries for different purposes. This article helps you understand which tools to use in which scenarios. - -## Quick reference - -The following table summarizes when to reach for each component: - -| Component | Use it when... | Don't lead with it when... | -|-----------|---------------|---------------------------| -| **Microsoft.Extensions.AI (MEAI)** | You need to add AI behavior to an app: chat, summarization, structured outputs, tool calling, embeddings, or streaming. | The hardest problem is ingestion, retrieval, orchestration, or infrastructure. | -| **Evaluations** | You need repeatable quality checks for prompts, models, agents, or AI features—including regressions and side-by-side comparisons. | You haven't yet built the AI behavior you want to measure. | -| **Microsoft.Extensions.DataIngestion (MEDI)** | Your challenge is getting source content into shape for AI: reading, chunking, enriching, and preparing data for grounding or RAG. | You only need a direct model call and no serious data preparation pipeline. 
| -| **Microsoft.Extensions.VectorData (MEVD)** | You need semantic search, retrieval, embeddings-backed lookup, or RAG over your own data. | You don't need vector retrieval or grounding. | -| **MCP Server** | You want to expose tools, resources, or prompts so external assistants, IDEs, or agents can discover and use them. | The capability is local to one app and ordinary in-process function calling is enough. | -| **MCP Client** | Your app or agent needs to connect to existing MCP servers and consume external tools or resources. | You don't need interoperability beyond your own process boundary. | -| **Microsoft Agent Framework (MAF)** | The system must pursue goals across multiple steps with routing, planning, handoffs, or multiple collaborating agents. | A single LLM call or straightforward tool-calling loop is sufficient. | -| **AI Toolkit** | You want a better developer workflow for trying models, prompts, and evaluations during development. | You need the runtime abstraction or production architecture itself. | -| **Copilot SDK** | You want a pre-built agent harness with tools, context, and automatic tool calling out of the box. | You want a blank-slate app stack or full low-level control from MEAI or MAF. | -| **Azure AI Foundry** | You need managed model hosting, safety, governance, enterprise controls, and a cloud deployment target. | You're only deciding how app code should call a model. | -| **Foundry Local** | You need local or local-first AI for privacy or compliance reasons, or you want some alignment with Azure AI Foundry without requiring a perfect cloud transition. | Any local runtime works, or you expect local-to-cloud to be a drop-in identical move. | -| **Aspire** | The solution spans multiple services and you need orchestration, service discovery, and observability. | The app is still a single service or proof of concept. | - -## How to decide - -The following table recommends which technology to use based on different objectives. 
- -| Objective | Technology to use | -|-------------------------------|-------------------| -| **Add AI behavior to an app** | [MEAI](#microsoftextensionsai-meai). Add [Evaluations](#evaluations) once you have something worth measuring. | -| **Work with your own data** | [MEDI](#microsoftextensionsdataingestion-medi) to read, chunk, or enrich content. Then use [MEVD](#microsoftextensionsvectordata-mevd) for vector storage and retrieval. | -| **Share or consume capabilities across AI clients** | An [MCP Server](#mcp-server) to publish capabilities, or an [MCP Client](#mcp-client) to consume them. | -| **Build an agentic system** | [Copilot SDK](#copilot-sdk) for a ready-made harness, or [MAF](#microsoft-agent-framework-maf) for multi-step goal pursuit, routing, or handoffs. | -| **Choose a hosting or execution model** | [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) for distributed multi-service systems. | -| **Improve the developer workflow** | [AI Toolkit](#ai-toolkit) | - -Most production AI applications combine several components: - -- **Chat or summarization app**: MEAI + Evaluations -- **RAG application**: MEDI + MEVD + MEAI -- **Multi-agent system**: MEAI + MAF + Aspire -- **Tool interoperability**: MEAI + MCP Server + MCP Client -- **Enterprise cloud app**: MEAI + Azure AI Foundry + Aspire -- **Local-first app**: MEAI + Foundry Local + AI Toolkit (development) - -## Microsoft.Extensions.AI (MEAI) - - is the app-facing foundation for adding model-powered behavior to a .NET application. - -Use MEAI when you want to: - -- Build a chat or conversational user interface. -- Stream responses. -- Summarize, extract, or classify content. -- Produce structured outputs. -- Generate or work with embeddings. -- Call tools or functions. - -MEAI gives .NET developers a clean abstraction for model interaction. 
It fits naturally into dependency injection, configuration, and existing app architectures and is the usual first layer of an AI-enabled .NET application. - -MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, use [MAF](#microsoft-agent-framework-maf) instead. - -Don't lead with MEAI alone when: - -- The hard problem is preparing enterprise content for AI—add [MEDI](#microsoftextensionsdataingestion-medi). -- The hard problem is retrieval or RAG—add [MEVD](#microsoftextensionsvectordata-mevd). -- Tools must be discoverable across assistants or apps—add [MCP Server](#mcp-server). -- The app needs to consume external MCP capabilities—add [MCP Client](#mcp-client). -- The challenge is multi-step autonomous workflows—add [MAF](#microsoft-agent-framework-maf). -- The main concern is hosting, governance, and enterprise controls—add [Azure AI Foundry](#azure-ai-foundry). - -For more information, see [Microsoft.Extensions.AI overview](../microsoft-extensions-ai.md). - -## Evaluations - -The [Microsoft.Extensions.AI.Evaluation library](../evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. - -Use evaluations when you need to answer questions like: - -- "Is this prompt change actually better?" -- "Did switching models hurt quality or safety?" -- "Did the agent regress on key scenarios?" -- "Can you measure behavior before shipping?" - -AI behavior changes readily as prompts, models, and tools evolve. The evaluations library gives teams a repeatable way to compare outputs and catch regressions. - -For more information, see [Microsoft.Extensions.AI.Evaluation libraries](../evaluation/libraries.md). - -## Microsoft.Extensions.DataIngestion (MEDI) - -<xref:Microsoft.Extensions.DataIngestion> is the ingestion and preparation layer for AI-ready data in .NET.
- -Use MEDI when: - -- You need to read content from files, stores, or enterprise sources. -- You need to chunk documents for retrieval and grounding. -- You need to normalize and enrich content with metadata. -- You're preparing data to feed a vector index or downstream RAG pipeline. - -Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. MEDI prepares and shapes the data that MEVD or another store later queries. - -Don't lead with MEDI when: - -- The app just needs chat, extraction, summarization, or tool calling over immediate input. Instead, start with [MEAI](#microsoftextensionsai-meai). -- Your content is already prepared and the need is semantic lookup. Instead, lead with [MEVD](#microsoftextensionsvectordata-mevd). - -For more information, see [Data ingestion for AI apps](data-ingestion.md). - -## Microsoft.Extensions.VectorData (MEVD) - - is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. - -Use MEVD when: - -- You need semantic search. -- You need embeddings-backed retrieval. -- You're implementing RAG. -- You need similarity search or hybrid retrieval patterns. - -MEVD gives .NET applications a consistent way to work with vector stores and helps separate vector storage and retrieval concerns from model invocation concerns. - -MEDI prepares the data. MEVD stores and retrieves the data. MEAI uses that retrieved context with the model. - -For more information, see [Vector stores overview](../vector-stores/overview.md). - -## MCP Server - -An MCP Server exposes capabilities such as tools, resources, or prompts over the Model Context Protocol so other assistants, IDEs, and agents can discover and use them through a standard protocol. - -Use an MCP Server when: - -- You want to make internal APIs or business actions available to multiple AI clients. 
-- You want interoperability instead of custom one-off integrations. -- The same capability should be reusable across tools, products, or agent systems. - -An MCP Server turns app capabilities into reusable AI-facing endpoints. It reduces duplicated tool integration work across assistants and creates a cleaner boundary between capability providers and capability consumers. - -An MCP Server is about *publishing* capabilities. If the capability is used only inside one app, ordinary in-process function calling is simpler. - -## MCP Client - -An MCP Client is the consumer side of the protocol: it connects to MCP servers and brings their exposed capabilities into an app, assistant, or agent runtime. - -Use an MCP Client when: - -- Your app needs to call tools that are already exposed elsewhere through MCP. -- You want an agent or assistant to consume capabilities without custom per-tool plumbing. -- You're composing with an external ecosystem of AI-facing tools. - -An MCP Client is about *consuming* capabilities, not publishing them. If everything the app needs is local and in-process, ordinary function or tool calling is still simpler. - -For more information, see [Get started with MCP](../get-started-mcp.md). - -## Microsoft Agent Framework (MAF) - -Microsoft Agent Framework is the orchestration layer for systems that are truly agentic: they pursue a goal across multiple steps, make decisions along the way, use tools, and might coordinate multiple agents. - -Use MAF when the system needs: - -- Planning or stepwise execution. -- Routing across tools or specialist agents. -- Handoffs between agents or humans. -- Stateful, multi-step workflows that adapt as results come back. - -Not every AI feature needs MAF. If a direct MEAI call or a simple tool-calling loop solves the problem, use a simpler approach. MAF matters when orchestration complexity is the real challenge, not just model access. 
- -For more information, see [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview). - -## AI Toolkit - -AI Toolkit is a VS Code extension pack for AI development that speeds up experimentation with models, prompts, agents, and evaluations. - -Use AI Toolkit when you want: - -- A model catalog and playground inside VS Code. -- A faster loop for testing prompts and models. -- An agent builder and agent inspector for creating, debugging, and visualizing agents. -- A friendlier workflow for experimenting during development. -- Bulk runs, tracing, or fine-tuning as part of the development loop. - -AI Toolkit isn't the core runtime architecture for the production app. It complements MEAI, Evaluations, and Foundry Local. - -For more information, see [AI Toolkit for Visual Studio Code](https://code.visualstudio.com/docs/intelligentapps/overview). - -## Copilot SDK - -Copilot SDK is a prebuilt agent harness and runtime that brings tools, context, and automatic tool calling out of the box. - -**Use Copilot SDK when:** - -- You want more built-in runtime behavior than the blank slate that an MEAI app provides. -- You want tools and context wired in quickly. -- Automatic tool calling and agent-harness behavior are more valuable than assembling everything manually. -- You want a faster path to a working assistant or agent runtime. - -Copilot SDK is more opinionated and prewired than MEAI. If the goal is a fully custom app architecture, direct MEAI or MAF composition can be a better fit. - -For more information, see the [Copilot SDK repository](https://github.com/github/copilot-sdk). - -## Azure AI Foundry - -Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions, with two primary functions: model management and hosted agents. - -Use Azure AI Foundry when your priorities are: - -- Managed model hosting. -- Hosted agent capabilities with persistent memory and built-in tools. 
-- Code-free RAG through file upload or grounding services. -- Out-of-the-box evaluations, comparisons, and safety checks. -- Safety filtering and governance controls. -- Enterprise compliance and operational consistency. -- Centralized access to models and cloud deployment infrastructure. - -Azure AI Foundry isn't the app-facing programming abstraction; MEAI still plays that role in .NET code. Azure AI Foundry becomes the right lead when the real question is *where* the model runs and under what controls. - -For more information, see the [Azure AI Foundry documentation](/azure/ai-foundry/). - -## Foundry Local - -Foundry Local is a local development and local-first deployment option for teams that need to keep AI workloads close to the machine or environment. - -Use Foundry Local when: - -- Local development needs better production parity. -- The organization is local-first or all-local because of privacy, compliance, or data residency requirements. -- You want the local experience to align somewhat with Azure AI Foundry. -- Teams need to experiment locally without sending sensitive data to the cloud. - -**Important boundary:** Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud isn't a clean one-to-one move, so expect differences in features, hosting model, and operations. - -For more information, see the [Foundry Local documentation](/azure/foundry-local/). - -## Aspire - -Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. - -Use Aspire when the solution includes: - -- Separate API, agent, retrieval, and model-facing services. -- Service discovery and distributed configuration. -- Tracing, telemetry, and end-to-end visibility across the system. - -AI systems often stop being "just one app" once retrieval, tools, gateways, and worker services are involved. 
Aspire helps teams keep those parts understandable and observable, and its visuals make it easier to trace AI flows across services. - -**Important boundary:** Aspire isn't specifically the AI runtime; it's the multi-service application layer around it. It doesn't replace MEAI, MAF, or Azure AI Foundry. - -For more information, see the [Aspire documentation](/dotnet/aspire/). - -## Next steps - -- [.NET + AI ecosystem tools and SDKs](../dotnet-ai-ecosystem.md) -- [Microsoft.Extensions.AI overview](../microsoft-extensions-ai.md) -- [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview) -- [Get started with MCP](../get-started-mcp.md) -- [Vector stores overview](../vector-stores/overview.md) -- [Data ingestion for AI apps](data-ingestion.md) -- [Evaluations overview](../evaluation/libraries.md) diff --git a/docs/ai/conceptual/data-ingestion.md b/docs/ai/conceptual/data-ingestion.md index 27281dcf0fcef..fd73f0a723e61 100644 --- a/docs/ai/conceptual/data-ingestion.md +++ b/docs/ai/conceptual/data-ingestion.md @@ -16,7 +16,7 @@ Data ingestion is the process of collecting, reading, and preparing data from di - **Transform** the data by cleaning, chunking, enriching, or converting formats. - **Load** the data into a destination like a database, vector store, or AI model for retrieval and analysis. -For AI and machine learning scenarios, especially Retrieval-Augmented Generation (RAG), data ingestion is not just about converting data from one format to another. It is about making data usable for intelligent applications. This means representing documents in a way that preserves their structure and meaning, splitting them into manageable chunks, enriching them with metadata or embeddings, and storing them so they can be retrieved quickly and accurately. +For AI and machine learning scenarios, especially retrieval-augmented generation (RAG), data ingestion is not just about converting data from one format to another. 
It is about making data usable for intelligent applications. This means representing documents in a way that preserves their structure and meaning, splitting them into manageable chunks, enriching them with metadata or embeddings, and storing them so they can be retrieved quickly and accurately. ## Why data ingestion matters for AI applications @@ -26,37 +26,9 @@ Your chatbot needs to understand and search through thousands of documents to pr This is where data ingestion becomes critical. You need to extract text from different file formats, break large documents into smaller chunks that fit within AI model limits, enrich the content with metadata, generate embeddings for semantic search, and store everything in a way that enables fast retrieval. Each step requires careful consideration of how to preserve the original meaning and context. -## The Microsoft.Extensions.DataIngestion library - -The [📦 Microsoft.Extensions.DataIngestion package](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) provides foundational .NET building blocks for data ingestion. It enables developers to read, process, and prepare documents for AI and machine learning workflows, especially Retrieval-Augmented Generation (RAG) scenarios. - -With these building blocks, you can create robust, flexible, and intelligent data ingestion pipelines tailored for your application needs: - -- **Unified document representation:** Represent any file type (for example, PDF, Image, or Microsoft Word) in a consistent format that works well with large language models. -- **Flexible data ingestion:** Read documents from both cloud services and local sources using multiple built-in readers, making it easy to bring in data from wherever it lives. -- **Built-in AI enhancements:** Automatically enrich content with summaries, sentiment analysis, keyword extraction, and classification, preparing your data for intelligent workflows. 
-- **Customizable chunking strategies:** Split documents into chunks using token-based, section-based, or semantic-aware approaches, so you can optimize for your retrieval and analysis needs. -- **Production-ready storage:** Store processed chunks in popular vector databases and document stores, with support for embedding generation, making your pipelines ready for real-world scenarios. -- **End-to-end pipeline composition:** Chain together readers, processors, chunkers, and writers with the API, reducing boilerplate and making it easy to build, customize, and extend complete workflows. -- **Performance and scalability:** Designed for scalable data processing, these components can handle large volumes of data efficiently, making them suitable for enterprise-grade applications. - -All of these components are open and extensible by design. You can add custom logic and new connectors, and extend the system to support emerging AI scenarios. By standardizing how documents are represented, processed, and stored, .NET developers can build reliable, scalable, and maintainable data pipelines without "reinventing the wheel" for every project. - -### Built on stable foundations - -![Data Ingestion Architecture Diagram](../media/data-ingestion/dataingestion.png) - -These data ingestion building blocks are built on top of proven and extensible components in the .NET ecosystem, ensuring reliability, interoperability, and seamless integration with existing AI workflows: - -- **Microsoft.ML.Tokenizers:** Tokenizers provide the foundation for chunking documents based on tokens. This enables precise splitting of content, which is essential for preparing data for large language models and optimizing retrieval strategies. -- **Microsoft.Extensions.AI:** This set of libraries powers enrichment transformations using large language models. 
It enables features like summarization, sentiment analysis, keyword extraction, and embedding generation, making it easy to enhance your data with intelligent insights. -- **Microsoft.Extensions.VectorData:** This set of libraries offers a consistent interface for storing processed chunks in a wide variety of vector stores, including Qdrant, Azure SQL, CosmosDB, MongoDB, ElasticSearch, and many more. This ensures your data pipelines are ready for production and can scale across different storage backends. - -In addition to familiar patterns and tools, these abstractions build on already extensible components. Plug-in capability and interoperability are paramount, so as the rest of the .NET AI ecosystem grows, the capabilities of the data ingestion components grow as well. This approach empowers developers to easily integrate new providers, enrichments, and storage options, keeping their pipelines future-ready and adaptable to evolving AI scenarios. - ## Data ingestion building blocks -The [Microsoft.Extensions.DataIngestion](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) library is built around several key components that work together to create a complete data processing pipeline. This section explores each component and how they fit together. +The [Microsoft.Extensions.DataIngestion](medi-library.md) library is built around several key components that work together to create a complete data processing pipeline. This section explores each component and how they fit together. ### Documents and document readers diff --git a/docs/ai/conceptual/medi-library.md b/docs/ai/conceptual/medi-library.md new file mode 100644 index 0000000000000..4bc51121934e5 --- /dev/null +++ b/docs/ai/conceptual/medi-library.md @@ -0,0 +1,38 @@ +--- +title: "The Microsoft.Extensions.DataIngestion library" +description: "Learn about the Microsoft.Extensions.DataIngestion library, which provides foundational .NET building blocks for data ingestion."
+ms.topic: concept-article +ms.date: 04/15/2026 +--- + +# The Microsoft.Extensions.DataIngestion library + +The [📦 Microsoft.Extensions.DataIngestion package](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) provides foundational .NET building blocks for data ingestion. It enables developers to read, process, and prepare documents for AI and machine learning workflows, especially retrieval-augmented generation (RAG) scenarios. + +With these building blocks, you can create robust, flexible, and intelligent data ingestion pipelines tailored for your application needs: + +- **Unified document representation:** Represent any file type (for example, PDF, image, or Microsoft Word) in a consistent format that works well with large language models. +- **Flexible data ingestion:** Read documents from both cloud services and local sources using multiple built-in readers, making it easy to bring in data from wherever it lives. +- **Built-in AI enhancements:** Automatically enrich content with summaries, sentiment analysis, keyword extraction, and classification, preparing your data for intelligent workflows. +- **Customizable chunking strategies:** Split documents into chunks using token-based, section-based, or semantic-aware approaches, so you can optimize for your retrieval and analysis needs. +- **Production-ready storage:** Store processed chunks in popular vector databases and document stores, with support for embedding generation, making your pipelines ready for real-world scenarios. +- **End-to-end pipeline composition:** Chain together readers, processors, chunkers, and writers with the API, reducing boilerplate and making it easy to build, customize, and extend complete workflows. +- **Performance and scalability:** Designed for scalable data processing, these components can handle large volumes of data efficiently, making them suitable for enterprise-grade applications. + +All of these components are open and extensible by design.
You can add custom logic and new connectors, and extend the system to support emerging AI scenarios. By standardizing how documents are represented, processed, and stored, .NET developers can build reliable, scalable, and maintainable data pipelines without "reinventing the wheel" for every project. + +## Built on stable foundations + +![Data Ingestion Architecture Diagram](../media/data-ingestion/dataingestion.png) + +These data ingestion building blocks are built on top of proven and extensible components in the .NET ecosystem, ensuring reliability, interoperability, and seamless integration with existing AI workflows: + +- **Microsoft.ML.Tokenizers:** Tokenizers provide the foundation for chunking documents based on tokens. This enables precise splitting of content, which is essential for preparing data for large language models and optimizing retrieval strategies. +- **Microsoft.Extensions.AI:** This set of libraries powers enrichment transformations using large language models. It enables features like summarization, sentiment analysis, keyword extraction, and embedding generation, making it easy to enhance your data with intelligent insights. +- **Microsoft.Extensions.VectorData:** This set of libraries offers a consistent interface for storing processed chunks in a wide variety of vector stores, including Qdrant, Azure SQL, CosmosDB, MongoDB, ElasticSearch, and many more. This ensures your data pipelines are ready for production and can scale across different storage backends. + +In addition to familiar patterns and tools, these abstractions build on already extensible components. Plug-in capability and interoperability are paramount, so as the rest of the .NET AI ecosystem grows, the capabilities of the data ingestion components grow as well. This approach empowers developers to easily integrate new providers, enrichments, and storage options, keeping their pipelines future-ready and adaptable to evolving AI scenarios. 
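To make the "chain together readers, processors, chunkers, and writers" idea concrete, the following sketch shows one shape such a pipeline could take. Every type and method name here is an illustrative placeholder, not the library's actual API surface; consult the package documentation for the real types.

```csharp
// Illustrative sketch only — these names are placeholders, not the real API.
// The flow mirrors the building blocks described above:
// reader -> enrichment processor -> chunker -> vector store writer.
var pipeline = new IngestionPipeline(            // hypothetical composition type
    reader: new PdfDocumentReader(),             // read source files into documents
    processors: new[] { new SummaryEnricher() }, // optional LLM-based enrichment
    chunker: new TokenChunker(maxTokens: 512),   // token-based chunking strategy
    writer: new VectorStoreWriter(collection));  // persist chunks and embeddings

await pipeline.RunAsync("docs/manuals");         // process a whole directory
```

The point of the sketch is the composition model: each stage is a swappable component, so you can change the reader, the chunking strategy, or the destination store without rewriting the rest of the pipeline.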
+ +## See also + +- [Data ingestion](data-ingestion.md) diff --git a/docs/ai/conceptual/mevd-library.md b/docs/ai/conceptual/mevd-library.md new file mode 100644 index 0000000000000..97d4c6347419c --- /dev/null +++ b/docs/ai/conceptual/mevd-library.md @@ -0,0 +1,30 @@ +--- +title: "The Microsoft.Extensions.VectorData library" +description: "Learn how to use Microsoft.Extensions.VectorData to build semantic search features." +ms.topic: concept-article +ms.date: 04/15/2026 +ai-usage: ai-assisted +--- + +# The Microsoft.Extensions.VectorData library + +The [📦 Microsoft.Extensions.VectorData.Abstractions](https://www.nuget.org/packages/Microsoft.Extensions.VectorData.Abstractions) package provides a unified layer of abstractions for interacting with vector stores in .NET. These abstractions let you write simple, high-level code against a single API, and swap out the underlying vector store with minimal changes to your application. + +The library provides the following key capabilities: + +- **Seamless .NET type mapping**: Map your .NET type directly to the database, similar to an object/relational mapper. +- **Unified data model**: Define your data model once using .NET attributes and use it across any supported vector store. +- **CRUD operations**: Create, read, update, and delete records in a vector store. +- **Vector and hybrid search**: Query records by semantic similarity using vector search, or combine vector and text search for hybrid search. +- **Embedding generation management**: Configure your embedding generator once and let the library transparently handle generation. +- **Collection management**: Create, list, and delete collections (tables or indices) in a vector store. + +Microsoft.Extensions.VectorData is also the building block for additional, higher-level layers that need to interact with vector databases, for example, the [Microsoft.Extensions.DataIngestion](../conceptual/data-ingestion.md) library. 
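As a rough sketch of the attribute-based data model and search pattern described above (attribute and method names follow a recent version of the library and might differ in yours; the surrounding collection setup is omitted):

```csharp
using Microsoft.Extensions.VectorData;

// A record type mapped to a vector store collection via attributes.
public sealed class DocChunk
{
    [VectorStoreKey]
    public Guid Id { get; set; }

    [VectorStoreData]
    public string Text { get; set; } = "";

    // When an embedding generator is configured on the collection,
    // the library can produce the vector from this source value transparently.
    [VectorStoreVector(1536)]
    public string Embedding => Text;
}

// Given a collection of DocChunk records from any supported provider:
// await collection.EnsureCollectionExistsAsync();
// await collection.UpsertAsync(new DocChunk { Id = Guid.NewGuid(), Text = "..." });
// await foreach (var match in collection.SearchAsync("natural-language query", top: 3))
//     Console.WriteLine($"{match.Score}: {match.Record.Text}");
```

Because the record definition is provider-agnostic, the same model and search code can target a different vector store by swapping the collection implementation.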
+ +## Microsoft.Extensions.VectorData and Entity Framework Core + +If you're already using [Entity Framework Core](/ef/core) to access your database, it's likely that your database provider already supports vector search, and LINQ queries can be used to express such searches. In such applications, Microsoft.Extensions.VectorData isn't necessarily needed. However, most dedicated vector databases aren't supported by EF Core, and Microsoft.Extensions.VectorData can provide a good experience for working with those. In addition, you might also find yourself using both EF and Microsoft.Extensions.VectorData in the same application, for example, when using an additional layer such as [Microsoft.Extensions.DataIngestion](../conceptual/medi-library.md). + +## See also + +- [Vector databases for .NET AI apps](../vector-stores/overview.md) diff --git a/docs/ai/dotnet-ai-ecosystem.md b/docs/ai/dotnet-ai-ecosystem.md index d202137bb5bb2..0d7105d9c8ff4 100644 --- a/docs/ai/dotnet-ai-ecosystem.md +++ b/docs/ai/dotnet-ai-ecosystem.md @@ -9,11 +9,35 @@ ms.topic: overview The .NET ecosystem provides many powerful tools, libraries, and services to develop AI applications. .NET supports both cloud and local AI model connections, many different SDKs for various AI and vector database services, and other tools to help you build intelligent apps of varying scope and complexity. -> [!TIP] -> Not sure which tool to use? See [Choose the right .NET AI tool](conceptual/choose-ai-tool.md) for guidance on when to use each component. +## Decide which tool to use -> [!IMPORTANT] -> Not all of the SDKs and services presented in this article are maintained by Microsoft. When considering an SDK, make sure to evaluate its quality, licensing, support, and compatibility to ensure they meet your requirements. +The following table recommends which technology to use based on different objectives. 
+ +| Objective | Technology to use | +|-------------------------------|-------------------| +| **Add AI behavior to an app** | Use [Microsoft.Extensions.AI library (MEAI)](#microsoftextensionsai-libraries). Add [Evaluations](#evaluation-libraries) once you have something worth measuring. | +| **Work with your own data** | Use [Microsoft.Extensions.DataIngestion (MEDI)](#microsoftextensionsdataingestion-medi) to read, chunk, or enrich content. Then use [Microsoft.Extensions.VectorData (MEVD)](#microsoftextensionsvectordata-mevd) to store and retrieve vectors. | +| **Share or consume capabilities across AI clients** | Use an [MCP Server](#mcp-server) to publish capabilities, or an [MCP Client](#mcp-client) to consume them. | +| **Build an agentic system** | Use [Copilot SDK](#copilot-sdk) for a ready-made harness, or [Microsoft Agent Framework](#microsoft-agent-framework-maf) for multi-step goal pursuit, routing, or handoffs. | +| **Choose a hosting or execution model** | Use [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) for distributed multi-service systems. | +| **Improve the developer workflow** | Use [AI Toolkit](#ai-toolkit). | + +Most production AI applications combine several components: + +- **Chat or summarization app**: MEAI + Evaluations +- **RAG application**: MEDI + MEVD + MEAI +- **Multi-agent system**: MEAI + MAF + Aspire +- **Tool interoperability**: MEAI + MCP Server + MCP Client +- **Enterprise cloud app**: MEAI + Azure AI Foundry + Aspire +- **Local-first app**: MEAI + Foundry Local + AI Toolkit (development) + +Use these practical rules to choose quickly: + +- Start with `Microsoft.Extensions.AI` for most app-level AI features. +- Add `Microsoft.Extensions.DataIngestion` and `Microsoft.Extensions.VectorData` when grounding responses with your own data. +- Use MCP when capabilities must be shared across process or product boundaries. 
+- Move to Agent Framework when one-step prompts become multi-step workflows. +- Add evaluations once behavior is useful enough to measure and protect from regressions. ## Microsoft.Extensions.AI libraries @@ -21,75 +45,96 @@ The .NET ecosystem provides many powerful tools, libraries, and services to deve `Microsoft.Extensions.AI` provides abstractions that can be implemented by various services, all adhering to the same core concepts. This library is not intended to provide APIs tailored to any specific provider's services. The goal of `Microsoft.Extensions.AI` is to act as a unifying layer within the .NET ecosystem, enabling developers to choose their preferred frameworks and libraries while ensuring seamless integration and collaboration across the ecosystem. -## Other AI-related Microsoft.Extensions libraries +MEAI gives .NET developers a clean abstraction for model interaction. It fits naturally into dependency injection, configuration, and existing app architectures and is the usual first layer of an AI-enabled .NET application. + +MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, use [MAF](#microsoft-agent-framework-maf) instead. + +For more information, see [Microsoft.Extensions.AI overview](microsoft-extensions-ai.md). + +## Evaluation libraries + +The [Microsoft.Extensions.AI.Evaluation library](evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. AI behavior changes readily as prompts, models, and tools evolve. The evaluations library gives teams a repeatable way to compare outputs and catch regressions. + +For more information, see [Microsoft.Extensions.AI.Evaluation libraries](evaluation/libraries.md).
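As a rough illustration of what such a quality check can look like in code, the following sketch gets a response through the MEAI `IChatClient` abstraction and then scores it with an LLM-based evaluator. It assumes the `Microsoft.Extensions.AI` and `Microsoft.Extensions.AI.Evaluation.Quality` packages; evaluator names and exact signatures vary by version, so treat this as a shape rather than a definitive implementation.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// chatClient is any IChatClient implementation (OpenAI, Azure, local model, ...).
async Task EvaluateOnceAsync(IChatClient chatClient)
{
    var messages = new List<ChatMessage>
    {
        new(ChatRole.User, "Summarize the benefits of dependency injection.")
    };

    ChatResponse response = await chatClient.GetResponseAsync(messages);

    // An LLM-based quality evaluator scores the response for coherence.
    IEvaluator evaluator = new CoherenceEvaluator();
    EvaluationResult result = await evaluator.EvaluateAsync(
        messages, response, new ChatConfiguration(chatClient));

    foreach (var metric in result.Metrics.Values)
    {
        Console.WriteLine($"{metric.Name}: {metric.Interpretation?.Rating}");
    }
}
```

Runs like this are typically wrapped in unit tests so that a prompt or model change that degrades the scores fails the build.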
+ +## Microsoft.Extensions.DataIngestion (MEDI) + +[Microsoft.Extensions.DataIngestion](conceptual/medi-library.md) is the ingestion and preparation layer for AI-ready data in .NET. + +Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. MEDI prepares and shapes the data that MEVD or another store later queries. + +For more information, see [Data ingestion for AI apps](conceptual/data-ingestion.md). + +## Microsoft.Extensions.VectorData (MEVD) + +[Microsoft.Extensions.VectorData](conceptual/mevd-library.md) is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. + +MEVD gives .NET applications a consistent way to work with vector stores and helps separate vector storage and retrieval concerns from model invocation concerns. + +For more information, see [Vector stores overview](vector-stores/overview.md). + +## MCP Server + +An MCP Server exposes capabilities such as tools, resources, or prompts over the Model Context Protocol so other assistants, IDEs, and agents can discover and use them. + +An MCP Server turns app capabilities into reusable AI-facing endpoints. It reduces duplicated tool integration work across assistants and creates a cleaner boundary between capability providers and capability consumers. + +An MCP Server is about *publishing* capabilities. If the capability is used only inside one app, ordinary in-process function calling is simpler. + +## MCP Client + +An MCP Client is the consumer side of the protocol: it connects to MCP servers and brings their exposed capabilities into an app, assistant, or agent runtime. + +An MCP Client is about *consuming* capabilities, not publishing them. If everything the app needs is local and in-process, ordinary function or tool calling is still simpler. + +For more information, see [Get started with MCP](get-started-mcp.md).
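For a sense of what publishing a capability looks like, here's a minimal stdio server sketch using the MCP C# SDK (the `ModelContextProtocol` package). The SDK's API surface is still evolving, so treat the method names as approximate; `OrderTools` and its placeholder logic are purely illustrative.

```csharp
using System.ComponentModel;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;

var builder = Host.CreateApplicationBuilder(args);
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()     // communicate over stdin/stdout
    .WithToolsFromAssembly();       // discover [McpServerTool] methods below
await builder.Build().RunAsync();

[McpServerToolType]
public static class OrderTools
{
    [McpServerTool, Description("Looks up the status of an order.")]
    public static string GetOrderStatus(string orderId) =>
        $"Order {orderId}: shipped"; // placeholder business logic
}
```

Any MCP-capable client (an IDE assistant, an agent runtime, or your own app using the client side of the SDK) can then discover and invoke `GetOrderStatus` without a bespoke integration.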
-The [📦 Microsoft.Extensions.VectorData.Abstractions package](https://www.nuget.org/packages/Microsoft.Extensions.VectorData.Abstractions/) provides a unified layer of abstractions for interacting with a variety of vector stores. It lets you store processed chunks in vector stores such as Qdrant, Azure SQL, CosmosDB, MongoDB, ElasticSearch, and many more. For more information, see [Build a .NET AI vector search app](vector-stores/how-to/build-vector-search-app.md). +## Microsoft Agent Framework (MAF) -The [📦 Microsoft.Extensions.DataIngestion package](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) provides foundational .NET building blocks for data ingestion. It enables developers to read, process, and prepare documents for AI and machine learning workflows, especially retrieval-augmented generation (RAG) scenarios. For more information, see [Data ingestion](conceptual/data-ingestion.md). +Microsoft Agent Framework is the orchestration layer for systems that are truly agentic: they pursue a goal across multiple steps, make decisions along the way, use tools, and might coordinate multiple agents. -## Microsoft Agent Framework +Not every AI feature needs MAF. If a direct MEAI call or a simple tool-calling loop solves the problem, use a simpler approach. MAF matters when orchestration complexity is the real challenge, not just model access. -If you want to use low-level services, such as and , you can reference the `Microsoft.Extensions.AI.Abstractions` package directly from your app. However, if you want to build agentic AI applications with higher-level orchestration capabilities, you should use [Microsoft Agent Framework](/agent-framework/overview/agent-framework-overview). Agent Framework builds on the `Microsoft.Extensions.AI.Abstractions` package and provides concrete implementations of for different services, including OpenAI, Azure OpenAI, Microsoft Foundry, and more. 
+For more information, see [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview). -This framework is the recommended approach for .NET apps that need to build agentic AI systems with advanced orchestration, multi-agent collaboration, and enterprise-grade security and observability. +## AI Toolkit -Agent Framework is a production-ready, open-source framework that brings together the best capabilities of Semantic Kernel and Microsoft Research's AutoGen. Agent Framework provides: +AI Toolkit is a VS Code extension pack for AI development that speeds up experimentation with models, prompts, agents, and evaluations. -- **Multi-agent orchestration**: Support for sequential, concurrent, group chat, handoff, and *magentic* (where a lead agent directs other agents) orchestration patterns. -- **Cloud and provider flexibility**: Cloud-agnostic (containers, on-premises, or multi-cloud) and provider-agnostic (for example, OpenAI or Foundry) using plugin and connector models. -- **Enterprise-grade features**: Built-in observability (OpenTelemetry), Microsoft Entra security integration, and responsible AI features including prompt injection protection and task adherence monitoring. -- **Standards-based interoperability**: Integration with open standards like Agent-to-Agent (A2A) protocol and Model Context Protocol (MCP) for agent discovery and tool interaction. +AI Toolkit isn't the core runtime architecture for the production app. It complements MEAI, Evaluations, and Foundry Local. -For more information, see the [Microsoft Agent Framework documentation](/agent-framework/overview/agent-framework-overview). +For more information, see [AI Toolkit for Visual Studio Code](https://code.visualstudio.com/docs/intelligentapps/overview). -## Semantic Kernel for .NET +## Copilot SDK -[Semantic Kernel](/semantic-kernel/overview/) is an open-source library that enables AI integration and orchestration capabilities in your .NET apps. 
However, for new applications that require agentic capabilities, multi-agent orchestration, or enterprise-grade observability and security, the recommended framework is [Microsoft Agent Framework](/agent-framework/overview/agent-framework-overview). +Copilot SDK is a prebuilt agent harness and runtime that brings tools, context, and automatic tool calling out of the box. -## .NET SDKs for building AI apps +Copilot SDK is more opinionated and prewired than MEAI. If your goal is a fully custom app architecture, direct MEAI or MAF composition can be a better fit. -Many different SDKs are available to build .NET apps with AI capabilities depending on the target platform or AI model. OpenAI models offer powerful generative AI capabilities, while other Foundry tools provide intelligent solutions for a variety of specific scenarios. +For more information, see the [Copilot SDK repository](https://github.com/github/copilot-sdk). -### .NET SDKs for OpenAI models +## Azure AI Foundry -| NuGet package | Supported models | Maintainer or vendor | Documentation | -|---------------|------------------|----------------------|--------------| -| [Microsoft.Agents.AI.OpenAI](https://www.nuget.org/packages/Microsoft.Agents.AI.OpenAI/) | [OpenAI models](https://platform.openai.com/docs/models/overview)
[Azure OpenAI supported models](/azure/ai-services/openai/concepts/models) | [Microsoft Agent Framework](https://github.com/microsoft/agent-framework) (Microsoft) | [Agent Framework documentation](/agent-framework/overview/agent-framework-overview) | -| [Azure OpenAI SDK](https://www.nuget.org/packages/Azure.AI.OpenAI/) | [Azure OpenAI supported models](/azure/ai-services/openai/concepts/models) | [Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net) (Microsoft) | [Azure OpenAI services documentation](/azure/ai-services/openai/) | -| [OpenAI SDK](https://www.nuget.org/packages/OpenAI/) | [OpenAI supported models](https://platform.openai.com/docs/models) | [OpenAI SDK for .NET](https://github.com/openai/openai-dotnet) (OpenAI) | [OpenAI services documentation](https://platform.openai.com/docs/overview) | +Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions, with two primary functions: model management and hosted agents. -### .NET SDKs for Foundry Tools +Azure AI Foundry isn't the app-facing programming abstraction; MEAI still plays that role in .NET code. Azure AI Foundry becomes the right lead when the real question is *where* the model runs and under what controls. -Azure offers many other AI services, such as Foundry Tools, to build specific application capabilities and workflows. Most of these services provide a .NET SDK to integrate their functionality into custom apps. Some of the most commonly used services are shown in the following table. For a complete list of available services and learning resources, see the [Foundry Tools](/azure/ai-services/what-are-ai-services) documentation. +For more information, see the [Azure AI Foundry documentation](/azure/ai-foundry/). -| Service | Description | -|-----------------------------------|----------------------------------------------| -| [Azure AI Search](/azure/search/) | Bring AI-powered cloud search to your mobile and web apps. 
| -| [Content Safety in Foundry Control Plane](/azure/ai-services/content-safety/) | Detect unwanted or offensive content. | -| [Azure Document Intelligence in Foundry Tools](/azure/ai-services/document-intelligence/) | Turn documents into intelligent data-driven solutions. | -| [Azure Language in Foundry Tools](/azure/ai-services/language-service/) | Build apps with industry-leading natural language understanding capabilities. | -| [Azure Speech in Foundry Tools](/azure/ai-services/speech-service/) | Speech to text, text to speech, translation, and speaker recognition. | -| [Azure Translator in Foundry Tools](/azure/ai-services/translator/) | AI-powered translation technology with support for more than 100 languages and dialects. | -| [Azure Vision in Foundry Tools](/azure/ai-services/computer-vision/) | Analyze content in images and videos. | +## Foundry Local -## Develop with local AI models +Foundry Local is a local development and local-first deployment option for teams that need to keep AI workloads close to the machine or environment. -.NET apps can also connect to local AI models for many different development scenarios. [Microsoft Agent Framework](https://github.com/microsoft/agent-framework) is the recommended tool to connect to local models using .NET. This framework can connect to many different models hosted across a variety of platforms and abstracts away lower-level implementation details. +Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud isn't a clean one-to-one move, so expect differences in features, hosting model, and operations. -For example, you can use [Ollama](https://ollama.com/) to [connect to local AI models with .NET](quickstarts/chat-local-model.md), including several small language models (SLMs) developed by Microsoft: +For more information, see the [Foundry Local documentation](/azure/foundry-local/). 
-| Model | Description | -|---------------------|-----------------------------------------------------------| -| [phi3 models][phi3] | A family of powerful SLMs with groundbreaking performance at low cost and low latency. | -| [orca models][orca] | Research models in tasks such as reasoning over user-provided data, reading comprehension, math problem solving, and text summarization. | +## Aspire -> [!NOTE] -> The preceding SLMs can also be hosted on other services, such as Azure. +Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. -## Next steps +AI systems often stop being "just one app" once retrieval, tools, gateways, and worker services are involved. Aspire helps teams keep those parts understandable and observable, and its visuals make it easier to trace AI flows across services. -- [What is Microsoft Agent Framework?](/agent-framework/overview/agent-framework-overview) -- [Quickstart - Summarize text using Azure AI chat app with .NET](quickstarts/prompt-model.md) +Aspire isn't specifically the AI runtime; it's the multi-service application layer around it. It doesn't replace MEAI, MAF, or Azure AI Foundry. -[phi3]: https://azure.microsoft.com/products/phi-3 -[orca]: https://www.microsoft.com/research/project/orca/ +For more information, see the [Aspire documentation](/dotnet/aspire/). 
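To make the Foundry Local guidance above concrete, here's a minimal sketch of calling a locally served model from C#. It relies on Foundry Local's OpenAI-compatible endpoint; the port (`5273`) and model alias (`phi-3.5-mini`) are placeholder assumptions you'd replace with the values your local service actually reports.

```csharp
using System;
using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// Sketch only: assumes Foundry Local is running and serving a model through
// its OpenAI-compatible endpoint. The port and model alias are placeholders;
// substitute the values your local service reports.
var client = new ChatClient(
    model: "phi-3.5-mini",
    credential: new ApiKeyCredential("unused-for-local"),
    options: new OpenAIClientOptions { Endpoint = new Uri("http://localhost:5273/v1") });

ChatCompletion completion = client.CompleteChat("In one sentence, what is a small language model?");
Console.WriteLine(completion.Content[0].Text);
```

Because the endpoint speaks the OpenAI wire protocol, the same app code can later point at a cloud endpoint, though, as noted above, local-to-cloud isn't a clean one-to-one move.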
diff --git a/docs/ai/index.yml b/docs/ai/index.yml index 28ad720ed7113..393f59396a2fd 100644 --- a/docs/ai/index.yml +++ b/docs/ai/index.yml @@ -43,8 +43,6 @@ landingContent: linkLists: - linkListType: concept links: - - text: Choose the right .NET AI tool - url: conceptual/choose-ai-tool.md - text: How generative AI and LLMs work url: conceptual/how-genai-and-llms-work.md - text: Build agents to automate workflows diff --git a/docs/ai/overview.md b/docs/ai/overview.md index 8e8c1cf53025f..5482a1af78373 100644 --- a/docs/ai/overview.md +++ b/docs/ai/overview.md @@ -58,7 +58,7 @@ We recommend the following sequence of tutorials and articles for an introductio | Generate images | [Generate images from text](./quickstarts/text-to-image.md) | | Train your own model | [ML.NET tutorial](https://dotnet.microsoft.com/learn/ml-dotnet/get-started-tutorial/intro) | -Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](./conceptual/how-genai-and-llms-work.md). If you're not sure which .NET AI tool or SDK to use for your scenario, see [Choose the right .NET AI tool](./conceptual/choose-ai-tool.md). +Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](./conceptual/how-genai-and-llms-work.md). If you're not sure which .NET AI tool or SDK to use for your scenario, see [Decide which tool to use](./dotnet-ai-ecosystem.md#decide-which-tool-to-use). 
## Next steps diff --git a/docs/ai/toc.yml b/docs/ai/toc.yml index b92c5a3937869..2a79d60b16b14 100644 --- a/docs/ai/toc.yml +++ b/docs/ai/toc.yml @@ -15,6 +15,14 @@ items: href: ichatclient.md - name: The IEmbeddingGenerator interface href: iembeddinggenerator.md + - name: Evaluation libraries + href: evaluation/libraries.md + - name: Data ingestion libraries + href: conceptual/medi-library.md + - name: Vector data libraries + href: conceptual/mevd-library.md + - name: MCP client/server overview + href: get-started-mcp.md - name: Microsoft Agent Framework href: /agent-framework/overview/agent-framework-overview?toc=/dotnet/ai/toc.json&bc=/dotnet/ai/toc.json - name: C# SDK for MCP @@ -41,8 +49,6 @@ items: items: - name: How generative AI and LLMs work href: conceptual/how-genai-and-llms-work.md - - name: Choose the right .NET AI tool - href: conceptual/choose-ai-tool.md - name: Agents href: conceptual/agents.md - name: Tokens @@ -59,10 +65,12 @@ items: href: conceptual/zero-shot-learning.md - name: Retrieval-augmented generation href: conceptual/rag.md + - name: Responsible AI with .NET + href: evaluation/responsible-ai.md - name: Call tools items: - name: Overview - href: conceptual/ai-tools.md + href: conceptual/calling-tools.md displayName: ai tool, ai function, tools, functions - name: "Quickstart: Execute a local function" href: quickstarts/use-function-calling.md @@ -86,7 +94,7 @@ items: href: vector-stores/tutorial-vector-search.md - name: Scale Azure OpenAI with Azure Container Apps href: get-started-app-chat-scaling-with-azure-container-apps.md -- name: MCP client/server +- name: MCP client/server quickstarts items: - name: Build a minimal MCP client href: quickstarts/build-mcp-client.md @@ -133,20 +141,14 @@ items: href: /azure/ai-services/openai/how-to/use-blocklists?toc=/dotnet/ai/toc.json&bc=/dotnet/ai/toc.json - name: Use Risks & Safety monitoring href: 
/azure/ai-services/openai/how-to/risks-safety-monitor?toc=/dotnet/ai/toc.json&bc=/dotnet/ai/toc.json -- name: Evaluation +- name: Evaluation tutorials items: - - name: Responsible AI with .NET - href: evaluation/responsible-ai.md - - name: The Microsoft.Extensions.AI.Evaluation libraries - href: evaluation/libraries.md - - name: Tutorials - items: - - name: "Quickstart: Evaluate the quality of a response" - href: evaluation/evaluate-ai-response.md - - name: "Evaluate response quality with caching and reporting" - href: evaluation/evaluate-with-reporting.md - - name: "Evaluate response safety with caching and reporting" - href: evaluation/evaluate-safety.md + - name: "Quickstart: Evaluate the quality of a response" + href: evaluation/evaluate-ai-response.md + - name: "Evaluate response quality with caching and reporting" + href: evaluation/evaluate-with-reporting.md + - name: "Evaluate response safety with caching and reporting" + href: evaluation/evaluate-safety.md - name: Resources items: - name: Get started resources diff --git a/docs/ai/vector-stores/overview.md b/docs/ai/vector-stores/overview.md index 72638dc5b74d5..e14c42547df87 100644 --- a/docs/ai/vector-stores/overview.md +++ b/docs/ai/vector-stores/overview.md @@ -42,26 +42,11 @@ Other benefits of the RAG pattern include: - Overcome LLM token limits—the heavy lifting is done through the database vector search. - Reduce the costs from frequent fine-tuning on updated data. -## The Microsoft.Extensions.VectorData library +## Microsoft.Extensions.VectorData library To use vector search from .NET, you can use your regular database driver or SDK without requiring any additional library or API. For example, on SQL Server, vector search can be performed in T-SQL when using the standard .NET driver, SqlClient. However, accessing vector search in this way is often quite low-level, requires considerable ceremony to handle serialization/deserialization, and the resulting code isn't portable across databases. 
-As an alternative, the [📦 Microsoft.Extensions.VectorData.Abstractions](https://www.nuget.org/packages/Microsoft.Extensions.VectorData.Abstractions) package provides a unified layer of abstractions for interacting with vector stores in .NET. These abstractions let you write simple, high-level code against a single API, and swap out the underlying vector store with minimal changes to your application. - -The library provides the following key capabilities: - -- **Seamless .NET type mapping**: Map your .NET type directly to the database, similar to an object/relational mapper. -- **Unified data model**: Define your data model once using .NET attributes and use it across any supported vector store. -- **CRUD operations**: Create, read, update, and delete records in a vector store. -- **Vector and hybrid search**: Query records by semantic similarity using vector search, or combine vector and text search for hybrid search. -- **Embedding generation management**: Configure your embedding generator once and let the library transparently handle generation. -- **Collection management**: Create, list, and delete collections (tables or indices) in a vector store. - -Microsoft.Extensions.VectorData is also the building block for additional, higher-level layers which need to interact with vector database. For example, the [Microsoft.Extensions.DataIngestion](../conceptual/data-ingestion.md). - -### Microsoft.Extensions.VectorData and Entity Framework Core - -If you are already using Entity Framework Core to access your database, it's likely that your database provider already supports vector search, and LINQ queries can be used to express such searches; Microsoft.Extensions.VectorData isn't necessarily needed in such applications. However, most dedicated vector databases are not supported by EF Core, and Microsoft.Extensions.VectorData can provide a good experience for working with those. 
In addition, you may also find yourself using both EF and Microsoft.Extensions.VectorData in the same application, e.g. when using an additional layer such as Microsoft.Extensions.DataIngestion. +As an alternative, the [Microsoft.Extensions.VectorData.Abstractions library](../conceptual/mevd-library.md) provides a unified layer of abstractions for interacting with vector stores in .NET. For more information about the library, see [The Microsoft.Extensions.VectorData library](../conceptual/mevd-library.md). ## Key abstractions From abe7e4139490e6e7f9165dfd7d064435b212525d Mon Sep 17 00:00:00 2001 From: Genevieve Warren <24882762+gewarren@users.noreply.github.com> Date: Thu, 16 Apr 2026 12:03:59 -0700 Subject: [PATCH 08/10] fix build warnings --- docs/ai/conceptual/data-ingestion.md | 2 +- docs/ai/dotnet-ai-ecosystem.md | 16 ++++++++-------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/ai/conceptual/data-ingestion.md b/docs/ai/conceptual/data-ingestion.md index fd73f0a723e61..0d0dab8a9f38a 100644 --- a/docs/ai/conceptual/data-ingestion.md +++ b/docs/ai/conceptual/data-ingestion.md @@ -28,7 +28,7 @@ This is where data ingestion becomes critical. You need to extract text from dif ## Data ingestion building blocks -The [Microsoft.Extensions.DataIngestion](hmedi-library.md) library is built around several key components that work together to create a complete data processing pipeline. This section explores each component and how they fit together. +The [Microsoft.Extensions.DataIngestion](medi-library.md) library is built around several key components that work together to create a complete data processing pipeline. This section explores each component and how they fit together. 
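As a rough illustration of the read → chunk → enrich → write pipeline shape that the data ingestion article describes, here is a conceptual sketch. The type and member names below are illustrative assumptions, not the actual Microsoft.Extensions.DataIngestion API surface; consult the library documentation for the real building blocks.

```csharp
using System;
using System.Collections.Generic;

// Conceptual sketch of the chunking stage of an ingestion pipeline.
// These names are illustrative assumptions, NOT the actual
// Microsoft.Extensions.DataIngestion API.
public record Chunk(string SourceId, string Text);

public static class NaiveChunker
{
    // Naive fixed-size chunking. Real chunkers split on structure
    // (headings, paragraphs, tokens) and often overlap adjacent chunks
    // slightly so context isn't lost at chunk boundaries.
    public static IEnumerable<Chunk> Split(string sourceId, string text, int maxChars = 500)
    {
        for (int i = 0; i < text.Length; i += maxChars)
        {
            yield return new Chunk(sourceId, text.Substring(i, Math.Min(maxChars, text.Length - i)));
        }
    }
}
```

In a full pipeline, each chunk would then be enriched (for example, with source metadata), embedded, and written to a vector store for later retrieval.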
### Documents and document readers diff --git a/docs/ai/dotnet-ai-ecosystem.md b/docs/ai/dotnet-ai-ecosystem.md index 0d7105d9c8ff4..c6fc0f3a2621b 100644 --- a/docs/ai/dotnet-ai-ecosystem.md +++ b/docs/ai/dotnet-ai-ecosystem.md @@ -49,29 +49,29 @@ MEAI gives .NET developers a clean abstraction for model interaction. It fits na MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." When the system needs goal-directed, multi-step orchestration, use [MAF](#microsoft-agent-framework-maf) instead. -For more information, see [Microsoft.Extensions.AI overview](../microsoft-extensions-ai.md). +For more information, see [Microsoft.Extensions.AI overview](microsoft-extensions-ai.md). ## Evaluation libraries -The [The Microsoft.Extensions.AI.Evaluation library](../evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. AI behavior changes readily as prompts, models, and tools evolve. The evaluations library give teams a repeatable way to compare outputs and catch regressions. +The [The Microsoft.Extensions.AI.Evaluation library](evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. AI behavior changes readily as prompts, models, and tools evolve. The evaluations library give teams a repeatable way to compare outputs and catch regressions. -For more information, see [Microsoft.Extensions.AI.Evaluation libraries](../evaluation/libraries.md). +For more information, see [Microsoft.Extensions.AI.Evaluation libraries](evaluation/libraries.md). ## Microsoft.Extensions.DataIngestion (MEDI) -[Microsoft.Extensions.DataIngestion](medi-library.md) is the ingestion and preparation layer for AI-ready data in .NET. +[Microsoft.Extensions.DataIngestion](conceptual/medi-library.md) is the ingestion and preparation layer for AI-ready data in .NET. 
Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. MEDI prepares and shapes the data that MEVD or another store later queries. -For more information, see [Data ingestion for AI apps](data-ingestion.md). +For more information, see [Data ingestion for AI apps](conceptual/data-ingestion.md). ## Microsoft.Extensions.VectorData (MEVD) -[Microsoft.Extensions.VectorData](mevd-library.md) is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. +[Microsoft.Extensions.VectorData](conceptual/mevd-library.md) is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. MEVD gives .NET applications a consistent way to work with vector stores and helps separate vector storage and retrieval concerns from model invocation concerns. -For more information, see [Vector stores overview](../vector-stores/overview.md). +For more information, see [Vector stores overview](vector-stores/overview.md). ## MCP Server @@ -87,7 +87,7 @@ An MCP Client is the consumer side of the protocol: it connects to MCP servers a An MCP Client is about *consuming* capabilities, not publishing them. If everything the app needs is local and in-process, ordinary function or tool calling is still simpler. -For more information, see [Get started with MCP](../get-started-mcp.md). +For more information, see [Get started with MCP](get-started-mcp.md). 
## Microsoft Agent Framework (MAF) From 5a50b64577e5b28a098e540a5a507e5c3bee72cc Mon Sep 17 00:00:00 2001 From: Genevieve Warren <24882762+gewarren@users.noreply.github.com> Date: Thu, 16 Apr 2026 13:23:12 -0700 Subject: [PATCH 09/10] few more see also links --- docs/ai/evaluation/libraries.md | 1 + docs/ai/get-started-mcp.md | 6 ++++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/docs/ai/evaluation/libraries.md b/docs/ai/evaluation/libraries.md index 5851c9939b9e6..d7f717ef7524b 100644 --- a/docs/ai/evaluation/libraries.md +++ b/docs/ai/evaluation/libraries.md @@ -99,4 +99,5 @@ For a more comprehensive tour of the functionality and APIs in the Microsoft.Ext ## See also +- [Quickstart: Evaluate response quality](evaluate-ai-response.md) - [Evaluation of generative AI apps (Foundry)](/azure/ai-studio/concepts/evaluation-approach-gen-ai) diff --git a/docs/ai/get-started-mcp.md b/docs/ai/get-started-mcp.md index 0efd1fb00ef1c..963c9725c9d16 100644 --- a/docs/ai/get-started-mcp.md +++ b/docs/ai/get-started-mcp.md @@ -87,8 +87,10 @@ Get started with the following development tools: ## See also +- [Create a minimal MCP client using .NET](quickstarts/build-mcp-client.md) +- [Create a minimal MCP server using C# and publish to NuGet](quickstarts/build-mcp-server.md) - [MCP C# SDK documentation](https://modelcontextprotocol.github.io/csharp-sdk/index.html) - [MCP C# SDK API documentation](https://modelcontextprotocol.github.io/csharp-sdk/api/ModelContextProtocol.html) - [MCP C# SDK README](https://github.com/modelcontextprotocol/csharp-sdk/blob/main/README.md) -- [Microsoft partners with Anthropic to create official C# SDK for Model Context Protocol](https://devblogs.microsoft.com/blog/microsoft-partners-with-anthropic-to-create-official-c-sdk-for-model-context-protocol) -- [Build a Model Context Protocol (MCP) server in C#](https://devblogs.microsoft.com/dotnet/build-a-model-context-protocol-mcp-server-in-csharp/) +- [Blog: Microsoft partners with 
Anthropic to create official C# SDK for Model Context Protocol](https://devblogs.microsoft.com/blog/microsoft-partners-with-anthropic-to-create-official-c-sdk-for-model-context-protocol) +- [Blog: Build a Model Context Protocol (MCP) server in C#](https://devblogs.microsoft.com/dotnet/build-a-model-context-protocol-mcp-server-in-csharp/) From 0c33463a33a4e91bce329bc0c59334712c40fcd1 Mon Sep 17 00:00:00 2001 From: Genevieve Warren <24882762+gewarren@users.noreply.github.com> Date: Thu, 16 Apr 2026 13:35:59 -0700 Subject: [PATCH 10/10] Apply suggestions from code review Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- docs/ai/conceptual/medi-library.md | 1 + docs/ai/dotnet-ai-ecosystem.md | 2 +- docs/ai/vector-stores/overview.md | 2 +- 3 files changed, 3 insertions(+), 2 deletions(-) diff --git a/docs/ai/conceptual/medi-library.md b/docs/ai/conceptual/medi-library.md index 4bc51121934e5..bb37b81d52f5a 100644 --- a/docs/ai/conceptual/medi-library.md +++ b/docs/ai/conceptual/medi-library.md @@ -3,6 +3,7 @@ title: "The Microsoft.Extensions.DataIngestion library" description: "Learn about the Microsoft.Extensions.DataIngestion library, which provides foundational .NET building blocks for data ingestion." ms.topic: concept-article ms.date: 04/15/2026 +ai-usage: ai-assisted --- # The Microsoft.Extensions.DataIngestion library diff --git a/docs/ai/dotnet-ai-ecosystem.md b/docs/ai/dotnet-ai-ecosystem.md index c6fc0f3a2621b..43c314c769cd7 100644 --- a/docs/ai/dotnet-ai-ecosystem.md +++ b/docs/ai/dotnet-ai-ecosystem.md @@ -53,7 +53,7 @@ For more information, see [Microsoft.Extensions.AI overview](microsoft-extension ## Evaluation libraries -The [The Microsoft.Extensions.AI.Evaluation library](evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. AI behavior changes readily as prompts, models, and tools evolve. 
The evaluations library give teams a repeatable way to compare outputs and catch regressions. +The [Microsoft.Extensions.AI.Evaluation library](evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. AI behavior changes readily as prompts, models, and tools evolve. The evaluations library gives teams a repeatable way to compare outputs and catch regressions. For more information, see [Microsoft.Extensions.AI.Evaluation libraries](evaluation/libraries.md). diff --git a/docs/ai/vector-stores/overview.md b/docs/ai/vector-stores/overview.md index e14c42547df87..5b33cf73959ee 100644 --- a/docs/ai/vector-stores/overview.md +++ b/docs/ai/vector-stores/overview.md @@ -46,7 +46,7 @@ Other benefits of the RAG pattern include: To use vector search from .NET, you can use your regular database driver or SDK without requiring any additional library or API. For example, on SQL Server, vector search can be performed in T-SQL when using the standard .NET driver, SqlClient. However, accessing vector search in this way is often quite low-level, requires considerable ceremony to handle serialization/deserialization, and the resulting code isn't portable across databases. -As an alternative, the [Microsoft.Extensions.VectorData.Abstractions library](../conceptual/mevd-library.md) provides a unified layer of abstractions for interacting with vector stores in .NET. For more information about the library, see [The Microsoft.Extensions.VectorData library](../conceptual/mevd-library.md). +As an alternative, the [Microsoft.Extensions.VectorData library](../conceptual/mevd-library.md) provides a unified layer of abstractions for interacting with vector stores in .NET. ## Key abstractions