diff --git a/.openpublishing.redirection.ai.json b/.openpublishing.redirection.ai.json index edf20ef4ba148..0d717d29c46bb 100644 --- a/.openpublishing.redirection.ai.json +++ b/.openpublishing.redirection.ai.json @@ -13,6 +13,10 @@ "redirect_url": "/dotnet/ai/microsoft-extensions-ai", "redirect_document_id": true }, + { + "source_path_from_root": "/docs/ai/conceptual/ai-tools.md", + "redirect_url": "/dotnet/ai/conceptual/calling-tools" + }, { "source_path_from_root": "/docs/ai/conceptual/evaluation-libraries.md", "redirect_url": "/dotnet/ai/evaluation/libraries", diff --git a/docs/ai/conceptual/ai-tools.md b/docs/ai/conceptual/calling-tools.md similarity index 100% rename from docs/ai/conceptual/ai-tools.md rename to docs/ai/conceptual/calling-tools.md diff --git a/docs/ai/conceptual/data-ingestion.md b/docs/ai/conceptual/data-ingestion.md index 27281dcf0fcef..0d0dab8a9f38a 100644 --- a/docs/ai/conceptual/data-ingestion.md +++ b/docs/ai/conceptual/data-ingestion.md @@ -16,7 +16,7 @@ Data ingestion is the process of collecting, reading, and preparing data from di - **Transform** the data by cleaning, chunking, enriching, or converting formats. - **Load** the data into a destination like a database, vector store, or AI model for retrieval and analysis. -For AI and machine learning scenarios, especially Retrieval-Augmented Generation (RAG), data ingestion is not just about converting data from one format to another. It is about making data usable for intelligent applications. This means representing documents in a way that preserves their structure and meaning, splitting them into manageable chunks, enriching them with metadata or embeddings, and storing them so they can be retrieved quickly and accurately. +For AI and machine learning scenarios, especially retrieval-augmented generation (RAG), data ingestion is not just about converting data from one format to another. It is about making data usable for intelligent applications. 
This means representing documents in a way that preserves their structure and meaning, splitting them into manageable chunks, enriching them with metadata or embeddings, and storing them so they can be retrieved quickly and accurately. ## Why data ingestion matters for AI applications @@ -26,37 +26,9 @@ Your chatbot needs to understand and search through thousands of documents to pr This is where data ingestion becomes critical. You need to extract text from different file formats, break large documents into smaller chunks that fit within AI model limits, enrich the content with metadata, generate embeddings for semantic search, and store everything in a way that enables fast retrieval. Each step requires careful consideration of how to preserve the original meaning and context. -## The Microsoft.Extensions.DataIngestion library - -The [📦 Microsoft.Extensions.DataIngestion package](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) provides foundational .NET building blocks for data ingestion. It enables developers to read, process, and prepare documents for AI and machine learning workflows, especially Retrieval-Augmented Generation (RAG) scenarios. - -With these building blocks, you can create robust, flexible, and intelligent data ingestion pipelines tailored for your application needs: - -- **Unified document representation:** Represent any file type (for example, PDF, Image, or Microsoft Word) in a consistent format that works well with large language models. -- **Flexible data ingestion:** Read documents from both cloud services and local sources using multiple built-in readers, making it easy to bring in data from wherever it lives. -- **Built-in AI enhancements:** Automatically enrich content with summaries, sentiment analysis, keyword extraction, and classification, preparing your data for intelligent workflows. 
-- **Customizable chunking strategies:** Split documents into chunks using token-based, section-based, or semantic-aware approaches, so you can optimize for your retrieval and analysis needs. -- **Production-ready storage:** Store processed chunks in popular vector databases and document stores, with support for embedding generation, making your pipelines ready for real-world scenarios. -- **End-to-end pipeline composition:** Chain together readers, processors, chunkers, and writers with the API, reducing boilerplate and making it easy to build, customize, and extend complete workflows. -- **Performance and scalability:** Designed for scalable data processing, these components can handle large volumes of data efficiently, making them suitable for enterprise-grade applications. - -All of these components are open and extensible by design. You can add custom logic and new connectors, and extend the system to support emerging AI scenarios. By standardizing how documents are represented, processed, and stored, .NET developers can build reliable, scalable, and maintainable data pipelines without "reinventing the wheel" for every project. - -### Built on stable foundations - -![Data Ingestion Architecture Diagram](../media/data-ingestion/dataingestion.png) - -These data ingestion building blocks are built on top of proven and extensible components in the .NET ecosystem, ensuring reliability, interoperability, and seamless integration with existing AI workflows: - -- **Microsoft.ML.Tokenizers:** Tokenizers provide the foundation for chunking documents based on tokens. This enables precise splitting of content, which is essential for preparing data for large language models and optimizing retrieval strategies. -- **Microsoft.Extensions.AI:** This set of libraries powers enrichment transformations using large language models. 
It enables features like summarization, sentiment analysis, keyword extraction, and embedding generation, making it easy to enhance your data with intelligent insights. -- **Microsoft.Extensions.VectorData:** This set of libraries offers a consistent interface for storing processed chunks in a wide variety of vector stores, including Qdrant, Azure SQL, CosmosDB, MongoDB, ElasticSearch, and many more. This ensures your data pipelines are ready for production and can scale across different storage backends. - -In addition to familiar patterns and tools, these abstractions build on already extensible components. Plug-in capability and interoperability are paramount, so as the rest of the .NET AI ecosystem grows, the capabilities of the data ingestion components grow as well. This approach empowers developers to easily integrate new providers, enrichments, and storage options, keeping their pipelines future-ready and adaptable to evolving AI scenarios. - ## Data ingestion building blocks -The [Microsoft.Extensions.DataIngestion](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) library is built around several key components that work together to create a complete data processing pipeline. This section explores each component and how they fit together. +The [Microsoft.Extensions.DataIngestion](medi-library.md) library is built around several key components that work together to create a complete data processing pipeline. This section explores each component and how they fit together. ### Documents and document readers diff --git a/docs/ai/conceptual/medi-library.md b/docs/ai/conceptual/medi-library.md new file mode 100644 index 0000000000000..bb37b81d52f5a --- /dev/null +++ b/docs/ai/conceptual/medi-library.md @@ -0,0 +1,39 @@ +--- +title: "The Microsoft.Extensions.DataIngestion library" +description: "Learn about the Microsoft.Extensions.DataIngestion library, which provides foundational .NET building blocks for data ingestion." 
+ms.topic: concept-article
+ms.date: 04/15/2026
+ai-usage: ai-assisted
+---
+
+# The Microsoft.Extensions.DataIngestion library
+
+The [📦 Microsoft.Extensions.DataIngestion package](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) provides foundational .NET building blocks for data ingestion. It enables developers to read, process, and prepare documents for AI and machine learning workflows, especially retrieval-augmented generation (RAG) scenarios.
+
+With these building blocks, you can create robust, flexible, and intelligent data ingestion pipelines tailored to your application's needs:
+
+- **Unified document representation:** Represent any file type (for example, PDF, image, or Microsoft Word) in a consistent format that works well with large language models.
+- **Flexible data ingestion:** Read documents from both cloud services and local sources using multiple built-in readers, making it easy to bring in data from wherever it lives.
+- **Built-in AI enhancements:** Automatically enrich content with summaries, sentiment analysis, keyword extraction, and classification, preparing your data for intelligent workflows.
+- **Customizable chunking strategies:** Split documents into chunks using token-based, section-based, or semantic-aware approaches, so you can optimize for your retrieval and analysis needs.
+- **Production-ready storage:** Store processed chunks in popular vector databases and document stores, with support for embedding generation, making your pipelines ready for real-world scenarios.
+- **End-to-end pipeline composition:** Chain together readers, processors, chunkers, and writers with the API, reducing boilerplate and making it easy to build, customize, and extend complete workflows.
+- **Performance and scalability:** Designed for scalable data processing, these components can handle large volumes of data efficiently, making them suitable for enterprise-grade applications.
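To make the chunking idea above concrete, the following is a minimal sketch of fixed-size splitting with overlap in plain C#. It's illustrative only: the `NaiveChunker` type and its members are not part of the Microsoft.Extensions.DataIngestion API, which provides token-, section-, and semantic-aware strategies rather than this naive character count.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a character-based splitter with overlap between chunks.
// Overlap preserves context that would otherwise be cut at chunk boundaries.
// Assumes maxChars > overlap so the loop always advances.
static class NaiveChunker
{
    public static List<string> Split(string text, int maxChars, int overlap)
    {
        var chunks = new List<string>();
        for (int start = 0; start < text.Length; start += maxChars - overlap)
        {
            int length = Math.Min(maxChars, text.Length - start);
            chunks.Add(text.Substring(start, length));
            if (start + length >= text.Length)
            {
                break; // reached the end of the text
            }
        }
        return chunks;
    }
}
```

A real token-based strategy would count tokens (for example, via Microsoft.ML.Tokenizers) instead of characters, so chunks fit model limits exactly.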
+ +All of these components are open and extensible by design. You can add custom logic and new connectors, and extend the system to support emerging AI scenarios. By standardizing how documents are represented, processed, and stored, .NET developers can build reliable, scalable, and maintainable data pipelines without "reinventing the wheel" for every project. + +## Built on stable foundations + +![Data Ingestion Architecture Diagram](../media/data-ingestion/dataingestion.png) + +These data ingestion building blocks are built on top of proven and extensible components in the .NET ecosystem, ensuring reliability, interoperability, and seamless integration with existing AI workflows: + +- **Microsoft.ML.Tokenizers:** Tokenizers provide the foundation for chunking documents based on tokens. This enables precise splitting of content, which is essential for preparing data for large language models and optimizing retrieval strategies. +- **Microsoft.Extensions.AI:** This set of libraries powers enrichment transformations using large language models. It enables features like summarization, sentiment analysis, keyword extraction, and embedding generation, making it easy to enhance your data with intelligent insights. +- **Microsoft.Extensions.VectorData:** This set of libraries offers a consistent interface for storing processed chunks in a wide variety of vector stores, including Qdrant, Azure SQL, CosmosDB, MongoDB, ElasticSearch, and many more. This ensures your data pipelines are ready for production and can scale across different storage backends. + +In addition to familiar patterns and tools, these abstractions build on already extensible components. Plug-in capability and interoperability are paramount, so as the rest of the .NET AI ecosystem grows, the capabilities of the data ingestion components grow as well. 
This approach empowers developers to easily integrate new providers, enrichments, and storage options, keeping their pipelines future-ready and adaptable to evolving AI scenarios. + +## See also + +- [Data ingestion](data-ingestion.md) diff --git a/docs/ai/conceptual/mevd-library.md b/docs/ai/conceptual/mevd-library.md new file mode 100644 index 0000000000000..97d4c6347419c --- /dev/null +++ b/docs/ai/conceptual/mevd-library.md @@ -0,0 +1,30 @@ +--- +title: "The Microsoft.Extensions.VectorData library" +description: "Learn how to use Microsoft.Extensions.VectorData to build semantic search features." +ms.topic: concept-article +ms.date: 04/15/2026 +ai-usage: ai-assisted +--- + +# The Microsoft.Extensions.VectorData library + +The [📦 Microsoft.Extensions.VectorData.Abstractions](https://www.nuget.org/packages/Microsoft.Extensions.VectorData.Abstractions) package provides a unified layer of abstractions for interacting with vector stores in .NET. These abstractions let you write simple, high-level code against a single API, and swap out the underlying vector store with minimal changes to your application. + +The library provides the following key capabilities: + +- **Seamless .NET type mapping**: Map your .NET type directly to the database, similar to an object/relational mapper. +- **Unified data model**: Define your data model once using .NET attributes and use it across any supported vector store. +- **CRUD operations**: Create, read, update, and delete records in a vector store. +- **Vector and hybrid search**: Query records by semantic similarity using vector search, or combine vector and text search for hybrid search. +- **Embedding generation management**: Configure your embedding generator once and let the library transparently handle generation. +- **Collection management**: Create, list, and delete collections (tables or indices) in a vector store. 
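To ground the vector search capability listed above, here's a self-contained sketch of similarity ranking: the operation a vector store performs when you query records by semantic similarity. The types below are illustrative stand-ins, not part of the Microsoft.Extensions.VectorData abstractions, which delegate this work to the underlying store.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: cosine-similarity ranking over an in-memory "store".
// A real vector database does the same conceptually, but with approximate
// nearest-neighbor indices for scale.
static class VectorSearch
{
    public static double CosineSimilarity(float[] a, float[] b)
    {
        double dot = 0, magA = 0, magB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            magA += a[i] * a[i];
            magB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
    }

    // Rank all records by similarity to the query vector and keep the top k.
    public static List<(string Key, double Score)> TopK(
        Dictionary<string, float[]> records, float[] query, int k) =>
        records
            .Select(r => (r.Key, Score: CosineSimilarity(r.Value, query)))
            .OrderByDescending(r => r.Score)
            .Take(k)
            .ToList();
}
```

In practice the float arrays are embeddings produced by a model; Microsoft.Extensions.VectorData can manage that embedding generation for you.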
+ +Microsoft.Extensions.VectorData is also the building block for additional, higher-level layers that need to interact with vector databases, for example, the [Microsoft.Extensions.DataIngestion](../conceptual/data-ingestion.md) library. + +## Microsoft.Extensions.VectorData and Entity Framework Core + +If you're already using [Entity Framework Core](/ef/core) to access your database, it's likely that your database provider already supports vector search, and LINQ queries can be used to express such searches. In such applications, Microsoft.Extensions.VectorData isn't necessarily needed. However, most dedicated vector databases aren't supported by EF Core, and Microsoft.Extensions.VectorData can provide a good experience for working with those. In addition, you might also find yourself using both EF and Microsoft.Extensions.VectorData in the same application, for example, when using an additional layer such as [Microsoft.Extensions.DataIngestion](../conceptual/medi-library.md). + +## See also + +- [Vector databases for .NET AI apps](../vector-stores/overview.md) diff --git a/docs/ai/dotnet-ai-ecosystem.md b/docs/ai/dotnet-ai-ecosystem.md index 84e5d0d2dad37..43c314c769cd7 100644 --- a/docs/ai/dotnet-ai-ecosystem.md +++ b/docs/ai/dotnet-ai-ecosystem.md @@ -1,7 +1,7 @@ --- title: .NET + AI ecosystem tools and SDKs description: This article provides an overview of the ecosystem of SDKs and tools available to .NET developers integrating AI into their applications. -ms.date: 12/10/2025 +ms.date: 04/15/2026 ms.topic: overview --- @@ -9,8 +9,35 @@ ms.topic: overview The .NET ecosystem provides many powerful tools, libraries, and services to develop AI applications. .NET supports both cloud and local AI model connections, many different SDKs for various AI and vector database services, and other tools to help you build intelligent apps of varying scope and complexity. 
-> [!IMPORTANT] -> Not all of the SDKs and services presented in this article are maintained by Microsoft. When considering an SDK, make sure to evaluate its quality, licensing, support, and compatibility to ensure they meet your requirements. +## Decide which tool to use + +The following table recommends which technology to use based on different objectives. + +| Objective | Technology to use | +|-------------------------------|-------------------| +| **Add AI behavior to an app** | Use [Microsoft.Extensions.AI library (MEAI)](#microsoftextensionsai-libraries). Add [Evaluations](#evaluation-libraries) once you have something worth measuring. | +| **Work with your own data** | Use [Microsoft.Extensions.DataIngestion (MEDI)](#microsoftextensionsdataingestion-medi) to read, chunk, or enrich content. Then use [Microsoft.Extensions.VectorData (MEVD)](#microsoftextensionsvectordata-mevd) to store and retrieve vectors. | +| **Share or consume capabilities across AI clients** | Use an [MCP Server](#mcp-server) to publish capabilities, or an [MCP Client](#mcp-client) to consume them. | +| **Build an agentic system** | Use [Copilot SDK](#copilot-sdk) for a ready-made harness, or [Microsoft Agent Framework](#microsoft-agent-framework-maf) for multi-step goal pursuit, routing, or handoffs. | +| **Choose a hosting or execution model** | Use [Azure AI Foundry](#azure-ai-foundry) for managed cloud, [Foundry Local](#foundry-local) for local-first or privacy-sensitive execution, and [Aspire](#aspire) for distributed multi-service systems. | +| **Improve the developer workflow** | Use [AI Toolkit](#ai-toolkit). 
| + +Most production AI applications combine several components: + +- **Chat or summarization app**: MEAI + Evaluations +- **RAG application**: MEDI + MEVD + MEAI +- **Multi-agent system**: MEAI + MAF + Aspire +- **Tool interoperability**: MEAI + MCP Server + MCP Client +- **Enterprise cloud app**: MEAI + Azure AI Foundry + Aspire +- **Local-first app**: MEAI + Foundry Local + AI Toolkit (development) + +Use these practical rules to choose quickly: + +- Start with `Microsoft.Extensions.AI` for most app-level AI features. +- Add `Microsoft.Extensions.DataIngestion` and `Microsoft.Extensions.VectorData` when grounding responses with your own data. +- Use MCP when capabilities must be shared across process or product boundaries. +- Move to Agent Framework when one-step prompts become multi-step workflows. +- Add evaluations once behavior is useful enough to measure and protect from regressions. ## Microsoft.Extensions.AI libraries @@ -18,75 +45,96 @@ The .NET ecosystem provides many powerful tools, libraries, and services to deve `Microsoft.Extensions.AI` provides abstractions that can be implemented by various services, all adhering to the same core concepts. This library is not intended to provide APIs tailored to any specific provider's services. The goal of `Microsoft.Extensions.AI` is to act as a unifying layer within the .NET ecosystem, enabling developers to choose their preferred frameworks and libraries while ensuring seamless integration and collaboration across the ecosystem. -## Other AI-related Microsoft.Extensions libraries +MEAI gives .NET developers a clean abstraction for model interaction. It fits naturally into dependency injection, configuration, and existing app architectures and is the usual first layer of an AI-enabled .NET application. + +MEAI alone isn't an agent framework. A one-shot call, chat feature, or tool-call loop can be built with MEAI without becoming "agentic." 
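The unifying-layer idea can be sketched with a stand-in interface: app code depends on an abstraction, and the provider behind it can be swapped without touching the feature code. The types below only mirror the shape of this pattern — they are not the real `IChatClient` or any other type from `Microsoft.Extensions.AI`.

```csharp
using System;
using System.Threading.Tasks;

// Stand-in abstraction mirroring the *pattern* MEAI uses; not the real
// Microsoft.Extensions.AI.IChatClient type.
interface IChatService
{
    Task<string> GetResponseAsync(string prompt);
}

// One hypothetical provider. An Azure OpenAI-, OpenAI-, or Ollama-backed
// implementation would satisfy the same interface.
class EchoChatService : IChatService
{
    public Task<string> GetResponseAsync(string prompt) =>
        Task.FromResult($"echo: {prompt}");
}

static class Summarizer
{
    // App-level feature written against the abstraction, not a provider SDK,
    // so swapping providers requires no change here.
    public static Task<string> SummarizeAsync(IChatService client, string text) =>
        client.GetResponseAsync($"Summarize in one sentence: {text}");
}
```

With the real library, `IChatService` would be `IChatClient`, and provider packages supply the implementations registered through dependency injection.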
When the system needs goal-directed, multi-step orchestration, use [MAF](#microsoft-agent-framework-maf) instead. + +For more information, see [Microsoft.Extensions.AI overview](microsoft-extensions-ai.md). + +## Evaluation libraries + +The [Microsoft.Extensions.AI.Evaluation library](evaluation/libraries.md) is the quality and regression layer for AI features built with the .NET AI stack. AI behavior changes readily as prompts, models, and tools evolve. The evaluations library gives teams a repeatable way to compare outputs and catch regressions. + +For more information, see [Microsoft.Extensions.AI.Evaluation libraries](evaluation/libraries.md). + +## Microsoft.Extensions.DataIngestion (MEDI) + +[Microsoft.Extensions.DataIngestion](conceptual/medi-library.md) is the ingestion and preparation layer for AI-ready data in .NET. + +Many AI apps fail before retrieval because data is messy, oversized, or poorly structured. Ingestion quality strongly affects downstream answer quality. MEDI prepares and shapes the data that MEVD or another store later queries. + +For more information, see [Data ingestion for AI apps](conceptual/data-ingestion.md). + +## Microsoft.Extensions.VectorData (MEVD) + +[Microsoft.Extensions.VectorData](conceptual/mevd-library.md) is the vector data storage and retrieval layer for semantic search, similarity lookup, and grounding in .NET AI apps. + +MEVD gives .NET applications a consistent way to work with vector stores and helps separate vector storage and retrieval concerns from model invocation concerns. + +For more information, see [Vector stores overview](vector-stores/overview.md). + +## MCP Server + +An MCP Server exposes capabilities such as tools, resources, or prompts over the Model Context Protocol so other assistants, IDEs, and agents can discover and use them through a standard protocol. + +An MCP Server turns app capabilities into reusable AI-facing endpoints. 
It reduces duplicated tool integration work across assistants and creates a cleaner boundary between capability providers and capability consumers. + +An MCP Server is about *publishing* capabilities. If the capability is used only inside one app, ordinary in-process function calling is simpler. + +## MCP Client + +An MCP Client is the consumer side of the protocol: it connects to MCP servers and brings their exposed capabilities into an app, assistant, or agent runtime. + +An MCP Client is about *consuming* capabilities, not publishing them. If everything the app needs is local and in-process, ordinary function or tool calling is still simpler. + +For more information, see [Get started with MCP](get-started-mcp.md). -The [📦 Microsoft.Extensions.VectorData.Abstractions package](https://www.nuget.org/packages/Microsoft.Extensions.VectorData.Abstractions/) provides a unified layer of abstractions for interacting with a variety of vector stores. It lets you store processed chunks in vector stores such as Qdrant, Azure SQL, CosmosDB, MongoDB, ElasticSearch, and many more. For more information, see [Build a .NET AI vector search app](vector-stores/how-to/build-vector-search-app.md). +## Microsoft Agent Framework (MAF) -The [📦 Microsoft.Extensions.DataIngestion package](https://www.nuget.org/packages/Microsoft.Extensions.DataIngestion) provides foundational .NET building blocks for data ingestion. It enables developers to read, process, and prepare documents for AI and machine learning workflows, especially retrieval-augmented generation (RAG) scenarios. For more information, see [Data ingestion](conceptual/data-ingestion.md). +Microsoft Agent Framework is the orchestration layer for systems that are truly agentic: they pursue a goal across multiple steps, make decisions along the way, use tools, and might coordinate multiple agents. -## Microsoft Agent Framework +Not every AI feature needs MAF. 
If a direct MEAI call or a simple tool-calling loop solves the problem, use a simpler approach. MAF matters when orchestration complexity is the real challenge, not just model access. -If you want to use low-level services, such as and , you can reference the `Microsoft.Extensions.AI.Abstractions` package directly from your app. However, if you want to build agentic AI applications with higher-level orchestration capabilities, you should use [Microsoft Agent Framework](/agent-framework/overview/agent-framework-overview). Agent Framework builds on the `Microsoft.Extensions.AI.Abstractions` package and provides concrete implementations of for different services, including OpenAI, Azure OpenAI, Microsoft Foundry, and more. +For more information, see [Microsoft Agent Framework overview](/agent-framework/overview/agent-framework-overview). -This framework is the recommended approach for .NET apps that need to build agentic AI systems with advanced orchestration, multi-agent collaboration, and enterprise-grade security and observability. +## AI Toolkit -Agent Framework is a production-ready, open-source framework that brings together the best capabilities of Semantic Kernel and Microsoft Research's AutoGen. Agent Framework provides: +AI Toolkit is a VS Code extension pack for AI development that speeds up experimentation with models, prompts, agents, and evaluations. -- **Multi-agent orchestration**: Support for sequential, concurrent, group chat, handoff, and *magentic* (where a lead agent directs other agents) orchestration patterns. -- **Cloud and provider flexibility**: Cloud-agnostic (containers, on-premises, or multi-cloud) and provider-agnostic (for example, OpenAI or Foundry) using plugin and connector models. -- **Enterprise-grade features**: Built-in observability (OpenTelemetry), Microsoft Entra security integration, and responsible AI features including prompt injection protection and task adherence monitoring. 
-- **Standards-based interoperability**: Integration with open standards like Agent-to-Agent (A2A) protocol and Model Context Protocol (MCP) for agent discovery and tool interaction. +AI Toolkit isn't the core runtime architecture for the production app. It complements MEAI, Evaluations, and Foundry Local. -For more information, see the [Microsoft Agent Framework documentation](/agent-framework/overview/agent-framework-overview). +For more information, see [AI Toolkit for Visual Studio Code](https://code.visualstudio.com/docs/intelligentapps/overview). -## Semantic Kernel for .NET +## Copilot SDK -[Semantic Kernel](/semantic-kernel/overview/) is an open-source library that enables AI integration and orchestration capabilities in your .NET apps. However, for new applications that require agentic capabilities, multi-agent orchestration, or enterprise-grade observability and security, the recommended framework is [Microsoft Agent Framework](/agent-framework/overview/agent-framework-overview). +Copilot SDK is a prebuilt agent harness and runtime that brings tools, context, and automatic tool calling out of the box. -## .NET SDKs for building AI apps +Copilot SDK is more opinionated and prewired than MEAI. If your goal is a fully custom app architecture, direct MEAI or MAF composition can be a better fit. -Many different SDKs are available to build .NET apps with AI capabilities depending on the target platform or AI model. OpenAI models offer powerful generative AI capabilities, while other Foundry tools provide intelligent solutions for a variety of specific scenarios. +For more information, see the [Copilot SDK repository](https://github.com/github/copilot-sdk). 
-### .NET SDKs for OpenAI models +## Azure AI Foundry -| NuGet package | Supported models | Maintainer or vendor | Documentation | -|---------------|------------------|----------------------|--------------| -| [Microsoft.Agents.AI.OpenAI](https://www.nuget.org/packages/Microsoft.Agents.AI.OpenAI/) | [OpenAI models](https://platform.openai.com/docs/models/overview)
[Azure OpenAI supported models](/azure/ai-services/openai/concepts/models) | [Microsoft Agent Framework](https://github.com/microsoft/agent-framework) (Microsoft) | [Agent Framework documentation](/agent-framework/overview/agent-framework-overview) | -| [Azure OpenAI SDK](https://www.nuget.org/packages/Azure.AI.OpenAI/) | [Azure OpenAI supported models](/azure/ai-services/openai/concepts/models) | [Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net) (Microsoft) | [Azure OpenAI services documentation](/azure/ai-services/openai/) | -| [OpenAI SDK](https://www.nuget.org/packages/OpenAI/) | [OpenAI supported models](https://platform.openai.com/docs/models) | [OpenAI SDK for .NET](https://github.com/openai/openai-dotnet) (OpenAI) | [OpenAI services documentation](https://platform.openai.com/docs/overview) | +Azure AI Foundry is the managed cloud platform layer for enterprise AI solutions, with two primary functions: model management and hosted agents. -### .NET SDKs for Foundry Tools +Azure AI Foundry isn't the app-facing programming abstraction; MEAI still plays that role in .NET code. Azure AI Foundry becomes the right lead when the real question is *where* the model runs and under what controls. -Azure offers many other AI services, such as Foundry Tools, to build specific application capabilities and workflows. Most of these services provide a .NET SDK to integrate their functionality into custom apps. Some of the most commonly used services are shown in the following table. For a complete list of available services and learning resources, see the [Foundry Tools](/azure/ai-services/what-are-ai-services) documentation. +For more information, see the [Azure AI Foundry documentation](/azure/ai-foundry/). -| Service | Description | -|-----------------------------------|----------------------------------------------| -| [Azure AI Search](/azure/search/) | Bring AI-powered cloud search to your mobile and web apps. 
| -| [Content Safety in Foundry Control Plane](/azure/ai-services/content-safety/) | Detect unwanted or offensive content. | -| [Azure Document Intelligence in Foundry Tools](/azure/ai-services/document-intelligence/) | Turn documents into intelligent data-driven solutions. | -| [Azure Language in Foundry Tools](/azure/ai-services/language-service/) | Build apps with industry-leading natural language understanding capabilities. | -| [Azure Speech in Foundry Tools](/azure/ai-services/speech-service/) | Speech to text, text to speech, translation, and speaker recognition. | -| [Azure Translator in Foundry Tools](/azure/ai-services/translator/) | AI-powered translation technology with support for more than 100 languages and dialects. | -| [Azure Vision in Foundry Tools](/azure/ai-services/computer-vision/) | Analyze content in images and videos. | +## Foundry Local -## Develop with local AI models +Foundry Local is a local development and local-first deployment option for teams that need to keep AI workloads close to the machine or environment. -.NET apps can also connect to local AI models for many different development scenarios. [Microsoft Agent Framework](https://github.com/microsoft/agent-framework) is the recommended tool to connect to local models using .NET. This framework can connect to many different models hosted across a variety of platforms and abstracts away lower-level implementation details. +Foundry Local is about the development and deployment path, not the higher-level app architecture itself. Local-to-cloud isn't a clean one-to-one move, so expect differences in features, hosting model, and operations. -For example, you can use [Ollama](https://ollama.com/) to [connect to local AI models with .NET](quickstarts/chat-local-model.md), including several small language models (SLMs) developed by Microsoft: +For more information, see the [Foundry Local documentation](/azure/foundry-local/). 
-| Model | Description | -|---------------------|-----------------------------------------------------------| -| [phi3 models][phi3] | A family of powerful SLMs with groundbreaking performance at low cost and low latency. | -| [orca models][orca] | Research models in tasks such as reasoning over user-provided data, reading comprehension, math problem solving, and text summarization. | +## Aspire -> [!NOTE] -> The preceding SLMs can also be hosted on other services, such as Azure. +Aspire is the orchestration, service-wiring, and observability layer for distributed .NET applications, including AI systems that span multiple services. -## Next steps +AI systems often stop being "just one app" once retrieval, tools, gateways, and worker services are involved. Aspire helps teams keep those parts understandable and observable, and its visuals make it easier to trace AI flows across services. -- [What is Microsoft Agent Framework?](/agent-framework/overview/agent-framework-overview) -- [Quickstart - Summarize text using Azure AI chat app with .NET](quickstarts/prompt-model.md) +Aspire isn't specifically the AI runtime; it's the multi-service application layer around it. It doesn't replace MEAI, MAF, or Azure AI Foundry. -[phi3]: https://azure.microsoft.com/products/phi-3 -[orca]: https://www.microsoft.com/research/project/orca/ +For more information, see the [Aspire documentation](/dotnet/aspire/). 
diff --git a/docs/ai/evaluation/libraries.md b/docs/ai/evaluation/libraries.md index 5851c9939b9e6..d7f717ef7524b 100644 --- a/docs/ai/evaluation/libraries.md +++ b/docs/ai/evaluation/libraries.md @@ -99,4 +99,5 @@ For a more comprehensive tour of the functionality and APIs in the Microsoft.Ext ## See also +- [Quickstart: Evaluate response quality](evaluate-ai-response.md) - [Evaluation of generative AI apps (Foundry)](/azure/ai-studio/concepts/evaluation-approach-gen-ai) diff --git a/docs/ai/get-started-mcp.md b/docs/ai/get-started-mcp.md index 0efd1fb00ef1c..963c9725c9d16 100644 --- a/docs/ai/get-started-mcp.md +++ b/docs/ai/get-started-mcp.md @@ -87,8 +87,10 @@ Get started with the following development tools: ## See also +- [Create a minimal MCP client using .NET](quickstarts/build-mcp-client.md) +- [Create a minimal MCP server using C# and publish to NuGet](quickstarts/build-mcp-server.md) - [MCP C# SDK documentation](https://modelcontextprotocol.github.io/csharp-sdk/index.html) - [MCP C# SDK API documentation](https://modelcontextprotocol.github.io/csharp-sdk/api/ModelContextProtocol.html) - [MCP C# SDK README](https://github.com/modelcontextprotocol/csharp-sdk/blob/main/README.md) -- [Microsoft partners with Anthropic to create official C# SDK for Model Context Protocol](https://devblogs.microsoft.com/blog/microsoft-partners-with-anthropic-to-create-official-c-sdk-for-model-context-protocol) -- [Build a Model Context Protocol (MCP) server in C#](https://devblogs.microsoft.com/dotnet/build-a-model-context-protocol-mcp-server-in-csharp/) +- [Blog: Microsoft partners with Anthropic to create official C# SDK for Model Context Protocol](https://devblogs.microsoft.com/blog/microsoft-partners-with-anthropic-to-create-official-c-sdk-for-model-context-protocol) +- [Blog: Build a Model Context Protocol (MCP) server in C#](https://devblogs.microsoft.com/dotnet/build-a-model-context-protocol-mcp-server-in-csharp/) diff --git a/docs/ai/overview.md b/docs/ai/overview.md 
index fbf5273042704..5482a1af78373 100644
--- a/docs/ai/overview.md
+++ b/docs/ai/overview.md
@@ -1,7 +1,7 @@
 ---
 title: Develop .NET apps with AI features
 description: Learn how you can build .NET applications that include AI features.
-ms.date: 12/10/2025
+ms.date: 04/15/2026
 ms.topic: overview
 ---
@@ -58,7 +58,7 @@ We recommend the following sequence of tutorials and articles for an introductio
 | Generate images | [Generate images from text](./quickstarts/text-to-image.md) |
 | Train your own model | [ML.NET tutorial](https://dotnet.microsoft.com/learn/ml-dotnet/get-started-tutorial/intro) |
 
-Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](./conceptual/how-genai-and-llms-work.md).
+Browse the table of contents to learn more about the core concepts, starting with [How generative AI and LLMs work](./conceptual/how-genai-and-llms-work.md). If you're not sure which .NET AI tool or SDK to use for your scenario, see [Decide which tool to use](./dotnet-ai-ecosystem.md#decide-which-tool-to-use).
 
 ## Next steps
diff --git a/docs/ai/toc.yml b/docs/ai/toc.yml
index 8f8d56e517470..2a79d60b16b14 100644
--- a/docs/ai/toc.yml
+++ b/docs/ai/toc.yml
@@ -15,6 +15,14 @@ items:
     href: ichatclient.md
   - name: The IEmbeddingGenerator interface
     href: iembeddinggenerator.md
+  - name: Evaluation libraries
+    href: evaluation/libraries.md
+  - name: Data ingestion libraries
+    href: conceptual/medi-library.md
+  - name: Vector data libraries
+    href: conceptual/mevd-library.md
+  - name: MCP client/server overview
+    href: get-started-mcp.md
   - name: Microsoft Agent Framework
     href: /agent-framework/overview/agent-framework-overview?toc=/dotnet/ai/toc.json&bc=/dotnet/ai/toc.json
   - name: C# SDK for MCP
@@ -57,10 +65,12 @@ items:
     href: conceptual/zero-shot-learning.md
   - name: Retrieval-augmented generation
     href: conceptual/rag.md
+  - name: Responsible AI with .NET
+    href: evaluation/responsible-ai.md
   - name: Call tools
     items:
     - name: Overview
-      href: conceptual/ai-tools.md
+      href: conceptual/calling-tools.md
       displayName: ai tool, ai function, tools, functions
     - name: "Quickstart: Execute a local function"
       href: quickstarts/use-function-calling.md
@@ -84,7 +94,7 @@ items:
     href: vector-stores/tutorial-vector-search.md
   - name: Scale Azure OpenAI with Azure Container Apps
     href: get-started-app-chat-scaling-with-azure-container-apps.md
-- name: MCP client/server
+- name: MCP client/server quickstarts
   items:
   - name: Build a minimal MCP client
     href: quickstarts/build-mcp-client.md
@@ -131,20 +141,14 @@ items:
     href: /azure/ai-services/openai/how-to/use-blocklists?toc=/dotnet/ai/toc.json&bc=/dotnet/ai/toc.json
   - name: Use Risks & Safety monitoring
     href: /azure/ai-services/openai/how-to/risks-safety-monitor?toc=/dotnet/ai/toc.json&bc=/dotnet/ai/toc.json
-- name: Evaluation
+- name: Evaluation tutorials
   items:
-  - name: Responsible AI with .NET
-    href: evaluation/responsible-ai.md
-  - name: The Microsoft.Extensions.AI.Evaluation libraries
-    href: evaluation/libraries.md
-  - name: Tutorials
-    items:
-    - name: "Quickstart: Evaluate the quality of a response"
-      href: evaluation/evaluate-ai-response.md
-    - name: "Evaluate response quality with caching and reporting"
-      href: evaluation/evaluate-with-reporting.md
-    - name: "Evaluate response safety with caching and reporting"
-      href: evaluation/evaluate-safety.md
+  - name: "Quickstart: Evaluate the quality of a response"
+    href: evaluation/evaluate-ai-response.md
+  - name: "Evaluate response quality with caching and reporting"
+    href: evaluation/evaluate-with-reporting.md
+  - name: "Evaluate response safety with caching and reporting"
+    href: evaluation/evaluate-safety.md
 - name: Resources
   items:
   - name: Get started resources
diff --git a/docs/ai/vector-stores/overview.md b/docs/ai/vector-stores/overview.md
index 72638dc5b74d5..5b33cf73959ee 100644
--- a/docs/ai/vector-stores/overview.md
+++ b/docs/ai/vector-stores/overview.md
@@ -42,26 +42,11 @@ Other benefits of the RAG pattern include:
 - Overcome LLM token limits—the heavy lifting is done through the database vector search.
 - Reduce the costs from frequent fine-tuning on updated data.
 
-## The Microsoft.Extensions.VectorData library
+## Microsoft.Extensions.VectorData library
 
 To use vector search from .NET, you can use your regular database driver or SDK without requiring any additional library or API. For example, on SQL Server, vector search can be performed in T-SQL when using the standard .NET driver, SqlClient. However, accessing vector search in this way is often quite low-level, requires considerable ceremony to handle serialization/deserialization, and the resulting code isn't portable across databases.
 
-As an alternative, the [📦 Microsoft.Extensions.VectorData.Abstractions](https://www.nuget.org/packages/Microsoft.Extensions.VectorData.Abstractions) package provides a unified layer of abstractions for interacting with vector stores in .NET. These abstractions let you write simple, high-level code against a single API, and swap out the underlying vector store with minimal changes to your application.
-
-The library provides the following key capabilities:
-
-- **Seamless .NET type mapping**: Map your .NET type directly to the database, similar to an object/relational mapper.
-- **Unified data model**: Define your data model once using .NET attributes and use it across any supported vector store.
-- **CRUD operations**: Create, read, update, and delete records in a vector store.
-- **Vector and hybrid search**: Query records by semantic similarity using vector search, or combine vector and text search for hybrid search.
-- **Embedding generation management**: Configure your embedding generator once and let the library transparently handle generation.
-- **Collection management**: Create, list, and delete collections (tables or indices) in a vector store.
-
-Microsoft.Extensions.VectorData is also the building block for additional, higher-level layers which need to interact with vector database. For example, the [Microsoft.Extensions.DataIngestion](../conceptual/data-ingestion.md).
-
-### Microsoft.Extensions.VectorData and Entity Framework Core
-
-If you are already using Entity Framework Core to access your database, it's likely that your database provider already supports vector search, and LINQ queries can be used to express such searches; Microsoft.Extensions.VectorData isn't necessarily needed in such applications. However, most dedicated vector databases are not supported by EF Core, and Microsoft.Extensions.VectorData can provide a good experience for working with those. In addition, you may also find yourself using both EF and Microsoft.Extensions.VectorData in the same application, e.g. when using an additional layer such as Microsoft.Extensions.DataIngestion.
+As an alternative, the [Microsoft.Extensions.VectorData library](../conceptual/mevd-library.md) provides a unified layer of abstractions for interacting with vector stores in .NET.
 
 ## Key abstractions