Summary
After building 5 production Azure Developer CLI extensions (azd-app, azd-exec, azd-copilot, azd-rest) plus a shared common library (azd-core), I've identified significant gaps where extension authors must build substantial infrastructure themselves. This proposal catalogs concrete functionality that could be contributed back to the official extension framework to benefit all extension developers.
Methodology
Detailed multi-model code review (Opus 4.6 + Codex 5.3) across all 6 repositories analyzing:
- The official extension framework (gRPC server, extension loading, middleware, azdext SDK)
- azd-core shared library (30+ packages of reusable infrastructure)
- All 5 extensions' patterns, pain points, and workarounds
Key Findings
- azd-core provides ~30 packages of reusable infrastructure that every extension ends up needing
- All 5 extensions duplicate global flag registration, trace context setup, and command scaffolding
- 4/5 extensions implement MCP servers with identical rate limiting, argument parsing, and security patterns
- ~500-800 lines of boilerplate per extension could be eliminated with framework improvements
- Estimated ~2,500-4,000 total lines eliminated across the ecosystem
P0: CRITICAL — Extension SDK Base
P0-1: Extension Base Command Builder
Problem: Every extension must independently redeclare azd's global flags (--debug, --no-prompt, --cwd, --environment, --trace-log-*) and manually extract OpenTelemetry trace context from environment variables. This is 30-50 lines of identical boilerplate per extension.
Evidence — identical flag registration in every extension:
- azd-exec/main.go L21-41 (variables) + L189-194 (flags)
- azd-app/main.go L17-23 (variables) + L83-87 (flags)
- azd-copilot/main.go L22-37 (variables) + L138-141 (flags)
- azd-rest/root.go L19-35 (variables) + L77-91 (flags)
Evidence — identical trace context extraction in every extension:
- azd-exec/main.go L112-118 — Manual `TRACEPARENT`/`TRACESTATE` extraction
- azd-rest/root.go L65-71 — Same exact code
- azd-copilot/main.go L80-86 — Same exact code
Proposed API:
```go
rootCmd := azdext.NewExtensionRootCommand("my-extension", "1.0.0", func(ctx *azdext.ExtensionContext) {
    // ctx.Debug, ctx.NoPrompt, ctx.Cwd, ctx.Environment already parsed
    // ctx.Context() already has trace context + access token injected
})
```
P0-2: Global Flags Propagation via Environment Variables
Problem: When azd spawns extension processes, it only passes 4 environment variables: AZD_SERVER, AZD_ACCESS_TOKEN, FORCE_COLOR, COLUMNS. Critically, it does NOT pass the parsed global flags --debug, --no-prompt, --cwd, or -e/--environment. Extensions must reparse os.Args or guess from the environment.
Evidence — the framework's limited env var propagation:
- azure-dev/cmd/middleware/extensions.go L127-139 — Only `AZD_SERVER`, `AZD_ACCESS_TOKEN`, `FORCE_COLOR`, trace context
- azure-dev/pkg/extensions/runner.go L50-71 — RunArgs creation with no global flag forwarding
Proposal: Export parsed global flags as environment variables when spawning extensions:
- `AZD_DEBUG=1` (from `--debug`)
- `AZD_NO_PROMPT=1` (from `--no-prompt`)
- `AZD_CWD=/path` (from `--cwd`)
- `AZD_ENVIRONMENT=prod` (from `-e`)
- `AZD_TRACE_LOG_FILE=/path` (from `--trace-log-file`)
This requires changes to pkg/extensions/runner.go and cmd/middleware/extensions.go.
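A minimal sketch of what the runner-side change could look like. `GlobalFlags` and `extensionEnv` are hypothetical names for illustration, not the existing `runner.go` API:

```go
package main

import "fmt"

// GlobalFlags is a hypothetical struct holding azd's parsed global flags.
type GlobalFlags struct {
	Debug       bool
	NoPrompt    bool
	Cwd         string
	Environment string
}

// extensionEnv sketches the proposed change: append the parsed global
// flags to the environment handed to the spawned extension process.
func extensionEnv(base []string, f GlobalFlags) []string {
	env := append([]string{}, base...)
	if f.Debug {
		env = append(env, "AZD_DEBUG=1")
	}
	if f.NoPrompt {
		env = append(env, "AZD_NO_PROMPT=1")
	}
	if f.Cwd != "" {
		env = append(env, "AZD_CWD="+f.Cwd)
	}
	if f.Environment != "" {
		env = append(env, "AZD_ENVIRONMENT="+f.Environment)
	}
	return env
}

func main() {
	base := []string{"AZD_SERVER=localhost:1234"}
	fmt.Println(extensionEnv(base, GlobalFlags{Debug: true, Environment: "prod"}))
}
```

With this in place, extensions read `AZD_DEBUG` instead of re-parsing `os.Args`.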
P0-3: Default ServiceTargetProvider Base Implementation
Problem: The ServiceTargetProvider interface requires implementing 6+ methods. Extensions like azd-app that only need a "local" service target must stub out all methods with no-op implementations, adding ~40 lines of boilerplate.
Evidence — the interface definition requiring all methods:
- azure-dev/pkg/azdext/service_target_manager.go L26-60 — Full interface with Initialize, Endpoints, GetTargetResource, Package, Publish, Deploy
Evidence — stub implementations in azd-app:
- azd-app/servicetarget/local_provider.go L17-111 — 6 methods, most returning nil/empty values
Proposed API:
```go
type LocalProvider struct {
    azdext.BaseServiceTargetProvider // Embed for no-op defaults
}

// Only override Deploy() and ConfiguredEnvironment() — everything else inherits defaults
```
P0-4: Standard Extension Command Scaffolding
Problem: Every extension implements near-identical listen, metadata, version, and mcp serve subcommands. These are pure boilerplate — the logic is the same across all extensions, only the extension ID and root command differ.
Evidence — identical listen commands across all 4 extensions:
- azd-exec/commands/listen.go L13-30
- azd-rest/cmd/listen.go L11-30
- azd-app/commands/listen.go L18-30
- azd-copilot/commands/listen.go L17-30
Evidence — identical metadata commands across all 4 extensions:
- azd-exec/commands/metadata.go L14-29
- azd-rest/cmd/metadata.go L13-28
- azd-app/commands/metadata.go L11-30
- azd-copilot/commands/metadata.go L15-30
Evidence — identical version commands across all 4 extensions:
- azd-exec/commands/version.go L11-12
- azd-rest/cmd/version.go L10-11
- azd-app/commands/version.go L27-28
- azd-copilot/commands/version.go L27-28
Proposed API:
```go
rootCmd.AddCommand(
    azdext.NewListenCommand(azdClient, hostConfigurator),
    azdext.NewMetadataCommand("1.0", extensionId, rootCmdProvider),
    azdext.NewVersionCommand(extensionId, version, &outputFormat),
    azdext.NewMCPServeCommand(mcpServerConfigurator),
)
```
P0: CRITICAL — MCP Server Framework
P0-5: MCP Server Builder with Middleware
Problem: 4/5 extensions implement MCP servers. Each independently builds rate limiting with near-identical token bucket patterns. There is no framework-level middleware for rate limiting, path validation, or security — every extension re-invents these from scratch.
Evidence — rate limiter defined identically in 3 extensions + azd-core:
- azd-exec/commands/mcp_ratelimit.go L7 — `var globalRateLimiter = azdextutil.NewRateLimiter(10, 1.0)`
- azd-rest/cmd/mcp.go L24 — `var limiter = azdextutil.NewRateLimiter(10, 1.0)`
- azd-app/commands/mcp_ratelimit.go L63 — Custom `TokenBucket` implementation (doesn't even use azd-core)
- azd-core/azdextutil/ratelimit.go L9-56 — The shared implementation all should use
Evidence — manual rate limit checks in every MCP tool handler:
- azd-exec/commands/mcp.go L115, L204, L267, L314 — `if !globalRateLimiter.Allow()` repeated 4 times
Proposed API:
```go
mcpServer := azdext.NewMCPServerBuilder("my-extension", "1.0.0").
    WithRateLimit(60, 1.0).                           // Applied to all tools automatically
    WithPathValidation(projectDir).                   // Auto-validate file path params
    WithSecurityPolicy(azdext.DefaultSecurityPolicy). // Block metadata endpoints, etc.
    AddTool("exec_script", handler, azdext.ToolOptions{
        Description: "Execute a script",
        Destructive: true,
        Params: map[string]azdext.Param{
            "script_path": {Type: "string", Required: true, Description: "Path to script"},
        },
    }).
    Build()
```
P0-6: Typed MCP Argument Parsing
Problem: Every MCP extension manually extracts arguments from mcp.CallToolRequest using untyped map[string]interface{} with verbose type assertions. This pattern is duplicated across all MCP-capable extensions and azd-core.
Evidence — identical argument parsing code duplicated:
- azd-exec/commands/mcp.go L360-376 — Local `getArgsMap()` and `getStringParam()` functions
- azd-core/azdextutil/mcp.go L12-30 — Shared `GetArgsMap()` and `GetStringParam()` (azd-exec doesn't even use these, it re-implements them)
What's missing: `RequireString` (error if absent), `OptionalBool`, `OptionalInt`, `OptionalFloat` helpers.
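The missing helpers are straightforward to sketch over the untyped map MCP handlers receive today. `ToolArgs` and its methods here are illustrative, not the proposed azdext API verbatim:

```go
package main

import "fmt"

// ToolArgs is a hypothetical wrapper over the untyped argument map that
// MCP tool handlers currently pick apart with manual type assertions.
type ToolArgs map[string]any

// RequireString returns an error when the key is absent or not a string.
func (a ToolArgs) RequireString(key string) (string, error) {
	v, ok := a[key].(string)
	if !ok {
		return "", fmt.Errorf("missing or non-string argument %q", key)
	}
	return v, nil
}

// OptionalInt returns def when the key is absent. Note the float64
// assertion: JSON numbers always decode to float64 in a map[string]any.
func (a ToolArgs) OptionalInt(key string, def int) int {
	if f, ok := a[key].(float64); ok {
		return int(f)
	}
	return def
}

func main() {
	args := ToolArgs{"script_path": "./build.sh", "timeout": float64(60)}
	p, err := args.RequireString("script_path")
	fmt.Println(p, err, args.OptionalInt("timeout", 30), args.OptionalInt("retries", 3))
}
```

The float64 detail is exactly the kind of trap each extension currently rediscovers on its own.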
Proposed API:
```go
args := azdext.ParseToolArgs(request)
path, err := args.RequireString("script_path")   // Returns error if missing
shell, _ := args.OptionalString("shell", "bash") // Returns default if missing
timeout, _ := args.OptionalInt("timeout", 30)
verbose, _ := args.OptionalBool("verbose", false)
```
P0-7: MCP Result Marshaling Helpers
Problem: Each MCP extension builds its own result marshaling helpers to convert Go structs/strings into mcp.CallToolResult. This is 20-30 lines of JSON marshaling boilerplate per extension.
Evidence — custom marshaling helpers:
- azd-exec/commands/mcp.go L385-408 — `marshalExecResult()` and `marshalToolResult()` functions
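The boilerplate in question reduces to a small JSON-marshaling helper. This sketch uses a plain string return as a stand-in for the real `mcp.CallToolResult`; the helper name is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonToolResult sketches the helper each extension writes by hand:
// marshal an arbitrary payload into the text body of a tool result.
func jsonToolResult(v any) (string, error) {
	b, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return "", fmt.Errorf("marshal tool result: %w", err)
	}
	return string(b), nil
}

func main() {
	out, _ := jsonToolResult(map[string]any{"status": "ok", "exitCode": 0})
	fmt.Println(out)
}
```

Small on its own, but multiplied across every tool handler in every extension.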
Proposed API:
```go
return azdext.MCPTextResult("Operation completed: %s", name)
return azdext.MCPJSONResult(structuredData) // Auto JSON marshal
return azdext.MCPErrorResult("Invalid input: %v", err)
return azdext.MCPResourceResult([]mcp.ResourceContents{...})
```
P0-8: MCP Security Middleware
Problem: Extensions that expose MCP tools for HTTP calls or file access must independently implement SSRF protection (blocking cloud metadata endpoints, private CIDRs), header redaction, and path validation. azd-rest hardcodes its own blocklists; azd-app repeats 6-step path validation in every resource handler.
Evidence — hardcoded security blocklists in azd-rest:
- azd-rest/cmd/mcp.go L34-66 — `blockedHeaders` (L34), `blockedHosts` (L42), `blockedCIDRs` init (L49-66)
- azd-rest/cmd/mcp.go L84-134 — `isBlockedIP()` (L84) and `isBlockedURL()` (L96) with DNS resolution + CIDR checking
Evidence — path validation in azd-core security package:
- azd-core/security/security.go L33 — `ValidatePath()`
- azd-core/security/security.go L175 — `ValidatePathWithinBases()`
Proposed API:
```go
policy := azdext.NewMCPSecurityPolicy().
    BlockMetadataEndpoints(). // 169.254.169.254, fd00:ec2::254, etc.
    BlockPrivateNetworks().   // RFC 1918/5737 CIDRs
    RequireHTTPS().           // Except localhost
    RedactHeaders("Authorization", "X-Api-Key").
    ValidatePathsWithinBase(projectDir)

server := azdext.NewMCPServerBuilder(...).
    WithSecurityPolicy(policy).
    Build()
```
P1: HIGH — Authentication & Token Management
P1-1: Framework Token Provider
Problem: Extensions that call Azure APIs need a thread-safe, cached token provider. Without framework support, each extension implements its own sync.Mutex + singleton caching pattern. This is error-prone and duplicated.
Evidence — manual token singleton in azd-rest:
- azd-rest/cmd/mcp.go L29-30 — `cachedTokenProvider` and `tokenProviderMu` (sync.Mutex)
- azd-rest/cmd/mcp.go L68-81 — `getOrCreateTokenProvider()` with lock-check-create pattern
Evidence — production-grade implementation in azd-core:
- azd-core/auth/auth.go L32-38 — `AzureTokenProvider` struct with credential, cache map, and RWMutex for thread-safe caching
Proposal: Add shared token provider to the extension SDK or expose via gRPC Auth service:
```go
// Option A: Standalone helper in SDK
provider := azdext.NewAzureTokenProvider() // Cached, thread-safe
token, err := provider.GetToken(ctx, "https://management.azure.com/.default")

// Option B: Via gRPC service
token, err := client.Auth().GetToken(ctx, scope)
```
P1-2: URL-to-Scope Detection
Problem: Extensions making Azure API calls need to determine the correct OAuth scope for a given URL. azd-core maps 20+ Azure service URLs to their scopes, but this isn't available in the framework.
Evidence — comprehensive scope mapping in azd-core:
- azd-core/auth/scope.go L9-77 — `DetectScope()` with:
  - Exact matches (L24-29): management.azure.com, graph.microsoft.com, api.loganalytics.io, dev.azure.com
  - Suffix matches (L50-68): vault.azure.net, blob.core.windows.net, dfs.core.windows.net, database.windows.net, search.windows.net, cognitiveservices.azure.com, openai.azure.com, etc. (15+ services)
  - Special cases: visualstudio.com (L35-36), kusto.windows.net (L39-40), servicebus.windows.net with path-based detection (L43-48)
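The detection strategy (exact host matches first, then suffix matches) can be illustrated with an abbreviated table. This is a sketch of the idea, not azd-core's full mapping:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// detectScope is an illustrative reimplementation of the DetectScope
// idea: try exact host matches, then fall back to suffix matches.
// Both tables are abbreviated examples, not the real 20+ entry mapping.
func detectScope(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	host := strings.ToLower(u.Hostname())

	exact := map[string]string{
		"management.azure.com": "https://management.azure.com/.default",
		"graph.microsoft.com":  "https://graph.microsoft.com/.default",
	}
	if s, ok := exact[host]; ok {
		return s, nil
	}

	suffixes := map[string]string{
		".vault.azure.net":       "https://vault.azure.net/.default",
		".blob.core.windows.net": "https://storage.azure.com/.default",
	}
	for suffix, scope := range suffixes {
		if strings.HasSuffix(host, suffix) {
			return scope, nil
		}
	}
	return "", fmt.Errorf("no known scope for host %q", host)
}

func main() {
	s, _ := detectScope("https://myvault.vault.azure.net/secrets/db-password")
	fmt.Println(s)
}
```

The hard-won part is the mapping data itself, which is why it belongs in the framework rather than in each extension.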
Proposed API:
```go
scope, err := azdext.DetectAzureScope("https://myvault.vault.azure.net/secrets/...")
// Returns: "https://vault.azure.net/.default"

scope, err := azdext.DetectAzureScope("https://management.azure.com/subscriptions/...")
// Returns: "https://management.azure.com/.default"
```
P1-3: Framework HTTP Client with Resilience
Problem: Extensions making HTTP calls need retry logic with exponential backoff, response size limits, and TLS configuration. Without framework support, each extension either uses raw net/http or depends on azd-core's HTTP client.
Evidence — production-grade HTTP client in azd-core:
- azd-core/httpclient/client.go L70-74 — `Client` struct
- azd-core/httpclient/client.go L95-292 — `Execute()` with full retry + backoff
  - Retry logic: L159-244 (exponential backoff loop, 5xx detection, retryable error detection)
  - Pagination: L278-290 (handles Link headers, `@odata.nextLink`, `nextLink`)
Evidence — azd-rest reexports the entire client:
- azd-rest/client/client.go L1-40 — Type aliases: `type Client = httpclient.Client`, `var NewClient = httpclient.NewClient`
Proposed API:
```go
client := azdext.NewHTTPClient(azdext.HTTPClientOptions{
    TokenProvider:   tokenProvider,
    Retry:           3,
    Timeout:         30 * time.Second,
    Paginate:        true,
    MaxResponseSize: 100 * 1024 * 1024, // 100MB
})
resp, err := client.Execute(ctx, azdext.HTTPRequest{Method: "GET", URL: url})
```
P1-4: Pagination Support
Problem: Azure APIs use 3 different pagination formats. Each extension that paginates must understand all 3 — or miss data.
Evidence — pagination logic handling 3 formats:
- azd-core/httpclient/client.go L278-290 — `handlePagination()` dispatch
- Supports: HTTP `Link` headers (L375-393), JSON `@odata.nextLink` (Azure), JSON `nextLink` (Graph)
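The JSON side of that dispatch is small. This sketch (with a hypothetical `nextPageURL` helper) shows the two JSON continuation fields, leaving `Link` headers as a third branch:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nextPageURL sketches the JSON half of a pagination dispatch: check
// for "@odata.nextLink" first, then plain "nextLink"; an empty string
// means the final page. (A Link-header check would be a third branch.)
func nextPageURL(body []byte) string {
	var page struct {
		ODataNextLink string `json:"@odata.nextLink"`
		NextLink      string `json:"nextLink"`
	}
	if err := json.Unmarshal(body, &page); err != nil {
		return ""
	}
	if page.ODataNextLink != "" {
		return page.ODataNextLink
	}
	return page.NextLink
}

func main() {
	body := []byte(`{"value":[],"@odata.nextLink":"https://management.azure.com/...?skipToken=abc"}`)
	fmt.Println(nextPageURL(body))
}
```

An extension that checks only one of these fields silently truncates results from APIs using the other, which is the "miss data" failure mode described above.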
Proposed API:
```go
pages := azdext.NewPaginator(client, initialURL)
var allItems []Item
for pages.Next(ctx) {
    items := pages.Current()
    allItems = append(allItems, items...)
}
```
P1-5: Key Vault Resolution via gRPC
Problem: Extensions that run scripts or manage environments need to resolve Azure Key Vault references embedded in environment variables. This requires complex parsing of 3 reference formats, thread-safe per-vault client caching, and credential management.
Evidence — Key Vault resolver with 3 pattern formats in azd-core:
- azd-core/keyvault/keyvault.go L21-24 — 3 regex patterns:
  - `@Microsoft.KeyVault(SecretUri=...)`
  - `@Microsoft.KeyVault(VaultName=...;SecretName=...;SecretVersion=...)`
  - `akvs://vault/...`
- azd-core/keyvault/keyvault.go L27-32 — `KeyVaultResolver` struct with credential & per-vault client caching
- azd-core/keyvault/keyvault.go L59-100 — `IsKeyVaultReference()` (L59) and `ResolveReference()` (L78)
Evidence — azd-exec consuming Key Vault resolution:
- azd-exec/executor/executor.go L36 — `StopOnKeyVaultError` config flag
- azd-exec/executor/executor.go L54-56 — Factory using `keyvault.NewKeyVaultResolver`
- azd-exec/executor/executor.go L142 — `prepareEnvironment()` calling resolver
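Simplified versions of the three reference formats look like this; the regexes here are illustrative approximations, not azd-core's exact patterns:

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative patterns for the three Key Vault reference formats the
// resolver recognizes (simplified from the real, stricter regexes).
var keyVaultPatterns = []*regexp.Regexp{
	// @Microsoft.KeyVault(SecretUri=https://...)
	regexp.MustCompile(`^@Microsoft\.KeyVault\(SecretUri=.+\)$`),
	// @Microsoft.KeyVault(VaultName=...;SecretName=...[;SecretVersion=...])
	regexp.MustCompile(`^@Microsoft\.KeyVault\(VaultName=[^;]+;SecretName=[^;)]+(;SecretVersion=[^)]+)?\)$`),
	// akvs://vault/secret[/version]
	regexp.MustCompile(`^akvs://[^/]+/.+$`),
}

// isKeyVaultReference reports whether an environment value matches any
// of the three reference formats.
func isKeyVaultReference(v string) bool {
	for _, p := range keyVaultPatterns {
		if p.MatchString(v) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKeyVaultReference("akvs://myvault/db-password"))
	fmt.Println(isKeyVaultReference("plain-value"))
}
```

Detection is the easy part; resolution additionally needs credentials and per-vault client caching, which is why exposing it over gRPC (where azd already holds credentials) is attractive.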
Proposed API — add to EnvironmentService gRPC:
```go
resp, err := client.Environment().ResolveValues(ctx, &azdext.ResolveValuesRequest{
    EnvironmentName: "prod",
    ResolveKeyVault: true, // Resolve all @Microsoft.KeyVault and akvs:// references
})
// Returns: resolved env vars with Key Vault secrets inline + warnings for failures
```
P1-6: Extension Configuration Helpers
Problem: Extensions need typed configuration loading from ~/.azd/config.json with schema validation and defaults. Each extension builds its own config loader.
Evidence — custom config loading in azd-app:
- azd-app/config/config.go L19-27 — Custom `Config` and `AppConfig` structs
- azd-app/config/config.go L49-73 — `Load()` function: file existence check, JSON unmarshalling
- azd-app/config/config.go L75-93 — `Save()` using `fileutil.AtomicWriteJSON`
Proposed API:
```go
type MyExtConfig struct {
    MaxRetries int    `json:"maxRetries" default:"3"`
    Shell      string `json:"shell" default:"bash"`
}
config, err := azdext.LoadExtensionConfig[MyExtConfig](ctx, client)
```
P2: MEDIUM — CLI Output & Logging
P2-1: Standard Output Helpers
Problem: Extensions produce inconsistent output. Some use colored text with ANSI codes, others plain text. No standard for JSON-mode output or structured tables. Users experience different formatting across extensions.
Evidence — comprehensive output library in azd-core:
- azd-core/cliout/cliout.go L267 — `Success()`
- azd-core/cliout/cliout.go L274 — `Error()`
- azd-core/cliout/cliout.go L281 — `Warning()`
- azd-core/cliout/cliout.go L456 — `Table()`
- azd-core/cliout/cliout.go L192 — `SetFormat()` (JSON/default toggle)
Evidence — usage across all extensions:
- azd-app/commands/generate.go L105 — `cliout.Success()`
- azd-app/commands/add.go L216 — `cliout.Success()`
- azd-app/commands/core.go L88 — `cliout.Success()`
Proposed API:
```go
out := azdext.NewOutput(outputFormat) // "default" or "json"
out.Success("Deployed %s to %s", service, host)
out.Warning("Deprecated feature: %s", name)
out.Table([]string{"Service", "Status"}, rows)
out.JSON(structuredData) // Only outputs in JSON mode
```
P2-2: Structured Logging
Problem: Each extension sets up its own structured logging with debug mode detection. The pattern is identical but not provided by the framework.
Evidence — logging setup in azd-core:
- azd-core/logutil/logutil.go L54-83 — `SetupLogger(debug, structured)` — main setup with JSON/text format selection
- azd-core/logutil/logger.go L16 — `NewLogger()` — component-scoped logger
Proposed API:
```go
// Auto-configured from AZD_DEBUG env var
logger := azdext.NewLogger("my-extension")
logger.Debug("Processing request", "url", url, "method", method)
logger.Info("Operation completed", "duration", elapsed)
```
P2: MEDIUM — Security Utilities
P2-3: Security Validation Package
Problem: Extensions handling user input need path traversal prevention, service name validation, script name sanitization, and container environment detection. Each extension must discover and import these from azd-core rather than getting them from the framework.
Evidence — security functions in azd-core:
- azd-core/security/security.go L33 — `ValidatePath()` — detects `..`, resolves symlinks, validates cleaned path
- azd-core/security/security.go L85 — `ValidateServiceName()` — DNS-safe regex: `^[a-zA-Z0-9][a-zA-Z0-9._-]{0,62}$`
- azd-core/security/security.go L129 — `SanitizeScriptName()` — blocks shell metacharacters (`;`, `|`, `>`, `$(cmd)`)
- azd-core/security/security.go L148 — `IsContainerEnvironment()` — detects Docker, Codespaces, K8s, Dev Containers
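An allowlist-style sketch of the script-name sanitization idea; the regex and function here are illustrative, not azd-core's exact implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// scriptNameRe is an illustrative allowlist: letters, digits, dot,
// underscore, slash, backslash, hyphen. Anything else (;, |, >, $,
// backticks, spaces) is rejected outright rather than escaped.
var scriptNameRe = regexp.MustCompile(`^[a-zA-Z0-9._/\\-]+$`)

// sanitizeScriptName rejects names containing shell metacharacters.
func sanitizeScriptName(name string) (string, error) {
	if !scriptNameRe.MatchString(name) {
		return "", fmt.Errorf("script name %q contains disallowed characters", name)
	}
	return name, nil
}

func main() {
	_, err := sanitizeScriptName("build.sh; rm -rf /")
	fmt.Println(err != nil)
	ok, _ := sanitizeScriptName("scripts/build.sh")
	fmt.Println(ok)
}
```

Rejecting rather than escaping is the safer default for names that end up inside shell command lines.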
Proposed API:
```go
err := azdext.Security.ValidatePath(userPath)
err := azdext.Security.ValidateServiceName(name)
sanitized := azdext.Security.SanitizeScriptName(script)
isContainer := azdext.Security.IsContainerEnvironment()
err := azdext.Security.ValidateURL(url, azdext.RequireHTTPS)
```
P2-4: SSRF Protection
Problem: MCP tools that make HTTP requests on behalf of AI models are particularly vulnerable to SSRF attacks. Extensions must independently implement blocklists for cloud metadata endpoints, private network CIDRs, and URL validation with DNS resolution. This is complex, security-critical code that shouldn't be duplicated.
Evidence — SSRF protection in azd-rest (hardcoded per-extension):
- azd-rest/cmd/mcp.go L34-42 — `blockedHeaders` and `blockedHosts` (169.254.169.254, fd00:ec2::254)
- azd-rest/cmd/mcp.go L49-66 — `blockedCIDRs` init with 7 CIDR blocks (IPv4 loopback, IPv6 loopback, link-local, RFC 1918)
- azd-rest/cmd/mcp.go L84-91 — `isBlockedIP()` — CIDR range checking
- azd-rest/cmd/mcp.go L96-134 — `isBlockedURL()` — full SSRF protection with DNS resolution
Proposed API:
```go
validator := azdext.NewSSRFValidator()
validator.BlockMetadataEndpoints() // Cloud provider metadata (AWS, Azure, GCP)
validator.BlockPrivateNetworks()   // RFC 1918 + link-local + loopback
if err := validator.Check(url); err != nil {
    return err // "blocked: URL resolves to private network"
}
```
P3: LOWER — Process, Shell & File Utilities
P3-1: Shell Detection & Execution
Problem: Extensions that execute scripts need to detect the appropriate shell from file extensions and shebangs, then build the correct command arguments for each shell (bash -c, cmd /C, powershell -Command, etc.). azd-exec has TWO separate implementations of shell argument building — one for CLI, one for MCP — that should be unified.
Evidence — shell detection and constants in azd-core:
- azd-core/shellutil/shellutil.go L19-36 — Shell constants (ShellBash, ShellCmd, ShellPowerShell, ShellPwsh, ShellSh, ShellZsh)
- azd-core/shellutil/shellutil.go L72-99 — `DetectShell()` — auto-detection by extension, shebang, OS default
Evidence — duplicated shell argument builders in azd-exec:
- azd-exec/commands/mcp.go L411-436 — `buildShellArgs()` for MCP (handles cmd, powershell/pwsh, bash/sh/zsh)
- azd-exec/executor/command_builder.go — `buildCommand()` for CLI (same logic, different function)
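The detection order described above (extension, then shebang, then OS default) can be sketched as follows; this is illustrative, not azd-core's exact implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
	"strings"
)

// detectShell illustrates the detection order: file extension first,
// then shebang line, then an OS-dependent default.
func detectShell(scriptPath, firstLine string) string {
	switch filepath.Ext(scriptPath) {
	case ".ps1":
		return "pwsh"
	case ".cmd", ".bat":
		return "cmd"
	case ".sh":
		return "bash"
	}
	if strings.HasPrefix(firstLine, "#!") {
		fields := strings.Fields(strings.TrimPrefix(firstLine, "#!"))
		name := filepath.Base(fields[0])
		if name == "env" && len(fields) > 1 {
			return fields[1] // "#!/usr/bin/env bash" -> "bash"
		}
		return name // "#!/bin/zsh" -> "zsh"
	}
	if runtime.GOOS == "windows" {
		return "cmd"
	}
	return "bash"
}

func main() {
	fmt.Println(detectShell("deploy.sh", ""))
	fmt.Println(detectShell("run", "#!/usr/bin/env bash"))
}
```

A single framework-owned version of this would also collapse azd-exec's two divergent argument builders into one.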
Proposed API:
```go
shell := azdext.DetectShell("script.sh")                          // Returns "bash" (from extension + shebang)
cmd := azdext.BuildShellCommand(shell, scriptPath, isInline, args) // Unified builder
```
P3-2: Atomic File Operations
Problem: Extensions writing config files risk corruption from partial writes (crash mid-write, concurrent writers). azd-core provides atomic write operations using the temp-file-then-rename pattern, but this isn't available from the framework.
Evidence — atomic file operations in azd-core:
- azd-core/fileutil/fileutil.go L28-89 — `AtomicWriteJSON()` — write to temp file, sync, set permissions, rename
- azd-core/fileutil/fileutil.go L94-151 — `AtomicWriteFile()` — raw bytes variant with same atomic pattern
Proposed API:
```go
err := azdext.AtomicWriteJSON("config.json", data)        // JSON marshal + atomic write
err := azdext.AtomicWriteFile("script.sh", content, 0755) // Raw bytes + atomic write

var data Config
err := azdext.ReadJSON("config.json", &data) // Handle missing files gracefully
```
P3-3: Tool Discovery & PATH Management
Problem: Extensions that integrate with external tools (node, python, docker, etc.) need to find executables in PATH and across common system installation directories, and provide helpful install suggestions when tools are missing.
Evidence — tool discovery in azd-core:
- azd-core/pathutil/pathutil.go L76-90 — `FindToolInPath()` — uses `exec.LookPath` with Windows .exe handling
- azd-core/pathutil/pathutil.go L144-174 — `GetInstallSuggestion()` — map-based install suggestions for 18+ tools
Proposed API:
```go
path := azdext.FindTool("node")                     // Find in PATH + common dirs
suggestion := azdext.GetInstallSuggestion("python") // "Install python from https://..."
```
P3-4: Interactive TUI Support
Problem: Extensions that need to launch interactive terminal applications (like GitHub Copilot CLI) face complex platform-specific challenges with TTY detection. When azd captures stdio for gRPC communication, child processes can't detect a TTY. azd-copilot carries 70+ lines of platform-specific workaround code for this.
Evidence — platform-specific TTY hacking in azd-copilot:
- azd-copilot/console_windows.go L15, L40-65 — Windows: `procSetStdHandle` declaration + `attachConsole()` using the Windows API `SetStdHandle()` to attach CONIN$/CONOUT$
- azd-copilot/console_windows.go L68-74 — `restore()` to restore original handles
- azd-copilot/launcher.go L139-145 — Unix: Opens `/dev/tty` directly to bypass pipe redirection
Proposed API:
```go
err := azdext.LaunchInteractive(executable, args, azdext.WithTTY())
// Handles Windows SetStdHandle, Unix /dev/tty, macOS, and Codespaces environments
```
P3-5: Cross-Platform Process Detection
Problem: Extensions monitoring services need to check if a process is still running. On Windows, stale PIDs are a real problem — a PID may be reused by a different process, so simple os.FindProcess isn't reliable. azd-core uses gopsutil for accurate cross-platform detection.
Evidence — process detection in azd-core:
- azd-core/procutil/procutil.go L15-36 — `IsProcessRunning()` using `gopsutil` for reliable PID checking across Windows/Linux/macOS/FreeBSD/OpenBSD/Solaris/AIX
Proposed API:
```go
running, err := azdext.IsProcessRunning(pid) // Handles Windows stale PIDs correctly
```
Scope Boundaries
- ✅ All proposals are additive — no breaking changes to existing framework
- ✅ Existing extensions continue to work unchanged
- ❌ NOT proposing moving the extensions themselves into this repo
- ❌ NOT proposing changing the gRPC protocol
- ❌ NOT proposing replacing `mark3labs/mcp-go` — we wrap it with middleware
Impact Estimate
| Metric | Value |
|---|---|
| Proposals | 23 specific items across 4 priority tiers |
| Boilerplate eliminated per extension | ~500-800 lines |
| Total across ecosystem (5 extensions) | ~2,500-4,000 lines |
| Extensions affected | All current and future extensions |
References
- azd-core — Shared utility library with 30+ packages
- azd-app — Service orchestration extension
- azd-exec — Script execution extension
- azd-copilot — AI/Copilot integration extension
- azd-rest — REST API client extension