Introduction
Most .NET developers face the same AI integration dilemma: How do you build production-ready AI features without vendor lock-in or reinventing orchestration patterns? Two Microsoft technologies solve this cleanly.
Semantic Kernel (SK) addresses orchestration — an open-source SDK that helps you structure prompts, call functions/plugins, and adopt agentic patterns across .NET (and Python/Java). It’s designed as the glue that coordinates AI steps with your application logic.
Microsoft.Extensions.AI covers provider abstraction — it gives .NET a unified interface (IChatClient) for chat/multimodal exchanges, streaming, and embeddings, all within familiar Microsoft.Extensions.* dependency-injection and logging patterns. Your call sites stay stable while you switch providers like Azure OpenAI, OpenAI, or local models.
Mental model: SK = orchestration (prompts, plugins, planning/agents); Microsoft.Extensions.AI = transport/primitives (chat, embeddings, options, telemetry).
Quick Decision Guide
- Extensions.AI only: Simple chat/embedding scenarios, single-step AI calls
- SK + Extensions.AI: Multi-step workflows, function calling, agent patterns, enterprise orchestration
- SK with local models: Same as above, but with full data control (Ollama/local hosting)
Table of Contents
- Microsoft Semantic Kernel (SK) — overview for .NET
- Microsoft.Extensions.AI — overview & packages
- How SK + Microsoft.Extensions.AI work together (why SK matters)
- Code Demo: Automated C# Documentation Generator
- Enterprise Deployment & Data Governance
1) Microsoft Semantic Kernel (SK) — overview for .NET
What it is. Semantic Kernel is Microsoft’s open-source SDK for orchestrating AI with your regular .NET code. It helps you wire prompts, functions (“plugins”), connections to model providers (OpenAI, Azure OpenAI, etc.), plus optional planning/memory features — so your app can coordinate AI steps predictably.
Why it helps. SK smooths integration, reduces complexity across different AI services, and makes behavior more controllable by structuring how prompts and steps run — useful from classroom demos to enterprise apps.
Install (CLI).
# .NET 8/9
dotnet add package Microsoft.SemanticKernel
Compatibility: SK 1.0+ works with .NET 6/8/9. Extensions.AI requires .NET 8+.
Create kernel
using Microsoft.SemanticKernel;
var builder = Kernel.CreateBuilder();
// add services/plugins/connections here
var kernel = builder.Build();
ASP.NET Core registration
var builder = WebApplication.CreateBuilder();
builder.Services.AddKernel(); // register the SK Kernel
// add services/plugins/connections here
var app = builder.Build();
These follow the official overview: the Kernel is the DI container that holds services and plugins SK orchestrates for you.
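With the Kernel registered in DI, it can be resolved like any other service. A minimal sketch, assuming an ASP.NET Core minimal API with a chat connection already configured (the endpoint path and prompt are illustrative):

```csharp
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder();
builder.Services.AddKernel();
// add a chat completion connection here (e.g., Azure OpenAI)
var app = builder.Build();

// The Kernel arrives through DI like any other registered service.
app.MapGet("/summarize", async (Kernel kernel, string text) =>
{
    // InvokePromptAsync runs a one-off prompt through the kernel's chat service.
    var result = await kernel.InvokePromptAsync($"Summarize in one sentence: {text}");
    return result.ToString();
});

app.Run();
```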
Connect to a model (example: Azure OpenAI chat).
builder.Services.AddAzureOpenAIChatCompletion(
    deploymentName: "your-deployment-name",
    endpoint: "https://your-resource.openai.azure.com/",
    apiKey: "your-api-key");
(Representative of the “connections” pattern shown in the Learn overview.)
Core building blocks (at a glance).
- Connections — adapters to AI services & data.
- Plugins — functions the model can call (semantic prompts or native C#).
- Planner — constructs/executes plans (optional).
- Memory — embeddings/vector-store abstractions for retrieval.
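As a concrete sketch of the plugin building block: a native C# plugin is just a class whose methods carry the `[KernelFunction]` attribute. The class and method names below are illustrative, not part of SK itself:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Illustrative native plugin: each [KernelFunction] method becomes a named,
// discoverable function the model (or your own code) can invoke.
public sealed class TimePlugin
{
    [KernelFunction, Description("Returns the current UTC time in ISO 8601 format.")]
    public string GetUtcNow() => DateTime.UtcNow.ToString("O");
}

// Registered alongside the connections:
// builder.Plugins.AddFromType<TimePlugin>();
```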
2) Microsoft.Extensions.AI — the unified .NET AI abstractions
What it is. Microsoft.Extensions.AI gives .NET a provider-agnostic way to talk to AI services, using familiar Microsoft.Extensions.* patterns (DI, middleware). It centers on two key abstractions: IChatClient (chat/multimodal) and IEmbeddingGenerator (embeddings). On top, it offers ready-made middleware for tool calling, caching, telemetry, and more.
Packages (which to reference).
- Microsoft.Extensions.AI.Abstractions — core types/interfaces (e.g., IChatClient).
- Microsoft.Extensions.AI — adds higher-level helpers and pipeline/middleware.
Most apps take Microsoft.Extensions.AI plus one provider library (OpenAI, Azure, Ollama, etc.).
Thread-safety. The API docs note that all IChatClient members are thread-safe for concurrent use—ideal for web apps and background services.
Typical usage (one-shot & streaming).
using Microsoft.Extensions.AI;
// One-shot
ChatResponse r = await chatClient.GetResponseAsync("What is AI?");
Console.WriteLine(r.Text);
// Streaming
await foreach (var update in chatClient.GetStreamingResponseAsync("Stream a haiku."))
    Console.Write(update.Text);
These patterns — request/streaming, plus tool-calling, caching, telemetry, DI pipelines — are all covered in the Learn article with full examples.
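For instance, the tool-calling middleware can be layered onto any IChatClient. A hedged sketch, assuming a configured `chatClient`; the `get_weather` tool and its logic are made up for illustration:

```csharp
using Microsoft.Extensions.AI;

// Wrap an existing IChatClient so the pipeline automatically executes
// any .NET functions the model decides to call.
IChatClient client = chatClient
    .AsBuilder()
    .UseFunctionInvocation()
    .Build();

var options = new ChatOptions
{
    // AIFunctionFactory turns an ordinary delegate into a model-callable tool.
    Tools = [AIFunctionFactory.Create(
        (string city) => city == "Oslo" ? "3°C" : "unknown",
        name: "get_weather",
        description: "Gets the current temperature for a city.")]
};

var response = await client.GetResponseAsync("What's the weather in Oslo?", options);
Console.WriteLine(response.Text);
```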
Why it pairs with SK. Keep orchestration (prompts/plugins/flow) in SK, and keep model I/O behind IChatClient. That clean boundary lets you swap providers without refactoring your app logic and layer cross-cutting concerns (logging/Otel, caching, rate-limits) in a single place.
3) How SK and Microsoft.Extensions.AI help each other (and why SK matters)
What each layer does.
- Microsoft.Extensions.AI gives you a unified, provider-agnostic client (IChatClient) with pipelines for streaming, tool-calling, caching, telemetry, DI, etc. It standardizes “how to talk to models.”
- Semantic Kernel (SK) gives you orchestration: a Kernel (DI container), plugins (functions your app exposes to the model), optional memory, and planning guidance — i.e., “how AI steps fit together” around your .NET code.
Why SK makes an Ext.AI–based app better
- Organized function exposure
  Instead of hand-rolling JSON tool specs or scattering helper methods, SK lets you register plugins (native C# or imported) the model can call. You get a consistent way to expose capabilities, reuse them, and keep them discoverable.
- A real orchestration home
  Ext.AI standardizes the call; SK standardizes the flow. SK’s Kernel hosts your services and plugins, so multi-step interactions (prompt → tool → prompt) live in one place instead of ad-hoc code paths.
- Modern “planning” without legacy planners
  The current guidance is to favor function calling (with SK driving the loop) over legacy planners; Microsoft documents a migration from the Stepwise Planner to automatic function calling because it’s more reliable and token-efficient. SK bakes that orchestration pattern in so you don’t re-implement it.
- Memory & retrieval when you scale
  When your prompts outgrow a single file, SK’s memory abstractions let you plug in vector stores (Azure AI Search, Redis, etc.) without hard-wiring your app to one backend.
- Works alongside Ext.AI’s pipelines
  Keep Ext.AI’s strengths — streaming, telemetry (OpenTelemetry), caching, and middleware — for the transport layer, while SK focuses on orchestration and plugin lifecycle. Together you get clean separation of concerns and observability.
If you skip SK (Ext.AI only), what changes?
You can ship with Ext.AI alone — but you’ll likely end up rebuilding orchestration glue:
- Ad-hoc tool surface: You’ll describe and manage callable functions yourself (naming, arguments, versioning), and wire the call/return loop manually.
- Scattered prompt/flow logic: Multi-step chains (prompt → tool → follow-up) sit in controllers/services without a common orchestration model.
- Harder to expand: Adding retrieval (“memory”), importing external tools (OpenAPI/MCP), or growing into multi-step agent-like behavior becomes bespoke work.
- Planning migration is on you: You’ll need to emulate the newer auto function calling approach and its retries/guards yourself.
Rule of thumb:
If your app is single-step (ask → answer) with some middleware, Ext.AI alone is fine.
If you need repeatable multi-step flows, standardized plugins, optional memory, or a path to agentic patterns, add SK as the orchestration layer.
Bonus: structured outputs fit both layers
For machine-readable results (like code-doc JSON) you can request JSON / JSON-schema output at the client layer (Ext.AI), then let SK consume and route the results as another step. Azure’s Structured Outputs documentation explains the schema guarantees compared with the older “JSON mode.”
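At the client layer, this can look like the generic structured-output helper in Microsoft.Extensions.AI. A sketch, assuming a configured `chatClient`; the `MethodDoc` record and its fields are illustrative:

```csharp
using Microsoft.Extensions.AI;

// The generic overload derives a JSON schema from MethodDoc, asks the model
// for conforming JSON, and deserializes the result for you.
var response = await chatClient.GetResponseAsync<MethodDoc>(
    "Document the method 'public int Add(int a, int b)' as JSON.");
Console.WriteLine(response.Result.Summary);

// Illustrative shape for machine-readable doc output.
public record MethodDoc(string Name, string Summary, string Returns);
```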
4) Code Demo: Automated C# Documentation with SK + Extensions.AI
What the demo does: This demo showcases a practical “code-to-docs” workflow that takes any C# file as input and generates clean, structured Markdown documentation. It combines Semantic Kernel’s orchestration capabilities with Extensions.AI’s provider-agnostic chat interface to create a tool that automatically documents classes, methods, parameters, and return types.
The workflow:
- Parse C# source — a custom SK plugin uses regex patterns to extract symbols (namespace, classes, public methods) from any `.cs` file
- Structure as JSON — the parser converts code symbols into structured JSON data
- AI analysis — SK orchestrates a prompt template that receives the JSON and generates professional documentation
- Markdown output — the result is clean, consistent documentation ready for wikis, README files, or onboarding materials
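The parsing step can be sketched with a small regex-based extractor. This is a simplification of what such a plugin might do, not the demo’s actual code; a production version would use Roslyn instead of regex:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Simplified symbol extraction: pull class names and public method names
// out of C# source text with regex. Roslyn is the robust alternative.
public static class CodeParser
{
    public static List<string> GetClassNames(string source) =>
        Regex.Matches(source, @"\bclass\s+(\w+)")
             .Select(m => m.Groups[1].Value)
             .ToList();

    public static List<string> GetPublicMethods(string source) =>
        Regex.Matches(source, @"public\s+[\w<>,\s]+?\s+(\w+)\s*\(")
             .Select(m => m.Groups[1].Value)
             .ToList();
}
```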
Architecture highlights:
- SK plugin: `CodeParserPlugin` exposes a `GetSymbols` function that the kernel can invoke
- Prompt orchestration: SK manages the template that transforms raw code symbols into documentation
- Extensions.AI: provides the underlying chat client abstraction, keeping the code provider-agnostic
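The orchestration step can itself be sketched as an SK prompt template. A hedged example, assuming a built `kernel` and a `symbolsJson` string produced by the parser; the template wording is illustrative:

```csharp
using Microsoft.SemanticKernel;

// SK substitutes {{$symbols}} from the KernelArguments at render time.
const string Template = """
    You are a documentation writer. Given these code symbols as JSON:
    {{$symbols}}
    Produce concise Markdown documentation for each class and method.
    """;

var result = await kernel.InvokePromptAsync(
    Template,
    new KernelArguments { ["symbols"] = symbolsJson });
Console.WriteLine(result);
```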
Example input (RcaAgent.cs):
public sealed class RcaAgent
{
public RcaResult AnalyzeIncident(IncidentData incident, IEnumerable<LogEntry> logs) { ... }
public LogAnalysis AnalyzeLogs(IEnumerable<LogEntry> logs) { ... }
// ... more methods
}
Test Output:
dotnet run -- "/path/to/RcaAgent.cs"
Generated Output:
# Summary for RcaAgent
## Classes
- **RcaAgent** — Root Cause Analysis Agent that analyzes logs and incidents to identify potential causes
## Methods
### AnalyzeIncident
`public RcaResult AnalyzeIncident(IncidentData incident, IEnumerable<LogEntry> logs)`
- Analyzes the provided incident together with a sequence of log entries and returns an RcaResult.
- Parameters:
- incident (IncidentData) — the incident instance to be analyzed
- logs (IEnumerable<LogEntry>) — the sequence of log entries to use in the analysis
- Returns: RcaResult — the result of the root-cause analysis
### AnalyzeLogs
`public LogAnalysis AnalyzeLogs(IEnumerable<LogEntry> logs)`
- Analyzes the provided sequence of log entries and returns a LogAnalysis.
- Parameters:
- logs (IEnumerable<LogEntry>) — the sequence of log entries to analyze
- Returns: LogAnalysis — the outcome of the log analysis
Why this demonstrates SK + Extensions.AI value:
- Clean separation: Extensions.AI handles the model communication, SK handles the orchestration
- Reusable plugins: the `CodeParserPlugin` can be reused across different documentation workflows
- Provider flexibility: swap from OpenAI to Azure OpenAI to local models without changing business logic
- Structured prompting: SK’s template engine makes the documentation format consistent and maintainable
Real-world impact: This type of automated documentation generation can dramatically improve developer onboarding, code reviews, and knowledge sharing — especially for large codebases where manual documentation lags behind development.
5) Enterprise Deployment & Data Governance
The Enterprise Reality
Many companies have blocked AI usage due to data governance concerns and the risk of sensitive code being sent to external providers. While this demo uses an OpenAI API key for simplicity, the entire workflow can run completely offline using Ollama and open-source models like Llama, CodeLlama, or Mistral.
Full Data Control
With Ollama, your code never leaves your environment — you maintain full control over your data while still getting the productivity benefits of AI-assisted documentation. The Extensions.AI abstraction makes this provider swap trivial: just change the client configuration, and your orchestration logic remains identical.
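A sketch of what that swap can look like, assuming the OllamaSharp package (which implements IChatClient) and a locally pulled model; the model name and URL are assumptions to adjust for your setup:

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

// Point the same IChatClient abstraction at a local Ollama endpoint;
// nothing downstream (SK orchestration included) has to change.
IChatClient local = new OllamaApiClient(
    new Uri("http://localhost:11434"), "llama3.1");

var reply = await local.GetResponseAsync("Summarize this class in one paragraph.");
Console.WriteLine(reply.Text);
```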
Enterprise Benefits
- 40–60% reduction in AI integration code
- Provider switching in <5 lines of code changes
- Built-in telemetry reduces debugging time
- Standardized patterns improve team velocity
- Complete offline capability for sensitive codebases
Common Gotchas
- SK prompt templates use `{{$variable}}`, not `{{variable}}`
- Multi-turn Extensions.AI conversations take a `List<ChatMessage>`; the plain-string overload is a single-message convenience
- Plugin methods need `[KernelFunction]` attributes
Ready to Try It Yourself?
The complete working example is available on GitHub: charp-summarizer. Simply clone the repo, point it at any .cs file, and get instant method and class summaries that make onboarding new developers significantly easier. Whether you use OpenAI for quick experiments or Ollama for production privacy, the choice is yours.
Conclusion
Semantic Kernel + Extensions.AI gives .NET developers a production-ready path to AI integration without vendor lock-in or architectural complexity. SK handles orchestration and workflow patterns, while Extensions.AI provides clean provider abstraction and enterprise-grade middleware.
For teams building anything beyond simple chat scenarios, this combination delivers the structured, maintainable foundation needed to scale AI features across enterprise applications — with full control over where your data goes.