Platform Engineering March 2026 9 min read

Azure DevOps Managed DevOps Pools vs VMSS Agent Pools: What Actually Changes

Azure DevOps Managed DevOps Pools is GA but barely anyone is using it. Should self-hosted Azure Pipelines teams migrate from VM Scale Set agent pools?

Drop Table Team

Azure DevOps Managed DevOps Pools has been generally available for a while now, yet in our experience, barely anyone is using it. Most teams we work with are still running VMSS agent pools or sticking with Microsoft-hosted agents, and many haven't even heard of the feature. That's a missed opportunity. Microsoft explicitly recommends migrating from VM Scale Set pools for production workloads, and positions Managed DevOps Pools as the evolution of self-hosted agent infrastructure. This post is our attempt to change that: a practical breakdown of what Managed DevOps Pools actually offers, how it compares to VMSS pools, and whether it's worth the switch for your team.

What Are Managed DevOps Pools?

Managed DevOps Pools sit between Microsoft-hosted agents and fully self-hosted VMSS pools. Microsoft manages the underlying infrastructure (provisioning, scaling, patching, and health monitoring) while you retain control over the agent images, networking, and pool configuration. Think of it as a managed service layer on top of what VMSS pools gave you, without the operational overhead of maintaining scale set infrastructure yourself.

The key difference: you no longer manage the scale set, its scaling rules, or the orchestration between Azure DevOps and your VM fleet. Microsoft handles that plane entirely. You focus on what image to run, what network it connects to, and how many agents you need.

🔑 Key Distinction

VMSS pools = you own the scale set, the scaling logic, and the VM lifecycle. Managed DevOps Pools = Microsoft owns the infrastructure, you own the image and network config.

Creating a Managed DevOps Pool

Setting up a Managed DevOps Pool is straightforward. It's an Azure resource, so you create it in your Azure subscription and then link it to your Azure DevOps organisation. There are three main ways to do this:

Azure Portal

Search for "Managed DevOps Pools" in the Azure portal and create a new resource. You'll configure the Azure DevOps organisation and project to connect to, one or more agent images (Microsoft-provided or from your own Azure Compute Gallery), the VM SKU and maximum agent count, optional VNet integration for private networking, and a Dev Center project to associate the pool with. Once created, the pool appears in your Azure DevOps organisation's agent pools automatically.

Infrastructure as Code

For teams that manage infrastructure through code (and you should), Managed DevOps Pools can be deployed via Bicep or ARM templates using the Microsoft.DevOpsInfrastructure/pools resource type. This lets you version-control your pool configuration, deploy consistently across environments, and integrate pool creation into your existing IaC pipelines. For Terraform users, there is a verified Azure module, terraform-azurerm-avm-res-devopsinfrastructure-pool, which provides a fully supported way to manage pools without falling back to the AzAPI provider.
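As a rough illustration, a minimal pool definition in Bicep might look like the sketch below. The property names and API version follow the Microsoft.DevOpsInfrastructure/pools schema as we understand it, and the organisation URL, SKU, and resource names are placeholders; verify against the current schema reference before deploying.

```bicep
// Minimal sketch of a Managed DevOps Pool (illustrative values throughout;
// check the current Microsoft.DevOpsInfrastructure API version and schema)
param devCenterProjectResourceId string
param location string = resourceGroup().location

resource pool 'Microsoft.DevOpsInfrastructure/pools@2024-10-19' = {
  name: 'managed-pool'
  location: location
  properties: {
    devCenterProjectResourceId: devCenterProjectResourceId
    maximumConcurrency: 4 // upper bound on parallel agents
    organizationProfile: {
      kind: 'AzureDevOps'
      organizations: [
        { url: 'https://dev.azure.com/your-org', parallelism: 4 }
      ]
    }
    agentProfile: {
      kind: 'Stateless' // fresh VM per job, no state carried between runs
    }
    fabricProfile: {
      kind: 'Vmss'
      sku: { name: 'Standard_D2ads_v5' } // VM size for agents
      images: [
        { wellKnownImageName: 'ubuntu-22.04', aliases: [ 'linux' ] }
      ]
    }
  }
}
```

Deploying this alongside the rest of your platform IaC keeps pool changes reviewable through the same pull request flow as everything else.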

Prerequisites

Before creating a pool, you need an Azure subscription with the Microsoft.DevOpsInfrastructure resource provider registered, an Azure DevOps organisation connected to the same Entra ID tenant, and a Dev Center and Dev Center project. You'll also need appropriate permissions in both Azure (Contributor on the resource group) and Azure DevOps (pool administrator). If you plan to use custom images, your Azure Compute Gallery must be accessible from the subscription where the pool lives.
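Registering the resource provider is a one-off step per subscription. A quick sketch using the Azure CLI (assumes an authenticated session with sufficient subscription permissions):

```
# Register the Microsoft.DevOpsInfrastructure provider (once per subscription)
az provider register --namespace Microsoft.DevOpsInfrastructure

# Confirm registration has completed before creating a pool
az provider show --namespace Microsoft.DevOpsInfrastructure --query registrationState
```

Registration can take a few minutes; pool creation will fail until the state reads Registered.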

What Changes from VMSS Agent Pools

Infrastructure Management

With VMSS pools, your platform team is responsible for the scale set resource, its scaling profile, fault handling, image updates, and capacity planning. You write the scaling rules, monitor for stuck agents, and deal with Azure platform issues when VMs fail to provision. With Managed DevOps Pools, that entire layer disappears. Microsoft handles provisioning, deprovisioning, health checks, and scaling automatically. Your team configures the pool (image, size, max agents, network) and the service takes care of the rest.

For platform engineering teams already stretched thin, this alone can justify the move. Less infrastructure to maintain means more time spent on developer experience and golden paths rather than debugging agent provisioning failures at 2am.

Scaling and Performance

VMSS scaling has always been a bit of a juggling act. You set minimum and maximum instance counts, configure scale-in and scale-out rules, and hope the timing works out when a surge of pipeline runs hits. Too aggressive on scale-in and agents get killed mid-job. Too conservative and you're paying for idle capacity.

Managed DevOps Pools handles scaling natively, with direct integration into Azure DevOps demand signals. It knows when jobs are queued and can provision agents proactively. Standby agent counts let you keep warm capacity without managing auto-scale rules yourself. The result is faster job pickup times and less wasted compute.

Cost Model

The cost picture is nuanced. With VMSS pools, you pay for the VMs directly, and you have full control over reserved instances, spot pricing, and right-sizing. Managed DevOps Pools also runs on Azure compute, but pricing is handled through the service. You still choose VM SKUs and can configure standby counts to control spend.

Where Managed DevOps Pools can save money is in operational cost. No more engineering time spent maintaining scale set infrastructure, diagnosing provisioning failures, or writing custom scaling logic. For teams where an engineer spends even a few hours a month on agent infrastructure, the managed approach often works out cheaper in total cost of ownership.

💡 Cost Tip

Review your current VMSS pool utilisation before migrating. If your agents sit idle for long periods, Managed DevOps Pools' automatic scaling can significantly reduce waste, but if you're already on reserved instances with high utilisation, model the cost carefully.

Image Control

This is where teams often hesitate. With VMSS pools you have complete control: custom images built in your own Packer pipelines, stored in your Azure Compute Gallery, and deployed on your schedule. Managed DevOps Pools preserves this. You can bring your own Azure Compute Gallery images, so your existing image build pipelines continue to work. You can also use the Microsoft-provided images (the same ones available on Microsoft-hosted agents) if you don't need custom tooling.

The image lifecycle doesn't fundamentally change. You still build, test, and publish images. The difference is that Managed DevOps Pools handles rolling out the image to agents rather than you triggering a VMSS model update.

Networking and Security

Managed DevOps Pools supports Azure Virtual Network injection, so agents can run inside your private network just like VMSS agents do today. This is critical for teams that need agents to reach private endpoints, on-premises resources through ExpressRoute, or resources behind firewalls.

From a security perspective, there are advantages. Each agent runs on a fresh VM by default, so there is no state leakage between pipeline runs. Microsoft handles OS patching and security updates on the infrastructure layer. You still control the image, so your security tooling and hardening is preserved. And because you're not managing the scale set directly, the attack surface on your infrastructure is reduced.

Multiple Images and OS Types in a Single Pool

One of the most useful capabilities in Managed DevOps Pools is configuring multiple images, including different operating systems, within a single pool. With VMSS pools you'd typically need a separate pool per OS or image variant. Managed DevOps Pools lets you define several images (Windows, Linux, or both) on the same pool and route pipelines to the right one using demands or image aliases.

Configuring Multiple Images

When you create or edit a Managed DevOps Pool in the Azure portal (or via Bicep/ARM), you can add multiple images to the pool's image configuration. Each image entry specifies the source (Azure Compute Gallery image, or a Microsoft-provided image like ubuntu-22.04 or windows-2022), an alias you assign to identify the image, and optionally a different VM SKU per image, so your Linux agents can run on a cheaper SKU while Windows agents use a larger one.

For example, a single pool could contain a ubuntu-22.04 image aliased as linux, a windows-2022 image aliased as windows, and a custom Azure Compute Gallery image aliased as linux-custom with your internal tooling baked in.
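In Bicep, that image set might look like the fragment below. Property names are taken from the Microsoft.DevOpsInfrastructure/pools schema as we understand it, and the gallery resource ID is a placeholder for your own image:

```bicep
// Illustrative images array for the example pool (fragment of fabricProfile)
images: [
  { wellKnownImageName: 'ubuntu-22.04', aliases: [ 'linux' ] }
  { wellKnownImageName: 'windows-2022', aliases: [ 'windows' ] }
  {
    // Custom image from your own Azure Compute Gallery (placeholder ID)
    resourceId: '/subscriptions/<sub-id>/resourceGroups/rg-images/providers/Microsoft.Compute/galleries/agentgallery/images/linux-custom/versions/latest'
    aliases: [ 'linux-custom' ]
  }
]
```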

Targeting a Specific Image from YAML

To tell a pipeline which image to use, you add a demands entry in your YAML that matches the image alias. The key demand is ImageOverride:

# Run on the Linux image in the pool
pool:
  name: ManagedPool
  demands:
    - ImageOverride -equals linux

# Run on the Windows image in the same pool
pool:
  name: ManagedPool
  demands:
    - ImageOverride -equals windows

# Run on your custom image
pool:
  name: ManagedPool
  demands:
    - ImageOverride -equals linux-custom

If you don't specify an ImageOverride demand, the pool uses its default image. This makes it straightforward to share a single pool across teams that need different operating systems: .NET teams targeting Windows builds and Node or Python teams running on Linux can all point at the same pool name and vary only the demand.

💡 Practical Tip

Use descriptive aliases that match your team conventions. Naming images linux, windows, and linux-dotnet8 is clearer than relying on gallery image version strings. Document your aliases in your pipeline templates so developers know what's available.

Why This Matters for Platform Teams

Fewer pools means simpler governance. Instead of managing separate pools for each OS and image variant, each with its own scaling config, permissions, and cost tracking, you manage one pool with multiple images. Pipeline templates can abstract the demand away so developers just pick a target like linux or windows and the template handles the rest.
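One way to do that abstraction is a small jobs template that maps a friendly target name onto the pool and demand. A sketch, with the template path, pool name, and aliases all illustrative:

```yaml
# templates/agent-pool.yml -- hides the ImageOverride demand behind a parameter
parameters:
  - name: target
    type: string
    default: linux
    values: [ linux, windows, linux-custom ]

jobs:
  - job: build
    pool:
      name: ManagedPool
      demands:
        - ImageOverride -equals ${{ parameters.target }}
    steps:
      - script: echo "Running on ${{ parameters.target }}"
```

A consuming pipeline then includes the template with, say, target: windows, and developers never touch the demand syntax directly.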

When Managed DevOps Pools Is the Right Move

Not every team should migrate immediately. Here's where it makes the most sense:

You're spending real engineering time on agent infrastructure. If your platform team regularly debugs provisioning failures, maintains scaling rules, or patches agent VMs, the managed model removes that toil directly.

Your scaling is unpredictable. Teams with bursty workloads, monorepos with many parallel jobs, or release trains that spike on certain days all benefit from the native demand-aware scaling.

You want Microsoft-hosted convenience with self-hosted networking. Managed DevOps Pools fills the gap perfectly: agents in your VNet, with Microsoft handling the fleet.

You're standardising agent images across teams. If you already publish images to Azure Compute Gallery, the migration is straightforward and your existing image pipeline stays intact.

When to Stay on VMSS Pools

There are valid reasons to hold off:

You need fine-grained control over the VM lifecycle. If your pipelines rely on persistent agent state between runs, custom startup scripts that modify the VM before the agent starts, or very specific scaling behaviours, VMSS pools still offer more flexibility.

You're heavily invested in reserved instances. If you've committed to reserved VM pricing for your agent fleet and utilisation is high, switching to Managed DevOps Pools may not save money, and could cost more depending on configuration.

Your regulatory environment requires full infrastructure ownership. Some compliance frameworks require that you own and audit every layer of the compute stack. Managed DevOps Pools abstracts the infrastructure layer, which might not meet those requirements.

Migration Path

Microsoft recommends a phased approach, and so do we. Start by running Managed DevOps Pools alongside your existing VMSS pools. Point non-critical pipelines at the new pool first: dev and test environments, internal tooling builds, and low-priority jobs. Validate that your images work correctly, networking behaves as expected, and job pickup times meet your SLAs.

Once you're confident, migrate production pipelines incrementally. Keep your VMSS pool available as a fallback until you've run the new setup through a full release cycle. The migration doesn't require changes to your YAML pipelines beyond updating the pool name.
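In YAML terms the cutover really is that small. A sketch, with both pool names illustrative:

```yaml
# Before: pipeline targeting the existing VMSS agent pool
pool:
  name: vmss-agents-linux

# After: same pipeline targeting the Managed DevOps Pool; only the pool
# name (plus an image demand, if the pool hosts multiple images) changes
pool:
  name: ManagedPool
  demands:
    - ImageOverride -equals linux
```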

⚠️ Migration Tip

Test your custom images thoroughly in Managed DevOps Pools before cutting over. Image compatibility is rarely an issue, but agent startup behaviour and pre-job scripts can behave differently when the provisioning model changes.

What This Means for Platform Teams

Managed DevOps Pools is a clear signal from Microsoft: they want agent infrastructure to be a managed concern, not a platform engineering burden. For teams building internal developer platforms, this is good news. It moves agent management closer to a "configure and forget" model, letting you focus on pipeline templates, security policies, and developer experience rather than VM fleet operations.

It also simplifies the "Microsoft-hosted vs self-hosted" decision. The answer is increasingly: use Managed DevOps Pools. You get the networking and image control of self-hosted with the operational simplicity of Microsoft-hosted. The gap between the two options is narrowing.

How We Can Help

We help teams design and operate Azure Pipelines infrastructure at scale:

  • Agent pool strategy and migration planning
  • Custom agent image pipelines with Azure Compute Gallery
  • Network architecture for private build agents
  • Pipeline security hardening and compliance
  • Platform engineering for CI/CD self-service

If you're evaluating Managed DevOps Pools or planning a migration from VMSS agent pools, get in touch.
