Azure SQL Databases - DTU or vCore

DTU or vCore? That is the question.

Ever since they first appeared as an offering a long time ago, Azure SQL Databases have had a certain mystery attached to them regarding which performance tier you should use.


Database Throughput Unit, or DTU, is a way to describe the relative capacity of a performance SKU for Basic, Standard, and Premium databases. DTUs are based on a blended measure of CPU, memory, and I/O reads and writes. When you want to increase the "power" of a database, you simply increase the number of DTUs allocated to it. A 100 DTU database is much more powerful than a 50 DTU one.

Table of DTU SKUs

|  | Basic | Standard | Premium |
| --- | --- | --- | --- |
| Target workload | Development and production | Development and production | Development and production |
| Uptime SLA | 99.99% | 99.99% | 99.99% |
| CPU | Low | Low, Medium, High | Medium, High |
| IO throughput (approximate) | 1-5 IOPS per DTU | 1-5 IOPS per DTU | 25 IOPS per DTU |
| IO latency (approximate) | 5 ms (read), 10 ms (write) | 5 ms (read), 10 ms (write) | 2 ms (read/write) |

You could say that the DTU model is a great solution for people who want a preconfigured pool of resources out of the box for their workloads. The problem with the DTU model is that when you hit the DTU limit, you get throttled, which shows up as query slowdowns or timeouts, and the only fix is to increase the number of DTUs.
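That scale-up is a one-liner with the Azure CLI. A minimal sketch, assuming hypothetical resource names (`contoso-rg`, `contoso-sql`, `contoso-db`); the command is assembled and printed rather than executed, so you can review it before running it against a real subscription:

```shell
#!/bin/sh
# Hypothetical resource names - replace with your own.
RG="contoso-rg"; SERVER="contoso-sql"; DB="contoso-db"

# Scale the database to the Standard S3 objective (100 DTUs).
CMD="az sql db update --resource-group $RG --server $SERVER --name $DB --service-objective S3"

# Printed instead of run, so the sketch works without a subscription.
echo "$CMD"
```

Dropping back down (say to S1) is the same command with a smaller service objective, and the change happens online.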

The concept behind DTUs is that when you need to give a database more resources, you increase its DTU count. One issue, however, is that you don't have the possibility of individually scaling CPU, storage, and RAM.


The vCore model takes a more classical approach, closer to, let's say, on-premises workloads. This mapping allows you to specify the number of cores, RAM, and I/O. So compared to the DTU model, where CPU, RAM, and I/O increase together automatically, the vCore model lets you scale them individually, which gives you a lot of flexibility.

Scaling up and down in the vCore model is done on two planes with different CPU specs based on generation or VM model:

  • CPU plane – Generation specific
  • Storage plane – Generation specific -> vCore specific
(Figure: how DTU and vCore scale. Source: Azure docs.)

| Hardware generation | Compute | Memory |
| --- | --- | --- |
| Gen4 | Intel E5-2673 v3 (Haswell) 2.4 GHz processors; provision up to 24 vCores (1 vCore = 1 physical core) | 7 GB per vCore; provision up to 168 GB |
| Gen5 | Provisioned compute: Intel E5-2673 v4 (Broadwell) 2.3 GHz and Intel SP-8160 (Skylake)* processors, provision up to 80 vCores (1 vCore = 1 hyper-thread). Serverless compute: same processors, auto-scale up to 16 vCores (1 vCore = 1 hyper-thread) | Provisioned compute: 5.1 GB per vCore, provision up to 408 GB. Serverless compute: auto-scale up to 24 GB per vCore, up to 48 GB max |
| Fsv2-series | Intel Xeon Platinum 8168 (Skylake) processors, featuring a sustained all-core turbo clock speed of 3.4 GHz and a maximum single-core turbo of 3.7 GHz; provision 72 vCores (1 vCore = 1 hyper-thread) | 1.9 GB per vCore; provision 136 GB |
| M-series | Intel Xeon E7-8890 v3 2.5 GHz processors; provision 128 vCores (1 vCore = 1 hyper-thread) | 29 GB per vCore; provision 3.7 TB |

As you can see, each generation or VM series comes with a specific CPU model, and RAM is allocated per vCore.

Choosing a generation can look complicated, but you're not locked into a choice: if you decide post-deployment that a different generation or VM type works better for you, that option is available.
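That post-deployment flexibility is scriptable, too. A sketch, again with hypothetical resource names, that moves a database to General Purpose on Gen5 hardware with 8 vCores; note that the edition, hardware family, and vCore count are separate flags, which is exactly the individual scaling the vCore model promises. The command is printed, not executed:

```shell
#!/bin/sh
# Hypothetical resource names - replace with your own.
RG="contoso-rg"; SERVER="contoso-sql"; DB="contoso-db"

# Edition, hardware generation, and vCore count are independent knobs.
CMD="az sql db update --resource-group $RG --server $SERVER --name $DB --edition GeneralPurpose --family Gen5 --capacity 8"

echo "$CMD"
```

Swapping `--family` to another generation (for example `Fsv2`) while keeping the same vCore count is how you would test a different hardware series.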

DTU vs vCores

Now that we understand the difference between DTUs and vCores, let’s try and compare them.

|  | DTU model | vCore model |
| --- | --- | --- |
| Memory |  | 3.7 TB |
| Storage | 4 TB | 4 TB |
| SKUs | Basic, Standard, Premium | Gen 4, Gen 5, Fsv2, M |
| Scaling | Per DTU | Compute, Memory + Storage |
| Cost | DTU + Backup Storage | vCore, Storage, Backup Storage + Logs Storage |

As you can see from the table, there's a hefty difference in specs between the DTU and vCore models. After carefully analyzing the options available, you might be inclined to go straight for the vCore model rather than the DTU one, but the difference is in the details.

One question that you might have would be "How many DTUs are equivalent to a vCore?", for which I can safely say that a generic mapping would be:

  • 100 DTUs Standard = 1 vCore – General Purpose
  • 125 DTUs Premium = 1 vCore – Business Critical
  • 8000 DTUs -> 80 vCores, although the maximum number of DTUs per SQL DB is 4000

Anything less than 100 DTUs means you're using less than a full vCPU – more like a shared core – but testing is required to find the sweet spot.
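Finding that sweet spot doesn't have to be guesswork: Azure exposes a `dtu_consumption_percent` metric per database. A sketch, assuming a hypothetical subscription ID and resource names, that builds and prints the metrics query you would run; sustained values near 100% mean you're being throttled, while consistently low values mean you can scale down:

```shell
#!/bin/sh
# Hypothetical IDs - substitute your own subscription and resources.
SUB="00000000-0000-0000-0000-000000000000"
DBID="/subscriptions/$SUB/resourceGroups/contoso-rg/providers/Microsoft.Sql/servers/contoso-sql/databases/contoso-db"

# Average DTU consumption for the database over the default time window.
CMD="az monitor metrics list --resource $DBID --metric dtu_consumption_percent --aggregation Average"

echo "$CMD"
```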

Another benefit of the vCore model is that you can reserve capacity in advance for one or three years at a better price. Plus, if you already have an on-premises SQL Server license with Software Assurance, you can tick the Hybrid Benefit checkbox and get even more bang for your buck, just as you would with an Azure VM.
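That Hybrid Benefit checkbox has a CLI equivalent as well: `--license-type BasePrice` tells Azure you're bringing your own license, while `LicenseIncluded` is the regular pay-as-you-go price. A sketch with hypothetical names, printing the command rather than running it:

```shell
#!/bin/sh
# Hypothetical resource names - replace with your own.
RG="contoso-rg"; SERVER="contoso-sql"; DB="contoso-db"

# BasePrice = bring your own SQL license (Azure Hybrid Benefit);
# LicenseIncluded = pay the full vCore price.
CMD="az sql db update --resource-group $RG --server $SERVER --name $DB --license-type BasePrice"

echo "$CMD"
```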

So should you move to vCores?

The answer to this question is "it depends". The vCore model looks more appealing from a traditional sizing perspective, but the real cost benefit only starts showing at 400 DTUs and up. If your workloads use fewer than 400 DTUs (roughly 4 vCores), I would stick with the DTU model, and when the time comes, just press a button in the portal and migrate to the vCore model.
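That portal button maps to the same update command used for any SKU change: switching the edition from a DTU tier to a vCore tier is the migration. A sketch, with hypothetical names again, moving a Standard DTU database to General Purpose with 4 vCores (the rough equivalent of 400 DTUs from the mapping above); printed, not executed:

```shell
#!/bin/sh
# Hypothetical resource names - replace with your own.
RG="contoso-rg"; SERVER="contoso-sql"; DB="contoso-db"

# Moving from a DTU edition (Standard) to a vCore edition (GeneralPurpose)
# is just an in-place edition change.
CMD="az sql db update --resource-group $RG --server $SERVER --name $DB --edition GeneralPurpose --family Gen5 --capacity 4"

echo "$CMD"
```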

Besides the tiers mentioned above, there are two other tiers, Serverless and Hyperscale, which bring benefits in some use cases but not all of them.

In the end, what I can say is that DTUs are not yet ready to be replaced by vCores, but I'm expecting that as the next step. Until we get a clear alternative to DTUs, they are here to stay.