Wait statistics are one of the most powerful diagnostic tools available to database administrators. They tell the story of what’s preventing your SQL database from running queries as fast as it could. Rather than guessing about performance problems, wait statistics provide hard evidence about where your database is spending time waiting instead of doing useful work.
Think of it this way: if a query takes 10 seconds to complete, it’s not executing for all 10 seconds. It’s likely waiting for something, like waiting for disk I/O to complete, waiting for a lock to be released, waiting for a network response. Wait statistics quantify exactly where that time goes.
Dynamics 365 Business Central exposes a page called Database Wait Statistics. This page shows a snapshot of wait time statistics on the tenant database (it’s essentially a page that shows data coming from the system dynamic management view sys.dm_db_wait_stats from Azure SQL). The statistics are aggregated since the time that the database was started and the different wait types are also aggregated into wait categories.
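If you have direct T-SQL access to the database (for example with an on-premises installation or your own Azure SQL database, and the VIEW DATABASE STATE permission), a minimal query like this returns the raw rows that the page then aggregates into categories:

```sql
-- Raw wait statistics behind the Database Wait Statistics page (database-scoped DMV).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;
```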
Here is an example coming from a real-world database:
Whenever I talk with customers and partners about this view, I see people get lost in those values. What are those numbers? How can I read them and extract valuable information?
Here I want to explain how to read this view and extract important KPIs for your Business Central environment.
This view exposes the following 6 columns:
The first column, Wait Category, identifies the category of wait type being reported. Azure SQL groups similar wait conditions into logical categories to make analysis easier. This is an aggregation of the detailed wait types under the hood.
Waiting Tasks Count tells you how many times a task had to wait for this particular resource category. It’s a counter that increments each time Azure SQL encounters this type of wait condition.
A high number here doesn’t necessarily indicate a problem—it depends on context. A Buffer IO wait with billions of counts might be normal for a busy OLTP system, but even a few hundred Lock waits might signal contention issues.
Wait Time in ms is the total cumulative time (in milliseconds) that tasks spent waiting for this resource since the database started. It aggregates all the individual wait durations across all waiting tasks.
This is your total “wasted” time in this category. A high value here is more concerning than a high task count because it directly impacts user experience. A query that a user perceives as slow is likely waiting in one of these categories.
Max Wait Time in ms is the longest single wait event recorded in this category since the database started.
This helps you understand the worst-case scenario. A single massive wait (like 10 seconds) might indicate a temporary spike or contention incident, whereas consistent moderate waits across many tasks suggest a systemic issue.
Signal Wait Time in ms is the time a task spent waiting to get CPU time after its resource became available. It’s the time spent waiting for a CPU thread to execute the task once it’s been unblocked from whatever resource it was waiting for.
This is sometimes called “runnable” wait time. If signal wait time is significant compared to the wait time itself, it means your system has CPU pressure—queries are ready to run, but the CPU is busy. This is a sign you might need to look at CPU utilization or query optimization.
The last column is the timestamp when this database instance started collecting these statistics.
It tells you how long this dataset covers. Statistics from a database started yesterday are more volatile and actionable than statistics from a database that’s been running for a year. Recent snapshots are more reflective of current behavior.
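Because these counters are cumulative since startup, a useful trick (a minimal T-SQL sketch, assuming you can query the database directly) is to take two snapshots a minute apart and diff them, so you see what the database is waiting on right now rather than over its whole lifetime:

```sql
-- Snapshot the cumulative counters, wait, snapshot again, and compute the delta.
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO #baseline
FROM sys.dm_db_wait_stats;

WAITFOR DELAY '00:01:00';   -- observation window: 1 minute

SELECT cur.wait_type,
       cur.waiting_tasks_count - b.waiting_tasks_count AS tasks_delta,
       cur.wait_time_ms        - b.wait_time_ms        AS wait_ms_delta,
       cur.signal_wait_time_ms - b.signal_wait_time_ms AS signal_ms_delta
FROM sys.dm_db_wait_stats AS cur
JOIN #baseline AS b ON b.wait_type = cur.wait_type
WHERE cur.wait_time_ms - b.wait_time_ms > 0
ORDER BY wait_ms_delta DESC;

DROP TABLE #baseline;
```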
Here are some key principles for reading the data exposed on this page.
Not all waits matter equally. A high wait time in a rarely-occurring category matters less than a moderate wait time in a frequently-occurring category.
Divide Signal Wait Time by Wait Time in ms. This tells you what percentage of the wait is CPU contention versus actual resource waits:
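As a concrete example, take the CPU row from the screenshot above: 1,463,234,440 ms of signal wait out of 1,463,945,587 ms of total wait is roughly 99.95%, which (as we will see below) is a strong hint of CPU pressure.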
Divide Wait Time in ms by Waiting Tasks Count to get the average wait duration. This shows whether you have many small waits or fewer large waits:
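Again using the screenshot above, the Buffer IO row gives 692,467,371 ms / 395,725,025 tasks, roughly 1.75 ms per wait on average: many small waits rather than a few huge ones.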
It’s easy to get distracted by the largest numbers in the Waiting Tasks Count column. Instead, focus on which categories are consuming the most total time.
Different wait types are normal in different scenarios:
Let’s take the image published at the beginning of this post. As previously described, this is taken from a real-world Dynamics 365 Business Central tenant (Production environment). Here’s what the analysis of this data reveals:
Idle: 3,021,223,428 waiting tasks | Max: 2,525,610,829 ms
This is actually good news, not bad news. Idle waits represent the time that Azure SQL has nothing to do (the system was idle waiting for client requests). This is normal and expected. It means that the database isn’t under constant pressure: when there’s no work to do, it waits idly. This tells you the system has capacity.
Buffer IO: 395,725,025 tasks | 692,467,371 ms wait time | Max: 18,000 ms | Signal: 52,580,885 ms
This is where the real story begins. Buffer IO waits mean queries are waiting for data pages to be read from disk into memory (or for dirty pages to be written).
Analysis:
This data tells us that the database is doing typical I/O work. The 18,000 ms max wait is worth watching (that was a significant disk latency spike), but overall this is normal for an active database.
CPU: 2,848,326,203 tasks | 1,463,945,587 ms wait time | Max: 11,724 ms | Signal: 1,463,234,440 ms
This is concerning and deserves attention.
Analysis:
This means that almost all of the CPU waits are signal waits, meaning tasks are ready to run but waiting for CPU resources to be available. This is a CPU bottleneck indicator.
It seems that the server has CPU pressure. Queries are fast (sub-millisecond execution times), but there are so many of them that the CPU is the limiting factor.
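When the signal wait ratio points at CPU pressure like this, a common next step (again assuming direct SQL access; this is a sketch against the standard sys.dm_exec_query_stats DMV, not something the Business Central page exposes) is to check which cached statements consumed the most CPU:

```sql
-- Top CPU consumers among cached plans (total_worker_time is reported in microseconds).
SELECT TOP (10)
       qs.total_worker_time / 1000                      AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       st.text                                          AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```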
Lock: 5,182,3704 tasks | 2,306,623,349 ms wait time | Max: 50,175,915 ms | Signal: 6,143,871 ms
Analysis:
It seems that there is significant lock contention here. That 14-hour max wait suggests a transaction held locks for an extraordinarily long time, blocking other queries. There may be queries or batch jobs that hold locks for too long, or there’s high concurrent access causing blocking chains. This needs investigation.
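To catch this kind of blocking while it is happening (the 14-hour wait above is only visible after the fact), a minimal sketch against the standard sys.dm_exec_requests DMV lists the sessions that are currently blocked and which session is blocking them:

```sql
-- Sessions currently waiting on another session (run this while users report slowness).
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time   AS wait_time_ms,
       r.command,
       st.text       AS current_batch
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.blocking_session_id <> 0;
```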
Other: 102,523,125,915 tasks | 753,802,672,883 ms wait time | Max: 1,801,601,418 ms
This seems to be the largest wait time contributor, but it’s the least informative because “Other” aggregates many different wait types.
Analysis:
This can be considered a signal that we should dive into sys.dm_db_wait_stats, where you can see the individual wait types hiding behind each category.
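A minimal sketch for that drill-down, ranking the individual wait types by their share of the total wait time recorded so far:

```sql
-- Individual wait types with their percentage of the overall wait time.
SELECT TOP (20)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS decimal(5, 2)) AS pct_of_total_wait
FROM sys.dm_db_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;
```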
Service Broker: 4,192,632,671 tasks | 186,939,057,136 ms wait time | Max: 332,078,317 ms | Signal: 269,869,337 ms
Over 186 billion milliseconds of Service Broker waits is unusual for a typical Business Central environment.
This means that either your Business Central instance is heavily using Service Broker for job queueing (which is standard), or there’s something unusual going on with job processing. Service Broker is used for Business Central’s background job queue system.
This is likely normal for a Business Central deployment if you’re using job queues heavily, but it’s worth monitoring.
User Wait: 5.6 billion ms (0.36% of all non-idle wait time)
This is healthy, because it indicates idle time between user actions and good spare capacity.
Lock: 2.3 billion ms (0.147% of all non-idle wait time)
This indicates moderate contention. That 14-hour max wait is concerning and needs investigation.
Transaction: 159 million ms (0.010% of all non-idle wait time)
This seems to indicate moderate delays, with some blocking occurring. This is normal for a Business Central database, but we should monitor trends.
Transaction Log IO: 2.6 million ms (0.0002% of all non-idle wait time)
This indicates good log subsystem performance, without I/O bottlenecks.
Memory: 41 million ms (0.003% of all non-idle wait time)
This is a good sign. Memory is abundant and not a constraint here.
In summary, this is what the analysis is revealing:
This Business Central environment has good infrastructure (memory, storage, network all healthy) but is hitting a CPU bottleneck. Someone’s running high-CPU queries, and there’s evidence of extended lock holds from batch jobs that need optimization.
The database isn’t under constant pressure (good idle time), disk I/O is responsive, log writes are fast, and memory is plentiful. However, CPU utilization is maxed out (99.95% signal waits mean queries are ready but CPU is busy), and transaction blocking is occurring (51ms average, with one 83-second incident). The infrastructure is properly configured, but workload optimization is needed.
Memory subsystem: Abundant, no constraints
Log I/O subsystem: Fast and responsive
Disk I/O subsystem: Adequate throughput, minimal contention
Network connectivity: Reasonable latency with occasional spikes
System capacity: Good idle time indicates spare resources
CPU: 99.95% signal wait ratio indicates CPU is the bottleneck
Lock contention: 14-hour max wait and 445ms average need investigation
Transaction blocking: 51.4ms average suggests some lock chains developing
“Other” category: 753 billion ms is 48% of all wait time; drill down into the individual wait types to understand it

The Database Wait Statistics page may seem a really hard page to read, but it can reveal a lot of useful information about your Dynamics 365 Business Central database.
Wait statistics are like a database’s hospital chart, because they can tell us what’s making the patient sick. If a query takes 10 seconds, Azure SQL isn’t actually working for 10 seconds. It’s waiting for something and wait statistics tell us what (wait stats provide objective data about where time is being wasted).
As a Business Central DBA, you don’t need to understand all 100+ wait types. Focus on the top 5 categories by wait time, and you’ll catch 80% of performance problems.
My recommendation is to check these statistics regularly (also via telemetry). If you start collecting baseline metrics today, you’ll have the power to diagnose tomorrow’s performance issues before they become crises.
Original Post https://demiliani.com/2026/01/28/dynamics-365-business-central-understanding-the-database-wait-statistics-page/