Dynamics 365 Business Central: understanding the Database Wait Statistics page.

Wait statistics are one of the most powerful diagnostic tools available to database administrators. They tell the story of what’s preventing your SQL database from running queries as fast as it could. Rather than guessing about performance problems, wait statistics provide hard evidence about where your database is spending time waiting instead of doing useful work.

Think of it this way: if a query takes 10 seconds to complete, it’s not executing for all 10 seconds. It’s likely spending much of that time waiting for something: for disk I/O to complete, for a lock to be released, for a network response. Wait statistics quantify exactly where that time goes.

Dynamics 365 Business Central exposes a page called Database Wait Statistics. This page shows a snapshot of wait time statistics on the tenant database (it’s essentially a page that shows data coming from the system dynamic management view sys.dm_db_wait_stats in Azure SQL). The statistics are aggregated since the database was started, and the individual wait types are grouped into wait categories.
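
If you have direct access to a database (for example, a sandbox or an on-premises installation), you can query the underlying DMV yourself. Here is a minimal sketch that returns the raw counters the page is built on (note that the DMV exposes individual wait types; the grouping into categories is done by the page):

    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           max_wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_db_wait_stats
    ORDER BY wait_time_ms DESC;  -- most impactful wait types first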

Here is an example coming from a real-world database (screenshot of the Database Wait Statistics page; its values are analyzed in detail later in this post).

When I talk with customers and partners about this view, I often see people lost in those values. What are those numbers? How can I read them and extract valuable information?

Here I want to explain how to read this view and extract important KPIs for your Business Central environment.

Understanding the View Fields

This view exposes the following 6 columns:

Wait Category

The first column identifies the category of wait type being reported. Azure SQL groups similar wait conditions into logical categories to make analysis easier; this is an aggregation of detailed wait types under the hood.

Waiting Tasks Count

This field tells you how many times a task had to wait for this particular resource category. It’s a counter that increments each time Azure SQL encounters this type of wait condition.

A high number here doesn’t necessarily indicate a problem—it depends on context. A Buffer IO wait with billions of counts might be normal for a busy OLTP system, but even a few hundred Lock waits might signal contention issues.

Wait Time in ms

This is the total cumulative time (in milliseconds) that tasks spent waiting for this resource since the database started. It aggregates all the individual wait durations across all waiting tasks.

This is your total “wasted” time in this category. A high value here is more concerning than a high task count because it directly impacts user experience: a query a user perceives as slow is likely spending its time waiting in one of these categories.

Max Wait Time in ms

The longest single wait event recorded in this category since the database started.

This helps you understand the worst-case scenario. A single massive wait (like 10 seconds) might indicate a temporary spike or contention incident, whereas consistent moderate waits across many tasks suggest a systemic issue.

Signal Wait Time in ms

This is the time a task spent waiting to get CPU time after its resource became available. It’s the time waiting for a CPU thread to execute the task once it’s been unblocked from whatever resource it was waiting for.

This is sometimes called “runnable” wait time. If signal wait time is significant compared to the wait time itself, it means your system has CPU pressure—queries are ready to run, but the CPU is busy. This is a sign you might need to look at CPU utilization or query optimization.

Database start time

The timestamp when this database instance started collecting these statistics.

It tells you how long this dataset covers. Statistics from a database started yesterday are more volatile and actionable than statistics from a database that’s been running for a year. Recent snapshots are more reflective of current behavior.

How to read this data?

Here are some key principles for reading the data exposed on this page.

Principle 1: Focus on the big numbers

Not all waits matter equally. A high wait time in a rarely-occurring category matters less than a moderate wait time in a frequently-occurring category.

Principle 2: Calculate the ratio

Divide Signal Wait Time in ms by Wait Time in ms (see the query sketch after this list). This tells you what percentage of the wait is CPU contention versus actual resource waits:

  • < 10%: Normal; signal waits are minimal.
  • 10-20%: Moderate CPU pressure.
  • > 20%: Significant CPU pressure—optimize queries or add CPU resources.
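
Here is a minimal T-SQL sketch of this calculation against the underlying DMV (NULLIF guards against division by zero for wait types that recorded no wait time):

    SELECT wait_type,
           wait_time_ms,
           signal_wait_time_ms,
           CAST(100.0 * signal_wait_time_ms
                / NULLIF(wait_time_ms, 0) AS decimal(5, 2)) AS signal_pct
    FROM sys.dm_db_wait_stats
    WHERE wait_time_ms > 0
    ORDER BY wait_time_ms DESC;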

Principle 3: Average wait per task

Divide Wait Time in ms by Waiting Tasks Count to get the average wait duration (see the combined sketch after Principle 4). This shows whether you have many small waits or fewer large waits:

  • Small average with high count: Many minor contentions (this is normal for busy systems).
  • Large average with low count: Fewer but more serious incidents.

Principle 4: Rank by wait time, not count

It’s easy to get distracted by the largest numbers in the Waiting Tasks Count column. Instead, focus on which categories are consuming the most total time.
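
Principles 3 and 4 can be combined in a single query sketch: rank the wait types by total wait time and add the average wait per task, so you can see at a glance whether a category’s time comes from many small waits or a few large ones:

    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           waiting_tasks_count,
           wait_time_ms * 1.0
               / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_db_wait_stats
    WHERE waiting_tasks_count > 0
    ORDER BY wait_time_ms DESC;  -- rank by total time, not by count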

Principle 5: Understand wait type context

Different wait types are normal in different scenarios:

  • OLTP systems (like Business Central): Expect Buffer IO, Lock, and PAGEIOLATCH waits.
  • Analytics/Reporting: Expect higher CPU and other disk IO waits.
  • Batch jobs: Expect Lock and Transaction waits.

Let’s analyze a real-world case.

Let’s take the image published at the beginning of this post. As previously described, this is taken from a real-world Dynamics 365 Business Central tenant (Production environment). Here’s what the analysis of this data reveals:

1. Idle Wait Time:

Idle: 3,021,223,428 waiting tasks | Max: 2,525,610,829 ms

This is actually good news, not bad news. Idle waits represent time when Azure SQL has nothing to do (the system was idle, waiting for client requests). This is normal and expected, and it means that the database isn’t under constant pressure: when there’s no work to do, it waits idly. This tells you the system has spare capacity.

2. Buffer IO:

Buffer IO: 395,725,025 tasks | 692,467,371 ms wait time | Max: 18,000 ms | Signal: 52,580,885 ms

This is where the real story begins. Buffer IO waits mean queries are waiting for data pages to be read from disk into memory (or for dirty pages to be written).

Analysis:

  • 692 million milliseconds ≈ 192 hours of cumulative wait time since database startup
  • Average wait per task: 692,467,371 / 395,725,025 ≈ 1.75 ms per wait (reasonable)
  • Signal wait ratio: 52,580,885 / 692,467,371 ≈ 7.6% (acceptable; only 7.6% CPU contention mixed in)

This data tells us that the database is doing typical I/O work. The 18,000 ms max wait is worth watching (it marks a significant disk latency spike at some point), but overall this is normal for an active database.

3. CPU:

CPU: 2,848,326,203 tasks | 1,463,945,587 ms wait time | Max: 11,724 ms | Signal: 1,463,234,440 ms

This is concerning and deserves attention.

Analysis:

  • Signal wait time nearly equals total wait time (1.463B vs 1.463B)
  • Signal ratio: 1,463,234,440 / 1,463,945,587 ≈ 99.95% (nearly 100%!)
  • Average wait: 1,463,945,587 / 2,848,326,203 ≈ 0.51 ms per task

Almost all of the CPU waits are signal waits: tasks are ready to run but waiting for a CPU to become available. This is a CPU bottleneck indicator.

The server likely has CPU pressure. Individual waits are short (about half a millisecond on average), but there are so many of them that the CPU is the limiting factor.
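
If you can query the database directly, a common starting point (a generic sketch, not something Business Central-specific) is to list the statements that have consumed the most CPU according to the plan cache:

    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,  -- total_worker_time is in microseconds
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
               ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;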

4. Lock Category:

Lock: 5,182,370 tasks | 2,306,623,349 ms wait time | Max: 50,175,915 ms | Signal: 6,143,871 ms

Analysis:

  • Lock max wait of 50+ million milliseconds ≈ 14 hours! (This is genuinely concerning)
  • Signal ratio: 6,143,871 / 2,306,623,349 ≈ 0.27% (almost no CPU pressure; it’s pure lock contention)
  • Average wait: 2,306,623,349 / 5,182,370 ≈ 445 ms per wait

There is significant lock contention here. That 14-hour max wait suggests a transaction held locks for an extraordinarily long time, blocking other queries. There may be queries or batch jobs that hold locks too long, or there’s high concurrent access causing blocking chains. This needs investigation.
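
To catch blocking while it’s happening, you can periodically run a sketch like the following against the live database; it lists the sessions that are currently blocked and which session is blocking them:

    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time AS wait_time_ms,
           r.wait_resource
    FROM sys.dm_exec_requests AS r
    WHERE r.blocking_session_id <> 0;  -- only currently blocked requests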

5. Other:

Other: 102,523,125,915 tasks | 753,802,672,883 ms wait time | Max: 1,801,601,418 ms

This seems to be the largest wait time contributor, but it’s the least informative because “Other” aggregates many different wait types.

Analysis:

  • 753+ billion milliseconds of cumulative wait time—this dwarfs everything else
  • But without knowing which specific wait types are included, it’s hard to act on this
  • The maximum single wait of 1.8 billion ms (nearly 500 hours) is probably a long-running background process

This can be considered a signal that we should dive into the underlying details in sys.dm_db_wait_stats, where the individual wait types are visible.
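
A common drill-down sketch is to list the top raw wait types while filtering out a few waits that are known to be benign or idle-related (the exclusion list below is deliberately short; extend it as needed):

    SELECT TOP (20)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_db_wait_stats
    WHERE wait_type NOT IN (
            -- a partial list of benign/idle wait types; extend as needed
            'SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
            'BROKER_TO_FLUSH', 'XE_TIMER_EVENT', 'CHECKPOINT_QUEUE',
            'WAITFOR', 'DIRTY_PAGE_POLL')
    ORDER BY wait_time_ms DESC;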

6. Service Broker:

Service Broker: 4,192,632,671 tasks | 186,939,057,136 ms wait time | Max: 332,078,317 ms | Signal: 269,869,337 ms

Over 186 billion milliseconds of Service Broker waits is unusual for a typical Business Central environment.

Either your Business Central instance is making heavy use of Service Broker for job queueing (Service Broker underpins Business Central’s background job queue system), or there’s something unusual going on with job processing.

This is likely normal for a Business Central deployment if you’re using job queues heavily, but it’s worth monitoring.

7. User Wait:

User Wait: 5.6 billion ms (0.36% of all non-idle wait time)

This is healthy, because it indicates idle time between user actions and good spare capacity.

8. Lock:

Lock: 2.3 billion ms (0.147% of all non-idle wait time)

This indicates moderate contention. That 14-hour max wait is concerning and needs investigation.

9. Transaction:

Transaction: 159 million ms (0.010% of all non-idle wait time)

This seems to indicate moderate delays, with some blocking occurring. This is normal for a Business Central database, but we should monitor trends.

10. Transaction Log I/O:

Transaction Log IO: 2.6 million ms (0.0002% of all non-idle wait time)

This indicates good log subsystem performance, without I/O bottlenecks.

11. Memory:

Memory: 41 million ms (0.003% of all non-idle wait time)

This is a good sign. Memory is abundant and not a constraint here.

In summary, this is what the analysis reveals:

This Business Central environment has good infrastructure (memory, storage, network all healthy) but is hitting a CPU bottleneck. Someone’s running high-CPU queries, and there’s evidence of extended lock holds from batch jobs that need optimization.

The database isn’t under constant pressure (good idle time), disk I/O is responsive, log writes are fast, and memory is plentiful. However, CPU utilization is maxed out (99.95% signal waits mean queries are ready but CPU is busy), and transaction blocking is occurring (51ms average, with one 83-second incident). The infrastructure is properly configured, but workload optimization is needed.

What’s working well:
  • ✅ Memory subsystem: Abundant, no constraints
  • ✅ Log I/O subsystem: Fast and responsive
  • ✅ Disk I/O subsystem: Adequate throughput, minimal contention
  • ✅ Network connectivity: Reasonable latency with occasional spikes
  • ✅ System capacity: Good idle time indicates spare resources
What needs attention:
  • ⚠ CPU: 99.95% signal wait ratio indicates CPU is the bottleneck
  • ⚠ Lock contention: 14-hour max wait and 445ms average need investigation
  • ⚠ Transaction blocking: 51.4ms average suggests some lock chains developing
  • ⚠ “Other” category: 753 billion ms is 48% of all wait time; drill down to understand
Investigation priorities:
  1. Priority 1: CPU Bottleneck: Identify and optimize high-CPU queries
  2. Priority 2: Lock Wait: Find what transaction held locks for 14 hours
  3. Priority 3: “Other” waits: Determine which specific wait types are included
  4. Priority 4: Transaction blocking: Monitor if 51ms blocking increases over time

Conclusion

The Database Wait Statistics page may seem really hard to read, but it can reveal a lot of useful information about your Dynamics 365 Business Central database.

Wait statistics are like a database’s hospital chart, because they can tell us what’s making the patient sick. If a query takes 10 seconds, Azure SQL isn’t actually working for all 10 seconds. It’s waiting for something, and wait statistics tell us what (they provide objective data about where time is being wasted).

As a Business Central DBA, you don’t need to understand all 100+ wait types. Focus on the top 5 categories by wait time, and you’ll catch 80% of performance problems.

My recommendation is to check these statistics regularly (also via telemetry). If you start collecting baseline metrics today, you’ll have the power to diagnose tomorrow’s performance issues before they become crises.
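
A simple way to start building that baseline, assuming you manage a database where you can create objects (the table name dbo.WaitStatsBaseline is just an example), is to snapshot the DMV on a schedule:

    -- one-time setup: a table to hold periodic snapshots
    CREATE TABLE dbo.WaitStatsBaseline (
        captured_at         datetime2     NOT NULL,
        wait_type           nvarchar(60)  NOT NULL,
        waiting_tasks_count bigint        NOT NULL,
        wait_time_ms        bigint        NOT NULL,
        max_wait_time_ms    bigint        NOT NULL,
        signal_wait_time_ms bigint        NOT NULL
    );

    -- run on a schedule (e.g. hourly) to capture a snapshot
    INSERT INTO dbo.WaitStatsBaseline
    SELECT SYSUTCDATETIME(),
           wait_type, waiting_tasks_count, wait_time_ms,
           max_wait_time_ms, signal_wait_time_ms
    FROM sys.dm_db_wait_stats;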

Original Post https://demiliani.com/2026/01/28/dynamics-365-business-central-understanding-the-database-wait-statistics-page/
