Why batching updates in SQL Server is essential for performance
By Kendra Little on October 10, 2025
Many performance problems in SQL Server start with one simple pattern: a large update, delete, or insert that runs in a single transaction. Batching those modifications eases the pain of changing large numbers of rows in a database with an active workload. This basic safety habit helps SQL Server work efficiently and predictably.
If you want to dive deeper into techniques like this, Erik Darling and I are teaching two full days on T-SQL at the PASS Data Community Summit in November 2025. Join us to write faster, safer queries — and recognize the patterns that cause performance pain in production before they ruin your day.
Why large, unbatched modifications cause trouble
When I’ve seen blocking storms or long-running updates in production over the course of my career, the cause has often been a large modification that didn’t use batching.
SQL Server does what you ask: it processes every qualifying row in one go. To guarantee atomicity and durability, it holds locks for the entire transaction, writes each change to the transaction log, and only releases resources once the operation commits.
Batching changes prevents problems by letting SQL Server process smaller sets of rows, commit more often, release locks incrementally, and allow portions of the transaction log to be reused. It transforms a blocking storm into a predictable, steady workload that plays nicely with other activity using the same tables.
Batching isn’t an optimization — it’s self-defense
Batching breaks a large modification into smaller, controlled chunks. Each chunk commits, releases locks, and lets other work continue.
When you batch:
- Blocking windows shrink dramatically
- Log reuse stays healthy, preventing runaway growth
- Rollbacks are faster and less disruptive
- Progress is visible and controllable
Batching turns an unpredictable, risky operation into a safe, repeatable one.
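Here’s a minimal sketch of what that loop looks like in T-SQL. The table and column names (dbo.SalesHistory, SaleDate) are hypothetical, and the sketch assumes you’re not wrapping the loop in an outer transaction, so each DELETE commits on its own:

```sql
-- Minimal batching sketch. dbo.SalesHistory and SaleDate are
-- hypothetical names; swap in your own table and predicate.
DECLARE @BatchSize int = 2000;

WHILE 1 = 1
BEGIN
    -- With no outer transaction, each DELETE is its own autocommit
    -- transaction: it commits and releases its locks before the next chunk.
    DELETE TOP (@BatchSize)
    FROM dbo.SalesHistory
    WHERE SaleDate < '2020-01-01';

    IF @@ROWCOUNT = 0
        BREAK;  -- no qualifying rows remain
END;
```

Because each chunk is its own transaction, killing or failing the process partway through only rolls back the current chunk, not all the work done so far.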
Two knobs to include in every batching pattern
When you batch, your code should always include adjustable parameters for batch size and wait duration between batches. Those two settings make the difference between a healthy system and a stressed one.
1. Batch size
Batch size controls how many rows you modify per iteration. It affects both concurrency and log behavior.
If the batch size is too small, you’ll commit very frequently — and every commit triggers a log flush. SQL Server must flush the transaction log to durable storage at each commit to guarantee durability. Thousands of tiny commits mean thousands of flushes, creating WRITELOG waits and competing with other transactions that also need to commit. The transaction log is a shared resource for your database, and WRITELOG waits can slow down other users even if they aren’t writing to the same tables.
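If you suspect tiny batches are flooding the log with commits, one quick check is the aggregate wait stats DMV. This is standard SQL Server tooling, nothing specific to any one batching script:

```sql
-- Cumulative WRITELOG waits since the last restart (or since wait
-- stats were cleared). Fast growth in wait_time_ms while your batch
-- process runs suggests the commit rate is stressing the log.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'WRITELOG';
```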
If the batch size is too large, you can run into lock escalation.
Lock escalation is SQL Server’s internal mechanism for reducing the memory consumed by a large volume of locks. When a single statement acquires many row or page locks on one structure (the documented threshold is roughly 5,000 locks), SQL Server may replace them with a table-level lock. If the lock escalates, every session that touches that table, even one that needs unrelated rows, gets blocked until your transaction commits.
Smaller batches help you stay below the threshold that triggers lock escalation. The goal is to keep each transaction light enough that SQL Server doesn’t feel the need to escalate locks, while still doing enough work per batch to make steady progress on your modifications.
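To see whether escalation is actually happening on a table you’re modifying, the index operational stats DMV tracks promotion attempts. Here dbo.SalesHistory is the same hypothetical table as above:

```sql
-- How often SQL Server has attempted (and succeeded at) lock escalation
-- on a table's indexes, per index.
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.index_id,
       ios.index_lock_promotion_attempt_count,
       ios.index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(
         DB_ID(), OBJECT_ID(N'dbo.SalesHistory'), NULL, NULL) AS ios;
```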
2. Wait duration between batches
A brief pause between batches allows SQL Server to catch up. This is especially important if you are using an Availability Group, where modifications need to stream to replicas via the transaction log.
Even a short delay — a few hundred milliseconds — gives other queries time to acquire locks and make progress. On quieter systems, you can shorten or skip the delay; on busy OLTP systems, a small pause can make a large difference in concurrency.
Both of these knobs should be adjustable. Real-world workloads vary, and you want your batching process to be able to adapt without code changes.
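Putting both knobs together, the earlier sketch only needs a WAITFOR DELAY. The specific values here are placeholders to tune against your own workload:

```sql
-- Batching sketch with both knobs adjustable. Names and values are
-- placeholders: tune @BatchSize and @WaitDuration for your workload.
DECLARE @BatchSize    int      = 2000,
        @WaitDuration char(12) = '00:00:00.200';  -- hh:mm:ss.mmm, here 200 ms

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.SalesHistory
    WHERE SaleDate < '2020-01-01';

    IF @@ROWCOUNT = 0
        BREAK;  -- done

    WAITFOR DELAY @WaitDuration;  -- give other sessions a chance at locks
END;
```

If you’re running an Availability Group, that pause is also where replicas get breathing room, helping keep the log send and redo queues from ballooning.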
Learn proven batching patterns
You don’t need to start from scratch. These two posts include code examples to get you going:
- Michael J. Swart: Take Care When Scripting Batches — a foundational guide to designing safe batching logic.
- Erik Darling: Batching Modification Queries Is Great Until… — a practical look at the tricky edge cases you’ll want to plan for.
Experiment in a test environment, and start using batching patterns by default when you need to modify rows.
The takeaway
Batching isn’t optional — it’s what keeps SQL Server healthy under modification workloads.
Critical tables grow over time, and one day a “quick update” becomes the one that locks core portions of the database.
If you build batching into your patterns now — with adjustable batch sizes and wait durations — you’ll prevent many of the worst production incidents before they happen.
When you write T-SQL that modifies data, always ask yourself:
“If this causes blocking, will that be a problem?”
If so, it’s time to batch.