You need to release schema changes while SQL Server is in use. Learn why code generation tools write odd scripts and how to stay sane amid rapid releases in this 28 minute video… or scroll down to read a summary.
If you enjoy the podcast, please write a review on iTunes!
How can I achieve near-zero downtime deployment with database schema changes? Can I use Peer-to-Peer Replication or some other feature to make this easier? It seems very complicated. We use SQL Server Data Tools (SSDT), the developers work agile, and they quite often make small changes to the schema. We have found there are some snags with this, since SSDT generates the change script. Here is an example of a common problem: when the developers add a column to a table, you often get a Create/Insert/Drop Table script from SSDT. Avoiding table locking when possible would help achieve near-zero downtime deployment.

Microsoft’s documentation on Peer-to-Peer replication says:
Publications must allow schema changes to be replicated. (This is a setting of 1 for the publication property replicate_ddl, which is the default setting.)
I’m not sure whether that’s a hard requirement (replication simply won’t work if you disable replicating schema changes) or whether disabling it just makes your use of Peer-to-Peer unsupported, but either way, you want to meet the requirement.
The requirement simply makes sense when it comes to data consistency. If I drop a column in a table on Node 1 at 2 pm, should the application be allowed to insert rows into the same table on Node 2 at 2:05 pm if I haven’t applied the schema change to Node 2? How would those rows get replicated back to Node 1? This would put the column into a Schrödinger’s cat situation: does the column exist, or not?
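If you want to verify that setting on an existing publication, here’s a minimal sketch (my addition, not from the question or the docs quote). It queries the syspublications system table in the publication database; a value of 1 in replicate_ddl means schema changes are replicated.

```sql
-- A sketch: check whether existing publications replicate schema changes (DDL).
-- Run this in the publication database; replicate_ddl = 1 is the default.
SELECT name, replicate_ddl
FROM dbo.syspublications;
```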
Other things to think about:
Peer-to-Peer replication wasn’t built to solve this particular problem. You can manage schema changes while using Peer-to-Peer, but it’s tough stuff.
For the example you mention of adding a column, some automatic code-writing tools do the “create a new table, insert, rename” operation because adding a column in place can sometimes be a very large, painful ALTER: it’s a “size of data” operation. (Both approaches are sketched below.)
When you add a not-nullable column with a default value, SQL Server may have to go through and update every single row on the table to add that default value. That’s a really big in-place operation, and if you cancel it midway, you may be in for a nasty rollback on a large table. That’s why some folks gamble on just loading up a new table and switching it in, instead. In some cases, it’s faster!
I say adding a column may be a size of data operation because it depends on your edition: in modern versions, Enterprise Edition can add a not-nullable column with a runtime-constant default as a metadata-only change, while Standard Edition has to touch every row.
If you have Developer Edition in your development and staging environments, but Standard Edition in production, changes adding non-null columns with default values could run very quickly everywhere except production on your large tables. That’s because Developer Edition gets all the features of Enterprise Edition.
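To make that concrete, here’s a hedged sketch of the two approaches, using a hypothetical dbo.Orders table and IsArchived column (not from the original question). A real generated script would also move indexes, constraints, and permissions, and would wrap the rebuild in a transaction.

```sql
-- Approach 1: in-place ALTER. Without the Enterprise metadata-only optimization,
-- this is a "size of data" operation: every existing row gets the default written.
ALTER TABLE dbo.Orders
    ADD IsArchived bit NOT NULL
        CONSTRAINT DF_Orders_IsArchived DEFAULT (0);

-- Approach 2: the "create a new table, insert, rename" pattern some code
-- generators emit instead of an in-place ALTER.
CREATE TABLE dbo.Orders_New
(
    OrderId    int NOT NULL PRIMARY KEY,
    IsArchived bit NOT NULL CONSTRAINT DF_Orders_New_IsArchived DEFAULT (0)
);

INSERT INTO dbo.Orders_New (OrderId)
SELECT OrderId
FROM dbo.Orders;

EXEC sp_rename 'dbo.Orders', 'Orders_Old';
EXEC sp_rename 'dbo.Orders_New', 'Orders';
```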

There were bumps along the road, but we got pretty good at it! It’s not easy, but the more you do it, the better you get at it.
Adding a new feature? Tables and columns get added to the database well in advance of anything using them. This makes it easier if the schema changes need to be rolled back. Or if something goes wrong and the schema change doesn’t succeed, that whole release doesn’t have to be rolled back.
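One way to make those ahead-of-time schema changes safe to re-run (and easy to back out) is to guard them. This is my own sketch with hypothetical names, not code from the post:

```sql
-- A re-runnable "expand" script: add the column ahead of any code that uses it.
-- Adding it as NULLable keeps the change cheap; backfill and tighten it later.
IF NOT EXISTS
(
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Orders')
          AND name = N'IsArchived'
)
BEGIN
    ALTER TABLE dbo.Orders
        ADD IsArchived bit NULL;
END;
```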
You need a staging environment to roll changes to before they hit production. Key points:
As long as you use this environment for testing and validating production changes, you can license it with Developer Edition – but note that “super awkward” limitation above where adding a column could be fast everywhere except production. (A quick way to check which edition each environment is running is sketched after this list.) And you still need to invest in storage and hardware. (It’s possible to go cheap on the hardware, but if you do, you won’t get a reliable estimate of how long it’s going to take to deploy changes to production.)
If you’re just starting down the “deploy all the time and minimize downtime” road and you don’t have this, that’s OK.
When people start asking, “How can we make sure these mistakes don’t happen next time?”, this is your #1 suggestion.
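Here’s a quick sketch (my addition) for confirming which edition and version each environment actually runs, since that’s what determines whether the column add above is a metadata change or a full rewrite:

```sql
-- A sketch: check the edition and version of the instance you're connected to.
-- Run this in each environment (dev, staging, production) and compare.
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
```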
For smaller changes, it may be OK to use automatically generated code to roll out the change, but larger tables and sensitive databases can require custom code to make sure you do things like take advantage of that feature above.
The DBA usually finds out there’s a need for custom code when they’re pushing a change to the pre-production (aka staging) environment and it takes longer than expected.
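What does that custom code look like? One common pattern (a sketch under my own assumptions, reusing the hypothetical dbo.Orders.IsArchived column from above) is to backfill a new nullable column in small batches, so each transaction stays short and the work can be paused or resumed:

```sql
-- Backfill the new column in batches to keep locks and log activity small.
-- The batch size is a made-up starting point; tune it for your table and hardware.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) dbo.Orders
    SET IsArchived = 0
    WHERE IsArchived IS NULL;

    SET @rows = @@ROWCOUNT;
END;
```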
Accepting that things are going to break was totally counter-intuitive to me. The whole point is that you’re trying to limit downtime!
But this is hard stuff. Things are going to break. Everyone needs to be able to learn when that happens, and being afraid doesn’t help people communicate better.
You need to do postmortems when you have downtime, and you need to have postmortems that don’t suck!
The hardest part of making this work is preventing the human tendency to blame from ruining the whole thing.
If it’s really not OK for schema changes to cause downtime, because people will be harmed, too much money will be lost, or for some other reason, then either:
1. Releasing schema changes to SQL Server while production is active isn’t right for that application.
2. You need a highly customized architecture for each application designed to make this work and avoid downtime.
There may be other options as well, but it’s going to be a custom solution for each application.
Copyright (c) 2025, Catalyze SQL, LLC; all rights reserved. Opinions expressed on this site are solely those of Kendra Little of Catalyze SQL, LLC. Content policy: Short excerpts of blog posts (3 sentences) may be republished, but longer excerpts and artwork cannot be shared without explicit permission.