on January 15, 2023
Hosted cloud databases make a lot of administrative tasks easier, or take care of them for you altogether.
But here are three things that I’ve found a little too easy to forget about Azure SQL Database.
1. “Service objective” means “compute size”
When I think about service objectives, I tend to assume we’re talking about guaranteed uptime, often expressed in some number of 9’s.
But Azure SQL Database has its own language. The best resource I know of to learn that language is the Azure SQL glossary of terms.
The glossary defines service objective this way:
Compute size (service objective) is the amount of CPU, memory, and storage resources available for a single database or elastic pool. Compute size also defines resource consumption limits, such as maximum IOPS, maximum log rate, etc.
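If you want to see which service objective a given database is currently running under, one quick way (a minimal sketch, using documented DATABASEPROPERTYEX properties) is to query it directly:

```sql
-- Check the current edition and service objective (compute size) of this database.
-- Run while connected to the Azure SQL Database you want to inspect.
SELECT
    DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
    DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
```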
When you are choosing your compute size, you are generally not presented with a slider that shows the amount of memory your database will get. It takes some extra effort to look it up in the docs. This is worth doing, because…
2. Database memory is pretty darn limited, and Serverless compute has even less memory
When you configure an Azure SQL Database in the Azure Portal, you choose a lot of things — but you never choose directly how much memory your database has. The amount of memory allocated to your database is determined by the pricing model and other options you choose.
If the database memory amount is displayed anywhere in the portal, I haven’t found it yet. I believe you have to know to go look in the documentation for resource limits, which are listed separately for the vCore purchasing model and the DTU purchasing model.
For example, as of this writing, under the vCore purchasing model using the General Purpose service tier:
- 8 vCores with Serverless compute (standard series Gen5) gets 24GB max memory
- 8 vCores with Provisioned compute (standard series Gen5) gets 41.5GB max memory
Do both of these memory limits seem terribly small to me for 8 vCores? Yes.
I suspect that if you ask how to increase only your cache size, without adjusting other elements of compute, the Azure answer is that you should look into implementing something like a Redis cache. :shrug:
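In the meantime, you can at least watch how close you're running to your memory limit: sys.dm_db_resource_stats reports recent utilization as a percentage of your service objective's limits. A minimal sketch:

```sql
-- Recent resource utilization, expressed as a percentage of this database's
-- service objective limits. The DMV retains roughly the last hour of data.
SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```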
3. There ain’t simple autogrowth here, friend, and an application may not throw the correct error when you’re out of space
Early in my career as a DBA, my team hit an issue where an application was failing and we needed to figure out why. We determined that the database files were out of space. It took time to discover because the error message that bubbled up in the application was completely unrelated: a red herring.
Some things never change, especially the “misleading error message” part.
Your Azure SQL Database settings control the maximum size for your database. Increasing the max size within your current service level objective can be relatively cheap compared to increasing vCore count, but it’s still something you need to manage yourself: there isn’t a simple “enable autogrowth” checkbox.
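Since there’s no autogrowth checkbox, adjusting the cap is a manual (or scripted) step. Here’s a rough sketch of checking the current max size and raising it with ALTER DATABASE; the database name and the 500 GB target are placeholders, the sizes you can choose are constrained by your service tier, and depending on how you connect you may need to run the ALTER from the master database:

```sql
-- Check the currently configured maximum size for this database.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS max_size_in_bytes;

-- Raise the maximum data size. The value must be one allowed by your service tier.
ALTER DATABASE [YourDatabaseName] MODIFY (MAXSIZE = 500 GB);
```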
It’s worth setting up database monitoring and configuring alerts on the amount of space used and free space in your database files. You should also periodically review the maximum data size allowed for your SLO (vCore or DTU) in the resource limits docs and compare it to projections of how much space you’re likely to need over a given time period, in case you’re at risk of needing more space than your configuration allows: upgrading your vCore count is generally pricier than moving the database storage size slider.
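As a starting point for that kind of monitoring, here’s a rough sketch comparing space used in the data files to the configured maximum size; how you alert on it is up to your monitoring tooling:

```sql
-- Compare space used in the data files to this database's configured max size.
-- FILEPROPERTY(..., 'SpaceUsed') returns 8 KB pages, so multiply by 8192 for bytes.
SELECT
    SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8192 / 1048576      AS space_used_mb,
    CAST(DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS bigint) / 1048576  AS max_size_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
```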