WARNING: I do NOT currently recommend enabling the Automatic Plan Correction feature due to issues described in my post, Automatic Plan Correction Could be a Great Auto Tuning Feature for SQL Server. Here’s why it Isn’t.
Transcript
I’m showing you some different demo code here. I did this so that if you want to play around with a different example, hey, you’ve got more code to work with.
The code that I’m showing you here is Copyright Microsoft Corporation. They share it under the MIT license.
I have already restored the WideWorldImporters database. Here’s the link to download the backup file so you can restore it on your own.
The first thing I’m going to do is create a procedure named AutoTuningTest.
I’ve modified this just slightly from the original. In the original code it’s executed with dynamic SQL, and I’ve created it as a procedure here so that we can add recompile hints if we want: either in the procedure header with WITH RECOMPILE, or with OPTION (RECOMPILE) after the statement. Now in this first run, I do not have any recompile hints in sight. The first time I run through this I’m just going to show that the vanilla execution of this does get Auto-Tuned.
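For reference, the shape of the procedure is roughly this. The real body comes from Microsoft’s sample code, so treat the query below as a stand-in; the point is where the two recompile hints could go:

-- Sketch only: the actual body is Microsoft's WideWorldImporters demo query.
CREATE OR ALTER PROCEDURE dbo.AutoTuningTest
    @packagetypeid INT
-- WITH RECOMPILE   /* header-level hint: cache nothing for this procedure */
AS
BEGIN
    SELECT AVG(ol.UnitPrice * ol.Quantity) AS AvgPrice
    FROM Sales.OrderLines AS ol
    WHERE ol.PackageTypeID = @packagetypeid;
    -- OPTION (RECOMPILE) could be appended to the statement above instead
END;
GO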
We get a Plan Correction for this.
I’m going to enable Automatic Tuning for the database by setting FORCE_LAST_GOOD_PLAN to ON.
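The command for that looks like this (assuming the restored database kept the name WideWorldImporters):

ALTER DATABASE WideWorldImporters
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);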
Anytime you want to verify whether automatic tuning is enabled for a database, you can run a query against the sys.database_automatic_tuning_options dynamic management view. We can see that my state is, in fact, on.
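That check is a simple select against the view:

SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options;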
First thing we do is run the AutoTuningTest procedure with @packagetypeid = 7.
And this is just to seed our workload. Before I run it 60 times, I’m going to hit Ctrl+L and look at the estimated execution plan.
We’ve got an Index Seek, a Merge Join, and a Hash Match, and I’m putting the results into a temp table just so that when I do run this 60 times, I don’t have all those result sets clogging up Management Studio. Here we go, let’s run this 60 times and establish baseline performance in Query Store. Running this 60 times only took me six seconds; it didn’t take super long.
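The workload pattern is roughly this (a sketch: #results is a hypothetical temp table matching the procedure’s output, and GO 60 is the SSMS batch repeater):

CREATE TABLE #results (AvgPrice DECIMAL(38, 6));
GO
-- Capture each result set in the temp table instead of the results pane:
INSERT INTO #results
EXEC dbo.AutoTuningTest @packagetypeid = 7;
GO 60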
Now, we cause a regression
I’m now going to clear out the procedure cache for this database so that a fresh plan gets compiled the next time this runs. This time I’m going to run it with @packagetypeid = 0.
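A database-scoped way to do that (SQL Server 2016 and later):

-- Clear cached plans for just this database, then compile a plan for 0:
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
EXEC dbo.AutoTuningTest @packagetypeid = 0;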
I just did Ctrl+L to get the estimated plan. What we have here now is a Hash Match and then a Stream Aggregate. And for fun, we’re going to compare these in Query Store after I run this. So now I’ve got an execution plan in cache for this procedure, compiled for @packagetypeid = 0.
I’m going to run the workload again with @packagetypeid = 7.
The difference is that this time it’s reusing the execution plan that was compiled for @packagetypeid = 0, not a plan compiled just for it. And this took a lot longer than six seconds: this took 16 seconds now that it’s reusing that plan.
Let’s take a look at this in the Top Resource Consuming Queries report
I’m going to make a couple of changes to the display. I’m going to click this button so that I can easily see that I’m working with the AutoTuningTest procedure, and I’m going to look at CPU time, specifically the average CPU time.
So here are our two plans. The blue plan, plan_id 2, is the plan with the higher average CPU time: it has the Hash Match and the Stream Aggregate. The purple plan, the one with the check mark, is the plan being forced, and it’s the faster one. Where the slow plan had a Hash Match join, this one has a Merge Join, and it has a Hash Match Aggregate instead of a Stream Aggregate.
We have different operators and this one is now forced.
It is, if we check our recommendations view: it’s in that Verifying state. We can see what Auto-Tuning noticed about average CPU time: when we were reusing the plan compiled for the value 7, it was faster, and when a plan compiled for a different value got into cache and started being reused, things slowed down noticeably. Automatic Tuning is now in the process of verifying that forced plan.
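You can query the recommendations yourself; this pulls out the current state and the script Auto-Tuning would use (the JSON paths follow Microsoft’s documented example for this view):

SELECT reason,
    score,
    JSON_VALUE(state, '$.currentValue') AS current_state,
    JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script
FROM sys.dm_db_tuning_recommendations;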
So this is where Automatic Tuning is kicking in and working for the query.
If I run this query myself with actual execution plans turned on, I can look at the properties of the plan itself and see that Use Plan is true. So we are being forced into a different plan.
What happens when we add that recompile hint?
I’ve already spoiled it and said we are not going to be Auto-Tuned. We can apply recompile hints in different ways; in this flavor, I’ve put the recompile hint in the header of the procedure, which says: don’t cache anything for this procedure in my execution plan cache. This doesn’t make it disappear from Query Store, though.
One of the cool things about Query Store is that it is still going to see this query. But even though I have Auto-Tuning on, the hint makes Auto-Tuning say: hey, you’ve specified that you want a fresh plan every time you run, so I’m not going to Auto-Tune.
So here I go. Let’s create or alter our procedure with that recompile hint added. I’m going to reset my Query Store just so that when we look at reports and history, I don’t get confused by the history of the earlier runs; it’ll be much clearer. That’s not something I’m saying you should do on your production databases, it’s just to make our test data really easy to see. And I’m verifying in the sys.database_automatic_tuning_options view that FORCE_LAST_GOOD_PLAN is, in fact, on.
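Resetting Query Store is one line (again, a test-environment command):

ALTER DATABASE WideWorldImporters SET QUERY_STORE CLEAR;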
Now we’re going to start up our workload again. I’m running it with @packagetypeid = 7, and I’ve made a mistake here: I left actual execution plans turned on and got this execution plan tab, so I’ve canceled the batch after a few runs and turned off my execution plans. Our initial baseline is going to have some extra runs in there. It’s not running exactly the same, but to be honest, having a bigger baseline would only help us, right?
So again it took six seconds. It was fast, even though it’s compiling a fresh plan on each run.
Can we cause a regression?
Technically I don’t need to clear my procedure cache for this demo at all, right, because for our query in question we’re saying: don’t reuse any plans. But just to be a little consistent, I’m going to clear the procedure cache for my database again. These are all commands I’m running in a test environment.
I’m now going to run AutoTuningTest with @packagetypeid = 0. Last time I only ran this once, but this time I’m running it 100 times, just to get a bunch of executions into Query Store with a different plan, because this plan is not going to get reused. My recompile hint says: nobody reuse this. Before, this was my slow plan, when it was reused.
When it’s actually run by itself for @packagetypeid = 0, it’s really fast: I ran it 100 times in zero seconds. The problem comes in when this plan is reused for a different @packagetypeid. So I’m going to run this another 100 times just so we have a significant number of executions for this plan in Query Store. Last time this turned out to be the slow plan when it was reused for other values, but now it’s actually going to look like the fast plan, because our recompile hint has changed things around. I’m going to run our workload again for @packagetypeid = 7. Let’s do it maybe 60 times, so that’s 40 more. Now we’ve run it 60 times.
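Sketched the same way as the earlier workload, this round is roughly the following (the zero flavor gets run a couple hundred times in total in the demo):

-- Fresh compiles every run thanks to WITH RECOMPILE:
EXEC dbo.AutoTuningTest @packagetypeid = 0;
GO 100
-- Then replay the original workload:
EXEC dbo.AutoTuningTest @packagetypeid = 7;
GO 60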
We do not have a recommendation in here. That’s by design.
We’re saying, hey, don’t reuse plans, so we’re getting fresh plans and Auto-Tuning isn’t kicking in. But I think one of the interesting things is that while Auto-Tuning is taught to be smart about this, we’re still allowed to manually take risks here if we want. I have reopened my Top Resource Consuming Queries report, I’m going to look at this in my preferred view, and we’ll go to average CPU time. And we’re looking at AutoTuningTest here.
And one of the things we can see is that we do have multiple execution plans, and we have what looks like a fast plan and a slow plan. There are no check marks anywhere, because Auto-Tuning’s like: hey, it’s not safe to touch that.
But if I look at the plan that’s blue here, the blue plan looks like the slow plan, right? This is the plan with the Merge Join and then the Hash Match, the one that before was the comparatively faster plan. Back then, the slower plan averaged something like 400 milliseconds.
This plan is still performing the same way it was earlier.
But what if we had no memory of how this performed before there was a recompile hint? We wouldn’t know that it was performing the same way. So if we just naively look at this view, we might hover over this guy and say: okay, plan_id 1 has executed 136 times, and its average CPU time is 107 milliseconds. And then I might look at plan_id 2: it’s lower down, it’s faster. If I look at plan_id 2, hey, this has executed a couple hundred times and its average CPU time is just 1.68 milliseconds.
What if I go ahead and force this plan?
Maybe everybody’s average CPU time is going to drop. Well, I can do this if I want to, even though there’s a recompile hint in this query: I can force query_id 2 to use plan_id 2. And I can do that either by using the Force Plan button here in Query Store, or by executing a T-SQL statement that says, hey, take query_id 2 and glue it to plan_id 2. So I’m going to go ahead and do that, just to prove that it works. Now, we know from the previous demo that forcing this plan, the one that has a Hash Match and then goes into a Stream Aggregate, isn’t so great when it gets reused for package type id 7. But let’s go ahead and experiment and see.
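The T-SQL version is a single system stored procedure call (the ids here match this demo’s Query Store; yours will differ):

EXEC sp_query_store_force_plan @query_id = 2, @plan_id = 2;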
I’m going to run this now for @packagetypeid = 7. I have manually forced that plan; it wasn’t Auto-Tuned, I did it by hand. And now, sure enough, we aren’t done in six seconds. We are done in nine seconds. It’s not terrible, but it is slower. If I refresh my Query Store window here, I do have those check marks, because I have manually forced the plan, and I can see, oh, here’s my current run time up here.
If I hover on this top check mark, okay, now I’ve run it 40 times and my average CPU time is up to 235 milliseconds. How can we tell the difference between an Auto-Tuned plan and a manually forced plan? We can absolutely tell if we look at the built-in views. I’m looking here at sys.query_store_query and sys.query_store_plan, and I want to see everything related to my AutoTuningTest procedure. If I look at those views, I can see that plan_id 2 is forced, and the reason it’s forced is that it was done manually. If I go back and look at my tuning recommendations, there is nothing in my recommendations for this, because it was not Automatically Tuned.
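A query along these lines shows the forcing details; plan_forcing_type_desc is what distinguishes a manual force from an automatic one:

SELECT qsq.query_id,
    qsp.plan_id,
    qsp.is_forced_plan,
    qsp.plan_forcing_type_desc   -- MANUAL here; AUTO when Auto-Tuning forced it
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp ON qsp.query_id = qsq.query_id
JOIN sys.objects AS o ON o.object_id = qsq.object_id
WHERE o.name = 'AutoTuningTest';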
I did this manually, no recommendation was made.
Using recompile hints means that Automatic Plan Correction’s going to be like: whoa, whoa, I’d better not touch that, because you said you want a recompile. However, I myself, as an operator, can manually force a plan, thereby overriding the recompile, because I, as the user, am making that decision.