Transcript

Introduction to Oracle Adaptive Cursor Sharing

Hello. Welcome to today’s webinar. My name is Dave Anderson. I’ll be presenting today. Also presenting today is Oracle Certified Master John Watson.

Welcome, John.

John: Hello. Good afternoon.

Dave: This is the third and possibly last webinar in this particular series, the SkillBuilders series on the four critical new features. We’ve done parallel processing enhancements, results cache and, John, we just recently did SQL plan management, also known as SQL baselining, and today we will learn about adaptive cursor sharing or ACS, a technique that lets the database adapt to changing bind variable values. We might also sneak in a bit of learning on cardinality feedback, a related feature in that it is also feedback-based.

Here’s what we’ll do today: Just a very brief review of SQL statement processing, then we’ll look at what problems Oracle is trying to solve with these features – with the feature of ACS and cardinality feedback. We’ll talk a little bit about what were previous solutions; John will be jumping in to help us there. We’ll take a look at the notion, the concept, of feedback-based optimization, which provides or is the foundation of 11g adaptive cursor sharing.

We’ll also want to see how the old features and the new features work together. There are a lot of things like baselining and SQL profiles and so forth, hints. We need to see if these things all mix, and then we’ll take a look, if time permits, at 11g cardinality feedback.

Copyright SkillBuilders.com 2017


Review SQL Statement Processing

To fully understand ACS, we should be familiar with how SQL statements are compiled. You might call it parsing. I’ll just give you a little broader term – compiled – and this includes soft parse, which involves parsing the statement. You’ve written “select column name, column name from table name where predicates,” and that has to be parsed and translated into an internal format.

Soft parse also includes syntax check and semantic check. We have to know the semantics of the statement, who’s executing it, do you have proper privileges, etcetera. Soft parse also includes searching the library cache to see if the query plan is already stored, if it already has been optimized and, if so, hard parse can be avoided. Hard parse is optimizing for the lowest cost execution plan and it does involve often an expensive step. By default, it compares up to 2,000 permutations if there are that many for a complex query. To settle on the lowest cost plan, you can imagine how expensive that could be.

Hard parse also stores the execution plan in the library cache, part of the shared pool. This requires latches, memory locks. These are a bad thing, because they decrease scalability and consume memory.

So let’s settle on the fact that hard parse is a high-overhead operation, and you’ve probably heard the motto, “Parse once, execute many.” Soft parse can also be avoided at times. We’ll learn today, however, that that might not be the best thing for adaptive cursor sharing.
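
If you want a feel for how much hard parsing your own system is doing, one quick check (a sketch, but using real V$SYSSTAT statistic names) is:

```sql
-- Rough ratio of hard parses to total parses since instance startup.
-- A high hard-parse rate often points at missing bind variables.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');
```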


Common Optimization Problems and 9i / 10g Solutions

What are the common problems with respect to optimization? Well, bind variables and data skew:

Bind variables are variables inside our SQL query, inside the predicates, and depending on the selectivity of the value of the variable, you may have a need for different plans. So the developer and the DBA were always faced with a tough choice: Use bind variables for cursor re-use and adhere to our motto of, “Parse once, execute many,” and get reduced parsing and reduced memory use, or use literals for better plans.

The 9i solution [to this problem is] bind peeking. There are some issues there, and John’s going to talk about the older solutions in a moment. So I’ll defer to John on that.

We also have the common problem of cardinality mis-estimates, particularly in intermediate results in a complex plan. A sub-step might join two tables, or might process a view and come up with an intermediate result set that has to be joined to yet another table or view, and mis-estimates there are particularly troublesome. 10g has a solution – SQL profiling. However, again, there are some issues there.

So let me turn this over to John for just a moment and we’ll talk about some of the previous and even a couple of the current solutions that have been introduced.

John: Yes. Thank you, Dave.

I particularly wanted to talk on this slide here because I strongly believe that adaptive cursor sharing should be a major driver for your upgrades to 11g and I happen to have a really good example of this, because the first production upgrade we ever did to 11g was purely because of adaptive cursor sharing. I can’t tell you the company, I’m afraid because it’s a reference site, but it’s a nice example that shows all of the possible problems and previous solutions.

What we see here on this slide is first dynamic sampling, introduced in 9i and not really changed since. That allows the optimizer to take a quick look at the data when it first parses a statement and check things like cardinality and the skew in the data – but it’s a very quick look, a very quick look indeed.

The other major feature that came with 9i was bind variable peeking. Now, we all seem familiar with the normal parse, bind, execute, fetch cycle. If any of you out there aren’t, you’d better say so now, because that’s our assumed knowledge, though we can go through it if necessary.

So we parse before we bind, and that means that at parse time, Oracle has no idea what data you want. So, if the data is skewed, or if it’s a non-equality predicate – you know,

SELECT * FROM emp  WHERE sal > 0

is a bit different from

SELECT * FROM emp WHERE sal > 1000000

You’re going to want a very different execution plan for those two, but with bind variables Oracle has no idea which case you’re in.

So, with peeking, you take parse-bind-execute-fetch and, for the very first execution, you reverse it to bind-parse. Oracle peeks at the bind variable to see what you want, then develops an execution plan, and that plan is then used for the next hundred million executions. And you’d better hope that that first bind variable had a value that is typical for the rest of its lifetime on that instance – or at least until the plan is aged out and the statement is re-parsed.

It helps a lot. It did give Oracle a lot more information than it ever had before. But it could also cause a major problem: if that first value was not typical, then your plans might be completely inappropriate for a long time afterwards.

The next facility to get around that – one that actually predates bind variable peeking, having come with 8i – was the ‘stored outline’, which is just a set of hints. So, rather than hinting it manually, you instruct the optimizer to generate a set of hints that freeze a plan: parse the statement, save it as a set of hints and use that from then onwards. That got around the problem of bind variable peeking’s erratic results. You always got the same plan – not necessarily the best plan, but at least it was the same plan.

Profiles then came in. A profile gives the optimizer additional information that it can bring into the parsing algorithm at parse time. So, unlike with a stored outline, you are still doing dynamic parsing, no question about that, but with a profile the parsing is not based purely on the DDL of the objects, the syntax of the statements, the statistics and the dynamically peeked variables. It also brings in historical information from when the profile was built – what the results actually were of running the statement through different plans – improving things a great deal.

11g introduced extended statistics, and this is the business of dependencies between columns. Correlations across columns might mean that overall selectivity is not quite what you might think it is.
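
As a sketch of what that looks like in practice – the table and column names here are invented for illustration – 11g extended statistics on a correlated column group would be created like this:

```sql
-- Tell the optimizer that COUNTRY and STATE are correlated, so the combined
-- selectivity is not simply the product of the individual selectivities.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => USER,
         tabname   => 'CUSTOMERS',
         extension => '(COUNTRY, STATE)')
FROM   dual;

-- Re-gather statistics so the new column group actually gets populated.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'CUSTOMERS')
```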

Now, all of these were improvements, but the example I want to give – I wish I could name the company; I’m not allowed to. It’s a poor-man’s Amazon. They’re a fairly big eCommerce site and with that eCommerce site, whenever you go to their website and you search their catalog to buy a book or buy CDs or whatever, what you’re actually doing is populating about 20 bind variables that go through to a PL/SQL procedure and then your queries run. The results come back on the website and you choose whether or not to buy the book, and that website is doing thousands of executions a second as people search for things.

Now, the environment was a 4-node RAC, and what our client found is that they were getting wildly divergent performance. One hour the searches would run really well; the next they’d run appallingly. And worse still, because it was a RAC, each instance in the cluster did its own parsing, and that made things even more complicated, because each instance was building up its own execution plans. If you went to the website and your query happened to be routed through the connection pool to one instance, it would run well; if it was routed to another instance, it would run appallingly. Why? Bind variable peeking. Whenever we restarted an instance or flushed the shared pool, the first query to hit that instance would have its bind variables peeked, the plan would be developed, and that plan would then be used for hundreds of millions of executions subsequently. There was just no stability.

So what did we do to fix it? Well, first off, we almost went down [the points on] this slide:
[Dynamic Sampling, Stored Outlines, SQL Profiles]

We began by pushing up the dynamic sampling. The default in 9i was only one, which is far too low; in 10g it defaults to two. I would normally recommend pushing it up to four or five ([but] we don’t have time now to go into why). So we pushed up dynamic sampling. Then we disabled bind variable peeking – it’s enabled by default, but you can switch it off.
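
For reference, raising the sampling level as described would look something like this (the level of 4 is John’s recommendation, not a universal default, and the table alias is illustrative):

```sql
-- Instance-wide: 9i defaulted to 1, 10g defaults to 2.
ALTER SYSTEM SET optimizer_dynamic_sampling = 4;

-- Or per statement, with a hint:
SELECT /*+ DYNAMIC_SAMPLING(t 4) */ COUNT(*) FROM big_table t;
```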

Dave, if you’ve got time, can you show us a quick query – the query that will show us [the name and value of the hidden parameter that controls peeking]? Oh, there it is. He’s got it right there. Thanks.

If you look there, we’ve got the parameter _optim_peek_user_binds, “enable peeking of user binds.” That hidden parameter is what came in with 9i and of course it defaults to TRUE. What we did for our client was set it to FALSE. If you set it to FALSE, you disable the ability to peek the bind variables at all, and that means you get stability. In effect, you’re forcing the optimizer back towards rule-based-style behavior. Instead of doing bind-parse, it does parse-bind. So it’s relying purely on the syntax, the DDL and, to a certain extent, the statistics, but it can’t take account of any skew in the data.
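
The query Dave showed is not reproduced on the slide, but a common way to look up a hidden parameter (run as SYS, since the X$ fixed tables are not generally visible) is along these lines:

```sql
-- Hidden parameters live in the X$KSPPI / X$KSPPCV fixed tables,
-- not in V$PARAMETER.
SELECT n.ksppinm  AS name,
       v.ksppstvl AS value,
       n.ksppdesc AS description
FROM   x$ksppi  n
JOIN   x$ksppcv v ON n.indx = v.indx
WHERE  n.ksppinm = '_optim_peek_user_binds';

-- What the client did (unsupported - test first, and note the double
-- quotes required for an underscore parameter):
ALTER SYSTEM SET "_optim_peek_user_binds" = FALSE;
```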

Back to the slide, please, Dave.

So, having disabled bind variable peeking, at least we got stability, but what we got was stability with rubbish plans. So then we had to do some serious tuning. We tried putting hints in, but tuning with hints – there’s not much fun in that. So we reverted to stored outlines. We ran the application in various ways, waited until we had execution plans that didn’t seem too bad for most queries, and froze them. That gave us stability with plans that were not half bad, but they certainly were not half good.
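
The outline-capture approach John describes, sketched roughly (these are the real pre-11g parameters, but treat the sequence as illustrative):

```sql
-- Capture outlines while running the workload with plans we like...
ALTER SYSTEM SET create_stored_outlines = TRUE;
-- ... exercise the application here ...
ALTER SYSTEM SET create_stored_outlines = FALSE;

-- ... then freeze: from now on the optimizer uses the stored hints.
ALTER SYSTEM SET use_stored_outlines = TRUE;
```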

We tried profiles, of course, but the only real solution was to upgrade to 11g, because in 11g we have adaptive cursor sharing, and that really should be the solution to all these problems. Adaptive cursor sharing should give us the ability to handle the different values for bind variables where you have skew in the data and non-equality predicates – predicates involving LIKE, > (greater than), BETWEEN.

Now, with these non-equality predicates on skewed, non-primary-key columns, everything we were doing with dynamic sampling, stored outlines and disabling peeking were only work-arounds. Adaptive cursor sharing was the major driver for our first ever production upgrade, and it should be a driver for you. It may well be what convinces you to do the upgrade to 11g, because adaptive cursor sharing is probably the best solution Oracle has come up with yet.


11g Feedback Based Optimization

So let’s take a look at the new concept that 11g provides, and that’s Feedback Based Optimization. I should note that this is not an Oracle-only concept: DB2 has the learning optimizer, and [Microsoft] SQL Server has some notion of it.

So Oracle’s implementation of it is two features, adaptive cursor sharing and cardinality feedback, and of course it’s different in many ways from the other products on the market. But the concept the database vendors are getting into is: let’s take some of the run-time data and send it back to the optimizer. We’ll put it in the cursor cache – note, the cursor cache, so at least for now it is not persistent – and the SQL engine feeds this knowledge back so the optimizer can make use of it.

As I mentioned, it is the foundation for the two features:

  • Adaptive cursor sharing, to overcome the limitations in peeking and the other limitations that John just spoke of, and
  • Cardinality feedback, to fix cardinality mis-estimates.

It should be noted that this feature is primarily intended for frequently executed SQL. Since both features can cause SQL to be re-optimized, which we’ve already determined is expensive, the features are intended for SQL with high execution counts, so that the cost of re-optimization can be amortized over many subsequent executions. Short-duration SQL statements usually fit this model, such as SQL in an online transaction processing application. Long-duration SQL, such as decision support queries against data warehouses, is typically not executed frequently enough to benefit.


Oracle 11g Adaptive Cursor Sharing Concepts

So adaptive cursor sharing provides enhanced optimization in some queries containing bind variables and as John and I have mentioned, it really is the next generation’s solution when peeking falls short, which we’ve seen and heard that it does at times. In fact, in John’s case study, they disabled peeking.

We’ve also learned that the execution plan selected from peeking, based on those first values, may not be the best plan for subsequent values. So SQL performance can be unpredictable.

What are the goals of ACS as laid out in one of the original papers back in probably 2001, 2002 that was written on this concept?

First of all, there should be no user intervention: the developer or the DBA should not have to do anything in order to feed information back from the SQL engine to the optimizer. If we’re talking about re-optimization, we should all be a little nervous about that, so we want it to be as low overhead as possible. It must be fast, use as little memory as possible, and not use memory when it’s not necessary – of course, using memory requires latches, and latches have their own set of scalability issues. And, as far as possible, it must be worth it – that is to say, the new plan must perform better than the original plan.


How 11g Adaptive Cursor Sharing Works

When SQL is introduced into the system, it can be marked as bind sensitive. It can be; not is.

When a bind sensitive query is introduced, Oracle stores the additional information in the cursor and that additional information is things like selectivity estimates for each predicate that contains a bind variable. There are some limitations there, and we’ll talk about those.

Now, as repeated executions occur and new values are introduced, you can imagine the enormous number of possible combinations of values in complex SQL that has many bind variables. So I think Oracle limits this tracking of selectivity to 14 bind variables.

It would appear that additional selectivity-processing code is now included in the soft-parse phase for bind sensitive SQL. So, with low overhead in mind as one of our goals, only SQL containing predicates whose selectivity can be computed quickly is marked as bind sensitive, and that includes equality predicates – but only equality predicates on columns with histograms. If there are no histograms, there’s no basis, no reason, to have this feature, ACS, enabled, because we’d assume we were going to get the same amount of data back no matter what value is in the bind variable. If histograms are present, we assume there’s data skew, and we do need this feature. We do need different execution plans.

John: Dave, can I interject a question here? Different execution plans for the same statement, and multiple child cursors – how is this going to interact with one of my favorite topics, which is SQL plan baselining, where I try to enforce stability and prevent change?

Dave: Yes. What happens there is that the new plan is entered into the system, if you will, but since it hasn’t been accepted, hasn’t been evolved, it would not be used. So the plan is saved but not yet used. And, as you taught me previously, John, you could set up a nightly job to evolve plans and accept them, so tomorrow the plan becomes an acceptable execution plan.

John: Right. So we get stability across different bind variables, typically a day later.
Dave: Right.
John: That makes sense.
Dave: Right.
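
The nightly evolve job Dave mentions could be as simple as the following sketch, using the documented 11g DBMS_SPM call (the scheduling wrapper is omitted):

```sql
-- Verify and accept any unaccepted plans that ACS (or anything else)
-- has loaded into the SQL plan baselines during the day.
DECLARE
  l_report CLOB;
BEGIN
  l_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
                sql_handle => NULL,     -- NULL = consider all statements
                verify     => 'YES',    -- execute and compare plans
                commit     => 'YES');   -- accept plans that prove better
  -- l_report now contains a text report of what was verified and accepted.
END;
/
```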

So we’ve seen the types of predicates: equality predicates on columns with histograms, and range-type predicates. But variation in selectivity alone is not enough to know that re-optimization is worth it. Remember, we said we want to be as sure as possible that it’s worth it – going through re-optimization and putting a new plan in the library cache are both fairly expensive. So Oracle wanted another test, and let’s see what this other test is.

The other test is the concept of row source profiling. So, in row source profiling, Oracle samples query executions and saves these execution statistics including the number of rows processed by the query. To reduce overhead, the sampling rate is decreased over repeated executions. It can eventually be turned off.

If row source profiling – after the execution of a query – reveals large variations in the number of rows processed, some additional information is stored in the cursor, and the query is made bind aware. Statements become bind aware.

A bind aware query is re-optimized if, on a subsequent execution, at cursor-matching time during a soft parse, Oracle realizes that the new values are outside the selectivity range stored in the bind profile. If the new execution plan is different from the original plan and all the other child plans, a new child cursor is created. So you will end up with additional child cursors in this scenario.
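
You can watch this whole life cycle from V$SQL, which exposes the ACS flags directly (the WHERE clause here is just an illustrative way to find your statement):

```sql
-- IS_BIND_SENSITIVE: selectivity is being tracked for this cursor.
-- IS_BIND_AWARE:     row-source profiling has promoted the statement.
-- IS_SHAREABLE:      set to 'N' on child cursors that ACS has obsoleted.
SELECT sql_id, child_number, plan_hash_value, executions,
       is_bind_sensitive, is_bind_aware, is_shareable
FROM   v$sql
WHERE  sql_text LIKE '%acs_demo%';
```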


Demonstration of 11g Adaptive Cursor Sharing

We will trace a very simple query on a test table called “T,” and we’ll change the bind variable a couple of times. There are zero or only one row where column ID contains a value one and there are hundreds of thousands of rows where the value is 99.

What we’d like to do is see if ACS will kick in and give us a different plan. So let’s give it a go.
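
The demo script itself isn’t shown in the transcript, but a minimal reconstruction of the setup – names and row counts chosen to match the description, so treat the details as assumptions – would be:

```sql
-- Skewed test table: no rows with id = 1, 144,000 rows with id = 99.
CREATE TABLE t (id NUMBER, pad VARCHAR2(100));
INSERT INTO t
  SELECT 99, RPAD('x', 100, 'x') FROM dual CONNECT BY level <= 144000;
CREATE INDEX t_id_ix ON t (id);

-- A histogram on ID is needed for the statement to become bind sensitive.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T', method_opt => 'FOR ALL COLUMNS SIZE 254')

VARIABLE n NUMBER
EXEC :n := 1
SELECT COUNT(pad) FROM t WHERE id = :n;   -- peeked: index plan
EXEC :n := 99
SELECT COUNT(pad) FROM t WHERE id = :n;   -- same child, same plan
SELECT COUNT(pad) FROM t WHERE id = :n;   -- re-optimized: full scan expected
EXEC :n := 1
SELECT COUNT(pad) FROM t WHERE id = :n;   -- back to the index child
```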

You can see I’m running version 11.2.0.3, and we’ve just run our script: we have zero rows where the value is one and – I misspoke earlier – not hundreds of thousands but 144,000 rows where the value is 99. We ran that query again. I’m going to change the value back to 1 and run it again.

John: Right, Dave. Now, if I can interject at this point.
Dave: Please.

John: We would hope that bind variable peeking means it would have peeked the bind and seen the value one on that first execution.

Dave: Absolutely correct.

So let’s take a look. Now, I had already gone through the process of tracing it and getting the trace file, so let’s just take a quick look at what happened here. In my first query, it used the index on the column ID – and, by the way, I used TKPROF with the EXPLAIN feature, just to show you how that can be a little bit dangerous and misleading at times – but here’s how it looked the first time.

I did get a hard parse, as you can see here, and we did use the index. There were very few rows – in this case, zero rows [returned] – and so we know that peeking did occur and the index was used. The explain plan, in fact, is correct. So EXPLAIN= is a dubious option, but it worked out this time.

On the second execution of the query, we still have the same plan. The easy way to tell is to look at the plan hash value – 1-7-8-9 and so forth – and it’s the same value. So we know we have the same plan, but we also know that we changed the value to 99. And this wasn’t the exact explain plan – you can see I changed my data a little since I ran this – but you can see that we got a significantly different number of rows from the index range [scan] and the table access. It still used the index; no hard parse.

The explain plan is still correct according to the row-source operation. However, on the third execution – the second execution with the great variation in the number of rows returned and selectivity, the two litmus tests to turn this thing on – we see that the plan hash value changes. We definitely have a different plan. Something must be different.

Something forced the hard parse, and what forced it was adaptive cursor sharing. The SQL engine reached out, fed some information back, put it in the cursor cache, and Oracle said, wait – the selectivity is very different, the number of rows returned is very different. This query had been marked for re-optimization.

Sure enough, we can see that we have a new plan with a full table scan. And the same kind of problem that always happens with explain-plan commands happens in my trace when I ask for it: you can see clearly that it is incorrect. EXPLAIN doesn’t peek at bind variables the first time you use a query, nor does it come back later and say, “Oh, now we’re using 99 rather than one as the value.” So just be aware of that.

Fourth and final execution, remember that we change the value back to one. Notice that the plan hash went back to the original plan hash. So we’re back to using the index, which we hoped we would be, and sure enough, the index is used to solve this particular query.

So it takes several executions for ACS to kick in. Now, “several” is a vague term, and I don’t think we can quantify exactly what it means. In this case, it was so extreme that “several” really meant the second execution with the drastically different bind variable value of 99 was re-optimized, but it’s not always going to be this clear cut (in satisfying the two litmus tests).

John: That was an excellent demonstration, Dave. With regard to how long it takes, if this were a production website where you’re executing the same query millions of times in an hour, my experience is it’s only going to take a few minutes for things to stabilize, and then you’ll get a set of plans there, parsed, ready and waiting, that should handle virtually all the variable combinations that come in. That’s the idea, anyway.

Dave: Right – I think John, with his experience of this in the field, sheds some very good light on it: if the situation is going to occur, it’s going to happen fast. If it does become a problem because it’s not happening fast enough, and you know the query should be marked bind aware, there is a hint you can use. I’ll mention it on a subsequent slide.


9i and 10g Features with 11g ACS

Stored outlines, in essence, negate adaptive cursor sharing, and I think SQL profiles do, too. I need to do more research on that. We might have an opportunity for a quick follow-up.

CURSOR_SHARING=SIMILAR is deprecated. It’s planned for obsolescence —

John: Can you absolutely confirm that?

Dave: Yes. You bet. That’s pretty clearly documented. I mentioned this Metalink Doc ID here and if you check out the documentation on that, it includes this in the first paragraph:

“We recommend that customers discontinue setting cursor_sharing=similar due to the many problematic situations customers have experienced using it. The ability to set this will be removed in Version 12 of the Oracle database. The settings of EXACT and FORCE will remain available. Instead, we recommend the use of adaptive cursor sharing in 11g and up.” I paraphrased in there somewhat.

John: Well, thanks for confirming that, Dave. That’s putting it pretty bluntly. I’ve never seen it myself, but many people I know have seen appalling problems in cursor_sharing=similar.


Adaptive Cursor Sharing Summary

To wrap up ACS, one thing we should be aware of is that [to employ ACS] a soft parse is required.

Studies have been done on this. PL/SQL really has some nice technology in it to avoid soft parse. Soft parse has a bit of overhead, so PL/SQL has built-in soft parse avoidance: when you loop through a query or a cursor, it’s not re-soft-parsing over and over and over again.

So soft parse avoidance is an issue for ACS and, in fact, on Oracle support – MOS, My Oracle Support – it’s listed as a bug. I guess it could be considered that, but we must also ask ourselves: is that what ACS is really designed for? Perhaps not.

We also mentioned that cursor_sharing=similar is deprecated and will be obsolete in release 12. There is a BIND_AWARE hint, which I mentioned a little while ago, and that will force re-optimization on the first execution with a different selectivity profile – because with the hint, the cursor is bind aware from the start. We’ve gone straight from bind sensitive to bind aware, so we’ve essentially taken out one of the litmus tests, which was a large variation in the number of rows processed. There’s also an opposite hint called NO_BIND_AWARE if you want to disable this feature for a statement.
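
In hint form – BIND_AWARE and NO_BIND_AWARE are the documented 11g hints; the statement itself is just an example:

```sql
-- Skip the monitoring phase: the cursor is bind aware from the first parse.
SELECT /*+ BIND_AWARE */ COUNT(*) FROM t WHERE id = :n;

-- Opt this statement out of adaptive cursor sharing entirely.
SELECT /*+ NO_BIND_AWARE */ COUNT(*) FROM t WHERE id = :n;
```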
