Transcript

Introduction (including First Demonstration)

Oracle SQL Tuning

1. Introduction

[music]

>> Dave:  I’d like to start, though, with a very quick introduction to John and me. 

John Watson is SkillBuilders’ director of Oracle Database Services. John oversees our Oracle remote DBA, our Oracle support services, and our Oracle training businesses. John is an Oracle Certified Master and holds roughly eight Oracle certifications, including the 11g Performance Tuning Expert certification. John has authored or co-authored three of the Oracle Press 11g Exam Guides. Hopefully you had a chance to read some of those. 

[pause]

As for me, I don’t do too much. But seriously I have over 30 years of experience in information technology as a developer and a DBA, and I consider myself fairly good at Oracle DBA tasks, mostly tuning. 

[pause]

Now, over to John for today’s lesson. Welcome, John.

[pause]

>> John:  Thank you very much for that somewhat glowing introduction, Dave. 

What I want to talk about this evening, or lunchtime for most of you, is what we call equivalent SQLs. The idea is that many different statements, with wildly different syntax and totally different structures, will often return exactly the same results. That is what we call equivalent SQL. 

However, under the covers, the way Oracle actually executes equivalent SQLs may vary: you can get very different execution plans. So equivalent SQLs may be equal, but perhaps not as equal as one might think. 

A lot of this has to do with query transformations. The cost-based optimizer should be capable of doing any appropriate transformation: no matter what SQL you throw at it, it should rewrite it into the most efficient form. But as Dave and I have found over the years, there are many cases where it can’t actually do that, either because of technical limitations or perhaps because of lack of information, or incorrect or missing statistics. There are many reasons why the rewrites may not occur. 

What I’m trying to share in this hour is how you can, to a certain extent, take control: through the way you write your SQL, you can design equivalent SQLs that may well help the optimizer and give much better performance for an identical result. 

[pause]

To give you a very simple idea of how this sort of thing might work, here is perhaps the simplest equivalent SQL we can have. I’m working at the moment just in the SCOTT schema. Let me first enable the autotrace facility with set autotrace on. If I were to select * from emp e, dept d where e.deptno = d.deptno, all I’m doing, as you can see, is just joining the two tables. And if we look at the results of that – 
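For readers following along, the demonstration as typed in SQL*Plus would look roughly like this (a sketch, assuming the standard SCOTT schema):

    set autotrace on

    -- Oracle proprietary (comma) join syntax
    SELECT *
    FROM emp e, dept d
    WHERE e.deptno = d.deptno;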

[pause]

What we have there is a simple join of emp to dept, and we see how Oracle did it. Navigating down the execution plan: it did a full scan of the primary key index on dept to get the primary keys in department number order. It used those to probe the dept table to get a list of departments in primary key (department number) order. Then it did a full table scan of the emp table, sorted the results into department number order, and merge-joined the two result sets together to produce the combined result set that we end up with here. Very simple, perfectly straightforward. 

That was with the statement written like that. An equivalent SQL would be: select * from emp natural join dept. That’s just using ANSI syntax instead of the Oracle proprietary syntax. I get exactly the same result set. It’s an equivalent SQL, and you can see that the execution plan is basically identical. 
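The equivalent statement, reconstructed:

    -- ANSI syntax; returns the same rows as the comma join above
    SELECT * FROM emp NATURAL JOIN dept;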

Look at the plan hash value, 844388907, and up here, 844388907. That is the most basic example I could think of of two equivalent SQLs. That plan is identical to that one. The optimizer got it right: it produced the same execution plan, optimized for whatever the statistics on those tables happen to be.

[pause]

>> Dave:  Pardon, John. We have a question in the queue.

>> John:  Okay.

>> Dave:  Your execution plans, both that you’re showing us now, include STORAGE on line 5.

>> John:  Sorry, folks. That’s not exactly relevant. That’s to do with Exadata. What that’s telling me – the keyword STORAGE there shows us that operation 5 could be offloaded in an Exadata Database Machine environment. We could offload the full table scan of emp to the storage tier. This is critical when tuning for Exadata. Note that the other operations, such as the index full scan and that index lookup, cannot be offloaded to the storage tier. We’ve got a bit of time here. 

If I were, for example, to tune the statement a little bit – if I were to alter index pk_dept and make that index invisible. The index is still there, of course, still enforcing the primary key. If I now run the statement, we get a hash join. 
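The command used here is presumably along these lines:

    -- Hide the index from the optimizer; the constraint it supports
    -- is still enforced
    ALTER INDEX pk_dept INVISIBLE;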

Look here. Not only the access to the emp table but also the access to the dept table is now offloadable to my Exadata storage tier. This is just a very brief example of how tuning for Exadata is actually a bit harder than you might think. We’ve found that there’s often a large amount of work needed to optimize your SQL to exploit it. Running code like the original, you’ve just spent a million dollars on Exadata and you’re only using it for half your query. 

Sorry about that, Dave, or whoever asked the question. I shouldn’t have mentioned that. 

[pause]

So forget storage for the rest of this session. 

That was my first very basic example of equivalent SQLs.


Transcript

SubQuery Factoring

Oracle SQL Tuning

2. SubQuery Factoring

[music]

>> John:  Subquery factoring. What this is about is rewriting your SQL to pull part of the SQL out into a separate subquery. The typical example, the worst case, is undoubtedly the correlated subquery. 

Correlated subqueries are often seriously bad news. We should all know already what a correlated subquery is: it’s a query where you join two tables in such a way that for every row of the outer table you have to query the inner table. For example – 

[pause]

Taking a classic case again, I’ll just continue working in the SCOTT schema. If we were to run a query, say, that attempts to work out every department where the total of the salaries is more than a certain amount, what would I do? 

I would select the department name (dname) from dept where – say I have a cutoff of 10,000 – select dname from dept d where 10,000 is greater than my correlated subquery, which will be select sum(sal) from emp e where – let me do the join – e.deptno = d.deptno. 
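Reconstructed from the demo:

    -- Correlated subquery: emp must be queried once per dept row
    SELECT dname
    FROM dept d
    WHERE 10000 > (SELECT SUM(sal)
                   FROM emp e
                   WHERE e.deptno = d.deptno);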

Pretty straightforward what’s going on: for every row in the dept table, I’m going to have to sum up the appropriate salaries and do the test: does the total come to more than 10,000? 

And we find that I get two rows back, SALES and ACCOUNTING. So two rows were greater than – sorry, that’s less, actually. I’ve retrieved the two departments where the total of salaries is less than 10,000 pounds, or even dollars. We’ve seen the statement run. 

To get the full detail of that, I’ll set autotrace off and get the full information about how that thing was run. Just to take it a step further, I’ll first run it through the rule-based optimizer: alter session set optimizer_mode = rule, and see how Oracle is going to run that statement. 

The next setting we need is execution statistics: alter session set statistics_level = all. Then run the statement with the correlated subquery and see how it actually ran. To get the execution plan out along with the execution statistics, we select * from table, and the table will be our friend dbms_xplan.display_cursor; with the usual values, that will get the plan of the last statement run. 
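A sketch of the whole sequence as run in the demo:

    ALTER SESSION SET optimizer_mode = RULE;
    ALTER SESSION SET statistics_level = ALL;

    SELECT dname
    FROM dept d
    WHERE 10000 > (SELECT SUM(sal)
                   FROM emp e
                   WHERE e.deptno = d.deptno);

    -- Plan plus run-time statistics for the last statement run in this session
    SELECT * FROM table(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));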

[pause]

Much better. I’m going to select dname from dept. Got it right. 

[pause]

Now here we see the bad news about how that statement was run. What happened is it did a full table scan of dept, and it retrieved four rows. Then we see the next step: a full table scan of emp, and we did it four times. So for every row in the outer table, the outer query, we did a full table scan of the inner table. That’s pretty disastrous if the outer table has millions of rows. It’s actually a bloody disaster. 

But that’s what the rule-based optimizer has done. It’s done exactly what I told it to do: it’s run that inner query once for every row of the outer query. 

Let us now see how the cost-based optimizer does. We would hope that it does a rather better job. The cost-based optimizer has got exactly the same result set, and it’s done it much more efficiently. You can see what it’s done: it has in effect reversed the logic. It’s gone to the emp table first and scanned it once, sorted it, done the aggregation, and even done the check – you see the filter at line 4, where it’s worked out whether the sum of the salaries passes the 10,000 test. Only then has it joined to dept. 

But this still isn’t perfect. It’s not bad, but it has to scan the index on dept twice and then do the lookup on the dept table twice. Why? Because there were two rows returned, SALES and ACCOUNTING. Still, that’s an awful lot better. The cost-based optimizer has rewritten the query. 

[pause]

Now take another example, another equivalent SQL, because so far that’s just running the same statement through different versions of the optimizer. Consider a programmer who tries to be clever. Your programmer has been told that correlated subqueries are bad news, and indeed correlated subqueries usually are bad news. 

Perhaps he decides to write it like this: select dname from dept, then natural join emp, so joining dept to emp, and give it a group by. So group by deptno – no, we’ll group by dname, better. Group by dname having sum(sal) less than 10,000. You can see what I’m doing. This isn’t [6:44 inaudible]. This will be an equivalent SQL, and it is: we get exactly the same result set back. A different order, but never mind that; I didn’t specify an order by clause. 
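The clever rewrite, reconstructed:

    -- Join and GROUP BY in place of the correlated subquery
    SELECT dname
    FROM dept NATURAL JOIN emp
    GROUP BY dname
    HAVING SUM(sal) < 10000;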

So my programmer is being clever. He’s been told that correlated subqueries are bad news, so he’s rewritten the query to use a join and a group by. What actually happened with the execution plan? 

[pause]

Not so good, is it? What it’s had to do is go to pk_dept, then to the dept table, and do a merge join, and what it’s joining to has required four sorts of the emp table. Only one scan, so it’s not too bad, but we’ve had to sort those 14 rows four times. 

Quite simply, Oracle could not identify the fact that that statement is an equivalent SQL; it didn’t realize that it’s equivalent to the earlier one. It did a fairly good job, but by no means perfect. So what I’ll now do is show what your programmer should have done: tell the optimizer what you want it to do, by manually factoring out the subquery. 

So: with saltot as – what do we want to do? We want one pass through emp, totalling up the salaries per department: select deptno, sum(sal) as sumsal from emp, and do the group by – group by deptno. I’m manually factoring out the subquery that was causing the problems, and now we can do a join: select dname from dept natural join saltot where sumsal is less than 10,000. And we get the same rows back. 
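The factored version, reconstructed:

    -- Subquery factoring (WITH clause): emp is aggregated exactly once
    WITH saltot AS (
        SELECT deptno, SUM(sal) AS sumsal
        FROM emp
        GROUP BY deptno
    )
    SELECT dname
    FROM dept NATURAL JOIN saltot
    WHERE sumsal < 10000;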

That’s a third equivalent SQL. Now I’ve got three versions of the same statement, with totally different syntax, that give identical results. What is this one like? Let me look at the execution plan. 

[pause]

It’s perfect. We’re back to the plan the optimizer generated in the first place. We’re processing the emp table just once to construct the factored-out view, and then we probe the index on dept – twice, because we got two rows back. I’ve manually told the optimizer to do what it was doing before, without relying on it to spot the rewrite. And that’s what your programmers should be doing: rather than trusting the optimizer to do the rewrite, they should be writing it that way themselves. 

>> Dave:  I’m sorry. Can I interject a couple of things? Really good stuff. Just to recap what you’ve just seen: three equivalent SQLs. You saw the correlated subquery, then you saw the natural join, and then finally the subquery factoring with the WITH clause. As John has demonstrated to us, the WITH clause is the most solid coding of it because you’re telling the optimizer what you want to do. 

John, we’ve got a bunch of things in from the students here. 

“One thing, your audio is a little funky, so if you can speak up a little and go a little slower, that would be great. If you can increase your font size, that would also be great. Not an emergency on that, though.” 

And a couple of questions. In the last two explain plans, we had what seemed to be two nested loops, and we were calling it a nested loop join. Folks, that’s how the new version, Oracle 11g, externalizes a single nested loop join. We cover it in much more detail in the SQL tuning class, where we compare the 9i and 10g versions versus the 11g version, but that is actually a single nested loop join between two tables, emp and dept. It’s just how Oracle externalizes it: it works on a subset at a time, so it loops twice. It’s more efficient than the old way. 

The other question in the queue was related to statistics level. You had altered the session to set statistics_level = all. That’s the initialization parameter STATISTICS_LEVEL. By default, and usually, it should be set to TYPICAL, which collects things like timed statistics, AWR and ADDM statistics, and all of the good things that we want. If you set it to ALL, it collects extra statistics for your SQLs, but it has a little overhead, so you probably would not want it set to ALL all the time. 

It can also be done with the gather_plan_statistics hint. If time permits, you might see an example of the gather_plan_statistics hint instead of altering the session’s statistics level. 
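For reference, the hint-based alternative would look like this; the hint collects row-level execution statistics for just this one statement:

    SELECT /*+ gather_plan_statistics */ dname
    FROM dept d
    WHERE 10000 > (SELECT SUM(sal)
                   FROM emp e
                   WHERE e.deptno = d.deptno);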

[pause]

Finally, there is a question on dbms_xplan.display_cursor. What that does is display the execution plan of any cursor that is still in the cursor cache, plus execution statistics if you’ve used the previous tactic of statistics_level = ALL or the gather_plan_statistics hint. Usually with dbms_xplan.display_cursor you want to set serveroutput off, or else you’ll get an error. 

There are a couple of parameters. The first two are the SQL ID and child number; if you leave them null, null, as John did, it will retrieve the execution plan of the last query executed in that session. The third parameter is FORMAT, and ‘ALLSTATS LAST’ basically says: give all the I/O and memory statistics available, and display them only for the last execution of this query, even if there have been multiple executions. 
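Spelled out with the documented parameter names:

    SELECT *
    FROM table(dbms_xplan.display_cursor(
        sql_id          => NULL,           -- NULL: last statement in this session
        cursor_child_no => NULL,
        format          => 'ALLSTATS LAST' -- all stats, last execution only
    ));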

You can go to the Oracle PL/SQL Packages and Types Reference, look up DBMS_XPLAN, and find the DISPLAY_CURSOR function; there you can see all of the options for FORMAT. There are some quite interesting ones. 

[pause]

So with that little diversion done, John, can you show the WITH statement?

>> John:  Yes, I will do. Thank you for explaining that, by the way. I’m sorry, people; I probably was assuming a bit too much knowledge. What I’m going to do now is go back to the WITH statement. I just wanted to do one more experiment. I still have the optimizer mode set to rule. I go back to my equivalent SQL here, the one with the WITH clause. 

With that as the factored-out subquery, then find the appropriate departments and run it. Even with the rule-based optimizer, I sincerely hope that I’m now getting the optimal plan, which I do. What this really does bring home to you is that wherever you see a join or a subquery, especially when it’s a correlated subquery, think about factoring it out. Back to you, Dave. 


Transcript

Star Transformations

Oracle SQL Tuning

3. Star Transformations

[music]

>> John:  Star transformations. When we were planning this little webinar, we really wanted to make sure that we didn’t cover any topics with licensing implications; you can pay us, of course, but nothing needing extra licenses. The classic star transformation is an Enterprise Edition feature. But one thing I do want to show is not only what a star transformation is in terms of equivalent SQLs, but also how you can achieve the effect of a star transformation with Standard Edition. That’s what I really want to come down to, and it comes down to the way you write your SQL. 

[pause]

What is the concept of the star transformation? You rewrite joins as subqueries. 

[pause]

Trivial example: if I were to select ename from emp, again natural join dept, where dname = ‘SALES’. I want to pull out all the employees in the sales department. What happens? We get six employees back. I can rewrite that as an equivalent SQL: select ename from emp where deptno = (select deptno from dept where dname = ‘SALES’). 
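The two forms, reconstructed:

    -- Join form
    SELECT ename
    FROM emp NATURAL JOIN dept
    WHERE dname = 'SALES';

    -- Subquery form: the join rewritten as a subquery
    SELECT ename
    FROM emp
    WHERE deptno = (SELECT deptno FROM dept WHERE dname = 'SALES');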

That’s exactly an equivalent SQL; we get the same rows back. I can assure you of that, and I won’t even bother to prove it. I’ll go to the cost-based optimizer at this stage, and the optimizer will run both with exactly the same execution plan. Do your own checks if you want, but it’s not going to get that one wrong. 

[pause]

A true star transformation takes that a lot further, because what I’ve got in effect here is one fact table, emp, and a single dimension table, dept. Not many data warehouses have a fact table with one dimension; they have many dimensions. 

What I’ve done to work up an example here is create a little star schema: a fact table and two dimension tables. I created these ahead of time. There’s a little table called products, and every product has a code, which is the primary key. There’s a table called buyers; every buyer has a code and is in a particular country. So those are my two dimensions, the products dimension and the buyers dimension, and I’ve got a fact table, sales, which says that every sale was of a certain value, of a certain product, to a certain buyer. That’s my star schema: a fact table and two dimension tables. 
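A minimal sketch of the schema as described; the table and column names come from the demo, while the data types and sizes are assumptions:

    CREATE TABLE products (
        prod_code NUMBER PRIMARY KEY,
        product   VARCHAR2(30)
    );
    CREATE TABLE buyers (
        buy_code  NUMBER PRIMARY KEY,
        country   VARCHAR2(30)
    );
    CREATE TABLE sales (
        prod_code NUMBER REFERENCES products,
        buy_code  NUMBER REFERENCES buyers,
        value     NUMBER
    );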

[pause]

I’ve already analyzed the tables, by the way, to gather statistics. Optimizer statistics are of course critical. 

Star transformation requires bitmap indexes, which is why it needs Enterprise Edition. I’ll just create my bitmap indexes: create bitmap index pi on sales (prod_code), and we get my bitmap index. This of course is where we go straight to Enterprise Edition: the moment you create a bitmap index, you need to be Enterprise Edition licensed. This is taking a while, by the way, because there are a million rows in my sales table; I did want to get some sort of sensible figures out of it. I’ll create another bitmap index, which I’ll call bi, on the buyers code. 
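The index creation, reconstructed:

    -- Bitmap indexes on the fact table's foreign key columns
    -- (creating a bitmap index requires Enterprise Edition)
    CREATE BITMAP INDEX pi ON sales (prod_code);
    CREATE BITMAP INDEX bi ON sales (buy_code);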

So I’ve got my bitmap indexes. Now I’ll run my little star query, first without a star transformation. I’ve still got statistics_level set to ALL. As I said, there’s a hint you can put in instead, but I’ll just leave it set at the session level, which makes life a bit easier. The query I need to run will be a star query that joins the two dimensions across the fact table. 

For example, I could select the sum of my measure, which is the column called value in sales: sum(value) from sales, then joining it to one dimension, natural join products, and then natural join buyers. So I’m going across the fact table to both dimensions. 

Let me put in the predicates: where product = ‘BOOKS’ and country = ‘GERMANY’. So, how many sales of books have we had to Germany? 
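The star query, reconstructed:

    SELECT SUM(value)
    FROM sales
         NATURAL JOIN products
         NATURAL JOIN buyers
    WHERE product = 'BOOKS'
      AND country = 'GERMANY';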

[pause]

It’s taking a fair few seconds to come up with the answer. 

[pause]

Back comes the answer. That took a while. How did Oracle actually run that? Look at the execution plan. 

[pause]

Here we see what it did. Navigating down: a full scan of products, applying the filter to get books – that gave us the product code for books. Then it did a full scan of buyers, applying the filter for buyers in Germany. And it joined them with a Cartesian product, and that’s fine, because I analyzed the tables: Oracle knows that it’s only one row from each side going into that Cartesian product. 

Then the bad news: it’s done a hash join to sales. What that means is we’ve had to process a million rows, as you can see. It has to do a million lookups to retrieve the matching rows. That’s the way Oracle would do it without a star transformation. 

Now we enable star transformations. By the way, I hope you all appreciate how brave I am to do this sort of thing. With these little demonstration systems, the tiniest environmental factor can cause the optimizer to come up with a totally different plan. But anyway, alter session set star –

>> Dave:  I’m impressed even if no one else is.

>> John:  Good. Everyone can laugh at me if this sort of thing doesn’t work. 

We set star_transformation_enabled = true. This parameter defaults to FALSE. I never quite understood why, and I can’t think why you would ever want it false if you’ve bought your Enterprise Edition licenses. Just set it to true and then run the query again. 
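That is:

    -- Defaults to FALSE; only has an effect on Enterprise Edition
    ALTER SESSION SET star_transformation_enabled = TRUE;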

[pause]

Maybe that was quicker. Then let’s see what the execution plan was. Star transformation used. Whew, it works. 

A totally different approach. If you navigate down here, in effect it’s done the rewrite I described earlier: it’s rewritten the query as a set of subqueries. What we’re doing is a bitmap index range scan of my bitmap index on the product code. 

[pause]

So it went to products first, applying the filter for books, to identify the product code. Then it scanned the bitmap index to get the row IDs of all the sales for books. That, in effect, is saying: select from sales where prod_code in (select prod_code from products where product = ‘BOOKS’). 

Then it does the same thing with the buyers: it scans the buyers table to find the code for Germany and then scans the bitmap index to get the relevant rows. At this point it’s got two lists of interesting rows (rows about sales of books, rows about sales to Germany), and it merges those two bitmaps with an AND, so in effect it’s getting the intersection of the two sets. Then it converts the bitmaps to row IDs and constructs the final result set. A totally different rewrite, and the optimizer did it automatically; no need to rewrite the SQL. But it can only do that if you’ve got Enterprise Edition licenses. 

What if you don’t have Enterprise Edition licenses?

>> Dave:  John, before you start that, just to recap: we need Enterprise Edition, but I think you also said that that parameter, star_transformation_enabled, is set to false by default?

>> John:  It defaults to false, yes. A strange default, but that’s the way it is. 

Now what I’ll do is in effect revert to Standard Edition. So I’ve got to get rid of my indexes. 

[pause]

I drop my bitmap index on the buyers code and my bitmap index on the products code, because they are now illegal. Let’s see if I can get the same results with an equivalent SQL. I will set timing on at some point, by the way; we want to check the timings. 

Since we’re here: for my star transformation, the actual time was 2.79 seconds, whereas previously, without the star transformation, it was 9 seconds. So the star transformation has improved things quite substantially, 9 seconds down to under 3. 

Now I’ll try a manual rewrite as an equivalent SQL. I can’t use star transformations anymore; I don’t even have the indexes for it. So: select sum(value) from sales where prod_id = (select prod_code from products where product = ‘BOOKS’) – that’s the first dimension – and buy_code = (select buy_code from buyers where country = ‘GERMANY’). Cross my fingers and see what result we get. 

[pause]

We get a “missing right parenthesis” error. Ah, we have a spelling mistake. 

>> Dave:  Missing a C in the select statement.

>> John:  Thank you. That’s probably the first of many mistakes. And it’s not prod_id, it’s prod_code. 

[pause]

It’s prod_code now, and select is spelled correctly. And that one looks okay. 
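The corrected manual rewrite, reconstructed from the demo:

    -- Star-transformation-style rewrite using plain subqueries:
    -- no bitmap indexes needed, so legal on Standard Edition
    SELECT SUM(value)
    FROM sales
    WHERE prod_code = (SELECT prod_code FROM products WHERE product = 'BOOKS')
      AND buy_code  = (SELECT buy_code  FROM buyers  WHERE country = 'GERMANY');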

[pause]

That seemed reasonably quick, didn’t it? And I’ve got the same result, so that really is an equivalent SQL. How was it actually executed, and how long did it take? 

We went from 9 seconds to under 3 seconds with the Enterprise Edition feature. With my rewrite –

[pause]

Good heavens. It was even quicker: barely half a second. So I’ve actually outperformed the optimizer. You can see the statement has been executed completely differently. 

[pause]

My equivalent SQL, the manual rewrite, has come up with a totally different execution plan. What we’re doing now is a scan of products to retrieve the interesting code, then a scan of buyers to get the other interesting code, and from there we go straight to sales and apply the appropriate filter, which you see here. 

Note, by the way (the earlier question again): all three of these operations can be offloaded to Exadata. But best of all, even the filters are offloaded. This entire query, in an Exadata environment, would be executed on the storage tier. That’s an unexpected side effect; I wasn’t expecting something quite so dramatic. But it’s a very nice example, I think, of how the equivalent SQL has outperformed, astronomically, even the Enterprise Edition star transformation, and it was done with Standard Edition features. All done in the code.

>> Dave:  Could you do us a favor and show us the equivalent SQLs side by side there? The first one you did; scroll up to the top.

[pause]

>> John: Yes. That was my rewrite of the star transformation. Now I’m going to scroll up. My original SQL was –

[pause]

There. Select sum(value) from sales, join products, join buyers, where product = ‘BOOKS’ – this is intuitive, isn’t it? Many people will think this is much easier to understand, but the performance is rubbish compared to my equivalent SQL. 

[pause]

>> Dave:  Yeah. More and more we see customers saving tens of thousands of dollars by going to Standard Edition. This is a great technique in the event you don’t have Enterprise Edition. So, John, maybe you’d better answer this one: why don’t you use TOAD or PL/SQL Developer?

>> John:  I happen to think that TOAD should be stamped on.

>> Dave:  Yeah. I’ll give you my opinion, because I don’t like it. But I’ll turn it over to you now, John.

>> John:  There’s nothing wrong with TOAD at all. Absolutely nothing. I don’t use it, firstly, because you have to pay for it, and Dave is chronically mean when it comes to paying for tools that would make my life easier. But much more importantly, because we work at many, many sites, you have to work all the time with the lowest common denominator, and that’s SQL*Plus. That’s really the reason why I use SQL*Plus all the time and why I can’t use TOAD: I can’t go on a customer site and say, “Hey, I can’t do this unless you buy a TOAD license.” But it’s a good product, and if you’ve got the license, use it.


Transcript

Aggregations

Oracle SQL Tuning 

4. Aggregations

[music]

>> John:  The next thing I want to look at is aggregations. Aggregations can be a real problem, particularly in data warehouse type environments. 

[pause]

Often the problem is needing several aggregations. Imagine you want to sum up your sales by customer, by city, and by country; by day, by month, and by year; by product, by channel, and so on. Each of these aggregations may mean a complete pass through the data, and that can be disastrous. If you want to aggregate along multiple dimensions, it’s multiple passes.

You can get around a lot of this from 9i onwards by using GROUP BY ROLLUP and GROUP BY CUBE. I’m not going to talk about them now; let’s talk about ROLLUP and CUBE another day.

What I do want to show is that multiple aggregations, which people tend to be frightened of, and justifiably so, are not necessarily a bad thing. Not necessarily a bad thing at all, as long as they’re done the correct way. And that’s where, guess what, your usual equivalent SQL may mean that aggregations are not a problem.

[pause]

The example I’m going to give, I just need to create a table.

[pause]

I’ll create a table with about 80,000 rows. Did I spell it correctly? That will be one table. I’m constructing a pretty artificial example here, but of course it will do for the purpose of showing SQL equivalence and the effect of aggregations on execution plans.

I’ll create another table. That was foo. Now I’ll create a table called goo.

[pause]

That gives me two tables with about 80,000 rows each.
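The demo doesn’t show where the rows come from; one way to build a similar pair of tables (an assumption, not John’s actual script) would be:

    -- Two identical heap tables of very roughly 80,000 rows each,
    -- depending on the contents of your data dictionary
    CREATE TABLE foo AS SELECT * FROM all_objects;
    CREATE TABLE goo AS SELECT * FROM all_objects;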

[pause]

The first query might be – I’m not actually going to run it, to save time; just explain plan for – let’s say I simply want to count the rows. How do I count the rows of my two tables? Select count(*) from (select * from foo union all select * from goo).

How many rows do I have in total? I’m not really interested in the answer, but I am interested in the execution plan.
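As typed, roughly:

    EXPLAIN PLAN FOR
    SELECT COUNT(*)
    FROM (SELECT * FROM foo
          UNION ALL
          SELECT * FROM goo);

    -- Assuming the plan is then displayed the usual way:
    SELECT * FROM table(dbms_xplan.display);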

[pause]

What did it do? It scanned both tables, did the union all, and did a sort aggregate to count the result. Now take an alternative, an equivalent SQL.

[pause]

Let me explain plan for: select the sum of something, sum(c), from (select count(*) c from foo union all select count(*) c from goo). So, count the first table, union all with the count of the second table, and sum the two counts. I’m sure you can see that’s an equivalent SQL; it’s going to get the same result.    
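Reconstructed:

    EXPLAIN PLAN FOR
    SELECT SUM(c)
    FROM (SELECT COUNT(*) c FROM foo
          UNION ALL
          SELECT COUNT(*) c FROM goo);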

What execution plan has it come up with? It looks a lot worse, doesn’t it? That first plan looked really, really efficient: we scan the tables, we count the results. Clearly, how else can you get the count?

You could get it this way, but it looks far worse. What I have now is: I’ve scanned one table, sure, but then I’ve done a sort and an aggregation; scanned the other table, and done another sort and aggregation.

And then a third one. This looks appalling: there are three sorts, three aggregations. Surely that’s going to be worse? Let’s actually run them and see if it is. If I run the statement – this was my simple statement – I’ll actually run it. It’s taking a few seconds to come through. There come the rows.

[pause]

Looking at the timings for that, and how it actually ran: that’s the plan that was used, and it took 2.5 seconds. Fair enough.

[pause]

Moving on, then, to the other one, my equivalent SQL, the variation which did three aggregations, not one.

>> Dave:  John?

[pause]

>> John:  Yes?

>> Dave:  A quick question. The question is: do you need to gather stats on these new tables for the optimizer to run these queries correctly? I’ll take a guess at the answer: I think it’s dynamic sampling.

>> John:  In this particular case, I don’t, because the optimizer has no choices; there are, in this case, no indexes, so it really doesn’t make any difference. It’s interesting to note that the dynamic sampling is in fact not perfect. Those tables are identical, and you can see from the actual rows that they’re identical, yet the dynamic sampling rather interestingly didn’t get the precise figures. But that will not be distorting this example. 

[pause]

>> Dave:  Perfect. Thank you, John.

>> John:  Running that one. That was quicker, wasn’t it? Well, that’s harder to prove, but let’s look at this more complex one.

The previous one took 2.5 seconds. This took about half a second. The final, complex plan with three aggregations was in fact four times as quick. I hadn’t expected as big a benefit as that: my more complex plan, the equivalent SQL, executed in a quarter of the time.

If you’re interested in why, the answer is pretty straightforward: it’s the number of rows being passed up the plan. In the first plan, as we see in the A-Rows column, I’m passing 145,000 rows up through a view to be sorted. In the second, I’m reducing each 72,000-row table scan to one row, so I’m passing a total of two rows up for the final aggregation. That’s why it’s so much faster. But Oracle was simply incapable of working that out itself. 
  
What conclusion can we draw from this? Don’t be frightened of aggregations. Don’t be frightened of complex plans. In some cases, you’ll find that a more complex plan can in fact be far faster, even if it means materializing views at various stages.  


Transcript

Summary

Oracle SQL Tuning

5. Summary

[music]

>> John:  To conclude: there will be many cases where there are equivalent SQLs, totally different SQL structures that return the same results, but because of the way the Oracle optimizer works, or doesn’t work, some structures may be far more efficient than others. The cost-based optimizer will of course do all sorts of rewrites, but it needs all the help it can get. 

What I hope I’ve done in this short session is point out a few of the structures that look suspicious, a few of the structures that we would always question, and I hope I’ve given you some tools to perhaps improve your own SQL. In the meantime, back to you, Dave.

[pause]

>> Dave:  Great. Thanks, John. 

[pause]

During our normal online classes, folks, just so you know, there is two-way communication between your instructor (myself or John) and you. Chat is just one of the methods we use on the larger webinars, because we have a few hundred people here today and we would have chaos if we opened the mics for everyone. But in the online training classes, obviously, the mics are open and you can use your telephone or Voice over IP to communicate with your instructor. 

We do hope that we see you soon, either in another webinar or if you need support for your Oracle database; we do build and support Oracle servers, systems, and applications. Many of you probably know the person who heads up our APEX team. We’re doing a lot of APEX work for a lot of different people, so if you need some APEX help, let us know. And of course here on our side of the fence: DBA, remote DBA, Oracle support, any Oracle service from installation on up, it’s what we do. 

We also teach Oracle [2:09 inaudible] Oracle SQL tuning. There are two classes coming up very soon. We have a class that runs in the afternoons on Tuesdays, one session a week, starting September 18th, just three hours a day; we’ve found that to be convenient for some people. And if you want to do it in a single week of half-day sessions, that starts October 15th. 

Finally, yes, this lesson has been recorded. Assuming everything has gone smoothly, we will format the recording, put it up on the website, and send you all a link, so that you can review and learn from it as time goes by. 

I hope you’ll pass our name around, and pass around the links to our website, where you can find more information on us. It’s SkillBuilders.com. If you go to www.SkillBuilders.com/oracle you’ll find John and myself and our team, our bios, and lots of good information about us and what we do. 

[pause]

That’s it. Thank you all very much. I appreciate your participation and your questions, and I know John does as well. We’ll be signing off for today. Have a good day, everyone.

Copyright SkillBuilders.com 2017
