
HCC Limitations and Best Practices

HCC is undoubtedly good technology, but there are some limitations and issues that I think we need to be aware of.

First off, it works only for direct loads. I believe this is a genuine technology limitation: because the compression is done in units of four blocks, it's hard to see how that kind of operation could ever be performed in the database buffer cache. So you need direct loads, or it can't work at all.
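
To make that concrete, here's a minimal sketch, not from the original talk, using made-up table names (sales_hcc, sales_staging): only direct-path operations such as CREATE TABLE AS SELECT or an INSERT with the APPEND hint produce HCC-compressed rows.

    -- Hypothetical example: HCC applies only on direct-path loads.
    CREATE TABLE sales_hcc
      COMPRESS FOR QUERY HIGH
    AS SELECT * FROM sales;          -- CTAS is a direct load: rows are HCC compressed

    INSERT /*+ APPEND */ INTO sales_hcc
    SELECT * FROM sales_staging;     -- direct-path insert: rows are HCC compressed
    COMMIT;

    INSERT INTO sales_hcc
    SELECT * FROM sales_staging;     -- conventional insert: rows are NOT HCC compressed
    COMMIT;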

The four-block compression unit itself, I can't see why that would be a technology limitation, and it wouldn't surprise me if, in a later release, it becomes something that's tunable. But certainly in the current release, four blocks is what you have to work with, and that can be significant when it comes to thinking about the structure of your objects, the clustering, the way your data is actually stored within the tables, and that may require some tuning to suit the four-block compression unit size.
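
As a hedged illustration of that tuning point (my addition, with invented table and column names): a common approach is to order the data on load so that similar values land together within each compression unit, which generally improves the compression ratio.

    -- Illustrative sketch: load the data sorted so like values share a compression unit
    CREATE TABLE sales_hcc_sorted
      COMPRESS FOR ARCHIVE LOW
    AS SELECT *
       FROM   sales
       ORDER  BY region, product_id;  -- clusters similar values into the same units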

Next, the compression ratio itself. DML against HCC objects is definitely not a good idea. The way it's actually implemented, if you do an update against a compressed row, the data is decompressed, the DML is then executed, and when the data is saved back into the segment, it gets compressed with basic compression, or deduplicated compression.

So what you end up with is a table that's part HCC compressed and part deduplicated. Inevitably, as DML occurs against an HCC table, you'll find your compression ratio degrades, and there may also be an impact on the time it takes to perform the DML because of the extra work involved.
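
One way to watch that degradation happen, as a sketch (the table and column names are hypothetical; the numeric codes come from the DBMS_COMPRESSION package constants and should be checked against your release), is to ask what compression individual rows actually have after an update:

    UPDATE sales_hcc SET amount = amount * 1.1 WHERE region = 'WEST';
    COMMIT;

    -- Sample a few updated rows and check their current compression type
    SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'SALES_HCC', rowid) AS comp_type
    FROM   sales_hcc
    WHERE  region = 'WEST'
    AND    ROWNUM <= 10;
    -- e.g. 4 = COMP_FOR_QUERY_HIGH (still HCC), 1 = COMP_NOCOMPRESS,
    --      2 = COMP_FOR_OLTP: rows that have migrated out of their compression units

    -- Rebuilding the segment is one way to restore the original ratio
    ALTER TABLE sales_hcc MOVE COMPRESS FOR QUERY HIGH;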

Finally, one point that probably isn't important to many people. Normally the cells do the compression and the decompression, and they serve decompressed data, just rows or blocks, back to the compute nodes. However, if the cell nodes are working flat out and CPU usage is running at 100%, the Exadata software can decide to serve complete compression units back to the compute nodes, and the compute nodes then have to take the hit of doing the decompression.

I would say that if you manage to drive your Exadata systems to the point where you're running short of CPU on the cell nodes, you'll be doing very well indeed, but it's just something I want to highlight. There's no way I'm going to be able to demonstrate that, not on the systems I've got here, but I will try to demonstrate one or two of the other issues that may be significant to you.
