Transcript

Introduction to John Watson, SkillBuilders Director of Oracle Database Services

Lightning Fast Cloning

 

Session 1 – Intro to John Watson

 

[music]

 

>> Dave:  Welcome. Welcome to another SkillBuilders webinar.

 

Today you will learn how to clone Oracle databases – even very large databases – in seconds, using technology your company probably already owns.

 

[pause]

 

Experienced DBAs, network and system administrators will benefit from this training. I think that IT management will also benefit from understanding the implications of this fantastic technique that John will teach us.

 

[pause]

 

Slide two please.

 

Today’s instructor is Oracle Certified Master John Watson, SkillBuilders’ Director of Oracle Database Services, which includes hands-on consulting support for SkillBuilders customers. John is the author of three Oracle Press exam guides, holds numerous Oracle certifications, and is in the process of writing the Oracle Press 12c exam guide.

 

[pause]

 

My name is Dave Anderson, SkillBuilders’ President. I’ll moderate today’s session and look for your questions in the chat window.

 

[pause]

 

Now I’d like to bring John in for today’s session. Welcome, John.

 

>> John:  Thank you for the introduction, Dave.

 


Agenda

Lightning Fast Cloning

 

Session 2 – Tutorial Agenda

 

[music]

 

>> John:  Now, what I want to run through in our lunchtime session – well, lunchtime depending on your time zone – is the ability to clone databases. I’ll go through why people clone databases very briefly. I’m sure some of you get extremely frustrated with continuous requests from end users to clone databases, but some of you don’t, in which case I’ll run through some of the common reasons for cloning databases.

 

[pause]

 

And a very quick look at the traditional method of cloning databases, which I’m reasonably sure many of you will be doing already one way or another. But what I want to concentrate on is a new technique for cloning databases, introduced with the later releases of 11g and formally documented in 12c. It is potentially very powerful indeed.

 

Cloning a multi-terabyte database can, in theory, be done in just a couple of minutes, with very little stress on the hardware. It takes no disk space – or minimal disk space at least – and it’s easy and not prone to error.

 


Clones, Clones and More Clones. Too Many Clones?

Lightning Fast Cloning

 

Session 3 – Too Many Clones

 

[music]

 

>> John:  To begin with then: why do people clone databases? We have some customers who have made six or seven copies – about half a dozen copies – of every production database. Why are there so many?

 

User [00:30 inaudible] testing systems, of course; quality assurance testing systems; development systems. One customer works with independent development teams, and each of perhaps four development teams insists on having its own clone of the production database. Why? Because they will not work against a shared copy – and that’s how you end up with half a dozen copies of every production database.

 

Then you get the reporting databases. These will be databases that are frozen at a point in time so you can run a whole series of reports over a couple of days – as of the end of last week, for instance. There are many sites that have multiple reporting databases as well. All of these are going to be based on clones made one way or another.

 

[pause]

 

Now, how do you create these clones? One very nice way is to use Data Guard. Of course, if you have Enterprise Edition licenses – particularly if you’ve licensed the Active Data Guard option – then Data Guard is a very nice option for reporting databases, using snapshot standbys. They can also be used for development systems. But Data Guard is something we talk about elsewhere, in other lectures.

 

One technique I do want to mention in the context of cloning, because it is not as widely used as it perhaps should be, is editioning and versioning. These facilities will often mean that you don’t need as many clones as you might think you would need. So I want to take a very quick look at least at versioning – I don’t think I have time to go into editioning. These are facilities that people need to know about, and they are not as widely used as perhaps they should be.

 

So as a very quick example, right now I’m logged on to the standard scott schema. I happen to be using a 12c database, but it doesn’t matter – the technique I’m going to demonstrate has been available for many, many years.

 

First off, to set up the versioning technique as an alternative to a clone, we choose the table that we want to enable versioning for – in this case just emp – and enable versioning. That’s going to create a bunch of objects. If you reverse engineer it, you will see large numbers of triggers and views being created.
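As a minimal sketch of that call – assuming the demo’s scott.emp table and the standard Workspace Manager package – it is a single procedure call from SQL*Plus:

execute dbms_wm.EnableVersioning('EMP')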

 

>> Dave:  Question in the queue John. What is dbms_wm?

 

>> John:  This is the Workspace Manager package. It enables the versioning capability. I’m a bit concerned about the delay there – why that’s taking so long.

 

[pause]

 

It’s a standard facility available in – well, there we go. That took a while. A standard facility available in any database. It sets up the versioning capability, which is often a sensible alternative to cloning. I’ve enabled what we call versioning just for the emp table.

 

If you look at the objects that were created – select object_name, object_type from user_objects – we’ll see what has actually happened.

 

[pause]

 

I have an extra table created called emp_lt. Emp itself is now a view, and a number of other objects have been created as well. There’s a lot going on in the background when I run that procedure call. As far as the end users are concerned – select star from emp – it’s just the emp table like any other; absolutely normal SELECT queries, DML and PL/SQL will run against it.

 

[pause]

 

But what we can now do is create a workspace. The next procedure call is dbms_wm.CreateWorkspace, giving it a name – an arbitrary name, myws, my workspace. Still the end users don’t know anything has happened. Absolutely normal queries against emp will run; a full PL/SQL application will run absolutely as normal at this point.

 

But what I can do is, as a particular user, move to the workspace. Having created that workspace myws, I can now move to it. I’m now in a logically self-contained, separate database. For example, I can delete from emp – 14 rows deleted – and I can commit.

 

[pause]

 

Sounds pretty drastic, but it isn’t drastic at all, because any other user will still be seeing what we call the live data. Here’s sqlplus scott/tiger, and he says “select star from emp” and he sees everything there, because he is not in the workspace where the changes were done.

 

If I want to go back to the live data, I can move to what we call the LIVE workspace. If I copy and paste correctly – there we go – I can move to the LIVE workspace, and now indeed I will see the live data.
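Pulling those calls together – a minimal sketch, again assuming scott.emp and the workspace name used in the demo:

execute dbms_wm.CreateWorkspace('MYWS')
execute dbms_wm.GotoWorkspace('MYWS')
delete from emp;              -- visible only inside the MYWS workspace
commit;
execute dbms_wm.GotoWorkspace('LIVE')
select count(*) from emp;     -- the live data is untouched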

 

I can switch between workspaces in seconds – go to the LIVE workspace, go to my own workspace again – with no worries. A very nice facility.

 

I just wanted to mention this because it often means you don’t need as many clones as you might think you actually need. Editioning takes things a step further. As well as maintaining a virtual database – your own copy of the data that you can do what you like with for testing, development, whatever – the editioning capability lets you maintain multiple copies of your software within one database.

 

So many people will be doing an awful lot of clones, perhaps more than they actually need.

 


Cloning the Old Way

Lightning Fast Cloning

 

Session 4 – Cloning the Old Way

 

[music]

 

>> John:  So how do you do it? Right. The traditional way of cloning your database.

 

Well, I’ll go from the bottom up on this slide. Do it manually: you copy the entire database to a different location, you create a control file for it, you change the name, you change the DBID, you open resetlogs, and there you are with a clone database. Fine. But with clone databases, I always think the whole routine –

 

[pause]

 

I’d never want to do it that way. I’ve seen too many errors. It’s dreadfully prone to error, particularly if the clone is on the same machine as the source. One of our clients, about a year ago, destroyed their production database during their manual cloning routine. Why? A minor mistake in the script, and they overwrote the control file of the production database with the control file of the clone. It’s very prone to error.

 

[pause]

 

Alternative technique – Data Guard. Much better, but you’ve got licensing issues of course: Enterprise Edition. Particularly if you want to use Real Application Testing it becomes even more expensive, but it is certainly a good solution provided licensing is not an issue.

 

Most people will be cloning with RMAN. I certainly would generally be cloning with RMAN. The RMAN technique is not nearly so prone to error – it will make sure you never accidentally overwrite the source database – but it’s still quite a lot of work to set up.

 


Cloning Issues and Sample Script

Lightning Fast Cloning

 

Session 5 – Cloning Issues and Sample Script

 

[music]

 

>> John:  Cloning the old way. Do it with RMAN – probably the best technique. It can be scripted. And as an example of scripting it, let me pull up my shell script.

 

[pause]

 

This is an example of a script.

 

[pause]

 

Something of a work in progress that we wrote a while back. This was originally developed for a particular client who needed to clone databases every day. A whole set of databases was cloned every night for reporting purposes for the next day, and all the clones were remade, I think, every month. In addition to that, large numbers of ad hoc clones were done from time to time for building up the development systems.

 

[pause]

 

So this script automates the whole process. Some of it is just fairly standard UNIX shell scripting: we email messages when things happen, and this is all going to be scheduled through cron. We use a function here to work out the dates – in this case setting the time of the clone to 8:00 in the evening, because that was the point in time the cloned database was meant to be consistent to for reporting.

 

[pause]

 

It prompts for what the databases should be: the source database name, the destination database name –

 

[pause]

 

numerous checks. Numerous checks have to be gone through – whether you’re allowed to run the script, for instance.

 

[pause]

 

And eventually we get to the guts of it, which begin here. We connect to the clone database, terminate the clone, drop the database. And this is where the bad news starts: our downtime begins here. The reporting system is offline as of this point. Having dropped the old clone, what do we do next?

 

[pause]

 

Generate a parameter file for the new version of the clone – because the spfile would have been destroyed by the drop – create a parameter file for the clone, and start up off that parameter file. Then finally connect to RMAN, set our until time, and create the database: we connect to the catalog, we connect to the target, we connect to the auxiliary, and at last we can run the duplicate target database command.
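The core of that step looks roughly like this – a minimal sketch, with the connection strings and the until time as placeholder assumptions rather than the client’s actual script, and assuming the auxiliary instance has already been started nomount from the freshly generated parameter file:

rman
RMAN> connect catalog rman/rman@rcat
RMAN> connect target sys/oracle@orclz
RMAN> connect auxiliary sys/oracle@clone1
RMAN> run {
        set until time "to_date('2017-01-01 20:00','yyyy-mm-dd hh24:mi')";
        duplicate target database to clone1;
      }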

 

If this database is many hundreds of gigabytes – possibly terabytes – that’s an awful lot of downtime as we go through that duplicate. Afterwards, convert to archivelog mode, a few checks, and finally after all that we can make the database available to users: alter database open, and users can now log on.

 

That’s an awful lot of downtime as the entire database is first destroyed and then reproduced. In fact, if you look at the script – yes, 344 lines of shell script to do that.

 

So we can do this. We can clone databases using scripts on UNIX, and we can also do it on Windows, but it’s a lot of work – a lot of work, a lot of disk space, and a lot of downtime while the clone is in progress. So now we can move on to the new technique.

 


Introducing DNFS Copy on Update to Clone

Lightning Fast Cloning of Oracle Databases

 

Session 6 – Introducing DNFS Copy on Update to Clone

 

[music]

 

>> John:  This uses Oracle’s Direct NFS driver. The Direct NFS driver was introduced in version 11, and with the later releases – 11.2.0.2 – this facility was just slotted in to what you could do with Direct NFS.

 

[pause]

 

The ability that was introduced in 11.2.0.2 was a copy-on-update capability. When you get to your data files through the Direct NFS driver, we have the capability to point the database instance towards, in effect, two copies of the data files. One copy is a read-only, totally static version of the data files. The other copy of the data files stores only changes.

 

So in normal operation, end users connect to the clone and all their queries operate against the read-only, frozen backup of the data files. Whenever they do DML, in the background the Direct NFS driver copies just the changed blocks to a copy of the data file that is specific to that clone.

 

[pause]

 

So what does that mean? You back up your source database once only – a one-off backup. Then you create as many clones as you want – one clone, two clones, three clones, four clones, as many as you want – virtually instantaneously, because they will all be reading from that original copy.

 

They can be on different machines or on the same machine. The copy can be on a local machine or on a remote machine. But one way or another, the clones will be instances reading from one copy, and whenever they do DML the changed blocks are written to storage local and specific to each clone. So each clone requires minimal extra storage.

 

The clones appear to users to be completely independent. You can do anything with them – DML, DDL, queries, anything at all. The end result is that a multi-terabyte clone can be created in just a couple of minutes, and the multi-terabyte clone will take up, in effect, zero disk space until you start doing a lot of DML against it.

 

[pause]

 

Now, this was introduced in 11.2.0.2, and with 12c it has been formalized – formalized quite nicely – and Oracle even provides a script [2:24 inaudible] that makes it easier. With earlier releases it’s a bit tricky to set up, whereas it’s pretty straightforward now.

 

Note that it does rely on using Direct NFS, but you do not need an NFS server. The way I’m going to demonstrate it now is all on one machine: I’ll create an NFS share, but it will just loop back to a local file system. Yes, we’re using the Direct NFS client, but we are not in fact using any networking capability.

 

 


DNFS Cloning Technique and Demonstration

Lightning Fast Cloning

 

Session 7 – DNFS Cloning Technique and Demonstration

 

[music]

 

>> John:  What I’m going to run through now – and I hope you all appreciate how brave it is to do this sort of live demonstration – is the technique for rapid cloning with Direct NFS. We need to configure NFS shares. I’m going to do it locally, so I’m not actually using NFS in any meaningful way; it’s just that to get access to the driver I do have to configure it.

 

[pause]

 

We back up the source database – that one-off copy – and it must be an image copy. It doesn’t in fact have to be an RMAN image copy: if you have the ability to split mirrors or take read-only snapshots on the SAN, they’ll be perfectly acceptable, as long as it is some sort of image copy of the data files.

 

Then we create a parameter file and a control file. With 12c that’s very, very easy because Oracle provides the scripts to do it. It was harder work with release 11.2.

 

[pause]

 

A quick use of a package to set up the two sets of data files – the read-only full copy and the read-write copy that’s local to each clone. Then we open resetlogs and we’re done.

 

[pause]

 

So that’s how we go about it. To begin with, I need to configure NFS. This is another example of how the line between database administration and system administration is getting blurred in the Oracle product set nowadays – you could argue a lot about who’s meant to be doing this.

 

I’m working on Oracle Enterprise Linux here, by the way. Let’s just do a basic check: is NFS actually running?

 

[pause]

 

And yes, my daemon is running. Now, to configure NFS – first off, I need to create the directory that I shall share out: create a directory clonedb/clone1.

 

Then I need to export that directory. Go to my exports file.

 

[pause]

 

And I’ve set a line up there already. I’m going to export the directory /u01/nfs_shares/clonedb. That’s what I’m going to export, and there are a bunch of options that need to be specified when configuring NFS for use by an Oracle environment – certain options that must be set. Nothing special, but they have to be right.

 

[pause]

 

So having set that up, do we actually have it working? Well, let’s try: exportfs -a, then exportfs, and there we go. I’m now exporting that directory to the entire world. Not the best security, but don’t worry about that for now.
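As a sketch of that configuration – the export options shown here are typical ones for this purpose and are an assumption, not a read-out of the demo machine’s file:

# /etc/exports
/u01/nfs_shares/clonedb   *(rw,sync,no_root_squash,insecure)

# publish the export, then list what is being shared
exportfs -a
exportfs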

 

Then I need to create the directory that I’m going to mount things on. So I shall create another directory which will be

 

[pause]

 

on the same machine. Actually, I need to create one more directory first – right, we’re ready to do that. I’m exporting /u01/nfs_shares/clonedb and I’m going to mount it on /u01/nfs_mount/clonedb. So my NFS traffic never actually leaves the system.

 

[pause]

 

Give it a mount command – for those of you who are not familiar with this, mount type nfs, and again -o with a string of options. There’s no choice about the options, or very little choice: they’re well documented and they are required for this to function.

 

Let’s look at /etc/fstab. I’ll use the mount entry I’ve already got there – I’ve already configured this in my fstab file rather than typing it all in dynamically. What I’m doing here is taking this export and mounting it on that path there. So I’ll do it through the fstab file instead: mount -a, df -h, and there we are.
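A sketch of such an fstab entry – the mount options listed are the commonly documented ones for Oracle data files over NFS on Linux, and are an assumption rather than a transcript of the demo’s file:

# /etc/fstab
localhost:/u01/nfs_shares/clonedb  /u01/nfs_mount/clonedb  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0

# mount everything listed in fstab, then confirm
mount -a
df -h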

 

I’ve now mounted the NFS directory. So what have I done so far? Not very much, except making a couple of typing errors: I have exported one directory and mounted it on another. Just to review – exportfs: I’m exporting /u01/nfs_shares/clonedb, and I’m mounting it on /u01/nfs_mount/clonedb.

 

[pause]

 

I’ll just make some ownership changes to make sure that Oracle has permission to see everything.

 

[pause]

 

Now that’s NFS configured at the operating system level. This server is now exporting and mounting a file system.

 

Moving on then to the Oracle side – the configuration within the Oracle environment. First, there’s a small configuration file that isn’t essential but is generally considered best practice: in your Oracle home’s dbs directory we create a file called oranfstab. Strictly speaking it’s not necessary – it is optional – but it is considered best practice.

 

Within this file you specify a list of all the NFS servers you’re likely to use – in this case, looping back to my local machine – and the path to get to the NFS server, the path from your NFS client. You can see I’m using loopback addresses, so I’m not really using NFS at all; it simply gives me access to the driver.

 

Then the mount we’re going to use, which reflects the mount already made at the operating system level. This file is optional, but in a complex environment where you have multipathing to your servers, using these directives allows you to isolate the NFS traffic onto a certain [6:30 inaudible], rather than having it interfere with, and be interfered with by, other traffic in the environment.
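A minimal sketch of such an oranfstab, matching the loopback setup being described – the server name is an arbitrary label, and the addresses and paths are this demo’s assumptions:

# $ORACLE_HOME/dbs/oranfstab
server: localnfs
local:  127.0.0.1
path:   127.0.0.1
export: /u01/nfs_shares/clonedb  mount: /u01/nfs_mount/clonedb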

 

[pause]

 

Having done that, we need to enable the use of the Direct NFS ODM driver. That is done by copying it into place in your Oracle home’s lib directory.

 

[pause]

 

From release 11 onwards you will find two drivers for getting to disk systems. One is called libodm12.so (or libodm11.so) – the shared object library for reading and writing files on conventional storage. And then there’s libnfsodm12.so (or, under 11, libnfsodm11.so) – that’s the Direct NFS driver. All you do is copy one over the other: copy the NFS driver over the standard ODM driver. It’s exactly the same routine under Windows, except they’re called DLLs.
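A sketch of that swap on Linux for a 12c home, keeping a backup of the original first (filenames as named above; under 11g the names end in 11 rather than 12):

cd $ORACLE_HOME/lib
cp libodm12.so libodm12.so.orig      # keep a copy of the standard ODM library
cp libnfsodm12.so libodm12.so        # enable the Direct NFS ODM driver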

 

From then on we are Direct-NFS-enabled, and whenever we start an instance, if you look at the alert log you’ll see a message at startup stating that we are using the NFS driver – version 3 nowadays, or version 2 before.

 

[pause]

 

Let’s configure it then. What do we do next? Having NFS running, I now need to start the clone process. It begins with a backup. So I’ll create a directory for my backup, /u01/backups. My source database is called orclz. I create a directory for the backup, connect with RMAN – because I’m going to do it with RMAN – and make a backup of the files.

There’s no reason why this shouldn’t be a hot backup, by the way – hot or cold makes no difference – but what it must be is a backup as copy. Backup as copy database, and I’ll send the backup to the directory I just created using a standard format string.

 

So there’s my backup: I’ve backed up the entire database as a copy. You could speed this up with parallelism of course, if you have the license, and the files will be generated over there. That’s going to take a while, so while that’s going on perhaps I can do a bit more work in the background, making sure I’m connected to the database.
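A sketch of that one-off backup – the format string and directory are this demo’s assumptions:

rman target /
RMAN> backup as copy database format '/u01/backups/%U';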

 

[pause]

 

I need a copy.

 

>> Dave:  [9:09 inaudible]

 

>> John:  Thank you. I need a parameter file. I’ll get there eventually.

 

[pause]

 

I need a parameter file, which is simply copied from the live system: create pfile, give it a name, from spfile. That parameter file will then be edited for creating every clone database. So it’s a one-off copy of the data files and a one-off copy of the parameter file, and from then on we generate the clones based on that parameter file and on the image copy, which will be finished in a couple of minutes.
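As a one-line sketch of that step, run on the source database (the filename is an assumption for illustration):

SQL> create pfile='/u01/backups/initorclz.ora' from spfile;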

 

[pause]

 

The copy will have gone to this directory here.

 

[pause]

 

And there it is. I’m going to remove the temporary backup of the spfile that RMAN made automatically, and the backup of the control file that RMAN made for me. I need to get rid of them because the clone, of course, will have its own parameter file and its own control file, but will be using these image copies of the data files.

 

So where is my clone going to be created? I’ll give it a directory to create it in.

 

[pause]

 

It already exists. Okay, no problem. So I’ve got a directory there waiting for me.

 

[pause]

 

Now, the next step: we set four environment variables. These variables are quite nice because they feed into a script that Oracle provides with 12c – with 11g we had to write the scripts ourselves. First of the variables, MASTER_COPY_DIR: where is my master backup? That’s the backup created in my case with RMAN, but it could be a split mirror – the copy I’m going to use repeatedly.

 

Where am I going to create my database? I’m going to create it in u01/nfs_mount/clonedb/clone1.

 

Name of the database? My clone database will be called clone1. SID of the database will be clone1.
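A sketch of those four variables as they would be set in the shell – the names are the ones used by Oracle’s clonedb.pl, and the paths are this demo’s:

export MASTER_COPY_DIR=/u01/backups
export CLONE_FILE_CREATE_DEST=/u01/nfs_mount/clonedb/clone1
export CLONEDB_NAME=clone1
export ORACLE_SID=clone1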

 

[pause]

 

Then I need to generate the control file and the parameter file. To do that, Oracle provides a script. I’ll run the script and then we’ll walk through it:

perl $ORACLE_HOME/rdbms/install/clonedb.pl

 

I pointed it towards the backup of the parameter file that I just created. What the script has done is read that file, make a couple of edits to it, and write out a brand new parameter file. So where is that parameter file and what’s in it? We’ll take a quick look.
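A sketch of that invocation – the argument filenames follow the names used later in the demo, and the exact paths are assumptions; the script reads the four environment variables set just above:

perl $ORACLE_HOME/rdbms/install/clonedb.pl /u01/backups/initorclz.ora crtdb.sql rename.sql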

 

[pause]

 

It will have been created in nfs_mount/clonedb/clone1. That’s the file that was generated, and if we look at it, there are just a very few changes from the original. Two critical changes to the parameters: the db_name will be clone1, and then we see clonedb=true. You’ll want to look out for that parameter – I looked it up for you.

 

The clonedb parameter is set to true or false; it is set on a Direct NFS Client CloneDB database, and when it is set, the CloneDB database uses the database backup as its backing store. So that parameter needs to be set.

 

[pause]

 

What else did the script do? Apart from that, the script created crtdb.sql and rename.sql – those were generated by the perl script I ran. If you have a quick look at crtdb.sql, you’ll see it does a startup nomount off that parameter file, and then a create controlfile command for a database called clone1, pointing towards the backup.

 

I shall run that now, make sure I’m connected to the right instance name, clone1. Yes. So sqlplus / as sysdba.

 

[pause]

 

And run that first script.

 

[pause]

 

Startup nomount – that was done; control file created. That was reasonably painless: it just creates the control file pointing to the data files in the backup. So then we move on to the next script, rename.sql, generated by the perl program. What rename.sql does is run a series of calls to dbms_dnfs.clonedb_renamefile.

 

[pause]

 

You’ll want to look that up as well, and here it is. It’s a very simple package: it contains just the clonedb_renamefile procedure, and that’s all there is to it.

 

What it does is rename the data files that were pointing to our backup to the actual filenames of our clone database. Two arguments: the source file (srcfile) is the data file in the backup; the destination file (destfile) must point to an NFS volume, but it can be local – it doesn’t have to be on a remote NFS server – and that’s where the clone’s files will be created. So we’ll try to run that rename.
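A sketch of one such call, as rename.sql would contain for each data file – the paths follow this demo’s directories and are illustrative:

begin
  dbms_dnfs.clonedb_renamefile(
    srcfile  => '/u01/backups/example01.dbf',
    destfile => '/u01/nfs_mount/clonedb/clone1/example01.dbf');
end;
/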

 

[pause]

 

That would help. @rename.

 

[pause]

 

It’s done the renaming. Open resetlogs, and when that goes through, that will be it – the clone will have been created.

 

[pause]

 

There we go. Now, select open_mode from v$database.

 

[pause]

 

It’s open read-write – an open, read-write database. If we select name from v$datafile, we see the data files are pointing to the NFS-mounted directory.

 

That was a clone that took just less than a minute, wasn’t it? But let’s see what’s actually happening there, because it is quite interesting. First, we look at the original backup. If I go to my backup of orclz, we see the data files. Let’s just use a different version of ls: ls -lsh.

 

These data files – the example tablespace data file is 324 megabytes, and note the s column in there showing the actual size. It is indeed 324 megabytes, as we would expect. So in the backup, the apparent size of each file and the actual size match, and the entire backup is 2.1 gigabytes – the same size as the source database.

 

But if we look at what’s happening in the cloned environment – I go to u01/nfs_mount/clonedb/clone1, ls -lsh – there are the files, and on the face of it they are the same size: 324 megabytes, same as the source; 771 megabytes, same as the source. But look at this here.

 

[pause]

 

That file is not 324 megabytes, it’s 16 kilobytes, and the entire clone database is occupying just 210 megabytes – most of which is the online logs. So my entire database is taking up just a few K. What happens when I do some work? If I, for example, create a table – create table t1 as select star from all_users – if I actually do some work in this absolutely normal, open read-write database, what has happened? I rerun this command and we see that whereas the 324 megabyte file was occupying 16K, it is now occupying 32K. So clearly we’ve had to copy out two blocks to represent the change that we actually made.

 

[pause]

 

That shows you how space-economical these clone databases are: a 2 gigabyte database is in fact occupying 210 megabytes.

 


Demo Creating Additional Clones (In 2 Minutes!)

Lightning Fast Cloning

 

Session 8 – Demo Creating Additional Clones (In 2 Min.)

 

[music]

 

>> John:  Where it gets really good is the ability to create multiple clones running off the same backup – multiple clones off the same database. And this should show us how fast it really can be.

 

[pause]

 

What I shall try to do is create a second clone. I really should put a stopwatch on it and see how long it actually takes. First off, create a directory in which the clone will live: clonedb/clone2. Then I’ll set those four critical environment variables – MASTER_COPY_DIR, pointing to the same place as before; CLONE_FILE_CREATE_DEST, pointing to my new directory; CLONEDB_NAME – that looks like a typing error in there – CLONEDB_NAME=clone2.

 

[pause]

 

ORACLE_SID=clone2. I’ve just set the variables for creating a second clone. Then I run the magic perl script that will generate the parameter file and the clone creation scripts: perl clonedb.pl, pointing towards the existing long-term storage copy of the parameter file, and that will regenerate crtdb.sql and rename.sql. So run them again.

 

[pause]

 

sqlplus / as sysdba

 

[pause]

 

And create a second clone.

 

[pause]

 

I’ve run out of memory on this machine. To start this clone I’ll have to shut down one of the other instances first.

[pause]

 

Terminate that. That’s not a problem with cloning; it’s a problem with the capacity of this machine. So I’ll kill off my original database instance and then we’ll try again.

 

[pause]

 

Now that’s looking a bit better.

 

[pause]

 

Control file created, and then we run the script that renames the data files. Done. Open resetlogs. And at that point, in my clone2 directory, there are the copies – ls -lsh – which of course are taking a minimal amount of disk space. And that’s it: clone2 is created.

 

[pause]

 

Select star from all_users. I hope you’ll agree with me that being able to clone like that is remarkable – and it could literally have been a multi-terabyte database. In effect, I cloned a multi-terabyte database in probably about three or four minutes.

 

>> Dave:  Two minutes and 11 seconds.

 

>> John:  Thank you, Dave – and that includes the fact that I had to terminate another instance halfway through. Okay: two minutes to clone a database. That’s not bad going, and the space used is only a few kilobytes per file.

 


Review Technique and Limitations

Lightning Fast Cloning

 

Session 9 – Review Technique and Limitations

 

>> John:  We configure NFS at the operating system level – and I’m not really using networking; I’m doing this over loopback addresses. A backup as image copy; create a parameter file and control file – we’ve now got scripts available to do that, which was harder with 11g, I can assure you. But we can do it for you with 11g: we can in effect write the clonedb script for you.

 

[pause]

 

Rename the files using that package, open resetlogs, done. Of course, you have to monitor the space on your drives, because as time goes by your clones are going to take up more space. It will never be what you’d guess when you do a simple ls -l or ls -lh – UNIX is lying to us there. These are created as what are called sparse files.

 

Only if you look at the blocks actually allocated – ls -ls, for instance – do you see the real space being occupied. So never forget that the clones are going to grow in the background, and you may well hit file-system-full problems that you are not expecting – monitor the space usage. You can have as many clones as you wish, all running off the same backup.
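A sketch of that check on a clone’s data files – apparent size versus allocated blocks (the directory is this demo’s clone1 location):

cd /u01/nfs_mount/clonedb/clone1
ls -lh      # apparent file sizes - looks like a full-size database
ls -lsh     # first column shows the blocks actually allocated
du -sh .    # real space consumed by the whole clone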

 

[pause]

 

Lastly, just a few limitations. 11.2.0.2 is when this came in, and with 11.2.0.2, I have to say, it was clunky – it was manual. The Direct NFS ODM library must be enabled; simply copy it in. I do wonder why it isn’t enabled by default – I can see no downside to using the NFS library. You don’t have to have your files on an NFS device: you can have files on local devices and the NFS driver will still function, no problem at all. The NFS library can read both local storage and NFS storage, whereas the standard ODM library cannot read NFS devices.

 

All your clones must be able to see the backup. The backup, by the way, does not have to be on NFS – it can be on any form of storage that they can all access, except ASM. Clones can run on different machines, or they can run on the same machine as the source, as mine do. But if you damage that one backup, all the clones will be broken: it becomes a single point of failure for all your clones, because there is only one master copy of the data – the data private to each clone is only the changed blocks.

 

[pause]

 

One point at the bottom here that I do want to highlight: performance tuning. Tuning SQL is no problem at all – the clone database is perfect for tuning SQL. You can run the statements, get your execution plans out, do everything you want for tuning SQL on the clone. But actually benchmarking a workload would not be a fair test, because there could be many clones hitting the original copy of the data.

 

But for tuning SQL, not an issue. If you have, say, the Real Application Testing option, it would not be fair to run Database Replay against a clone created in this fashion, whereas SQL Performance Analyzer will be no problem at all.

 

Copyright SkillBuilders.com 2017
