Transcript


1. Introduction and Data Guard Quick Review (Revision)

 

[music]

 

>> John:  What I want to talk about today is what Data Guard can do for you in addition to the standard functionality of providing disaster recovery facilities. What this is about is gaining more value from your DR systems than just DR.

 

[pause]

 

I'll never forget a discussion – if you can call it a discussion rather than an argument – that I had almost the first time I set up Data Guard. This was back with 9i.

 

[pause]

 

The client I was working for had won a contract to run a call center for a multinational company. One condition of the contract was that the database should be protected by Data Guard, and I had been subcontracted to set it up for them.

 

[pause]

 

The person managing this contract kept coming up with comments like, “It’s ridiculous that I have to spend half a million dollars on that box that’s just sitting there in the corner doing nothing.”

 

I tried to explain to him that it wasn’t doing nothing. It was the reason he had a job. But he wouldn’t accept this.

 

[pause]

 

To this day I come across this attitude quite frequently, normally from financial people. They find it hard to accept that the Data Guard system is actually worth having purely for disaster recovery, for giving you zero data loss and zero downtime. Well, it won’t actually give you zero downtime; there will be a short outage during switchover. But it certainly is zero data loss.

 

[pause]

 

What I want to talk about today is facilities that will help in such situations, introducing ways in which you can extract business value from your standby databases beyond the disaster recovery aspects. This may make it easier to motivate the introduction of Data Guard. It may make it easier to motivate the upgrade from Standard Edition to Enterprise Edition. But one way or another, this is about ways of extracting more value from your Data Guard environment in addition to the high availability facilities.

 

What we’re going to do is open the database read-only. We’re going to open the database read-write. At the end, we’ll even open the database in what we call read-mostly mode, which is a particularly nice way of bringing your standbys into use.

 

[pause]

 

But throughout all this, I have to emphasize – don’t ever forget – the core reason for Data Guard is protecting against data loss, protecting against downtime. And everything else should be viewed as a bonus. But that bonus can be huge. 

 

What you can get out of these extra facilities are techniques for improving performance of the primary by offloading work from it, improving the performance of queries by running them against a standby that is, to a certain extent, optimized for the purpose, and perhaps even reducing the need for Oracle licenses, because one system can serve multiple purposes: it can protect your data while simultaneously running applications.

 

As we go through this of course, I’ll be doing demonstrations and so on. Those of you who have been with us before will know I like nothing better than [03:24 inaudible] on my face while doing live demonstrations, but they really are important, because anyone can show you a set of PowerPoint slides. I hope to be able to demonstrate that we really can do what we say we’re doing.

 

[pause]

 

Before I move on to the facilities that I want to concentrate on, I do want to spend just a few minutes on what is supposed to be revision: how Data Guard actually works, the functioning of Data Guard in normal or standard environments.

 

[pause]

 

This is what I might call assumed knowledge, so I’m going to go over it very quickly. But if any of you would like a bit more information, send in a question and, assuming there’s time, we’ll cover it.

 

[pause]

 

How does Data Guard function?

 

[pause]

 

When you enable Data Guard, you have two or more databases and the environment I’ve set up here – I’ve actually got three databases. I’ve got one database which I call London and if we look at the London database, you’ll see that London is my primary database. I’ve configured two others. 

 

Yes?

 

[pause]

 

>> Dave:  We’re not seeing you. 

 

>> John:  I’m sorry about that. 

 

>> Dave:  We just see your slide.

 

>> John:  In that case, I’ll rebroadcast. 

 

[pause]

 

Is that better? I’m sorry about that. 

 

[pause]

 

I’ve got three databases here. My London database is my primary. My Berlin database is one physical standby. I created the third one, ready for when I break one of the existing ones which is Madrid. That too is a physical standby. So I’ve got one primary propagating changes to physical standbys.

 

[pause]

 

Redo on the primary is generated absolutely as normal. There’s no change to redo generation at all when you enable Data Guard. Your end users’ sessions write change vectors to the log buffer, and the log writer flushes those change vectors to the online logs in, near as dammit, real time. The moment you say commit, of course, it is real time.

 

What does change on the primary is the redo transport services, which may be synchronous or asynchronous. Those transport services are configured with the log_archive_dest_n parameters. Log_archive_dest_1 I have not set; by default, that will be my disk-based destination for the local archives.

 

Log_archive_dest_2 I’ve configured to send the redo to the Berlin database, and log_archive_dest_3 I’ve configured to send redo to the Madrid database. How is that redo actually being transmitted?
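As a sketch of that configuration (the service names berlin and madrid are assumptions based on the database names used in this demonstration, and ASYNC is just one choice of transport mode):

```sql
-- On the primary. log_archive_dest_1 is left unset: local archiving
-- defaults to the disk-based destination (the fast recovery area).
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=berlin ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=berlin';
ALTER SYSTEM SET log_archive_dest_3 =
  'SERVICE=madrid ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=madrid';
```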

 

[pause]

 

We see I have additional processes, log writer network slave (LNS) processes, and they are writing the data to the remote destinations. I could join to another table, or select another column or two, to find out which destination is which. But two of them are writing because I’m writing to two databases. Right now I am transmitting block 54248 of logfile sequence 29. Time goes on, and already there’s a small advance. We can see the data being transmitted in real time.
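The query behind this isn’t shown on screen, but a typical way to watch the transport processes on the primary is something along these lines:

```sql
-- On the primary: which processes are shipping redo,
-- and which log sequence and block they have reached.
SELECT process, status, sequence#, block#
FROM   v$managed_standby
WHERE  process LIKE 'LNS%';
```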

 

[pause]

 

So: redo generation is as normal, and on top of that we have the transport services.

 

Then we have the apply services. On the standby databases, we have the Berlin standby for example. If we look at what’s happening here

 

[pause]

 

We see that I have a process, the MRP, the managed recovery process, applying redo. It’s extracting the redo received, and right now it’s applying block 54291 in sequence 29. It has already advanced from there; it’s applying continuously.

 

The RFS (remote file server) is the process that receives the redo. If I do some work here – anything at all really, “create table t1 as select * from all_users” – I should kick off some reaction here.
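A similar query on the standby (again a sketch, since the exact query used in the demonstration isn’t shown) distinguishes the receiving processes from the applying one:

```sql
-- On the standby: RFS rows show redo being received,
-- the MRP0 row shows redo being applied.
SELECT process, status, sequence#, block#
FROM   v$managed_standby
WHERE  process IN ('RFS', 'MRP0');
```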

 

[pause]

 

It has now received that work; it’s gone up to block 54289 already. The data has already come across and been applied.

 

[pause]

 

The apply – I was using the managed recovery process, which is for physical standbys. Redo apply is through recovery; it is quite simply recovery, as in recover database or recover datafile, done at the block level. Purely for completeness, the alternative type of standby database is of course the logical standby, which applies data with Streams technology. Streams technology processes the redo stream and from it reverse engineers equivalent DML statements.

 

I’m not going to talk about that today. That would be too large a topic; we’ll be able to deal with it on another day.

 

[pause]

 

Next slide of revision, the data flow. The log buffer is read on the primary by log writer network slave processes. In my case, two network slaves because I’m directing data to two standbys.

 

The network slave writes across SQL*Net – or Oracle Net, as we should call it nowadays – to the remote file servers. That is a normal logon session. The log writer network slave logs on through a listener to the remote database; the remote database listener spawns a server process, which becomes the RFS process. Usually it spawns several, because you might want to parallelize the reception of redo.

 

The RFS (the remote file server) on the standby receives the redo and writes it to the standby log files. Then the apply services: in my case, MRP will read the standby log files, extract the change vectors and apply them to the datafiles. On a logical standby, Streams processes reverse engineer the change vectors into equivalent DML.

 

Finally, on the standby database, archive processes will generate archive logs from the standby log files as they fill. So we’ll end up with archive logs on the primary – where there’s no real change to the operation at all apart from launching the extra processes – and archive logs will also exist on the standby, generated from the standby log files.

 

[pause]

 

Finally, to conclude the revision of Data Guard – I did promise this would only be a few minutes of revision.

 

MAA – Maximum Availability Architecture

 

The Maximum Availability Architecture is Oracle Corporation’s preferred option for zero data loss and 100% uptime. The zero data loss is guaranteed by Data Guard, and it really is zero data loss under any circumstances if you configure it appropriately.

 

[pause]

 

100% uptime is guaranteed by RAC. That’s being slightly naughty in the definition of uptime, and I always like to warn clients of this. 100% uptime, yes, in terms of the database remaining open. But bear in mind, not necessarily 100% uptime from the user’s point of view. Sessions may freeze for a while. But in principle, Maximum Availability Architecture is one or more RAC databases protected by Data Guard.

 

[pause]

 

For maximum availability, we include automatic failover based on the Data Guard broker. The whole process of failover is automated. It’s very effective.

 

[pause]

 

It’s also very expensive. We would love to set up the Maximum Availability Architecture for you, but it does cost a lot of money in terms of licenses and in terms of hardware, because it’s largely based on redundant systems. Redundant systems sitting idle are what I want to get away from in this seminar. I want to start bringing those Data Guard systems into use.

 

>> Dave:  John, a question in the queue. You mentioned sessions’ user experience might freeze. Can you expand on that from experience? How long might the freeze be?

 

>> John:  There are two points to consider here. If you’re working on RAC, your session will be against one instance. If that one instance fails – that’s what Oracle likes to call a “brownout” – when you lose an instance in a cluster, the entire cluster will freeze.

 

The freeze may be a matter of seconds – single-digit seconds – and during that single-digit-second period (perhaps three seconds), the cluster reconfigures itself. Only after that can sessions fail over to a surviving instance, and sessions already against a surviving instance wake up.

 

[pause]

 

Your session will appear to hang for a few seconds. 

 

[pause]

 

If, however, your cluster’s really big – if it’s 8 instances and they all have 100-gigabyte SGAs – you’re going to notice that freeze. It could be a few minutes. Single-digit minutes, but you’re going to notice it.

 

[pause]

 

The other problem is with failover to Data Guard. A failover to a Data Guard standby database has to be initiated.

 

We might choose to initiate it ourselves. The default for automatic failover is 30 seconds. I happen to think that’s too short, and I would normally want to extend that to one minute or two minutes. The reason for that, by the way, is that we wouldn’t want some minor network problem to happen and, wham, all your databases fail over to [13:22 inaudible]. You would want to delay long enough to ride out short-term transient problems.

 

So you probably would want a couple of minutes’ delay before you initiate a failover. Having initiated the failover, there may well be a few minutes’ delay more before the Data Guard physical standby is activated and sessions can transfer across to it.

 

[pause]

 

So the delay should be single-digit minutes in the worst case.

 

[pause]

 

>> Dave:  Very nice, John. That term “brownout” is somewhat catchy and descriptive. It depends a little bit on configuration, how much money you’ve spent, and who set up your configuration. As John said, we would love to set up a configuration for you.

 

John, another quick question in the queue.

 

[pause]

 

Does Data Guard require a separate license?

 

>> John:  Right.

 

[pause]

 

Data Guard is Enterprise Edition only. You do have to license every machine. So, yes, every machine has to be licensed. But as long as you already have Enterprise Edition licenses, there’s no extra charge.

 

[pause]

 

The alternative that some sites have to work with, of course, is Standard Edition licenses. With Standard Edition licenses – let me go back a few slides.

 

[pause]

 

Standard Edition licenses – there are no transport services, there are no apply services.

 

[pause]

 

So with a Standard Edition environment, you generate redo as normal on the primary database, but then you, the DBA, in conjunction with your system administrators, have to be responsible for transport.

 

So a script-based system, usually using a combination of SQL scripts and shell scripts, has to be designed to transfer archive logs from the primary database to the standby database. And on the standby database side, more scripts, again usually a combination of SQL scripts and shell scripts, have to apply the redo.

 

[pause]

 

There is no extra license. But if you are not using Enterprise Edition licenses, you have to set up transport and apply manually, which is no big deal. We’ve done it.

 

[pause]

 

The transmission would normally be perhaps every 10 minutes, because it’s based on archive log shipping. I would not want to force the generation of an archive log more frequently than, say, every 10 minutes.
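As a minimal sketch of such a scripted arrangement (the 10-minute interval comes from the discussion above; the copy step itself is an OS-level job, so it appears here only as a comment):

```sql
-- Primary, run from a scheduled job every 10 minutes:
-- force a log switch so the current redo is archived.
ALTER SYSTEM ARCHIVE LOG CURRENT;
-- An OS-level script (scp, rsync, or similar) then copies the new
-- archive logs across to the standby host.

-- Standby, run from its own scheduled job once logs have arrived:
-- apply whatever archive logs are available.
RECOVER AUTOMATIC STANDBY DATABASE;
```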

 

[pause]

 

Then the apply is through applying archive logs. That will happen pretty much as soon as they arrive.

 

So there’s no extra license as such, but if you don’t have Enterprise Edition licenses, you have to build scripted solutions for transport and apply.

 

[pause]

 

>> Dave:  Great. Thank you very much, John. 

 

I wanted to put this in the chat as a response to the question: you normally ship every 10 minutes; when would you apply?

 

[pause]

 

>> John:  As soon as the archive logs arrive. What that does mean is that it is not guaranteed zero data loss. In the worst possible case, which would be your primary database being destroyed 9.5 minutes after the forced log switch, and it proving impossible to get that last stretch of redo across, yes, you would lose that 9.5 minutes of work.

 

[music] 

 

Copyright SkillBuilders.com 2017


2. Demonstration: Open the Standby Read Only

 

[music]

 

>> John:  Moving on, we now look at some of the facilities. 

 

Opening the standby database read-only. This has in fact been possible ever since the automation of redo transport and redo apply was first introduced, in release 8i.

 

[pause]

 

Redo is received on the standby database but is not applied. The redo is written by the RFS slaves to the standby logs, and the standby logs are archived into archive logs, but there’s no apply done. So recoverability is not compromised. There’s still no danger of losing lots of data; you’re still receiving it.

 

[pause]

 

The thing is open read-only. What does that mean? It means you can run queries against it. You can offload all your heavyweight queries from the primary to the standby. 

 

[pause]

 

When I say all of them, I mean even queries that require vast amounts of temporary space: those horrendous queries doing hash joins between multi-gigabyte tables will generate many gigabytes of temporary data. You can still write that temporary data.

 

[pause]

 

Provided that you’re using proper temporary tablespaces, which of course you should be, you can run any query, no matter how big, no matter how much temporary data it needs.

 

You can also write audit data, and this is important because with version 11 there’s quite a lot of auditing enabled by default. By default, that auditing goes to the database, writing to database tables. And you can’t write to a database table in a read-only database. But we can configure a switch to direct the audit records to the operating system audit trail instead.
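The redirection described here is controlled by the audit_trail parameter. A sketch (it’s a static parameter, so it only takes effect after an instance restart):

```sql
-- On the standby: send audit records to OS files instead of the
-- (unwritable) database audit table. Static: restart required.
ALTER SYSTEM SET audit_trail = 'OS' SCOPE=SPFILE;
```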

 

[pause]

 

The queries you run – why is this such a good facility? First, you’re taking stress off the primary. I’m sure we’re all DBAs here, and we all know how difficult it is to tune your database for a concurrent workload that includes both high-volume DML and heavyweight queries. Basically, you can’t tune that at all.

 

[pause]

 

Offloading the queries to the physical standby will take a lot of stress off the primary. So your DML should run significantly better, and you can even do a certain amount of tuning on the standby to tune for optimal query performance. The most obvious tuning would be the memory parameters. One could, for example, give the standby a massive PGA aggregate target and a much smaller SGA target, or indeed use automatic memory management, perhaps self-tuning.
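For instance, with purely illustrative values (not recommendations – size these for your own workload):

```sql
-- On the standby: favor sorts and hash joins over the buffer cache.
-- Set in the spfile and restart the standby instance.
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target = 4G SCOPE=SPFILE;
```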

 

[pause]

 

One can adjust many parameters to do with the optimizer, driving it towards certain execution plans that exploit the environment. All that can be done on an open read-only database.

 

How does one do it? It really couldn’t be simpler.

 

[pause]

 

If we look at my physical standby here, the database role is physical standby. And if we select the open mode from v$database, we find it’s in mount mode.
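The checks being shown amount to something like:

```sql
-- On the standby: confirm its role and current open mode.
SELECT database_role, open_mode FROM v$database;
-- At this point: PHYSICAL STANDBY, MOUNTED.
```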

 

How do we actually open it read-only? First off, we have to stop the recovery process – remember, once open, redo will be received but not applied. You have to cancel the recovery: “alter database recover managed standby database cancel.”

 

[pause]

 

That will give us a read consistent version of the database.

 

The command is intelligent enough to know that there might be incomplete transactions; long-running transactions could have been partially received. We will end up with a read-consistent version.

 

Then to open it, all we do is alter database open read-only.
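In full, the sequence is just:

```sql
-- On the standby: stop redo apply, then open read-only.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
```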

 

[pause]

 

It is that simple.

 

[pause]

 

And now, over here, select * from scott.dept – I can run all my heavyweight queries against the standby. But of course it is read-only, so it’s on my primary that I insert into scott.dept.

 

[pause]

 

Values 99, a new department, and it can be for UK. Then I commit it. 

 

[pause]

 

We already got 99. That’s probably because I have an uncommitted transaction kicking around.

 

[pause]

 

That’s better.

 

[pause]

 

Commit. That is not going to appear over here of course because we are not applying the redo at all.

 

[pause]

 

We haven’t got 99 over here; the committed insert isn’t visible.

 

[pause]

 

We can see what’s going on by the way. If we look at the query

 

[pause]

 

We can see we are still receiving redo. The redo is coming across in pretty much real time.

 

[pause]

 

From my primary I am transmitting block 55303.

 

[pause]

 

I’ve already applied it. Sorry. I’ve already received it.

 

We’re receiving all the redo in very nearly real time, but we’re not applying it at all. So what I’ll need to do eventually is alter database close, and then re-enable the managed recovery.
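Returning to normal standby operation is equally brief:

```sql
-- On the standby: close, then restart redo apply in the background.
ALTER DATABASE CLOSE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```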

 

[pause]

 

There’s nothing unusual there; it’s a standard command. Right now it will be launching my recovery process, and it will apply all the redo that has been received.

 

[pause]

 

It’s already caught up.

 

[pause]

 

We set this up on many sites, so that maybe you would open your database read-only first thing in the morning and all day redirect your queries to the standby. At night, close it, apply the redo, and it catches up.

 

[pause]

 

But it gets better. 

 

[pause]

 

Because we have the ability to open the database read-only. 

 

Read-only with apply.

 

What I’ve just demonstrated is a wonderful technique for improving performance. You offload all your heavyweight queries to the standby – and I repeat, without compromising recoverability. The ability to tune the standby to run queries optimally helps a lot. The catch, of course, is that your queries are only as up to date as the time the database was opened.

 

If you want a real-time data warehouse, then you need this: read-only with apply. The real-time data warehouse facility, open read-only with apply, really does now mean that we can start saving money, because previously you would need to license two machines.

 

[pause]

 

You would need one licensed machine to use as a physical standby, applying redo in mount mode. That would give you your fault tolerance. The second machine you’d also have to license, and use as a logical standby which you could open as a real-time data warehouse.

 

By using the open read-only with apply option, the one licensed machine can fulfill both functions. It can run real-time queries and give you zero data loss.

 

[pause]

 

It can provide full zero data loss without requiring a separate machine. That’s real-time queries and zero data loss in one box.

 

[pause]

 

One thing I should emphasize at this point by the way, in these webinars, we try not to cover features that cost money other than of course the cost of paying us to assist you in implementing them. But this feature is an exception.

 

[pause]

 

This is Active Data Guard. Active Data Guard is licensed on top of Enterprise Edition. I justify it by saying that it really is an option that saves you money, because of the ability to reduce your license footprint. You will find with Active Data Guard that you don’t need to license as many machines, because the one machine is both your data warehouse site and your disaster recovery site.

 

[pause]

 

Because the standby database is a real time DSS system.

 

[pause]

 

The way it works

 

[pause]

 

My physical standby is now closed.

 

[pause]

 

Let’s look at it in terms of the view. This is now a mounted database.

 

So we can’t run any queries at all at this point. If for some reason I tried to – of course: database not open.

 

What do we do? We begin by cancelling the managed recovery once again.

 

[pause]

 

We open the database. Alter database open.

 

[pause]

 

Run my queries. Fine. You can see the changes have come across already. But now we start recovery.

 

[pause]

 

Alter database recover managed standby database using current logfile disconnect from session.

 

[pause]

 

That starts the recovery process – and I’m now legally required, of course, to have licensed the Active Data Guard option.
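Put together, opening read-only with apply is (assuming standby redo logs are in place, which real-time apply requires):

```sql
-- On the standby: stop apply, open (a physical standby opens
-- read-only), then restart apply with real-time apply, which
-- reads the standby redo logs as they fill.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```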

 

[pause]

 

We can now see I’m applying logfiles at this precise moment. If we do some DML or open a query, I’m running queries against a database that is indeed applying redo at this precise moment.

 

If I were to delete from scott.dept where deptno=98, of course I cannot run that statement here.

 

[pause]

 

But I can run it on my primary, no problem. And commit.

 

[pause]

 

In very nearly real time, the row’s already gone over here. Your data is coming across.

 

So I’m getting real-time information for my queries. And it gets better. If you look at the last couple of points I have here, there’s a configurable wait time.

 

[pause]

 

If for some reason the redo apply slips behind – which can happen in a Data Guard environment; with a huge burst of activity on the primary, it may well be that the apply services just get a few seconds behind – you can configure lag times.

 

[pause]

 

You can configure lag times quite straightforwardly.

 

[pause]

 

With the command such as this. 

 

[pause]

 

I can’t do it for sys. But if I connect as scott, we can configure a lag time here of 10 seconds. What this means is that before running a query, my server process will check: is the standby really up to date? If the standby is more than 10 seconds behind, I get an appropriate error message. There’s also another command we can run to force a synchronization with the primary.

 

Normally we would set this sort of thing up for you as logon triggers, so that we can guarantee that certain tools you might be running will always see data that’s accurate to within, say, three seconds, or even default to real-time synchronization, so that the query will actually hang until the data is up to date.
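The session-level controls being described look like this (the 10-second value is the one from the demonstration; sys is exempt from the check, which is why it is shown from an ordinary user’s session):

```sql
-- In an ordinary user's session on the Active Data Guard standby:
ALTER SESSION SET STANDBY_MAX_DATA_DELAY = 10;  -- error if >10s behind
ALTER SESSION SYNC WITH PRIMARY;  -- wait until the standby catches up
-- STANDBY_MAX_DATA_DELAY = 0 demands real-time synchronization.
```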

 

[pause]

 

>> Dave:  John, hold on a moment if you would.

 

[pause]

 

I would like to do two things and pass you a few questions.

 

[pause]

 

What’s the impact on database performance?

 

>> John:  On the primary, there should be no performance impact whatsoever, because all the primary is doing is generating redo as normal. There is of course an extra process or two for Data Guard, namely the log writer network slave processes. But that’s just a process or two; there should be no issue with that at all.

 

[pause]

 

>> Dave:  Great. I’m sorry, were you finished with that one, John?

 

>> John:  Yes.

 

>> Dave:  Good. I just want to interrupt just for a moment. This is a little bit about John. If you go to SkillBuilders.com and click on John’s picture, you’ll get to this page. I just want to show you a little bit more about John’s experience and credentials. He’s actually authored several books; you’ll see thumbnails of those books here, which will take you to Amazon if you want to buy them and don’t already have them. But you can see that John has just an incredible breadth of certifications, including Certified Specialist in Implementation of Exadata.

 

[pause]

 

And a Certified Master Oracle Database 11g Administrator.

 

[pause]

 

Hey John, I took a stab. There was a question that came and I took a stab at the answer.

 

[pause]

 

What is the release of the database that you are operating on?

 

>> John:  I am operating right now on release 11.2.0.3. Most of the facilities – open read-only dates back to 8i. Open read-only with apply is most definitely 11g, but any release of 11g. So from 11.1 onwards, we can do everything we’ve covered so far.

 

[pause]

 

>> Dave:  You might have mentioned logical standby somewhere along the way and there was just a quick question on that. Can we export data from a logical standby database?

 

>> John:  A logical standby – now you certainly can. A logical standby – which I hadn’t intended to talk about today – but a logical standby is an absolutely normal database. It just happens to be receiving a stream of changes through the SQL apply mechanism. So no problem with that at all. No issue with that. 

 

[pause]

 

>> Dave:  Great. 

 

[pause]

 

I think you might attack this a little later. But I’ll put it in the queue now in your thought train. Can the standby database be used as a load balancer? If so, what are the factors that we need to consider before setup?

 

[pause]

 

>> John:  What you need to consider there is what work can effectively be executed against the standby, because the standby database, as we’ve seen, is at the moment of course read-only.

 

[pause]

 

Let me check where I am on my slides. Yes, in about three slides’ time, we’ll see how you can intelligently direct some traffic that has DML to a standby. That would allow you, in effect, to use it as a load-balancing mechanism: direct some applications, based on services, to the primary, and direct other applications to the standby.

 

We will return to that later on. Thank you for raising that question. I’ll make sure I talk about it in two slides’ time.

 

[music]

 


3. Demonstration: Open the Standby Read Write

 

[music]

 

>> John:  The next option 

 

[pause]

 

Open the standby read-write. 

 

[pause]

 

And that sounds impossible, doesn’t it? But it is what we call a snapshot standby. The snapshot standby capability is even better as a cost saver. And as a database administrator, I love it. It’s wonderful for DBAs.

 

The idea is you take your physical standby database and you open it read-write. While it’s opened read-write, to end users it appears to be an absolutely normal open database. What do you use it for? You could run queries against it, but perhaps more commonly you’d use it as your test system, your development system, your UAT (user acceptance testing) system, anything you want. 

 

[pause]

 

For example, you could open the standby as a snapshot standby and deploy a whole new application release to it. Do all the testing you want, including using things like AWR reports and database replay – do anything you want in it. But with the absolute certainty that the database is a real, recent clone of production, as up to date as the point at which you opened it read-write.

 

While you’re using it for development and so on, redo is still being received. Redo is still being shipped from the primary by LNS to RFS and written to the standby logs. The redo is being archived, so recoverability is not compromised.

 

[pause]

 

And you can open it in this fashion for a week, maybe. You probably wouldn’t want to do it for a month, but you can open it for a long period. So you open it read-write, do whatever you want to it, and then when you’ve done your testing – tested your application release, done your development work, whatever it may be – you re-synchronize it with the primary.

 

Now the enabling technology, by the way, is flashback database: one has to set up flashback database. The database is flashed back to the point at which it was opened, and then all the backlog of redo – all the redo that’s been received and not applied – gets applied. So there’s absolutely no compromise to recoverability.

[pause]

 

How do you do it? 

 

[pause]

 

Here’s my database. This thing right now is open, isn’t it? So I’ll need to close it at this point: connect / as sysdba, shutdown immediate.

 

[pause]

 

Because it was opened read-only. I will have to shut down. 

 

[pause]

 

But note throughout this – I’m working on the standby at this point. There’s no downtime on the primary at all at this stage. 

 

[pause]

 

And then I’ll start it up in mount mode.

 

[pause]

 

Once it comes up in mount mode, 

 

[pause]

 

I want to make sure it is in fact synchronized with the primary. 

 

[pause]

 

So my primary database… 

 

[pause]

 

I’ll start the manage recovery process. 

 

[pause]

 

In these queries I’m just looking at the [04:16 inaudible] sequence number and the block. I could equally well look at the system change number; that would be another way of proving we’re synchronized.

 

We’re now applying sequence 29, block 5611. We’re very nearly up to date, and right now, I’m pretty sure, we’re current.

 

[pause]

 

Let me just force a log switch to make sure we really are synchronized – and we are. We’re now receiving log number 30.

 

How do I open it read-write? 

 

[pause]

 

What we do is we begin by cancelling the manage recovery. 

 

[pause]

 

Let’s cancel the recovery. Then it’s just one command, and that command is “alter database convert to snapshot standby.”

 

[pause]

 

This process we would normally automate by the way. And if we now look at what’s happening within this database,

 

[pause]

 

and look at the views, you’ll see that it’s in mount mode. Then we alter database open.
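So the whole conversion is:

```sql
-- On the standby: stop apply, convert, then open read-write.
-- Flashback must be possible (a flashback-capable fast recovery
-- area); the conversion records the restore point automatically.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;  -- leaves it mounted
ALTER DATABASE OPEN;
```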

 

[pause]

 

Previously, when I opened the database, of course, it was opening read-only.

 

[pause]

 

This time it’s going to open read-write. 

 

[pause]

 

It’s opened read-write. Oracle is aware of what’s going on, however. If I look at the database role, Oracle knows that it’s theoretically still a standby database. So the redo is still being received; it simply isn’t being applied. And this is an absolutely normal read-write database. I can select * from scott.dept – of course I can. I can delete from scott.emp. I can commit.

 

[pause]

 

I can do anything I want with those tables. As far as users are concerned, this is an absolutely normal read-write database. It will be generating AWR snapshots and ADDM reports; I can tune my SQL, I can run workload tests, I can deploy whole new applications, create new users, drop schemas. I can do anything at all I want to this database. And having done all my work 

 

[pause]

 

I can even drop the table, drop table scott.emp. And that table is gone as far as this database is concerned. 

 

A week later maybe, I’ve completed my test run. So what do I do next? Terminate the instance.

 

[pause]

 

Just abort it. 

 

[pause]

 

There’s no particular need to be nice by using a shutdown immediate, because I’m going to throw away all the work I’ve done. Then mount it: startup mount. 

 

[pause]

 

Note that throughout this whole process of course, the primary is running exactly as normal. There’s no downtime at all on the primary. 

 

Yes, Dave?

 

[pause]

 

>> Dave:  Where are the redo logs created or written when the database is a snapshot standby?

 

>> John:  In this particular case, I’ve configured my databases in the simplest way possible. And if you look at this – sho parameter – I’m going to Berlin, my snapshot standby, at this point. Sho parameter log_archive_dest. 

 

[pause]

 

I set this up in the easiest way possible – in fact, I haven’t set up anything. I’m relying completely on defaults. And if we look at this, there’s not much to show you. 

 

Archive log list, connect/as sysdba. By default, 

 

[pause]

 

they go to the flash recovery area – or fast recovery area, as I should call it. All the time the database was open as a snapshot standby, two streams of redo were being written to that fast recovery area: the redo received from the primary via the standby logs, and the redo generated by the Berlin database itself, because it was an open database. 

 

To lead on from that question: when you open your snapshot standby, remember it is open as a normal read-write database, so it is generating its own redo. It was in fact archiving two streams of redo: the redo received from the London primary and the redo generated by the Berlin snapshot standby. And since I didn’t configure anything other than the defaults, it all goes to the fast recovery area. 
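A sketch of how you might confirm where the two streams land with an all-default setup; in v$archived_log, REGISTRAR = 'RFS' marks redo received from the primary:

```sql
-- Both streams default to the fast recovery area when no destination is set
SHOW PARAMETER log_archive_dest
SHOW PARAMETER db_recovery_file_dest

-- Redo shipped by the primary is registered by RFS;
-- redo the open snapshot standby generated itself is registered by its own archiver
SELECT sequence#, registrar, name
FROM   v$archived_log
ORDER  BY first_time;
```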

 

[pause]

 

On a production system, we’d be a bit cleverer than that, and we would direct the redo from the primary and the redo from the snapshot standby to their destinations in a more intelligent fashion. 

 

[pause]

 

But it was a good question, I should emphasize that. There were two streams of redo being generated while the database was open read-write.

 

[pause]

 

>> Dave:  Great. One more question, John. If we open a physical standby in write mode then can we bring it back to mount mode and receive changes from the primary and apply them?

 

>> John:  That’s exactly what I’m doing right now. I’ll shut it down, start it in mount mode, then one more command. 

 

[pause]

 

The next command: convert to physical standby. This is the conversion back. Quite a lot is going on at this point. We are using the flashback database facility to flash back the database to the exact system change number at which I opened it. 

 

So in effect, backing out all the changes that I did. But the back out of course is done through the flashback database mechanism which is extremely fast and efficient. So we’re reversing the database back to when it was a true physical standby. The proof of that…

 

[pause]

 

After we restart it, you’ll see a dramatic change, because the conversion also rewrites the control file as well as flashing back the database. 

 

[pause]

 

So I’ll terminate the instance and mount it again. 
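The whole round trip back to a physical standby can be sketched like this. We would normally script it; note that converting to a snapshot standby implicitly uses a guaranteed restore point, which is what makes the flashback possible:

```sql
-- Throw away the test workload; flashback will back out the changes
SHUTDOWN ABORT
STARTUP MOUNT

-- Flash back to the SCN at which the standby was converted
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

-- The conversion also rewrites the control file, so restart once more
SHUTDOWN ABORT
STARTUP MOUNT

-- Resume the apply; it now catches up on everything archived in the meantime
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```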

 

[pause]

 

We would of course automate all this in any sort of production configuration. 

 

[pause]

 

I must emphasize that throughout this whole process, my primary stays running, and there is no break, or at most a minimal break, in the transmission and reception of redo. 

 

[pause]

 

I guess there will be a few seconds during this process when redo is not in fact being received. Having remounted, it’s now back to normal physical standby. So what do I do next? 

 

I start the recovery. And at this point, it will be applying all the redo that was received and archived from the primary during the period it was open as a snapshot standby. 

 

[pause]

 

If it had been open as a snapshot standby for a week, it’s going to take a while to catch up. But there is no compromise to recoverability; all the data is there. Note that I dropped the table at that point, and I haven’t opened the database read-only yet. 

 

[pause]

 

So I recover my standby database, then cancel the recovery. 

 

[pause]

 

If I now open the database read-only – alter database open – I no longer have to specify read-only. I did back with 8i, but current releases of Oracle are of course intelligent enough to know that. And we see everything is there. So I have now, in effect, thrown away the week’s worth of work and applied any changes that were done on the primary in the meantime. 
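In outline, the final verification looks like this (a sketch; on a physical standby a plain open defaults to read-only):

```sql
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN;                 -- opens read-only on a physical standby

SELECT open_mode FROM v$database;    -- READ ONLY
SELECT COUNT(*) FROM scott.emp;      -- the table dropped during testing is back
```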

 

This is a truly wonderful capability. What does it do? It reduces your licensed footprint, clearly, because I’ve now got one system being used for all my test and development work while at the same time giving me 100% recoverability in the case of disaster. 

 

From the DBA’s point of view, it’s really productive. It saves me as a DBA the hassle of continually having to clone test and development systems from production. It’s a really powerful capability.

 

[music]

 

Copyright SkillBuilders.com 2017


Demonstration: The Read Mostly Database: Query the Standby, DML to the Primary, Transparently

4. Demonstration: The Read Mostly Database: Query the Standby, DML to the Primary, Transparently

 

[music]

 

>> John:  Now, the last topic. I’ve just got time to deal with it. The last topic I want to look at is what I like to call the read mostly database. 

 

This is a very nice technique for getting even more value out of your physical standby, because remember what I talked about before? When you open your database read-only, you can write only a very limited amount of data, basically temporary data. 

 

[pause]

 

Opening it read-write as a snapshot standby, yes, it’s read-write, but then you’re not applying the redo, so it’s no longer valid as a real-time data warehouse. The read mostly database is a compromise between the two, so it adds even more value, ideally with Active Data Guard. 

 

It would function without an Active Data Guard license, but Active Data Guard is where you’re going to get most of the benefit. And this is where we come to the load balancing question we had earlier. 

 

Say you have two applications. Perhaps one is your standard OLTP application. The other is your query application; maybe you’ve got Business Objects or Discoverer. Business Objects applications, Discoverer applications, they’re read intensive. They run massive queries. But they also do an occasional bit of DML. Not much, but enough to mean you cannot run them against a read-only database in many circumstances. 

 

So what we need to do is redirect the small amount of DML to the primary while running all the heavyweight queries against the standby, in a way that means your application doesn’t even know it’s happening. 

 

[pause]

 

How can I do that? 

 

[pause]

 

I can do it like this: with synonyms and database links, as we see on the slide. So on my primary, which is Madrid, I’ll begin by creating a user. I do need to restart managed recovery here – earlier I cancelled recovery and opened the database, so let me get the apply going again. And I was about to connect to a user I haven’t even created. 

 

[pause]

 

Conn/as sysdba 

 

On my primary, I will create a user. 

 

[pause]

 

I tried to connect to the user again – that’s what you get if you copy and paste to save time. Now I’ll create my user. It’s called readmost, and that’s the schema we’re going to connect through from, for example, Discoverer. 

 

[pause]

 

Then I shall create a database link. And before any of you start screaming and shouting at me: I do know that a public database link with embedded connection credentials is a huge security hole, but it will do purely for demonstration purposes. 

 

Then synonyms, a very simple environment here. I shall create a synonym in the readmost schema for the emp table and I’ll also create a synonym 

 

[pause]

 

for dept. That one goes through the database link. 
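The setup on the primary might be sketched as follows. The link name prodlink and the TNS alias madrid are illustrative, not from the demo, and as noted above, a public link with embedded credentials is for demonstration only:

```sql
-- On the primary (Madrid)
CREATE USER readmost IDENTIFIED BY readmost;
GRANT CREATE SESSION TO readmost;

-- Demo only: a public link with embedded credentials is a security hole
CREATE PUBLIC DATABASE LINK prodlink
  CONNECT TO readmost IDENTIFIED BY readmost
  USING 'madrid';

-- Heavyweight queries on emp resolve locally, so they run on the standby...
CREATE SYNONYM readmost.emp  FOR scott.emp;
-- ...while the occasional DML on dept is silently sent back to the primary
CREATE SYNONYM readmost.dept FOR scott.dept@prodlink;
```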

 

Now, I’m working on the primary, so my user should be propagated to the standby, and both synonyms should be propagated too. So if I go to – let’s say I’ll work on Berlin at this point, 

 

[pause]

 

conn readmost/readmost 

 

[pause]

 

It doesn’t have create session; I need to give it a couple of privileges. 

 

[pause]

 

So what I’ll do is “grant dba to readmost” on the primary, and then connect over here. 

 

[pause]

 

And note, everything comes across in real time, because I do happen to have got this thing set up that way on Berlin. 

 

[pause]

 

I just want to make sure my standby is still a physical standby, and clearly it’s open read-only, so I won’t be able to change anything. What happens now? This is my readmost user, from my read-only application. If he tries, for example, to delete from emp: it’s a read-only database, so that fails. 

 

[pause]

 

But what happens now? He can in fact do this. He can insert into dept values or just do a select first, select * from dept. 

 

Nothing special about that – it’s a read-only database. But what if I delete from dept where deptno=99? One row deleted, even though it’s a read-only database. 
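The effect, from the readmost session connected to the read-only standby, is roughly:

```sql
-- Connected as readmost on the read-only (Active Data Guard) standby
SELECT COUNT(*) FROM emp;            -- heavyweight query, runs locally on the standby

DELETE FROM dept WHERE deptno = 99;  -- transparently executed on the primary via the link
COMMIT;                              -- commits the remote transaction on the primary
```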

 

The row is gone. Whereas of course on the other side, in the actual primary, 

 

[pause]

 

in scott.dept, the row is still there. Transactional consistency is maintained. Now commit my delete. 

 

[pause]

 

And it’s gone. This is what I call a read mostly database. It’s really, really powerful, and my application is very simple. It’s based on two tables: one table, dept, where I can do DML, and another table, emp, where I run the heavyweight queries. And it’s completely transparent – there’s no configuration of the application at all. 

 

I, as DBA, in conjunction with the application people, have to identify which tables are likely to be written to and which ones aren’t. And that gives us the read mostly database. 

 

[pause]

 

We can redirect work transparently to the application, and it would all be based on services, which brings us to the very last point, which I will not have time to talk about in great detail. My apologies for that. 

 

In this environment I’ve been laying out, we need to be absolutely certain that users connect to the correct database. It’s no good having users connect to the primary thinking they’re logged on to the standby. 

 

We can control all that and automate the connectivity through the use of services, startup triggers, and TNS connect strings – the appropriate use of services, plus a mechanism that dynamically adapts to role changes. Because remember, this is fault tolerance: at any stage my primary might fail and I’ll need to activate the standby. At that point the roles change, and we need to redirect sessions appropriately. We can automate that as well. 
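One common way to implement the role-aware connectivity described here is a role-based service started by a startup trigger. A sketch, run as SYS; the service name reporting is illustrative:

```sql
-- Define a service intended only for the physical standby
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'reporting',
                              network_name => 'reporting');
END;
/

-- Start the service only when this database currently has the standby role,
-- so clients using the 'reporting' TNS alias always land on the standby
CREATE OR REPLACE TRIGGER manage_services
AFTER STARTUP ON DATABASE
DECLARE
  v_role VARCHAR2(30);
BEGIN
  SELECT database_role INTO v_role FROM v$database;
  IF v_role = 'PHYSICAL STANDBY' THEN
    DBMS_SERVICE.START_SERVICE('reporting');
  END IF;
END;
/
```

After a role change, the trigger fires on the next startup and the service follows the role automatically.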

 

[pause]

 

And that concludes what I wanted to talk about: techniques for gaining more value from your disaster recovery system, which will perhaps help you motivate the installation of Data Guard, or the move towards a more advanced Data Guard configuration.

 

[music]

 
