Monday, December 22, 2008
One of my colleagues is implementing a RAC solution with two nodes and two application servers. The platform is Oracle Enterprise Linux 4.
Before I joined, the team had decided on OCFS2 as the cluster file system for the database. A logical next step would be to use OCFS2 for sharing the Application Tier file system as well.
However, we recently found out that this Oracle product is still not supported/certified for use with the Application Tier of Oracle E-Business Suite. There are issues with OCFS2 in combination with a Shared Application Tier File System (see my article at the AMIS Technology Weblog). Somehow Oracle doesn't seem to be putting much effort into certifying OCFS2 against the EBS Application Tier. Initially, the reason was that Oracle had its hands full with Release 12 and Fusion. But we are now more than two years past Steven Chan's initial announcement, and still there is no certification, even though customers are requesting it. The only workaround for the OCFS2 issue when setting up a Shared Application Tier File System is the one proposed in my article, and it is this workaround that Oracle Support refers to as well. But keep in mind that this doesn't make it supported, because it is not!
OCFS2 is a cluster file system that is certified against and supported with the Oracle Database server. Why not with the Application Server (EBS)? In my opinion there are many customers willing to use OCFS2, because it is stable, it comes from Oracle and, not least, it is free of charge.
By the way, the issue I found with OCFS2 also applied (at the time) to GPFS (IBM's cluster file system). However, I could not reproduce it on AIX on Power. It might therefore be something related to Linux (whether OEL or RHEL) and not to OCFS2 at all.
One thing is certain: I would like to use OCFS2 at my customers' sites, and I have enough customers who would be willing to use it.
Until certification happens, however, my recommendation is to stay away from OCFS2 for the EBS (shared) Application Tier; use NFS or another supported alternative instead.
Tuesday, December 16, 2008
One of the nice features of Oracle's beta exams is that anyone who takes one is scored against the actual production exam, which results in a valid certificate for the production exam when passed.
Like anyone else in my position, I am very eager to know whether I passed this exam. If I did, I will be among the first to obtain OCP status for E-Business Suite Release 12, as I already hold the 11i OCP certificate.
Today I was checking the Prometric website to see whether the test had been scored. Unfortunately, the status was still "Tested"... To get an idea of when to expect the result, there is Oracle's statement that it will take approximately 8 to 10 weeks after closure of the beta period, but you can also check with Prometric when the actual production exam becomes available. So I looked up 1Z0-238 (beta exams always start with 1Z1, production exams with 1Z0) and started the scheduling engine for this exam. This way I found out that the first available date for the exam is February 2nd (strangely enough, the exam can only be scheduled until February 10?).
Anyway, I guess my results will be made available somewhere in January... I need some more patience...
UPDATE, January 6, 2009:
I was checking again today and saw that the first exam-date for this exam is now scheduled at January 20, 2009.
Thursday, November 13, 2008
Sometimes you come across beautiful things in life that you wish you had known about before, because they were there all the time, waiting for that moment to be discovered – by you.
Recently I discovered one of these things. Shamefully, I must confess that this brilliant RMAN feature has been around for some time – since 10g already – but I had never been lucky enough to explore it, until recently.
I was working for a customer who has a number of very large databases, ranging from 12 to 25 terabytes. These databases are all heavily used by all kinds of activities; each of them produces over 150 gigabytes of redo per day. Making a full online backup (to disk) of these databases takes more than a week – per database…
Can you imagine what it takes to restore and recover such databases? The customer estimates around two weeks. This is not an option, because these databases hold data that is very frequently consulted and processed.
So, what to do in such a case? Well, RMAN has a nice feature called backup as copy. This makes an image copy of your database and puts it into another location, defaulting to the Flash Recovery Area (assuming you have set up your parameters DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE). I knew about this feature because I have migrated lots of databases from file system to ASM, for example.
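In its basic form the command looks like this (a sketch; the FORMAT clause is optional and the '+DATA' disk group is just an example – without a FORMAT clause the copy goes to the Flash Recovery Area):

```
RMAN> BACKUP AS COPY DATABASE;
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
```

The second variant is the one typically used when migrating a database from file system to ASM.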
What I didn’t know is that you can use incremental backups to recover this copy of the database. A jewel in the RMAN feature portfolio!
To make a full copy backup of your database:
allocate channel for maintenance type disk;
configure controlfile autobackup on;
configure default device type to disk;
RECOVER COPY OF DATABASE WITH TAG 'IMG_COPY' UNTIL TIME 'SYSDATE - 1';
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'IMG_COPY' DATABASE PLUS ARCHIVELOG DELETE INPUT;
copy current controlfile to '/path_to
OK, let’s break this down into small pieces:
RECOVER COPY OF DATABASE WITH TAG 'IMG_COPY' UNTIL TIME 'SYSDATE - 1';
This command will not do anything until there is an incremental backup with the tag IMG_COPY.
When such a backup exists, it will be applied to the copy of the database,
rolling it forward (see the next command).
BACKUP INCREMENTAL LEVEL 1 FOR
RECOVER OF COPY WITH TAG 'IMG_COPY' DATABASE
PLUS ARCHIVELOG DELETE INPUT;
This command creates a level 1 incremental backup if a level 0 is available. During the first run of the script there is no level 0 yet, so one is created (which is essentially equal to a BACKUP AS COPY DATABASE). From the second run onwards, a level 1 incremental backup is created, which the RECOVER command discussed above then applies to the image copy.
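The two commands can be wrapped in a small daily script. Below is a minimal sketch (the file locations are just examples, and it assumes rman is on the PATH with a working target connection; the actual rman call is left commented out here):

```shell
#!/bin/sh
# Daily wrapper sketch for the incrementally updated image copy.
# Assumes ORACLE_HOME/ORACLE_SID are set; tag and one-day lag
# match the commands discussed above.
CMDFILE=/tmp/img_copy_daily.rman

# Write the RMAN command file (quoted heredoc: no shell expansion).
cat > "$CMDFILE" <<'EOF'
RECOVER COPY OF DATABASE WITH TAG 'IMG_COPY' UNTIL TIME 'SYSDATE - 1';
BACKUP INCREMENTAL LEVEL 1
  FOR RECOVER OF COPY WITH TAG 'IMG_COPY'
  DATABASE PLUS ARCHIVELOG DELETE INPUT;
EOF

# Run it against the target database (commented out in this sketch):
# rman target / cmdfile "$CMDFILE" log /tmp/img_copy_daily.log
```

Scheduled once a day (e.g. from cron), this keeps the image copy rolled forward to at most one day behind the live database.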
Basically, you are creating an image backup of your database in the Flash Recovery Area. This image backup can be used for restore and media recovery. Since this is faster than restoring from conventional backups, it is a solution worth looking at when you demand quick recovery. Nothing spectacular so far; this solution has been discussed a number of times on the Internet already.
In my customer's situation, however, we are talking about something different. The database is so large that taking the initial backup takes more than a week, let alone a restore – and we are not even talking about recovery yet. So we chose a different approach to the 'recovery' of the database in case it becomes damaged.
RMAN has this tiny little command that does it all in this case:
RMAN> SWITCH DATABASE TO COPY;
This only works when you have your controlfile available, but hey, that is what we take care of twice in the backup script (the autobackup and the explicit copy), don't we? And not for nothing:
copy current controlfile to '/
So, after the backup has been taken, a copy of the controlfile has been made to a separate location, eliminating the need to restore it from autobackup with all of its consequences. If somehow we have lost this image backup of the controlfile as well, we can always fall back to the autobackup.
When we want to recover our lost database in the least amount of time, we assure the availability of the controlfile, then switch the database to the copy, which in effect updates the controlfile with the new location of all the datafiles. After this has been completed, we can open the database (with or without resetlogs, depending on incomplete or complete recovery), and off we go.
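Put together, the switch-based 'restore' of a lost database comes down to something like the following sketch (the controlfile path is a placeholder for the copy made by the backup script; open with RESETLOGS only after incomplete recovery):

```
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/backup/ctl/controlfile_copy.ctl';
RMAN> ALTER DATABASE MOUNT;
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

No datafile restore is involved at all; that is where the time is won.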
The database is available now, but you will be left with a database that no longer has a valid backup. We are using the copy in the FRA as the database now, remember? No panic: redefine your FRA (to e.g. the location of the original database) and start your RMAN scripts to create a new copy. Optionally, if required, you can relocate your datafiles according to your needs.
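Redefining the FRA comes down to pointing the recovery file destination elsewhere, for example (the path and size below are just placeholders):

```
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 25000G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/appora/oradata/fra' SCOPE=BOTH;
```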
The downside of this solution is that you need twice the amount of disk space for your database, because you are holding a copy of it on disk. But compare that cost against the cost of two weeks of unavailability of the database. I think in most cases (at current storage prices) the costs will be far less than the benefits.
Wednesday, November 12, 2008
I started googling around and found out there is a Perl script, delivered with Oracle 10g, to clone the ORACLE_HOME. It is located in $ORACLE_HOME/clone/bin (where have I seen this kind of directory before...?). It turns out to be really easy to clone an ORACLE_HOME.
Here are the steps, including the expected output:
1. Copy the ORACLE_HOME to the destination location (whether on the same or a remote server). If you need to rename the directory where the current installation is located, you can do that as well. If necessary, copy the central inventory too.
2. Check the existence of the ORACLE_HOME in the Central Inventory
$ pwd
/appora/oraInventory/ContentsXML
$ grep "HOME NAME" *
inventory.xml:
3. (Conditional) Detach the ORACLE_HOME using runInstaller from the copied ORACLE_HOME
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at
The inventory is located at
'DetachHome' was successful.
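For reference, the detach itself is done with the Oracle Universal Installer along these lines (a sketch; the path and home name are examples for this environment):

```
$ cd /appora/product/10.2.0/db_copy/oui/bin
$ ./runInstaller -silent -detachHome ORACLE_HOME="/appora/product/10.2.0/db" ORACLE_HOME_NAME="OraDb10g_home1"
```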
4. Check the inventory to see whether the ORACLE_HOME was removed (verify that the REMOVED="T" attribute has been added to the 'HOME NAME' tag)
$ cd /appora/oraInventory/ContentsXML
$ grep "HOME NAME" *
5. Reregister the ORACLE_HOME
Make sure nothing owned by the current user is running from this home, because the procedure performs a relink.
$ cd $ORACLE_HOME/clone/bin
$ perl clone.pl ORACLE_HOME="/appora/product/10.2.0/db" ORACLE_HOME_NAME="OraDb10g_home1"
./runInstaller -silent -clone -waitForCompletion "ORACLE_HOME=/appora/product/10.2.0/db"
"ORACLE_HOME_NAME=OraDb10g_home1" -noConfig -nowait
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-11-07_03-21-26PM.
Please wait ...Oracle Universal Installer, Version 10.2.0.3.0 Production
Copyright (C) 1999, 2006, Oracle. All rights reserved.
You can find a log of this install session at:
Installation in progress (Fri Nov 07 15:21:37 CET 2008)
Linking in progress (Fri Nov 07 15:21:49 CET 2008)
Setup in progress (Fri Nov 07 15:22:56 CET 2008)
End of install phases.(Fri Nov 07 15:23:03 CET 2008)
The following configuration scripts need to be executed as the "root" user.
#Root script to run
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
The cloning of OraDb10g_home1 was successful.
Please check '/appora/oraInventory/logs/cloneActions2008-11-07_03-21-26PM.log' for more details.
6. Run root.sh
$ su -
Running Oracle10 root.sh script...
The following environment variables are set as:
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
7. The clone is ready now.
The nice thing about this is that when your ORACLE_HOME already contains a number of patches (like mine did, with the 10.2.0.3 patch set), these are automatically carried over with the clone.
Well, what is the purpose of this weblog? The idea is to share my knowledge with the community; it'll also have to serve as some kind of reference for the future.
The majority of the articles here will be focusing on the technology or technicalities of Oracle E-Business Suite and Oracle Database.
Some information about myself can be found in my profile as well as on LinkedIn.
I hope to be a frequent blogger. I have published some weblogs at the AMIS weblog, and will continue to do so here. Some of the posts that I published at the AMIS weblog will be published here as well in the near future.
Hope to meet you here again soon!