Developed by Gidijala Aravind Babu – Questions are on real life interview based;
Answers are mostly available on net – Are gathered into a doc – Q Set 1
Working as a DBA for the last 4 Yrs.
Steps to Drop a Database?
Option 1: Run DBCA and choose the "Delete a Database" option.
Option 2: Shut down the database, STARTUP MOUNT RESTRICT, then issue DROP DATABASE.
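The second option can be sketched as a SQL*Plus session (a minimal sketch; it assumes ORACLE_SID points at the database to drop and that "/ as sysdba" OS authentication works on this host):

```shell
# Minimal sketch: drop a database from the command line.
# Assumes ORACLE_SID is set and OS authentication as a dba group member.
sqlplus -S / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE;
STARTUP MOUNT RESTRICT;
DROP DATABASE;
EOF
```

DROP DATABASE removes the datafiles, online redo logs and controlfiles, but not archived logs or backups; remove those separately.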
Steps to Remove Oracle software?
Windows
• Uninstall all Oracle components using the Oracle Universal Installer (OUI).
• Run regedit.exe and delete the HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE key.
This contains registry entries for all Oracle products.
• Delete any references to Oracle services left behind in the following part of the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Ora*
It should be pretty obvious which ones relate to Oracle.
• Reboot your machine.
• Delete the "C:\Oracle" directory, or whatever directory is your ORACLE_BASE.
• Delete the "C:\Program Files\Oracle" directory.
• Empty the contents of your "c:\temp" directory.
• Empty your recycle bin.
UNIX
• Uninstall all Oracle components using the Oracle Universal Installer (OUI).
• Stop any outstanding processes using the appropriate utilities:
• # oemctl stop oms user/password
• # agentctl stop
# lsnrctl stop
Alternatively you can kill them using the kill -9 pid command as the root user.
• Delete the files and directories below the $ORACLE_HOME:
• # cd $ORACLE_HOME
# rm -Rf *
• With the exception of the product directory, delete directories below the
$ORACLE_BASE.
• # cd $ORACLE_BASE
# rm -Rf admin doc jre o*
• Delete the /etc/oratab file. If using 9iAS delete the /etc/emtab file also.
# rm /etc/oratab /etc/emtab
Steps to clone Database?
RMAN Clone a Database
Let's assume a source database named SRCDB and a clone (auxiliary) database named GEMINI.
1. Create a password file for the Cloned (GEMINI) instance:
orapwd file=/u01/app/oracle/product/9.2.0.1.0/dbs/orapwGEMINI password=password
entries=10
2. Configure tnsnames.ora and listener.ora
Add an entry identifying the database to tnsnames.ora and manually register the instance in
listener.ora; both files are located in the $ORACLE_HOME/network/admin directory.
2.a Manually register the database against the listener (listener.ora)
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/9.2.0.1.0)
(SID_NAME = GEMINI)
)
2.b Add the target GEMINI to tnsnames.ora
GEMINI =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myhost.mydomain.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = GEMINI)
)
)
2.c Reload the listener
lsnrctl reload
3. Create a new init.ora for the cloned database.
Create an init.ora file for the cloned database. If the source paths cannot be reused on the target
host (either because the clone is created on the same host or because those paths do not exist on
the target), then DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT may need to be
defined.
DB_NAME=GEMINI
CONTROL_FILES=(/u02/oradata/GEMINI/control01.ctl,
/u02/oradata/GEMINI/control02.ctl,
/u02/oradata/GEMINI/control03.ctl)
# Convert file names to allow for different directory structure.
DB_FILE_NAME_CONVERT=(/u02/oradata/SRCDB/,/u02/oradata/GEMINI/)
LOG_FILE_NAME_CONVERT=(/u01/oradata/SRCDB/,/u01/oradata/GEMINI/)
# block_size and compatible parameters must match those of the source database
DB_BLOCK_SIZE=8192
COMPATIBLE=9.2.0.0.0
4. Connect to the cloned instance
ORACLE_SID=GEMINI; export ORACLE_SID
sqlplus /nolog
conn / as sysdba
5. Create an SPFILE based on the init.ora
CREATE SPFILE FROM PFILE='/u01/app/oracle/admin/GEMINI/pfile/init.ora';
6. Start the database in NOMOUNT mode:
STARTUP FORCE NOMOUNT;
7. Connect to the TARGET, CATALOG and AUXILIARY databases.
RMAN opens three connections: one to the source database (SRCDB), one to the catalog
database (RCAT), and one to the cloned database (GEMINI).
ORACLE_SID=GEMINI; export ORACLE_SID
rman TARGET sys/password@SRCDB CATALOG rman/rman@RCAT AUXILIARY /
8. Complete or Incomplete clone (recover)
From RMAN, clone the database using one of the following commands:
8.a Clone the database by means of a complete recover.
DUPLICATE TARGET DATABASE TO GEMINI;
8.b Clone the database up to a defined point in time in the past by means of an incomplete recover
DUPLICATE TARGET DATABASE TO GEMINI UNTIL TIME 'SYSDATE-2';
9. Process finished.
Once the process is finished, the newly created GEMINI database is ready to be used as an independent
new cloned database.
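Steps 4 through 8 can be strung together in one session; a sketch for the complete-recover case (the passwords and net service names are the placeholders used in the steps above):

```shell
# Sketch: clone SRCDB to GEMINI via RMAN DUPLICATE.
# Assumes the GEMINI instance is already started NOMOUNT (steps 4-6)
# and that SRCDB and RCAT resolve through tnsnames.ora.
export ORACLE_SID=GEMINI
rman TARGET sys/password@SRCDB CATALOG rman/rman@RCAT AUXILIARY / <<'EOF'
DUPLICATE TARGET DATABASE TO GEMINI;
EOF
```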
Clone a Database Manually
STEP 1: On the old system, go into SQL*Plus, sign on as SYSDBA and issue: “alter
database backup controlfile to trace”. This will put the create controlfile syntax in the trace
file directory. The trace keyword tells oracle to generate a script containing a create
controlfile command and store it in the trace directory identified in the user_dump_dest
parameter of the init.ora file. It will look something like this:
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/u03/oradata/oldlsq/log1a.dbf',
'/u03/oradata/oldlsq/log1b.dbf') SIZE 30M,
GROUP 2 ('/u04/oradata/oldlsq/log2a.dbf',
'/u04/oradata/oldlsq/log2b.dbf') SIZE 30M
DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'
;
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
STEP 2: Shutdown the old database
STEP 3: Copy all data files into the new directories on the new server. You may change
the file names if you want, but you must edit the controlfile to reflect the new data files
names on the new server.
rcp /u01/oradata/oldlsq/* newhost:/u01/oradata/newlsq
rcp /u03/oradata/oldlsq/* newhost:/u03/oradata/newlsq
rcp /u04/oradata/oldlsq/* newhost:/u04/oradata/newlsq
STEP 4: Copy and Edit the Control file – Using the output syntax from STEP 1, modify
the controlfile creation script by changing the following:
Old:
CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
New:
CREATE CONTROLFILE SET DATABASE "NEWLSQ" RESETLOGS
STEP 5: Remove the “recover database” and “alter database open” syntax
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
STEP 6: Rename any data files whose names have changed.
Save as db_create_controlfile.sql.
Old:
DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'
New:
DATAFILE
'/u01/oradata/newlsq/system01.dbf',
'/u01/oradata/newlsq/mydatabase.dbf'
STEP 7: Create the bdump, udump and cdump directories
cd $DBA/admin
mkdir newlsq
cd newlsq
mkdir bdump
mkdir udump
mkdir cdump
mkdir pfile
STEP 8: Copy-over the old init.ora file
rcp $DBA/admin/oldlsq/pfile/*.ora
newhost:/u01/oracle/admin/newlsq/pfile
STEP 9: Start the new database
@db_create_controlfile.sql
STEP 10: Place the new database in archivelog mode
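A sketch of step 10, assuming OS authentication as SYSDBA on the new server:

```shell
# Sketch: put the newly cloned database into archivelog mode.
sqlplus -S / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;
EOF
```

ARCHIVE LOG LIST at the end confirms the new log mode.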
Cloning database using DBCA
The "Template Management" section of the Database Configuration
Assistant (DBCA) can be used to clone databases. The following method
creates a clone of an existing database including both the structure
and the data:
1. Start the Database Configuration Assistant (DBCA).
2. On the "Welcome" screen click the "Next" button.
3. On the "Operations" screen select the "Manage Templates" option
and click the "Next" button.
4. On the "Template Management" screen select the "Create a database
template" option, select the "From an existing database (structure as
well as data; you can also choose structure only)" sub-option, then
click the "Next" button.
5. On the "Source database" screen select the relevant database
instance and click the "Next" button.
6. On the "Template properties" screen enter a suitable name and
description for the template, confirm the location for the
template files and click the "Next" button.
7. On the "Location of database related files" screen choose either
to maintain the file locations or to convert to OFA structure
(recommended) and click the "Finish" button.
8. On the "Confirmation" screen click the "OK" button.
9. Wait while the Database Configuration Assistant progress screen
gathers information about the source database, backs up the
database and creates the template.
By default the template files are located in the
$ORACLE_HOME/assistants/dbca/templates directory.
Now move the template to your destination machine, start DBCA, and
install a database using that template.
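Creating the database from the transported template can also be done silently; a sketch (the template name, SID and passwords here are assumptions, and dbca -help lists the options your release supports):

```shell
# Sketch: build a database from the transported DBCA template in silent mode.
# Template name, SID and passwords are placeholders, not real values.
$ORACLE_HOME/bin/dbca -silent -createDatabase \
  -templateName my_clone_template.dbc \
  -gdbName GEMINI -sid GEMINI \
  -sysPassword password -systemPassword password
```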
Steps to manually create Database?
Creating a Database through the Command Line
Specifying the Instance's SID
There can be more than one Oracle instance on a single machine. In order to be able to
distinguish these instances, Oracle uses a SID (System Identifier) which is a string.
The SID can be set through the ORACLE_SID environment variable.
D:\oracle\product\10.1.0>set ORACLE_SID=ORA10
Creating an Oracle Service
On Windows, each instance requires a Windows service. This service must first be created with
oradim:
D:\oracle\product\10.1.0\Db_1>oradim -new -sid %ORACLE_SID% -intpwd
MYSECRETPASSWORD -startmode M
Instance created.
It can be verified that a Windows service was created by typing services.msc into the
console. A service named OracleServiceORA10 (ORA10 = %ORACLE_SID%) will be found.
Also, the startup type is manual as was requested by -startmode M.
Oracle also created a password file under %ORACLE_HOME%\database:
D:\oracle\product\10.1.0\Db_1>dir database
Volume in drive D has no label.
Volume Serial Number is C4E9-469A
Directory of D:\oracle\product\10.1.0\Db_1\database
03/05/2005 03:54 PM <DIR> .
03/05/2005 03:54 PM <DIR> ..
03/05/2005 11:16 AM <DIR> archive
03/05/2005 11:13 AM 31,744 oradba.exe
03/05/2005 03:54 PM 2,560 PWDORA10.ORA
As can be seen, the SID is in the password file's name.
Creating the initialization parameter file
When an Oracle instance starts up, it requires either an initialization parameter file (init.ora) or
an SPFILE.
SPFILEs have binary content and must be created from init.ora files. Therefore, the init.ora file
(which is an ordinary text file) is created first.
Here's a minimal init.ora (under $ORACLE_HOME/dbs on Unix, or
%ORACLE_HOME%\database on Windows) just to demonstrate how the control files
are found. Of course, you will add more init parameters to the init.ora file.
D:\oracle\product\10.1.0\Db_1\database\initORA10.ora
control_files = (d:\oracle\databases\ora10\control01.ora,
d:\oracle\databases\ora10\control02.ora,
d:\oracle\databases\ora10\control03.ora)
undo_management = auto
db_name = ora10
db_block_size = 8192
The undo_management parameter is necessary if we want to use automatic undo
management.
Although the above seems to be the bare required minimum, you probably also want to define
background_dump_dest, core_dump_dest and user_dump_dest.
Starting the instance
Now, that we have created an Oracle service and the init.ora file, we're ready to start the
instance:
D:\oracle\product\10.1.0\Db_1>sqlplus /nolog
SQL*Plus: Release 10.1.0.2.0 - Production on Sat Mar 5 16:05:15 2005
Copyright (c) 1982, 2004, Oracle. All rights reserved.
SQL> connect sys/MYSECRETPASSWORD as sysdba
Connected to an idle instance.
SQL*Plus tells us that we're connected to an idle instance. That means that it is not yet
started. So, let's start the instance. We have to start the instance without mounting
(nomount) as there is no database we could mount at the moment.
SQL> startup nomount
ORACLE instance started.
Total System Global Area 113246208 bytes
Fixed Size 787708 bytes
Variable Size 61864708 bytes
Database Buffers 50331648 bytes
Redo Buffers 262144 bytes
This created the SGA (System Global Area) and the background processes.
Creating the database
We're now ready to finally create the database:
SQL>create database ora10
logfile group 1 ('D:\oracle\databases\ora10\redo1.log') size 10M,
group 2 ('D:\oracle\databases\ora10\redo2.log') size 10M,
group 3 ('D:\oracle\databases\ora10\redo3.log') size 10M
character set WE8ISO8859P1
national character set utf8
datafile 'D:\oracle\databases\ora10\system.dbf'
size 50M
autoextend on
next 10M maxsize unlimited
extent management local
sysaux datafile 'D:\oracle\databases\ora10\sysaux.dbf'
size 10M
autoextend on
next 10M
maxsize unlimited
undo tablespace undo
datafile 'D:\oracle\databases\ora10\undo.dbf'
size 10M
default temporary tablespace temp
tempfile 'D:\oracle\databases\ora10\temp.dbf'
size 10M;
If something goes wrong with the creation, Oracle will write an error into the alert.log. The
alert log is normally found in the directory specified by background_dump_dest. If
this parameter was not specified (as is the case in our minimal init.ora), the alert.log will be
written into %ORACLE_HOME%\RDBMS\trace.
If an ORA-01031: insufficient privileges is returned, it most likely means that the current
user is not in the dba group (on Unix) or the ORA_DBA group (on Windows).
If the init.ora file is not at its default location or has not been found with the pfile attribute,
an ORA-01078: failure in processing system parameters and an LRM-00109: could not open
parameter file '/appl/oracle/product/9.2.0.2/dbs/initadpdb.ora' error is issued.
The create database command also executes a file whose name is determined by the (hidden)
init parameter _init_sql_file (which seems to default to sql.bsq).
After the creation of the database, it can be mounted and opened for use.
Completing the DB creation
In order to complete the db creation, the following scripts must be run as sys:
• %ORACLE_HOME%/rdbms/admin/catalog.sql and
• %ORACLE_HOME%/rdbms/admin/catproc.sql
SQL*Plus provides a shortcut to refer to the ORACLE_HOME directory: the question mark (?).
Therefore, these scripts can be called like so:
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
catalog.sql creates the data dictionary. catproc.sql creates all structures required for PL/SQL.
catalog.sql calls, for example, catexp.sql which is a requirement for exp, or dbmsstdx.sql
which is a requirement to create triggers.
The user system might also want to run ?/sqlplus/admin/pupbld.sql. pupbld.sql creates a
table that can be used to block someone from using SQL*Plus.
SQL> connect system/manager
SQL> @?/sqlplus/admin/pupbld
Of course, tablespaces, users, tables and so on must be created according to the use of the
database.
Setting up the database to use Java
Also call @?/javavm/install/initjvm.sql if you want to enable the JServer (Java VM) option.
Oracle managed files
Refer also to DB_CREATE_ONLINE_LOG_DEST_n and DB_CREATE_FILE_DEST for Oracle-managed
files.
Errors while creating database
If there is an error while the database is created, such as an ORA-01092: ORACLE instance
terminated. Disconnection forced, the alert log should be consulted. This file most probably
contains a more descriptive error message.
If the error occurs at a very early stage, there won't be an alert.log. In this case, the error will
most probably be found in a trace file in the udump directory.
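A quick way to find that trace file, assuming the default OFA layout (the path below is an assumption; check your user_dump_dest parameter):

```shell
# Sketch: show the most recently written trace file in udump.
# Assumes the default OFA directory layout under ORACLE_BASE.
UDUMP=$ORACLE_BASE/admin/$ORACLE_SID/udump
ls -t "$UDUMP"/*.trc | head -1
```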
Steps to silently install a database?
Through a response file
To run Database Configuration Assistant in response file or silent mode:
1. Copy the dbca.rsp response file template from the response file directory to a
directory on your system:
2. $ cp /directory_path/response/dbca.rsp local_directory
In this example, directory_path is the path of the database directory on the DVD. If
you have copied the software to a hard drive, you can edit the file in the response
directory if you prefer.
Note:
As an alternative to editing the response file template, you can also create a database
by specifying all required information as command line options when you run
Database Configuration Assistant. For information about the list of options supported,
enter the following command:
$ $ORACLE_HOME/bin/dbca -help
3. Open the response file in a text editor:
4. $ vi /local_dir/dbca.rsp
5. Edit the file, following the instructions in the file.
Note:
Database Configuration Assistant fails if you do not correctly configure the response
file.
6. Log in as the Oracle software owner user, and set the ORACLE_HOME environment
variable to specify the correct Oracle home directory.
7. If you intend to run Database Configuration Assistant in response file mode, set the
DISPLAY environment variable.
8. Use the following command syntax to run Database Configuration Assistant in silent or
response file mode using a response file:
9. $ORACLE_HOME/bin/dbca {-progressOnly | -silent} -responseFile \
/local_dir/dbca.rsp
In this example:
o The -silent option runs Database Configuration Assistant in silent mode.
o The -progressOnly option runs Database Configuration Assistant in response
file mode.
o local_dir is the full path of the directory where you copied the dbca.rsp
response file template.
Steps to clone Oracle Home
1. Verify that the installation of Oracle Database that you want to clone has been
successful.
You can do this by reviewing the installActionsdate_time.log file for the installation
session, which is normally located in the /orainventory_location/logs directory.
If you have installed patches, then you can check their status by running the following
commands:
$ export ORACLE_HOME=ORACLE_HOME_using_patch
$ $ORACLE_HOME/OPatch/opatch lsinventory
2. Stop all processes related to the Oracle home. Refer to the "Removing Oracle
Software" section for more information on stopping the processes for an Oracle home.
3. Create a ZIP file with the Oracle home (but not Oracle base) directory.
For example, if the source Oracle installation is in the
/u01/app/oracle/product/10.2.0/db_1, then you zip the db_1 directory by using the
following command:
# zip -r db_1.zip /u01/app/oracle/product/10.2.0/db_1
Leave out the admin, flash_recovery_area, and oradata directories that are in the
10.2.0 directory. These directories will be created in the target installation later, when
you create a new database there.
4. Copy the ZIP file to the root directory of the target computer.
5. Extract the ZIP file contents by using the following command:
# unzip -d / db_1.zip
6. Repeat steps 4 and 5 for each computer where you want to clone the Oracle home,
unless the Oracle home is on a shared storage device.
7. On the target computer, change directory to the unzipped Oracle home directory, and
remove all the .ora (*.ora) files present in the unzipped
$ORACLE_HOME/network/admin directory.
8. From the $ORACLE_HOME/oui/bin directory, run Oracle Universal Installer in clone
mode for the unzipped Oracle home. Use the following syntax:
9. $ORACLE_HOME/oui/bin/runInstaller -silent -clone ORACLE_HOME="target location"
ORACLE_HOME_NAME="unique_name_on node" [-responseFile full_directory_path]
For example:
$ORACLE_HOME/oui/bin/runInstaller -silent -clone
ORACLE_HOME="/u01/app/oracle/product/10.2.0/db_1"
ORACLE_HOME_NAME="db_1"
The -responseFile parameter is optional. You can supply clone-time parameters on the
command line or by using the response file named on the command line.
Oracle Universal Installer starts, and then records the cloning actions in the
cloneActionstimestamp.log file. This log file is normally located in
/orainventory_location/logs directory.
11. To create a new database for the newly cloned Oracle home, run Database
Configuration Assistant as follows:
$ cd $ORACLE_HOME/bin
$ ./dbca
12. To configure connection information for the new database, run Net Configuration
Assistant.
$ cd $ORACLE_HOME/bin
$ ./netca
Steps to Read AWR Report?
Interpret AWR Report
select wait_class, event_name from dba_hist_event_name order by wait_class, event_name;
Find out which event name falls under which wait class before you start reading the AWR
report, as there are 800+ event names spread across the wait classes.
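Before interpreting anything you need the report itself; the standard script is run from SQL*Plus and prompts for the report type, the snapshot range and the output file name:

```shell
# Sketch: generate an AWR report (the script is interactive).
sqlplus / as sysdba
# then, at the SQL> prompt:
#   @?/rdbms/admin/awrrpt.sql
```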
The wait classes are:
Administrative, Application, Cluster, Commit, Concurrency, Configuration,
Idle, Network, Other, Scheduler, System I/O, User I/O
Below is the output of the script given above. Look at the wait class of interest and the events
causing the major issues in that wait class.
WAIT_CLASS EVENT_NAME
-------------------- ----------------------------------------
Administrative ASM COD rollback operation completion
ASM mount : wait for heartbeat
Backup: sbtbackup
Backup: sbtclose
Backup: sbtclose2
Backup: sbtcommand
Backup: sbtend
Backup: sbterror
Backup: sbtinfo
Backup: sbtinfo2
Backup: sbtinit
Backup: sbtinit2
Backup: sbtopen
Backup: sbtpcbackup
Backup: sbtpccancel
Backup: sbtpccommit
Backup: sbtpcend
Backup: sbtpcquerybackup
Backup: sbtpcqueryrestore
Backup: sbtpcrestore
Backup: sbtpcstart
Backup: sbtpcstatus
Backup: sbtpcvalidate
Backup: sbtread
Backup: sbtread2
Backup: sbtremove
Backup: sbtremove2
Backup: sbtrestore
Backup: sbtwrite
Backup: sbtwrite2
JS coord start wait
JS kgl get object wait
JS kill job wait
alter rbs offline
alter system set dispatcher
buffer pool resize
enq: DB - contention
enq: TW - contention
enq: ZG - contention
index (re)build online cleanup
index (re)build online merge
index (re)build online start
multiple dbwriter suspend/resume for file offline
switch logfile command
switch undo - offline
wait for possible quiesce finish
Application SQL*Net break/reset to client
SQL*Net break/reset to dblink
Streams capture: filter callback waiting for ruleset
Streams: apply reader waiting for DDL to apply
Wait for Table Lock
enq: KO - fast object checkpoint
enq: PW - flush prewarm buffers
enq: RO - contention
enq: RO - fast object reuse
enq: TM - contention
enq: TX - row lock contention
enq: UL - contention
Cluster ASM PST query : wait for [PM][grp][0] grant
Streams: RAC waiting for inter instance ack
gc assume
gc block recovery request
gc buffer busy
gc claim
gc cr block 2-way
gc cr block 3-way
gc cr block busy
gc cr block congested
gc cr block lost
gc cr block unknown
gc cr cancel
gc cr disk read
gc cr disk request
gc cr failure
gc cr grant 2-way
gc cr grant busy
gc cr grant congested
gc cr grant unknown
gc cr multi block request
gc cr request
gc current block 2-way
gc current block 3-way
gc current block busy
gc current block congested
gc current block lost
gc current block unknown
gc current cancel
gc current grant 2-way
gc current grant busy
gc current grant congested
gc current grant unknown
gc current multi block request
gc current request
gc current retry
gc current split
gc domain validation
gc freelist
gc object scan
gc prepare
gc quiesce
gc quiesce wait
gc recovery free
gc recovery quiesce
gc remaster
lock remastering
pi renounce write complete
retry contact SCN lock master
Commit log file sync
Concurrency buffer busy waits
cursor: mutex S
cursor: mutex X
cursor: pin S wait on X
enq: TX - index contention
latch: In memory undo latch
latch: MQL Tracking Latch
latch: Undo Hint Latch
latch: cache buffers chains
latch: library cache
latch: library cache lock
latch: library cache pin
latch: row cache objects
latch: shared pool
library cache load lock
library cache lock
library cache pin
logout restrictor
os thread startup
pipe put
resmgr:internal state change
resmgr:internal state cleanup
resmgr:sessions to exit
row cache lock
row cache read
Configuration Streams AQ: enqueue blocked on low memory
Streams capture: resolve low memory condition
Streams capture: waiting for subscribers to catch up
checkpoint completed
enq: HW - contention
enq: SQ - contention
enq: SS - contention
enq: ST - contention
enq: TX - allocate ITL entry
free buffer waits
latch: redo copy
latch: redo writing
log buffer space
log file switch (archiving needed)
log file switch (checkpoint incomplete)
log file switch (private strand flush incomplete)
log file switch completion
sort segment request
statement suspended, wait error to be cleared
undo segment extension
undo segment tx slot
wait for EMON to process ntfns
write complete waits
Idle ASM background timer
DIAG idle wait
EMON idle wait
HS message to agent
JS external job
KSV master wait
LNS ASYNC archive log
LNS ASYNC dest activation
LNS ASYNC end of log
LogMiner: client waiting for transaction
LogMiner: reader waiting for more redo
LogMiner: slave waiting for activate message
LogMiner: wakeup event for builder
LogMiner: wakeup event for preparer
LogMiner: wakeup event for reader
PL/SQL lock timer
PX Deq Credit: need buffer
PX Deq: Execute Reply
PX Deq: Execution Msg
PX Deq: Index Merge Close
PX Deq: Index Merge Execute
PX Deq: Index Merge Reply
PX Deq: Join ACK
PX Deq: Msg Fragment
PX Deq: Par Recov Change Vector
PX Deq: Par Recov Execute
PX Deq: Par Recov Reply
PX Deq: Parse Reply
PX Deq: Table Q Normal
PX Deq: Table Q Sample
PX Deq: Txn Recovery Reply
PX Deq: Txn Recovery Start
PX Deq: kdcph_mai
PX Deq: kdcphc_ack
PX Deque wait
PX Idle Wait
SGA: MMAN sleep for component shrink
SQL*Net message from client
SQL*Net message from dblink
Streams AQ: RAC qmn coordinator idle wait
Streams AQ: deallocate messages from Streams Pool
Streams AQ: delete acknowledged messages
Streams AQ: qmn coordinator idle wait
Streams AQ: qmn slave idle wait
Streams AQ: waiting for messages in the queue
Streams AQ: waiting for time management or cleanup tasks
Streams fetch slave: waiting for txns
class slave wait
dispatcher timer
gcs remote message
ges remote message
i/o slave wait
jobq slave wait
parallel recovery coordinator waits for cleanup of slaves
pipe get
pmon timer
rdbms ipc message
single-task message
smon timer
virtual circuit status
wait for unread message on broadcast channel
wait for unread message on multiple broadcast channels
watchdog main loop
Network ARCH wait for flow-control
ARCH wait for net re-connect
ARCH wait for netserver detach
ARCH wait for netserver init 1
ARCH wait for netserver init 2
ARCH wait for netserver start
ARCH wait on ATTACH
ARCH wait on DETACH
ARCH wait on SENDREQ
LGWR wait on ATTACH
LGWR wait on DETACH
LGWR wait on LNS
LGWR wait on SENDREQ
LNS wait on ATTACH
LNS wait on DETACH
LNS wait on LGWR
LNS wait on SENDREQ
SQL*Net message to client
SQL*Net message to dblink
SQL*Net more data from client
SQL*Net more data from dblink
SQL*Net more data to client
SQL*Net more data to dblink
TCP Socket (KGAS)
TEXT: URL_DATASTORE network wait
dedicated server timer
dispatcher listen timer
Other ARCH wait for archivelog lock
ARCH wait for process death 1
ARCH wait for process death 2
ARCH wait for process death 3
ARCH wait for process death 4
ARCH wait for process death 5
ARCH wait for process start 1
ARCH wait for process start 2
ARCH wait for process start 3
ARCH wait for process start 4
ARCH wait on c/f tx acquire 1
ARCH wait on c/f tx acquire 2
ASM background running
ASM background starting
ASM db client exists
ASM internal hang test
AWR Flush
AWR Metric Capture
BFILE check if exists
BFILE check if open
BFILE closure
BFILE get length
BFILE get name object
BFILE get path object
BFILE internal seek
BFILE open
CGS skgxn join retry
CGS wait for IPC msg
Cluster Suspension wait
Cluster stablization wait
DBFG waiting for reply
DBMS_LDAP: LDAP operation
DFS db file lock
DFS lock handle
Data Guard broker: single instance
Data Guard broker: wait upon ORA-12850 error
Data Guard: process clean up
Data Guard: process exit
FAL archive wait 1 sec for REOPEN minimum
GCS lock cancel
GCS lock cvt S
GCS lock cvt X
GCS lock esc
GCS lock esc X
GCS lock open
GCS lock open S
GCS lock open X
GCS recovery lock convert
GCS recovery lock open
GV$: slave acquisition retry wait time
IPC busy async request
IPC send completion sync
IPC wait for name service busy
IPC waiting for OSD resources
KJC: Wait for msg sends to complete
Kupp process shutdown
L1 validation
LGWR simulation latency wait
LGWR wait for redo copy
LGWR wait on full LNS buffer
LGWR-LNS wait on channel
LMON global data update
LNS simulation latency wait
LNS wait for LGWR redo
Logical Standby Apply shutdown
Logical Standby Terminal Apply
Logical Standby dictionary build
Logical Standby pin transaction
MMON (Lite) shutdown
MMON slave messages
MRP wait on archivelog archival
MRP wait on archivelog arrival
MRP wait on archivelog delay
MRP wait on process death
MRP wait on process restart
MRP wait on process start
MRP wait on startup clear
MRP wait on state change
MRP wait on state n_a
MRP wait on state reset
OLAP Aggregate Client Deq
OLAP Aggregate Client Enq
OLAP Aggregate Master Deq
OLAP Aggregate Master Enq
OLAP Null PQ Reason
OLAP Parallel Temp Grew
OLAP Parallel Temp Grow Request
OLAP Parallel Temp Grow Wait
OLAP Parallel Type Deq
PMON to cleanup pseudo-branches at svc stop time
PX Deq Credit: free buffer
PX Deq Credit: send blkd
PX Deq: OLAP Update Close
PX Deq: OLAP Update Execute
PX Deq: OLAP Update Reply
PX Deq: Signal ACK
PX Deq: Table Q Close
PX Deq: Table Q Get Keys
PX Deq: Table Q qref
PX Deq: Test for msg
PX Deq: reap credit
PX Nsq: PQ descriptor query
PX Nsq: PQ load info query
PX Send Wait
PX create server
PX qref latch
PX server shutdown
PX signal server
PX slave connection
PX slave release
RF - FSFO Wait for Ack
RFS announce
RFS attach
RFS close
RFS create
RFS detach
RFS dispatch
RFS ping
RFS register
RVWR wait for flashback copy
Replication Dequeue
SGA: allocation forcing component growth
SGA: sga_target resize
Streams AQ: QueueTable kgl locks
Streams AQ: enqueue blocked due to flow control
Streams AQ: qmn coordinator waiting for slave to start
Streams AQ: waiting for busy instance for instance_name
Streams capture: waiting for archive log
Streams capture: waiting for database startup
Streams miscellaneous event
Sync ASM rebalance
WCR: RAC message context busy
Wait for TT enqueue
Wait for shrink lock
Wait for shrink lock2
Wait on stby instance close
affinity expansion in replay
block change tracking buffer space
buffer busy
buffer deadlock
buffer dirty disabled
buffer exterminate
buffer freelistbusy
buffer invalidation wait
buffer latch
buffer rememberlist busy
buffer resize
buffer write wait
buffer writeList full
change tracking file parallel write
change tracking file synchronous read
change tracking file synchronous write
check CPU wait times
checkpoint advanced
cleanup of aborted process
control file diagnostic dump
control file heartbeat
cr request retry
cursor: pin S
cursor: pin X
debugger command
dispatcher shutdown
dma prepare busy
dupl. cluster key
enq: AD - allocate AU
enq: AD - deallocate AU
enq: AF - task serialization
enq: AG - contention
enq: AM - client registration
enq: AM - rollback COD reservation
enq: AM - shutdown
enq: AO - contention
enq: AS - modify service
enq: AS - service activation
enq: AT - contention
enq: AU - audit index file
enq: AW - AW generation lock
enq: AW - AW state lock
enq: AW - AW$ table lock
enq: AW - user access for AW
enq: BF - PMON Join Filter cleanup
enq: BF - allocation contention
enq: BR - file shrink
enq: BR - proxy-copy
enq: CF - contention
enq: CI - contention
enq: CL - compare labels
enq: CL - drop label
enq: CM - gate
enq: CM - instance
enq: CN - race with init
enq: CN - race with reg
enq: CN - race with txn
enq: CT - CTWR process start/stop
enq: CT - change stream ownership
enq: CT - global space management
enq: CT - local space management
enq: CT - reading
enq: CT - state
enq: CT - state change gate 1
enq: CT - state change gate 2
enq: CU - contention
enq: DD - contention
enq: DF - contention
enq: DG - contention
enq: DL - contention
enq: DM - contention
enq: DN - contention
enq: DP - contention
enq: DR - contention
enq: DS - contention
enq: DT - contention
enq: DV - contention
enq: DX - contention
enq: FA - access file
enq: FB - contention
enq: FC - open an ACD thread
enq: FC - recover an ACD thread
enq: FD - Flashback coordinator
enq: FD - Flashback on/off
enq: FD - Marker generation
enq: FD - Restore point create/drop
enq: FD - Tablespace flashback on/off
enq: FG - FG redo generation enq race
enq: FG - LGWR redo generation enq race
enq: FG - serialize ACD relocate
enq: FL - Flashback database log
enq: FL - Flashback db command
enq: FM - contention
enq: FP - global fob contention
enq: FR - contention
enq: FS - contention
enq: FT - allow LGWR writes
enq: FT - disable LGWR writes
enq: FU - contention
enq: HD - contention
enq: HP - contention
enq: HQ - contention
enq: HV - contention
enq: IA - contention
enq: ID - contention
enq: IL - contention
enq: IM - contention for blr
enq: IR - contention
enq: IR - contention2
enq: IS - contention
enq: IT - contention
enq: JD - contention
enq: JI - contention
enq: JQ - contention
enq: JS - contention
enq: JS - evt notify
enq: JS - evtsub add
enq: JS - evtsub drop
enq: JS - job recov lock
enq: JS - job run lock - synchronize
enq: JS - q mem clnup lck
enq: JS - queue lock
enq: JS - sch locl enqs
enq: JS - wdw op
enq: KK - context
enq: KM - contention
enq: KP - contention
enq: KT - contention
enq: MD - contention
enq: MH - contention
enq: MK - contention
enq: ML - contention
enq: MN - contention
enq: MO - contention
enq: MR - contention
enq: MS - contention
enq: MW - contention
enq: OC - contention
enq: OL - contention
enq: OQ - xsoq*histrecb
enq: OQ - xsoqhiAlloc
enq: OQ - xsoqhiClose
enq: OQ - xsoqhiFlush
enq: OQ - xsoqhistrecb
enq: OW - initialization
enq: OW - termination
enq: PD - contention
enq: PE - contention
enq: PF - contention
enq: PG - contention
enq: PH - contention
enq: PI - contention
enq: PL - contention
enq: PR - contention
enq: PS - contention
enq: PT - contention
enq: PV - syncshut
enq: PV - syncstart
enq: PW - perwarm status in dbw0
enq: RB - contention
enq: RF - RF - Database Automatic Disable
enq: RF - RF - FSFO Observed
enq: RF - RF - FSFO connectivity
enq: RF - RF - FSFO state
enq: RF - RF - FSFO synchronization
enq: RF - RF - FSFO wait
enq: RF - atomicity
enq: RF - new AI
enq: RF - synch: DG Broker metadata
enq: RF - synchronization: HC master
enq: RF - synchronization: aifo master
enq: RF - synchronization: chief
enq: RF - synchronization: critical ai
enq: RN - contention
enq: RP - contention
enq: RR - contention
enq: RS - file delete
enq: RS - persist alert level
enq: RS - prevent aging list update
enq: RS - prevent file delete
enq: RS - read alert level
enq: RS - record reuse
enq: RS - write alert level
enq: RT - contention
enq: RU - contention
enq: RU - waiting
enq: RW - MV metadata contention
enq: SB - contention
enq: SE - contention
enq: SF - contention
enq: SH - contention
enq: SI - contention
enq: SK - contention
enq: SR - contention
enq: SU - contention
enq: SW - contention
enq: TA - contention
enq: TB - SQL Tuning Base Cache Load
enq: TB - SQL Tuning Base Cache Update
enq: TC - contention
enq: TC - contention2
enq: TD - KTF dump entries
enq: TE - KTF broadcast
enq: TF - contention
enq: TL - contention
enq: TO - contention
enq: TQ - DDL contention
enq: TQ - INI contention
enq: TQ - TM contention
enq: TS - contention
enq: TT - contention
enq: TX - contention
enq: US - contention
enq: WA - contention
enq: WF - contention
enq: WL - contention
enq: WP - contention
enq: WR - contention
enq: XH - contention
enq: XQ - recovery
enq: XQ - relocation
enq: XR - database force logging
enq: XR - quiesce database
enq: XY - contention
events in waitclass Other
extent map load/unlock
flashback buf free by RVWR
flashback free VI log
flashback log switch
free global transaction table entry
free process state object
gcs ddet enter server mode
gcs domain validation
gcs drm freeze begin
gcs drm freeze in enter server mode
gcs enter server mode
gcs log flush sync
gcs remastering wait for read latch
gcs remastering wait for write latch
gcs resource directory to be unfrozen
gcs to be enabled
ges LMD suspend for testing event
ges LMD to inherit communication channels
ges LMD to shutdown
ges LMON for send queues
ges LMON to get to FTDONE
ges LMON to join CGS group
ges cached resource cleanup
ges cancel
ges cgs registration
ges enter server mode
ges generic event
ges global resource directory to be frozen
ges inquiry response
ges lmd and pmon to attach
ges lmd/lmses to freeze in rcfg - mrcvr
ges lmd/lmses to unfreeze in rcfg - mrcvr
ges master to get established for SCN op
ges performance test completion
ges pmon to exit
ges process with outstanding i/o
ges reconfiguration to start
ges resource cleanout during enqueue open
ges resource cleanout during enqueue open-cvt
ges resource directory to be unfrozen
ges retry query node
ges reusing os pid
ges user error
ges wait for lmon to be ready
ges1 LMON to wake up LMD - mrcvr
ges2 LMON to wake up LMD - mrcvr
ges2 LMON to wake up lms - mrcvr 2
ges2 LMON to wake up lms - mrcvr 3
ges2 proc latch in rm latch get 1
ges2 proc latch in rm latch get 2
global cache busy
global enqueue expand wait
imm op
inactive session
inactive transaction branch
index block split
instance state change
job scheduler coordinator slave wait
jobq slave TJ process wait
jobq slave shutdown wait
kcbzps
kcrrrcp
kdblil wait before retrying ORA-54
kdic_do_merge
kfcl: instance recovery
kgltwait
kjbdomalc allocate recovery domain - retry
kjbdrmcvtq lmon drm quiesce: ping completion
kjbopen wait for recovery domain attach
kjctcisnd: Queue/Send client message
kjctssqmg: quick message send wait
kjudomatt wait for recovery domain attach
kjudomdet wait for recovery domain detach
kjxgrtest
kkdlgon
kkdlhpon
kkdlsipon
kksfbc child completion
kksfbc research
kkshgnc reloop
kksscl hash split
knpc_acwm_AwaitChangedWaterMark
knpc_anq_AwaitNonemptyQueue
knpsmai
kpodplck wait before retrying ORA-54
ksbcic
ksbsrv
ksdxexeother
ksdxexeotherwait
ksim generic wait event
ksqded
ksv slave avail wait
ksxr poll remote instances
ksxr wait for mount shared
ktfbtgex
ktm: instance recovery
ktsambl
kttm2d
kupp process wait
kxfxse
kxfxsp
latch activity
latch free
latch: Change Notification Hash table latch
latch: KCL gc element parent latch
latch: cache buffer handles
latch: cache buffers lru chain
latch: checkpoint queue latch
latch: enqueue hash chains
latch: gcs resource hash
latch: ges resource hash list
latch: messages
latch: object queue header heap
latch: object queue header operation
latch: parallel query alloc buffer
latch: redo allocation
latch: session allocation
latch: undo global data
latch: virtual circuit queues
library cache revalidation
library cache shutdown
listen endpoint status
lms flush message acks
lock close
lock deadlock retry
lock escalate retry
lock release pending
log file switch (clearing log file)
log switch/archive
log write(even)
log write(odd)
master exit
name-service call wait
no free buffers
no free locks
null event
opishd
optimizer stats update retry
pending global transaction(s)
prewarm transfer retry
prior spawner clean up
process shutdown
process startup
process terminate
qerex_gdml
queue slave messages
rdbms ipc message block
rdbms ipc reply
recovery area: computing applied logs
recovery area: computing backed up files
recovery area: computing dropped files
recovery area: computing identical files
recovery area: computing obsolete files
reliable message
rfi_drcx_site_del
rfi_insv_shut
rfi_insv_start
rfi_nsv_deldef
rfi_nsv_md_close
rfi_nsv_md_write
rfi_nsv_postdef
rfi_nsv_shut
rfi_nsv_start
rfi_recon1
rfi_recon2
rfm_dmon_last_gasp
rfm_dmon_pdefer
rfm_dmon_shut
rfm_dmon_timeout_op
rfm_pmon_dso_stall
rfrdb_dbop
rfrdb_recon1
rfrdb_recon2
rfrdb_try235
rfrla_lapp1
rfrla_lapp2
rfrla_lapp3
rfrla_lapp4
rfrla_lapp5
rfrld_rhmrpwait
rfrm_dbcl
rfrm_dbop
rfrm_nonzero_sub_count
rfrm_rsm_shut
rfrm_rsm_so_attach
rfrm_rsm_start
rfrm_stall
rfrm_zero_sub_count
rfrpa_mrpdn
rfrpa_mrpup
rfrxpt_pdl
rfrxptarcurlog
rollback operations active
rollback operations block full
rolling migration: cluster quiesce
scginq AST call
secondary event
select wait
set director factor wait
simulated log write delay
slave exit
test long ops
timer in sksawat
transaction
tsm with timeout
txn to complete
unbound tx
undo segment recovery
undo_retention publish retry
unspecified wait event
wait active processes
wait for EMON to die
wait for EMON to spawn
wait for FMON to come up
wait for MTTR advisory state object
wait for a paralle reco to abort
wait for a undo record
wait for another txn - rollback to savepoint
wait for another txn - txn abort
wait for another txn - undo rcv abort
wait for assert messages to be sent
wait for change
wait for master scn
wait for membership synchronization
wait for message ack
wait for record update
wait for rr lock release
wait for scn ack
wait for split-brain resolution
wait for stopper event to be increased
wait for sync ack
wait for tmc2 to complete
wait for verification ack
wait for votes
wait list latch activity
wait list latch free
waiting to get CAS latch
waiting to get RM CAS latch
writes stopped by instance recovery or database suspension
xdb schema cache initialization
Scheduler
resmgr:become active
resmgr:cpu quantum
System I/O
ARCH random i/o
ARCH sequential i/o
ARCH wait for pending I/Os
LGWR random i/o
LGWR sequential i/o
LNS ASYNC control file txn
Log archive I/O
RFS random i/o
RFS sequential i/o
RFS write
RMAN backup & recovery I/O
Standby redo I/O
control file parallel write
control file sequential read
control file single write
db file parallel write
io done
kfk: async disk IO
ksfd: async disk IO
kst: async disk IO
log file parallel write
log file sequential read
log file single write
recovery read
User I/O
BFILE read
DG Broker configuration file I/O
Data file init write
Datapump dump file I/O
Log file init write
buffer read retry
db file parallel read
db file scattered read
db file sequential read
db file single write
dbms_file_transfer I/O
direct path read
direct path read temp
direct path write
direct path write temp
local write wait
read by other session
890 rows selected.
Now we can start reading the AWR report, but before that, here are a few definitions that
will help while reading it.
Logical Reads
The Logical Reads Per Sec Oracle metric represents the number of logical reads
(consistent gets, from the data buffer) per second during the sample period. A logical read
is a read request for a data block from the SGA. A logical read may result in a physical
read if the requested block does not already reside in the buffer cache.
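As a quick sketch (not the actual AWR implementation), the per-second figure is just the delta of the logical-read counters between two snapshots divided by the elapsed time; the snapshot values below are invented for illustration:

```python
# Hypothetical sketch of the "Logical Reads Per Sec" calculation:
# logical reads = consistent gets + db block gets, sampled at two
# snapshots; the metric is the delta over the sample period.

def logical_reads_per_sec(begin_reads, end_reads, elapsed_seconds):
    return (end_reads - begin_reads) / elapsed_seconds

# Illustrative values: 1,500,000 logical reads over a 60-second sample.
rate = logical_reads_per_sec(10_000_000, 11_500_000, 60)
print(rate)  # 25000.0
```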
Block changes
The Oracle docs note: the Block Changes/Tx Oracle metric shows the average number
of data block changes per DML (update, insert, delete) transaction. For example, a
single-row insert might cause three data block changes once the index blocks are
included.
User Calls
The user calls Oracle metric counts how often Oracle allocates resources (Call State
Objects) to keep track of relevant user call data structures: every time you log in, parse, or
execute.
Hard Parse
Oracle SQL is parsed before execution, and a hard parse includes these steps:
1. Loading into shared pool - The SQL source code is loaded into RAM for
parsing. (the "hard" parse step)
2. Syntax parse - Oracle parses the syntax to check for misspelled SQL keywords.
3. Semantic parse - Oracle verifies all table & column names from the dictionary
and checks to see if you are authorized to see the data.
4. Query Transformation - If enabled (query_rewrite=true), Oracle will transform
complex SQL into simpler, equivalent forms and replace aggregations with
materialized views, as appropriate.
5. Optimization - Oracle then creates an execution plan, based on your schema
statistics (or maybe with statistics from dynamic sampling in 10g).
6. Create executable - Oracle builds an executable file with native file calls to
service the SQL query.
Oracle gives us the shared_pool_size parameter to cache SQL so that we don't have to
parse over and over again. However, SQL can age out if the shared_pool_size is too small
or if it is cluttered with non-reusable SQL (i.e. SQL that has literals, such as WHERE
name = 'fred', in the source).
What is the difference between a hard parse and a soft parse in Oracle? Just the first
step, step 1 above. In other words, a soft parse does not require a shared pool reload (and
the associated RAM memory allocation).
In general, a high parse call rate (> 10/sec.) indicates that your system has many
incoming unique SQL statements, or that your SQL is not reentrant (i.e. not using
bind variables).
A hard parse is when your SQL must be re-loaded into the shared pool. A hard parse is
worse than a soft parse because of the overhead involved in shared pool RAM allocation
and memory management. Once loaded, the SQL must then be completely re-checked
for syntax & semantics and an executable generated.
Excessive hard parsing can occur when your shared_pool_size is too small (and reentrant
SQL is paged out), or when you have non-reusable SQL statements without host
variables.
See the cursor_sharing parameter for an easy way to make SQL reentrant, and remember
that you should always use host variables in your SQL so that statements can be reentrant.
Buffer Nowait
The Buffer Nowait Ratio Oracle metric is the percentage of requests a server process
makes for a specific buffer where the buffer was available immediately; all buffer types
are included in this statistic. If the ratio is low, determine which type of block is being
contended for by examining the Buffer Wait Statistics section of the Statspack report.
Buffer Hit Ratio
The Data Buffer Hit Ratio Oracle metric is a measure of the effectiveness of the Oracle
data block buffer. The higher the buffer hit ratio, the more frequently Oracle found a data
block in memory and avoided a disk I/O.
The buffer cache hit ratio is most meaningful for databases with an undersized
db_cache_size, where the "working set" of frequently-referenced data has not been
cached. Oracle provides the data buffer cache advisory utility (v$db_cache_advice) in
the standard AWR report.
Library Hit Ratio
The Library Hit Ratio Oracle metric is also known as the library cache hit ratio. The ratio
indicates the number of pin requests which result in pin hits. A pin hit occurs when the
SQL or PL/SQL code you wish to execute is already in the library cache and is valid to
execute.
A low library cache hit percentage could mean SQL is prematurely aging out of the
shared pool as the shared pool may be small, or that un-sharable SQL is being used. Also
compare with the soft parse ratio; if they are both low, then investigate whether there is a
parsing issue.
Redo NoWait Ratio
The Redo NoWait Ratio Oracle metric indicates the proportion of redo entries for which
space was immediately available in the online redo log. The percentage is calculated as
follows:
100 x (1 - (redo log space requests / redo entries))
The 'redo log space requests' statistic is incremented when an Oracle process attempts to
write a redo entry but there is not sufficient space remaining in the online redo log. The
'redo entries' statistic is incremented for each entry made to the redo log.
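The formula above can be sketched as follows; the input figures are invented for illustration, not taken from a real report:

```python
def redo_nowait_pct(space_requests, redo_entries):
    # 100 x (1 - (redo log space requests / redo entries))
    return 100 * (1 - space_requests / redo_entries)

# e.g. 100 redo log space requests against 100,000 redo entries
pct = round(redo_nowait_pct(100, 100_000), 2)
print(pct)  # 99.9
```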
In-Memory Sort Ratio
The In-Memory Sort Ratio Oracle metric is the percentage of sorts (from ORDER BY
clauses or index building) that are done in memory rather than on disk. Disk sorts are
done in the TEMP tablespace, which is hundreds of times slower than a RAM sort. In-
memory sorts are controlled by sort_area_size or by pga_aggregate_target.
At the time a session is established with Oracle, a private sort area is allocated in RAM
memory for use by the session for sorting. If the connection is via a dedicated connection
a Program Global Area (PGA) is allocated according to the sort_area_size init.ora
parameter.
For connections via the multithreaded server, sort space is allocated in the large_pool.
Unfortunately, the amount of memory used in sorting must be the same for all sessions,
and it is not possible to add additional sort areas for tasks that require large sort
operations.
Soft Parse Ratio
The Soft Parse Ratio Oracle metric is the ratio of soft parses (SQL is already in the library
cache) to hard parses (SQL must be parsed, validated, and an execution plan formed).
The library cache (as sized by shared_pool_size) serves to minimize hard parses. Excessive
hard parsing could be due to a too-small shared_pool_size or to SQL with embedded
literal values.
Latch Hit Ratio
The Latch Hit Ratio Oracle metric is the proportion of latch gets that succeeded without a
miss, across all latches. A low value for this ratio indicates a latching problem, whereas a
high value is generally good. However, as the data is rolled up over all latches, a high
latch hit ratio can artificially mask a low get rate on a specific latch. Oracle tuning
professionals will cross-check this value with the top 5 wait events to see if latch free is
in the list, and refer to the latch sections of the report.
Percent Non-Parse CPU
The Percent Non-Parse CPU is defined by the formula below: the percentage of the CPU
used by a session that was not spent parsing. In sum: "How much CPU is spent fetching
the SQL rows?"
100 * ( 1 - ( valdiff( stats$sysstat, 'parse time cpu' )
            / valdiff( stats$sysstat, 'CPU used by this session' ) ) )
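Treating the valdiff() deltas as plain numbers, the formula reduces to the sketch below; the CPU figures are illustrative only:

```python
def pct_non_parse_cpu(parse_time_cpu, cpu_used_by_session):
    # 100 * (1 - 'parse time cpu' / 'CPU used by this session')
    return 100 * (1 - parse_time_cpu / cpu_used_by_session)

# e.g. 7 CPU seconds spent parsing out of 100 CPU seconds in total
pct = round(pct_non_parse_cpu(7, 100), 2)
print(pct)  # 93.0
```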
Furthermore, on DBA Support Forums, a user wanted to know how to increase his latch
hit ratio, computed with the following query:
SELECT (1 - (Sum(misses) / Sum(gets))) * 100
INTO v_value
FROM v$latch;
DBMS_Output.Put('Latch Hit Ratio : ' || Format(v_value));
The V$BH view shows which objects currently have blocks residing in the SGA.
When you do 'alter table <table_name> cache;', the only thing that changes is the
characteristics of how table blocks are cached in the buffer cache, when subject to a full
table scan. Without the table set to 'cache', this is what happens:
Table blocks that are read into the buffer cache during a full table scan, if said table is
smaller than _small_table_threshold, will be put on the MRU end of the LRU list. Tables
that are larger than _small_table_threshold will have full scanned blocks put on the LRU
end of the LRU list. (This allows for some damage limitation, as full scanning a large
table would otherwise thrash the buffer cache.)
When the table is set to 'cache', and the table is larger than _small_table_threshold, it will
behave the same as a "small table". That is, blocks read during a full scan will be read into
the MRU end of the LRU list. (This means they are more likely to stay cached in
memory.)
On the other hand, we have the ability to define and utilize a keep buffer pool. 'Alter
table <table_name> storage (buffer_pool keep);' will assign a table to the keep buffer
pool. This is a buffer pool that must be created and managed separately from the default
pool. It also allows for better control of what data is 'kept' in the pool. In my opinion, the
keep buffer pool is a far better solution than the CACHE keyword. Note, however, that if
you oversubscribe the KEEP pool, blocks will age out, as with any other cache. There's
no way to guarantee that blocks remain in the keep pool.
Finally, as to the last question, regarding flushing shared pool and buffer cache:
Flushing the shared pool will have zero impact on the buffer cache. The buffer cache and
shared pool are completely different concepts. The former caches data blocks, the latter
caches SQL, stored objects, dictionary information etc. Flushing the buffer cache would
flush all data blocks that are not currently pinned. This would cover all buffer pools,
default, keep and recycle.
Pin objects in Shared Pool
Frequently used procedures can (and generally should) be pinned in the shared
pool.
Why: Frequently loading and reloading stored procedures, packages, and triggers
into the shared pool is a relatively expensive operation for the server. By pinning
packages in the shared pool, the amount of CPU used to reload packages will be
minimized.
How to do it: To pin a package in the shared pool use the following command:
execute dbms_shared_pool.keep('SCHEMA.PACKAGE','P')
Below is the AWR report to read
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst num Release RAC Host
ICEF_P92 2110882438 ICEF_P92 1 10.2.0.4.0 NO recp0002
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 17287 29-Nov-10 00:48:57 40 6.9
End Snap: 17306 29-Nov-10 13:12:16 320 19.3
Elapsed: 743.31 (mins)
DB Time: 809.04 (mins)
Report Summary
Cache Sizes
Begin End
Buffer Cache: 3,216M 3,344M Std Block Size: 8K
Shared Pool Size: 4,160M 4,032M Log Buffer: 1,056K
Load Profile
Per Second Per Transaction
Redo size: 35,676.17 24,408.05
Logical reads: 31,643.67 21,649.20
Block changes: 221.09 151.26
Physical reads: 529.81 362.47
Physical writes: 18.55 12.69
User calls: 159.39 109.05
Parses: 68.19 46.65
Hard parses: 13.17 9.01
Sorts: 21.20 14.50
Logons: 0.21 0.14
Executes: 6,272.06 4,291.07
Transactions: 1.46
% Blocks changed per Read: 0.70 Recursive Call %: 97.57
Rollback per transaction %: 0.20 Rows per Sort: 500.68
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 98.36 In-memory Sort %: 100.00
Library Hit %: 99.39 Soft Parse %: 80.69
Execute to Parse %: 98.91 Latch Hit %: 99.88
Parse CPU to Parse Elapsd %: 95.74 % Non-Parse CPU: 92.96
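These ratios can be cross-checked against the Load Profile above. A small sketch using the per-second rates from this report (parses 68.19, hard parses 13.17, executes 6,272.06) reproduces two of the percentages:

```python
# Reproducing two Instance Efficiency ratios from the Load Profile rates.
parses, hard_parses, executes = 68.19, 13.17, 6272.06

soft_parse_pct = round(100 * (1 - hard_parses / parses), 2)
execute_to_parse_pct = round(100 * (1 - parses / executes), 2)

print(soft_parse_pct)        # 80.69, matching "Soft Parse %"
print(execute_to_parse_pct)  # 98.91, matching "Execute to Parse %"
```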
Shared Pool Statistics
Begin End
Memory Usage %: 73.17 53.72
% SQL with executions>1: 13.15 97.30
% Memory for SQL w/exec>1: 19.56 96.13
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 38,210 78.7
db file sequential read 5,140,351 5,887 1 12.1 User I/O
db file scattered read 4,597,340 5,090 1 10.5 User I/O
log file parallel write 79,978 349 4 .7 System I/O
SQL*Net break/reset to client 3,816 280 73 .6 Application
Main Report
• Report Summary
• Wait Events Statistics
• SQL Statistics
• Instance Activity Statistics
• IO Stats
• Buffer Pool Statistics
• Advisory Statistics
• Wait Statistics
• Undo Statistics
• Latch Statistics
• Segment Statistics
• Dictionary Cache Statistics
• Library Cache Statistics
• Memory Statistics
• Streams Statistics
• Resource Limit Statistics
• init.ora Parameters
Back to Top
Wait Events Statistics
• Time Model Statistics
• Wait Class
• Wait Events
• Background Wait Events
• Operating System Statistics
• Service Statistics
• Service Wait Class Stats
Back to Top
Time Model Statistics
• Total time in database user-calls (DB Time): 48542.2s
• Statistics including the word "background" measure background process time, and so do not contribute to the
DB time statistic
• Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 45,842.86 94.44
DB CPU 38,210.17 78.72
parse time elapsed 2,810.73 5.79
hard parse elapsed time 2,557.63 5.27
PL/SQL execution elapsed time 1,283.79 2.64
PL/SQL compilation elapsed time 62.19 0.13
connection management call elapsed time 19.52 0.04
hard parse (sharing criteria) elapsed time 6.73 0.01
hard parse (bind mismatch) elapsed time 3.62 0.01
repeated bind elapsed time 1.42 0.00
sequence load elapsed time 1.33 0.00
failed parse elapsed time 1.30 0.00
DB time 48,542.22
background elapsed time 1,094.05
background cpu time 421.72
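Each "% of DB Time" value is simply the statistic's time divided by DB time; for example, taking "sql execute elapsed time" and "DB time" from the table above:

```python
# % of DB Time for "sql execute elapsed time", using this report's figures.
sql_execute_s, db_time_s = 45_842.86, 48_542.22
pct_of_db_time = round(100 * sql_execute_s / db_time_s, 2)
print(pct_of_db_time)  # 94.44, as shown in the table
```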
Back to Wait Events Statistics
Back to Top
Wait Class
• s - second
• cs - centisecond - 100th of a second
• ms - millisecond - 1000th of a second
• us - microsecond - 1000000th of a second
• ordered by wait time desc, waits desc
Wait Class Waits %Time-outs Total Wait Time (s) Avg wait (ms) Waits /txn
User I/O 10,199,984 0.00 11,064 1 156.47
System I/O 179,975 0.00 669 4 2.76
Application 3,862 0.00 280 73 0.06
Network 11,583,016 0.00 155 0 177.69
Commit 46,526 0.00 102 2 0.71
Concurrency 2,717 0.70 78 29 0.04
Configuration 8,094 89.24 64 8 0.12
Administrative 41 0.00 9 220 0.00
Other 8,965 0.12 8 1 0.14
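The "Avg wait (ms)" column is just total wait time divided by the number of waits; for example, the Application row above:

```python
# Average wait for the Application wait class (figures from the table).
waits, total_wait_s = 3_862, 280
avg_wait_ms = round(total_wait_s / waits * 1000)
print(avg_wait_ms)  # 73, matching the table
```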
Back to Wait Events Statistics
Back to Top
Wait Events
• s - second
• cs - centisecond - 100th of a second
• ms - millisecond - 1000th of a second
• us - microsecond - 1000000th of a second
• ordered by wait time desc, waits desc (idle events last)
Event Waits %Time-outs Total Wait Time (s) Avg wait (ms) Waits /txn
db file sequential read 5,140,351 0.00 5,887 1 78.85
db file scattered read 4,597,340 0.00 5,090 1 70.52
log file parallel write 79,978 0.00 349 4 1.23
SQL*Net break/reset to client 3,816 0.00 280 73 0.06
db file parallel write 19,593 0.00 254 13 0.30
SQL*Net more data to client 5,267,506 0.00 139 0 80.80
log file sync 46,526 0.00 102 2 0.71
read by other session 113,395 0.00 65 1 1.74
library cache lock 55 14.55 62 1132 0.00
log file switch (checkpoint incomplete) 66 45.45 39 586 0.00
control file parallel write 17,557 0.00 34 2 0.27
log buffer space 779 0.00 24 31 0.01
log file sequential read 1,881 0.00 22 12 0.03
db file parallel read 1,574 0.00 21 14 0.02
latch: shared pool 1,373 0.00 11 8 0.02
SQL*Net more data from client 80,838 0.00 11 0 1.24
switch logfile command 41 0.00 9 220 0.00
Log archive I/O 1,647 0.00 7 4 0.03
SQL*Net message to client 6,229,294 0.00 5 0 95.56
latch free 106 0.00 4 39 0.00
os thread startup 33 0.00 3 91 0.00
control file sequential read 59,215 0.00 3 0 0.91
rdbms ipc reply 1,583 0.00 2 1 0.02
log file switch completion 31 0.00 1 40 0.00
buffer exterminate 6 0.00 1 132 0.00
LGWR wait for redo copy 7,127 0.15 1 0 0.11
latch: library cache 124 0.00 1 5 0.00
SQL*Net more data from dblink 340 0.00 1 2 0.01
direct path read temp 313,892 0.00 0 0 4.82
log file single write 104 0.00 0 3 0.00
enq: CF - contention 24 0.00 0 12 0.00
db file single write 340 0.00 0 1 0.01
cursor: pin S wait on X 12 91.67 0 15 0.00
latch: cache buffers lru chain 27 0.00 0 6 0.00
enq: JS - queue lock 26 0.00 0 5 0.00
log file switch (private strand flush incomplete) 6 0.00 0 19 0.00
local write wait 20 0.00 0 4 0.00
direct path write temp 27,068 0.00 0 0 0.42
latch: cache buffers chains 607 0.00 0 0 0.01
enq: RO - fast object reuse 45 0.00 0 1 0.00
Data file init write 13 0.00 0 2 0.00
undo segment extension 7,205 99.83 0 0 0.11
library cache load lock 3 0.00 0 8 0.00
write complete waits 7 0.00 0 3 0.00
direct path read 5,627 0.00 0 0 0.09
latch: library cache lock 2 0.00 0 6 0.00
buffer busy waits 486 0.00 0 0 0.01
reliable message 45 0.00 0 0 0.00
SQL*Net message to dblink 4,867 0.00 0 0 0.07
SQL*Net more data to dblink 171 0.00 0 0 0.00
latch: row cache objects 3 0.00 0 0 0.00
direct path write 364 0.00 0 0 0.01
latch: In memory undo latch 19 0.00 0 0 0.00
enq: TM - contention 1 0.00 0 0 0.00
latch: session allocation 2 0.00 0 0 0.00
cursor: pin S 18 0.00 0 0 0.00
latch: object queue header operation 1 0.00 0 0 0.00
SQL*Net message from client 6,228,981 0.00 4,169,084 669 95.55
jobq slave wait 15,011 94.12 43,577 2903 0.23
wait for unread message on broadcast channel 44,177 99.99 43,541 986 0.68
Streams AQ: waiting for messages in the queue 8,906 99.97 43,521 4887 0.14
SQL*Net message from dblink 4,867 0.00 45 9 0.07
SGA: MMAN sleep for component shrink 903 88.37 14 15 0.01
single-task message 28 0.00 2 76 0.00
class slave wait 25 0.00 0 0 0.00
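The per-event figures above can be approximated on a running instance with a query like the following (a sketch, assuming 10g or later where V$SYSTEM_EVENT carries WAIT_CLASS; AVERAGE_WAIT is in centiseconds):

-- Per-event totals comparable to the Wait Events table, idle class excluded.
SELECT event,
       total_waits,
       total_timeouts,
       ROUND(time_waited / 100) AS time_waited_s,
       ROUND(average_wait * 10) AS avg_wait_ms
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;

As a sanity check on the units: db file sequential read above shows 5,887 s over 5,140,351 waits, i.e. 5887 / 5140351 * 1000 ≈ 1.1 ms, which matches the reported 1 ms average.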
Background Wait Events
• ordered by wait time desc, waits desc (idle events last)
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
log file parallel write 79,977 0.00 349 4 1.23
db file parallel write 19,593 0.00 254 13 0.30
control file parallel write 17,447 0.00 34 2 0.27
log file sequential read 1,443 0.00 13 9 0.02
db file sequential read 3,536 0.00 9 3 0.05
Log archive I/O 1,248 0.00 6 5 0.02
db file scattered read 650 0.00 5 7 0.01
os thread startup 33 0.00 3 91 0.00
events in waitclass Other 8,746 0.13 2 0 0.13
control file sequential read 34,541 0.00 1 0 0.53
log file single write 104 0.00 0 3 0.00
db file single write 338 0.00 0 1 0.01
log buffer space 38 0.00 0 4 0.00
buffer busy waits 81 0.00 0 0 0.00
latch: library cache 2 0.00 0 0 0.00
latch: cache buffers chains 3 0.00 0 0 0.00
rdbms ipc message 232,217 67.86 512,510 2207 3.56
pmon timer 20,834 99.95 43,494 2088 0.32
smon timer 555 18.56 41,822 75354 0.01
SGA: MMAN sleep for component shrink 903 88.37 14 15 0.01
Operating System Statistics
Statistic Total
AVG_BUSY_TIME 688,611
AVG_IDLE_TIME 3,768,647
AVG_IOWAIT_TIME 0
AVG_SYS_TIME 81,607
AVG_USER_TIME 605,676
BUSY_TIME 5,519,319
IDLE_TIME 30,159,602
IOWAIT_TIME 0
SYS_TIME 663,288
USER_TIME 4,856,031
LOAD 1
OS_CPU_WAIT_TIME 6,500
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 2,460,229,632
VM_OUT_BYTES 545,300,480
PHYSICAL_MEMORY_BYTES 17,094,672,384
NUM_CPUS 8
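These host-level figures come from V$OSSTAT. A minimal sketch to read the key ones live (10g or later; time values are in centiseconds):

-- Host CPU and memory figures behind the Operating System Statistics section.
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('BUSY_TIME', 'IDLE_TIME', 'NUM_CPUS', 'PHYSICAL_MEMORY_BYTES');

-- Host CPU utilisation for the snapshot above:
--   BUSY_TIME / (BUSY_TIME + IDLE_TIME)
--   = 5,519,319 / (5,519,319 + 30,159,602) ~= 15%
-- so this 8-CPU host was lightly loaded during the interval.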
Service Statistics
• ordered by DB Time
Service Name DB Time (s) DB CPU (s) Physical Reads Logical Reads
SYS$USERS 31,236.30 25,029.60 15,447,507 1,141,714,799
ICEF_P92.WORLD 17,307.70 13,180.80 8,120,187 267,817,178
SYS$BACKGROUND 0.00 0.00 62,424 1,777,944
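The per-service DB Time and DB CPU columns can be read live from V$SERVICE_STATS (a sketch, 10g or later; the time statistics in this view are in microseconds):

-- DB time and DB CPU per service, as in the Service Statistics section.
SELECT service_name,
       stat_name,
       ROUND(value / 1e6, 1) AS seconds
FROM   v$service_stats
WHERE  stat_name IN ('DB time', 'DB CPU')
ORDER  BY service_name, stat_name;

Here SYS$USERS dominating DB Time means most of the load came in through direct (non-service) connections rather than the ICEF_P92.WORLD service.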
Service Wait Class Stats
• Wait Class info for services in the Service Statistics section.
• Total Waits and Time Waited displayed for the following wait classes: User I/O, Concurrency, Administrative, Network
• Time Waited (Wt Time) in centisecond (100th of a second)
Service Name User I/O Total Wts User I/O Wt Time Concurcy Total Wts Concurcy Wt Time Admin Total Wts Admin Wt Time Network Total Wts Network Wt Time
SYS$USERS 6022713 648480 670 353 41 901 7321719 14016
ICEF_P92.WORLD 4145073 446871 1653 7099 0 0 4236738 1470
SYS$BACKGROUND 32353 11168 391 312 0 0 0 0
SQL Statistics
• SQL ordered by Elapsed Time
• SQL ordered by CPU Time
• SQL ordered by Gets
• SQL ordered by Reads
• SQL ordered by Executions
• SQL ordered by Parse Calls
• SQL ordered by Sharable Memory
• SQL ordered by Version Count
• Complete List of SQL Text
SQL ordered by Elapsed Time
• Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
• % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
Elapsed Time (s) CPU Time (s) Executions Elap per Exec (s) % Total DB Time SQL Id SQL Module SQL Text
11,981 11,979 1 11981.07 24.68 9bth4p1h24y7x SQL*Plus SELECT E.IC_NO_702 TIER0, c...
5,412 2,312 1 5412.17 11.15 82hxvr8kxuzjq sqlplus@recp0002 (TNS V1-V3) BEGIN dbms_stats.gather_databa...
4,038 4,038 611 6.61 8.32 07v4utszs6f9d SQL*Plus SELECT ACCT, ME, INTERCOMP, ...
3,427 3,427 1 3426.84 7.06 8uvvdzvbar6z9 SQL*Plus BEGIN Spr_Gcars_Cust_Summary('...
1,618 1,357 1 1617.98 3.33 84fz6kymcqh1g SQL*Plus Declare begin dbms_output.pu...
1,499 1,407 1 1499.09 3.09 9p3urqfavbg6a SQL*Plus DECLARE BEGIN dbms_output.put_...
635 110 1 634.93 1.31 7u811ut86g6rg SQL*Plus declare owner1 varchar2(45)...
613 613 1 612.84 1.26 15vpmpns3z7ga SQL*Plus BEGIN Spr_Gcars_Cust_Summary('...
513 513 17,067,547 0.00 1.06 21uxm951p8n8s SQL*Plus SELECT POL_PERIOD_DT1_799 FROM...
479 479 17,067,556 0.00 0.99 cucfnp8wc50zh SQL*Plus SELECT POL_PERIOD_DT2_799 FROM...
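A rough live equivalent of this ranking can be taken from V$SQL (a sketch; unlike AWR, V$SQL figures are cumulative since the cursor was loaded, not per snapshot interval, and time columns are in microseconds):

-- Top SQL by elapsed time currently in the shared pool.
SELECT *
FROM  ( SELECT sql_id,
               ROUND(elapsed_time / 1e6, 1) AS elapsed_s,
               ROUND(cpu_time / 1e6, 1)     AS cpu_s,
               executions,
               SUBSTR(sql_text, 1, 60)      AS sql_text
        FROM   v$sql
        ORDER  BY elapsed_time DESC )
WHERE rownum <= 10;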
SQL ordered by CPU Time
• Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
• % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
CPU Time (s) Elapsed Time (s) Executions CPU per Exec (s) % Total DB Time SQL Id SQL Module SQL Text
11,979 11,981 1 11978.80 24.68 9bth4p1h24y7x SQL*Plus SELECT E.IC_NO_702 TIER0, c...
4,038 4,038 611 6.61 8.32 07v4utszs6f9d SQL*Plus SELECT ACCT, ME, INTERCOMP, ...
3,427 3,427 1 3426.84 7.06 8uvvdzvbar6z9 SQL*Plus BEGIN Spr_Gcars_Cust_Summary('...
2,312 5,412 1 2311.87 11.15 82hxvr8kxuzjq sqlplus@recp0002 (TNS V1-V3) BEGIN dbms_stats.gather_databa...
1,407 1,499 1 1406.89 3.09 9p3urqfavbg6a SQL*Plus DECLARE BEGIN dbms_output.put_...
1,357 1,618 1 1356.53 3.33 84fz6kymcqh1g SQL*Plus Declare begin dbms_output.pu...
613 613 1 612.84 1.26 15vpmpns3z7ga SQL*Plus BEGIN Spr_Gcars_Cust_Summary('...
513 513 17,067,547 0.00 1.06 21uxm951p8n8s SQL*Plus SELECT POL_PERIOD_DT1_799 FROM...
479 479 17,067,556 0.00 0.99 cucfnp8wc50zh SQL*Plus SELECT POL_PERIOD_DT2_799 FROM...
470 470 17,067,588 0.00 0.97 9jnqqzx2xba27 SQL*Plus SELECT POL_PERIOD_DT13_799 FRO...
110 635 1 110.16 1.31 7u811ut86g6rg SQL*Plus declare owner1 varchar2(45)...
SQL ordered by Gets
• Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
• Total Buffer Gets: 1,411,267,917
• Captured SQL account for 127.5% of Total
Buffer Gets Executions Gets per Exec %Total CPU Time (s) Elapsed Time (s) SQL Id SQL Module SQL Text
677,917,280 1 677,917,280.00 48.04 11978.80 11981.07 9bth4p1h24y7x SQL*Plus SELECT E.IC_NO_702 TIER0, c...
162,374,823 611 265,752.57 11.51 4038.47 4038.50 07v4utszs6f9d SQL*Plus SELECT ACCT, ME, INTERCOMP, ...
140,811,133 1 140,811,133.00 9.98 3426.84 3426.84 8uvvdzvbar6z9 SQL*Plus BEGIN Spr_Gcars_Cust_Summary('...
80,779,040 1 80,779,040.00 5.72 1406.89 1499.09 9p3urqfavbg6a SQL*Plus DECLARE BEGIN dbms_output.put_...
51,207,078 17,067,699 3.00 3.63 468.67 468.73 0kzxszr7v024k SQL*Plus SELECT POL_PERIOD_DT5_799 FROM...
51,206,835 17,067,623 3.00 3.63 467.67 468.09 fzwdn4ucuf06y SQL*Plus SELECT POL_PERIOD_DT6_799 FROM...
51,206,693 17,067,588 3.00 3.63 470.23 470.32 9jnqqzx2xba27 SQL*Plus SELECT POL_PERIOD_DT13_799 FRO...
51,206,660 17,067,651 3.00 3.63 465.56 465.57 dyhbmg6vc0ycu SQL*Plus SELECT POL_PERIOD_DT3_799 FROM...
51,206,594 17,067,556 3.00 3.63 479.12 479.27 cucfnp8wc50zh SQL*Plus SELECT POL_PERIOD_DT2_799 FROM...
51,206,413 17,067,483 3.00 3.63 468.19 468.18 ct62kq20vrccj SQL*Plus SELECT POL_PERIOD_DT9_799 FROM...
51,206,393 17,067,547 3.00 3.63 513.17 513.26 21uxm951p8n8s SQL*Plus SELECT POL_PERIOD_DT1_799 FROM...
51,206,191 17,067,554 3.00 3.63 464.51 464.50 0b842nnkpzbpb SQL*Plus SELECT POL_PERIOD_DT10_799 FRO...
51,205,979 17,067,370 3.00 3.63 469.71 469.71 5hmz9vfyz8h2q SQL*Plus SELECT POL_PERIOD_DT15_799 FRO...
51,205,901 17,067,318 3.00 3.63 470.02 470.04 89yyvk580r314 SQL*Plus SELECT POL_PERIOD_DT12_799 FRO...
50,279,591 16,758,599 3.00 3.56 453.19 453.22 c1a4cmg7kur3y SQL*Plus SELECT POL_PERIOD_DT8_799 FROM...
50,279,590 16,758,619 3.00 3.56 455.63 455.74 dpbr1ftvbv3qg SQL*Plus SELECT POL_PERIOD_DT11_799 FRO...
50,279,337 16,758,592 3.00 3.56 454.19 454.21 aa6p7f2wxzg7r SQL*Plus SELECT POL_PERIOD_DT4_799 FROM...
50,278,632 16,758,154 3.00 3.56 454.09 454.09 3fmr3wzfdhzxj SQL*Plus SELECT POL_PERIOD_DT7_799 FROM...
40,951,786 13,649,758 3.00 2.90 371.99 372.02 4hjg72vdr300p SQL*Plus SELECT POL_PERIOD_DT14_799 FRO...
28,446,891 1 28,446,891.00 2.02 1356.53 1617.98 84fz6kymcqh1g SQL*Plus Declare begin dbms_output.pu...
22,247,133 1 22,247,133.00 1.58 2311.87 5412.17 82hxvr8kxuzjq sqlplus@recp0002 (TNS V1-V3) BEGIN dbms_stats.gather_databa...
21,585,168 1 21,585,168.00 1.53 612.84 612.84 15vpmpns3z7ga SQL*Plus BEGIN Spr_Gcars_Cust_Summary('...
21,475,259 1 21,475,259.00 1.52 428.08 428.17 3z6574ak3wnjd SQL*Plus BEGIN spr_gecars_close_arcs; d...
21,363,969 24,802 861.38 1.51 391.39 423.37 3hbhvnmffphyd SQL*Plus SELECT CONNECTION_ARC_NO_690, ...
18,034,813 1 18,034,813.00 1.28 374.53 376.28 PL/SQL Developer 7tpy2spgjpff6 — SELECT E.IC_NO_702 TIER0, ...
16,031,167 1 16,031,167.00 1.14 275.44 275.90 4p5rq20zg0q3s SQL*Plus SELECT SUM(CASE WHEN TRUNC(SYS...
SQL ordered by Reads
• Total Disk Reads: 23,628,785
• Captured SQL account for 47.9% of Total
Physical Reads Executions Reads per Exec %Total CPU Time (s) Elapsed Time (s) SQL Id SQL Module SQL Text
9,814,524 1 9,814,524.00 41.54 2311.87 5412.17 82hxvr8kxuzjq sqlplus@recp0002 (TNS V1-V3) BEGIN dbms_stats.gather_databa...
1,169,798 1 1,169,798.00 4.95 110.16 634.93 7u811ut86g6rg SQL*Plus declare owner1 varchar2(45)...
316,555 6 52,759.17 1.34 53.29 139.22 1ru59v0smxs11 w3wp.exe SELECT COUNT(1) FROM( SELECT ...
303,496 1 303,496.00 1.28 77.47 80.79 9zjaxhxak6fct exp@recp0002 (TNS V1-V3) SELECT /*+NESTED_TABLE_GET_REF...
303,494 1 303,494.00 1.28 16.82 32.49 csh800xunxpzw SQL*Plus select count(*) from GCARS_OU...
241,076 1 241,076.00 1.02 68.31 141.35 6kguaan30t9zc PL/SQL Developer SELECT INV_INPUT_DT_720, inv_c...
234,400 5 46,880.00 0.99 42.32 135.32 6n2a0h1b2wysf w3wp.exe SELECT COUNT(1) FROM( SELECT ...
171,993 2 85,996.50 0.73 49.41 69.81 5b7j9qqvktq7b SQL*Plus BEGIN SPR_GCARS_NVP_CREATECUST...
167,254 1 167,254.00 0.71 15.73 68.58 c8na1muq1hv3u SQL*Plus select count(*) from GCARS_BA...
165,964 2 82,982.00 0.70 33.35 77.50 adw8rqvga7j5t SQL*Plus DECLARE errmsg VARCHAR2(100); ...
SQL ordered by Executions
• Total Executions: 279,726,090
• Captured SQL account for 96.7% of Total
Executions Rows Processed Rows per Exec CPU per Exec (s) Elap per Exec (s) SQL Id SQL Module SQL Text
17,067,699 17,067,571 1.00 0.00 0.00 0kzxszr7v024k SQL*Plus SELECT POL_PERIOD_DT5_799 FROM...
17,067,651 17,067,462 1.00 0.00 0.00 dyhbmg6vc0ycu SQL*Plus SELECT POL_PERIOD_DT3_799 FROM...
17,067,623 17,067,473 1.00 0.00 0.00 fzwdn4ucuf06y SQL*Plus SELECT POL_PERIOD_DT6_799 FROM...
17,067,588 17,067,451 1.00 0.00 0.00 9jnqqzx2xba27 SQL*Plus SELECT POL_PERIOD_DT13_799 FRO...
17,067,556 17,067,412 1.00 0.00 0.00 cucfnp8wc50zh SQL*Plus SELECT POL_PERIOD_DT2_799 FROM...
17,067,554 17,067,331 1.00 0.00 0.00 0b842nnkpzbpb SQL*Plus SELECT POL_PERIOD_DT10_799 FRO...
17,067,547 17,067,314 1.00 0.00 0.00 21uxm951p8n8s SQL*Plus SELECT POL_PERIOD_DT1_799 FROM...
17,067,483 17,067,399 1.00 0.00 0.00 ct62kq20vrccj SQL*Plus SELECT POL_PERIOD_DT9_799 FROM...
17,067,370 17,067,211 1.00 0.00 0.00 5hmz9vfyz8h2q SQL*Plus SELECT POL_PERIOD_DT15_799 FRO...
17,067,318 17,067,187 1.00 0.00 0.00 89yyvk580r314 SQL*Plus SELECT POL_PERIOD_DT12_799 FRO...
16,758,619 16,758,467 1.00 0.00 0.00 dpbr1ftvbv3qg SQL*Plus SELECT POL_PERIOD_DT11_799 FRO...
16,758,599 16,758,369 1.00 0.00 0.00 c1a4cmg7kur3y SQL*Plus SELECT POL_PERIOD_DT8_799 FROM...
16,758,592 16,758,424 1.00 0.00 0.00 aa6p7f2wxzg7r SQL*Plus SELECT POL_PERIOD_DT4_799 FROM...
16,758,179 16,758,056 1.00 0.00 0.00 7ars6gagpzy3g JDBC Thin Client SELECT TO_CHAR(SYSDATE+:B2 , :...
16,758,154 16,758,073 1.00 0.00 0.00 3fmr3wzfdhzxj SQL*Plus SELECT POL_PERIOD_DT7_799 FROM...
13,649,758 13,649,511 1.00 0.00 0.00 4hjg72vdr300p SQL*Plus SELECT POL_PERIOD_DT14_799 FRO...
SQL ordered by Parse Calls
• Total Parse Calls: 3,041,058
• Captured SQL account for 25.1% of Total
Parse Calls Executions % Total Parses SQL Id SQL Module SQL Text
268,320 268,455 8.82 3p0ktx6y21ydu w3wp.exe BEGIN GCARS.SPR_GCARS_IRPWC3( ...
55,067 55,067 1.81 dma453s58wu9f w3wp.exe select pol_days_cr_unaged_799 ...
44,593 44,593 1.47 7fprcxd5040cz exp@recp0002 (TNS V1-V3) SELECT POLGRP, POLICY, POLOW...
35,463 35,463 1.17 0h6b2sajwb74n select privilege#, level from ...
21,788 21,788 0.72 8c96r9b1gtwh1 w3wp.exe Select Pol_Cust_No_Assign_799 ...
16,328 16,328 0.54 a0qkdc47yt5kq w3wp.exe Select pol_main_lang_799, POL_...
15,864 15,864 0.52 8qw602s7yq181 w3wp.exe select nvl(c.cc_name_705, ' ')...
15,099 15,099 0.50 9qgtwh66xg6nz update seg$ set type#=:4, bloc...
15,017 15,017 0.49 aq4js2gkfjru8 update tsq$ set blocks=:3, max...
8,718 8,718 0.29 c1u9kugvgw1nj w3wp.exe SELECT pol_main_lang_799, pol...
SQL ordered by Sharable Memory
No data exists for this section of the report.
SQL ordered by Version Count
• Only Statements with Version Count greater than 20 are displayed
Version Count Executions SQL Id SQL Module SQL Text
176 7,772 dajasdjnzz789 w3wp.exe BEGIN GCARS.SPR_GCARS_IRPEX1_E...
27 268,455 3p0ktx6y21ydu w3wp.exe BEGIN GCARS.SPR_GCARS_IRPWC3( ...
25 7,350 0k8522rmdzg4k select privilege# from sysauth...
21 8,486 04xtrk7uyhknh select obj#, type#, ctime, mti...
21 7,889 7ng34ruy5awxq select i.obj#, i.ts#, i.file#,...
21 8,687 83taa7kaw59c1 select name, intcol#, segcol#,...
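High version counts like these can also be checked live in the shared pool (a sketch against V$SQLAREA; the 20-cursor threshold here just mirrors the report's cutoff):

-- Statements with many child cursors, as in the Version Count section above.
SELECT sql_id,
       version_count,
       executions,
       SUBSTR(sql_text, 1, 60) AS sql_text
FROM   v$sqlarea
WHERE  version_count > 20
ORDER  BY version_count DESC;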
Complete List of SQL Text
SQL Id SQL Text
04xtrk7uyhknh select obj#, type#, ctime, mtime, stime, status, dataobj#, flags, oid$, spare1, spare2 from obj$ where
owner#=:1 and name=:2 and namespace=:3 and remoteowner is null and linkname is null and subname
is null
07v4utszs6f9d SELECT ACCT, ME, INTERCOMP, SUM(DYTD) DYTD, SUM(CYTD) CYTD, SUM(DCPTD) DCPTD,
SUM(CCPTD) CCPTD FROM (SELECT GL_BUSINESS_ACCT_NO_773 AS ACCT,
INV_COST_CENTER_720 AS ME, CUST_GE_COCIC_717 AS INTERCOMP, INV_TRANS_AMT_720
AS DYTD, 0 AS CYTD, CASE WHEN TRUNC(INV_INPUT_DT_720) >= TRUNC(:B5 ) AND
TRUNC(INV_INPUT_DT_720) <= TRUNC(:B4 ) THEN INV_TRANS_AMT_720 ELSE 0 END AS
DCPTD, 0 AS CCPTD FROM INVOICE, CUSTOMER, SYSTEM_POLICY, GL_ACCOUNT_MAP,
INVESTMENT_CODE, GL_ACCOUNT_TRANSLATION WHERE CUST_NO_717 = INV_CUST_NO_720
AND INV_INIT_TC_720 = '10' AND INV_TRANF_FLAG_720 <> 'Y' AND INV_CUST_NO_720 = :B3 AND
TRUNC(INV_INPUT_DT_720) >= TRUNC(:B2 ) AND TRUNC(INV_INPUT_DT_720) < TRUNC(:B1 )
AND GL_TYPE_798 = 'AR' AND GL_KEY_798 = LPAD(TO_CHAR(INV_AR_TYPE_720), 3, '0') ||
IC_FUTURE_USE2_702 AND INV_IC_NO_720 = IC_NO_702 AND TRIM(SUBSTR(GL_MAP_798, 1,
10)) = GL_GECARS_ACCT_NO_773 AND IC_ACCT_GROUP_702 = GL_ACCT_GROUP_773 UNION
ALL SELECT GL_BUSINESS_ACCT_NO_773 AS ACCT, INV_COST_CENTER_720 AS ME,
CUST_GE_COCIC_717 AS INTERCOMP, 0 AS DYTD, -INV_TRANS_AMT_720 AS CYTD, 0 AS
DCPTD, CASE WHEN TRUNC(INV_INPUT_DT_720) >= TRUNC(:B5 ) AND
TRUNC(INV_INPUT_DT_720) <= TRUNC(:B4 ) THEN -INV_TRANS_AMT_720 ELSE 0 END AS
CCPTD FROM INVOICE, CUSTOMER, SYSTEM_POLICY, GL_ACCOUNT_MAP,
INVESTMENT_CODE, GL_ACCOUNT_TRANSLATION WHERE CUST_NO_717 = INV_CUST_NO_720
AND INV_INIT_TC_720 = '12' AND INV_TRANF_FLAG_720 <> 'Y' AND INV_CUST_NO_720 = :B3 AND
TRUNC(INV_INPUT_DT_720) >= TRUNC(:B2 ) AND TRUNC(INV_INPUT_DT_720) < TRUNC(:B1 )
AND GL_TYPE_798 = 'AR' AND GL_KEY_798 = LPAD(TO_CHAR(INV_AR_TYPE_720), 3, '0') ||
IC_FUTURE_USE2_702 AND INV_IC_NO_720 = IC_NO_702 AND TRIM(SUBSTR(GL_MAP_798, 1,
10)) = GL_GECARS_ACCT_NO_773 AND IC_ACCT_GROUP_702 = GL_ACCT_GROUP_773 UNION
ALL SELECT GL_BUSINESS_ACCT_NO_773 AS ACCT, IC_GL_CODE_702 AS ME,
CUST_GE_COCIC_717 AS INTERCOMP, CASE WHEN CHEQUE_TRANS_AMT_719 > 0 THEN
CHEQUE_TRANS_AMT_719 ELSE 0 END, CASE WHEN CHEQUE_TRANS_AMT_719 < 0 THEN -
CHEQUE_TRANS_AMT_719 ELSE 0 END, CASE WHEN TRUNC(BATCH_JE_DATE2_716) >=
TRUNC(:B5 ) AND TRUNC(BATCH_JE_DATE2_716) <= TRUNC(:B4 ) THEN CASE WHEN
CHEQUE_TRANS_AMT_719 > 0 THEN CHEQUE_TRANS_AMT_719 ELSE 0 END ELSE 0 END, CASE
WHEN TRUNC(BATCH_JE_DATE2_716) >= TRUNC(:B5 ) AND TRUNC(BATCH_JE_DATE2_716) <=
TRUNC(:B4 ) THEN CASE WHEN CHEQUE_TRANS_AMT_719 < 0 THEN -
CHEQUE_TRANS_AMT_719 ELSE 0 END ELSE 0 END FROM CHEQUE,
CHEQUE_BATCH_HEADER, CUSTOMER, SYSTEM_POLICY, GL_ACCOUNT_MAP, LOCK_BOX,
CLIENT_POLICY, INVESTMENT_CODE, GL_ACCOUNT_TRANSLATION WHERE CUST_NO_717 =
CHEQUE_CUST_NO_719 AND TRUNC(BATCH_JE_DATE2_716) >= TRUNC(:B2 ) AND
TRUNC(BATCH_JE_DATE2_716) < TRUNC(:B1 ) AND CHEQUE_LOCK_BOX_719 =
BATCH_LOCK_BOX_716 AND CHEQUE_BATCH_NO_719 = BATCH_NO_716 AND
CHEQUE_CUST_NO_719 = :B3 AND GL_TYPE_798 = 'LB' AND GL_KEY_798 =
CHEQUE_LOCK_BOX_719 AND CHEQUE_LOCK_BOX_719 = LOCK_BOX_NO_766 AND
LOCK_BOX_ACCT_GROUP_766 = CPOL_ACCT_GROUP_791 AND CPOL_CORP_IC_791 =
IC_NO_702 AND TRIM(SUBSTR(GL_MAP_798, 12, 10)) = GL_GECARS_ACCT_NO_773 AND
IC_ACCT_GROUP_702 = GL_ACCT_GROUP_773 UNION ALL SELECT
GL_BUSINESS_ACCT_NO_773 AS ACCT, CASE WHEN SUBSTR(GL_MAP_798, 57, 1) = 'Y' THEN
IC_GL_CODE_702 ELSE TRIM(SUBSTR(GL_MAP_798, 39, 6)) END AS ME, CUST_GE_COCIC_717
AS INTERCOMP, CASE WHEN CHEQ_TRANS_AMT_726 < 0 THEN -CHEQ_TRANS_AMT_726 ELSE
0 END, CASE WHEN CHEQ_TRANS_AMT_726 > 0 THEN CHEQ_TRANS_AMT_726 ELSE 0 END,
CASE WHEN TRUNC(CHEQ_JE_DATE2_726) >= TRUNC(:B5 ) AND TRUNC(CHEQ_JE_DATE2_726)
<= TRUNC(:B4 ) THEN CASE WHEN CHEQ_TRANS_AMT_726 < 0 THEN -CHEQ_TRANS_AMT_726
ELSE 0 END ELSE 0 END, CASE WHEN TRUNC(CHEQ_JE_DATE2_726) >= TRUNC(:B5 ) AND
TRUNC(CHEQ_JE_DATE2_726) <= TRUNC(:B4 ) THEN CASE WHEN CHEQ_TRANS_AMT_726 > 0
THEN CHEQ_TRANS_AMT_726 ELSE 0 END ELSE 0 END FROM CHEQUE_JOURNAL_ENTRY,
CUSTOMER, SYSTEM_POLICY, CHEQUE, GL_ACCOUNT_MAP, INVESTMENT_CODE,
GL_ACCOUNT_TRANSLATION WHERE CUST_NO_717 = CHEQ_CUST_NO_726 AND
TRUNC(CHEQ_JE_DATE2_726) >= TRUNC(:B2 ) AND TRUNC(CHEQ_JE_DATE2_726) < TRUNC(:B1
) AND CHEQ_CUST_NO_726 = :B3 AND CHEQUE_ID_719 = CHEQ_ID_726 AND CHEQ_TC_726
NOT IN ('776', '777', '778', '771', '772') AND GL_TYPE_798 = 'TC' AND GL_KEY_798 =
LPAD(TRIM(CHEQ_TC_726), 3, '0') || IC_FUTURE_USE2_702 AND CHEQ_IC_NO_726 = IC_NO_702
AND TRIM(SUBSTR(GL_MAP_798, 1, 10)) = GL_GECARS_ACCT_NO_773 AND
IC_ACCT_GROUP_702 = GL_ACCT_GROUP_773 UNION ALL SELECT GL_BUSINESS_ACCT_NO_773 AS ACCT, CASE WHEN SUBSTR(GL_MAP_798, 57, 1) = 'Y' THEN
INV_COST_CENTER_720 ELSE TRIM(SUBSTR(GL_MAP_798, 39, 6)) END AS ME,
CUST_GE_COCIC_717 AS INTERCOMP, CASE WHEN INV_TRANS_AMT_727 < 0 THEN -
INV_TRANS_AMT_727 ELSE 0 END, CASE WHEN INV_TRANS_AMT_727 > 0 THEN
INV_TRANS_AMT_727 ELSE 0 END, CASE WHEN TRUNC(INV_JE_DATE2_727) >= TRUNC(:B5 )
AND TRUNC(INV_JE_DATE2_727) <= TRUNC(:B4 ) THEN CASE WHEN INV_TRANS_AMT_727 < 0
THEN -INV_TRANS_AMT_727 ELSE 0 END ELSE 0 END, CASE WHEN TRUNC(INV_JE_DATE2_727)
>= TRUNC(:B5 ) AND TRUNC(INV_JE_DATE2_727) <= TRUNC(:B4 ) THEN CASE WHEN
INV_TRANS_AMT_727 > 0 THEN INV_TRANS_AMT_727 ELSE 0 END ELSE 0 END FROM
INVOICE_JOURNAL_ENTRY, CUSTOMER, SYSTEM_POLICY, INVOICE, GL_ACCOUNT_MAP,
INVESTMENT_CODE, GL_ACCOUNT_TRANSLATION WHERE CUST_NO_717 = INV_CUST_NO_727
AND INV_AMT_727 <> 0 AND TRUNC(INV_JE_DATE2_727) >= TRUNC(:B2 ) AND
TRUNC(INV_JE_DATE2_727) < TRUNC(:B1 ) AND INV_CUST_NO_720 = :B3 AND
INV_CUST_NO_720 = INV_OWNER_CUST_NO_727 AND INV_FINDER_NO_720 =
INV_OWNER_FINDER_NO_727 AND INV_TC_727 NOT IN ('776', '777', '778', '771', '772') AND
GL_TYPE_798 = 'TC' AND INV_IC_NO_727 = IC_NO_702 AND GL_KEY_798 =
LPAD(TRIM(INV_TC_727), 3, '0') || IC_FUTURE_USE2_702 AND TRIM(SUBSTR(GL_MAP_798, 1, 10))
= GL_GECARS_ACCT_NO_773 AND GL_ACCT_GROUP_773 = IC_ACCT_GROUP_702) GROUP BY
ACCT, ME, INTERCOMP ORDER BY ACCT, ME, INTERCOMP
0b842nnkpzbpb SELECT POL_PERIOD_DT10_799 FROM SYSTEM_POLICY
0h6b2sajwb74n select privilege#, level from sysauth$ connect by grantee#=prior privilege# and privilege#>0 start with
grantee#=:1 and privilege#>0
0k8522rmdzg4k select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#>0
0kzxszr7v024k SELECT POL_PERIOD_DT5_799 FROM SYSTEM_POLICY
15vpmpns3z7ga BEGIN Spr_Gcars_Cust_Summary('WNDPS0115'); END;
1ru59v0smxs11 SELECT COUNT(1) FROM( SELECT CUS.* FROM ( SELECT C.CUST_NO_717, Ic_No_702 AS
ICNUM, (SELECT count(*) FROM invoice W WHERE W.inv_cust_No_720 = c.Cust_no_717 and
w.Inv_Paid_Flag_720 ='O' AND w.INV_IC_NO_720 = IC_NO_702) AS Open_Count, (SELECT count(*)
FROM invoice W WHERE w.inv_cust_No_720 = c.Cust_no_717 and w.Inv_Paid_Flag_720 ='P' AND
w.INV_IC_NO_720 = IC_NO_702) as Closed_Count FROM CUSTOMER C, CC_ORGANIZATION,
CUSTOMER_TOTALS A, CUSTOMER_DATES B, CUSTOMER_ADDRESS, INVOICE,
INVESTMENT_CODE, EXCHANGE_RATE2 E, ARC Where C.CUST_CC_NO_717 = CC_NO_705 AND
C.CUST_NO_717 = CUST_NO_682 AND C.CUST_NO_717 = A.cust_no_717 AND C.cust_no_717 =
B.cust_no_717 AND C.cust_no_717 = inv_cust_no_720 AND INV_IC_NO_720 = IC_NO_702 AND
E.EXCH_RATE_TYPE_792 = 'CM' AND E.EXCH_CURR3_FROM_792 = INV_LOC_CURR3_720 AND
INV_PAID_FLAG_720 = 'O' AND cc_postable_flag_705 = 'P' AND CUST_CC_NO_717 in (SELECT
CC_NO_705 From cc_organization WHERE CC_FUTURE_USE_705 like '%CL3V%' OR CC_NO_705 =
'CL3V') AND INV_CRE_ARC_NO_720 = ARC_NO_729(+) GROUP BY C.CUST_NO_717, IC_NO_702,
CUST_LONG_NAME_717, CUST_TYPE_717, CUST_CC_NO_717,
CC_MANAGER_COLL_NAME_705, CUST_CURR_717, CUST_CR_CURR_717,
CUST_CR_START_717, CUST_ADDR_LINE1_682, CUST_ADDR_LINE2_682,
C.CUST_ACCT_STAT1_SW_717, C.CUST_ACCT_STAT2_SW_717, C.CUST_ACCT_STAT3_SW_717,
C.CUST_ACCT_STAT4_SW_717, CUST_CR_BUSINESS_717, C.CUST_HIGH_PAYOR_NO_717,
C.CUST_HIGH_PARENT_NO_717, C.CUST_PAYOR_NO_717, C.CUST_PARENT_NO_717,
C.CUST_CITY_717, CUST_POSTAL_CODE_682, C.CUST_COUNTRY_717, C.CUST_PROV_717,
IC_LEGAL_ID_702, IC_GL_CODE_702, C.CUST_CLIENT_717, A.CUST_NO_DR_INV_717,
A.CUST_NO_CR_INV_717 ) CUS WHERE Open_Count > 0 )
21uxm951p8n8s SELECT POL_PERIOD_DT1_799 FROM SYSTEM_POLICY
3fmr3wzfdhzxj SELECT POL_PERIOD_DT7_799 FROM SYSTEM_POLICY
3hbhvnmffphyd SELECT CONNECTION_ARC_NO_690, CONNECTION_CUST_NO_690,
CONNECTION_FINDER_NO_690 FROM ARC_INVOICE_CONNECTOR WHERE
CONNECTION_ARC_NO_690 = :B1
3p0ktx6y21ydu BEGIN GCARS.SPR_GCARS_IRPWC3( :1, :2, :3 ); END;
3z6574ak3wnjd BEGIN spr_gecars_close_arcs; dbms_output.put_line('Completed IR4185 replacement - Arc closure proc
at: ' || to_char(sysdate, 'mm/dd/yyyy hh24:mi:ss')); END;
4hjg72vdr300p SELECT POL_PERIOD_DT14_799 FROM SYSTEM_POLICY
4p5rq20zg0q3s SELECT SUM(CASE WHEN TRUNC(SYSDATE - BATCH_JE_DATE_716) BETWEEN 1 AND 90 THEN
(DECODE(CHEQUE_UNAPPL_SW_719, 'Y', (CHEQUE_UNAPPLY_TRANS_AMT_719 *
EXCH_RATE2_792) / 1000, DECODE(CHEQUE_UNIDENT_FLAG_719, 'Y',
((CHEQUE_TRANS_AMT_719 * EXCH_RATE2_792) / 1000), 0), 0.00)) ELSE 0 END) , SUM(CASE
WHEN TRUNC(SYSDATE - BATCH_JE_DATE_716) BETWEEN 91 AND 365 THEN
(DECODE(CHEQUE_UNAPPL_SW_719, 'Y', (CHEQUE_UNAPPLY_TRANS_AMT_719 *
EXCH_RATE2_792) / 1000, DECODE(CHEQUE_UNIDENT_FLAG_719, 'Y',
((CHEQUE_TRANS_AMT_719 * EXCH_RATE2_792) / 1000), 0), 0.00)) ELSE 0 END) , SUM(CASE
WHEN TRUNC(SYSDATE - BATCH_JE_DATE_716) > 365 THEN
(DECODE(CHEQUE_UNAPPL_SW_719, 'Y', (CHEQUE_UNAPPLY_TRANS_AMT_719 *
EXCH_RATE2_792) / 1000, DECODE(CHEQUE_UNIDENT_FLAG_719, 'Y',
((CHEQUE_TRANS_AMT_719 * EXCH_RATE2_792) / 1000), 0), 0.00)) ELSE 0 END) FROM CHEQUE ,
CHEQUE_BATCH_HEADER , EXCHANGE_RATE2 , LOCK_BOX WHERE EXCH_RATE_TYPE_792 =
'HM' AND TO_CHAR(EXCH_YEAR_792) || LPAD(TO_CHAR(EXCH_MONTH_792), 2, 0) =
LTRIM(RTRIM(SPGETOFFSETDATE('-2', 'yyyy'))) || SPGETGECARSPERIOD(TO_CHAR(SYSDATE -
2, 'mm/dd/yyyy'), 'mm/dd/yyyy') AND EXCH_CURR3_FROM_792 = CHEQUE_DEP_LOC_CURR3_719
AND EXCH_CURR3_TO_792 = 'USD' AND BATCH_NO_716 = CHEQUE_BATCH_NO_719 AND
BATCH_LOCK_BOX_716 = CHEQUE_LOCK_BOX_719 AND BATCH_LOCK_BOX_716 =
LOCK_BOX_NO_766 AND (CHEQUE_UNAPPL_SW_719 = 'Y' OR CHEQUE_UNIDENT_FLAG_719 =
'Y') AND (LOCK_BOX_SET_OF_BOOK_766, LOCK_BOX_ACCT_GROUP_766) IN (SELECT
IC_SET_OF_BOOK_702 , IC_ACCT_GROUP_702 FROM INVESTMENT_CODE WHERE
SUBSTR(IC_FUTURE_USE_702, 10, 6) IN ('TOTLES'))
5b7j9qqvktq7b BEGIN SPR_GCARS_NVP_CREATECUSTOMERS; END;
5hmz9vfyz8h2q SELECT POL_PERIOD_DT15_799 FROM SYSTEM_POLICY
6kguaan30t9zc SELECT INV_INPUT_DT_720, inv_cost_center_720, INV_IC_NO_720 "IC", INV_NO_720 "INV NO",
INV_CUST_NO_720 "INV CUSTOMER NO", CUST_LONG_NAME_717 "CUST NAME",
INV_PROJECT_NO_720 "PROJECT NO", INV_AR_TYPE_720 "AR TYPE",
DECODE(INV_PAID_FLAG_720, 'P', 'CLOSED', 'OPEN') "INV STATUS", DECODE(INV_TYPE_720, 'B',
'Billed Invoice', 'C', 'Billed Credit Note', 'N', 'Memo Invoice', 'U', 'Unapplied Cash', 'V', 'Variance (Cash
Appln)', 'W', 'Variance(Zero Appln)', 'X', 'Reverse Variance (Cash)', 'Y', 'REVERSE VARIANCE(ZERO)',
'E', 'Charge Entry', 'I', 'Interest Invoice', 'M', 'Miscellaneous') AS INV_TYPE, INV_DATE_720 "INV DATE",
INV_DUE_DATE_720 "INV DUE DATE", ROUND(INV_AMT_720 / 100, 2) "INVOICE AMT",
ROUND(INV_AMT_PAID_720/ 100, 2) "INVOICE PAID AMT", ROUND((INV_AMT_720-
INV_AMT_PAID_720)/ 100, 2) "INVOICE AMT BAL", (inv_amt_720-nvl(inv_vat_amt_720, 0))/100 AS
"Before Tax Inv Amt", inv_amt_720/100 AS "After Tax Inv Amt", inv_vat_amt_720/100 AS "VAT Amt",
INV_CURR3_720 "INVOICE CURR", ROUND(INV_TRANS_AMT_720, 2) "TRANS INV AMT",
ROUND(INV_TRANS_AMT_PAID_720, 2) "INVOICE TRANS PAID AMT",
ROUND((INV_TRANS_AMT_720-INV_TRANS_AMT_PAID_720)/ 100, 2) "INVOICE TRANS AMT BAL",
INV_LOC_CURR3_720 "LOC CURR", INV_CLOSE_DATE_720 "INV CLOSE DATE", 'CHEQUE'
"PAYMENT MODE", CASH_CONNECT_DATE_728 "APPLY DATE",
ROUND(DECODE(CASH_INV_AMT_728, 0, -INV_AMT_720, CASH_INV_AMT_728) / 100, 2)
"APPLIED AMT", ROUND((DECODE(CASH_INV_AMT_728, 0, -INV_AMT_720, CASH_INV_AMT_728)
* INV_TRANS_AMT_720 / INV_AMT_720), 2) "TRANS APPLIED AMT", CHEQUE_NO_719 "CHEQUE
NO/INVOICE NO", CHEQUE_DATE_719 "CHEQUE DATE/INV DATE", (CHEQUE_BATCH_NO_719)
"BATCH NO", CHEQUE_LOCK_BOX_719 "LOCK BOX (CHEQUES)" FROM INVOICE,
CASH_APPLY_CONNECTOR, CHEQUE, Customer WHERE /*INV_IC_NO_720 IN (SELECT
IC_NO_702 FROM INVESTMENT_CODE WHERE IC_NO_702 LIKE '%JDESA1%') AND*/
CASH_CUST_NO_728 = INV_CUST_NO_720 AND CASH_FINDER_NO_728 = INV_FINDER_NO_720
AND CASH_CHEQUE_ID_728 = CHEQUE_ID_719 AND CASH_CONNECT_DATE_728 between '01-
Aug-2010' and '29-Nov-2010'and inv_cust_no_720 = CUST_NO_717 and -- AND cust_cc_no_717 =
'CL37'/* AND TRIM(inv_no_720) in (' 136596', ' 1043175', ' 1043176', ' 3000395', ' 3000396', ' 1043175', '
1043675', ' 1043677', ' 1043678', ' 1043679', ' 1043682', ' 1043683', ' 1043684', ' 1043685', ' 1043688', '
1043689', ' 1043690', ' 1043691', ' 1043676', ' 1043680', ' 1043681', ' 1043684', ' 1043685', ' 1043686', '
1043687', ' 1043692' ) /* INV_INPUT_DT_720 BETWEEN '1-JAN-2006' AND '31-DEC-2006'*/ /*
INV_INIT_TC_720 IN (10, 12) *\*/ UNION SELECT A.INV_INPUT_DT_720, A.inv_cost_center_720,
A.INV_IC_NO_720 "IC", A.INV_NO_720 "INV NO", A.INV_CUST_NO_720 "INV CUSTOMER NO",
CUST_LONG_NAME_717 "CUST NAME", A.INV_PROJECT_NO_720 "PROJECT NO",
A.INV_AR_TYPE_720 "AR TYPE", DECODE(A.INV_PAID_FLAG_720, 'P', 'CLOSED', 'OPEN') "INV
STATUS", DECODE(A.INV_TYPE_720, 'B', 'Billed Invoice', 'C', 'Billed Credit Note', 'N', 'Memo Invoice',
'U', 'Unapplied Cash', 'V', 'Variance (Cash Appln)', 'W', 'Variance(Zero Appln)', 'X', 'Reverse Variance
(Cash)', 'Y', 'REVERSE VARIANCE(ZERO)', 'E', 'Charge Entry', 'I', 'Interest Invoice', 'M', 'Miscellaneous')
AS INV_TYPE, A.INV_DATE_720 "INV DATE", A.INV_DUE_DATE_720 "INV DUE DATE", ROUND(
A.INV_AMT_720 / 100, 2) "INVOICE AMT", ROUND( A.INV_AMT_PAID_720/ 100, 2) "INVOICE PAID
AMT", ROUND(( A.INV_AMT_720- A.INV_AMT_PAID_720)/ 100, 2) "INVOICE AMT BAL",
(A.inv_amt_720-nvl(A.inv_vat_amt_720, 0))/100 AS "Before Tax Inv Amt", A.inv_amt_720/100 AS "After
Tax Inv Amt", A.inv_vat_amt_720/100 AS "VAT Amt", A.INV_CURR3_720 "INVOICE CURR", ROUND(
A.INV_TRANS_AMT_720, 2) "TRANS INV AMT", ROUND( A.INV_TRANS_AMT_PAID_720, 2)
"INVOICE TRANS PAID AMT", ROUND(( A.INV_TRANS_AMT_720- A.INV_TRANS_AMT_PAID_720)/
100, 2) "INVOICE TRANS AMT BAL", A.INV_LOC_CURR3_720 "LOC CURR",
A.INV_CLOSE_DATE_720 "INV CLOSE DATE", 'PAID' "PAYMENT MODE",
ZAP_CONNECTION_DT_687 "APPLY DATE", ROUND(-ZAP_CR_PAY_AMT_687 / 100, 2) "APPLIED
AMT", ROUND((-ZAP_CR_PAY_AMT_687 * A.INV_TRANS_AMT_720 / A.INV_AMT_720), 2) "TRANS
APPLIED AMT", B.INV_NO_720 "CHEQUE NO/INVOICE NO", B.INV_DATE_720 "CHEQUE DATE/INV
DATE", NULL, NULL FROM INVOICE A, ZERO_APPLICATION, INVOICE B, Customer WHERE
/*A.INV_IC_NO_720 IN (SELECT IC_NO_702 FROM INVESTMENT_CODE WHERE IC_NO_702 LIKE
'%JDESA1%') AND*/ ZAP_PY_CUST_NO_687 = A.INV_CUST_NO_720 AND
ZAP_PY_FINDER_NO_687 = A.INV_FINDER_NO_720 AND ZAP_PD_CUST_NO_687 =
B.INV_CUST_NO_720 AND ZAP_PD_FINDER_NO_687 = B.INV_FINDER_NO_720 AND
ZAP_CONNECTION_DT_687 between '01-Aug-2010' and '29-Nov-2010' and A.inv_cust_no_720 =
CUST_NO_717 and -- AND cust_cc_no_717 = 'CL37' TRIM(A.inv_no_720) in (' 136596', ' 1043175', '
1043176', ' 3000395', ' 3000396', ' 1043175', ' 1043675', ' 1043677', ' 1043678', ' 1043679', ' 1043682', '
1043683', ' 1043684', ' 1043685', ' 1043688', ' 1043689', ' 1043690', ' 1043691', ' 1043676', ' 1043680', '
1043681', ' 1043684', ' 1043685', ' 1043686', ' 1043687', ' 1043692' ) UNION SELECT
A.INV_INPUT_DT_720, A.inv_cost_center_720, A.INV_IC_NO_720 "IC", A.INV_NO_720 "INV NO",
A.INV_CUST_NO_720 "INV CUSTOMER NO", CUST_LONG_NAME_717 "CUST NAME",
A.INV_PROJECT_NO_720 "PROJECT NO", A.INV_AR_TYPE_720 "AR TYPE",
DECODE(A.INV_PAID_FLAG_720, 'P', 'CLOSED', 'OPEN') "INV STATUS", DECODE(A.INV_TYPE_720,
'B', 'Billed Invoice', 'C', 'Billed Credit Note', 'N', 'Memo Invoice', 'U', 'Unapplied Cash', 'V', 'Variance (Cash
Appln)', 'W', 'Variance(Zero Appln)', 'X', 'Reverse Variance (Cash)', 'Y', 'REVERSE VARIANCE(ZERO)',
'E', 'Charge Entry', 'I', 'Interest Invoice', 'M', 'Miscellaneous') AS INV_TYPE, A.INV_DATE_720 "INV
DATE", A.INV_DUE_DATE_720 "INV DUE DATE", ROUND( A.INV_AMT_720 / 100, 2) "INVOICE AMT",
ROUND( A.INV_AMT_PAID_720/ 100, 2) "INVOICE PAID AMT", ROUND(( A.INV_AMT_720-
A.INV_AMT_PAID_720)/ 100, 2) "INVOICE AMT BAL", (A.inv_amt_720-nvl(A.inv_vat_amt_720, 0))/100
AS "Before Tax Inv Amt", A.inv_amt_720/100 AS "After Tax Inv Amt", A.inv_vat_amt_720/100 AS "VAT
Amt", A.INV_CURR3_720 "INVOICE CURR", ROUND( A.INV_TRANS_AMT_720, 2) "TRANS INV AMT",
ROUND( A.INV_TRANS_AMT_PAID_720, 2) "INVOICE TRANS PAID AMT", ROUND((
A.INV_TRANS_AMT_720- A.INV_TRANS_AMT_PAID_720)/ 100, 2) "INVOICE TRANS AMT BAL",
A.INV_LOC_CURR3_720 "LOC CURR", A.INV_CLOSE_DATE_720 "INV CLOSE DATE", 'PAID BY'
"PAYMENT MODE", ZAP_CONNECTION_DT_687 "APPLY DATE",
ROUND(DECODE(ZAP_DR_PAY_AMT_687, 0, -A.INV_AMT_720, ZAP_DR_PAY_AMT_687) / 100, 2)
"APPLIED AMT", ROUND((DECODE(ZAP_DR_PAY_AMT _687, 0, -A.INV_AMT_720,
ZAP_DR_PAY_AMT_687) * A.INV_TRANS_AMT_720 / A.INV_AMT_720), 2) "TRANS APPLIED AMT",
B.INV_NO_720 "CHEQUE NO/INVOICE NO", B.INV_DATE_720 "CHEQUE DATE/INV DATE", NULL,
NULL FROM INVOICE A, ZERO_APPLICATION, INVOICE B, INVESTMENT_CODE, Customer WHERE
/*A.INV_IC_NO_720 IN (SELECT IC_NO_702 FROM INVESTMENT_CODE WHERE IC_GL_702 LIKE
'%JDESA1%') AND*/ ZAP_PD_CUST_NO_687 = A.INV_CUST_NO_720 AND
ZAP_PD_FINDER_NO_687 = A.INV_FINDER_NO_720 AND ZAP_PY_CUST_NO_687 =
B.INV_CUST_NO_720 AND ZAP_CONNECTION_DT_687 between '01-Aug-2010' and '29-Nov-2010'
and ZAP_PY_FINDER_NO_687 = B.INV_FINDER_NO_720 AND A.inv_cust_no_720 = CUST_NO_717
and -- AND cust_cc_no_717 = 'CL37' TRIM(A.inv_no_720) in (' 136596', ' 1043175', ' 1043176', '
3000395', ' 3000396', ' 1043175', ' 1043675', ' 1043677', ' 1043678', ' 1043679', ' 1043682', ' 1043683', '
1043684', ' 1043685', ' 1043688', ' 1043689', ' 1043690', ' 1043691', ' 1043676', ' 1043680', ' 1043681', '
1043684', ' 1043685', ' 1043686', ' 1043687', ' 1043692' )
6n2a0h1b2wysf SELECT COUNT(1) FROM( SELECT CUS.* FROM ( SELECT C.CUST_NO_717, Ic_No_702 AS
ICNUM, (SELECT count(*) FROM invoice W WHERE W.inv_cust_No_720 = c.Cust_no_717 and
w.Inv_Paid_Flag_720 ='O' AND w.INV_IC_NO_720 = IC_NO_702) AS Open_Count, (SELECT count(*)
FROM invoice W WHERE w.inv_cust_No_720 = c.Cust_no_717 and w.Inv_Paid_Flag_720 ='P' AND
w.INV_IC_NO_720 = IC_NO_702) as Closed_Count FROM CUSTOMER C, CC_ORGANIZATION,
CUSTOMER_TOTALS A, CUSTOMER_DATES B, CUSTOMER_ADDRESS, INVOICE,
INVESTMENT_CODE, EXCHANGE_RATE2 E, ARC Where C.CUST_CC_NO_717 = CC_NO_705 AND
C.CUST_NO_717 = CUST_NO_682 AND C.CUST_NO_717 = A.cust_no_717 AND C.cust_no_717 =
B.cust_no_717 AND C.cust_no_717 = inv_cust_no_720 AND INV_IC_NO_720 = IC_NO_702 AND
E.EXCH_RATE_TYPE_792 = 'CM' AND E.EXCH_CURR3_FROM_792 = INV_LOC_CURR3_720 AND
INV_PAID_FLAG_720 = 'O' AND cc_postable_flag_705 = 'P' AND CUST_CC_NO_717 in (SELECT
CC_NO_705 From cc_organization WHERE CC_FUTURE_USE_705 like '%CL4Y%' OR CC_NO_705 =
'CL4Y') AND INV_CRE_ARC_NO_720 = ARC_NO_729(+) GROUP BY C.CUST_NO_717, IC_NO_702,
CUST_LONG_NAME_717, CUST_TYPE_717, CUST_CC_NO_717,
CC_MANAGER_COLL_NAME_705, CUST_CURR_717, CUST_CR_CURR_717,
CUST_CR_START_717, CUST_ADDR_LINE1_682, CUST_ADDR_LINE2_682,
C.CUST_ACCT_STAT1_SW_717, C.CUST_ACCT_STAT2_SW_717, C.CUST_ACCT_STAT3_SW_717,
C.CUST_ACCT_STAT4_SW_717, CUST_CR_BUSINESS_717, C.CUST_HIGH_PAYOR_NO_717,
C.CUST_HIGH_PARENT_NO_717, C.CUST_PAYOR_NO_717, C.CUST_PARENT_NO_717,
C.CUST_CITY_717, CUST_POSTAL_CODE_682, C.CUST_COUNTRY_717, C.CUST_PROV_717,
IC_LEGAL_ID_702, IC_GL_CODE_702, C.CUST_CLIENT_717, A.CUST_NO_DR_INV_717,
A.CUST_NO_CR_INV_717 ) CUS WHERE Open_Count > 0 )
7ars6gagpzy3g SELECT TO_CHAR(SYSDATE+:B2 , :B1 ) FROM DUAL
7fprcxd5040cz SELECT POLGRP, POLICY, POLOWN, POLSCH, POLFUN, STMT, CHKOPT, ENABLED, SPOLICY
FROM SYS.EXU9RLS WHERE OBJOWN = :1 AND OBJNAM = :2
7ng34ruy5awxq select i.obj#, i.ts#, i.file#, i.block#, i.intcols, i.type#, i.flags, i.property, i.pctfree$, i.initrans, i.maxtrans,
i.blevel, i.leafcnt, i.distkey, i.lblkkey, i.dblkkey, i.clufac, i.cols, i.analyzetime, i.samplesize, i.dataobj#,
nvl(i.degree, 1), nvl(i.instances, 1), i.rowcnt, mod(i.pctthres$, 256), i.indmethod#, i.trunccnt, nvl(c.unicols,
0), nvl(c.deferrable#+c.valid#, 0), nvl(i.spare1, i.intcols), i.spare4, i.spare2, i.spare6, decode(i.pctthres$,
null, null, mod(trunc(i.pctthres$/256), 256)), ist.cachedblk, ist.cachehit, ist.logicalread from ind$ i,
ind_stats$ ist, (select enabled, min(cols) unicols, min(to_number(bitand(defer, 1))) deferrable#,
min(to_number(bitand(defer, 4))) valid# from cdef$ where obj#=:1 and enabled > 1 group by enabled) c
where i.obj#=c.enabled(+) and i.obj# = ist.obj#(+) and i.bo#=:1 order by i.obj#
7tpy2spgjpff6 SELECT E.IC_NO_702 TIER0, E.IC_NAME_702 TIER0NAME, B.IC_NO_702 TIER1, B.IC_NAME_702
TIER1NAME, C.IC_NO_702 TIER2, C.IC_NAME_702 TIER2NAME, D.IC_NO_702 TIER3,
D.IC_NAME_702 TIER3NAME, A.IC_NO_702 IC_NO, A.IC_NAME_702 ICNAME,
TO_CHAR(INV_DATE_720, 'MM/DD/YYYY') INV_DATE, TO_CHAR(INV_INPUT_DT_720,
'MM/DD/YYYY') INV_INPUT_DATE, TO_CHAR(INV_DUE_DATE_720, 'MM/DD/YYYY')
INV_DUE_DATE, CUST_NO_717 CUST_NO, CUST_LONG_NAME_717 CUST_NAME, INV_NO_720
INV_NO, INV_ID_720 INV_ID, ROUND((INV_TRANS_AMT_720 * EXCH_RATE2_792), 2)
USD_AMT_FUNC_CURR, ROUND((SYSDATE-INV_DUE_DATE_720)) PAST_DUE_DAYS,
CC_LOCATION_705 CC_LOCATION, ARC_SUB_TC_729 ARC_CODE, ARC_DATE_729 ARC_DATE ,
ARC_NO_729 ARC_NO , CASE WHEN Trunc(SYSDATE - Arc_Date_729) BETWEEN 1 AND 90 THEN
((((Inv_Trans_Amt_720 - Inv_Trans_Amt_Paid_720)) * Exch_Rate2_792)) ELSE 0 END v_Arc90, CASE
WHEN Trunc(SYSDATE - Arc_Date_729) BETWEEN 91 AND 365 THEN ((((Inv_Trans_Amt_720 -
Inv_Trans_Amt_Paid_720)) * Exch_Rate2_792)) ELSE 0 END v_Arc365 , CASE WHEN
Trunc(SYSDATE - Arc_Date_729) > 365 THEN ((((Inv_Trans_Amt_720 - Inv_Trans_Amt_Paid_720)) *
Exch_Rate2_792)) ELSE 0 END v_Arc366 FROM Arc , Arc_Invoice_Connector , invoice_me ,
Exchange_Rate2 , Customer , Investment_Code a , Investment_Code b , Investment_Code c ,
Investment_Code d , Investment_Code e , CC_ORGANIZATION WHERE Exch_Rate_Type_792 = 'HM' --AND To_Char(Exch_Year_792) || Lpad(To_Char(Exch_Month_792), 2, 0) = '201002' AND
To_Char(Exch_Year_792) || Lpad(To_Char(Exch_Month_792), 2, 0) = Ltrim(Rtrim(Spgetoffsetdate('-2',
'yyyy'))) ||Spgetgecarsperiod(To_Char(SYSDATE-2, 'mm/dd/yyyy'), 'mm/dd/yyyy') AND
Exch_Curr3_From_792 = Inv_Loc_Curr3_720 AND Exch_Curr3_To_792 = 'USD' AND
Nvl(Arc_Close_Flag_729, 'N') = 'N' AND Arc_Subtc_Type_729 = 'O' AND Arc_No_729 =
Connection_Arc_No_690 AND Inv_Cust_No_720 = Connection_Cust_No_690 AND Inv_Finder_No_720
= Connection_Finder_No_690 AND Cust_No_717 = Inv_Cust_No_720 AND cust_cc_no_717 =
cc_no_705 AND Inv_Ic_No_720 = a.Ic_No_702 AND TRIM(SUBSTR(A.IC_FUTURE_USE_702, 10, 9))
= B.IC_NO_702(+) AND TRIM(SUBSTR(A.IC_FUTURE_USE_702, 19, 9)) = C.IC_NO_702(+) AND
TRIM(SUBSTR(A.IC_FUTURE_USE_702, 28, 9)) = D.IC_NO_702(+) AND
TRIM(SUBSTR(A.IC_FUTURE_USE_702, 1, 9)) = E.IC_NO_702(+) AND (CASE WHEN
a.Ic_Set_Of_Book_702 IN ('1', '2', '3', '4', '5', '6', 'D', 'E', 'G', 'P', 'V', 'H') AND Cust_Class_717 NOT IN
('9330', '9340') THEN 1 WHEN a.Ic_Set_Of_Book_702 NOT IN ('1', '2', '3', '4', '5', '6', 'D', 'E', 'G', 'P', 'V',
'H') AND Cust_Class_717 NOT IN ('9360') THEN 1 ELSE 0 END = 1) AND (CASE WHEN A.Ic_No_702
IN (SELECT Ic_No_702 FROM Investment_Code START WITH Ic_No_702 IN ( 'DEWIND', 'WAT',
'GAVIATION', 'MABE', 'NAAGEN', 'TOTLES', 'TOTLEP', 'HCLS' ) CONNECT BY Ic_Owner_No_702 =
PRIOR Ic_No_702) THEN 1 WHEN A.Ic_No_702 IN (SELECT Ic_No_702 FROM Investment_Code
START WITH Ic_No_702 IN (SELECT Ic_No_702 FROM Investment_Code WHERE
Ic_Position_Level_702 = '3' AND Ic_No_702 NOT LIKE 'BCO%' AND Nvl(Ic_Active_Flag_702, 'A') = 'A'
AND Ic_Set_Of_Book_702 IN ('1', '2', '3', '4', '5', '6', 'D', 'E', 'G', 'P', 'T') AND Substr(Ic_Future_Use_702,
1, 6) NOT IN ('TOTLPS')) CONNECT BY Ic_Owner_No_702 = PRIOR Ic_No_702) THEN 1 WHEN
A.Ic_No_702 IN (SELECT Ic_No_702 FROM Investment_Code START WITH Ic_No_702 = 'OAG'
CONNECT BY Ic_Owner_No_702 = PRIOR Ic_No_702) AND A.Ic_Business_Id_702 NOT IN ('PII',
'THD', 'NVP') AND A.Ic_No_702 NOT IN ('JDEJ9L') THEN 1 END = 1 ) ORDER BY 1
7u811ut86g6rg declare owner1 varchar2(45); table_name varchar2(50); res number; cursor table_size is select owner,
segment_name from (select owner, segment_name, bytes/1024/1024 Size_Mb from dba_segments
where segment_type='TABLE' and owner<>'SYS' order by bytes/1024/1024 DESC ) a where rownum <=
20; Begin open table_size; loop fetch table_size into owner1, table_name; exit when
table_size%notfound; execute immediate 'select count(*) from '|| owner1||'.'||table_name into res; insert
into TABLE_SIZE_REPORT values(sysdate, 'ICEF_P92', owner1, table_name, res); commit; end loop;
close table_size; End;
82hxvr8kxuzjq BEGIN dbms_stats.gather_database_stats; END;
83taa7kaw59c1 select name, intcol#, segcol#, type#, length, nvl(precision#, 0), decode(type#, 2, nvl(scale, -
127/*MAXSB1MINAL*/), 178, scale, 179, scale, 180, scale, 181, scale, 182, scale, 183, scale, 231, scale,
0), null$, fixedstorage, nvl(deflength, 0), default$, rowid, col#, property, nvl(charsetid, 0), nvl(charsetform,
0), spare1, spare2, nvl(spare3, 0) from col$ where obj#=:1 order by intcol#
84fz6kymcqh1g Declare begin dbms_output.put_line('Procedure Started');
SPR_GCARS_COLL_ARCHIVE_EXTRACT(NULL, 'W'); SPR_GCARS_CUST_SCORE_CALC(NULL);
dbms_output.put_line('Completed Procedure'); END;
89yyvk580r314 SELECT POL_PERIOD_DT12_799 FROM SYSTEM_POLICY
8c96r9b1gtwh1 Select Pol_Cust_No_Assign_799 From System_policy
8qw602s7yq181 select nvl(c.cc_name_705, ' '), nvl(c.cc_manager_coll_name_705, ' '), nvl(substr(c.cc_comm_line_705, 0,
(case instr(c.cc_comm_line_705, 'F', 1) when 0 then 20 else instr(c.cc_comm_line_705, 'F', 1) end) -1), '
') from cc_organization c where c.cc_no_705='CL65'
8uvvdzvbar6z9 BEGIN Spr_Gcars_Cust_Summary('WNDPS0114'); END;
9bth4p1h24y7x SELECT E.IC_NO_702 TIER0, chr(9) , E.IC_NAME_702 TIER0NAME, chr(9) , B.IC_NO_702 TIER1,
chr(9) , B.IC_NAME_702 TIER1NAME, chr(9) , C.IC_NO_702 TIER2, chr(9) , C.IC_NAME_702
TIER2NAME, chr(9) , D.IC_NO_702 TIER3, chr(9) , D.IC_NAME_702 TIER3NAME, chr(9) ,
A.IC_NO_702 IC_NO, chr(9) , A.IC_NAME_702 ICNAME, chr(9) , TO_CHAR(INV_DATE_720,
'MM/DD/YYYY') INV_DATE, chr(9) , TO_CHAR(INV_INPUT_DT_720, 'MM/DD/YYYY')
INV_INPUT_DATE, chr(9) , TO_CHAR(INV_DUE_DATE_720, 'MM/DD/YYYY') INV_DUE_DATE, chr(9) ,
CUST_NO_717 CUST_NO, chr(9) , CUST_LONG_NAME_717 CUST_NAME, chr(9) , INV_NO_720
INV_NO, chr(9) , INV_ID_720 INV_ID, chr(9) , ROUND((INV_TRANS_AMT_720 * EXCH_RATE2_792),
2) USD_AMT_FUNC_CURR, chr(9) , ROUND((SYSDATE-INV_DUE_DATE_720)) PAST_DUE_DAYS,
chr(9) , CC_LOCATION_705 CC_LOCATION, chr(9) , ARC_SUB_TC_729 ARC_CODE, chr(9) ,
ARC_DATE_729 ARC_DATE, chr(9) , ARC_NO_729 ARC_NO , chr(9) , CASE WHEN Trunc(SYSDATE
- Arc_Date_729) BETWEEN 1 AND 90 THEN ((((Inv_Trans_Amt_720 - Inv_Trans_Amt_Paid_720)) *
Exch_Rate2_792)) ELSE 0 END v_Arc90, chr(9) , CASE WHEN Trunc(SYSDATE - Arc_Date_729)
BETWEEN 91 AND 365 THEN ((((Inv_Trans_Amt_720 - Inv_Trans_Amt_Paid_720)) * Exch_Rate2_792)) ELSE 0 END v_Arc365 , chr(9) , CASE WHEN Trunc(SYSDATE - Arc_Date_729) > 365 THEN
((((Inv_Trans_Amt_720 - Inv_Trans_Amt_Paid_720)) * Exch_Rate2_792)) ELSE 0 END v_Arc366 ,
chr(9) FROM Arc , Arc_Invoice_Connector , invoice_me , Exchange_Rate2 , Customer ,
Investment_Code a , Investment_Code b , Investment_Code c , Investment_Code d , Investment_Code
e , CC_ORGANIZATION WHERE Exch_Rate_Type_792 = 'HM' --AND To_Char(Exch_Year_792) ||
Lpad(To_Char(Exch_Month_792), 2, 0) = '201002' AND To_Char(Exch_Year_792) ||
Lpad(To_Char(Exch_Month_792), 2, 0) = Ltrim(Rtrim(Spgetoffsetdate('-2', 'yyyy')))
||Spgetgecarsperiod(To_Char(SYSDATE-2, 'mm/dd/yyyy'), 'mm/dd/yyyy') AND Exch_Curr3_From_792 =
Inv_Loc_Curr3_720 AND Exch_Curr3_To_792 = 'USD' AND Nvl(Arc_Close_Flag_729, 'N') = 'N' AND
Arc_Subtc_Type_729 = 'O' AND Arc_No_729 = Connection_Arc_No_690 AND Inv_Cust_No_720 =
Connection_Cust_No_690 AND Inv_Finder_No_720 = Connection_Finder_No_690 AND Cust_No_717 =
Inv_Cust_No_720 AND cust_cc_no_717 = cc_no_705 AND Inv_Ic_No_720 = a.Ic_No_702 AND
TRIM(SUBSTR(A.IC_FUTURE_USE_702, 10, 9)) = B.IC_NO_702(+) AND
TRIM(SUBSTR(A.IC_FUTURE_USE_702, 19, 9)) = C.IC_NO_702(+) AND
TRIM(SUBSTR(A.IC_FUTURE_USE_702, 28, 9)) = D.IC_NO_702(+) AND
TRIM(SUBSTR(A.IC_FUTURE_USE_702, 1, 9)) = E.IC_NO_702(+) AND (CASE WHEN
a.Ic_Set_Of_Book_702 IN ('1', '2', '3', '4', '5', '6', 'D', 'E', 'G', 'P', 'V', 'H') AND Cust_Class_717 NOT IN
('9330', '9340') THEN 1 WHEN a.Ic_Set_Of_Book_702 NOT IN ('1', '2', '3', '4', '5', '6', 'D', 'E', 'G', 'P', 'V',
'H') AND Cust_Class_717 NOT IN ('9360') THEN 1 ELSE 0 END = 1) AND (CASE WHEN A.Ic_No_702
IN (SELECT Ic_No_702 FROM Investment_Code START WITH Ic_No_702 IN ( 'DEWIND', 'WAT',
'GAVIATION', 'MABE', 'NAAGEN', 'TOTLES', 'TOTLEP', 'HCLS' ) CONNECT BY Ic_Owner_No_702 =
PRIOR Ic_No_702) THEN 1 WHEN A.Ic_No_702 IN (SELECT Ic_No_702 FROM Investment_Code
START WITH Ic_No_702 IN (SELECT Ic_No_702 FROM Investment_Code WHERE
Ic_Position_Level_702 = '3' AND Ic_No_702 NOT LIKE 'BCO%' AND Nvl(Ic_Active_Flag_702, 'A') = 'A'
AND Ic_Set_Of_Book_702 IN ('1', '2', '3', '4', '5', '6', 'D', 'E', 'G', 'P', 'T') AND Substr(Ic_Future_Use_702,
1, 6) NOT IN ('TOTLPS')) CONNECT BY Ic_Owner_No_702 = PRIOR Ic_No_702) THEN 1 WHEN
A.Ic_No_702 IN (SELECT Ic_No_702 FROM Investment_Code START WITH Ic_No_702 = 'OAG'
CONNECT BY Ic_Owner_No_702 = PRIOR Ic_No_702) AND A.Ic_Business_Id_702 NOT IN ('PII',
'THD', 'NVP') AND A.Ic_No_702 NOT IN ('JDEJ9L') THEN 1 END = 1 ) ORDER BY 1
9jnqqzx2xba27 SELECT POL_PERIOD_DT13_799 FROM SYSTEM_POLICY
9p3urqfavbg6a DECLARE BEGIN dbms_output.put_line('Begin of the program'); spr_dashboard_new_cad_me;
spr_dashboard_new_naagen_me; spr_dashboard_new_mabe_me; spr_dashboard_new_oag_me;
SPR_Dashboard_New_TOTLEP_ME; SPR_Dashboard_New_TOTLES_ME;
SPR_Dashboard_New_WAT_ME; spr_dashboard_new_gea_me; SPR_Dashboard_New_WIND_ME;
SPR_DASHBOARD_NEW_PARAM('HCLS'); dbms_output.put_line('End of the program'); END;
9qgtwh66xg6nz update seg$ set type#=:4, blocks=:5, extents=:6, minexts=:7, maxexts=:8, extsize=:9, extpct=:10,
user#=:11, iniexts=:12, lists=decode(:13, 65535, NULL, :13), groups=decode(:14, 65535, NULL, :14),
cachehint=:15, hwmincr=:16, spare1=DECODE(:17, 0, NULL, :17), scanhint=:18 where ts#=:1 and
file#=:2 and block#=:3
9zjaxhxak6fct SELECT /*+NESTED_TABLE_GET_REFS+*/ "GCARS_OUTBND"."WEBM_OUT_ACTIVITY".* FROM
"GCARS_OUTBND"."WEBM_OUT_ACTIVITY"
a0qkdc47yt5kq Select pol_main_lang_799, POL_CUST_NO_ASSIGN_799 from system_policy
aa6p7f2wxzg7r SELECT POL_PERIOD_DT4_799 FROM SYSTEM_POLICY
adw8rqvga7j5t DECLARE errmsg VARCHAR2(100); BEGIN dbms_output.put_line('Executing SPR:
SPR_GCARS_FL_TRIGGER'); SPR_GCARS_UPD_PYMTSCHD; SPR_GCARS_FL_TRIGGER;
dbms_output.put_line('Completed executing SPR : SPR_GCARS_FL_TRIGGER'); end;
aq4js2gkfjru8 update tsq$ set blocks=:3, maxblocks=:4, grantor#=:5, priv1=:6, priv2=:7, priv3=:8 where ts#=:1 and
user#=:2
c1a4cmg7kur3y SELECT POL_PERIOD_DT8_799 FROM SYSTEM_POLICY
c1u9kugvgw1nj SELECT pol_main_lang_799, pol_suffix_length_799 FROM system_policy
c8na1muq1hv3u select count(*) from GCARS_BATCH.IFACTOR_IN_ACTIVITY
csh800xunxpzw select count(*) from GCARS_OUTBND.WEBM_OUT_ACTIVITY
ct62kq20vrccj SELECT POL_PERIOD_DT9_799 FROM SYSTEM_POLICY
cucfnp8wc50zh SELECT POL_PERIOD_DT2_799 FROM SYSTEM_POLICY
dajasdjnzz789 BEGIN GCARS.SPR_GCARS_IRPEX1_EXCH_RATE( :1, :2, :3, :4, :5, :6, :7, :8 ); END;
dma453s58wu9f select pol_days_cr_unaged_799 from SYSTEM_POLICY
dpbr1ftvbv3qg SELECT POL_PERIOD_DT11_799 FROM SYSTEM_POLICY
dyhbmg6vc0ycu SELECT POL_PERIOD_DT3_799 FROM SYSTEM_POLICY
fzwdn4ucuf06y SELECT POL_PERIOD_DT6_799 FROM SYSTEM_POLICY
Back to SQL Statistics
Back to Top
Instance Activity Statistics
• Instance Activity Stats
• Instance Activity Stats - Absolute Values
• Instance Activity Stats - Thread Activity
Back to Top
Instance Activity Stats
Statistic Total per Second per Trans
CPU used by this session 3,890,561 87.23 59.68
CPU used when call started 2,552,638 57.24 39.16
CR blocks created 114,248 2.56 1.75
Cached Commit SCN referenced 789,747 17.71 12.11
Commit SCN cached 306 0.01 0.00
DB time 17,267,523 387.18 264.89
DBWR checkpoint buffers written 261,652 5.87 4.01
DBWR checkpoints 57 0.00 0.00
DBWR object drop buffers written 14 0.00 0.00
DBWR parallel query checkpoint buffers written 0 0.00 0.00
DBWR revisited being-written buffer 0 0.00 0.00
DBWR tablespace checkpoint buffers written 0 0.00 0.00
DBWR thread checkpoint buffers written 23,318 0.52 0.36
DBWR transaction table writes 1,472 0.03 0.02
DBWR undo block writes 79,251 1.78 1.22
DFO trees parallelized 0 0.00 0.00
IMU CR rollbacks 1,393 0.03 0.02
IMU Flushes 12,530 0.28 0.19
IMU Redo allocation size 53,756,388 1,205.33 824.64
IMU commits 62,707 1.41 0.96
IMU contention 336 0.01 0.01
IMU ktichg flush 13 0.00 0.00
IMU pool not allocated 145 0.00 0.00
IMU recursive-transaction flush 157 0.00 0.00
IMU undo allocation size 415,964,616 9,326.83 6,381.00
IMU- failed to get a private strand 145 0.00 0.00
Misses for writing mapping 0 0.00 0.00
PX local messages recv'd 0 0.00 0.00
PX local messages sent 0 0.00 0.00
Parallel operations not downgraded 0 0.00 0.00
SMON posted for undo segment recovery 0 0.00 0.00
SMON posted for undo segment shrink 28 0.00 0.00
SQL*Net roundtrips to/from client 6,205,010 139.13 95.19
SQL*Net roundtrips to/from dblink 4,867 0.11 0.07
active txn count during cleanout 94,570 2.12 1.45
application wait time 28,039 0.63 0.43
auto extends on undo tablespace 0 0.00 0.00
background checkpoints completed 26 0.00 0.00
background checkpoints started 26 0.00 0.00
background timeouts 157,911 3.54 2.42
branch node splits 20 0.00 0.00
buffer is not pinned count 229,075,691 5,136.37 3,514.08
buffer is pinned count 732,473,286 16,423.63 11,236.32
bytes received via SQL*Net from client 1,211,763,101 27,170.35 18,588.74
bytes received via SQL*Net from dblink 2,242,855 50.29 34.41
bytes sent via SQL*Net to client 12,216,711,469 273,925.05 187,407.37
bytes sent via SQL*Net to dblink 1,490,271 33.42 22.86
calls to get snapshot scn: kcmgss 794,227,715 17,808.30 12,183.65
calls to kcmgas 329,633 7.39 5.06
calls to kcmgcs 39,217 0.88 0.60
change write time 14,954 0.34 0.23
cleanout - number of ktugct calls 161,160 3.61 2.47
cleanouts and rollbacks - consistent read gets 60,604 1.36 0.93
cleanouts only - consistent read gets 64,697 1.45 0.99
cluster key scan block gets 1,446,313 32.43 22.19
cluster key scans 472,933 10.60 7.25
commit batch performed 100 0.00 0.00
commit batch requested 100 0.00 0.00
commit batch/immediate performed 218 0.00 0.00
commit batch/immediate requested 218 0.00 0.00
commit cleanout failures: block lost 5,690 0.13 0.09
commit cleanout failures: buffer being written 4 0.00 0.00
commit cleanout failures: callback failure 1,120 0.03 0.02
commit cleanout failures: cannot pin 16 0.00 0.00
commit cleanout failures: hot backup in progress 0 0.00 0.00
commit cleanouts 475,686 10.67 7.30
commit cleanouts successfully completed 468,856 10.51 7.19
commit immediate performed 118 0.00 0.00
commit immediate requested 118 0.00 0.00
commit txn count during cleanout 83,729 1.88 1.28
concurrency wait time 7,753 0.17 0.12
consistent changes 2,322,154 52.07 35.62
consistent gets 1,401,148,906 31,416.78 21,493.97
consistent gets - examination 85,821,317 1,924.30 1,316.52
consistent gets direct 5,676 0.13 0.09
consistent gets from cache 1,401,143,230 31,416.66 21,493.88
current blocks converted for CR 12 0.00 0.00
cursor authentications 278,996 6.26 4.28
data blocks consistent reads - undo records applied 2,204,341 49.43 33.82
db block changes 9,860,536 221.09 151.26
db block gets 10,119,008 226.89 155.23
db block gets direct 97,150 2.18 1.49
db block gets from cache 10,021,858 224.71 153.74
deferred (CURRENT) block cleanout applications 240,078 5.38 3.68
dirty buffers inspected 55,963 1.25 0.86
enqueue conversions 36,691 0.82 0.56
enqueue deadlocks 0 0.00 0.00
enqueue releases 1,510,455 33.87 23.17
enqueue requests 1,510,481 33.87 23.17
enqueue timeouts 44 0.00 0.00
enqueue waits 81 0.00 0.00
exchange deadlocks 0 0.00 0.00
execute count 279,726,090 6,272.06 4,291.07
failed probes on index block reclamation 50 0.00 0.00
frame signature mismatch 0 0.00 0.00
free buffer inspected 23,058,585 517.02 353.72
free buffer requested 23,409,426 524.89 359.11
heap block compress 371,058 8.32 5.69
hot buffers moved to head of LRU 3,938,708 88.31 60.42
immediate (CR) block cleanout applications 125,301 2.81 1.92
immediate (CURRENT) block cleanout applications 104,689 2.35 1.61
index crx upgrade (found) 0 0.00 0.00
index crx upgrade (positioned) 144,040 3.23 2.21
index fast full scans (full) 80,019 1.79 1.23
index fast full scans (rowid ranges) 0 0.00 0.00
index fetch by key 18,925,994 424.36 290.33
index scans kdiixs1 87,271,253 1,956.81 1,338.76
leaf node 90-10 splits 808 0.02 0.01
leaf node splits 8,511 0.19 0.13
lob reads 12,614 0.28 0.19
lob writes 31,229 0.70 0.48
lob writes unaligned 31,229 0.70 0.48
logons cumulative 9,408 0.21 0.14
messages received 106,848 2.40 1.64
messages sent 106,847 2.40 1.64
no buffer to keep pinned count 0 0.00 0.00
no work - consistent read gets 799,745,702 17,932.03 12,268.30
opened cursors cumulative 3,093,251 69.36 47.45
parse count (failures) 304 0.01 0.00
parse count (hard) 587,153 13.17 9.01
parse count (total) 3,041,058 68.19 46.65
parse time cpu 268,890 6.03 4.12
parse time elapsed 280,851 6.30 4.31
physical read IO requests 10,066,739 225.72 154.43
physical read bytes ############### 4,340,190.32 2,969,365.63
physical read total IO requests 10,129,487 227.12 155.39
physical read total bytes ############### 4,389,587.37 3,003,160.90
physical read total multi block requests 4,623,799 103.68 70.93
physical reads 23,628,785 529.81 362.47
physical reads cache 23,163,527 519.38 355.33
physical reads cache prefetch 13,425,988 301.04 205.96
physical reads direct 465,258 10.43 7.14
physical reads direct (lob) 5,630 0.13 0.09
physical reads direct temporary tablespace 459,582 10.30 7.05
physical reads prefetch warmup 0 0.00 0.00
physical write IO requests 264,527 5.93 4.06
physical write bytes 6,777,135,104 151,958.01 103,962.92
physical write total IO requests 487,874 10.94 7.48
physical write total bytes 12,247,835,648 274,622.93 187,884.82
physical write total multi block requests 250,354 5.61 3.84
physical writes 827,287 18.55 12.69
physical writes direct 468,625 10.51 7.19
physical writes direct (lob) 368 0.01 0.01
physical writes direct temporary tablespace 468,121 10.50 7.18
physical writes from cache 358,662 8.04 5.50
physical writes non checkpoint 773,521 17.34 11.87
pinned buffers inspected 55,985 1.26 0.86
prefetch clients - default 4 0.00 0.00
prefetch warmup blocks aged out before use 0 0.00 0.00
prefetched blocks aged out before use 502,259 11.26 7.70
process last non-idle time 44,594 1.00 0.68
queries parallelized 0 0.00 0.00
recursive aborts on index block reclamation 0 0.00 0.00
recursive calls 286,020,700 6,413.20 4,387.63
recursive cpu usage 2,365,946 53.05 36.29
redo blocks written 3,258,807 73.07 49.99
redo buffer allocation retries 804 0.02 0.01
redo entries 4,641,133 104.06 71.20
redo log space requests 116 0.00 0.00
redo log space wait time 4,097 0.09 0.06
redo ordering marks 81,346 1.82 1.25
redo size 1,591,112,288 35,676.17 24,408.05
redo subscn max counts 201,944 4.53 3.10
redo synch time 10,207 0.23 0.16
redo synch writes 46,586 1.04 0.71
redo wastage 22,239,604 498.66 341.16
redo write time 34,881 0.78 0.54
redo writer latching time 79 0.00 0.00
redo writes 79,900 1.79 1.23
rollback changes - undo records applied 69,851 1.57 1.07
rollbacks only - consistent read gets 39,110 0.88 0.60
rows fetched via callback 8,332,185 186.83 127.82
session connect time 0 0.00 0.00
session cursor cache hits 1,461,301 32.77 22.42
session logical reads 1,411,267,917 31,643.67 21,649.20
session pga memory 1,680,499,552 37,680.43 25,779.28
session pga memory max 5,565,849,792 124,798.37 85,381.51
session uga memory ############### 85,915,140.92 58,779,327.09
session uga memory max 21,306,656,248 477,741.25 326,849.36
shared hash latch upgrades - no wait 5,228,555 117.24 80.21
shared hash latch upgrades - wait 1 0.00 0.00
sorts (disk) 9 0.00 0.00
sorts (memory) 945,291 21.20 14.50
sorts (rows) 473,293,101 10,612.25 7,260.43
sql area evicted 655,375 14.69 10.05
sql area purged 727 0.02 0.01
summed dirty queue length 82,450 1.85 1.26
switch current to new buffer 11,804 0.26 0.18
table fetch by rowid 398,360,990 8,932.11 6,110.96
table fetch continued row 4,493,040 100.74 68.92
table scan blocks gotten 464,164,527 10,407.57 7,120.40
table scan rows gotten 16,712,968,638 374,740.85 256,381.06
table scans (cache partitions) 2 0.00 0.00
table scans (direct read) 0 0.00 0.00
table scans (long tables) 353 0.01 0.01
table scans (rowid ranges) 4 0.00 0.00
table scans (short tables) 256,903,544 5,760.33 3,940.96
total number of times SMON posted 437 0.01 0.01
transaction rollbacks 218 0.00 0.00
transaction tables consistent read rollbacks 4 0.00 0.00
transaction tables consistent reads - undo records applied 1,383 0.03 0.02
undo change vector size 558,993,744 12,533.85 8,575.10
user I/O wait time 1,105,891 24.80 16.96
user calls 7,108,761 159.39 109.05
user commits 65,057 1.46 1.00
user rollbacks 131 0.00 0.00
workarea executions - onepass 170 0.00 0.00
workarea executions - optimal 703,774 15.78 10.80
write clones created in background 20 0.00 0.00
write clones created in foreground 69 0.00 0.00
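The "per Second" and "per Trans" columns above are each statistic's raw Total divided by the snapshot's elapsed seconds and by its transaction count (user commits + user rollbacks). A minimal sketch checking two rows from the table; the elapsed time is inferred here from the DB time row, since the report interval itself is not shown in this excerpt:

```python
# Reconstruct AWR's "per Second" / "per Trans" columns from raw totals.
# Elapsed seconds inferred from the DB time row: Total / per-Second rate.
elapsed_s = 17_267_523 / 387.18        # ~44,600 s (~12.4 h snapshot window)
txns = 65_057 + 131                    # user commits + user rollbacks

def per_second(total):
    return round(total / elapsed_s, 2)

def per_trans(total):
    return round(total / txns, 2)

print(per_trans(17_267_523))   # DB time per transaction -> 264.89
print(per_second(65_057))      # user commits per second -> 1.46
```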
Back to Instance Activity Statistics
Back to Top
Instance Activity Stats - Absolute Values
• Statistics with absolute values (should not be diffed)
Statistic Begin Value End Value
session cursor cache count 37,880,083 38,207,611
opened cursors current 276 6,189
workarea memory allocated 1,242,304 1,423,870
logons current 40 320
Back to Instance Activity Statistics
Back to Top
Instance Activity Stats - Thread Activity
• Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic Total per Hour
log switches (derived) 26 2.10
Back to Instance Activity Statistics
Back to Top
IO Stats
• Tablespace IO Stats
• File IO Stats
Back to Top
Tablespace IO Stats
• ordered by IOs (Reads + Writes) desc
Tablespace Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms)
DATA_GCARS 6,678,008 150 0.72 2.28 70,579 2 113,403 0.59
TOOLS 1,787,583 40 1.98 1.70 18,924 0 0 0.00
INDX_GCARS 612,012 14 2.99 2.06 65,598 1 7 4.29
WEBM_GCARS 463,914 10 0.87 6.69 4,380 0 0 0.00
TEMP_NEW 325,675 7 0.02 1.70 53,381 1 0 0.00
SYSAUX 88,142 2 2.61 2.11 19,753 0 0 0.00
SYSTEM 65,739 1 3.07 1.50 8,067 0 0 0.00
USERS 42,694 1 0.26 3.22 3 0 0 0.00
UNDOTBS_01 3,636 0 1.44 1.00 23,854 1 471 0.02
STATSPACK 387 0 4.39 3.45 0 0 0 0.00
Back to IO Stats
Back to Top
File IO Stats
• ordered by Tablespace, File
Tablespace Filename Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms)
DATA_GCARS /database_p92/P92/data/gcars_data01_01.dbf 3,146,195 71 0.70 2.28 22,229 0 51,876 0.65
DATA_GCARS /database_p92/P92/data/gcars_data01_02.dbf 3,531,813 79 0.73 2.29 48,350 1 61,527 0.55
INDX_GCARS /database_p92/P92/data/gcars_indx01_01.dbf 174,515 4 3.02 2.10 16,914 0 5 6.00
INDX_GCARS /database_p92/P92/data/gcars_indx01_02.dbf 437,497 10 2.98 2.05 48,684 1 2 0.00
STATSPACK /database_p92/P92/data/statsapack_01.dbf 387 0 4.39 3.45 0 0 0 0.00
SYSAUX /database_p92/P92/data/sysaux01_01.dbf 88,142 2 2.61 2.11 19,753 0 0 0.00
SYSTEM /database_p92/P92/data/system_01.dbf 65,739 1 3.07 1.50 8,067 0 0 0.00
TEMP_NEW /database_p92/P92/data/temp_new_01.dbf 325,675 7 0.02 1.70 53,381 1 0
TOOLS /database_p92/P92/data/tools_01.dbf 497,260 11 1.38 1.66 3,083 0 0 0.00
TOOLS /database_p92/P92/data/tools_02.dbf 1,290,323 29 2.21 1.72 15,841 0 0 0.00
UNDOTBS_01 /database_p92/P92/data/undo01_01_001.dbf 3,636 0 1.44 1.00 23,854 1 471 0.02
USERS /database_p92/P92/data/users_01.dbf 42,694 1 0.26 3.22 3 0 0 0.00
WEBM_GCARS /database_p92/P92/data/gcars_webm01_01.dbf 433,402 10 0.87 6.70 4,203 0 0 0.00
WEBM_GCARS /database_p92/P92/data/gcars_webm01_02.dbf 30,512 1 0.78 6.59 177 0 0 0.00
Back to IO Stats
Back to Top
Buffer Pool Statistics
• Standard block size Pools D: default, K: keep, R: recycle
• Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
P Number of Buffers Pool Hit% Buffer Gets Physical Reads Physical Writes Free Buff Wait Writ Comp Wait Buffer Busy Waits
D 412,276 98 1,410,496,350 23,163,712 358,668 0 0 113,881
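A quick cross-check of the "Pool Hit%" column: it is 100 × (1 − physical reads / buffer gets), computed here against the default-pool row above (a sketch using this report's figures, not a general formula for all pool types):

```python
# Default buffer pool hit ratio from the row above.
buffer_gets = 1_410_496_350
physical_reads = 23_163_712

hit_pct = 100 * (1 - physical_reads / buffer_gets)
print(round(hit_pct))  # -> 98, matching the "Pool Hit%" column
```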
Back to Top
Advisory Statistics
• Instance Recovery Stats
• Buffer Pool Advisory
• PGA Aggr Summary
• PGA Aggr Target Stats
• PGA Aggr Target Histogram
• PGA Memory Advisory
• Shared Pool Advisory
• SGA Target Advisory
• Streams Pool Advisory
• Java Pool Advisory
Back to Top
Instance Recovery Stats
• B: Begin snapshot, E: End snapshot
Targt MTTR (s) Estd MTTR (s) Recovery Estd IOs Actual Redo Blks Target Redo Blks Log File Size Redo Blks Log Ckpt Timeout Redo Blks Log Ckpt Interval Redo Blks
B 0 15 740 6357 552960 552960 10000000
E 0 16 1154 6394 552960 552960 10000000
Back to Advisory Statistics
Back to Top
Buffer Pool Advisory
• Only rows with estimated physical reads >0 are displayed
• ordered by Block Size, Buffers For Estimate
P Size for Est (M) Size Factor Buffers for Estimate Est Phys Read Factor Estimated Physical Reads
D 320 0.10 39,580 1.73 5,601,802,781
D 640 0.19 79,160 1.64 5,305,916,030
D 960 0.29 118,740 1.55 5,021,063,670
D 1,280 0.38 158,320 1.47 4,749,609,786
D 1,600 0.48 197,900 1.39 4,490,882,268
D 1,920 0.57 237,480 1.31 4,243,227,897
D 2,240 0.67 277,060 1.24 4,004,711,498
D 2,560 0.77 316,640 1.17 3,773,539,331
D 2,880 0.86 356,220 1.10 3,548,171,745
D 3,200 0.96 395,800 1.03 3,327,360,505
D 3,344 1.00 413,611 1.00 3,229,602,526
D 3,520 1.05 435,380 0.96 3,110,120,534
D 3,840 1.15 474,960 0.90 2,895,689,912
D 4,160 1.24 514,540 0.83 2,683,480,144
D 4,480 1.34 554,120 0.77 2,473,032,604
D 4,800 1.44 593,700 0.70 2,263,982,715
D 5,120 1.53 633,280 0.64 2,056,032,348
D 5,440 1.63 672,860 0.57 1,848,931,270
D 5,760 1.72 712,440 0.51 1,642,466,495
D 6,080 1.82 752,020 0.44 1,436,458,081
D 6,400 1.91 791,600 0.38 1,230,746,280
Back to Advisory Statistics
Back to Top
PGA Aggr Summary
• PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
88.49 57,851 7,525
Back to Advisory Statistics
Back to Top
PGA Aggr Target Stats
• B: Begin snap E: End snap (rows identified with B or E contain data which is absolute, i.e. not diffed over the interval)
• Auto PGA Target - actual workarea memory target
• W/A PGA Used - amount of memory used for all Workareas (manual + auto)
• %PGA W/A Mem - percentage of PGA memory allocated to workareas
• %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
• %Man W/A Mem - percentage of workarea memory under manual control
PGA Aggr Target(M) Auto PGA Target(M) PGA Mem Alloc(M) W/A PGA Used(M) %PGA W/A Mem %Auto W/A Mem %Man W/A Mem Global Mem Bound(K)
B 24,576 22,061 146.34 0.33 0.22 100.00 0.00 1,048,576
E 24,576 21,874 625.65 14.24 2.28 100.00 0.00 1,048,576
Back to Advisory Statistics
Back to Top
PGA Aggr Target Histogram
• Optimal Executions are purely in-memory operations
Low Optimal High Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
2K 4K 662,657 662,657 0 0
64K 128K 6,449 6,449 0 0
128K 256K 3,767 3,767 0 0
256K 512K 6,786 6,786 0 0
512K 1024K 15,527 15,527 0 0
1M 2M 6,259 6,259 0 0
2M 4M 962 962 0 0
4M 8M 1,003 961 42 0
8M 16M 179 151 28 0
16M 32M 155 131 24 0
32M 64M 120 73 47 0
64M 128M 44 33 11 0
128M 256M 41 23 18 0
256M 512M 3 3 0 0
Back to Advisory Statistics
Back to Top
PGA Memory Advisory
• When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc
Count is 0
PGA Target Est (MB) Size Factr W/A MB Processed Estd Extra W/A MB Read/Written to Disk Estd PGA Cache Hit % Estd PGA Overalloc Count
3,072 0.13 5,046,811.55 96,574.45 98.00 0
6,144 0.25 5,046,811.55 93,785.23 98.00 0
12,288 0.50 5,046,811.55 93,785.23 98.00 0
18,432 0.75 5,046,811.55 93,785.23 98.00 0
24,576 1.00 5,046,811.55 92,351.30 98.00 0
29,491 1.20 5,046,811.55 6,347.26 100.00 0
34,406 1.40 5,046,811.55 6,347.26 100.00 0
39,322 1.60 5,046,811.55 6,347.26 100.00 0
44,237 1.80 5,046,811.55 6,347.26 100.00 0
49,152 2.00 5,046,811.55 6,347.26 100.00 0
73,728 3.00 5,046,811.55 6,347.26 100.00 0
98,304 4.00 5,046,811.55 6,347.26 100.00 0
147,456 6.00 5,046,811.55 6,347.26 100.00 0
196,608 8.00 5,046,811.55 6,347.26 100.00 0
Back to Advisory Statistics
Back to Top
Shared Pool Advisory
• SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
• Note there is often a 1:Many correlation between a single logical object in the Library Cache, and the physical
number of memory objects associated with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Shared Pool Size(M) SP Size Factr Est LC Size (M) Est LC Mem Obj Est LC Time Saved (s) Est LC Time Saved Factr Est LC Load Time (s) Est LC Load Time Factr Est LC Mem Obj Hits
704 0.17 422 43,916 17,601,259 0.83 3,806,332 12.03 1,714,877,975
1,120 0.28 836 61,686 18,039,966 0.86 3,367,625 10.64 1,715,800,125
1,536 0.38 1,251 81,087 18,478,288 0.88 2,929,303 9.26 1,716,715,439
1,952 0.48 1,666 112,867 18,916,115 0.90 2,491,476 7.87 1,717,623,024
2,368 0.59 2,081 139,313 19,353,782 0.92 2,053,809 6.49 1,718,523,440
2,784 0.69 2,498 158,689 19,790,880 0.94 1,616,711 5.11 1,719,417,275
3,200 0.79 2,914 176,805 20,226,097 0.96 1,181,494 3.73 1,720,304,999
3,616 0.90 3,329 195,949 20,659,489 0.98 748,102 2.36 1,721,187,183
4,032 1.00 3,744 210,392 21,091,161 1.00 316,430 1.00 1,722,064,445
4,448 1.10 4,159 229,667 21,521,208 1.02 1 0.00 1,722,937,444
4,864 1.21 4,575 243,894 21,949,872 1.04 1 0.00 1,723,806,822
5,280 1.31 4,991 262,744 22,377,472 1.06 1 0.00 1,724,673,196
5,696 1.41 5,406 281,794 22,804,367 1.08 1 0.00 1,725,537,397
6,112 1.52 5,821 299,858 23,230,896 1.10 1 0.00 1,726,400,332
6,528 1.62 6,236 318,558 23,657,293 1.12 1 0.00 1,727,262,774
6,944 1.72 6,651 335,428 24,084,433 1.14 1 0.00 1,728,125,023
7,360 1.83 7,066 353,323 24,511,485 1.16 1 0.00 1,728,987,216
7,776 1.93 7,482 372,251 24,938,037 1.18 1 0.00 1,729,849,400
8,192 2.03 8,055 399,699 25,364,414 1.20 1 0.00 1,730,711,582
Back to Advisory Statistics
Back to Top
SGA Target Advisory
SGA Target Size (M) SGA Size Factor Est DB Time (s) Est Physical Reads
1,860 0.25 10,640,236 4,829,547,534
3,720 0.50 8,743,934 5,098,896,380
5,580 0.75 6,973,979 3,862,281,594
7,440 1.00 5,541,501 3,229,602,470
9,300 1.25 4,441,513 2,060,809,336
11,160 1.50 4,435,417 1,120,349,097
13,020 1.75 4,435,417 1,120,349,097
14,880 2.00 4,435,417 1,120,349,097
Back to Advisory Statistics
Back to Top
Streams Pool Advisory
Size for Est (MB) Size Factor Est Spill Count Est Spill Time (s) Est Unspill Count Est Unspill Time (s)
16 1.00 0 0 0 0
32 2.00 0 0 0 0
48 3.00 0 0 0 0
64 4.00 0 0 0 0
80 5.00 0 0 0 0
96 6.00 0 0 0 0
112 7.00 0 0 0 0
128 8.00 0 0 0 0
144 9.00 0 0 0 0
160 10.00 0 0 0 0
176 11.00 0 0 0 0
192 12.00 0 0 0 0
208 13.00 0 0 0 0
224 14.00 0 0 0 0
240 15.00 0 0 0 0
256 16.00 0 0 0 0
272 17.00 0 0 0 0
288 18.00 0 0 0 0
304 19.00 0 0 0 0
320 20.00 0 0 0 0
Back to Advisory Statistics
Back to Top
Java Pool Advisory
No data exists for this section of the report.
Back to Advisory Statistics
Back to Top
Wait Statistics
• Buffer Wait Statistics
• Enqueue Activity
Back to Top
Buffer Wait Statistics
• ordered by wait time desc, waits desc
Class Waits Total Wait Time (s) Avg Time (ms)
data block 113,410 67 1
undo header 354 0 0
undo block 117 0 0
Back to Wait Statistics
Back to Top
Enqueue Activity
• only enqueues with waits are shown
• Enqueue stats gathered prior to 10g should not be compared with 10g data
• ordered by Wait Time desc, Waits desc
Enqueue Type (Request Reason) Requests Succ Gets Failed Gets Waits Wt Time (s) Av Wt Time(ms)
CF-Controlfile Transaction 22,764 22,764 0 23 0 12.96
JS-Job Scheduler (queue lock) 138,941 138,941 0 26 0 4.81
RO-Multiple Object Reuse (fast object reuse) 405 405 0 45 0 0.98
TM-DML 353,088 353,072 16 1 0 1.00
Back to Wait Statistics
Back to Top
Undo Statistics
• Undo Segment Summary
• Undo Segment Stats
Back to Top
Undo Segment Summary
• Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
• STO - Snapshot Too Old count, OOS - Out of Space count
• Undo segment block stats:
• uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
• eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo TS# Num Undo Blocks (K) Number of Transactions Max Qry Len (s) Max Tx Concurcy Min/Max TR (mins) STO/OOS uS/uR/uU/eS/eR/eU
11 78.03 115,546 12,478 7 15/223.05 0/0 0/0/0/0/0/0
Back to Undo Statistics
Back to Top
Undo Segment Stats
• Most recent 35 Undostat rows, ordered by Time desc
End Time Num Undo Blocks Number of Transactions Max Qry Len (s) Max Tx Concy Tun Ret (mins) STO/OOS uS/uR/uU/eS/eR/eU
29-Nov 13:19 376 1,276 0 5 15 0/0 0/0/0/0/0/0
29-Nov 13:09 711 1,706 1,708 3 39 0/0 0/0/0/0/0/0
29-Nov 12:59 380 1,062 48 4 15 0/0 0/0/0/0/0/0
29-Nov 12:49 253 1,124 1,029 4 28 0/0 0/0/0/0/0/0
29-Nov 12:39 6,675 6,602 422 4 18 0/0 0/0/0/0/0/0
29-Nov 12:29 1,268 2,374 561 4 20 0/0 0/0/0/0/0/0
29-Nov 12:19 1,124 1,360 1,845 3 42 0/0 0/0/0/0/0/0
29-Nov 12:09 1,038 2,076 1,240 3 33 0/0 0/0/0/0/0/0
29-Nov 11:59 323 1,168 634 3 23 0/0 0/0/0/0/0/0
29-Nov 11:49 704 1,290 882 4 27 0/0 0/0/0/0/0/0
29-Nov 11:39 4,341 3,380 277 5 17 0/0 0/0/0/0/0/0
29-Nov 11:29 219 1,344 116 3 15 0/0 0/0/0/0/0/0
29-Nov 11:19 1,746 2,192 190 4 15 0/0 0/0/0/0/0/0
29-Nov 11:09 4,035 3,723 129 5 15 0/0 0/0/0/0/0/0
29-Nov 10:59 136 1,258 92 3 15 0/0 0/0/0/0/0/0
29-Nov 10:49 194 1,010 107 3 15 0/0 0/0/0/0/0/0
29-Nov 10:39 476 1,513 0 3 15 0/0 0/0/0/0/0/0
29-Nov 10:29 157 1,079 0 3 15 0/0 0/0/0/0/0/0
29-Nov 10:19 199 1,625 0 3 15 0/0 0/0/0/0/0/0
29-Nov 10:09 208 1,057 1,732 3 42 0/0 0/0/0/0/0/0
29-Nov 09:59 340 1,352 1,126 3 32 0/0 0/0/0/0/0/0
29-Nov 09:49 278 1,171 519 3 22 0/0 0/0/0/0/0/0
29-Nov 09:39 137 1,172 156 3 16 0/0 0/0/0/0/0/0
29-Nov 09:29 1,237 1,538 0 3 15 0/0 0/0/0/0/0/0
29-Nov 09:19 164 1,242 2,004 3 46 0/0 0/0/0/0/0/0
29-Nov 09:09 215 1,256 2,044 4 47 0/0 0/0/0/0/0/0
29-Nov 08:59 133 929 1,437 2 38 0/0 0/0/0/0/0/0
29-Nov 08:49 101 941 829 2 28 0/0 0/0/0/0/0/0
29-Nov 08:39 135 794 78 2 15 0/0 0/0/0/0/0/0
29-Nov 08:29 167 956 1,408 2 38 0/0 0/0/0/0/0/0
29-Nov 08:19 58 712 802 2 27 0/0 0/0/0/0/0/0
29-Nov 08:09 130 561 1,704 7 42 0/0 0/0/0/0/0/0
29-Nov 07:59 225 801 1,096 4 32 0/0 0/0/0/0/0/0
29-Nov 07:49 252 667 492 3 22 0/0 0/0/0/0/0/0
Back to Undo Statistics
Back to Top
Latch Statistics
• Latch Activity
• Latch Sleep Breakdown
• Latch Miss Sources
• Parent Latch Statistics
• Child Latch Statistics
Back to Top
Latch Activity
• "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for willing-to-wait latch get requests
• "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
• "Pct Misses" for both should be very close to 0.0
Latch Name Get Requests Pct Get Miss Avg Slps/Miss Wait Time (s) NoWait Requests Pct NoWait Miss
ASM db client latch 31,332 0.00 0 0
AWR Alerted Metric Element list 378,074 0.00 0 0
Consistent RBA 80,006 0.00 0 0
FAL request queue 943 0.00 0 0
FAL subheap alocation 943 0.00 0 0
FIB s.o chain latch 958 0.00 0 0
FOB s.o list latch 37,908 0.01 0.00 0 0
In memory undo latch 717,440 0.00 1.12 0 89,543 0.00
JS Sh mem access 296 0.00 0 0
JS mem alloc latch 366 0.00 0 0
JS queue access latch 366 0.00 0 0
JS queue state obj latch 277,882 0.00 0 0
JS slv state obj latch 781 0.00 0 0
KMG MMAN ready and startup request latch 15,714 0.00 0 0
KMG resize request state object freelist 282 0.00 0 0
KTF sga latch 97 0.00 0 11,865 0.00
KWQMN job cache list latch 6 0.00 0 0
KWQP Prop Status 38 0.00 0 0
MQL Tracking Latch 0 0 882 1.36
Memory Management Latch 3,723 0.00 0 15,714 0.00
OS process 24,915 0.00 0 0
OS process allocation 31,704 0.02 0.00 0 0
OS process: request allocation 16,756 0.02 0.00 0 0
PL/SQL warning settings 389,069 0.00 0 0
SQL memory manager latch 19 0.00 0 14,814 0.00
SQL memory manager workarea list latch 1,207,789 0.00 0.00 0 0
Shared B-Tree 92 0.00 0 0
active checkpoint queue latch 56,661 0.27 0.00 0 0
active service list 119,466 0.00 0.40 0 20,828 0.00
archive control 1,425 0.00 0 0
archive process latch 16,765 0.01 0.00 0 0
begin backup scn array 382 0.00 0 0
buffer pool 9,381 0.00 0 0
cache buffer handles 449,368 0.00 0.00 0 0
cache buffers chains 2,787,789,060 0.14 0.00 0 76,173,188 0.00
cache buffers lru chain 2,133,146 0.21 0.01 0 83,829,096 0.07
cache table scan latch 0 0 4,595,474 0.04
channel handle pool latch 17,599 0.04 0.00 0 0
channel operations parent latch 297,286 0.00 0.00 0 0
checkpoint queue latch 1,798,415 0.01 0.00 0 1,087,389 0.01
client/application info 73,501 0.00 0 0
commit callback allocation 71 0.00 0 0
compile environment latch 9,409 0.00 0 0
constraint object allocation 2 0.00 0 0
dictionary lookup 211 0.00 0 0
dml lock allocation 686,045 0.00 0.00 0 0
dummy allocation 18,538 0.01 0.00 0 0
enqueue hash chains 3,058,819 0.00 0.00 0 596 0.00
enqueues 2,155,198 0.01 0.00 0 0
event group latch 8,517 0.02 0.00 0 0
file cache latch 19,543 0.00 0 0
global KZLD latch for mem in SGA 8,457 0.00 0 0
global tx hash mapping 5,368 0.00 0 0
hash table column usage latch 8,720 0.02 0.00 0 25,316,133 0.00
hash table modification latch 1,243 0.00 0 0
internal temp table object number allocation latc 12 0.00 0 0
job workq parent latch 0 0 1,782 0.00
job_queue_processes parameter latch 1,632 0.00 0 0
kks stats 2,989,320 0.01 0.01 0 0
kokc descriptor allocation latch 1,000 0.00 0 0
ksuosstats global area 3,017 0.00 0 0
ktm global data 609 0.00 0 0
kwqbsn:qsga 73 0.00 0 0
lgwr LWN SCN 82,243 0.02 0.00 0 0
library cache 40,336,648 0.03 0.01 1 2,652,129 4.99
library cache load lock 63,042 0.00 0 2 0.00
library cache lock 16,143,331 0.00 0.00 0 2,738 0.00
library cache lock allocation 562,320 0.00 0.00 0 0
library cache pin 12,109,070 0.00 0.00 0 0
library cache pin allocation 313,531 0.00 0 0
list of block allocation 23,939 0.00 0 0
loader state object freelist 16,076 0.00 0 0
logminer context allocation 19 0.00 0 0
longop free list parent 4,986 0.00 0 4,933,639 0.00
message pool operations parent latch 1,037 0.00 0 0
messages 620,906 0.10 0.00 0 0
mostly latch-free SCN 82,905 0.31 0.00 0 0
multiblock read objects 15,750,307 0.01 0.00 0 0
ncodef allocation latch 999 0.00 0 0
object queue header heap 17,294 0.00 0 14,041 0.00
object queue header operation 46,966,714 0.00 0.00 0 737,661 0.01
object stats modification 1,820 0.00 0 0
parallel query alloc buffer 5,924 0.00 0 0
parameter list 33,893 0.00 0 0
parameter table allocation management 9,576 0.00 0 0
post/wait queue 73,101 0.00 0.00 0 47,458 0.00
process allocation 16,756 0.06 0.00 0 8,507 0.12
process group creation 16,756 0.01 0.00 0 0
redo allocation 429,020 0.09 0.00 0 4,647,740 0.01
redo copy 0 0 4,647,940 0.15
redo writing 328,098 0.09 0.00 0 0
resmgr group change latch 14,030 0.00 0 0
resmgr:active threads 18,546 0.00 0 0
resmgr:actses change group 9,437 0.00 0 0
resmgr:actses change state 1 0.00 0 0
resmgr:free threads list 18,485 0.02 0.00 0 0
resmgr:resource group CPU method 1 0.00 0 0
resmgr:schema config 22 0.00 0 0
resmgr:vc list latch 1 0.00 0 0
row cache objects 431,364,390 0.15 0.00 0 28,196 0.06
rules engine aggregate statistics 3 0.00 0 0
rules engine rule set statistics 1,906 0.00 0 0
sequence cache 79,269 0.02 0.00 0 0
session allocation 3,261,244 0.04 0.00 0 0
session idle bit 14,276,536 0.00 0.01 0 0
session state list latch 18,357 0.02 0.00 0 0
session switching 999 0.00 0 0
session timer 20,829 0.00 0 0
shared pool 31,336,074 0.09 0.05 11 0
shared pool sim alloc 181 0.00 0 0
shared pool simulator 190,304,495 0.00 0.01 3 0
simulator hash latch 54,510,474 0.00 0.00 0 0
simulator lru latch 391,723 0.05 0.13 0 50,851,687 0.05
slave class 28 0.00 0 0
slave class create 109 10.09 1.00 1 0
sort extent pool 54,984 0.00 0.00 0 0
state object free list 38 0.00 0 0
statistics aggregation 6,384 0.00 0 0
temp lob duration state obj allocation 23 0.00 0 0
temporary table state object allocation 25 0.00 0 0
threshold alerts latch 3,851 0.00 0 0
transaction allocation 12,699 0.00 0 0
transaction branch allocation 22,548 0.00 0 0
undo global data 803,308 0.00 0.00 0 0
user lock 35,724 0.02 0.00 0 0
Back to Latch Statistics
Back to Top
Latch Sleep Breakdown
• ordered by misses desc
Latch Name Get Requests Misses Sleeps Spin Gets Sleep1 Sleep2 Sleep3
cache buffers chains 2,787,789,060 3,859,278 607 3,858,690 0 0 0
row cache objects 431,364,390 644,720 3 644,717 0 0 0
shared pool 31,336,074 28,673 1,373 27,378 0 0 0
library cache 40,336,648 11,711 124 11,615 0 0 0
shared pool simulator 190,304,495 6,132 63 6,070 0 0 0
cache buffers lru chain 2,133,146 4,543 27 4,516 0 0 0
session allocation 3,261,244 1,289 2 1,287 0 0 0
object queue header operation 46,966,714 509 1 508 0 0 0
library cache lock 16,143,331 449 2 447 0 0 0
kks stats 2,989,320 267 3 264 0 0 0
simulator lru latch 391,723 206 26 180 0 0 0
session idle bit 14,276,536 131 1 130 0 0 0
In memory undo latch 717,440 17 19 2 0 0 0
slave class create 109 11 11 0 0 0 0
active service list 119,466 5 2 3 0 0 0
Back to Latch Statistics
Back to Top
Latch Miss Sources
• only latches with sleeps are shown
• ordered by name, sleeps desc
Latch Name Where NoWait Misses Sleeps Waiter Sleeps
In memory undo latch kticmt: child 0 14 1
In memory undo latch ktiFlush: child 0 6 1
active service list ksws_event: ksws event 0 2 1
cache buffers chains kcbgtcr: kslbegin excl 0 1,099 1,100
cache buffers chains kcbrls: kslbegin 0 139 141
cache buffers chains kcbgcur: kslbegin 0 22 0
cache buffers chains kcbchg: kslbegin: bufs not pinned 0 11 13
cache buffers chains kcbzwb 0 8 2
cache buffers chains kcbnew: new latch again 0 2 0
cache buffers chains kcbzib: multi-block read: nowait 0 2 0
cache buffers chains kcbgtcr: fast path 0 1 0
cache buffers chains kcbzib: finish free bufs 0 1 6
cache buffers lru chain kcbzgws 0 25 0
cache buffers lru chain kcbbwlru 0 1 2
cache buffers lru chain kcbzgm 0 1 0
kks stats kks stats alloc/free 0 3 3
library cache kglhdiv: child 0 23 1
library cache kglivl: child 0 23 1
library cache kglScanDependency 0 20 0
library cache kglhdiv0: child 0 13 0
library cache kglobpn: child: 0 5 37
library cache kglhdiv0: parent: invalidate 0 3 0
library cache kglpndl: child: after processing 0 3 0
library cache kgldti: 2child 0 2 7
library cache kglhdgn: child: 0 1 37
library cache lock kgllkdl: child: no lock handle 0 10 18
library cache lock kgllkdl: child: cleanup 0 2 2
object queue header operation kcbo_ivbo 0 1 0
row cache objects kqrpre: find obj 0 2 2
session allocation kspallmod 0 2 0
session idle bit ksupuc: clear busy 0 1 0
shared pool kghalo 0 655 600
shared pool kghfrunp: alloc: wait 0 254 3
shared pool kghupr1 0 207 444
shared pool kgh: quiesce extents 0 93 0
shared pool kgh: add extent to quiesced list 0 66 1
shared pool kgh_next_free 0 40 0
shared pool kghalp 0 21 177
shared pool kghfre 0 19 119
shared pool kghfrunp: clatch: wait 0 19 3
shared pool kgh: sim resz update 0 13 0
shared pool kghasp 0 5 29
shared pool kghfrunp: clatch: nowait 0 3 0
shared pool simulator kglsim_unpin_simhp 0 59 1
shared pool simulator kglsim_chg_simhp_free 0 4 0
simulator lru latch kcbs_shrink_pool 0 11 0
simulator lru latch kcbs_grow_pool 0 8 0
simulator lru latch kcbs_free_granule_sim_buffers 0 7 0
slave class create ksvcreate 0 11 0
Back to Latch Statistics
Back to Top
Parent Latch Statistics
No data exists for this section of the report.
Back to Latch Statistics
Back to Top
Child Latch Statistics
No data exists for this section of the report.
Back to Latch Statistics
Back to Top
Segment Statistics
• Segments by Logical Reads
• Segments by Physical Reads
• Segments by Row Lock Waits
• Segments by ITL Waits
• Segments by Buffer Busy Waits
Back to Top
Segments by Logical Reads
• Total Logical Reads: 1,411,267,917
• Captured Segments account for 97.9% of Total
Owner Tablespace Name Object Name Subobject Name Obj. Type Logical Reads %Total
GCARS DATA_GCARS SYSTEM_POLICY TABLE 769,298,368 54.51
GCARS INDX_GCARS XIE1CHEQUE INDEX 92,814,208 6.58
GCARS DATA_GCARS CUSTOMER TABLE 63,214,464 4.48
GCARS DATA_GCARS CHEQUE TABLE 60,275,760 4.27
GCARS DATA_GCARS EXCHANGE_RATE2 TABLE 55,376,928 3.92
Back to Segment Statistics
Back to Top
Segments by Physical Reads
• Total Physical Reads: 23,628,785
• Captured Segments account for 83.9% of Total
Owner Tablespace Name Object Name Subobject Name Obj. Type Physical Reads %Total
GCARS DATA_GCARS INVOICE TABLE 12,758,705 54.00
GCARS_OUTBND WEBM_GCARS WEBM_OUT_ACTIVITY TABLE 1,257,402 5.32
GCARS_OUTBND WEBM_GCARS WEBM_OUT_INVOICE_SCRATCH_PAD TABLE 1,160,608 4.91
GCARS_BATCH TOOLS IFACTOR_IN_ACTIVITY TABLE 717,778 3.04
GCARS_BATCH TOOLS IFACTOR_INVOICE TABLE 593,614 2.51
Back to Segment Statistics
Back to Top
Segments by Row Lock Waits
• % of Capture shows % of row lock waits for each top segment compared
• with total row lock waits for all segments captured by the Snapshot
Owner Tablespace Name Object Name Subobject Name Obj. Type Row Lock Waits % of Capture
GCARS INDX_GCARS XPKINVOICE INDEX 424 54.57
GCARS INDX_GCARS XPKCHEQUE_BATCH_HEADER INDEX 105 13.51
GCARS INDX_GCARS XPKCHEQUE INDEX 75 9.65
GCARS INDX_GCARS XPKARC INDEX 43 5.53
GCARS_WBM WEBM_GCARS XIE1WEBM_IN_INVOICE INDEX 24 3.09
Back to Segment Statistics
Back to Top
Segments by ITL Waits
No data exists for this section of the report.
Back to Segment Statistics
Back to Top
Segments by Buffer Busy Waits
• % of Capture shows % of Buffer Busy Waits for each top segment compared
• with total Buffer Busy Waits for all segments captured by the Snapshot
Owner Tablespace Name Object Name Subobject Name Obj. Type Buffer Busy Waits % of Capture
GCARS DATA_GCARS SYSTEM_POLICY TABLE 15 100.00
Back to Segment Statistics
Back to Top
Dictionary Cache Stats
• "Pct Misses" should be very low (< 2% in most cases)
• "Final Usage" is the number of cache entries being used
Cache Get Requests Pct Miss Scan Reqs Pct Miss Mod Reqs Final Usage
dc_awr_control 832 0.00 0 38 1
dc_constraints 915 33.44 0 915 0
dc_database_links 18,471 0.02 0 0 2
dc_files 2,236 0.58 0 0 13
dc_free_extents 4,223 0.00 0 0 41
dc_global_oids 206,466 0.16 0 0 44
dc_histogram_data 42,498,374 0.05 0 24,842 14,734
dc_histogram_defs 10,022,979 0.58 0 23,900 10,080
dc_object_grants 1,178,794 0.13 0 0 704
dc_object_ids 94,939,206 0.01 0 173 2,751
dc_objects 1,364,836 0.71 0 771 2,084
dc_profiles 9,498 0.00 0 0 1
dc_rollback_segments 7,155 0.00 0 0 25
dc_segments 12,156,070 0.05 0 15,836 1,750
dc_sequences 2,643 1.89 0 2,643 11
dc_table_scns 204 81.86 0 2 0
dc_tablespace_quotas 15,672 0.04 0 15,020 1
dc_tablespaces 3,730,697 0.00 0 0 11
dc_usernames 1,268,369 0.01 0 0 56
dc_users 5,406,266 0.00 0 0 88
global database name 87 0.00 0 0 0
outstanding_alerts 1,503 0.13 0 3 19
Back to Top
Library Cache Activity
• "Pct Misses" should be very low
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
BODY 7,334 0.80 117,908 0.24 225 0
CLUSTER 1,148 0.35 2,594 1.12 25 0
INDEX 2,407 86.79 12,294 32.75 55 0
SQL AREA 2,100,837 47.94 281,481,645 0.62 86,488 7,604
TABLE/PROCEDURE 1,164,531 0.35 7,744,538 0.35 13,653 0
TRIGGER 251 0.40 2,502 0.68 16 0
Back to Top
Memory Statistics
• Process Memory Summary
• SGA Memory Summary
• SGA breakdown difference
Back to Top
Process Memory Summary
• B: Begin snap E: End snap
• All rows below contain absolute values (i.e. not diffed over the interval)
• Max Alloc is Maximum PGA Allocation size at snapshot time
• Hist Max Alloc is the Historical Max Allocation for still-connected processes
• ordered by Begin/End snapshot, Alloc (MB) desc
Category Alloc (MB) Used (MB) Avg Alloc (MB) Std Dev Alloc (MB) Max Alloc (MB) Hist Max Alloc (MB) Num Proc Num Alloc
B Other 113.92 2.71 5.41 22 22 42 42
Freeable 28.88 0.00 1.00 0.88 4 29 29
SQL 7.67 5.21 0.23 0.51 3 15 33 27
PL/SQL 0.83 0.62 0.02 0.02 0 3 42 42
E Other 464.36 1.45 2.83 29 35 320 320
Freeable 111.31 0.00 0.70 0.48 5 160 160
SQL 36.24 23.92 0.12 0.67 11 396 312 308
PL/SQL 13.89 5.91 0.04 0.07 1 1 320 320
Back to Memory Statistics
Back to Top
SGA Memory Summary
SGA regions Begin Size (Bytes) End Size (Bytes) (if different)
Database Buffers 3,372,220,416 3,506,438,144
Fixed Size 2,151,144
Redo Buffers 1,081,344
Variable Size 4,425,952,536 4,291,734,808
Back to Memory Statistics
Back to Top
SGA breakdown difference
• ordered by Pool, Name
• N/A value for Begin MB or End MB indicates the size of that Pool/Name was insignificant, or zero in that
snapshot
Pool Name Begin MB End MB % Diff
java free memory 16.00 16.00 0.00
large PX msg pool 1.03 1.03 0.00
large free memory 14.97 14.97 0.00
shared CCursor 553.97 258.30 -53.37
shared Cursor Stats 87.69 87.69 0.00
shared KGH: NO ACCESS 342.93 630.08 83.73
shared PCursor 451.80 223.08 -50.62
shared free memory 1,116.24 1,865.82 67.15
shared kglsim heap 53.22 53.73 0.95
shared kglsim object batch 89.33 89.50 0.20
shared library cache 295.59 165.28 -44.08
shared sql area 1,277.08 1,058.03 -17.15
streams free memory 15.99 15.99 0.00
buffer_cache 3,216.00 3,344.00 3.98
fixed_sga 2.05 2.05 0.00
log_buffer 1.03 1.03 0.00
Back to Memory Statistics
Back to Top
Streams Statistics
• Streams CPU/IO Usage
• Streams Capture
• Streams Apply
• Buffered Queues
• Buffered Subscribers
• Rule Set
Back to Top
Streams CPU/IO Usage
No data exists for this section of the report.
Back to Streams Statistics
Back to Top
Streams Capture
No data exists for this section of the report.
Back to Streams Statistics
Back to Top
Streams Apply
No data exists for this section of the report.
Back to Streams Statistics
Back to Top
Buffered Queues
No data exists for this section of the report.
Back to Streams Statistics
Back to Top
Buffered Subscribers
No data exists for this section of the report.
Back to Streams Statistics
Back to Top
Rule Set
• Rule Sets ordered by Evaluations
Ruleset Name Evals Fast Evals SQL Execs CPU Time Elapsed Time
SYS.ALERT_QUE_R 3 0 0 14 50
Back to Streams Statistics
Back to Top
Resource Limit Stats
• only rows with Current or Maximum Utilization > 80% of Limit are shown
• ordered by resource name
Resource Name Current Utilization Maximum Utilization Initial Allocation Limit
processes 321 600 600 600
sessions 328 607 665 665
Back to Top
init.ora Parameters
Parameter Name Begin value End value (if different)
aq_tm_processes 0
audit_file_dest /disk01/app/oracle/admin/ICEF_P92/audit
audit_trail NONE
background_dump_dest /disk01/app/oracle/admin/ICEF_P92/bdump
compatible 10.2.0.1.0
control_files /database_p92/P92/ctl/cntrlP92_1.ctl, /redo1/P92/ctl/cntrlP92_2.ctl,
/archive_p92/P92/ctl/cntrlP92_3.ctl
core_dump_dest /disk01/app/oracle/admin/ICEF_P92/cdump
cursor_space_for_time FALSE
db_block_size 8192
db_cache_size 16777216
db_domain WORLD
db_file_multiblock_read_count 8
db_files 255
db_name ICEF_P92
disk_asynch_io FALSE
dml_locks 600
java_pool_size 0
job_queue_processes 10
log_archive_dest /archive_p92/P92/arch
log_archive_format P92_%t_%s_%r.arc
log_buffer 1048576
log_checkpoint_interval 10000000
log_checkpoint_timeout 0
log_checkpoints_to_alert FALSE
max_dump_file_size 10000
nls_date_format YYYY-MON-DD
nls_language AMERICAN
nls_numeric_characters .,
nls_sort BINARY
nls_territory AMERICA
open_cursors 300
optimizer_mode ALL_ROWS
os_authent_prefix ops$
os_roles FALSE
pga_aggregate_target 25769803776
pre_page_sga FALSE
processes 600
remote_os_authent FALSE
remote_os_roles FALSE
resource_limit TRUE
rollback_segments r01, r02, r03, r04, r05
session_cached_cursors 100
sga_max_size 7801405440
sga_target 7801405440
shared_pool_size 0
sort_area_retained_size 512000
sort_area_size 1048576
timed_statistics TRUE
undo_management AUTO
undo_tablespace undotbs_01
user_dump_dest /disk01/app/oracle/admin/ICEF_P92/udump
utl_file_dir /tmp, /dbf2/bill_csv, /dbf/xusers/3delta, /dbf/xusers/3delta/inbound,
/dbf/xusers/3delta/outbound, /dbf/xusers/3delta/exception_report,
/dbf/xusers/3delta/inbound/input, /dbf/xusers/3delta/inbound/output
Back to Top
End of Report – Regards, DBA Team
==================================================
Steps to resize redo log file?
An existing redo log file cannot be resized. Instead, add a new log file group with the larger size, switch logging over to the new group, and drop the old group once it is inactive.
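The replacement approach above can be sketched as follows (group numbers, sizes, and file paths are illustrative assumptions, not taken from this database):

```sql
-- Sketch only: group numbers, sizes, and paths are hypothetical.
-- 1. Add new log groups with the desired (larger) size.
alter database add logfile group 4 ('/redo1/P92/redo04.log') size 200m;
alter database add logfile group 5 ('/redo1/P92/redo05.log') size 200m;

-- 2. Switch logs until the old groups are no longer CURRENT or ACTIVE.
alter system switch logfile;
alter system checkpoint;

-- 3. Verify the status, then drop the old group and remove its file at OS level.
select group#, status from v$log;
alter database drop logfile group 1;
```

If a group still shows ACTIVE, issue further log switches and checkpoints before dropping it.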
Steps to find who is Locking Who?
Who is locking whom?
select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block =1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2;
Who is waiting for lock?
select
object_name,
object_type,
session_id,
os_user_name,
oracle_username,
process,
type, -- Type or system/user lock
lmode, -- lock mode in which session holds lock
request,
block,
ctime -- Time since current mode was granted
from
v$locked_object, all_objects, v$lock
where
v$locked_object.object_id = all_objects.object_id AND
v$lock.id1 = all_objects.object_id AND
v$lock.sid = v$locked_object.session_id
order by
session_id, ctime desc, object_name;
Steps to migrate to ASM?
Migrate a non-ASM Instance to an ASM Instance
1. ASM Instance is already Created and Ready to Use.
2. The DB instance was started using an 'spfile'
3. Disk group name: '+DGDATA'
- Set the parameter 'db_create_file_dest' to '+DGDATA':
alter system set db_create_file_dest='+DGDATA' scope=spfile;
- Point the 'control_files' parameter at the ASM destination:
alter system set control_files='+DGDATA' scope=spfile;
- Bounce the database and start it with the NOMOUNT clause:
startup nomount
- From RMAN, restore the control file from its previous destination:
restore controlfile from '/u001/oracle/data/control001.ctl';
- Mount the database:
alter database mount;
- Then copy the datafiles into ASM, switch to the copies, and recover:
backup as copy database format '+DGDATA';
switch database to copy;
recover database;
alter database open;
alter tablespace TEMP add tempfile;
alter database tempfile '/u001/oracle/data/temp001.dbf' drop;
Steps to gather table stats?
begin
dbms_stats.gather_table_stats(
ownname=> 'GCARS',
tabname=> 'INVOICE' ,
estimate_percent=> DBMS_STATS.AUTO_SAMPLE_SIZE,
cascade=> DBMS_STATS.AUTO_CASCADE,
degree=> null,
no_invalidate=> DBMS_STATS.AUTO_INVALIDATE,
granularity=> 'AUTO',
method_opt=> 'FOR ALL COLUMNS SIZE AUTO');
end;
How to administer services in RAC: what is the command? Explain.
srvctl [start|stop|config|enable|disable|relocate|remove] [service|asm|listener|database|instance|nodeapps]
Options: -n <node name>, -d <database name>, -i <instance name>, -s <service name>, -a <available instances>, -p <preferred instances>, -o <ORACLE_HOME>, -A <VIP address/netmask>
How to see the contents of the Oracle Cluster Registry?
The ocrdump utility is used to see the contents of the OCR.
ocrdump <filename> – the output is stored in that file
ocrdump -stdout -xml – the output is displayed on screen in XML format
Steps to partition a table having data and constraints
-- Step 01:
Check Whether table ocdw.DEPOSIT_ACCT_INDICATOR_B can be redefined online.
SQL> exec dbms_redefinition.can_redef_table('OCDW',
'DEPOSIT_ACCT_INDICATOR_B');
PL/SQL procedure successfully completed.
-- Step 02:
Create an interim table ocdw.DEPOSIT_ACCT_INDICATOR_B_ which holds the same
structure as the original table except constraints, indexes, triggers but add the partitioning
attribute.
SQL>@DEPOSIT_ACCT_INDICATOR_B.sql
Later this interim table can be dropped.
The interim table's DDL must not contain NOT NULL or any other constraints. If NOT NULL constraints are defined at table level on the original table, they are enforced as system-generated constraints after all the steps complete, not in the table DDL.
-- Step 03:
Initiate the redefinition process by calling the dbms_redefinition.start_redef_table procedure.
SQL> exec dbms_redefinition.start_redef_table('OCDW',
'DEPOSIT_ACCT_INDICATOR_B', 'DEPOSIT_ACCT_INDICATOR_B_');
PL/SQL procedure successfully completed.
If any error is encountered while running this process, first abort the redefinition process using the statement below:
SQL> exec dbms_redefinition.abort_redef_table('OCDW',
'DEPOSIT_ACCT_INDICATOR_B', 'DEPOSIT_ACCT_INDICATOR_B_');
Resolve the error and restart the redefinition process.
-- Step 04:
Copies the dependent objects of the original table onto the interim table. The
COPY_TABLE_DEPENDENTS Procedure clones the dependent objects of the table
being redefined onto the interim table and registers the dependent objects. But this
procedure does not clone the already registered dependent objects.
In fact COPY_TABLE_DEPENDENTS Procedure is used to clone the dependent objects
like grants, triggers, constraints and privileges from the table being redefined to the
interim table which in fact represents the post-redefinition table.
SQL>@copy_table_dependents.sql
The content of this sql file is as below:
DECLARE
error_count PLS_INTEGER := 0;
BEGIN
dbms_redefinition.copy_table_dependents('OCDW',
'DEPOSIT_ACCT_INDICATOR_B', 'DEPOSIT_ACCT_INDICATOR_B_',1, TRUE,
TRUE, TRUE, FALSE,error_count);
DBMS_OUTPUT.PUT_LINE('errors := ' || TO_CHAR(error_count));
END;
/
If any constraints are created on interim table (even NOT NULL), the above PL/SQL will
fail.
-- Step 05:
Complete the redefinition process by calling the FINISH_REDEF_TABLE procedure.
SQL> exec dbms_redefinition.finish_redef_table('OCDW',
'DEPOSIT_ACCT_INDICATOR_B', 'DEPOSIT_ACCT_INDICATOR_B_');
-- Step 06:
Check the partitioning validation by:
SQL> Select partition_name, high_value from user_tab_partitions where
table_name='DEPOSIT_ACCT_INDICATOR_B';
-- Step 07:
Check index status by
SQL> select index_name , status from user_indexes where
table_name='DEPOSIT_ACCT_INDICATOR_B';
-- Step 08:
Check the constraints
SELECT owner, constraint_name, constraint_type, search_condition, status
FROM dba_constraints WHERE table_name='DEPOSIT_ACCT_INDICATOR_B';
Check for the NOT NULL constraints. Any constraint defined at table level on original
table is now enforced by system generated constraints.
-- Step 09:
Drop the interim table DEPOSIT_ACCT_INDICATOR_B_.
SQL> drop table ocdw.DEPOSIT_ACCT_INDICATOR_B_;
Explain in brief AWR in Oracle 10g?
AWR collects performance statistics including:
• Wait events used to identify performance problems.
• Time model statistics indicating the amount of DB time associated with a process from
the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
• Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
• Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
• Object usage statistics.
• Resource-intensive SQL statements.
This data is gathered and populated into the AWR by the new 10g MMON and MMNL (MMON Lite) processes. MMON is also responsible for issuing alerts when metrics exceed their thresholds.
Defaults: snapshot interval 60 minutes; STATISTICS_LEVEL = TYPICAL; retention 7 days; stored in the SYSAUX tablespace.
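These defaults can be changed with DBMS_WORKLOAD_REPOSITORY; a sketch (the interval and retention values are illustrative):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,          -- snapshot interval in minutes
    retention => 14 * 1440);  -- retention in minutes (14 days)
END;
/
```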
What is Delta Value, metric, sample data, baseline, Wait event?
The change in a statistic over a period of time is called the delta value.
Another type of statistic collected by Oracle is called a metric. A metric is defined as the rate of change in some cumulative statistic.
A third type of statistical data collected by Oracle is sampled data. This sampling is
performed by the active session history (ASH) sampler. ASH samples the current state of
all active sessions. This data is collected into memory and can be accessed by a V$ view.
A statistical baseline is a collection of statistic rates, usually taken over a time period when the system is performing well at peak load.
Wait events are statistics that are incremented by a server process/thread to indicate that it
had to wait for an event to complete before being able to continue processing.
What is dbhome?
dbhome is a script in $ORACLE_HOME/bin that shows the home path set in oratab for a given Oracle database.
How will you collect performance statistics and what is the strategy?
Statement to collect performance statistics of the database:
exec dbms_stats.gather_database_stats(cascade => TRUE, method_opt => 'FOR ALL COLUMNS SIZE AUTO');
Statement to collect performance statistics of a Schema
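The schema-level statement is not shown above; a sketch using DBMS_STATS (the schema name 'GCARS' is reused from the earlier table-stats example):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'GCARS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```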
What is RDA?
Oracle Remote Diagnostics Agent (RDA) is used primarily by Oracle Support. It collects diagnostic information in areas such as:
• Overview
• Operating System Setup
• User Profile
• Performance
• Network
• Oracle Installation
• RDBMS
• RDBMS Log/Trace Files
• Streams Configuration
• Microsoft Languages
• J2EE/OC4J
Remote Diagnostic Agent (RDA) is a command-line diagnostic tool that is executed by an engine
written in the Perl programming language. RDA provides a unified package of support diagnostics
tools and preventive solutions. The data captured provides Oracle Support with a comprehensive
picture of the customer's environment which aids in problem diagnosis.
Oracle Support encourages the use of RDA because it greatly reduces service request resolution
time by minimizing the number of requests from Oracle Support for more information. RDA is
designed to be as unobtrusive as possible; it does not modify systems in any way.
Setup a Transparent Data encryption mechanism to secure columns?
conn sys/sys as sysdba
create tablespace testtb datafile '/u01/myfile.dbf' size 1g;
create user testu identified by testu default tablespace testtb;
grant connect, create table to testu;
alter user testu quota unlimited on testtb;
conn testu/testu
create table testtab (
id number,
data varchar2(30)) tablespace testtb;
insert into testtab values (1, 'This data is not secure');
To prove the data is not secure, open the above datafile ('/u01/myfile.dbf') in a binary editor such as UltraEdit and you will be able to see all the VARCHAR contents in clear text.
Steps to secure data
1. Open the sqlnet.ora file.
2. Add the encryption_wallet_location parameter to sqlnet.ora:
encryption_wallet_location=(source=(method=file)(method_data=(directory='/u01/mysecurefolder')))
3. conn sys/sys as sysdba
4. alter system set encryption key identified by "password";
5. alter system set encryption wallet open identified by "password";
6. conn testu/testu
7. create table testtab2 (id number, data varchar2(30) encrypt) tablespace testtb;
8. insert into testtab2 values (1, 'This data is secure');
9. Check the datafile with the same editor again: the VARCHAR data is no longer readable.
What is enq: TX - row lock contention?
Enqueues are locks that coordinate access to database resources. An enq: wait event indicates that the session is waiting for a lock that is held by another session. The wait event name has the form enq: <enqueue_type> - <related_details>.
The V$EVENT_NAME view provides a complete list of all the enq: wait events.
A TX enqueue is acquired in exclusive mode when a transaction initiates its first change, and is held until the transaction does a COMMIT or ROLLBACK.
Several Situation of TX enqueue:
--------------------------------------
1) Waits for TX in mode 6 occurs when a session is waiting for a row level lock that is already
held by another session. This occurs when one user is updating or deleting a row, which another
session wishes to update or delete. This type of TX enqueue wait corresponds to the wait event
enq: TX - row lock contention.
The solution is to have the first session already holding the lock perform a COMMIT or
ROLLBACK.
2) Waits for TX in mode 4 can occur if a session is waiting due to a potential duplicate in a UNIQUE index. If two sessions try to insert the same key value, the second session has to wait to see whether an ORA-00001 should be raised or not. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
The solution is to have the first session already holding the lock perform a COMMIT or
ROLLBACK.
3)Waits for TX in mode 4 is also possible if the session is waiting due to shared bitmap index
fragment. Bitmap indexes index key values and a range of ROWIDs. Each ‘entry’ in a bitmap
index can cover many rows in the actual table. If two sessions want to update rows covered by
the same bitmap index fragment, then the second session waits for the first transaction to either
COMMIT or ROLLBACK by waiting for the TX lock in mode 4. This type of TX enqueue wait
corresponds to the wait event enq: TX - row lock contention.
Troubleshooting:
To find which SQL is currently waiting:
select sid, sql_text
from v$session s, v$sql q
where sid in (select sid
from v$session where state in ('WAITING')
and wait_class != 'Idle' and event='enq: TX - row lock contention'
and (
q.sql_id = s.sql_id or
q.sql_id = s.prev_sql_id));
The blocking session is,
SQL> select blocking_session, sid, serial#, wait_class, seconds_in_wait from v$session
where blocking_session is not NULL order by blocking_session;
How will you configure EM?
emca is the command used to set it up.
emctl status agent – used to find out the agent status
emctl stop agent – used to stop the agent
emctl resetTZ agent – used to reset the time zone information. This will ask you to log in as SYSMAN and run the following command:
exec mgmt_target.set_agent_tzrgn('recp0002:1831','Canada/Eastern'); -- sets the time zone to Canada/Eastern
emctl start agent – used to start the agent again
If emctl stop agent has also stopped dbconsole, then emctl start dbconsole should be run first.
Who is dbsnmp? How to resolve java.lang.Exception in EM?
DBSNMP is a schema used by the Oracle Intelligent Agent. This user uses the SYSTEM tablespace, with TEMP as its temporary tablespace. This process starts with lsnrctl start and stops with lsnrctl stop.
$ORACLE_HOME/rdbms/admin/catnsnmp.sql - drops dbsnmp user plus some roles
(SNMPAgent/OEM_MONITOR in 9i]
$ORACLE_HOME/rdbms/admin/catsnmp.sql - create the user plus some roles
DBSNMP is the account used by Oracle's Intelligent Agent to log on automatically to remote servers in order to provide information for presentation via Enterprise Manager. DBSNMP has the SELECT ANY DICTIONARY system privilege, which can read the passwords from SYS.USER$ and enables the account to do its work for the Intelligent Agent.
Securing the DBSNMP User
Let's see the privileges granted to DBSNMP.
select
privilege
from
dba_sys_privs
where
grantee = 'DBSNMP'
union all
select
granted_role privilege
from
dba_role_privs
where
grantee = 'DBSNMP'
union all
select
privilege||' on '||owner||'.'||table_name privilege
from
dba_tab_privs
where
grantee = 'DBSNMP'
/
PRIVILEGE
-------------------------------------------
SELECT ANY DICTIONARY
CONNECT
Selecting the privileges of the role CONNECT, we find:
CREATE VIEW
CREATE TABLE
ALTER SESSION
CREATE CLUSTER
CREATE SESSION
CREATE SYNONYM
CREATE SEQUENCE
CREATE DATABASE LINK
Observe how the DBSNMP user can select from any data dictionary view, create objects like
table and view, or can alter session to enable trace. In essence, this is a powerful account. The
select privilege from the data dictionary itself will let the user know all about the database – the
users, tables, views, packages, procedures, and everything else. This account should be as
secured as possible.
However, if you change the password of this user, the Oracle Intelligent Agent will cease to work.
To make it work, you need to update the intelligent agent configuration file snmp_rw.ora under
$ORACLE_HOME/network/admin. Add a line as the following
snmp_connect.SERVICE1.password = 5urf43v3r
where 5urf43v3r is the password of the DBSNMP user. But doesn't the presence of a password in the file make it less secure?
In Oracle 9i, after the IA connects for the first time, it will read the password and encrypt it in the
file. In Oracle 8i, however, the file snmp_rw.ora must be secured using file permissions such as
600.
It might make sense to completely drop the user DBSNMP. This username is used by default, but is not mandatory; any other name can be used. For example, you could create a user ANU that performs this function.
This is a twist on the security-by-obscurity philosophy: a hacker will look for a user named DBSNMP, not ANU. Therefore, changing the user ID that the agent uses makes perfect sense. If
the user ID is changed, the same should be reflected in the parameter file snmp_rw.ora as:
snmp_connect.SERVICE1.name = ANU
In Summary
Here are the steps to secure the DBSNMP user:
1. Stop the agent. From the UNIX prompt as user oracle, issue:
$ lsnrctl dbsnmp_stop (in Oracle8i and below)
$ agentctl stop (in Oracle9i and above)
2. Remove all the jobs and events for this database.
3. Decide on a password and a username, if desired. Create the user and grant proper privileges.
4. Edit the file $ORACLE_HOME/network/admin/snmp_rw.ora to include the following lines:
snmp_connect.SERVICE1.password = 5urf43v3r
snmp_connect.SERVICE1.name = ANU
The second line is needed only if the username is changed, too.
5. Change the permissions on the file snmp_rw.ora to 600.
6. Restart the agents. From the UNIX prompt issue:
$ lsnrctl dbsnmp_start (in Oracle8i and below)
$ agentctl start (in Oracle9i and above)
7. If a different user has been used, the user DBSNMP can be dropped.
* Tip: Change the password of the user DBSNMP and update the file snmp_rw.ora to reflect the
new password.
The java.lang.Exception in EM is rectified by unlocking the DBSNMP account and restarting dbconsole.
What are the steps to reconfigure Database Control?
Login as sys and drop sysman user, mgmt_user role , mgmt_view user
Conn sys/sys as sysdba
Drop user sysman cascade;
Drop role mgmt_user;
Drop user mgmt_view cascade;
Drop public synonym mgmt_target_blockouts;
Drop public synonym SETEMVIEWUSERCONTEXT;
Deconfigure dbcontrol:
emca -deconfig dbcontrol db
Configure dbcontrol:
emca -config dbcontrol db -repos create
What is Block recovery and how to do it.
Block recovery is done when a block is found corrupted. ORA-01578 will appear in alert log for
block corruption.
There are logical corruptions where file id and block id can easily be seen in alert log. Replace F
with file id and B with block id when querying dba_extents for corrupted blocks
SELECT segment_name, segment_type, relative_fno
FROM dba_extents
WHERE file_id = F
AND B BETWEEN block_id AND block_id + blocks - 1;
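For example, if the alert log reported corruption in file 5, block 83 (illustrative values), the substituted query would be:

```sql
-- Identify the segment that owns corrupted block 83 in datafile 5
SELECT segment_name, segment_type, relative_fno
FROM dba_extents
WHERE file_id = 5
AND 83 BETWEEN block_id AND block_id + blocks - 1;
```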
Rebuilding index partitions / subpartitions with corrupted block
This procedure assumes you had this error (corrupted block in index
partition):
ORA-01578: ORACLE data block corrupted (file # 459, block # 15)
ORA-01110: data file 459: '/path/to/datafile.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
1. Find out which partition has corrupted block
select distinct partition_name, index_name
from dba_ind_subpartitions
where subpartition_name in
(
select partition_name
from dba_segments
where tablespace_name in
(
select tablespace_name
from dba_data_files
where file_name = '/path/to/datafile.dbf'
)
)
/
2. Mark the index partition unusable
SQLPLUS> ALTER INDEX schema_name.index_name MODIFY PARTITION
partition_name UNUSABLE;
3. Re-build index partition
If it's not a composite range partition you can simply do the
following:
SQLPLUS> ALTER INDEX schema_name.index_name REBUILD PARTITION
partition_name;
Otherwise, if it is a composite range partition you will get the
following error:
ORA-14287: cannot REBUILD a partition of a Composite Range partitioned
index
What you have to do is rebuild one subpartition at a time. Here's the SQL script for rebuilding subpartitions:
set head off pagesize 0 linesize 100
select 'ALTER INDEX ' || index_owner || '.' || index_name || ' REBUILD
SUBPARTITION ' ||
subpartition_name || ';'
from dba_ind_subpartitions
where subpartition_name in
(
select partition_name
from dba_segments
where tablespace_name in
(
select tablespace_name
from dba_data_files
where file_name = '/path/to/datafile.dbf'
)
)
/
Query v$database_block_corruption view to find corrupted blocks and join this view to
dba_extents to find related segments
SQL> desc v$database_block_corruption
Name Null? Type
----------------------------------------- -------- ----------------------------
FILE# NUMBER
BLOCK# NUMBER
BLOCKS NUMBER
CORRUPTION_CHANGE# NUMBER
CORRUPTION_TYPE VARCHAR2(9)
SQL> desc dba_extents
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER VARCHAR2(30)
SEGMENT_NAME VARCHAR2(81)
PARTITION_NAME VARCHAR2(30)
SEGMENT_TYPE VARCHAR2(18)
TABLESPACE_NAME VARCHAR2(30)
EXTENT_ID NUMBER
FILE_ID NUMBER
BLOCK_ID NUMBER
BYTES NUMBER
BLOCKS NUMBER
RELATIVE_FNO NUMBER
Temporary segments do not require block recovery, as their block ids change each time. For index segments, the index or the index partition can simply be rebuilt instead of performing block recovery. The RMAN BLOCKRECOVER command is used to recover a corrupted block from a backup of the datafile:
blockrecover datafile <datafile no> block <block no>;
backup validate datafile <datafile no>;
To enable block checking, which finds corrupted blocks each time a block is updated and records them in the data dictionary views, set db_block_checking=TRUE.
dbv is the database verification utility, used to verify a datafile:
dbv file=<path of datafile> blocksize=<block size of datafile>
Output will be number of blocks examined, total number of blocks processed, total number of
blocks failing, total number of blocks empty, total number of blocks corrupt, total number of blocks
influx.
The db_block_checking parameter performs block checks for all data blocks, making sure that all data in the block is consistent. The default for db_block_checking is false, because db_block_checking imposes an additional system overhead.
Block checking provides early detection of block corruption, however, it costs between 1-
10% in overhead on the database. The more DML and block writes that occur on a
system (INSERT, UPDATE, DELETE), the more costly db_block_checking becomes.
Note: While the Oracle documentation says that db_block_checking=true only adds a 1
to 10% overhead depending on concurrency of DML, we have seen cases where
db_block_checking=true made the updates run 6x slower.
Any errors encountered by block checking result in an ORA-600 level message in the
alert log. Before setting the db_block_checking parameter to TRUE, first execute the
database verify (dbv) utility against the datafiles to make sure they are free of
corruption. Even when db_block_checking=false, Oracle still provides block checking
for the system tablespace.
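A sketch of enabling the check after verifying the datafiles with dbv (note that from 10gR2 onward the parameter also accepts LOW, MEDIUM and FULL):

```sql
ALTER SYSTEM SET db_block_checking = TRUE SCOPE = BOTH;
-- In SQL*Plus, confirm the setting:
SHOW PARAMETER db_block_checking
```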
What is db_block_checksum
DB_BLOCK_CHECKSUM determines whether DBWn and the direct loader will calculate a
checksum (a number calculated from all the bytes stored in the block) and store it in the
cache header of every data block when writing it to disk. Checksums are verified when a block is read, but only if this parameter is true and the last write of the block stored a checksum. In addition, Oracle gives every log block a checksum before writing it to the current log.
If this parameter is set to false, DBWn calculates checksums only for the SYSTEM
tablespace, but not for user tablespaces.
Checksums allow Oracle to detect corruption caused by underlying disks, storage
systems, or I/O systems. Turning on this feature typically causes only an additional 1% to
2% overhead. Therefore, Oracle recommends that you set DB_BLOCK_CHECKSUM to true.
How can corrupt blocks be caused?
First of all we have two different kinds of block corruption:
- physical corruption (media corrupt)
- logical corruption (soft corrupt)
Physical corruption can be caused by defective memory boards, controllers or broken sectors on a hard disk.
Logical corruption can, among other reasons, be caused by an attempt to recover through a NOLOGGING action.
There are two initialization parameters for dealing with block corruption:
- DB_BLOCK_CHECKSUM (calculates a checksum for each block before it is written to disk, every time)
causes 1-2% performance overhead
- DB_BLOCK_CHECKING (serverprocess checks block for internal consistency after
every DML)
causes 1-10% performance overhead
If performance is not a big issue then you should use these!
Normally RMAN checks only for physically corrupt blocks with every backup it takes and every image copy it makes. This is a common misunderstanding among a lot of DBAs: RMAN does not automatically detect logical corruption by default! We have to tell it to do so by using CHECK LOGICAL.
The info about corruptions can be found in the following views:
SYS @ orcl AS SYSDBA SQL> select * from v$backup_corruption;
RECID      STAMP      SET_STAMP  SET_COUNT  PIECE#  FILE#  BLOCK#  BLOCKS  CORRUPTION_CHANGE#  MAR  CORRUPTION_TYPE
1          586945441  586945402  3          1       5      81      4       0                   YES  CORRUPT
SYS @ orcl AS SYSDBA SQL> select * from v$copy_corruption;
Here is a case study:
HR @ orcl SQL > select last_name, salary
2 from employees;
ERROR at line 2:
ORA-01578: ORACLE data block corrupted (file # 5, block # 83)
# this could be an ORA-26040 in Oracle 8i and before
ORA-01110: data file 5: '/u01/app/oracle/oradata/orcl/example01.dbf'
This is what you find in the alert_<SID>.log:
Wed Apr 5 08:17:40 2006
Hex dump of (file 5, block 83) in trace file
/u01/app/oracle/admin/orcl/udump/orcl_ora_14669.trc
Corrupt block relative dba: 0x01400053 (file 5, block 83)
Bad header found during buffer read
Data in bad block:
type: 67 format: 7 rdba: 0x0a545055
last change scn: 0x0000.0006d162 seq: 0x1 flg: 0x04
spare1: 0x52 spare2: 0x52 spare3: 0x0
consistency value in tail: 0xd1622301
check value in block header: 0x63be
computed block checksum: 0xe420
Reread of rdba: 0x01400053 (file 5, block 83)
found same corrupted data
Wed Apr 5 08:17:41 2006
Corrupt Block Found
TSN = 6, TSNAME = EXAMPLE
RFN = 5, BLK = 83, RDBA = 20971603
OBJN = 51857, OBJD = 51255, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =
Starting with Oracle 9i we can use RMAN
to check a database for both physically and logically corrupt blocks.
Here is the syntax:
RMAN> backup validate check logical database;
Starting backup at 05-04-2006:08:23:20
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=136 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/app/oracle/oradata/orcl/
system01.dbf
input datafile fno=00003 name=/u01/app/oracle/oradata/orcl/
sysaux01.dbf
input datafile fno=00005 name=/u01/app/oracle/oradata/orcl/
example01.dbf
input datafile fno=00002 name=/u01/app/oracle/oradata/orcl/
undotbs01.dbf
input datafile fno=00004 name=/u01/app/oracle/oradata/orcl/
users01.dbf
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current control file in backupset
including current SPFILE in backupset
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current control file in backupset
including current SPFILE in backupset
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 05-04-2006:08:24:10
RMAN does not physically back up the database with this command, but it reads all blocks and checks for corruptions.
If it finds corrupted blocks it will place the information about the corruption into a view:
SYS @ orcl AS SYSDBA SQL > select * from v$database_block_corruption;
FILE#      BLOCK#     BLOCKS     CORRUPTION_CHANGE#  CORRUPTION_TYPE
5          81         4          0                   CORRUPT
This is what we find in the alert_<SID>.log:
Corrupt block relative dba: 0x014000b1 (file 5, block 177)
Bad header found during backing up datafile
Data in bad block:
type: 67 format: 7 rdba: 0x0a545055
last change scn: 0x0000.0007bc77 seq: 0x3 flg: 0x04
spare1: 0x52 spare2: 0x52 spare3: 0x0
consistency value in tail: 0xbc772003
check value in block header: 0xb32
computed block checksum: 0xe4c1
Reread of blocknum=177, file=/u01/app/oracle/oradata/orcl/
example01.dbf.
found same corrupt data
Now we can tell RMAN to recover all the blocks
which it has found as being corrupt:
RMAN> blockrecover corruption list;
# (all blocks from v$database_block_corruption)
Starting blockrecover at 05-04-2006:10:09:15
using channel ORA_DISK_1
channel ORA_DISK_1: restoring block(s) from datafile copy /u01/app/
oracle/flash_recovery_area/ORCL/datafile/o1_mf_example_236tmb1c_.dbf
starting media recovery
archive log thread 1 sequence 2 is already on disk as file /u01/app/oracle/
flash_recovery_area/ORCL/archivelog/2006_04_05/o1_mf_1_2_236wxbsp_.arc
archive log thread 1 sequence 1 is already on disk as file
/u01/app/oracle/oradata/
orcl/redo01.log
media recovery complete, elapsed time: 00:00:01
Finished blockrecover at 05-04-2006:10:09:24
This is in the alert_<SID>.log:
Starting block media recovery
Wed Apr 5 10:09:22 2006
Media Recovery Log /u01/app/oracle/flash_recovery_area/ORCL/
archivelog/2006_04_05/o1_mf_1_2_%u_.arc
Wed Apr 5 10:09:23 2006
Media Recovery Log /u01/app/oracle/flash_recovery_area/ORCL/
archivelog/2006_04_05/o1_mf_1_2_236wxbsp_.arc ( restored)
Wed Apr 5 10:09:23 2006
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /u01/app/oracle/oradata/orcl/redo01.log
Detection of Block Corruption
Oracle Database supports different techniques for detecting, repairing, and
monitoring block corruption. The technique depends on whether the corruption is
interblock corruption or intrablock corruption. In intrablock corruption, the
corruption occurs within the block itself. This corruption can be either physical or
logical. In an interblock corruption, the corruption occurs between blocks and
can only be logical.
VALIDATE DATABASE is another command, used to validate the datafiles and control files of the database.
To check physical corruptions of database and archive logs
Backup validate database archivelog all;
To check logical corruptions of database and archive logs
Backup validate check logical database archivelog all;
You can use the VALIDATE command to manually check for physical and logical
corruptions in database files. This command performs the same types of checks as
BACKUP VALIDATE, but VALIDATE can check a larger selection of objects. For
example, you can validate individual blocks with the VALIDATE DATAFILE ...
BLOCK command.
VALIDATE DATAFILE 1 BLOCK 10;
What is incremental backup and which process tracks it?
Incremental backup is a backup strategy in which backups after level 0 can be differential or cumulative. In a differential incremental level 1 backup, only the difference from the most recent level 1 or level 0 backup is backed up, whereas in a cumulative level 1 backup, the difference from the base level 0 backup is backed up; the level 0 backup acts as the full backup for the next-level backups. So a strategy for incremental backup may include daily differential level 1 backups, weekly cumulative level 1 backups and a monthly full level 0 backup.
Algorithm of incremental backup: the SCN of each block in the input datafile is compared to the SCN of the parent incremental backup. If the SCN of the input datafile block is greater than the SCN of the parent backup file, then RMAN copies that block. If block change tracking is enabled, a separate file is used to track these changed blocks.
Algorithm of block change tracking: the CTWR background process uses the CTWR shared area
in the large pool to store changed-block information. When a checkpoint happens, this
information is written to the change tracking file.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING; uses the DB_CREATE_FILE_DEST
parameter to create the change tracking file.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '<path>'; stores
the file in the given location.
Select filename, status, bytes from v$block_change_tracking;
How will you catalog files deleted at the OS level?
Delete the inaccessible physical logs from the RMAN catalog:
CHANGE ARCHIVELOG ALL CROSSCHECK;
BACKUP ARCHIVELOG ALL NOT BACKED UP SKIP INACCESSIBLE;
Marking copies as unavailable:
CHANGE BACKUPSET 2 UNAVAILABLE;
CHANGE BACKUP OF CONTROLFILE UNAVAILABLE;
CHANGE BACKUP OF SPFILE TAG 'tagname' UNAVAILABLE;
CROSSCHECK ARCHIVELOG ALL;
It helps RMAN to compare the list of archivelogs it "knows" to be present on disk
(which it hasn't deleted yet) with the real list of archivelogs.
The DBA might have manually moved or compressed some archivelogs when the
filesystem was running low on space OR there might be a "background" job that
does its own backups and maintenance of archivelogs.
In both cases, RMAN doesn't know that some archivelogs which it expects to be
present are no longer available.
A CROSSCHECK ARCHIVELOG ALL shows you the archivelogs that RMAN
could not find. You then have the option of running DELETE EXPIRED ARCHIVELOG ALL
to remove them from the repository.
How will you catalog a datafile backup to RMAN?
CATALOG DATAFILECOPY '<path>';
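A sketch of cataloging user-managed copies (the paths below are illustrative):

```sql
-- Catalog a single datafile copy
CATALOG DATAFILECOPY '/u01/backup/users01.dbf';

-- Catalog every backup piece, datafile copy and archived log found under a directory
CATALOG START WITH '/u01/backup/';
```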
Can you drop the DUAL table?
Yes, although it is strongly discouraged, since the data dictionary and many applications
depend on it:
DROP TABLE dual;
CREATE TABLE dual (dummy VARCHAR2 (1));
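Note that DUAL must contain exactly one row and be selectable by all users; a minimal
recreation sketch:

```sql
-- As SYS: recreate DUAL with its single row
CREATE TABLE dual (dummy VARCHAR2(1));
INSERT INTO dual VALUES ('X');
COMMIT;
GRANT SELECT ON dual TO PUBLIC;
```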
What is the difference between REPORT and LIST?
REPORT provides analysis of your backup and recovery situation.
Examples of the REPORT command:
REPORT NEED BACKUP;
REPORT OBSOLETE;
REPORT UNRECOVERABLE;
REPORT SCHEMA;
A backup is determined obsolete based on the retention policy.
The retention policy is of two types: REDUNDANCY <n> and RECOVERY WINDOW OF <n> DAYS.
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RECOVERY WINDOW specifies the age of backups that need to be retained.
REDUNDANCY specifies the number of backups that need to be retained.
To see which files a given policy would consider obsolete:
REPORT OBSOLETE REDUNDANCY 2;
REPORT OBSOLETE RECOVERY WINDOW OF 5 DAYS;
If CONFIGURE RETENTION POLICY TO NONE is set, then REPORT OBSOLETE does not consider
any backup obsolete; in that case REPORT OBSOLETE must be used with explicit options.
The OBSOLETE column of the V$BACKUP_FILES view displays information about obsolete
files/backups.
LIST is used to see what is in the repository; the same information can be queried in
V$BACKUP_FILES.
For example: LIST BACKUPSET;
After using the LIST command, which operates on the repository, one can use CROSSCHECK
to verify whether the files really exist, and DELETE to remove them if desired.
One can list backup pieces, image copies, and backup sets of specified objects such as
datafiles, archived logs, control files, and server parameter files.
CROSSCHECK is used to ensure that data about the backups in the recovery catalog or control
file is in sync with the actual files on the media.
After a crosscheck, the status of each piece will be AVAILABLE, EXPIRED, or UNAVAILABLE.
DELETE EXPIRED is used to delete expired backups.
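The crosscheck/delete-expired maintenance flow can be sketched as:

```sql
CROSSCHECK BACKUP;              -- marks missing pieces EXPIRED
CROSSCHECK ARCHIVELOG ALL;
LIST EXPIRED BACKUP;            -- review what would be removed
DELETE EXPIRED BACKUP;          -- remove expired records from the repository
DELETE EXPIRED ARCHIVELOG ALL;
```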
What is the link between the flash recovery area and RMAN?
If the flash recovery area is not being used, then managing the backup storage area is done
by the DBA: REPORT OBSOLETE must be run and DELETE OBSOLETE used to delete
obsolete files. If the flash recovery area is used, then obsolete files are deleted
automatically, according to the configured retention policy, whenever space is needed.
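To see where the flash recovery area is and how full it is (on 10g the view is named
V$FLASH_RECOVERY_AREA_USAGE; in later releases it is renamed V$RECOVERY_AREA_USAGE):

```sql
SHOW PARAMETER db_recovery_file_dest

SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$flash_recovery_area_usage;
```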
Explain a few format strings of RMAN, like %U and %F
%c The copy number of the backup piece within a set of duplexed
backup pieces. If you did not duplex a backup, then this variable
is 1 for backup sets and 0 for proxy copies.
If one of these commands is enabled, then the variable shows the
copy number. The maximum value for %c is 256.
%d The name of the database.
%D The current day of the month (in format DD)
%F Combination of DBID, day, month, year, and sequence into a unique
and repeatable generated name.
%M The month (format MM)
%n The name of the database, padded on the right with x characters
to a total length of eight characters.
For example, if scott is the database name, then %n = scottxxx.
%p The piece number within the backup set. This value starts at 1
for each backup set and is incremented by 1 as each backup piece
is created. Note: If you specify PROXY, then the %p variable must
be included in the FORMAT string either explicitly or implicitly within %U.
%s The backup set number. This number is a counter in the control file that
is incremented for each backup set. The counter value starts at 1 and is
unique for the lifetime of the control file. If you restore a backup
control file, then duplicate values can result.
Also, CREATE CONTROLFILE initializes the counter back to 1.
%t The backup set time stamp, which is a 4-byte value derived as the
number of seconds elapsed since a fixed reference time.
The combination of %s and %t can be used to form a unique name for
the backup set.
%T The year, month, and day (YYYYMMDD)
%u An 8-character name constituted by compressed representations of
the backup set number and the time the backup set was created.
%U A convenient shorthand for %u_%p_%c that guarantees uniqueness in
generated backup filenames.
If you do not specify a format, RMAN uses %U by default.
%Y The year (YYYY)
%% Specifies the '%' character. e.g. %%Y translates to %Y.
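For example, the format strings above can be combined in a backup command such as
(the path is illustrative):

```sql
-- %d = database name, %T = YYYYMMDD, %U = unique backup piece name
BACKUP DATABASE FORMAT '/u01/backup/%d_%T_%U.bkp';
```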
Explain Patch installation using Opatch utility?
Using the OPatch Utility
The Opatch utility is used to install Oracle patches. Most Oracle DBAs are familiar with
this utility. Every Oracle CPU (Critical Patch Update) is installed using this utility, as
well as all other patches that Oracle produces to fix bugs and update their product.
opatch lsinventory -detail is the command used to query OPatch in order to find out what
patches are installed.
OPatch creates a hidden dotted directory called .patch_storage in the
$ORACLE_HOME. In .patch_storage are directories created by OPatch which have a
name identical to the number of the patch being installed and also contain time and date
of the patch installation:
/export/home/u04/app/oracle/product/10.1.0/db_2/.patch_storage/4193293
Inside each hidden Patch directory are log files describing the patch processes that have
occurred. The instructions for installing each Oracle patch are included with the patch.
Normally, the method is to unpack the patch, move to the directory named after the
patch number, and then run opatch apply.
Patches are of two types: version-based patches and interim patches.
OPatch is used for interim patch installation.
OPatch is a Java utility; it requires ORACLE_HOME, an OUI installation, and
LD_LIBRARY_PATH to be set.
The OPatch utility updates the Oracle inventory contents.
Opatch apply
Post installation:
Shutdown immediate
Startup migrate
@catpatch.sql
@utlrp.sql
Shutdown immediate
Startup
To apply a patch with OPatch, both the database and the listener must be down, as OPatch
will update your current ORACLE_HOME with the patches.
For a single instance a rolling (no-outage) patch is not possible,
but for a RAC instance it is possible,
as in RAC there will be two separate Oracle Homes and two separate instances running,
one instance on each ORACLE_HOME.
use this command:
opatch napply -skip_subset -skip_duplicate -local -oh $ORACLE_HOME
Using the -local parameter together with -oh $ORACLE_HOME means this patch session
will only apply the patch to the currently sourced ORACLE_HOME.
steps before applying patch:
----------------------------
1) check the database status.
wch_db.sql
-----------
select name,
open_mode,
database_name,
created,
log_mode,
platform_name
from v$database;
2) Check for invalid objects.
user_inv.sql
============
SELECT owner,
COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner;
count_inv.sql
-------------
select count(*)
from dba_objects
WHERE status ='INVALID';
3) Take a backup of the invalid objects
create table bk_inv_ as select * from dba_objects
where status='INVALID';
4) check opatch version using
opatch -v
if the OPatch version is not compatible, check the readme file,
download the latest version, and uncompress it
in $ORACLE_HOME.
5) check whether the oraInst.loc file points to your current $ORACLE_HOME or not.
cat /etc/oraInst.loc
inventory_loc=/u01/app/oracle/10.2.0/GlobalOraInv
inst_group=dba
if your server has more than one $ORACLE_HOME, then comment out the other
$ORACLE_HOME entries and
uncomment the current $ORACLE_HOME;
the inventory must point to the current $ORACLE_HOME which is being patched.
6) check free space on $ORACLE_HOME
df -h $ORACLE_HOME
7) check for the required utilities, e.g.
which ld
which ar
which make
etc as per readme file.
8) unzip the patch
unzip -d /loc_2_unzip p.zip
9) Go to the patch directory
cd /loc_2_unzip/patch_number
10) Bring down the listener.
cd $ORACLE_HOME/bin
lsnrctl stop
11) Bring down the database
Shutdown immediate.
12) add OPatch to the PATH
export PATH=$PATH:$HOME:$ORACLE_HOME/OPatch:/bin
13) Start the patch
skip_duplicate
Skips patches to be applied that are duplicates of other patches installed in the Oracle
home. Two patches are duplicates if they fix the same set of bugs.
skip_subset
Skips patches to be applied that are subsets of other patches installed in the Oracle home.
One patch is a subset of another patch if the former fixes a subset of bugs fixed by the
latter.
opatch napply -skip_subset -skip_duplicate
For a RAC database the database can stay up, as it may have more than one instance:
you can bring down one instance and its listener, apply the patch, open it again,
and then do the same on the other node.
This way the database remains up and users face no outage.
To apply OPatch in a RAC instance. (Can we use -skip_subset -skip_duplicate with opatch
napply only? And is it right that if we are installing a patch for the first time on an
Oracle software installation we will generally use opatch apply? Say I have applied the
October CPU patch and I am applying the November patch; while applying the Oct patch I would
just use opatch apply, and while applying the Nov patch I would use opatch napply -skip_subset -
skip_duplicate?)
--------------------------------------------------------
The readme states whether you should use opatch apply or opatch napply.
I am not sure whether the November CPU patch is a bundle of multiple patches or a single patch.
If it is a bundle of multiple patches, then you should use opatch napply, irrespective of
whether you applied any patch on your software previously. apply versus napply does not
depend on the previous patches applied to your database; it depends only on the current
OPatch session. If you need to install just one patch, use opatch apply. And if
you want to install multiple patches in a single session, use opatch napply.
And regarding -skip_subset and -skip_duplicate, you are correct: they should be used only
with opatch napply.
For more info use below command
opatch apply -help
opatch napply -help
--------------------------------------------------------
opatch napply -skip_subset -skip_duplicate -local -oh $ORACLE_HOME
Using the -local parameter together with
-oh $ORACLE_HOME means this patch session will apply the patch to the current
ORACLE_HOME only.
--------------------------------------------------------
. All-Node Patch
. Shutdown all Oracle instances on all nodes
. Apply the patch to all nodes
. Bring all nodes up
. Minimum downtime
. Shutdown the Oracle instance on node 1
. Apply the patch to the Oracle instance on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the Oracle instance on node 2
. Shutdown the Oracle instance on node 3
. At this point, instances on nodes 1 and 2 can be brought up
. Apply the patch to the Oracle instance on node 3
. Startup the Oracle instance on node 3
. (no downtime)
. Shutdown the Oracle instance on node 1
. Apply the patch to the Oracle instance on node 1
. Start the Oracle instance on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the Oracle instance on node 2
. Start the Oracle instance on node 2
. Shutdown the Oracle instance on node 3
. Apply the patch to the Oracle instance on node 3
. Start the Oracle instance on node 3
-------------------------------------------------------------
14) Once the patch installation is completed, perform the post-patching steps.
a) start up the instance
startup
b) Load the modified SQL files into the database.
@$ORACLE_HOME/rdbms/admin/catbundle.sql cpu apply
to check the logs generated
catbundle_CPU__APPLY_.log
catbundle_CPU__GENERATE_.log
c) Recompiling Views in the Database
shutdown immediate
startup upgrade
@$ORACLE_HOME/cpu/view_recompile/view_recompile_jan2008cpu.sql
shutdown immediate
startup
If it is a RAC instance.
shutdown
startup nomount
alter system set cluster_database=false scope=spfile;
shutdown
startup upgrade
@?/cpu/view_recompile/view_recompile_jan2008cpu.sql
shutdown
startup
alter system set cluster_database=true scope=spfile;
restart the database.
cd $CRS_HOME/bin
srvctl start database -d
15) If any invalid objects were reported, run the utlrp.sql script as follows
user_inv.sql
============
SELECT owner,
COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner;
count_inv.sql
-------------
select count(*)
from dba_objects
WHERE status ='INVALID';
if any new invalid objects are seen, then again take a backup of the invalid objects and compile them.
create table bk_inv_ as select * from dba_objects
where status='INVALID';
@?/rdbms/admin/utlrp.sql --- to compile the invalid objects.
16) Confirm at the database level also that the patch has been applied successfully.
post_patch.sql
--------------
col action_time for a40
col action for a15
col namespace for a15
col version for a15
col comments for a40
set pages 1000
set lines 170
select * from registry$history ;
Explain Patch Installation Steps (Version-Based, without OPatch)?
Preinstallation tasks:
Review known installation issues
Identify the Oracle database installation
Back up the original homes
opatch lsinventory -all
Download and extract the installation software
Upgrade the Oracle time zone definitions
Shut down the Oracle database
Stop all processes:
isqlplusctl stop
emctl stop dbconsole
lsnrctl stop
Installation Tasks:
export DISPLAY=localhost:0.0
xhost +[fully qualified remote]
cd patchinstallationdirectory/disk1
./runInstaller
Run root.sh as root user
For non-interactive installations:
./runInstaller -silent -responseFile responsefile
Run root.sh
Post Installation Tasks:
Start the services
dbua -silent -dbname $ORACLE_SID -oraclehome $ORACLE_HOME -sysdbausername
username -syspassword password -recompile_invalid_objects true
If using RMAN:
rman catalog username/password@rmandb
upgrade catalog;
Or:
sqlplus / as sysdba
startup upgrade
spool patch.log
@?/rdbms/admin/catupgrd.sql
spool off
shutdown immediate
startup
Recompile objects using:
@?/rdbms/admin/utlrp.sql
Check the status of components in dba_registry
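The component check mentioned above can be done with a query such as:

```sql
-- Verify component versions and status after the upgrade
SELECT comp_name, version, status
FROM   dba_registry;
```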
Steps to downgrade or remove a patch?
sqlplus /nolog
connect / as sysdba
shutdown immediate
Take a backup of the catreload.sql and tnsnames.ora files
Log in to the database
startup downgrade
@?/rdbms/admin/catdwgrd.sql -> spool and review
shutdown immediate
Restore the original homes
Copy the catreload.sql and tnsnames.ora files
to their original locations
startup downgrade
@?/rdbms/admin/catreload.sql
shutdown immediate
Recompile invalid objects:
@?/rdbms/admin/utlrp.sql
Query the dba_registry
If it is a 10.1 downgrade, then emca -restore db is required
Steps to clone database using RMAN
A powerful feature of RMAN is the ability to duplicate (clone) a database from a
backup. It is possible to create a duplicate database on:
A remote server with the same file structure
A remote server with a different file structure
The local server with a different file structure
A duplicate database is distinct from a standby database, although both types of databases
are created with the DUPLICATE command. A standby database is a copy of the primary
database that you can update continually or periodically by using archived logs from the
primary database. If the primary database is damaged or destroyed, then you can perform
failover to the standby database and effectively transform it into the new primary
database. A duplicate database, on the other hand, cannot be used in this way: it is not
intended for failover scenarios and does not support the various standby recovery and
failover options.
To prepare for database duplication, you must first create an auxiliary instance. For the
duplication to work, you must connect RMAN to both the target (primary) database and
an auxiliary instance started in NOMOUNT mode.
So long as RMAN is able to connect to the primary and duplicate instances, the RMAN
client can run on any machine. However, all backups, copies of datafiles, and archived
logs used for creating and recovering the duplicate database must be accessible by the
server session on the duplicate host.
As part of the duplicating operation, RMAN manages the following:
Restores the target datafiles to the duplicate database and performs incomplete recovery
by using all available backups and archived logs.
Shuts down and starts the auxiliary database.
Opens the duplicate database with the RESETLOGS option after incomplete recovery to
create the online redo logs.
Generates a new, unique DBID for the duplicate database.
Preparing the Duplicate (Auxiliary) Instance for Duplication
Create an Oracle Password File
First we must create a password file for the duplicate instance.
export ORACLE_SID=APP2
orapwd file=orapwAPP2 password=manager entries=5 force=y
Ensure Oracle Net Connectivity to both Instances
Next add the appropriate entries into the TNSNAMES.ORA and LISTENER.ORA files
in the $TNS_ADMIN directory.
LISTENER.ORA
APP1 = Target Database, APP2 = Auxiliary Database
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = APP1.WORLD)
(ORACLE_HOME = /opt/oracle/product/10.2.0)
(SID_NAME = APP1)
)
(SID_DESC =
(GLOBAL_DBNAME = APP2.WORLD)
(ORACLE_HOME = /opt/oracle/product/10.2.0)
(SID_NAME = APP2)
)
)
TNSNAMES.ORA
APP1 = Target Database, APP2 = Auxiliary Database
APP1.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = APP1.WORLD)
)
)
APP2.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = gentic)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = APP2.WORLD)
)
)
SQLNET.ORA
NAMES.DIRECTORY_PATH= (TNSNAMES)
NAMES.DEFAULT_DOMAIN = WORLD
NAME.DEFAULT_ZONE = WORLD
USE_DEDICATED_SERVER = ON
Now restart the Listener
lsnrctl stop
lsnrctl start
Create an Initialization Parameter File for the Auxiliary Instance
Create an INIT.ORA parameter file for the auxiliary instance; you can copy the one from the
target instance and then modify the parameters.
### Duplicate Database
### -----------------------------------------------
# This is only used when you duplicate the database
# on the same host to avoid name conflicts
DB_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/)
LOG_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/,
/opt/oracle/db/APP1/,/opt/oracle/db/APP2/)
### Global database name is db_name.db_domain
### -----------------------------------------
db_name = APP2
db_unique_name = APP2_GENTIC
db_domain = WORLD
service_names = APP2
instance_name = APP2
### Basic Configuration Parameters
### ------------------------------
compatible = 10.2.0.4
db_block_size = 8192
db_file_multiblock_read_count = 32
db_files = 512
control_files = /u01/oracle/db/APP2/con/APP2_con01.con,
/opt/oracle/db/APP2/con/APP2_con02.con
### Database Buffer Cache, I/O
### --------------------------
# The Parameter SGA_TARGET enables Automatic Shared Memory Management
sga_target = 500M
sga_max_size = 600M
### REDO Logging without Data Guard
### -------------------------------
log_archive_format = APP2_%s_%t_%r.arc
log_archive_max_processes = 2
log_archive_dest = /u01/oracle/db/APP2/arc
### System Managed Undo
### -------------------
undo_management = auto
undo_retention = 10800
undo_tablespace = undo
### Traces, Dumps and Passwordfile
### ------------------------------
audit_file_dest = /u01/oracle/db/APP2/adm/admp
user_dump_dest = /u01/oracle/db/APP2/adm/udmp
background_dump_dest = /u01/oracle/db/APP2/adm/bdmp
core_dump_dest = /u01/oracle/db/APP2/adm/cdmp
utl_file_dir = /u01/oracle/db/APP2/adm/utld
remote_login_passwordfile = exclusive
Create a full Database Backup
Make sure that a full backup of the target is accessible on the duplicate host. You can use
the following BASH script to backup the target database.
rman nocatalog target / <<-EOF
configure retention policy to recovery window of 3 days;
configure backup optimization on;
configure controlfile autobackup on;
configure default device type to disk;
configure device type disk parallelism 1 backup type to compressed backupset;
configure datafile backup copies for device type disk to 1;
configure maxsetsize to unlimited;
configure snapshot controlfile name to '/u01/backup/snapshot_controlfile';
show all;
run {
allocate channel ch1 type Disk maxpiecesize = 1900M;
backup full database noexclude
include current controlfile
format '/u01/backup/datafile_%s_%p.bak'
tag 'datafile_daily';
}
run {
allocate channel ch1 type Disk maxpiecesize = 1900M;
backup archivelog all
delete all input
format '/u01/backup/archivelog_%s_%p.bak'
tag 'archivelog_daily';
}
run {
allocate channel ch1 type Disk maxpiecesize = 1900M;
backup format '/u01/backup/controlfile_%s.bak' current controlfile;
}
crosscheck backup;
list backup of database;
report unrecoverable;
report schema;
report need backup;
report obsolete;
delete noprompt expired backup of database;
delete noprompt expired backup of controlfile;
delete noprompt expired backup of archivelog all;
delete noprompt obsolete recovery window of 3 days;
quit
EOF
Creating a Duplicate Database on the Local Host
Before beginning RMAN duplication, use SQL*Plus to connect to the auxiliary instance
and start it in NOMOUNT mode. If you do not have a server-side initialization parameter
file for the auxiliary instance in the default location, then you must specify the client-side
initialization parameter file with the PFILE parameter on the DUPLICATE command.
Get original Filenames from TARGET
To rename the database files you can use the SET NEWNAME command. Therefore, get
the original filenames from the target and modify these names in the DUPLICATE
command.
ORACLE_SID=APP1
export ORACLE_SID
set feed off
set pagesize 10000
column name format a40 heading "Datafile"
column file# format 99 heading "File-ID"
select name, file# from v$dbfile;
column member format a40 heading "Logfile"
column group# format 99 heading "Group-Nr"
select member, group# from v$logfile;
Datafile File-ID
---------------------------------------- -------
/u01/oracle/db/APP1/sys/APP1_sys1.dbf 1
/u01/oracle/db/APP1/sys/APP1_undo1.dbf 2
/u01/oracle/db/APP1/sys/APP1_sysaux1.dbf 3
/u01/oracle/db/APP1/usr/APP1_users1.dbf 4
Logfile Group-Nr
---------------------------------------- --------
/u01/oracle/db/APP1/rdo/APP1_log1A.rdo 1
/opt/oracle/db/APP1/rdo/APP1_log1B.rdo 1
/u01/oracle/db/APP1/rdo/APP1_log2A.rdo 2
/opt/oracle/db/APP1/rdo/APP1_log2B.rdo 2
/u01/oracle/db/APP1/rdo/APP1_log3A.rdo 3
/opt/oracle/db/APP1/rdo/APP1_log3B.rdo 3
/u01/oracle/db/APP1/rdo/APP1_log4A.rdo 4
/opt/oracle/db/APP1/rdo/APP1_log4B.rdo 4
/u01/oracle/db/APP1/rdo/APP1_log5A.rdo 5
/opt/oracle/db/APP1/rdo/APP1_log5B.rdo 5
/u01/oracle/db/APP1/rdo/APP1_log6A.rdo 6
/opt/oracle/db/APP1/rdo/APP1_log6B.rdo 6
/u01/oracle/db/APP1/rdo/APP1_log7A.rdo 7
/opt/oracle/db/APP1/rdo/APP1_log7B.rdo 7
/u01/oracle/db/APP1/rdo/APP1_log8A.rdo 8
/opt/oracle/db/APP1/rdo/APP1_log8B.rdo 8
/u01/oracle/db/APP1/rdo/APP1_log9A.rdo 9
/opt/oracle/db/APP1/rdo/APP1_log9B.rdo 9
/u01/oracle/db/APP1/rdo/APP1_log10A.rdo 10
/opt/oracle/db/APP1/rdo/APP1_log10B.rdo 10
Create Directories for the duplicate Database
mkdir -p /u01/oracle/db/APP2
mkdir -p /opt/oracle/db/APP2
cd /opt/oracle/db/APP2
mkdir con rdo
cd /u01/oracle/db/APP2
mkdir adm arc con rdo sys tmp usr bck
cd adm
mkdir admp bdmp cdmp udmp utld
Create Symbolic Links to Password and INIT.ORA File
Oracle must be able to locate the Password and INIT.ORA File.
cd $ORACLE_HOME/dbs
ln -s /home/oracle/config/10.2.0/orapwAPP2 orapwAPP2
ln -s /home/oracle/config/10.2.0/initAPP2.ora initAPP2.ora
Duplicate the Database
Now you are ready to duplicate the database APP1 to APP2.
ORACLE_SID=APP2
export ORACLE_SID
sqlplus sys/manager as sysdba
startup force nomount pfile='/home/oracle/config/10.2.0/initAPP2.ora';
exit;
rman TARGET sys/manager@APP1 AUXILIARY sys/manager@APP2
Recovery Manager: Release 10.2.0.4.0 - Production on Tue Oct 28 12:00:13 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: APP1 (DBID=3191823649)
connected to auxiliary database: APP2 (not mounted)
RUN
{
SET NEWNAME FOR DATAFILE 1 TO '/u01/oracle/db/APP2/sys/APP2_sys1.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/u01/oracle/db/APP2/sys/APP2_undo1.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/u01/oracle/db/APP2/sys/APP2_sysaux1.dbf';
SET NEWNAME FOR DATAFILE 4 TO '/u01/oracle/db/APP2/usr/APP2_users1.dbf';
DUPLICATE TARGET DATABASE TO APP2
PFILE = /home/oracle/config/10.2.0/initAPP2.ora
NOFILENAMECHECK
LOGFILE GROUP 1 ('/u01/oracle/db/APP2/rdo/APP2_log1A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log1B.rdo') SIZE 10M REUSE,
GROUP 2 ('/u01/oracle/db/APP2/rdo/APP2_log2A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log2B.rdo') SIZE 10M REUSE,
GROUP 3 ('/u01/oracle/db/APP2/rdo/APP2_log3A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log3B.rdo') SIZE 10M REUSE,
GROUP 4 ('/u01/oracle/db/APP2/rdo/APP2_log4A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log4B.rdo') SIZE 10M REUSE,
GROUP 5 ('/u01/oracle/db/APP2/rdo/APP2_log5A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log5B.rdo') SIZE 10M REUSE,
GROUP 6 ('/u01/oracle/db/APP2/rdo/APP2_log6A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log6B.rdo') SIZE 10M REUSE,
GROUP 7 ('/u01/oracle/db/APP2/rdo/APP2_log7A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log7B.rdo') SIZE 10M REUSE,
GROUP 8 ('/u01/oracle/db/APP2/rdo/APP2_log8A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log8B.rdo') SIZE 10M REUSE,
GROUP 9 ('/u01/oracle/db/APP2/rdo/APP2_log9A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log9B.rdo') SIZE 10M REUSE,
GROUP 10 ('/u01/oracle/db/APP2/rdo/APP2_log10A.rdo',
'/opt/oracle/db/APP2/rdo/APP2_log10B.rdo') SIZE 10M REUSE;
}
The whole, long output is not shown here, but check that RMAN was able to open the
duplicate database with the RESETLOGS option.
.....
.....
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 28-OCT-08
As the final step, remove or comment out the DB_FILE_NAME_CONVERT and
LOG_FILE_NAME_CONVERT parameters in the INIT.ORA file and restart the database.
initAPP2.ora
### Duplicate Database
### -----------------------------------------------
# This is only used when you duplicate the database
# on the same host to avoid name conflicts
# DB_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/)
# LOG_FILE_NAME_CONVERT = (/u01/oracle/db/APP1/,/u01/oracle/db/APP2/,
/opt/oracle/db/APP1/,/opt/oracle/db/APP2/)
sqlplus / as sysdba
shutdown immediate;
startup;
Total System Global Area 629145600 bytes
Fixed Size 1269064 bytes
Variable Size 251658936 bytes
Database Buffers 373293056 bytes
Redo Buffers 2924544 bytes
Database mounted.
Database opened.
Moving, copying or cloning a database from one server to another with different
directory structures with RMAN
Moving, copying or cloning a database from one server to another with different
directory structures can be easily accomplished with RMAN. Imagine that you have a
database on one node and you want to copy it to another node, without shutting down your
database, and move your datafiles to a different directory structure. This will be
demonstrated here using RMAN.
ASSUMPTIONS
Source Database
* 10.2.0.4 database online (sid neo) at server1 (app)
* archivelog mode is enabled
* db datafiles are in the directory /opt/oracle/oradata/neo2
* database will be backed up online with RMAN to /u01/backup
Destination Database
* 10.2.0.4 Oracle Home installed without any database running at server2
(mynode2.com)
* db datafiles must be created / moved to different directory: /opt/oracle/oradata/neo
* only the manual backup created at server1 will be moved to server2
AT SERVER1
Log on to server1 as the oracle software owner and set your environment variables. Then
open RMAN and back up the source database we want to copy/move/clone.
[oracle@neoface oracle]$ export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
[oracle@neoface oracle]$ export ORACLE_SID=neo
[oracle@neoface oracle]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@neoface oracle]$ rman target /
RMAN> backup database plus archivelog;
cf_NEO_c-1689570411-20090106-00 (control file backup)
back_NEO_675389594_736_1
back_NEO_675389780_737_1
back_NEO_675390018_738_1
back_NEO_675390293_739_1
Copy those 5 backup files to server2
[oracle@neoface oracle]$ scp /u01/backup/back_NEO*
root@mynode2.com:/u01/backup/
Create an initialization file (pfile) from the current spfile. Then copy it to the server2.
[oracle@neoface oracle]$ sqlplus "/ as sysdba"
SQL> create pfile from spfile;
SQL> exit;
[oracle@neoface oracle]$ scp /opt/oracle/product/10.2.0/db_1/dbs/initneo.ora
oracle@mynode2.com:/opt/oracle/product/10.2.0/db_1/dbs/initneo.ora
AT SERVER2
Logon at server2 to do the following steps:
* create the OS directories to hold the datafiles and the admin log files and pfile:
* edit the pfile to modify the instance name in parameters like bdump, udump, etc
* change the ownership of the pfile so it belongs to the oracle user
* connect to RMAN and startup the database in nomount mode
* restore the control file from the backup
* mount the database
* validate the catalog by crosschecking and cataloging the 4 backup pieces we copied
* rename the datafiles and redo log files and restore the database
Switch to oracle user and create datafiles directories :
[root@mynode2 root] su - oracle
[oracle@mynode2 oracle]$ mkdir /opt/oracle/admin/neo -p
[oracle@mynode2 oracle]$ cd /opt/oracle/admin/neo
[oracle@mynode2 oracle]$ mkdir cdump udump bdump pfile
[oracle@mynode2 oracle]$ mkdir /opt/oracle/oradata/neo -p
Edit your pfile according to your new directory structure:
[oracle@mynode2 oracle]$ vi /opt/oracle/product/10.2.0/db_1/dbs/initneo.ora
Set environment variables and start working on RMAN:
[oracle@mynode2 oracle]$ export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
[oracle@mynode2 oracle]$ export ORACLE_SID=neo
[oracle@mynode2 oracle]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@mynode2 oracle]$ rman target /
RMAN> startup nomount
RMAN> restore controlfile from '/u01/backup/cf_NEO_c-1689570411-20090106-00';
RMAN> alter database mount ;
RMAN> exit
Now that the database is mounted, we'll check the correct database SCN from the current
log; we'll use it later to recover the database. Take note of your current SCN.
[oracle@mynode2 oracle]$ sqlplus "/ as sysdba"
SQL> select group#, first_change#, status, archived from v$log;
GROUP# FIRST_CHANGE# STATUS ARC
---------- ------------- ---------------- ---
1 336565140 ACTIVE YES
2 336415067 CURRENT NO
3 336523814 INACTIVE YES
SQL> exit;
[oracle@mynode2 oracle]$ rman target /
As we only copied to this server the backup we created at the beginning, and did not
copy all the backups we had on server1, we must crosscheck the catalog against the OS
files. Run the following commands at the RMAN prompt:
RMAN> CROSSCHECK backup;
RMAN> CROSSCHECK copy;
RMAN> CROSSCHECK backup of database;
RMAN> CROSSCHECK backup of controlfile;
RMAN> CROSSCHECK archivelog all;
Now let's catalog the 4 backup pieces that we copied to server2:
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675389594_736_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675389780_737_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675390018_738_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675390293_739_1';
Next, as we changed the directory of our datafiles, we must rename the redo logs:
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo01.log' to
'/opt/oracle/oradata/neo/redo01.log';
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo02.log' to
'/opt/oracle/oradata/neo/redo02.log';
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo03.log' to
'/opt/oracle/oradata/neo/redo03.log';
If you use BLOCK CHANGE TRACKING to allow fast incremental backups, and you
want to move the datafiles to a different directory, you must disable this feature and
re-enable it specifying the new directory:
RMAN> ALTER DATABASE disable block change tracking;
RMAN> ALTER DATABASE enable block change tracking using file
'/opt/oracle/oradata/neo/block_change_tracking.f';
This will avoid errors like ORA-19751 and ORA-19750.
Now let's run the script that will restore our database, renaming the datafiles and
recovering up to the archivelog with SCN 336415067, the current one.
RMAN> run {
set newname for datafile 1 to “/opt/oracle/oradata/neo/system01.dbf”;
set newname for datafile 2 to “/opt/oracle/oradata/neo/undotbs01.dbf”;
set newname for datafile 3 to “/opt/oracle/oradata/neo/sysaux01.dbf”;
set newname for datafile 4 to “/opt/oracle/oradata/neo/data01.dbf”;
set newname for datafile 5 to “/opt/oracle/oradata/neo/index01.dbf”;
set newname for datafile 6 to “/opt/oracle/oradata/neo/users01.dbf”;
set newname for datafile 7 to “/opt/oracle/oradata/neo/streams.dbf”;
set newname for datafile 8 to “/opt/oracle/oradata/neo/data01brig.dbf”;
set newname for datafile 9 to “/opt/oracle/oradata/neo/index02.dbf”;
restore database;
switch datafile all;
recover database until scn 336415067;
}
RMAN> ALTER DATABASE open resetlogs;
I didn't manage to avoid errors like ORA-01110 and ORA-01180 in RMAN without
using the "until" clause in the "recover database" statement itself, instead of issuing it,
as most people do, as the first instruction after the run command.
----------------------------------------------------------------------------------
Renaming the database in the process
Pfile
Changing the dbname and path,
sqlplus / as sysdba
create pfile from spfile;
exit
cd $ORACLE_HOME/dbs
rm -f spfiledbname3.ora
vi initdbname3.ora
then, in the editor, remove the first lines and run the global substitution:
:%s/dbname/dbname3/g
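The vi substitution above can also be scripted with sed. A minimal sketch on a throwaway sample pfile (the file contents below are made up for illustration):

```shell
# Build a small throwaway pfile to demonstrate the rename
# (contents are made up for illustration)
cat > /tmp/initdbname3.ora <<'EOF'
db_name=dbname
control_files='/u02/oradata/dbname/control01.ctl'
EOF

# Non-interactive equivalent of the vi command :%s/dbname/dbname3/g
sed -i 's/dbname/dbname3/g' /tmp/initdbname3.ora

cat /tmp/initdbname3.ora
```

This is handy when several pfiles need the same rename applied consistently.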
Database
Look for datafile IDs,
list backup of database;
add the rman statements to relocate the datafiles,
run {
set newname for datafile 1 to '/u02/oradata/dbname3/system01.dbf';
set newname for datafile 2 to '/u02/oradata/dbname3/undotbs01.dbf';
set newname for datafile 3 to '/u02/oradata/dbname3/sysaux01.dbf';
set newname for datafile 4 to '/u02/oradata/dbname3/users01.dbf';
set newname for datafile 5 to '/u02/oradata/dbname3/tsname.dbf';
restore database;
}
ORA-01555: snapshot too old during export EXP-00000: Export terminated
unsuccessfully
It seems that other sessions are updating the database during your export.
So use CONSISTENT=N (which is the default);
don't specify CONSISTENT=Y.
Otherwise, increase your RBS (in 8i) or look into undo management in 9i.
There is a long running transaction that needs more undo space than the available one.
sql >show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 0
undo_suppress_errors boolean FALSE
undo_tablespace string UNDOTBS1
If the UNDO_RETENTION initialization parameter is not specified, the default value is
900 seconds. (I must have tweaked mine; ignore it.)
To reset it, this would be the command:
ALTER SYSTEM SET UNDO_RETENTION = 30000;
Quoting the docs:
Committed undo information normally is lost when its undo space is overwritten by a
newer transaction. But for consistent read purposes, long running queries might require
old undo information for undoing changes and producing older images of data blocks.
The initialization parameter, UNDO_RETENTION, provides a means of explicitly
specifying the amount of undo information to retain. With a proper setting, long running
queries can complete without risk of receiving the "snapshot too old" error.
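A rough way to see what a given UNDO_RETENTION costs in space is the documented rule of thumb UndoSpace = UR * UPS * DB_BLOCK_SIZE (plus overhead). The sketch below uses the default 900-second retention; the undo-blocks-per-second rate (UPS) is a made-up sample value, and on a real system you would derive it from V$UNDOSTAT:

```shell
# Rough undo-space estimate for a given UNDO_RETENTION:
#   UndoSpace = UR * UPS * DB_BLOCK_SIZE (+ overhead)
# UPS below is a made-up sample value; derive it from V$UNDOSTAT
# on a real system.
UR=900          # undo_retention in seconds (the default)
UPS=100         # assumed undo blocks generated per second
BLKSIZE=8192    # db_block_size in bytes

UNDO_BYTES=$((UR * UPS * BLKSIZE))
echo "Undo space needed: ${UNDO_BYTES} bytes"
```

Raising UNDO_RETENTION without sizing the undo tablespace accordingly just moves the ORA-01555 risk around.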
Cost Based Optimizer (CBO) and Database Statistics
Whenever a valid SQL statement is processed Oracle has to decide how to retrieve the
necessary data. This decision can be made using one of two methods:
•Rule Based Optimizer (RBO) - This method is used if the server has no internal statistics
relating to the objects referenced by the statement. This method is no longer favoured by
Oracle and will be desupported in future releases.
•Cost Based Optimizer (CBO) - This method is used if internal statistics are present. The
CBO checks several possible execution plans and selects the one with the lowest cost,
where cost relates to system resources.
If new objects are created, or the amount of data in the database changes the statistics will
no longer represent the real state of the database so the CBO decision process may be
seriously impaired. The mechanisms and issues relating to maintenance of internal
statistics are explained below:
•Analyze Statement
•DBMS_UTILITY
•DBMS_STATS
•Scheduling Stats
•Transferring Stats
•Issues
Analyze Statement
The ANALYZE statement can be used to gather statistics for a specific table, index or
cluster. The statistics can be computed exactly, or estimated based on a specific number
of rows, or a percentage of rows:
ANALYZE TABLE employees COMPUTE STATISTICS;
ANALYZE INDEX employees_pk COMPUTE STATISTICS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 100 ROWS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 15 PERCENT;
DBMS_UTILITY
The DBMS_UTILITY package can be used to gather statistics for a whole schema or
database. Both methods follow the same format as the analyze statement:
EXEC DBMS_UTILITY.analyze_schema('SCOTT','COMPUTE');
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_rows =>
100);
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_percent =>
15);
EXEC DBMS_UTILITY.analyze_database('COMPUTE');
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_rows => 100);
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_percent => 15);
DBMS_STATS
The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred
method of gathering object statistics. Oracle lists a number of benefits to using it, including
parallel execution, long term storage of statistics and transfer of statistics between
servers. Once again, it follows a similar format to the other methods:
EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);
EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent =>
15);
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK',
estimate_percent => 15);
This package also gives you the ability to delete statistics:
EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMPLOYEES_PK');
Scheduling Stats
Scheduling the gathering of statistics using DBMS_JOB is the easiest way to make sure
they are always up to date:
SET SERVEROUTPUT ON
DECLARE
l_job NUMBER;
BEGIN
DBMS_JOB.submit(l_job,
'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT''); END;',
SYSDATE,
'SYSDATE + 1');
COMMIT;
DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/
The above code sets up a job to gather statistics for SCOTT at the current time every
day. You can list the current jobs on the server using the DBA_JOBS and
DBA_JOBS_RUNNING views.
Existing jobs can be removed using:
EXEC DBMS_JOB.remove(X);
COMMIT;
Where 'X' is the number of the job to be removed.
Transferring Stats
It is possible to transfer statistics between servers allowing consistent execution plans
between servers with varying amounts of data. First the statistics must be collected into a
statistics table. In the following examples the statistics for the APPSCHEMA user are
collected into a new table, STATS_TABLE, which is owned by DBASCHEMA:
SQL> EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
SQL> EXEC
DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASC
HEMA');
This table can then be transferred to another server using your preferred method
(Export/Import, SQL*Plus COPY etc.) and the stats imported into the data dictionary as
follows:
SQL> EXEC
DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASC
HEMA');
SQL> EXEC DBMS_STATS.drop_stat_table('DBASCHEMA','STATS_TABLE');
Issues
•Exclude dataload tables from your regular stats gathering, unless you know they will be
full at the time that stats are gathered.
•I've found gathering stats for the SYS schema can make the system run slower, not
faster.
•Gathering statistics can be very resource intensive for the server so avoid peak workload
times or gather stale stats only.
•Even if scheduled, it may be necessary to gather fresh statistics after database
maintenance or large data loads.
Parsing SQL Statements in Oracle
One of the first steps Oracle takes in processing a SQL statement is to parse it. During the
parsing phase, Oracle will break down the submitted SQL statement into its component
parts, determine what type of statement it is (Query, DML, or DDL), and perform a series
of checks on it. Two important concepts for the DBA to understand are (1) the
steps involved in the parse phase and (2) the difference between a hard parse and
a soft parse.
The Syntax Check & Semantic Analysis
The first two functions of the parse phase, the Syntax Check and Semantic Analysis,
happen for each and every SQL statement within the database.
Syntax Check
Oracle checks that the SQL statement is valid. Does it make sense given the SQL
grammar documented in the SQL Reference Manual? Does it follow all of the rules for
SQL?
Semantic Analysis
This function of the parse phase takes the Syntax Check one step further by checking if
the statement is valid in light of the objects in the database. Do the tables and columns
referenced in the SQL statement actually exist in the database? Does the user executing
the statement have access to the objects, and are the proper privileges in place? Are there
ambiguities in the statement? For example, consider a statement that references two
tables emp1 and emp2, where both tables have a column called name. The statement
"select name from emp1, emp2 where..." is ambiguous; the query doesn't know which
table to get name from.
Although Oracle treats the first two functions of the parse phase as distinct (checking the
validity of the SQL statement and then checking the semantics to ensure that the
statement can be properly executed), the difference is sometimes hard to see from the
user's perspective. When Oracle reports an error to the user during the parse phase, it
doesn't just come out and say "Error within the Syntax Function" or "Error within the
Semantics Function".
For example, the following SQL statement fails with a syntax error:
SQL> select from where 4;
select from where 4
*
ERROR at line 1:
ORA-00936: missing expression
Here is an example of a SQL statement that fails with a semantic error:
SQL> select * from table_doesnt_exist;
select * from table_doesnt_exist
*
ERROR at line 1:
ORA-00942: table or view does not exist
Hard Parse vs. Soft Parse
We now consider the next and one of the most important functions of Oracle's parse
phase. The Oracle database now needs to check in the Shared Pool to determine if the
current SQL statement being parsed has already been processed by any other sessions.
If the current statement has already been processed, the parse operation can skip the next
two functions in the process: Optimization and Row Source Generation. If the parse
phase does, in fact, skip these two functions, it is called a soft parse. A soft parse will
save a considerable amount of CPU cycles when running your query. On the other hand,
if the current SQL statement has never been parsed by another session, the parse phase
must execute ALL of the parsing steps. This type of parse is called a hard parse. It is
especially important that developers write and design queries that take advantage of soft
parses so that parsing phase can skip the optimization and row source generation
functions, which are very CPU intensive and a point of contention (serialization). If a
high percentage of your queries are being hard-parsed, your system will function slowly,
and in some cases, not at all.
Oracle uses a piece of memory called the Shared Pool to enable sharing of SQL
statements. The Shared Pool is a piece of memory in the System Global Area (SGA) and
is maintained by the Oracle database. After Oracle completes the first two functions of
the parse phase (Syntax and Semantic Checks), it looks in the Shared Pool to see if that
same exact query has already been processed by another session. Since Oracle has
already performed the semantic check, it has already determined:
Exactly what table(s) are involved
That the user has access privileges to the tables.
Oracle will now look at all of the queries in the Shared Pool that have already been
parsed, optimized, and generated to see if the hard-parse portion of the current SQL
statement has already been done.
Why not Check the Shared Pool First?
Now that you understand the steps involved in parsing SQL statements, it's time to take it
one step further. Oracle always keeps an unparsed representation of the SQL code in
the Shared Pool, and the database performs a hashing algorithm to quickly
locate the SQL code. OK, so why doesn't Oracle make this step (checking the Shared
Pool for a matching statement) the first step in its parsing phase, before making any other
checks?
Even when soft parsing, Oracle needs to parse the statement before it goes looking in the
Shared Pool. One of the big reasons for this sequence is the Semantic Check. Consider
the following query:
select * from emp;
Assume that this query was first submitted by user "SCOTT" and that the "emp" table in
the FROM clause is a table owned by SCOTT. You then submit the same exact query (as
a user other than SCOTT) to Oracle. The database has no idea what "emp" is a reference
to. Is it a synonym to another table? Is it a view in your schema that references another
table? For this reason, Oracle needs to perform a Semantic Check on the SQL statement
to ensure that the code you are submitting is going to reference the same exact objects
you are requesting in your query.
Also remember that Oracle needs to syntactically parse the query before it can
semantically parse it. The hash is good only for finding query strings that are the same; it
doesn't help the database figure out whether the referenced statements are the same
statement in your execution context.
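The hashing point can be illustrated outside Oracle. In the sketch below, cksum stands in for Oracle's internal hash of the SQL text; only byte-for-byte identical statements hash to the same value, which is why even a case difference forces a separate cursor and a hard parse:

```shell
# cksum stands in here for Oracle's internal hash of the SQL text:
# only byte-for-byte identical statements hash to the same value.
h1=$(printf 'select * from emp' | cksum)
h2=$(printf 'SELECT * FROM emp' | cksum)   # same query, different case
h3=$(printf 'select * from emp' | cksum)   # byte-for-byte identical

[ "$h1" = "$h3" ] && echo "identical text: same hash"
[ "$h1" != "$h2" ] && echo "different case: different hash"
```

This is also the practical argument for using bind variables and a consistent SQL coding style: statements that differ only in literals or formatting cannot share a cursor.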
HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
-----------------------------------------------------------
OVERVIEW
Oracle Clusterware (CRS) is a new feature of Oracle Database 10g Real
Application Clusters that further differentiates RAC for high availability and
scalability. CRS provides automated management and sophisticated monitoring of
RAC instances and is designed to enhance the overall user experience of cluster
database management.
By default, CRS is configured to auto-start database instances as a part of node
boot and provide lights-out instance failure detection followed by an
auto-restart of the failed instance. However, on some special occasions,
it might be highly desirable to limit the level of protection CRS provides
for a RAC database. Namely, this implies preventing instances from auto-starting
on boot and not auto-restarting failed instances. The latter, however, may be
relaxed to allow a single attempt to restart a failed instance. This way, CRS
will attempt to restore availability of the instance, but avoid thrashing if a
problem that caused the instance to fail also keeps preventing it from
successfully recovering on restart. Either way, the choice to customize this is
left to the DBA.
Oracle Database 10g Real Application Clusters release 10.1.0.4 and above make
it possible to accomplish the above stated change in CRS behavior. This document
lists the steps necessary to limit the level of CRS protection over a RAC
database.
HIGH LEVEL APPROACH
In a nutshell, the procedure amounts to the following two parts
1. Identifying a set of CRS resources that affect the behavior
2. Modifying a few profile attributes to contain special values for the
resources identified in the first step
The following sections will cover both phases in detail.
IDENTIFYING RELEVANT RESOURCES
The automated management of a RAC database is accomplished by modeling various
RAC applications/features as CRS resources. A CRS resource is defined by a
profile, which contains a list of attributes that tell CRS how to manage the
resource. CRS resources created to manage a RAC database can be identified as
belonging to a well-known type. There is a finite and relatively small number
of types. The type may be easily identified given the name of a resource: each
name ends with a type suffix. For instance, ora.linux.db is of type db,
which happens to mean database. To display the names of the resources managed
by the CRS, use the crs_stat command from a clustered node. The output of the
command is a set of names and states of resources registered on the cluster to
which the node belongs.
Figure 1. Example: listing resource names using crs_stat
$ crs_stat
NAME=ora.linux.ar.us.oracle.com.cs
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
NAME=ora.linux.ar.us.oracle.com.linux.srv
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
Additional output is available but has been truncated for clarity's sake.
The relevant resources are the resources that belong to the following 4 types:
db - Denotes resources that represent a database.
srv - Denotes resources that represent a service member.
cs - Denotes resources that represent a service.
inst - Denotes resources that represent a database instance.
The CRS profiles for these resources must be modified for the new CRS behavior
to take effect for all RAC databases installed on the cluster. If, however,
the effect of the change is to be limited to a subset of installed databases,
the list of resources needs to be filtered further. (The rest of this section
should be skipped if the new CRS behavior is to be in effect for all databases
installed on the cluster.)
Please note that since more than one database may be installed on a cluster,
to modify the level of protection for a particular database, one must identify
the resources that represent entities of this database. This may be easily
accomplished since the names of the resources belonging to the above- stated
types always start with ora.. For instance, ora.linux.db
means that the resource belongs to the database named linux. Only resources of
the above-enumerated types that belong to the selected databases will need to
have their profiles modified.
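On a real cluster, this filtering can be done by piping crs_stat through grep. Since crs_stat needs a live cluster, the sketch below mocks a few lines of its output for the database named linux used in the examples above:

```shell
# crs_stat needs a live cluster, so mock a few lines of its output;
# on a real node run instead:  crs_stat | grep '^NAME=ora\.linux\.'
crs_stat_mock() {
cat <<'EOF'
NAME=ora.linux.db
NAME=ora.linux.linux1.inst
NAME=ora.otherdb.db
EOF
}

# Keep only the resources of the database named "linux"
crs_stat_mock | grep '^NAME=ora\.linux\.'
```

Resources of other databases (ora.otherdb.db above) are left out, so their protection level is unaffected.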
MODIFYING RESOURCE PROFILES
Please note that Oracle strongly discourages any modifications made to CRS
profiles for any resources whose names start with ora. Never make any modifications to
CRS profiles for resources other than the ones explicitly described below in
this document.
To modify a profile attribute for a resource, the following steps must be
followed:
1.Generate the resource profile file by issuing the following command:
crs_stat -p resource_name > $CRS_HOME/crs/public/resource_name.cap
2.Update desired attributes by editing the file created in step 1.
3.Commit the updates made as a part of the previous step by issuing the
following command
crs_register -u resource_name
4. Verify the updates have been committed by issuing the following command
crs_stat -p resource_name
For each of the resources identified as a part of the preceding section, the
following modifications must be made:
1.Resources of type inst must have the following attributes modified
AUTO_START must be set to 2
RESTART_ATTEMPTS must be set to 0 or 1. The former value will prevent
CRS from attempting to restart a failed instance at all, while the latter
will grant it a single attempt; if this one attempt is unsuccessful,
CRS will leave the instance as is.
2.Resources of type db, srv, cs must have the following attributes modified
AUTO_START must be set to 2
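Assuming the generated .cap profile is a plain KEY=VALUE text file (as the crs_stat -p output suggests), step 2's edits can be scripted rather than made by hand. The file name and contents below are made up for illustration:

```shell
# Hypothetical contents of a generated .cap profile file
cat > /tmp/ora.linux.linux1.inst.cap <<'EOF'
AUTO_START=1
RESTART_ATTEMPTS=5
EOF

# Step 2 as a script: set AUTO_START=2 and RESTART_ATTEMPTS=0
sed -i -e 's/^AUTO_START=.*/AUTO_START=2/' \
       -e 's/^RESTART_ATTEMPTS=.*/RESTART_ATTEMPTS=0/' \
       /tmp/ora.linux.linux1.inst.cap

cat /tmp/ora.linux.linux1.inst.cap
# On a real cluster, commit the change with: crs_register -u <resource_name>
```

Scripting the edit keeps the change repeatable across the several resources that need the same attribute values.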
Explain Silent software installation Steps
./runInstaller -silent -responseFile <path>
How to create a response file
./runInstaller -record -destinationFile <path>
Response files by default exist in /export/home/oracle/response/enterprise.rsp
Steps to install oracle on linux and solaris
1. login as root and check hardware compatibility
a. check memory information
i. grep MemTotal /proc/meminfo - linux
ii. prtdiag | grep memory - solaris
b. check swap size
i. grep SwapTotal /proc/meminfo - linux
ii. swap -s - solaris
c. check that /tmp has at least 400 MB of free space
i. df -h /tmp
d. check amount of free disk space to install oracle software
i. df -h
2. Check software requirements
a. Kernel version 2.4.9 or higher on linux
i. uname -a
b. Check existence of following packages
i. make
ii. openmotif
iii. gcc
iv. gcc-c++
v. libstdc++
vi. glibc
3. Create required unix group
a. dba, oinstall
i. grep dba /etc/group
ii. grep oinstall /etc/group
b. Create groups
i. groupadd oinstall
ii. groupadd dba
c. Check if the oracle user exists or not
i. id oracle
d. Add the oracle user to the groups if it exists, else create it
i. usermod -g oinstall -G dba oracle
ii. or useradd -g oinstall -G dba oracle
iii. passwd oracle
4. Create required folders
a. Base directory /u01/app/oracle
b. Datafile directory /u02/oradata
i. mkdir -p /u01/app/oracle
ii. mkdir -p /u02/oradata
iii. chown -R oracle:oinstall /u01/app/oracle
iv. chown -R oracle:oinstall /u02/oradata
v. chmod -R 775 /u01/app/oracle
vi. chmod -R 775 /u02/oradata
5. configuring kernel parameters
Kernel Parameter      Setting To Get You Started   Purpose
shmmni                4096                         Maximum number of shared memory segments
shmall                2097152                      Maximum total shared memory (4 Kb pages)
shmmax                2147483648                   Maximum size of a single shared memory segment
semmsl                250                          Maximum number of semaphores per set
semmns                32000                        Maximum number of semaphores
semopm                100                          Maximum operations per semop call
semmni                128                          Maximum number of semaphore sets
file-max              65536                        Maximum number of open files
ip_local_port_range   1024 - 65000                 Range of ports to use for client connections
rmem_default          1048576                      Default TCP/IP receive window
rmem_max              1048576                      Maximum TCP/IP receive window
wmem_default          262144                       Default TCP/IP send window
wmem_max              262144                       Maximum TCP/IP send window
shmmax = 2147483648 (To verify, execute: cat /proc/sys/kernel/shmmax)
shmmni = 4096 (To verify, execute: cat /proc/sys/kernel/shmmni)
shmall = 2097152 (To verify, execute: cat /proc/sys/kernel/shmall) (for 10g R1)
shmmin = 1 (To verify, execute: ipcs -lm | grep "min seg size")
shmseg = 10 (It's hardcoded in the kernel - the default is much higher)
semmsl = 250 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $1}')
semmns = 32000 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $2}')
semopm = 100 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $3}')
semmni = 128 (To verify, execute: cat /proc/sys/kernel/sem | awk '{print $4}')
file-max = 65536 (To verify, execute: cat /proc/sys/fs/file-max)
ip_local_port_range = 1024 65000 (To verify, execute: cat /proc/sys/net/ipv4/ip_local_port_range)
I added the following lines to the /etc/sysctl.conf file which is used during the boot
process:
kernel.shmmax=2147483648
kernel.sem=250 32000 100 128
fs.file-max=65536
net.ipv4.ip_local_port_range=1024 65000
Adding these lines to the /etc/sysctl.conf file will cause the system to set these
kernel parameters after each boot, via the /etc/rc.d/rc.sysinit script which is
invoked by /etc/inittab. To make the new settings in /etc/sysctl.conf take
effect immediately, execute the following command:
su - root
sysctl -p
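As a quick sanity check, the four semaphore values packed into kernel.sem can be split into the individual parameters with awk, mirroring the per-parameter verification commands above. This is only a sketch; the sample value is an assumption matching the settings shown, and on a live system you would read the real value from /proc:

```shell
#!/bin/sh
# Split a kernel.sem-style value into its four fields.
# On a live system you would read it with: cat /proc/sys/kernel/sem
# The sample value below is an assumption matching the settings above;
# field order is semmsl semmns semopm semmni.
SEM="250 32000 100 128"
echo "$SEM" | awk '{print "semmsl=" $1; print "semmns=" $2; print "semopm=" $3; print "semmni=" $4}'
```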
6. Mount the installation CD and log in as the oracle user.
7. Create the following environment variables, or update them in the user's
profile: umask 022, ORACLE_BASE, ORACLE_SID, ORACLE_HOME.
8. ./runInstaller is the command to start the installation. Run root.sh as the
root user to set privileges for the oracle user, and run
oraInventory/orainstroot.sh as well.
9. Check the portlist.ini file for the assigned ports, and check that the
services are running: TNS services, listener, EM, iSQL*Plus, etc.
10. Add an entry in /etc/oratab.
Explain template and its usage in cloning
Templates are of two types:
With datafiles - (seed database)
Without datafiles - (structure only)
Templates are stored under $ORACLE_HOME/assistants/dbca/templates
Seed templates --- .dbc (with the datafiles and redo logs stored in compressed form)
Non-seed templates --- .dbt
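Cloning via templates is typically driven through dbca in silent mode. A hedged sketch follows; the template name, database names and passwords are placeholders, and the commands require a real Oracle home:

```shell
# Create a structure-only template from an existing database (assumed names):
dbca -silent -createTemplateFromDB -sourceDB SRCDB \
     -templateName my_clone -sysDBAUserName sys -sysDBAPassword change_me

# Create a new database from a seed template:
dbca -silent -createDatabase -templateName General_Purpose.dbc \
     -gdbName GEMINI -sid GEMINI -sysPassword change_me -systemPassword change_me
```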
Explain the DBNEWID (nid) utility
Oracle10g>nid
DBNEWID: Release 10.2.0.4.0 - Production on Tue Jan 4 09:59:55 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Keyword Description (Default)
----------------------------------------------------
TARGET Username/Password (NONE)
DBNAME New database name (NONE)
LOGFILE Output Log (NONE)
REVERT Revert failed change NO
SETNAME Set a new database name only NO
APPEND Append to output log NO
HELP Displays these messages NO
nid is used to assign a new database name (DBNAME) and/or DBID to a database.
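A typical run, sketched with assumed credentials and names (it requires a cleanly mounted database):

```sql
-- 1. Mount the database cleanly:
--      SQL> shutdown immediate
--      SQL> startup mount
-- 2. From the OS prompt, change both DBID and name:
--      $ nid TARGET=sys/password DBNAME=NEWDB
-- 3. Update the parameter file and open with RESETLOGS:
alter system set db_name=NEWDB scope=spfile;
startup mount
alter database open resetlogs;
```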
Explain db refresh
Take a backup of the source database.
Copy it to the target server.
For a schema refresh, create the required users/tablespaces and use imp with owner=<owner>.
If full=y is used, it is a full refresh.
Then create the controlfile to trace and edit it.
Create the init parameter file and the password file.
Create the required directories.
Startup nomount.
Run the CREATE CONTROLFILE command with SET DATABASE <new name>.
Open the database with RESETLOGS.
Perform recovery if required.
Use nid to change the DBID.
Restart normally and take a backup.
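The edited controlfile trace from the steps above might look like this minimal sketch; the database name, paths, sizes and character set are placeholders:

```sql
CREATE CONTROLFILE REUSE SET DATABASE "TGTDB" RESETLOGS NOARCHIVELOG
  LOGFILE
    GROUP 1 '/u01/oradata/TGTDB/redo01.log' SIZE 50M,
    GROUP 2 '/u01/oradata/TGTDB/redo02.log' SIZE 50M
  DATAFILE
    '/u01/oradata/TGTDB/system01.dbf',
    '/u01/oradata/TGTDB/sysaux01.dbf',
    '/u01/oradata/TGTDB/users01.dbf'
  CHARACTER SET AL32UTF8;
-- then open the clone:
ALTER DATABASE OPEN RESETLOGS;
```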
What is difference between imp and impdp
Data Pump loads via the direct path rather than conventional buffered inserts.
Data Pump export represents the metadata in XML format.
A Data Pump schema import recreates the user and re-applies all associated grants and security settings.
Data Pump runs jobs in parallel, and you can attach to a running job and change the number of worker processes.
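A hedged side-by-side sketch of the two tools; connect strings, the directory object and file names are placeholders:

```shell
# Original import: client-side, reads the dump file through the client:
imp system/manager file=scott.dmp log=scott_imp.log fromuser=scott touser=scott

# Data Pump import: server-side, needs a directory object, supports parallelism:
impdp system/manager directory=DP_DIR dumpfile=scott.dmp \
      logfile=scott_impdp.log schemas=scott parallel=4
```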
What is TSPITR?
TSPITR (Tablespace Point-In-Time Recovery) recovers one or more tablespaces to a point in time earlier than the rest of the database. It is typically done with RMAN (RECOVER TABLESPACE ... UNTIL TIME), which uses an auxiliary instance to restore and recover the tablespaces before plugging them back into the target database.
Steps to configure EM datacontrol
drop user sysman cascade;
drop public synonym SETEMVIEWUSERCONTEXT;
drop role MGMT_USER;
drop PUBLIC SYNONYM MGMT_TARGET_BLACKOUTS;
drop user MGMT_VIEW;
And then,
$ emca -deconfig dbcontrol db
$ emca -config dbcontrol db -repos create
--- Failed to shutdown DBConsole Gracefully ---
Go to $ORACLE_HOME and check for any <hostname>_<sid> folders.
cd into each and check whether an emctl.pid file exists.
cat that file; it contains a PID.
If you run ps -ef | grep <PID>,
you will see a process running with that id.
kill -9 <PID> to stop it.
Then emctl stop dbconsole will work without the above error,
and emctl start dbconsole will work fine.
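The manual PID check above can be sketched as a small script. The path here is a stand-in for demonstration; the real file lives under $ORACLE_HOME/<hostname>_<sid>/emctl.pid:

```shell
#!/bin/sh
# Sketch: read the PID recorded in an emctl.pid file and report it.
# /tmp path and the value 4242 are stand-ins for demonstration only.
PIDFILE=/tmp/demo_emctl.pid
echo 4242 > "$PIDFILE"
PID=$(cat "$PIDFILE")
echo "recorded PID: $PID"
# On a real system you would now run:  ps -ef | grep "$PID"
# and, if a leftover dbconsole process is found:  kill -9 "$PID"
rm -f "$PIDFILE"
```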
How Sessions are captured at v$object level
v$session is where current session information is stored, v$active_session_history is where
once-per-second samples of active sessions are stored, and dba_hist_active_sess_history in
AWR stores the persisted history of active sessions.
What is correlation between ASH and AWR in Wait Interface
The Oracle wait interface exposes wait classes, events, counts and times. ASH samples come
from v$active_session_history (held in memory), AWR persists a subset of those samples in
dba_hist_active_sess_history, and cumulative block-contention statistics are kept in
dba_hist_waitstat.
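For example, the same activity can be queried both from in-memory ASH and from its AWR-persisted history. A sketch (time windows are arbitrary; these views require the Diagnostics Pack license and a live instance):

```sql
-- In-memory ASH samples from the last 5 minutes:
select sample_time, session_id, event
from   v$active_session_history
where  sample_time > sysdate - 5/1440;

-- The subset of samples persisted by AWR over the last day:
select snap_id, sample_time, session_id, event
from   dba_hist_active_sess_history
where  sample_time > sysdate - 1;
```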
Explain most important parts of v$lock
v$lock
This view stores all information relating to locks in the database. The interesting columns in
this view are sid (identifying the session holding or acquiring the lock), type, and the
lmode/request pair.
Important possible values of type are TM (DML or Table Lock), TX (Transaction), MR (Media
Recovery), ST (Disk Space Transaction).
Exactly one of the lmode/request pair is 0 or 1 while the other indicates the lock mode. If
lmode is not 0 or 1, the session has acquired the lock; if request is other than 0 or 1, the
session is waiting to acquire the lock. The possible values for lmode and request are:
• 1: null,
• 2: Row Share (SS),
• 3: Row Exclusive (SX),
• 4: Share (S),
• 5: Share Row Exclusive (SSX) and
• 6: Exclusive(X)
If the lock type is TM, the column id1 is the object's id and the name of the object can then
be queried like so: select name from sys.obj$ where obj# = id1
A lock type of JI indicates that a materialized view is being refreshed.
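Putting these columns together, a common use of v$lock is to pair waiting sessions with the sessions blocking them. A sketch (requires a live instance):

```sql
-- Sessions waiting on a lock (request > 0) joined to the holders (lmode > 0)
-- of the same resource, identified by matching id1/id2:
select w.sid waiter, h.sid holder, w.type, w.request
from   v$lock w, v$lock h
where  w.request > 0
and    h.lmode   > 0
and    w.id1 = h.id1
and    w.id2 = h.id2
and    w.sid <> h.sid;
```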
Locks in Oracle
create table lck (a number, b number);
insert into lck values (1,2);
insert into lck values (2,4);
insert into lck values (3,6);
insert into lck values (4,8);
insert into lck values (5,3);
insert into lck values (6,5);
insert into lck values (7,7);
commit;
We use two sessions (labeled session 1 and session 2 below; in the original layout they were
distinguished by color and by being on the left or right side) to investigate statement-level
read consistency, and a third to select from v$lock.
First, we find the session id of the two participating sessions:
Session 1:
SQL> select sid from v$session where audsid=userenv('SESSIONID');

       SID
----------
        14

Session 2:
SQL> select sid from v$session where audsid=userenv('SESSIONID');

       SID
----------
        10
Now, we're inserting a row in the first session (sid=14).
SQL> insert into lck values
(1000,1001);
1 row created.
SQL> select * from lck;
A B
---------- ----------
1 2
2 4
3 6
4 8
5 3
6 5
7 7
1000 1001
8 rows selected.
How does that influence v$lock?
SQL> select sid,type,id1,lmode,request from v$lock where
sid in (10,14);
SID TY ID1 LMODE REQUEST
---------- -- ---------- ---------- ----------
14 TX 262153 6 0
14 TM 4145 3 0
Session 14 (the one that inserted a row) has obviously acquired two locks
(request = 0). One of these locks is a Transaction Lock (type=TX), the other
is a DML or Table Lock (type=TM). Mode 3 means Row Exclusive, which
actually makes sense. Now, we can use obj$ to verify that the TM lock is
indeed put on the table LCK:
SQL> select name from sys.obj$ where obj# = 4145 ;
NAME
------------------------------
LCK
What does the 2nd session see if it queries LCK?
SQL>select * from lck;
A B
---------- ----------
1 2
2 4
3 6
4 8
5 3
6 5
7 7
7 rows selected.
The 1st session has not yet committed (or rolled back) its transaction, so the
changes are not visible to other sessions.
Now, let's have the first session insert another row.
SQL> insert into lck values
(1001,1000);
1 row created.
We'd expect v$lock to have one more row (for this 2nd inserted row). But
never believe your feelings...
SQL> select sid,type,id1,lmode,request from v$lock where
sid in (10,14);
SID TY ID1 LMODE REQUEST
---------- -- ---------- ---------- ----------
14 TX 262153 6 0
14 TM 4145 3 0
Didn't much change, did it? Now, the 2nd session updates a row:
SQL>update lck set
a=2000,b=2001 where a=1;
1 row updated.
SQL>select * from lck;
A B
---------- ----------
2000 2001
2 4
3 6
4 8
5 3
6 5
7 7
7 rows selected.
And v$lock?
SQL> select sid,type,id1,lmode,request from v$lock where
sid in (10,14);
SID TY ID1 LMODE REQUEST
---------- -- ---------- ---------- ----------
10 TX 327698 6 0
10 TM 4145 3 0
14 TX 262153 6 0
14 TM 4145 3 0
SQL>insert into lck values
(2001,2000);
1 row created.
SQL>select * from lck;
A B
---------- ----------
2000 2001
2 4
3 6
4 8
5 3
6 5
7 7
2001 2000
8 rows selected.
What happens if the first session wants to update a row that was already
updated (but not yet committed) by another session? The first session tries
to do exactly that (the row where a=1):
SQL> select * from lck;
A B
---------- ----------
1 2
2 4
3 6
4 8
5 3
6 5
7 7
1000 1001
1001 1000
9 rows selected.
SQL> update lck set
a=1002,b=1003 where a=1;
The session hangs until the session that has put a lock on the row in
question commits (or rolls back). This waiting is recorded in v$session_wait:
SQL> select event, seconds_in_wait, sid from
v$session_wait where sid in (10,14);
EVENT                           SECONDS_IN_WAIT        SID
------------------------------- --------------- ----------
enqueue                                    1593         14
SQL*Net message from client                2862         10
v$session_wait even tells how long (in seconds) a session has been waiting. Now,
let the 2nd session commit:
Session 2:
SQL> commit;

Commit complete.

Session 1 (the blocked update now returns):
0 rows updated.

Note, this can be confusing. When the 1st session selected * from lck, it
definitely saw a row where a=1, but after seemingly updating it, nothing was
actually updated.
What do these sessions now see, if they both do a select *?
Session 1:
SQL> select * from lck;
A B
---------- ----------
2000 2001
2 4
3 6
4 8
5 3
6 5
7 7
1000 1001
1001 1000
2001 2000
Session 2:
SQL> select * from lck;
A B
---------- ----------
2000 2001
2 4
3 6
4 8
5 3
6 5
7 7
2001 2000
While the 1st session sees what the 2nd session committed, the 2nd session
does not see the uncommitted changes of the first session. If the 1st session
had been transaction-level read consistent, it would not have seen any changes
until it committed.
Command to create a datafile
ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS
'/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE 4 AS '/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS NEW;
What is VPD? Steps to implement it?
Using VPD policy security
Virtual private databases have several other names within the Oracle documentation, including row-level
security (RLS) and fine-grained access control (FGAC). Regardless of the name, VPD security provides a
whole new way to control access to Oracle data. Most interesting is the dynamic nature of a VPD. At
runtime, Oracle performs these near magical feats by dynamically modifying the SQL statement of the end
user:
1. Oracle gathers application context information at user logon time and then calls the policy
function, which returns a predicate. A predicate is a where clause that qualifies a particular set of
rows within the table.
2. Oracle dynamically rewrites the query by appending the predicate to users' SQL statements.
Whenever a query is run against the target tables, Oracle invokes the policy and produces a transient view
with a where clause predicate pasted onto the end of the query, like so:
SELECT * FROM book WHERE P1
A VPD security model uses the Oracle dbms_rls package (RLS stands for row-level security) to implement
the security policies and application contexts. This requires a policy that is defined to control access to
tables and rows.
VPDs are involved in the creation of a security policy, and when users access a table (or view) that has a
security policy. The security policy modifies the user's SQL, adding a where clause to restrict access to
specific rows within the target tables. Let's take a close look at how this works.
VPD security Application context
For the VPD to properly use the security policy to add the where clause to the end user's SQL, Oracle must
know details about the authority of the user. This is done at sign-on time using Oracle's dbms_session
package. At sign-on, a database logon trigger executes, setting the application context for the user by
calling dbms_session.set_context. The set_context procedure can be used to set any number of variables
about the end user, including the application name, the user's name, and specific row restriction
information. Once this data is collected, the security policy will use this information to build the run-time
where clause to append to the end user's SQL statement. The set_context procedure sets several parameters
that are used by the VPD, and accepts three arguments:
dbms_session.set_context(namespace, attribute, value)
For example, let's assume that we have a publication table and we want to restrict access based on the type
of end user. Managers will be able to view all books for their publishing company, while authors may only
view their own books. Let's assume that user JSMITH is a manager and user MAULT is an author. At login
time, the Oracle database logon trigger would generate the appropriate values and execute the statements
shown in Listing A:
dbms_session.set_context('publishing_application', 'role_name', 'manager');
dbms_session.set_context('publishing_application', 'user_name', 'jsmith');
dbms_session.set_context('publishing_application', 'company', 'rampant_techpress');
dbms_session.set_context('publishing_application', 'role_name', 'author');
dbms_session.set_context('publishing_application', 'user_name', 'mault');
dbms_session.set_context('publishing_application', 'company', 'rampant_techpress');
Once executed, we can view these values with the Oracle session_context view. This data will be used by
the VPD at runtime to generate the where clause. Note that each user has his or her own specific
session_context values, shown in Listing B:
connect jsmith/manpass;
select
namespace, attribute, value
from
session_context;
NAMESPACE ATTRIBUTE VALUE
---------------- --------- ---------
PUBLISHING_APPLICATION ROLE_NAME MANAGER
PUBLISHING_APPLICATION USER_NAME JSMITH
PUBLISHING_APPLICATION COMPANY RAMPANT_TECHPRESS
connect mault/authpass;
select
namespace, attribute, value
from
session_context;
NAMESPACE ATTRIBUTE VALUE
---------------- --------- ---------
PUBLISHING_APPLICATION ROLE_NAME AUTHOR
PUBLISHING_APPLICATION USER_NAME MAULT
PUBLISHING_APPLICATION COMPANY RAMPANT_TECHPRESS
Now let's see how this application context information is used by the VPD security policy. In Listing C, we
create a security policy function called book_access_policy that builds two types of where clauses,
depending on the information in the session_context for each end user. Note that Oracle uses the
sys_context function to gather the values.
create or replace function
book_access_policy
(obj_schema varchar2, obj_name varchar2) return varchar2
is
d_predicate varchar2(2000);
begin
if sys_context('publishing_application','role_name')='manager' then
d_predicate:=
'upper(company)=sys_context(''publishing_application'',''company'')';
else
-- If the user_type session variable is set to anything else,
-- display only this person's record --
d_predicate:=
'upper(author_name)=sys_context(''userenv'',''session_user'')';
end if;
return d_predicate;
end;
/
BEGIN
  DBMS_RLS.ADD_POLICY (
    'pubs',               -- object schema
    'book',               -- object name
    'access_policy',      -- policy name
    'pubs',               -- function schema
    'book_access_policy', -- policy function
    'select');            -- statement types
END;
/
Look at the code in Listing C carefully. If the user was defined as a manager, their where clause
(d_predicate) would be:
where upper(company) = 'RAMPANT_TECHPRESS';
For the author, they get a different where clause:
where upper(author_name) = 'MAULT';
VPDs in action
We are now ready to show our VPD in action. In Listing D, we see very different results from an identical
SQL query, depending on the application context of the specific end user.
connect jsmith/manpass;
select * from book;
Book Author
Title name Publisher
-------------------- ------------- --------------------
Oracle9i RAC mault Rampant Techpress
Oracle job Interview dburleson Rampant Techpress
Oracle Utilities dmmoore Rampant Techpress
Oracle Troubleshooting rschumacher Rampant Techpress
Oracle10i DBA Features mault Rampant Techpress
connect mault/authpass;
select * from book;
It should be obvious that VPD is a totally different way of managing Oracle access than grant-based
security mechanisms. There are many benefits to VPDs:
• Dynamic security—No need to maintain complex roles and grants.
• Multiple security—You can place more than one policy on each object, as well as stack them on
other base policies. This makes VPD perfect for Web applications that are deployed for many
companies.
• No back doors—Users no longer bypass security policies embedded in applications, because the
security policy is attached to the data.
• Complex access rules may be defined—With VPD, you can use data values to specify complex
access rules that would be difficult to create with grant security. You can easily restrict access to
rows.
Of course, there are also some drawbacks to VPD security:
• Difficult column-level security—Because access is controlled by adding a where clause, column-level
access can only be maintained by defining multiple views for each class of end user.
• Requires Oracle IDs for every user—Unlike security that is managed externally, VPD requires
that an Oracle user ID be defined for every person who connects to the database. This adds
maintenance and overhead.
• Hard to audit—It is hard to write an audit script that defines the exact access for each specified
user. This problem becomes even more acute for shops that mix security methods.
Problems with mixing VPD and grant security
Now that we have established the areas of security and auditing, it should be clear that we must come up
with a method to ensure that security methods are not mixed in an inappropriate way. By themselves, each
of these security mechanisms provides adequate access protection, but when these methods are mixed, it
can often be difficult (if not impossible) to identify the access for individual users. You'll have to decide
whether the security benefits of VPD are worth the extra administrative effort.
Example: Virtual Private Database (VPD) with Oracle
Virtual Private Database is also known as fine-grained
access control (FGAC). It allows you to define which rows users
may have access to.
A simple example
In this example, it is assumed that a company consists of
different departments (each having an entry in the
department table). An employee belongs to exactly one
department. A department can have secrets that go into the
department_secrets table.
create table department (
dep_id int primary key,
name varchar2(30)
);
create table employee (
dep_id references department,
name varchar2(30)
);
create table department_secrets (
dep_id references department,
secret varchar2(30)
);
Filling in some truly confidential secrets:
insert into department values (1, 'Research and Development');
insert into department values (2, 'Sales');
insert into department values (3, 'Human Resources');
insert into employee values (2, 'Peter');
insert into employee values (3, 'Julia');
insert into employee values (3, 'Sandy');
insert into employee values (1, 'Frank');
insert into employee values (2, 'Eric');
insert into employee values (1, 'Joel');
insert into department_secrets values (1, 'R+D Secret #1');
insert into department_secrets values (1, 'R+D Secret #2');
insert into department_secrets values (2, 'Sales Secret #1');
insert into department_secrets values (2, 'Sales Secret #2');
insert into department_secrets values (3, 'HR Secret #1');
insert into department_secrets values (3, 'HR Secret #2');
For any employee, it must be possible to see all secrets of
his department, but no secret of another department.
In order to make that happen with Oracle, we need to create
a package, a trigger, and set a policy.
First, the package is created.
create or replace package pck_vpd as
  p_dep_id department.dep_id%type;
  procedure set_dep_id(v_dep_id department.dep_id%type);
  function predicate(obj_schema varchar2, obj_name varchar2) return varchar2;
end pck_vpd;
/
create or replace package body pck_vpd as
  procedure set_dep_id(v_dep_id department.dep_id%type) is
  begin
    p_dep_id := v_dep_id;
  end set_dep_id;
  function predicate(obj_schema varchar2, obj_name varchar2) return varchar2 is
  begin
    return 'dep_id = ' || p_dep_id;
  end predicate;
end pck_vpd;
/
Then the trigger is defined. This trigger fires whenever
someone logs on to the database. It finds the user's
department id (dep_id) and calls set_dep_id in the
package.
create or replace trigger trg_vpd
after logon on database
declare
v_dep_id department.dep_id%type;
begin
select dep_id into v_dep_id
from employee where upper(name) = user;
pck_vpd.set_dep_id(v_dep_id);
end;
/
Finally, the policy is defined. The policy names the
function that is used to append a predicate to the where
clause whenever someone executes a select, update, or delete
statement against the table.
begin
dbms_rls.add_policy (
user,
'department_secrets',
'choosable policy name',
user,
'pck_vpd.predicate',
'select,update,delete');
end;
/
To test the setup, some users are created.
create user frank identified by frank default tablespace users temporary tablespace temp;
create user peter identified by peter default tablespace users temporary tablespace temp;
create user julia identified by julia default tablespace users temporary tablespace temp;
The necessary privileges are granted.
grant all on department_secrets to frank;
grant all on department_secrets to peter;
grant all on department_secrets to julia;
grant create session to frank;
grant create session to peter;
grant create session to julia;
A public synonym is created.
create public synonym department_secrets for department_secrets;
Frank (belonging to R+D) executes a query....
connect frank/frank;
select * from department_secrets;
DEP_ID SECRET
---------- ------------------------------
1 R+D Secret #1
1 R+D Secret #2
Peter (belonging to Sales) executes a query....
connect peter/peter;
select * from department_secrets;
DEP_ID SECRET
---------- ------------------------------
2 Sales Secret #1
2 Sales Secret #2