Archives For February 2013

offline troubleshooting

February 28, 2013

the situation: a customer faces issues with their oracle database. you cannot go on-site for some reason and you cannot connect to the database in question. what options do you have to support them?

  • request a current awr/ash report and try to find the issue from there
  • request execution plans if the issue is with badly performing statements and see if you can spot anything there without knowing the data
  • try to support them by phone

there is another option some people might not be aware of: dumping the awr data and importing it into your own database for further analysis. as this includes only awr data, no user data will be submitted ( except for literals in sql statements, if there are any ), so you don’t need to worry about data privacy.

first of all you need to ask for an awr extract. there is a script which does all the work and you just need to execute it:

SQL> @?/rdbms/admin/awrextr.sql

the header displayed already tells you what the script will do and how to use it:

~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  This script will extract the AWR data for a range of snapshots  ~
~  into a dump file.  The script will prompt users for the         ~
~  following information:                                          ~
~     (1) database id                                              ~
~     (2) snapshot range to extract                                ~
~     (3) name of directory object                                 ~
~     (4) name of dump file                                        ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

the parameters the script will ask for are the same as when creating awr reports, except that you’ll additionally need a directory object for storing the dump file ( DATA_PUMP_DIR, by default ).
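
in case you are unsure which directory objects exist on the database ( or want to create a dedicated one ), this is easy to check; a minimal sketch, the directory name and path are just examples:

SQL> select directory_name, directory_path from dba_directories;
SQL> create directory awr_dump_dir as '/tmp/awr_dump';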

that’s it. the script will start the dump process:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 The AWR extract dump file will be located
 in the following directory/file:
  /opt/oracle/product/base/11.2.0.3/rdbms/log/
  rman.dmp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 *** AWR Extract Started ...

 This operation will take a few moments. The
 progress of the AWR extract operation can be
 monitored in the following directory/file:
  /opt/oracle/product/base/11.2.0.3/rdbms/log/
  rman.log

End of AWR Extract

… and once finished the dump file can be copied from the location provided:

SQL> !ls /opt/oracle/product/base/11.2.0.3/rdbms/log/
rman.dmp  rman.log  dp.log

so much for the dump part. once you have received the dump file, how do you import it into your own database? not a big deal:

SQL> @?/rdbms/admin/awrload.sql

again, the header will tell you pretty much everything you’ll need to know:

~~~~~~~~~~
AWR LOAD
~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  This script will load the AWR data from a dump file. The   ~
~  script will prompt users for the following information:    ~
~     (1) name of directory object                            ~
~     (2) name of dump file                                   ~
~     (3) staging schema name to load AWR data into           ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

a few things to take care of:

  • when you get asked for the dump file, pass the filename without the file extension ( in my case “rman” instead of “rman.dmp” )
  • the load will temporarily create a staging schema ( AWR_STAGE, by default ) and will drop it once the load has finished

so, now you have the remote database’s awr data available in your own database. what to do next? maybe you would first like to create a set of awr reports to get an overview of the remote system. now you are able to do this.

as a first step you’ll need the dbid and instance number of the remote database:

SQL> select DBID,INSTANCE_NUMBER,DB_NAME,INSTANCE_NAME,HOST_NAME 
               from DBA_HIST_DATABASE_INSTANCE;

         DBID INSTANCE_NUMBER DB_NAME   INSTANCE_NAME    HOST_NAME
------------- --------------- --------- ---------------- --------------------
   3551282359               1 DEMO      demo             ol6ora.local
   1664148667               1 RMAN      rman             ol6ora2.local

in my case “RMAN” is the remote database whose awr statistics were imported into my local database “DEMO”. let’s see which snapshots are available for this dbid:

SQL> select SNAP_ID,BEGIN_INTERVAL_TIME,END_INTERVAL_TIME 
           from dba_hist_snapshot where DBID=1664148667;

SNAP_ID BEGIN_INTERVAL_TIME       END_INTERVAL_TIME
------- ------------------------- -------------------------
   8096 16-FEB-13 04.00.52.846 AM 16-FEB-13 05.00.54.990 AM
   8097 16-FEB-13 05.00.54.990 AM 16-FEB-13 06.00.57.548 AM
   8098 16-FEB-13 06.00.57.548 AM 16-FEB-13 07.00.59.883 AM
   8099 16-FEB-13 07.00.59.883 AM 16-FEB-13 08.00.01.952 AM
   8100 16-FEB-13 08.00.01.952 AM 16-FEB-13 09.00.04.200 AM
   8101 16-FEB-13 09.00.04.200 AM 16-FEB-13 10.00.06.321 AM
   8102 16-FEB-13 10.00.06.321 AM 16-FEB-13 11.00.09.183 AM
   8103 16-FEB-13 11.00.09.183 AM 16-FEB-13 12.00.11.274 PM
...

that’s enough information to produce some reports:

SQL> define inst_num     = 1;
SQL> define inst_name    = 'rman';
SQL> define db_name      = 'RMAN';
SQL> define dbid         = 1664148667;
SQL> define begin_snap   = 8098;
SQL> define end_snap     = 8100;
SQL> define report_type  = 'html';
SQL> define report_name  = /tmp/first_awr_report.html
SQL> @?/rdbms/admin/awrrpti.sql

maybe the reports you generate are sufficient to give the right hints. but there is even more you can do with all the statistical data available:

as an example, let’s say you want to display the load of the server over a range of snapshots. for this there is a statistic called “LOAD” which you can query:

SELECT to_char(t.begin_interval_time, 'DD.MM.YYYY-HH24:MI:SS' )
     , o.value
  FROM dba_hist_osstat o
     , dba_hist_snapshot t
 WHERE o.dbid = 1664148667
   AND o.dbid = t.dbid
   AND o.snap_id = t.snap_id
   AND o.snap_id BETWEEN 8098 AND 8110
   AND o.stat_name = 'LOAD'
 ORDER BY o.snap_id;

TO_CHAR(T.BEGIN_INT      VALUE
------------------- ----------
16.02.2013-06:00:57 .829101563
16.02.2013-07:00:59 .629882813
16.02.2013-08:00:01 .309570313
16.02.2013-09:00:04   .1796875
16.02.2013-10:00:06 .329101563
16.02.2013-11:00:09 .279296875
16.02.2013-12:00:11 .119140625
16.02.2013-13:00:13 .649414063
16.02.2013-14:00:15 .919921875
16.02.2013-15:00:17 .459960938
16.02.2013-16:00:20 .889648438
16.02.2013-17:00:22 .629882813
16.02.2013-18:00:24 .549804688

wouldn’t it be nice to display these values as a graph? put the numbers into a file:

echo "16.02.2013-06:00:57 .829101563
16.02.2013-07:00:59 .629882813
16.02.2013-08:00:01 .309570313
16.02.2013-09:00:04   .1796875
16.02.2013-10:00:06 .329101563
16.02.2013-11:00:09 .279296875
16.02.2013-12:00:11 .119140625
16.02.2013-13:00:13 .649414063
16.02.2013-14:00:15 .919921875
16.02.2013-15:00:17 .459960938
16.02.2013-16:00:20 .889648438
16.02.2013-17:00:22 .629882813
16.02.2013-18:00:24 .549804688" > /tmp/graph1.lst
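
instead of copying the numbers by hand you could also spool them directly from sqlplus; a minimal sketch of a little script you might run with sqlplus -s:

set pagesize 0
set heading off
set feedback off
spool /tmp/graph1.lst
SELECT to_char(t.begin_interval_time, 'DD.MM.YYYY-HH24:MI:SS' ) || ' ' || o.value
  FROM dba_hist_osstat o
     , dba_hist_snapshot t
 WHERE o.dbid = 1664148667
   AND o.dbid = t.dbid
   AND o.snap_id = t.snap_id
   AND o.snap_id BETWEEN 8098 AND 8110
   AND o.stat_name = 'LOAD'
 ORDER BY o.snap_id;
spool off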

next, create a gnuplot definition:

echo "set terminal png 
set xlabel \"Time\"
set xdata time
set timefmt \"%d.%m.%Y-%H:%M:%S\"
set title 'OS-Load'
set format y \"%10.1f\" 
plot \"/tmp/graph1.lst\" using 1:2 with lines title 'OS Load'" > /tmp/graph1.plot

… create the png file:

gnuplot /tmp/graph1.plot > graph1.png

… and have a look at it:

display graph1.png

( image: server load graph )

for those who don’t know: dbms_xplan also works with a dbid, so you can generate execution plans for statements from the remote instance ( although the example below is not a very useful one ):

SELECT * 
 FROM TABLE ( dbms_xplan.display_awr ( '3s1hh8cvfan6w'
                                     , '2137789089'
                                     , 1664148667     -- remote_dbid
                                     , 'ADVANCED' ) );
Plan hash value: 2137789089

----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |       |       |    30 (100)|          |
|   1 |  COLLECTION ITERATOR PICKLER FETCH  | DISPLAY |  8168 | 16336 |    30   (4)| 00:00:01 |
----------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$F5BB74E1 / KOKBF$0@SEL$2

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      DB_VERSION('11.2.0.3')
      OPT_PARAM('query_rewrite_enabled' 'false')
      ALL_ROWS
      FORCE_XML_QUERY_REWRITE
      XML_DML_RWT_STMT
      XMLINDEX_REWRITE
      XMLINDEX_REWRITE_IN_SELECT
      NO_COST_XML_QUERY_REWRITE
      OUTLINE_LEAF(@"SEL$F5BB74E1")
      MERGE(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$F5BB74E1" "KOKBF$0"@"SEL$2")
      END_OUTLINE_DATA
  */
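
if you don’t know which sql_id to look at, the imported statistics can tell you that, too; a minimal sketch listing statements by elapsed time:

select sql_id, plan_hash_value, elapsed_time_delta
  from dba_hist_sqlstat
 where dbid = 1664148667
 order by elapsed_time_delta desc;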

combine all this into a set of scripts and you are ready to do some analysis even if you do not have access to the database. together with the power of gnuplot for creating nice graphs you are in a good position for doing initial troubleshooting.

( images: two more sample graphs generated out of the awr data )

the manage tablespace privilege

according to the documentation, granting the “manage tablespace” privilege to a user should allow them to:

  • take tablespaces offline and online
  • begin and end tablespace backups

let’s see if this is true:

SQL> create tablespace tbs1 datafile '/oradata/demo/dbf/tbs1_01.dbf' size 1m;
Tablespace created.
SQL> create user a identified by "a";
User created.
SQL> grant create session, manage tablespace to a;
Grant succeeded.
SQL> connect a/a
Connected.
SQL> alter tablespace tbs1 offline;
Tablespace altered.
SQL> alter tablespace tbs1 online;
Tablespace altered.
SQL> alter tablespace tbs1 begin backup;
Tablespace altered.
SQL> alter tablespace tbs1 end backup;
Tablespace altered.
SQL> 

but this is possible, too:

SQL> show user;
USER is "A"
SQL> alter tablespace tbs1 rename to tbs2;
Tablespace altered.
SQL> 

this is probably nothing you’d expect to work …

parallel execution of sql scripts with perl

when loading lots and lots of data into the database it is often better for performance to create the indexes and constraints after the load ( given the loaded data is consistent ). if there are some cpus available, doing this in parallel would be a nice option, so I started to look at what perl can do for me in this case. there is a little module available on cpan called Parallel::ForkManager. below is a simple perl script which executes four scripts in parallel using this module. this example uses postgresql, but if you want to use it with oracle you just replace the psql call with sqlplus and you’re done.

#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

# the set of sql-scripts to execute
my @sqlFiles = ( 'scripts/01.sql'
               , 'scripts/02.sql'
               , 'scripts/03.sql'
               , 'scripts/04.sql'
               );
# set the number of forks to the count of the sql script array
# do not set this greater than the number of CPUs available
my $pm = Parallel::ForkManager->new( scalar @sqlFiles );
foreach my $file ( @sqlFiles ) {
  $pm->start and next;       # do the fork
  # for oracle, replace the psql call with sqlplus
  my $result = `psql -f $file`;
  print "result: $result\n";
  $pm->finish;               # exit the child process
}
# wait until all children have finished their work
$pm->wait_all_children;

the sql scripts in this case just create an index on each column of a test table:

$ cat scripts/01.sql 
select pg_backend_pid();
create index i1 on t1(a);
$ cat scripts/02.sql 
select pg_backend_pid();
create index i2 on t1(b);
$ cat scripts/03.sql 
select pg_backend_pid();
create index i3 on t1(c);
$ cat scripts/04.sql 
select pg_backend_pid();
create index i4 on t1(d);

of course the scripts can contain anything else, not just index creation.

oracle background processes

if you are not sure what all these abbreviations stand for:

demo@orademo.local oracle:/home/oracle/adhocScripts $ ps -ef | grep ora_
oracle    2016     1  0 15:33 ?        00:00:28 ora_pmon_demo
oracle    2018     1  0 15:33 ?        00:00:49 ora_psp0_demo
oracle    2020     1 15 15:33 ?        00:39:48 ora_vktm_demo
oracle    2024     1  0 15:33 ?        00:00:19 ora_gen0_demo
oracle    2026     1  0 15:33 ?        00:00:21 ora_diag_demo
oracle    2028     1  0 15:33 ?        00:00:19 ora_dbrm_demo
oracle    2030     1  0 15:33 ?        00:01:14 ora_dia0_demo
oracle    2032     1  0 15:33 ?        00:00:18 ora_mman_demo
oracle    2034     1  0 15:33 ?        00:00:22 ora_dbw0_demo
oracle    2036     1  0 15:33 ?        00:00:27 ora_lgwr_demo
oracle    2038     1  0 15:33 ?        00:00:47 ora_ckpt_demo
oracle    2040     1  0 15:33 ?        00:00:12 ora_smon_demo
oracle    2042     1  0 15:33 ?        00:00:09 ora_reco_demo
oracle    2044     1  0 15:33 ?        00:00:45 ora_mmon_demo
oracle    2046     1  0 15:33 ?        00:01:20 ora_mmnl_demo
oracle    2054     1  0 15:34 ?        00:00:10 ora_qmnc_demo
oracle    2070     1  0 15:34 ?        00:00:09 ora_q001_demo
oracle    2099     1  0 15:39 ?        00:00:18 ora_smco_demo
oracle    2214     1  0 16:01 ?        00:00:10 ora_q002_demo
oracle    2811     1  0 19:39 ?        00:00:00 ora_w000_demo
oracle    2826  2760  0 19:43 pts/3    00:00:00 grep ora_

… just ask the database:

select NAME,DESCRIPTION from v$bgprocess order by 1;

NAME  DESCRIPTION
----- ----------------------------------------------------------------
ABMR  Auto BMR Background Process
ACFS  ACFS CSS
ACMS  Atomic Controlfile to Memory Server
ARB0  ASM Rebalance 0
ARB1  ASM Rebalance 1
ARB2  ASM Rebalance 2
ARB3  ASM Rebalance 3
ARB4  ASM Rebalance 4
ARB5  ASM Rebalance 5
ARB6  ASM Rebalance 6
ARB7  ASM Rebalance 7
ARB8  ASM Rebalance 8
ARB9  ASM Rebalance 9
ARBA  ASM Rebalance 10
ARC0  Archival Process 0
ARC1  Archival Process 1
ARC2  Archival Process 2
ARC3  Archival Process 3
ARC4  Archival Process 4
ARC5  Archival Process 5
ARC6  Archival Process 6
ARC7  Archival Process 7
ARC8  Archival Process 8
ARC9  Archival Process 9
ARCa  Archival Process 10
ARCb  Archival Process 11
ARCc  Archival Process 12
ARCd  Archival Process 13
ARCe  Archival Process 14
ARCf  Archival Process 15
ARCg  Archival Process 16
ARCh  Archival Process 17
ARCi  Archival Process 18
ARCj  Archival Process 19
ARCk  Archival Process 20
ARCl  Archival Process 21
ARCm  Archival Process 22
ARCn  Archival Process 23
ARCo  Archival Process 24
ARCp  Archival Process 25
ARCq  Archival Process 26
ARCr  Archival Process 27
ARCs  Archival Process 28
ARCt  Archival Process 29
ASMB  ASM Background
CJQ0  Job Queue Coordinator
CKPT  checkpoint
CTWR  Change Tracking Writer
DBRM  DataBase Resource Manager
DBW0  db writer process 0
DBW1  db writer process 1
DBW2  db writer process 2
DBW3  db writer process 3
DBW4  db writer process 4
DBW5  db writer process 5
DBW6  db writer process 6
DBW7  db writer process 7
DBW8  db writer process 8
DBW9  db writer process 9
DBWa  db writer process 10 (a)
DBWb  db writer process 11 (b)
DBWc  db writer process 12 (c)
DBWd  db writer process 13 (d)
DBWe  db writer process 14 (e)
DBWf  db writer process 15 (f)
DBWg  db writer process 16 (g)
DBWh  db writer process 17 (h)
DBWi  db writer process 18 (i)
DBWj  db writer process 19 (j)
DBWk  db writer process 20 (k)
DBWl  db writer process 21 (l)
DBWm  db writer process 22 (m)
DBWn  db writer process 23 (n)
DBWo  db writer process 24 (o)
DBWp  db writer process 25 (p)
DBWq  db writer process 26 (q)
DBWr  db writer process 27 (r)
DBWs  db writer process 28 (s)
DBWt  db writer process 29 (t)
DBWu  db writer process 30 (u)
DBWv  db writer process 31 (v)
DBWw  db writer process 32 (w)
DBWx  db writer process 33 (x)
DBWy  db writer process 34 (y)
DBWz  db writer process 35 (z)
DIA0  diagnosibility process 0
DIA1  diagnosibility process 1
DIA2  diagnosibility process 2
DIA3  diagnosibility process 3
DIA4  diagnosibility process 4
DIA5  diagnosibility process 5
DIA6  diagnosibility process 6
DIA7  diagnosibility process 7
DIA8  diagnosibility process 8
DIA9  diagnosibility process 9
DIAG  diagnosibility process
DMON  DG Broker Monitor Process
DSKM  slave DiSKMon process
EMNC  EMON Coordinator
FBDA  Flashback Data Archiver Process
FMON  File Mapping Monitor Process
FSFP  Data Guard Broker FSFO Pinger
GEN0  generic0
GMON  diskgroup monitor
GTX0  Global Txn process 0
GTX1  Global Txn process 1
GTX2  Global Txn process 2
GTX3  Global Txn process 3
GTX4  Global Txn process 4
GTX5  Global Txn process 5
GTX6  Global Txn process 6
GTX7  Global Txn process 7
GTX8  Global Txn process 8
GTX9  Global Txn process 9
GTXa  Global Txn process 10
GTXb  Global Txn process 11
GTXc  Global Txn process 12
GTXd  Global Txn process 13
GTXe  Global Txn process 14
GTXf  Global Txn process 15
GTXg  Global Txn process 16
GTXh  Global Txn process 17
GTXi  Global Txn process 18
GTXj  Global Txn process 19
INSV  Data Guard Broker INstance SlaVe Process
LCK0  Lock Process 0
LGWR  Redo etc.
LMD0  global enqueue service daemon 0
LMHB  lm heartbeat monitor
LMON  global enqueue service monitor
LMS0  global cache service process 0
LMS1  global cache service process 1
LMS2  global cache service process 2
LMS3  global cache service process 3
LMS4  global cache service process 4
LMS5  global cache service process 5
LMS6  global cache service process 6
LMS7  global cache service process 7
LMS8  global cache service process 8
LMS9  global cache service process 9
LMSa  global cache service process 10
LMSb  global cache service process 11
LMSc  global cache service process 12
LMSd  global cache service process 13
LMSe  global cache service process 14
LMSf  global cache service process 15
LMSg  global cache service process 16
LMSh  global cache service process 17
LMSi  global cache service process 18
LMSj  global cache service process 19
LMSk  global cache service process 20
LMSl  global cache service process 21
LMSm  global cache service process 22
LMSn  global cache service process 23
LMSo  global cache service process 24
LMSp  global cache service process 25
LMSq  global cache service process 26
LMSr  global cache service process 27
LMSs  global cache service process 28
LMSt  global cache service process 29
LMSu  global cache service process 30
LMSv  global cache service process 31
LMSw  global cache service process 32
LMSx  global cache service process 33
LMSy  global cache service process 34
LSP0  Logical Standby
LSP1  Dictionary build process for Logical Standby
LSP2  Set Guard Standby Information for Logical Standby
MARK  mark AU for resync koordinator
MMAN  Memory Manager
MMNL  Manageability Monitor Process 2
MMON  Manageability Monitor Process
MRP0  Managed Standby Recovery
NSA1  Redo transport NSA1
NSA2  Redo transport NSA2
NSA3  Redo transport NSA3
NSA4  Redo transport NSA4
NSA5  Redo transport NSA5
NSA6  Redo transport NSA6
NSA7  Redo transport NSA7
NSA8  Redo transport NSA8
NSA9  Redo transport NSA9
NSAA  Redo transport NSAA
NSAB  Redo transport NSAB
NSAC  Redo transport NSAC
NSAD  Redo transport NSAD
NSAE  Redo transport NSAE
NSAF  Redo transport NSAF
NSAG  Redo transport NSAG
NSAH  Redo transport NSAH
NSAI  Redo transport NSAI
NSAJ  Redo transport NSAJ
NSAK  Redo transport NSAK
NSAL  Redo transport NSAL
NSAM  Redo transport NSAM
NSAN  Redo transport NSAN
NSAO  Redo transport NSAO
NSAP  Redo transport NSAP
NSAQ  Redo transport NSAQ
NSAR  Redo transport NSAR
NSAS  Redo transport NSAS
NSAT  Redo transport NSAT
NSAU  Redo transport NSAU
NSAV  Redo transport NSAV
NSS1  Redo transport NSS1
NSS2  Redo transport NSS2
NSS3  Redo transport NSS3
NSS4  Redo transport NSS4
NSS5  Redo transport NSS5
NSS6  Redo transport NSS6
NSS7  Redo transport NSS7
NSS8  Redo transport NSS8
NSS9  Redo transport NSS9
NSSA  Redo transport NSSA
NSSB  Redo transport NSSB
NSSC  Redo transport NSSC
NSSD  Redo transport NSSD
NSSE  Redo transport NSSE
NSSF  Redo transport NSSF
NSSG  Redo transport NSSG
NSSH  Redo transport NSSH
NSSI  Redo transport NSSI
NSSJ  Redo transport NSSJ
NSSK  Redo transport NSSK
NSSL  Redo transport NSSL
NSSM  Redo transport NSSM
NSSN  Redo transport NSSN
NSSO  Redo transport NSSO
NSSP  Redo transport NSSP
NSSQ  Redo transport NSSQ
NSSR  Redo transport NSSR
NSSS  Redo transport NSSS
NSST  Redo transport NSST
NSSU  Redo transport NSSU
NSSV  Redo transport NSSV
NSV0  Data Guard Broker NetSlave Process 0
NSV1  Data Guard Broker NetSlave Process 1
NSV2  Data Guard Broker NetSlave Process 2
NSV3  Data Guard Broker NetSlave Process 3
NSV4  Data Guard Broker NetSlave Process 4
NSV5  Data Guard Broker NetSlave Process 5
NSV6  Data Guard Broker NetSlave Process 6
NSV7  Data Guard Broker NetSlave Process 7
NSV8  Data Guard Broker NetSlave Process 8
NSV9  Data Guard Broker NetSlave Process 9
NSVA  Data Guard Broker NetSlave Process A
NSVB  Data Guard Broker NetSlave Process B
NSVC  Data Guard Broker NetSlave Process C
NSVD  Data Guard Broker NetSlave Process D
NSVE  Data Guard Broker NetSlave Process E
NSVF  Data Guard Broker NetSlave Process F
NSVG  Data Guard Broker NetSlave Process G
NSVH  Data Guard Broker NetSlave Process H
NSVI  Data Guard Broker NetSlave Process I
NSVJ  Data Guard Broker NetSlave Process J
NSVK  Data Guard Broker NetSlave Process K
NSVL  Data Guard Broker NetSlave Process L
NSVM  Data Guard Broker NetSlave Process M
NSVN  Data Guard Broker NetSlave Process N
NSVO  Data Guard Broker NetSlave Process O
NSVP  Data Guard Broker NetSlave Process P
NSVQ  Data Guard Broker NetSlave Process Q
NSVR  Data Guard Broker NetSlave Process R
NSVS  Data Guard Broker NetSlave Process S
NSVT  Data Guard Broker NetSlave Process T
NSVU  Data Guard Broker NetSlave Process U
PING  interconnect latency measurement
PMON  process cleanup
PSP0  process spawner 0
QMNC  AQ Coordinator
RBAL  ASM Rebalance master
RCBG  Result Cache: Background
RECO  distributed recovery
RMS0  rac management server
RSM0  Data Guard Broker Resource Guard Process 0
RSMN  Remote Slave Monitor
RVWR  Recovery Writer
SMCO  Space Manager Process
SMON  System Monitor Process
VBG0  Volume BG 0
VBG1  Volume BG 1
VBG2  Volume BG 2
VBG3  Volume BG 3
VBG4  Volume BG 4
VBG5  Volume BG 5
VBG6  Volume BG 6
VBG7  Volume BG 7
VBG8  Volume BG 8
VBG9  Volume BG 9
VDBG  Volume Driver BG
VKRM  Virtual sKeduler for Resource Manager
VKTM  Virtual Keeper of TiMe process
VMB0  Volume Membership 0
XDMG  cell automation manager
XDWK  cell automation worker actions

295 rows selected.
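
note that v$bgprocess lists all background processes oracle knows about, not only the ones running on your instance. to restrict the output to started processes you could filter on the process address; a minimal sketch using the usual hextoraw trick:

select name, description
  from v$bgprocess
 where paddr <> hextoraw('00')
 order by name;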

statspack

most people know that they require a license to use all those nice reports which can be generated out of the statistical data stored in the automatic workload repository ( awr ). probably fewer people know that there still is an alternative which is free to use: statspack. although statspack is not as automated as awr and therefore requires a bit more work, it can still be used to generate reports which point you in the right direction in case you face performance issues.

so how do you work with it? as usual, the scripts are located in the rdbms/admin directory of your $ORACLE_HOME. statspack needs an initial setup to create all the required objects:

SQL>@?/rdbms/admin/spcreate.sql

the script will ask a few questions:

  • the password for the perfstat user
  • the tablespace to store the statistical data ( you should probably create a separate tablespace for this )
  • the temporary tablespace to use

that’s it. statspack is installed and ready to use, almost: with awr, snapshots are created automatically ( usually every hour, if you didn’t adjust the interval ), whereas with statspack the snapshots need to be created manually:

SQL> exec perfstat.statspack.snap;
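
by the way, the snap procedure accepts a snapshot level in case you need more detail; a minimal sketch ( level 7, which should additionally gather segment statistics ):

SQL> exec perfstat.statspack.snap(i_snap_level => 7);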

there is even a little script which creates a job for you if you want to automate the snapshots ( hourly, by default ):

SQL>@?/rdbms/admin/spauto.sql

of course you might want to adjust the script if hourly snapshots do not fit your case ( or adjust the job it created directly, as sketched below ).
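
a minimal sketch of adjusting the job directly; the job number 42 is just an assumption, look it up first:

SQL> select job, interval from dba_jobs where what like '%statspack.snap%';
SQL> -- e.g. switch to snapshots every 30 minutes
SQL> exec dbms_job.interval(42, 'sysdate + 1/48');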

so, once you have at least two snapshots available you are ready to create a report:

SQL>@?/rdbms/admin/spreport.sql

similar to the awr report, the script will ask for the begin and end snapshots as well as a name for the report, and will then create the report for you. you’ll notice the similarities to awr immediately, e.g. the top 5 timed events:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                        24          42.3
db file async I/O submit                            364         11     30   19.1
control file parallel write                         232          8     36   15.0
db file sequential read                             577          4      6    6.6
log file parallel write                              78          3     41    5.7
          -------------------------------------------------------------

ok, the output is not as nice as the html reports of awr, but this is still a lot of information to work with, and it’s free.

there are some other scripts around which assist in maintaining the statspack repository:

  • spdrop.sql: for dropping statspack
  • sppurge.sql: for dropping a range of snapshots
  • sprepins.sql: for reporting differences between snapshots
  • sptrunc.sql: for truncating the statspack repository

… to name a few of them. there are some more you might look at ( they all start with sp* ).
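
to get the complete list of those scripts you can simply look at the directory ( assuming ORACLE_HOME is set in your environment ):

SQL> !ls $ORACLE_HOME/rdbms/admin/sp*.sql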

listing session privileges

if you want to list the privileges of your current session in oracle you can do:

select * from session_privs;
PRIVILEGE
----------------------------------------
ALTER SYSTEM
AUDIT SYSTEM
CREATE SESSION
ALTER SESSION
RESTRICTED SESSION
CREATE TABLESPACE
ALTER TABLESPACE
...

… and the underlying query is this one:

select spm.name
  from sys.v$enabledprivs ep, system_privilege_map spm
 where spm.privilege = ep.priv_number;
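
and if you want to list the system privileges granted to another user instead of your own session, you could query dba_sys_privs; a minimal sketch ( the grantee is just an example ):

SQL> select privilege from dba_sys_privs where grantee = 'A' order by 1;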

a relational model of data

the previous post linked to the first sql implementation, called system r. that system was based on a paper by E. F. Codd, A Relational Model of Data for Large Shared Data Banks, which introduced the main concepts for relational database systems.

this is definitely worth reading as it introduced a lot of terms and basics still valid today.

system r

February 3, 2013

if you are interested in what today’s databases are based on, you might be interested in the following link:

A History and Evaluation of System R

for example, these requirements ( taken from the link above ) were defined back in the 1970s and are still valid today:

  1. To provide a high-level, nonnavigational user interface for maximum user productivity and data independence
  2. To support different types of database use including programmed transactions, ad hoc queries, and report generation
  3. To support a rapidly changing database environment, in which tables, indexes, views, transactions, and other objects could easily be added to and removed from the database without stopping the system.
  4. To support a population of many concurrent users, with mechanisms to protect the integrity of the database in a concurrent-update environment
  5. To provide a means of recovering the contents of the database to a consistent state after a failure of hardware or software
  6. To provide a flexible mechanism whereby different views of stored data can be defined and various users can be authorized to query and update these views
  7. To support all of the above functions with a level of performance comparable to existing lower-function database systems.