Archives For November 30, 1999

if you are not sure what all these abbreviations stand for:

demo@orademo.local oracle:/home/oracle/adhocScripts $ ps -ef | grep ora_
oracle    2016     1  0 15:33 ?        00:00:28 ora_pmon_demo
oracle    2018     1  0 15:33 ?        00:00:49 ora_psp0_demo
oracle    2020     1 15 15:33 ?        00:39:48 ora_vktm_demo
oracle    2024     1  0 15:33 ?        00:00:19 ora_gen0_demo
oracle    2026     1  0 15:33 ?        00:00:21 ora_diag_demo
oracle    2028     1  0 15:33 ?        00:00:19 ora_dbrm_demo
oracle    2030     1  0 15:33 ?        00:01:14 ora_dia0_demo
oracle    2032     1  0 15:33 ?        00:00:18 ora_mman_demo
oracle    2034     1  0 15:33 ?        00:00:22 ora_dbw0_demo
oracle    2036     1  0 15:33 ?        00:00:27 ora_lgwr_demo
oracle    2038     1  0 15:33 ?        00:00:47 ora_ckpt_demo
oracle    2040     1  0 15:33 ?        00:00:12 ora_smon_demo
oracle    2042     1  0 15:33 ?        00:00:09 ora_reco_demo
oracle    2044     1  0 15:33 ?        00:00:45 ora_mmon_demo
oracle    2046     1  0 15:33 ?        00:01:20 ora_mmnl_demo
oracle    2054     1  0 15:34 ?        00:00:10 ora_qmnc_demo
oracle    2070     1  0 15:34 ?        00:00:09 ora_q001_demo
oracle    2099     1  0 15:39 ?        00:00:18 ora_smco_demo
oracle    2214     1  0 16:01 ?        00:00:10 ora_q002_demo
oracle    2811     1  0 19:39 ?        00:00:00 ora_w000_demo
oracle    2826  2760  0 19:43 pts/3    00:00:00 grep ora_

… just ask the database:

select NAME,DESCRIPTION from v$bgprocess order by 1;

NAME  DESCRIPTION
----- ----------------------------------------------------------------
ABMR  Auto BMR Background Process
ACFS  ACFS CSS
ACMS  Atomic Controlfile to Memory Server
ARB0  ASM Rebalance 0
ARB1  ASM Rebalance 1
ARB2  ASM Rebalance 2
ARB3  ASM Rebalance 3
ARB4  ASM Rebalance 4
ARB5  ASM Rebalance 5
ARB6  ASM Rebalance 6
ARB7  ASM Rebalance 7
ARB8  ASM Rebalance 8
ARB9  ASM Rebalance 9
ARBA  ASM Rebalance 10
ARC0  Archival Process 0
ARC1  Archival Process 1
ARC2  Archival Process 2
ARC3  Archival Process 3
ARC4  Archival Process 4
ARC5  Archival Process 5
ARC6  Archival Process 6
ARC7  Archival Process 7
ARC8  Archival Process 8
ARC9  Archival Process 9
ARCa  Archival Process 10
ARCb  Archival Process 11
ARCc  Archival Process 12
ARCd  Archival Process 13
ARCe  Archival Process 14
ARCf  Archival Process 15
ARCg  Archival Process 16
ARCh  Archival Process 17
ARCi  Archival Process 18
ARCj  Archival Process 19
ARCk  Archival Process 20
ARCl  Archival Process 21
ARCm  Archival Process 22
ARCn  Archival Process 23
ARCo  Archival Process 24
ARCp  Archival Process 25
ARCq  Archival Process 26
ARCr  Archival Process 27
ARCs  Archival Process 28
ARCt  Archival Process 29
ASMB  ASM Background
CJQ0  Job Queue Coordinator
CKPT  checkpoint
CTWR  Change Tracking Writer
DBRM  DataBase Resource Manager
DBW0  db writer process 0
DBW1  db writer process 1
DBW2  db writer process 2
DBW3  db writer process 3
DBW4  db writer process 4
DBW5  db writer process 5
DBW6  db writer process 6
DBW7  db writer process 7
DBW8  db writer process 8
DBW9  db writer process 9
DBWa  db writer process 10 (a)
DBWb  db writer process 11 (b)
DBWc  db writer process 12 (c)
DBWd  db writer process 13 (d)
DBWe  db writer process 14 (e)
DBWf  db writer process 15 (f)
DBWg  db writer process 16 (g)
DBWh  db writer process 17 (h)
DBWi  db writer process 18 (i)
DBWj  db writer process 19 (j)
DBWk  db writer process 20 (k)
DBWl  db writer process 21 (l)
DBWm  db writer process 22 (m)
DBWn  db writer process 23 (n)
DBWo  db writer process 24 (o)
DBWp  db writer process 25 (p)
DBWq  db writer process 26 (q)
DBWr  db writer process 27 (r)
DBWs  db writer process 28 (s)
DBWt  db writer process 29 (t)
DBWu  db writer process 30 (u)
DBWv  db writer process 31 (v)
DBWw  db writer process 32 (w)
DBWx  db writer process 33 (x)
DBWy  db writer process 34 (y)
DBWz  db writer process 35 (z)
DIA0  diagnosibility process 0
DIA1  diagnosibility process 1
DIA2  diagnosibility process 2
DIA3  diagnosibility process 3
DIA4  diagnosibility process 4
DIA5  diagnosibility process 5
DIA6  diagnosibility process 6
DIA7  diagnosibility process 7
DIA8  diagnosibility process 8
DIA9  diagnosibility process 9
DIAG  diagnosibility process
DMON  DG Broker Monitor Process
DSKM  slave DiSKMon process
EMNC  EMON Coordinator
FBDA  Flashback Data Archiver Process
FMON  File Mapping Monitor Process
FSFP  Data Guard Broker FSFO Pinger
GEN0  generic0
GMON  diskgroup monitor
GTX0  Global Txn process 0
GTX1  Global Txn process 1
GTX2  Global Txn process 2
GTX3  Global Txn process 3
GTX4  Global Txn process 4
GTX5  Global Txn process 5
GTX6  Global Txn process 6
GTX7  Global Txn process 7
GTX8  Global Txn process 8
GTX9  Global Txn process 9
GTXa  Global Txn process 10
GTXb  Global Txn process 11
GTXc  Global Txn process 12
GTXd  Global Txn process 13
GTXe  Global Txn process 14
GTXf  Global Txn process 15
GTXg  Global Txn process 16
GTXh  Global Txn process 17
GTXi  Global Txn process 18
GTXj  Global Txn process 19
INSV  Data Guard Broker INstance SlaVe Process
LCK0  Lock Process 0
LGWR  Redo etc.
LMD0  global enqueue service daemon 0
LMHB  lm heartbeat monitor
LMON  global enqueue service monitor
LMS0  global cache service process 0
LMS1  global cache service process 1
LMS2  global cache service process 2
LMS3  global cache service process 3
LMS4  global cache service process 4
LMS5  global cache service process 5
LMS6  global cache service process 6
LMS7  global cache service process 7
LMS8  global cache service process 8
LMS9  global cache service process 9
LMSa  global cache service process 10
LMSb  global cache service process 11
LMSc  global cache service process 12
LMSd  global cache service process 13
LMSe  global cache service process 14
LMSf  global cache service process 15
LMSg  global cache service process 16
LMSh  global cache service process 17
LMSi  global cache service process 18
LMSj  global cache service process 19
LMSk  global cache service process 20
LMSl  global cache service process 21
LMSm  global cache service process 22
LMSn  global cache service process 23
LMSo  global cache service process 24
LMSp  global cache service process 25
LMSq  global cache service process 26
LMSr  global cache service process 27
LMSs  global cache service process 28
LMSt  global cache service process 29
LMSu  global cache service process 30
LMSv  global cache service process 31
LMSw  global cache service process 32
LMSx  global cache service process 33
LMSy  global cache service process 34
LSP0  Logical Standby
LSP1  Dictionary build process for Logical Standby
LSP2  Set Guard Standby Information for Logical Standby
MARK  mark AU for resync koordinator
MMAN  Memory Manager
MMNL  Manageability Monitor Process 2
MMON  Manageability Monitor Process
MRP0  Managed Standby Recovery
NSA1  Redo transport NSA1
NSA2  Redo transport NSA2
NSA3  Redo transport NSA3
NSA4  Redo transport NSA4
NSA5  Redo transport NSA5
NSA6  Redo transport NSA6
NSA7  Redo transport NSA7
NSA8  Redo transport NSA8
NSA9  Redo transport NSA9
NSAA  Redo transport NSAA
NSAB  Redo transport NSAB
NSAC  Redo transport NSAC
NSAD  Redo transport NSAD
NSAE  Redo transport NSAE
NSAF  Redo transport NSAF
NSAG  Redo transport NSAG
NSAH  Redo transport NSAH
NSAI  Redo transport NSAI
NSAJ  Redo transport NSAJ
NSAK  Redo transport NSAK
NSAL  Redo transport NSAL
NSAM  Redo transport NSAM
NSAN  Redo transport NSAN
NSAO  Redo transport NSAO
NSAP  Redo transport NSAP
NSAQ  Redo transport NSAQ
NSAR  Redo transport NSAR
NSAS  Redo transport NSAS
NSAT  Redo transport NSAT
NSAU  Redo transport NSAU
NSAV  Redo transport NSAV
NSS1  Redo transport NSS1
NSS2  Redo transport NSS2
NSS3  Redo transport NSS3
NSS4  Redo transport NSS4
NSS5  Redo transport NSS5
NSS6  Redo transport NSS6
NSS7  Redo transport NSS7
NSS8  Redo transport NSS8
NSS9  Redo transport NSS9
NSSA  Redo transport NSSA
NSSB  Redo transport NSSB
NSSC  Redo transport NSSC
NSSD  Redo transport NSSD
NSSE  Redo transport NSSE
NSSF  Redo transport NSSF
NSSG  Redo transport NSSG
NSSH  Redo transport NSSH
NSSI  Redo transport NSSI
NSSJ  Redo transport NSSJ
NSSK  Redo transport NSSK
NSSL  Redo transport NSSL
NSSM  Redo transport NSSM
NSSN  Redo transport NSSN
NSSO  Redo transport NSSO
NSSP  Redo transport NSSP
NSSQ  Redo transport NSSQ
NSSR  Redo transport NSSR
NSSS  Redo transport NSSS
NSST  Redo transport NSST
NSSU  Redo transport NSSU
NSSV  Redo transport NSSV
NSV0  Data Guard Broker NetSlave Process 0
NSV1  Data Guard Broker NetSlave Process 1
NSV2  Data Guard Broker NetSlave Process 2
NSV3  Data Guard Broker NetSlave Process 3
NSV4  Data Guard Broker NetSlave Process 4
NSV5  Data Guard Broker NetSlave Process 5
NSV6  Data Guard Broker NetSlave Process 6
NSV7  Data Guard Broker NetSlave Process 7
NSV8  Data Guard Broker NetSlave Process 8
NSV9  Data Guard Broker NetSlave Process 9
NSVA  Data Guard Broker NetSlave Process A
NSVB  Data Guard Broker NetSlave Process B
NSVC  Data Guard Broker NetSlave Process C
NSVD  Data Guard Broker NetSlave Process D
NSVE  Data Guard Broker NetSlave Process E
NSVF  Data Guard Broker NetSlave Process F
NSVG  Data Guard Broker NetSlave Process G
NSVH  Data Guard Broker NetSlave Process H
NSVI  Data Guard Broker NetSlave Process I
NSVJ  Data Guard Broker NetSlave Process J
NSVK  Data Guard Broker NetSlave Process K
NSVL  Data Guard Broker NetSlave Process L
NSVM  Data Guard Broker NetSlave Process M
NSVN  Data Guard Broker NetSlave Process N
NSVO  Data Guard Broker NetSlave Process O
NSVP  Data Guard Broker NetSlave Process P
NSVQ  Data Guard Broker NetSlave Process Q
NSVR  Data Guard Broker NetSlave Process R
NSVS  Data Guard Broker NetSlave Process S
NSVT  Data Guard Broker NetSlave Process T
NSVU  Data Guard Broker NetSlave Process U
PING  interconnect latency measurement
PMON  process cleanup
PSP0  process spawner 0
QMNC  AQ Coordinator
RBAL  ASM Rebalance master
RCBG  Result Cache: Background
RECO  distributed recovery
RMS0  rac management server
RSM0  Data Guard Broker Resource Guard Process 0
RSMN  Remote Slave Monitor
RVWR  Recovery Writer
SMCO  Space Manager Process
SMON  System Monitor Process
VBG0  Volume BG 0
VBG1  Volume BG 1
VBG2  Volume BG 2
VBG3  Volume BG 3
VBG4  Volume BG 4
VBG5  Volume BG 5
VBG6  Volume BG 6
VBG7  Volume BG 7
VBG8  Volume BG 8
VBG9  Volume BG 9
VDBG  Volume Driver BG
VKRM  Virtual sKeduler for Resource Manager
VKTM  Virtual Keeper of TiMe process
VMB0  Volume Membership 0
XDMG  cell automation manager
XDWK  cell automation worker actions

295 rows selected.
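v$bgprocess lists every background process oracle knows about, not only the ones actually running. a small sketch to restrict the output to the live processes: paddr points to the process address and is '00' for processes that are not started:

```sql
-- list only the background processes currently running in this instance
-- ( paddr is '00' for processes that are known but not started )
select name, description
  from v$bgprocess
 where paddr <> hextoraw('00')
 order by name;
```

on the demo instance above this should come back with roughly the same set of processes the ps listing showed.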

most people know that a license is required for using all those nice reports which can be generated from the statistical data stored in the automatic workload repository ( AWR ). probably fewer people know that there is still an alternative which is free to use: statspack. although statspack is not as automated as awr and therefore requires a bit more work, it can still be used to generate reports which can point you in the right direction in case you face performance issues.

so how do you work with it ?
as usual, the scripts are located in the rdbms/admin directory of your $ORACLE_HOME. statspack needs an initial setup to create all the objects required:

SQL>@?/rdbms/admin/spcreate.sql

the script will ask a few questions:

  • the password for the perfstat user
  • the tablespace to store the statistical data ( you should probably create a separate tablespace for this )
  • the temporary tablespace to use

that’s it. statspack is installed and almost ready to use. with awr, snapshots are created automatically ( usually every hour if you didn’t adjust the interval ). with statspack the snapshots need to be created manually:

SQL> exec perfstat.statspack.snap;

there is even a little script which creates a job for you if you want to automate the snapshots ( hourly, by default ):

SQL>@?/rdbms/admin/spauto.sql

of course you might want to adjust the script if hourly snapshots do not fit in your case.
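instead of editing the script you can also adjust the job it created, as spauto.sql uses dbms_job under the hood. a minimal sketch, assuming the job number reported by spauto.sql was 1:

```sql
-- connected as perfstat: find the job spauto.sql created
select job, interval from user_jobs;

-- change it to take a snapshot every 30 minutes ( job number 1 assumed )
exec dbms_job.interval(1, 'trunc(SYSDATE+1/48,''MI'')');
commit;
```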

so, once you have at least two snapshots available you are ready to create a report:

SQL>@?/rdbms/admin/spreport.sql

similar to the awr report the script will ask for the begin and end snapshots as well as a name for the report and then will create the report for you. you’ll notice the similarities to awr immediately, e.g. the top 5 timed events:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
CPU time                                                        24          42.3
db file async I/O submit                           364          11     30   19.1
control file parallel write                        232           8     36   15.0
db file sequential read                            577           4      6    6.6
log file parallel write                             78           3     41    5.7
          -------------------------------------------------------------

ok, the output is not as nice as the html reports of awr, but this is still a lot of information to work with, and it’s free.

there are some other scripts around which assist in maintaining the statspack repository:

  • spdrop.sql: for dropping statspack
  • sppurge.sql: for dropping a range of snapshots
  • sprepins.sql: for reporting differences between snapshots
  • sptrunc.sql: for truncating the statspack repository

… to name a few of them. there are some more which you might look at ( they all start with sp* ).

if you want to list the privileges for your current session in oracle you can do:

select * from session_privs;
PRIVILEGE
----------------------------------------
ALTER SYSTEM
AUDIT SYSTEM
CREATE SESSION
ALTER SESSION
RESTRICTED SESSION
CREATE TABLESPACE
ALTER TABLESPACE
...

… and the underlying query is this one:

select spm.name
from sys.v$enabledprivs ep, system_privilege_map spm
where spm.privilege = ep.priv_number;
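note that session_privs only covers your own session. if you have access to the dba views you can check the system privileges granted directly to any user ( privileges received via roles show up in dba_role_privs instead ):

```sql
-- direct system privilege grants for a given user ( SCOTT is just an example )
select privilege, admin_option
  from dba_sys_privs
 where grantee = 'SCOTT'
 order by privilege;
```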

the previous post linked to the first sql implementation, called system r. this system was based on a paper called A Relational Model of Data for Large Shared Data Banks by E. F. Codd, which introduced the main concepts for relational database systems.

this is definitely worth reading as it introduced a lot of terms and basics still valid today.

system r

February 3, 2013

if you are interested in what today’s databases are based on you might be interested in the following link:

A History and Evaluation of System R

for example these requirements ( taken from the link above ) have been defined back in the 1970s and are still valid today:

  1. To provide a high-level, nonnavigational user interface for maximum user productivity and data independence
  2. To support different types of database use including programmed transactions, ad hoc queries, and report generation
  3. To support a rapidly changing database environment, in which tables, indexes, views, transactions, and other objects could easily be added to and removed from the database without stopping the system
  4. To support a population of many concurrent users, with mechanisms to protect the integrity of the database in a concurrent-update environment
  5. To provide a means of recovering the contents of the database to a consistent state after a failure of hardware or software
  6. To provide a flexible mechanism whereby different views of stored data can be defined and various users can be authorized to query and update these views
  7. To support all of the above functions with a level of performance comparable to existing lower-function database systems.

as with oracle, postgresql relies on various statistics to produce the execution plan for a query. the catalog table which stores the statistical information is pg_statistic.

for the purpose of this post let’s play with a simple table and see what gets stored in the pg_statistic table.

drop table if exists t1;
create table t1 ( a int );

I will always use the same query for producing the output from the pg_statistic table:

select pa.attname
     , ps.stawidth
     , ps.stadistinct
     , ps.stanullfrac
     , ps.stavalues1
     , ps.stavalues2
     , ps.stavalues3
     , ps.stavalues4
     , ps.stavalues5
  from pg_class pc
     , pg_statistic ps
     , pg_attribute pa
 where pc.relname = 't1'
   and pc.relowner = ( select nspowner 
                         from pg_namespace
                        where nspname = 'public' )
   and pc.oid = ps.starelid
   and pa.attnum = ps.staattnum
   and pa.attrelid = pc.oid;

If you run the query right now it will not return any rows, as pg_statistic only gets populated once analyze has been run and at least one row has been inserted into the table ( but not before ).

--exec statistics-query from above:
 attname | stawidth | stadistinct | stanullfrac | stavalues1 | stavalues2 | stavalues3 | stavalues4 | stavalues5 
---------+----------+-------------+-------------+------------+------------+------------+------------+------------
(0 rows)

So let’s create a row and see what’s happening:

insert into t1 (a) values (1);
analyze t1;
--exec statistics-query from above:
 attname | stawidth | stadistinct | stanullfrac | stavalues1 | stavalues2 | stavalues3 | stavalues4 | stavalues5 
---------+----------+-------------+-------------+------------+------------+------------+------------+------------
 a       |        4 |          -1 |           0 |      |      |      |      | 
(1 row)

what does this tell us ?

  • the column “a” is reported to have a width of 4, which corresponds to the definition of the integer data type
  • distinct values is reported as “-1”, which seems a little bit surprising for the moment as it should be 1, shouldn’t it ? in fact a negative stadistinct is relative: it is the negated fraction of rows with a distinct value, so “-1” means every row holds a distinct value
  • the number of null values is zero, of course
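the relative, negative form of stadistinct can be turned back into an absolute number by multiplying it with the row count from pg_class. a small sketch based on the query from above:

```sql
-- a negative stadistinct is a fraction of the row count;
-- multiply by reltuples to get the estimated number of distinct values
select pa.attname
     , case when ps.stadistinct < 0
            then round((-ps.stadistinct * pc.reltuples)::numeric)
            else ps.stadistinct::numeric
       end as estimated_distinct
  from pg_class pc
  join pg_statistic ps on ps.starelid = pc.oid
  join pg_attribute pa on pa.attrelid = pc.oid
                      and pa.attnum  = ps.staattnum
 where pc.relname = 't1';
```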

let’s create another row and have a look at the results:

insert into t1 (a) values (2);
analyze t1;
--exec statistics-query from above:
 attname | stawidth | stadistinct | stanullfrac | stavalues1 | stavalues2 | stavalues3 | stavalues4 | stavalues5 
---------+----------+-------------+-------------+------------+------------+------------+------------+------------
 a       |        4 |          -1 |           0 | {1,2}      |      |      |      | 

the only column that changed is the stavalues1 column. as soon as there is more than one row in the table this column gets populated. distinct values is still reported as “-1” as each row in the table has a different value. this will not change as long as you insert unique values and not too many of them ( more on this later ):

insert into t1 (a) values (generate_series(3,10));
analyze t1;
--exec statistics-query from above:
 attname | stawidth | stadistinct | stanullfrac |       stavalues1       | stavalues2 | stavalues3 | stavalues4 | stavalues5 
---------+----------+-------------+-------------+------------------------+------------+------------+------------+------------
 a       |        4 |          -1 |           0 | {1,2,3,4,5,6,7,8,9,10} |      |      |      | 
(1 row)

once you insert the same values again the stadistinct column gets updated to reflect the change in the data distribution:

insert into t1 (a) values (generate_series(1,10));
analyze t1;
--exec statistics-query from above:
 attname | stawidth | stadistinct | stanullfrac |       stavalues1       | stavalues2 | stavalues3 | stavalues4 | stavalues5 
---------+----------+-------------+-------------+------------------------+------------+------------+------------+------------
 a       |        4 |        -0.5 |           0 | {1,2,3,4,5,6,7,8,9,10} |      |      |      | 

now there is a value of “-0.5” which essentially tells us that there are about 2 entries for each distinct value. we can do the same again and the stadistinct value decreases again to match the data ( around three occurrences for each distinct value ):

insert into t1 (a) values (generate_series(1,10));
analyze t1;
--exec statistics-query from above:
 attname | stawidth | stadistinct | stanullfrac |       stavalues1       | stavalues2 | stavalues3 | stavalues4 | stavalues5 
---------+----------+-------------+-------------+------------------------+------------+------------+------------+------------
 a       |        4 |   -0.333333 |           0 | {1,2,3,4,5,6,7,8,9,10} |      |      |      | 
(1 row)

as it looks right now postgresql stores all the values of a column a second time in the pg_statistic catalog table. as this would be bad practice, let’s see if this changes once we insert more data:

insert into t1 (a) values (generate_series(11,10000));
analyze t1;
--exec statistics-query from above:
 attname | stawidth | stadistinct | stanullfrac |       stavalues1       | stavalues2 | stavalues3 | stavalues4 | stavalues5 
---------+----------+-------------+-------------+------------------------+------------+------------+------------+------------
 a       |        4 |   -0.998004 |           0 | {1,2,3,4,5,6,7,8,9,10} | {11,110,210,310,410,510,610,710,810,910,1009,1109,1209,1309,1409,1509,1609,1709,1809,1908,2008,2108,2208,2308,2408,2508,2608,2708,2807,2907,3007,3107,3207,3307,3407,3507,3607,3706,3806,3906,4006,4106,4206,4306,4406,4506,4605,4705,4805,4905,5005,5105,5205,5305,5405,5504,5604,5704,5804,5904,6004,6104,6204,6304,6403,6503,6603,6703,6803,6903,7003,7103,7203,7302,7402,7502,7602,7702,7802,7902,8002,8102,8201,8301,8401,8501,8601,8701,8801,8901,9001,9100,9200,9300,9400,9500,9600,9700,9800,9900,10000} |            |            | 
(1 row)

now we get the next stavalues column populated, holding around a hundred values roughly evenly spaced over the data. the first stavalues column remains unchanged in this example.

in oracle terms the stavaluesN columns are similar to a histogram. it’s a statistical representation of the data distribution which is then used to generate execution plans. more to come …
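how many entries end up in the stavalues columns is bounded by the statistics target, which defaults to 100 ( that is why the histogram above holds around a hundred values ). it can be changed globally or per column:

```sql
-- the default number of histogram entries / most common values
show default_statistics_target;

-- collect more fine grained statistics for a single column
alter table t1 alter column a set statistics 500;
analyze t1;
```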

just noticed another difference between postgres and oracle:

oracle:

SQL> create table t1 ( a number(10,2 ) );
Table created.
SQL> create view v1 as select a from t1;
View created.
SQL> alter table t1 modify ( a number(11,2));
Table altered.

postgresql:

[postgres] > create table t1 ( a numeric(10,2) );
CREATE TABLE
[postgres] > create view v1 as select a from t1;
CREATE VIEW
[postgres] > alter table t1 alter column a type numeric(11,2);
ERROR:  cannot alter type of a column used by a view or rule
DETAIL:  rule _RETURN on view v1 depends on column "a"

the same is true if you want to drop a table which has a view defined on it:
oracle:

SQL> drop table t1;
Table dropped.

postgres:

[postgres] > drop table t1;
ERROR:  cannot drop table t1 because other objects depend on it
DETAIL:  view v1 depends on table t1
HINT:  Use DROP ... CASCADE to drop the dependent objects too.
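as the HINT suggests, postgresql will happily drop the table together with the dependent view once you say so explicitly:

```sql
-- drops t1 and every object depending on it, the view v1 included
drop table t1 cascade;
```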

just noticed that you can disable table locks and thus prevent ddl on a table in oracle.

create table t1 ( a number );
alter table t1 disable table lock;

according to the documentation this should prevent all ddls on this table. let’s see if it works:

alter table t1 modify ( a number(1,0 ));
alter table t1 modify ( a number(1,0 ))
*
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for T1

works if I try to modify a column definition. can you drop the table ?

drop table t1;
drop table t1
           *
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for T1

ok, for dropping, too. truncating ?

truncate table t1;
truncate table t1
               *
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for T1

what about adding a column ?

alter table t1 add ( b number );
Table altered.

hm … not sure if this is a bug, but I think so ( this is 11.2.0.3 ), and an alter statement for sure is ddl.

on a 10.2.0.4 this does not work:

select version from v$instance;
VERSION
-----------------
10.2.0.4.0
alter table t1 add ( b number );
alter table t1 add ( b number )
*
ERROR at line 1:
ORA-00069: cannot acquire lock -- table locks disabled for T1
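to get ddl working again, the table lock simply needs to be re-enabled:

```sql
-- re-enable table locks so ddl on t1 is possible again
alter table t1 enable table lock;
```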

when working with oracle there is exactly one numeric data type one may use for storing numbers ( and that’s number ). in postgresql there are ten. what happened when we migrated an oracle database to postgresql was that a source table in oracle took much less space than the same table in postgresql. thanks to some people on the pgsql-performance mailing list it turned out that one of the reasons for this was that the wrong numeric data type had been chosen for storing the integers in postgresql.

a simple test-case:

drop table datatype1;
drop table datatype2;
create table datatype1
( a numeric ( 8,0 )
, b numeric ( 8,0 )
);
insert into datatype1 ( a, b )
       values ( generate_series ( 1, 10000000 )
              , generate_series ( 1, 10000000 )
              );
create index idatatype1_1 on datatype1 ( a );
create index idatatype1_2 on datatype1 ( b );
create table datatype2
( a int
, b int
);
insert into datatype2 ( a, b )
       values ( generate_series ( 1, 10000000 )
              , generate_series ( 1, 10000000 )
             );
create index idatatype2_1 on datatype2 ( a );
create index idatatype2_2 on datatype2 ( b );
analyze verbose datatype1;
analyze verbose datatype2;
select pg_size_pretty ( pg_relation_size ( 'datatype1' ) );
 pg_size_pretty 
----------------
 422 MB
(1 row)
select pg_size_pretty ( pg_relation_size ( 'datatype2' ) );
 pg_size_pretty 
----------------
 346 MB
(1 row)

for these two little tables the difference is about 76mb. depending on the statements this is 76mb more that needs to be scanned, and this can have a great impact on performance. surprisingly, at least for me, this is not true for the indexes:

select pg_size_pretty ( pg_relation_size ( 'idatatype1_1' ) );
 pg_size_pretty 
----------------
 214 MB
(1 row)
select pg_size_pretty ( pg_relation_size ( 'idatatype2_1' ) );
 pg_size_pretty 
----------------
 214 MB
(1 row)
select pg_size_pretty ( pg_relation_size ( 'idatatype2_2' ) );
 pg_size_pretty 
----------------
 214 MB
(1 row)

so, it’s worth keeping an eye on which data types to use for numbers …
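the per-value overhead of numeric can be made visible with pg_column_size: an integer is always 4 bytes, while a numeric carries a variable-length header plus its digits:

```sql
-- compare the on-disk size of a single value in both data types
select pg_column_size(12345678::int)          as int_bytes
     , pg_column_size(12345678::numeric(8,0)) as numeric_bytes;
```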

tricky situation: planned to apply a patch to an oracle home and opatch fails with:

OPatch failed to locate Central Inventory.
Possible causes are: 
    The Central Inventory is corrupted
    The oraInst.loc file specified is not valid.
LsInventorySession failed: OPatch failed to locate Central Inventory.
Possible causes are: 
    The Central Inventory is corrupted
    The oraInst.loc file specified is not valid.

obviously the first thing to check is if the pointer ( oraInst.loc ) points to the correct location and the inventory group is listed correctly:

cat /etc/oraInst.loc 
inventory_loc=/opt/oracle/oraInventory
inst_group=oinstall

this is what it should look like assuming oinstall is really the inventory group and your inventory location is really /opt/oracle/oraInventory. so this part is fine but somehow the oraInventory got lost:

ls -la /opt/oracle/
total 312
drwxrwx--- 4 grid   oinstall   4096 Sep 19 21:54 .
drwxr-xr-x 4 root   root       4096 Mar 23 08:52 ..
drwxrwx--- 4 grid   oinstall   4096 Mar 23 08:43 product
drwxrwx--- 3 grid   oinstall   4096 May  3 23:52 stage

what to do ? restoring it from backup would be one solution ( you do regular backups of your inventory, don’t you ? ). let’s assume no backup is available. you may still restore the inventory using the oracle installer available in the oracle home you want to register:

ls -la $ORACLE_HOME/oui
total 60
drwxr-xr-x  8 oracle oinstall 4096 Mar 26 14:23 .
drwxrwx--- 76 oracle oinstall 4096 May  3 23:17 ..
-rwxr-xr-x  1 oracle oinstall  323 Feb 17  2007 admin_langs.xml
drwxr-xr-x  3 oracle oinstall 4096 Mar 26 14:23 bin
-rw-r-----  1 oracle oinstall 6680 Mar 26 14:20 clusterparam.ini
drwxr-xr-x  2 oracle oinstall 4096 Mar 26 14:20 instImages
drwxr-xr-x  4 oracle oinstall 4096 Mar 26 14:20 jlib
drwxr-xr-x  3 oracle oinstall 4096 Mar 26 14:23 lib
-rwxr-xr-x  1 oracle oinstall 2110 Feb 17  2007 nlsrtlmap.xml
-rw-r-----  1 oracle oinstall 6782 Mar 26 14:20 oraparam.ini
drwxr-xr-x  3 oracle oinstall 4096 Mar 26 14:16 prov
-rwxr-xr-x  1 oracle oinstall  748 Feb 17  2007 runtime_langs.xml
drwxr-xr-x  2 oracle oinstall 4096 Mar 26 14:20 schema

even if the inventory is completely lost, a simple call to the oracle installer will recreate it and register the ORACLE_HOME:

$ORACLE_HOME/oui/bin/runInstaller -silent -attachHome ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=H11203

if successful you should see the following output:

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 2000 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/oracle/oraInventory
'AttachHome' was successful.

… and opatch will succeed from now on.
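and to avoid the whole exercise next time: a minimal sketch of backing up the central inventory before patching ( the path is taken from the oraInst.loc above; adjust INVENTORY_LOC to your environment ):

```shell
# back up the central inventory before the next patching session
# ( INVENTORY_LOC is whatever your oraInst.loc points to )
INVENTORY_LOC=${INVENTORY_LOC:-/opt/oracle/oraInventory}
if [ -d "$INVENTORY_LOC" ]; then
    tar czf "/tmp/oraInventory-$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$INVENTORY_LOC")" "$(basename "$INVENTORY_LOC")"
    echo "inventory backed up"
else
    echo "no inventory at $INVENTORY_LOC"
fi
```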