just a short notice: the postgresql people released a security update for the current version ( 9.1 ) and some of the older versions. see the announcement for more information.

recently a customer requested the export of a column to a flat file. nothing special, but the column had to be encrypted, as the data needed to be delivered to a third party. as you might know, oracle provides the dbms_obfuscation_toolkit package, which can be used for requests like this. so, here is an example ( as wordpress does not allow uploading scripts, sorry for the long paste :) ):

/**
Export a column for a given table and schema in encrypted format to csv
PARAMETERS: OWNER
            TABLE
            COLUMN
            ENCRYPTION-KEY
**/
SET LINES 164 VERIFY OFF PAGES 999 SERVEROUT ON FEEDBACK OFF
VARIABLE owner VARCHAR2(30)
VARIABLE table_name VARCHAR2(30)
VARIABLE column_name VARCHAR2(30)
VARIABLE encryption_key VARCHAR2(100)
VARIABLE continue VARCHAR2(1);
EXEC dbms_output.put_line ( chr(13) );
EXEC dbms_output.put_line ( '------------------------------------------------------' );
EXEC dbms_output.put_line ( '-- INFO: Starting encrypted column export  ');
EXEC dbms_output.put_line ( chr(13) );
/**
do some sanity checks
**/
DECLARE
  lv_owner all_users.username%TYPE;
  lv_directory all_directories.directory_path%TYPE;
  FUNCTION schema_exists ( pv_schema IN all_users.username%TYPE )
                         RETURN BOOLEAN
  IS
    CURSOR cur_schema
    IS SELECT 'ok'
         FROM all_users
        WHERE username = pv_schema
    ;
    lv_cursor_result VARCHAR2(2);
  BEGIN
    OPEN cur_schema;
      FETCH cur_schema INTO lv_cursor_result;
    CLOSE cur_schema;
    RETURN NVL ( lv_cursor_result, 'xx' ) = 'ok';
  END schema_exists;
  FUNCTION table_exists ( pv_schema IN all_users.username%TYPE
                        , pv_table IN all_tables.table_name%TYPE
                        ) RETURN BOOLEAN
  IS
    CURSOR cur_table
    IS SELECT 'ok'
         FROM all_tables
        WHERE owner = pv_schema
          AND table_name = pv_table
    ;
    lv_cursor_result VARCHAR2(2);
  BEGIN
    OPEN cur_table;
      FETCH cur_table INTO lv_cursor_result;
    CLOSE cur_table;
    RETURN NVL ( lv_cursor_result, 'xx' ) = 'ok';
  END table_exists;
  FUNCTION column_exists ( pv_schema IN all_users.username%TYPE
                         , pv_table IN all_tables.table_name%TYPE
                         , pv_column IN all_tab_columns.column_name%TYPE
                         ) RETURN BOOLEAN
  IS
    CURSOR cur_column
    IS SELECT 'ok'
         FROM all_tab_columns
        WHERE owner = pv_schema
          AND table_name = pv_table
          AND column_name = pv_column
    ;
    lv_cursor_result VARCHAR2(2);
  BEGIN
    OPEN cur_column;
      FETCH cur_column INTO lv_cursor_result;
    CLOSE cur_column;
    RETURN NVL ( lv_cursor_result, 'xx' ) = 'ok';
  END column_exists;
  FUNCTION exp_directory_exists RETURN BOOLEAN
  IS
    CURSOR cur_directory
    IS SELECT directory_path
         FROM all_directories
        WHERE directory_name = 'DATA_PUMP_DIR'
    ;
  BEGIN
    OPEN cur_directory;
      FETCH cur_directory INTO lv_directory;
    CLOSE cur_directory;
    RETURN lv_directory IS NOT NULL;
  END exp_directory_exists;
BEGIN
   :owner := UPPER('&1');
   :table_name := UPPER('&2'); 
   :column_name := UPPER('&3');
   :encryption_key := '&4';
   IF :encryption_key IS NULL
      OR
      -- encryption key must be at least 80 characters
      length ( :encryption_key ) < 80
   THEN
     raise_application_error ( -20001
                             , '--- CHECK INPUT: The encryption key "'||:encryption_key||'" is too short or empty ! ');
   END IF;
   IF NOT ( schema_exists ( pv_schema => :owner ) )
   THEN
     raise_application_error ( -20001
                             , '--- CHECK INPUT: The schema '||:owner||' does not exist in this database ! ');
   END IF;
   IF NOT ( table_exists ( pv_schema => :owner
                         , pv_table => :table_name
                         ) )
   THEN
     raise_application_error ( -20001
                             , '--- CHECK INPUT: The table '||:table_name||' does not exist for the schema '||:owner||' ! ');
   END IF;
   IF NOT ( column_exists ( pv_schema => :owner
                          , pv_table => :table_name
                          , pv_column => :column_name
                          ) )
   THEN
     raise_application_error ( -20001
                             , '--- CHECK INPUT: The column '||:column_name||' does not exist for the table '||:table_name||' in schema '||:owner||'! ');
   END IF;
   IF NOT ( exp_directory_exists )
   THEN
     raise_application_error ( -20001
                             , '--- CHECK DIRECTORY: Can not find directory DATA_PUMP_DIR ! ');
   ELSE
     dbms_output.put_line (' --- INFO: File will be located here : '||lv_directory );
   END IF;
   -- this point is only reached when all checks passed ( raise_application_error aborts the block )
   :continue := 'Y';
END;
/
DECLARE
  lv_owner all_users.username%TYPE := UPPER(:owner);
  lv_table all_tables.table_name%TYPE := UPPER(:table_name);
  lv_column all_tab_columns.column_name%TYPE := UPPER(:column_name);
  lt_file_type utl_file.file_type;
  lv_statement VARCHAR2(4000);
  ln_cursor NUMBER;
  ln_amount_of_rows INTEGER := 0;
  lv_value VARCHAR2(4000);
  lv_seed VARCHAR2(4000) := :encryption_key;
BEGIN
  IF ( :continue = 'Y' ) 
  THEN
    dbms_output.put_line (' --- INFO: selected schema is : '||lv_owner );
    dbms_output.put_line (' --- INFO: selected table is  : '||lv_table );
    dbms_output.put_line (' --- INFO: selected column is : '||lv_column );
    dbms_output.put_line (' --- INFO: exporting to       : DATA_PUMP_DIR ');
    -- open file for writing
    lt_file_type := utl_file.fopen ( location => 'DATA_PUMP_DIR'
                                   , filename => 'exp_'||lv_owner||'_'||lv_table||'_'||lv_column||'_'
                                                   -- avoid blanks and colons in the filename
                                                   ||TO_CHAR(sysdate,'DD.MM.YYYY_HH24.MI.SS')||'.csv'
                                   , open_mode => 'w'
                                   );
    lv_statement := 'SELECT '||lv_column||' FROM '||lv_owner||'.'||lv_table;
    dbms_output.put_line (' --- INFO: statement is: '||lv_statement );
    -- prepare statement
    ln_cursor := dbms_sql.open_cursor;
    dbms_sql.parse ( c => ln_cursor
                   , statement => lv_statement
                   , language_flag => dbms_sql.v7 
                   );
    dbms_sql.define_column ( c => ln_cursor
                           , position => 1
                           , column => lv_column
                           , column_size => 4000
                           );
    ln_amount_of_rows := dbms_sql.execute ( c => ln_cursor );
    -- for queries the return value of execute is undefined, so reset the row counter
    ln_amount_of_rows := 0;
    LOOP
      -- fetch all the rows
      IF dbms_sql.fetch_rows ( c => ln_cursor ) = 0
      THEN
        EXIT;
      ELSE
        ln_amount_of_rows := ln_amount_of_rows + 1;
        -- get the column value
        dbms_sql.column_value ( c => ln_cursor
                              , position => 1
                              , value => lv_value 
                              ); 
        -- input must be a multiple of 8 bytes, so rpadding with blanks
        dbms_obfuscation_toolkit.desencrypt ( input_string => rpad ( lv_value
                                                                   , ( trunc ( length ( lv_value ) / 8 ) + 1 ) * 8
                                                                   , chr(0)
                                                                   )
                                            , key_string => lv_seed 
                                            , encrypted_string  => lv_value
                                            );
        -- write to output file ( the implicit RAW to VARCHAR2 conversion stores the hex representation )
        utl_file.put_line ( file => lt_file_type
                          , buffer => utl_raw.cast_to_raw ( lv_value )
                          );
      END IF;
    END LOOP;
    -- close cursor
    dbms_sql.close_cursor ( c => ln_cursor );
    -- close file handle
    utl_file.fclose ( file => lt_file_type );
    dbms_output.put_line (' --- Exported '||ln_amount_of_rows||' rows !!! ');
  END IF;
END;
/

for decrypting one could use this snippet:

/**
Decrypt the encrypted data in the exported csv files
**/
SET LINES 164 VERIFY OFF PAGES 999 SERVEROUT ON FEEDBACK OFF
VARIABLE filename VARCHAR2(4000)
VARIABLE encryption_key VARCHAR2(100)
VARIABLE continue VARCHAR2(1);
EXEC dbms_output.put_line ( chr(13) );
EXEC dbms_output.put_line ( '------------------------------------------------------' );
EXEC dbms_output.put_line ( '-- INFO: Starting decryption  ');
EXEC dbms_output.put_line ( chr(13) );
/**
do some sanity checks
**/
DECLARE
  lv_directory all_directories.directory_path%TYPE;
  FUNCTION exp_directory_exists RETURN BOOLEAN
  IS
    CURSOR cur_directory
    IS SELECT directory_path
         FROM all_directories
        WHERE directory_name = 'DATA_PUMP_DIR'
    ;
  BEGIN
    OPEN cur_directory;
      FETCH cur_directory INTO lv_directory;
    CLOSE cur_directory;
    RETURN lv_directory IS NOT NULL;
  END exp_directory_exists;
BEGIN
  :filename := '&1';
  :encryption_key := '&2';
  IF :encryption_key IS NULL OR length ( :encryption_key ) < 80
  THEN
     raise_application_error ( -20001
                             , '--- CHECK INPUT: The encryption key "'||:encryption_key||'" is too short or empty ! ');
  END IF;
  IF NOT ( exp_directory_exists )
  THEN
    raise_application_error ( -20001
                            , '--- CHECK DIRECTORY: Can not find directory DATA_PUMP_DIR ! ');
  ELSE
    dbms_output.put_line (' --- INFO: File to be loaded : '||lv_directory );
  END IF;
  -- this point is only reached when all checks passed ( raise_application_error aborts the block )
  :continue := 'Y';
END;
/
DECLARE
  lt_file_type utl_file.file_type;
  lv_buffer VARCHAR2(4000);
  lv_buffer_raw RAW(4000);
  lraw_decrypted_value RAW(4000);
  lv_value RAW(32767);
BEGIN
  IF ( :continue = 'Y' ) 
  THEN
    -- open file handle
    lt_file_type := utl_file.fopen ( location => 'DATA_PUMP_DIR'
                                   , filename => :filename
                                   , open_mode => 'r'
                          --         , max_linesize => 4000
                                   );
    -- read through the file line by line
    LOOP
      BEGIN
        utl_file.get_line ( file => lt_file_type
                         ,  buffer => lv_buffer
                         );
        -- implicit conversion: the hex string read from the file becomes a RAW value
        lv_buffer_raw := lv_buffer;
        dbms_obfuscation_toolkit.desdecrypt ( input => lv_buffer_raw
                                            , key => utl_raw.cast_to_raw ( :encryption_key )
                                            , decrypted_data => lraw_decrypted_value
                                            );
        dbms_output.put_line ( 'decrypted value: '||utl_raw.cast_to_varchar2(lraw_decrypted_value) );
      EXCEPTION
        WHEN no_data_found THEN EXIT;
      END;
    END LOOP;
    -- close file handle
    utl_file.fclose ( file => lt_file_type );
  ELSE
    NULL;
  END IF;
END;
/

for encrypting and decrypting two simple calls now do the trick:

# to encrypt:
@encrypted_column_export.sql USER TABLE_NAME COLUMN_NAME ENCRYPTION_KEY
# to decrypt
@decrypt_exported_csv.sql FILE_NAME ENCRYPTION_KEY

postgresql, as well as oracle, depends heavily on accurate statistics to be able to provide the best execution plan for a given query ( both use a cost based optimizer ).

in postgresql the components involved in creating and executing the best execution plan are the parser, the rewriter, the planner/optimizer and the executor.

a special feature of the rewriter, from an oracle perspective, is that you can create custom rules.
a simple example for a table containing three rows:

sysdba@[local]:1540/dbs200# create table t1 ( a integer, b char(20) );
CREATE TABLE
sysdba@[local]:1540/dbs200*# insert into t1 values ( 1, 'text1' );
INSERT 0 1
sysdba@[local]:1540/dbs200*# insert into t1 values ( 2, 'text2' );
INSERT 0 1
sysdba@[local]:1540/dbs200*# insert into t1 values ( 3, 'text3' );
INSERT 0 1
sysdba@[local]:1540/dbs200*# COMMIT;
COMMIT

let’s say the second row is so important that you do not want to allow changes to it. to achieve this you could create a trigger, or you may create a rule:

sysdba@[local]:1540/dbs200# create rule myrule as on update to t1 where old.a = 2 do instead nothing;
CREATE RULE
sysdba@[local]:1540/dbs200# update t1 set b = 'blabla' where a=2;
UPDATE 0
sysdba@[local]:1540/dbs200*# select * from t1 where a=2;
 a |          b           
---+----------------------
 2 | text2               
(1 row)

such rules can be created for insert, update and delete statements and can add additional conditions to statements on the tables in question. check the documentation for a complete description.

the optimizer in general does the same job as the optimizer in oracle, except that you cannot give hints to the optimizer in postgresql.

of course you can create indexes in postgresql for optimizing access to the data of interest. postgres provides four types of indexes:

  • B-tree: same as in oracle
  • hash: uses hash tables and is only available for “=” operations
  • Generalized Search Tree ( GiST ): a special type often used for geometric or full text search purposes
  • Generalized Inverted Index ( GIN ): an index used for lists and arrays

a more detailed view on indexes will follow in another post.
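as a small sketch ( table and index names are made up for illustration ), this is how the four index types can be created:

```sql
-- hypothetical table used only to demonstrate the index types
CREATE TABLE t_idx ( a integer, b text );
CREATE INDEX i_btree ON t_idx USING btree ( a );
CREATE INDEX i_hash  ON t_idx USING hash ( a );
-- gist and gin both work on tsvector values, e.g. for full text search
CREATE INDEX i_gist  ON t_idx USING gist ( to_tsvector('english', b) );
CREATE INDEX i_gin   ON t_idx USING gin  ( to_tsvector('english', b) );
```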

as mentioned above the planner/optimizer is responsible for creating the best possible execution plan. internally a plan is a tree which consists of several sub plans. as with oracle there are different choices available to the planner/optimizer:

  • sequential scan: read the whole table and check each row for a match, same as in oracle
  • index scan: read the index for matches and go to the table for reading the row. it is important to know that postgres always has to visit the table to check whether a row is visible to the current transaction ( no index only access is possible ). this is because postgres stores the “undo” data in the table itself, not in the index
  • bitmap index scan: scans the index for matches, saves the results in a bitmap in memory, sorts the bitmap in the order of the table ( that is, by block numbers ) and then reads the table. this avoids jumping between the index and the table all the time
  • nested loop join: scan the outer table and probe the inner table for each row, same as in oracle
  • hash join: creates a hash table from one of the joined tables and then uses the hash values for searching the other table(s), same as in oracle
  • merge join: first sorts the joined tables on the join keys and then reads the sorted inputs in parallel

in oracle you can use the explain plan command, the dbms_xplan package or autotrace for displaying execution plans. in postgresql it’s the explain command. using the same table from above, displaying the plan for a simple select looks like this:

      sysdba@[local]:1540/dbs200*# explain select * from t1;
                            QUERY PLAN                      
      ------------------------------------------------------
       Seq Scan on t1  (cost=0.00..16.90 rows=690 width=88)
      (1 row)
      sysdba@[local]:1540/dbs200*# 
      

as I know the table only contains three rows, the 690 rows reported above are far from correct. same issue as in oracle: if the statistics are not good, the plans will not be good either.
the cost reporting is special in postgres:
“cost=0.00..16.90” means: 0 for the startup cost ( as this is a sequential scan, results can be sent immediately as they are retrieved ) and 16.9 for the total cost ( that is, the whole cost of executing the plan ).
the width column reports the estimated size of one row in bytes, so in total there would be 690 rows * 88 bytes = 60720 bytes ( depending on the current statistics ).

let’s check the statistics of the table:

      sysdba@[local]:1540/dbs200*# select relpages,reltuples from pg_class where relname = 't1';
       relpages | reltuples 
      ----------+-----------
              0 |         0
      (1 row)

obviously wrong, and that’s why the estimates reported by explain above are wrong, too. these numbers change once statistics are gathered:

      sysdba@[local]:1540/dbs200*# analyze verbose t1;
      INFO:  analyzing "public.t1"
      INFO:  "t1": scanned 1 of 1 pages, containing 3 live rows and 0 dead rows; 3 rows in sample, 3 estimated total rows
      ANALYZE
      sysdba@[local]:1540/dbs200*# select relpages,reltuples from pg_class where relname = 't1';
       relpages | reltuples 
      ----------+-----------
              1 |         3
      (1 row)
      

much better. what does explain report now?

      sysdba@[local]:1540/dbs200*#  explain select * from t1;
                          QUERY PLAN                     
      ---------------------------------------------------
       Seq Scan on t1  (cost=0.00..1.03 rows=3 width=25)
      (1 row)
      

much better, too. wondering how the costs are calculated? as in oracle the costs depend on the cost of a disk read and the cost for the cpu to process the rows. in postgresql there are two parameters which specify this:

      sysdba@[local]:1540/dbs200*# show seq_page_cost;
       seq_page_cost 
      ---------------
       1
      (1 row)
      sysdba@[local]:1540/dbs200*# show cpu_tuple_cost;
       cpu_tuple_cost 
      ----------------
       0.01
      (1 row)
      

the numbers reported here are the default values. of course you can tweak these by setting the parameters in the server configuration file ( postgresql.conf ).

so the costs are: ( 1 * 1 ) + ( 3 * 0.01 ) = 1.03 ( one page read plus three times 0.01 for processing the three rows ).
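the same calculation can be done right in the database; a sketch, assuming t1 has been analyzed as above:

```sql
-- estimated sequential scan cost: pages * seq_page_cost + rows * cpu_tuple_cost
SELECT relpages * current_setting('seq_page_cost')::numeric
     + reltuples * current_setting('cpu_tuple_cost')::numeric AS seq_scan_cost
  FROM pg_class
 WHERE relname = 't1';
-- with relpages = 1 and reltuples = 3 this gives 1.03
```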

as plans might lie, because they are based on assumptions and statistics, you can use “explain analyze” ( which really executes the statement ) to compare the estimated costs against the actual ones ( in oracle you can do this by adding the gather_plan_statistics hint and calling dbms_xplan.display_cursor with the ‘ALLSTATS LAST’ format ):

      sysdba@[local]:1540/dbs200*# explain analyze select * from t1;
                                               QUERY PLAN                                          
      ---------------------------------------------------------------------------------------------
       Seq Scan on t1  (cost=0.00..1.03 rows=3 width=88) (actual time=0.011..0.026 rows=3 loops=1)
       Total runtime: 0.150 ms
      (2 rows)
      

the value reported for loops is mainly interesting for joins, as it reports how often a particular step was executed.

what about indexes? let’s generate some more data for the t1 table and create an index:

      sysdba@[local]:1540/dbs200# insert into t1 select * from t1 where a = 1;
      INSERT 0 1
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 2
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 4
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 8
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 16
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 32
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 64
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 128
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 256
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 512
      sysdba@[local]:1540/dbs200*# insert into t1 select * from t1 where a = 1;
      INSERT 0 1024
      sysdba@[local]:1540/dbs200*# COMMIT;
      COMMIT
      sysdba@[local]:1540/dbs200# CREATE INDEX I1 ON T1(A);
      CREATE INDEX
      sysdba@[local]:1540/dbs200*# COMMIT;
      COMMIT
      sysdba@[local]:1540/dbs200# \d t1
               Table "public.t1"
       Column |     Type      | Modifiers 
      --------+---------------+-----------
       a      | integer       | 
       b      | character(20) | 
      Indexes:
          "i1" btree (a)
      Rules:
          myrule AS
          ON UPDATE TO t1
         WHERE old.a = 2 DO INSTEAD NOTHING
      

let’s see what happens if we query for a=1 now:

      sysdba@[local]:1540/dbs200*# explain analyze select * from t1 where a = 1;
                                                   QUERY PLAN                                             
      ----------------------------------------------------------------------------------------------------
       Seq Scan on t1  (cost=0.00..40.62 rows=2048 width=25) (actual time=0.030..6.873 rows=2048 loops=1)
         Filter: (a = 1)
       Total runtime: 12.110 ms
      (3 rows)
      

as expected the index is not used ( too many rows match the criterion a=1 ). let’s check the table statistics and see if the costs are as expected:

      sysdba@[local]:1540/dbs200*# select relpages,reltuples from pg_class where relname = 't1';
       relpages | reltuples 
      ----------+-----------
             15 |      2050
      

so now there are 15 pages with 2050 rows: ( 1 * 15 ) + ( 2050 * 0.01 ) = 35.5
not really what was expected, the cost reported by explain is higher. that’s because another parameter comes into play when there is a where clause:

      sysdba@[local]:1540/dbs200*# show cpu_operator_cost;
       cpu_operator_cost 
      -------------------
       0.0025
      (1 row)
      

again, the whole table is scanned, each row is processed and additionally each row is checked against the condition ( this is the cpu_operator_cost ), so:
( 1 * 15 ) + ( 2050 * 0.01 ) + ( 2050 * 0.0025 ) = 40.625, which matches the cost reported above ( explain rounds to two digits ).
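again as a sketch, the filter adds one cpu_operator_cost per row to the query from above:

```sql
-- estimated cost of a sequential scan with one filter condition
SELECT relpages * current_setting('seq_page_cost')::numeric
     + reltuples * ( current_setting('cpu_tuple_cost')::numeric
                   + current_setting('cpu_operator_cost')::numeric ) AS filtered_cost
  FROM pg_class
 WHERE relname = 't1';
-- with relpages = 15 and reltuples = 2050 this gives 40.625
```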

what happens if the index gets used?

      sysdba@[local]:1540/dbs200# explain analyze verbose select * from t1 where a = 3;
                                                        QUERY PLAN                                                   
      ---------------------------------------------------------------------------------------------------------------
       Index Scan using i1 on public.t1  (cost=0.00..8.27 rows=1 width=25) (actual time=0.020..0.023 rows=1 loops=1)
         Output: a, b
         Index Cond: (t1.a = 3)
       Total runtime: 0.060 ms
      (4 rows)
      

these are two random page reads ( one for the index, one for the table ), which gives 8, coming from the random_page_cost parameter:

      sysdba@[local]:1540/dbs200*# show random_page_cost;
       random_page_cost 
      ------------------
       4
      (1 row)
      

so in total it is 8 plus a little overhead for the cpu.
when it comes to joins the procedure is the same as with every other cost based rdbms: make sure the statement performs well ( and produces the right plan ) for every single table joined in the statement. if this is fine, the join will be fine, too.

for playing with different settings postgresql provides some parameters which can be set dynamically. but be careful: as with the hints you can give to the oracle optimizer, these parameters should not be used to permanently fix your plans. if the plans are wrong, check your statistics, and even more important: know your data. know your data. know your data.
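a sketch of how such a parameter can be toggled for the current session only, for example to see which plan the optimizer would choose without sequential scans:

```sql
SET enable_seqscan = off;   -- strongly discourages sequential scans for this session
EXPLAIN SELECT * FROM t1 WHERE a = 1;
RESET enable_seqscan;       -- back to the default
```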

while reading more about postgresql I came across a discussion which is valid for oracle, too: it’s about implementing hints in the postgresql optimizer. actually it is more about why the postgresql people do not want to provide such a feature.

for development and testing purposes there might be situations where hints are useful. but hints do not solve the issue; hints are always a workaround. I like the way the postgresql people think about this: fix the problem ( either in the application or in the postgresql source code ).

check out the pros, cons and discussions:
postgresql optimizer hint discussion
Hinting at PostgreSQL

as databases tend to hold sensitive information, this information should be protected as much as possible. oracle provides various tools for securing and auditing the database: database firewall, audit vault, enterprise user security and database vault, to name a few ( and for most of them you’ll need a separate license ), and of course all the privileges you can grant and revoke inside the database.

Roles, Groups and Passwords

as oracle does, postgres bases its internal security mechanisms on users and roles. users created in postgres are valid globally, that is: not specific to a single database. this means the set of users is the same for all databases. privileges can also be assigned to groups, which in turn can be granted to users. if you are used to the oracle terms, be aware that:

  • users in oracle are called roles in postgres
  • roles in oracle are called groups in postgres
  • sometimes the word role is used for both, users and groups, in postgres ( that is login roles, which are users, and nologin roles, which are groups )

to create a new login role in the database the “create role” command is used:

CREATE ROLE "user1" LOGIN;
CREATE ROLE "user2" LOGIN PASSWORD 'user2';
-- create a superuser
CREATE ROLE "user3" LOGIN PASSWORD 'user3' SUPERUSER;
-- create a user and grant the privilege to create roles
CREATE ROLE "user4" LOGIN PASSWORD 'user4' CREATEROLE;
-- create a user allowed to create databases
CREATE ROLE "user5" LOGIN PASSWORD 'user5' CREATEDB;
-- create a user allowed to create databases, with a password validity
CREATE ROLE "user6" LOGIN PASSWORD 'user6' CREATEDB VALID UNTIL '2012-10-01';
-- create a user and limit her number of connections
CREATE ROLE "user7" LOGIN PASSWORD 'user7' CONNECTION LIMIT 2;

be careful if you create users like above and provide the password as a plain string. depending on your server configuration the passwords will be visible in the server’s logfile and in the psql history:

LOG:  statement: CREATE ROLE "user1" LOGIN;
LOG:  statement: CREATE ROLE "user2" LOGIN PASSWORD 'user2';
LOG:  statement: CREATE ROLE "user3" LOGIN PASSWORD 'user3' SUPERUSER;
LOG:  statement: CREATE ROLE "role";
LOG:  statement: CREATE ROLE "user4" LOGIN PASSWORD 'user4' CREATEROLE;

as postgres internally hashes the passwords with md5 you can prevent this by providing the already hashed password when creating users:

CREATE USER someuser LOGIN PASSWORD 'md572947234907hfasf3';

to get the hashed password out of the database use the pg_authid view:

SELECT rolname, rolpassword FROM pg_authid;
 rolname  |             rolpassword             
----------+-------------------------------------
 sysdba   | md5448a3ec0e7a2689f0866afca52f91e13
 user1    | 
 user2    | md572881e285cdb0f9370dcdf1db0d9a869
 user3    | md53b24544e8f4b2a20f4bcca02a35df8fb
 user4    | md547e1c205dd73d4c06405bd08d255e320
 user5    | md51dc34834df4da4804236eb250118fb41
 user6    | md5bdf2912fce3ee3f6657bacc65527c7bd
 user7    | md5c5068c076d70d192c7f205a9bba4c469
 role1    | 

to create a group ( a role in oracle terms ) just skip the login attribute:

CREATE ROLE "role1";

granting groups to users:

GRANT ROLE1 TO USER1;

or

GRANT ROLE1 TO USER1 WITH ADMIN OPTION;

you can either use the psql shortcut to list the roles in the database server:

\du
                             List of roles
 Role name |                   Attributes                   | Member of 
-----------+------------------------------------------------+-----------
 role1     | Cannot login                                   | {}
 sysdba    | Superuser, Create role, Create DB, Replication | {}
 user1     |                                                | {role1}
 user2     |                                                | {}
 user3     | Superuser, Replication                         | {}
 user4     | Create role                                    | {}
 user5     | Create DB                                      | {}
 user6     | Create DB                                      | {}
 user7     | 2 connections                                  | {}

… or you may use the pg_roles view:

SELECT rolname,rolsuper,rolcreatedb,rolconnlimit,rolvaliduntil FROM pg_roles;
 rolname  | rolsuper | rolcreatedb | rolconnlimit |     rolvaliduntil      
----------+----------+-------------+--------------+------------------------
 sysdba   | t        | t           |           -1 | 
 user1    | f        | f           |           -1 | 
 user2    | f        | f           |           -1 | 
 user3    | t        | f           |           -1 | 
 user4    | f        | f           |           -1 | 
 user5    | f        | t           |           -1 | 
 user6    | f        | t           |           -1 | 2012-10-01 00:00:00+02
 user7    | f        | f           |            2 | 
 role1    | f        | f           |           -1 | 

to delete a role, just drop it:

DROP ROLE ROLE1;
-- or, to suppress the error message in case the role does not exist:
DROP ROLE IF EXISTS ROLE1;

to delete everything owned by a specific role:

DROP OWNED BY USER1;

you can even re-assign all objects from one role to another:

REASSIGN OWNED BY USER1 TO USER2;

granting / revoking privileges on objects is similar to oracle, with a few exceptions. if you want to grant execute on a function you’ll have to specify the parameter types, too:

GRANT EXECUTE ON FUNCTION FUNCTION1 ( int, int ) TO USER1;

you can grant a privilege on all objects of a kind in a schema ( tables, sequences or functions ):

GRANT SELECT ON ALL TABLES IN SCHEMA A TO USER2;

you can grant privileges on a whole database:

GRANT ALL PRIVILEGES ON DATABASE DBS200 TO USER2;

you can change the owner of objects:

ALTER TABLE TEST1 OWNER TO USER2;

if you want to create objects in a separate schema ( public is the default ) you’ll have to create the schema first:

CREATE SCHEMA SCHEMA1;
CREATE TABLE SCHEMA1.TABLE1 ( A INTEGER );

set the search path to avoid having to prefix the schema in your commands:

SHOW search_path;
SET search_path TO schema1,public;

to display privileges either use the psql shortcut:

\z
                                    Access privileges
 Schema  |        Name        | Type  |   Access privileges   | Column access privileges 
---------+--------------------+-------+-----------------------+--------------------------
 public  | pg_stat_statements | view  | sysdba=arwdDxt/sysdba+| 
         |                    |       | =r/sysdba             | 
 schema1 | table1             | table |                       | 
(2 rows)

or query the information schema for a specific object:

SELECT * FROM information_schema.table_privileges WHERE table_name = 'table1';

Client Connections

in postgres there is one file which controls if and how clients may connect to the database server. the file is called “pg_hba.conf” and is located in the data directory of the database server. initdb automatically creates this file when the cluster is initialized.

in my case the file looks like this:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   all             all                                     md5
host    all             all             ::1/128                 md5
local   replication     sysdba                                  md5
host    replication     sysdba          127.0.0.1/32            md5

the first column is the type, which can be one of:

  • local: this is for unix domain sockets
  • host: this is for tcp/ip
  • hostssl: this is for ssl over tcp/ip
  • hostnossl: this is for tcp/ip connections which do not use ssl

the second and third columns specify the database name and the user the entry is valid for. by specifying addresses you can allow individual hosts or networks to connect to the database server. the last column specifies the authentication method, which can be one of:

  • trust: this effectively disables authentication and should not be used
  • reject: rejects all connections which are valid for the entry
  • md5: password authentication using md5
  • password: password authentication ( clear text )
  • ident: use the os to authenticate the user

in addition to the methods above postgres provides support for pam, kerberos, gssapi, sspi for windows, radius and ldap. all for free, in contrast to oracle.

in general one should at least use md5 to provide minimum security. trust and ident should not be used in production environments.
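for example, to allow the application servers of a network to connect to a single database with password authentication, an entry could look like this ( database, user and network are made up for this example ):

```
# allow the app servers in 192.168.1.0/24 to connect
# to database "mydb" as user "appuser" using md5 passwords
host    mydb            appuser         192.168.1.0/24          md5
```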

check the documentation for a more detailed description ….

Auditing

for auditing in postgresql you'll need to create triggers and/or functions. there is no out of the box module you can use, but you can use several server parameters to log information to the server's logfile.
a quick check on pgfoundry listed a project called “audittrail” which is still in beta status and was last updated in 2007.
in the end, you'll have to spend more work on auditing in postgresql than in oracle. this may be a drawback for enterprise installations …
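to give an idea of the trigger based approach, here is a minimal sketch of an audit trigger ( all names are made up, and a real solution would need to cover more cases ):

```sql
-- a simple audit table recording who changed what and when
CREATE TABLE audit_log (
  changed_at  timestamptz NOT NULL DEFAULT now(),
  changed_by  text        NOT NULL DEFAULT current_user,
  table_name  text        NOT NULL,
  operation   text        NOT NULL,
  row_data    text
);

-- the trigger function writes one audit row per modified row
CREATE OR REPLACE FUNCTION audit_trigger_func() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'DELETE' THEN
    INSERT INTO audit_log ( table_name, operation, row_data )
         VALUES ( TG_TABLE_NAME, TG_OP, OLD::text );
    RETURN OLD;
  ELSE
    INSERT INTO audit_log ( table_name, operation, row_data )
         VALUES ( TG_TABLE_NAME, TG_OP, NEW::text );
    RETURN NEW;
  END IF;
END;
$$ LANGUAGE plpgsql;

-- attach it to the table created earlier in this series
CREATE TRIGGER table1_audit
  AFTER INSERT OR UPDATE OR DELETE ON schema1.table1
  FOR EACH ROW EXECUTE PROCEDURE audit_trigger_func();
```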

like oracle, postgresql is controlled by plenty of parameters. not all of them need to be adjusted but some of them are important to understand. so I will set up a new postgresql database with a more detailed view on the parameters one should consider when going live ( I will not go into detail on how to lay out the filesystems, the focus is on the parameters ).

initially I will use the same setup as in the first post but adjust the most important parameters.

initial setup:

pg_ctl stop -D /opt/postgres/mydb -m fast
rm -rf /opt/postgres/mydb
initdb -D /opt/postgres/mydb -U sysdba -W
rm -f /opt/postgres/mydb/postgresql.conf

I deleted the sample configuration as I want to specify the most important parameters to fit my needs.

log messages are essential for the dba so one of the first things to do is to define where and how much the database server should log. there are several parameters which control this in postgresql:

parameter description
log_destination tells the server where to write logs to, can be one of: stderr, syslog, eventlog ( windows ), csvlog
logging_collector if on, the server will start its own logging process for catching logs sent to stderr and writing them to a log file
log_directory the directory where the log files should go to
log_filename the filename to use for the server log ( several place holders may be used to specify the format )
log_rotation_age specifies the amount of time before rotating the log file
log_rotation_size specifies the size the log file can reach before rotating the log file
log_truncate_on_rotation if on, an existing log file of the same name will be truncated rather than appended to when rotating
client_min_messages controls which messages are returned to the client ( DEBUG5-DEBUG1, LOG, NOTICE, WARNING, ERROR, FATAL, PANIC )
log_min_messages controls which messages are written to the log ( DEBUG5-DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, PANIC )
log_autovacuum_min_duration the time a vacuum operation may take before it is reported in the logfile
log_error_verbosity controls how detailed the output to the log file will be ( terse, default, verbose )
log_min_error_statement additionally logs the statement that produced an error ( DEBUG5-DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, PANIC )
log_min_duration_statement additionally logs statements which took longer than the specified duration
log_checkpoints if on, logs checkpoints to the server's log file
log_connections logs each new database connection to the log file
log_disconnections logs each disconnection to the log file
log_duration logs the duration of every sql statement
log_hostname converts ip addresses to hostnames in the log file
log_line_prefix specifies the prefix for each line reported to the log ( various place holders available )
log_lock_waits if on, every process waiting longer than deadlock_timeout for a lock will be reported
log_statement specifies if and which sql statements will be written to the log file ( none, ddl, mod, all )
log_temp_files specifies whether a log entry is written each time a temporary file gets deleted
log_timezone specifies the timezone for the log entries

as you can see, the dba is given much more control over logging than in oracle. it clearly depends on the database and application what should be logged. to start, this set should be appropriate:

export PARAMFILE=/opt/postgres/mydb/postgresql.conf
echo "###### logging settings" >> $PARAMFILE
echo "logging_collector=on" >> $PARAMFILE
echo "log_truncate_on_rotation=on" >> $PARAMFILE
echo "log_filename='postgresql-%a.log'" >> $PARAMFILE
echo "log_rotation_age='1d'" >> $PARAMFILE
echo "log_line_prefix='%m - %l - %p - %u@%d '" >> $PARAMFILE
echo "log_directory='/var/log/'" >> $PARAMFILE
echo "log_min_messages='WARNING'" >> $PARAMFILE
echo "log_autovacuum_min_duration=360s" >> $PARAMFILE
echo "log_error_verbosity=default" >> $PARAMFILE
echo "log_min_error_statement=ERROR" >> $PARAMFILE
echo "log_min_duration_statement=5min" >> $PARAMFILE
echo "log_checkpoints=on" >> $PARAMFILE
echo "log_statement=ddl" >> $PARAMFILE
echo "client_min_messages='WARNING'" >> $PARAMFILE

once the log settings are in place it is time to think about the memory requirements. compared to the oracle settings there are not that many parameters to specify here:

parameter description
shared_buffers controls the amount of shared memory available to the whole database cluster. the initial size on my box is 32M which is rather small.
temp_buffers controls the amount of buffers used for temporary tables _per_ session.
work_mem the amount of memory used for sort and hash operations per operation
maintenance_work_mem the amount of memory used for maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY

although these settings strongly depend on the database and application requirements and the server's hardware, this could be a good start:

echo "###### memory settings" >> $PARAMFILE
echo "shared_buffers=256MB" >> $PARAMFILE
echo "temp_buffers=16MB" >> $PARAMFILE
echo "work_mem=4MB" >> $PARAMFILE
echo "maintenance_work_mem=16MB" >> $PARAMFILE

the next point to think about is the wal ( write ahead log ). as the wal files are essential for consistency and a production system should never go without archived logs, these settings are critical. postgresql offers various parameters for controlling this ( only the most important here ):

parameter description
fsync should always be on ( the default ) as this guarantees that committed transactions are written to disk
wal_buffers size of the wal buffers inside the database's shared memory ( comparable to the log_buffer in oracle )
synchronous_commit if off, asynchronous writes to the wal files are enabled ( loss of transactions may occur, but no data inconsistency )
wal_writer_delay the interval at which the wal writer process flushes wal to the wal files ( 200ms by default )
checkpoint_segments the number of wal segments ( typically 16MB each ) between checkpoints: comparable to oracle's amount of redo logs
checkpoint_timeout controls the frequency of checkpoints ( 5 minutes by default )
checkpoint_warning writes a warning to the log if checkpoints occur more frequently than this
checkpoint_completion_target controls how fast checkpoints should complete ( 0.0 => fastest, 1.0 => slowest, which means the whole period between two checkpoints )
full_page_writes should be on to enable that the whole pages will be written to disk after the first change after a checkpoint.
wal_level controls how much information is written to the wal files: minimal ( crash recovery ), archive ( wal based recovery ), hot_standby ( read only standby )
archive_mode archiving of the wal files: on/off
archive_command any command used to archive the wal files
archive_timeout forces a switch to a new wal segment after the given time, so wal files get archived regularly even on an idle system
hot_standby enables read only standby ( active dataguard in oracle terms )
max_wal_senders controls the amount of standby databases this master can serve
wal_sender_delay controls how often data gets replicated ( default is 200ms )

a reasonable configuration to start with ( standby databases are not in scope here ) could be:

echo "###### wal settings" >> $PARAMFILE
echo "fsync=on" >> $PARAMFILE
echo "wal_buffers=16MB" >> $PARAMFILE
echo "synchronous_commit=on" >> $PARAMFILE
echo "wal_writer_delay=200ms" >> $PARAMFILE
echo "checkpoint_segments=16" >> $PARAMFILE
echo "checkpoint_timeout=300s" >> $PARAMFILE
echo "checkpoint_warning=30s" >> $PARAMFILE
echo "checkpoint_completion_target=0.9" >> $PARAMFILE
echo "full_page_writes=on" >> $PARAMFILE
echo "wal_level=archive" >> $PARAMFILE
echo "archive_mode=on" >> $PARAMFILE
echo "archive_command='test ! -f /opt/postgres/arch/%f && cp %p /opt/postgres/arch/%f'" >> $PARAMFILE
echo "archive_timeout=10min" >> $PARAMFILE
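the archive_command above is worth a closer look: 'test ! -f dest && cp src dest' only copies a segment when the destination file does not exist yet, so an already archived segment is never overwritten, and a non-zero exit status tells postgres to keep the segment and retry. a small sketch of these semantics with made-up paths and a fake segment name:

```shell
# simulate the archive_command outside of postgres
mkdir -p /tmp/waldemo/arch
f=000000010000000000000001            # a fake wal segment name
p=/tmp/waldemo/$f
echo "segment data" > $p

# first run: destination missing, the copy succeeds ( exit status 0 )
if test ! -f /tmp/waldemo/arch/$f && cp $p /tmp/waldemo/arch/$f; then
  echo "first run: archived"
fi

# second run: destination exists, the command fails ( non-zero exit status ),
# postgres would keep the wal segment and retry later
if test ! -f /tmp/waldemo/arch/$f && cp $p /tmp/waldemo/arch/$f; then
  echo "second run: archived"
else
  echo "second run: already archived, nothing done"
fi
```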

as the vacuum and analyze processes are so important there are parameters to control them ( the most important here ):

parameter description
autovacuum enables the autovacuum launcher process
autovacuum_max_workers controls how many autovacuum worker processes may run at a time
autovacuum_naptime controls the minimum delay between autovacuum runs on a given database ( defaults to 1 minute )

adding them to the server’s parameter file:

echo "###### autovacuum settings" >> $PARAMFILE
echo "autovacuum=on" >> $PARAMFILE
echo "autovacuum_max_workers=3" >> $PARAMFILE
echo "autovacuum_naptime=5min" >> $PARAMFILE

one more parameter to specify is for loading the pg_stat_statements module from the contrib directory:

echo "###### pg_stat_statements" >> $PARAMFILE
echo "shared_preload_libraries='pg_stat_statements'" >> $PARAMFILE

keep in mind that this is only a set to start with, especially if you do not know how the application will behave. there are a bunch of more parameters which give you much more control over various aspects of the database. check the documentation for the complete reference.

once the database server is up and running, high availability is configured, maintenance scripts are running and the backup is in place, normal operation starts. what's still missing are the procedures and techniques to monitor the database server, either to take the necessary actions before you reach some limits or when users complain because performance drops.

oracle provides awr/ash reports based on various statistics to help the dba track down any issues and for proper capacity planning. what does postgres provide ? the third post already introduced the analyze command which is used for collecting various statistics about the tables in the database. so, basically, postgres collects statistics as oracle does. the question is how you can control it, and how you can use the various statistics to build reports that help you identify things of interest.

what, and how much, oracle is collecting is controlled by the parameter statistics_level. in postgres there are several parameters you can specify to control the gathering behaviour. the three most important are:

  • track_counts: enables collection of database wide statistics ( on by default )
  • track_functions: enables collection of usage statistics for user defined functions ( off by default )
  • track_activities: enables collection of statistics of each command currently executed by any server process/session ( on by default )

the database uses a separate directory for storing the statistics in temporary files and to provide the statistics to other processes:

psql -U sysdba -d postgres -c "show stats_temp_directory;"
 stats_temp_directory 
----------------------
 pg_stat_tmp
(1 row)

of course you may and probably should change this to a fast filesystem ( the documentation recommends a RAM based filesystem ). once the server shuts down these statistics will get copied to the global directory to make them permanent. this procedure is slightly different from the oracle approach where the statistics are stored in the data dictionary ( that is the system/sysaux tablespace ).

as oracle does, postgres provides a wide range of views to query the statistics of various areas of the database. the complete list can be found in the documentation.

what impresses me from an oracle point of view are the various functions which can be used to quickly check various statistics. for example, to query the number of transactions committed in the database you can simply do this:

SELECT pg_stat_get_db_xact_commit(d.oid)
  FROM pg_database d
 WHERE d.datname = 'postgres';

… which will report the committed transactions of the database postgres. in combination with psql's ability to define aliases this can be a very powerful feature. to give you an example, this will define a new alias in psql's control file:

echo "\\set mypgstats 'SELECT pg_stat_get_db_xact_commit(d.oid) FROM pg_database d WHERE d.datname = \'postgres\';'" >> ~/.psqlrc

when you restart psql you can reference the alias:

sysdba@[local]:5432/postgres*# :mypgstats 
 pg_stat_get_db_xact_commit 
----------------------------
                       2965
(1 row)

… and what is even better ( at least on linux ) : tab completion works, too. even for the aliases. a feature i’d definitely like to see in oracle.

the documentation lists a nice example to query the process ids along with the commands they are executing:

SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,
       pg_stat_get_backend_activity(s.backendid) AS current_query
    FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
 procpid |                           current_query                           
---------+-------------------------------------------------------------------
    4107 | 
    4581 | SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,          +
         |        pg_stat_get_backend_activity(s.backendid) AS current_query+
         |     FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
(2 rows)

very easy…impressed.
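the same information is also available through the pg_stat_activity view, which in version 9.1 exposes the columns procpid and current_query:

```sql
-- equivalent query against the statistics view ( column names as of 9.1 )
SELECT procpid, current_query
  FROM pg_stat_activity;
```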

as mentioned above, according to the documentation these statistics are stored permanently once the database server stops. I wondered if the counters get reset or are ever increasing:

[postgres@postgres ~]$ psql -U sysdba -d postgres -c "SELECT pg_stat_get_db_xact_commit(d.oid) FROM pg_database d WHERE d.datname = 'postgres';"
 pg_stat_get_db_xact_commit 
----------------------------
                       2992
(1 row)

[postgres@postgres ~]$ pgstop
waiting for server to shut down..... done
server stopped
[postgres@postgres ~]$ pgstart
server starting
[postgres@postgres ~]$ psql -U sysdba -d postgres -c "SELECT pg_stat_get_db_xact_commit(d.oid) FROM pg_database d WHERE d.datname = 'postgres';"
 pg_stat_get_db_xact_commit 
----------------------------
                       2994

the values are ever increasing. so, to get meaningful numbers you'll need to store some of the values together with a timestamp and compare them to future values of the same statistics. to reset the counters pg_stat_reset is used:

SELECT pg_stat_reset();

I didn’t figure out if there is anything comparable to the oracle awr snapshots of which one may generate reports ( any hints on that are welcome ).
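lacking a built-in snapshot mechanism, one can sketch a poor man's version: store the cumulative counter together with a timestamp and diff consecutive snapshots ( the snapshot table and its name are made up for this example ):

```sql
-- a simple table to hold snapshots of a cumulative counter
CREATE TABLE stat_snapshots (
  taken_at     timestamptz NOT NULL DEFAULT now(),
  xact_commit  bigint      NOT NULL
);

-- take a snapshot ( run this periodically, e.g. via cron )
INSERT INTO stat_snapshots ( xact_commit )
SELECT pg_stat_get_db_xact_commit(d.oid)
  FROM pg_database d
 WHERE d.datname = 'postgres';

-- report the commits per interval by diffing consecutive snapshots
SELECT taken_at,
       xact_commit - lag(xact_commit) OVER (ORDER BY taken_at) AS commits_in_interval
  FROM stat_snapshots;
```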

beside the views and functions which provide the statistics there is one more nice system table which reports on all the locks in the database:

sysdba@[local]:5432/postgres# select * from pg_locks;
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid  |      mode       | granted 
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+---------
 relation   |    12780 |    11000 |      |       |            |               |         |       |          | 3/21               | 4639 | AccessShareLock | t
 virtualxid |          |          |      |       | 3/21       |               |         |       |          | 3/21               | 4639 | ExclusiveLock   | t
(2 rows)

… a nice and simple overview which can help in identifying causes of issues.

in linux as well as most unixes you may use top to query the current processes running on the system and their resource consumption. there is a separate project on the postgres website called ptop which provides similar functionality for the postgres database sessions. sample output:

last pid:  1245;  load avg:  1.60,  1.29,  0.52;       up 0+00:02:25                   08:18:52
2 processes: 1 running, 1 sleeping
CPU states:  1.0% user,  0.0% nice,  1.8% system, 93.3% idle,  4.0% iowait
Memory: 229M used, 768M free, 52M buffers, 79M cached
Swap: 2000M free
  PID USERNAME PRI NICE  SIZE   RES STATE   TIME   WCPU    CPU COMMAND
 1246 lcspostg  20    0  461M 5356K run     0:00  0.02%  0.20% postgres: sysdba dbs200 [local] 
 1245 lcspostg  20    0  461M 4104K sleep   0:00  0.02%  0.20% postgres: sysdba postgres [local

ptop might not be available in your postgres distribution but it can easily be compiled from the contrib section of the source distribution. some vendors provide packaged versions as well.

for daily operations there is a nice plugin for nagios which saves you a lot of work and automates many of the daily checks.

another extension one might consider is pg_stat_statements. it is available in the contrib directory of the source code distribution, too. once compiled and installed it provides an overview of the statements executed in the system:

select * from pg_stat_statements;
 userid | dbid  |       query        | calls | total_time | rows | shared_blks_hit | shared_blks_read | ...
--------+-------+--------------------+-------+------------+------+-----------------+------------------+-----
     10 | 12780 | SELECT a.attname, +|     1 |   0.029022 |   13 |              34 |                4 | ...
( output trimmed; the view also reports written, local and temp block statistics per statement )

more to come about the parser, rewriter and planner

this is the summary of the postgresql introduction posts available so far:

1. installation and basics
2. database creation
3. maintenance tasks
4. backup and restore
5. high availability
6. monitoring and troubleshooting basics
7. a rather more productive configuration
8. privileges and connections

… more to come soon

you did your dataguard setup and everything seems to be configured correctly. the broker configuration is fine, you can cross connect to each instance as sysdba, but when performing a switchover through dgmgrl you receive:

ORA-01017 : invalid username/password; logon denied

… and you even provided the sys password while connecting with dgmgrl:

dgmgrl sys/password@somedatabase

what went wrong ? just had this case and it took me an hour to figure it out: the sys password contained special characters. what a …. maybe this is mentioned somewhere in the documentation.

a quick way to test it:

if:

sqlplus sys/"somepasswordwithspecialcharacters"@somedb as sysdba

works, but the same connect without the quotes does not:

sqlplus sys/somepasswordwithspecialcharacters@somedb as sysdba

… try changing the password to one without special characters and you will succeed ( remember to change the password in the password file, too ) ….