While browsing http://planet.postgresql.org/ today I came across a link to a nice paper which is definitely worth reading:

Architecture of a Database System

today I browsed the Red Hat news and came across this: Keep an eye on these 5 new features in RHEL 7. I already knew about systemd, docker and xfs. I don’t care about AD integration, at least currently. but what is Performance Co-Pilot?

quickly checked the documentation and it seemed pretty interesting, especially that there is a plugin for postgres. so, let’s take a look (short intro only :) ):

for my tests a quick setup of postgres using the sample makefile posted a while ago is sufficient (do not use a development snapshot, as pcp does not support that; I used 9.3.5 for the tests):

yum install -y wget readline-devel bzip2 zlib-devel
groupadd postgres
useradd -g postgres postgres
su - postgres
# get the makefile
make fromscratch
psql (9.3.5)
Type "help" for help.

so far, so good. let’s get the pcp packages:

yum install pcp pcp-gui

enable and start pcp:

chkconfig pmcd on
chkconfig --list | grep pmcd
/etc/init.d/pmcd start

easy. let’s see if pmatop works:

(screenshot: pmatop)

great. pminfo tells you what metrics are available currently:

pminfo -f
...
    inst [0 or "cpu0"] value 0
...
    inst [0 or "cpu0"] value 747300
...

a lot of stuff but nothing directly related to postgres except some information about the processes:

pminfo -f | grep -i postgres
    inst [12665 or "012665 /home/postgres/install/bin/postgres -D /home/postgres/data"] value 12665
    inst [12667 or "012667 postgres: checkpointer process   "] value 12667
    inst [12668 or "012668 postgres: writer process   "] value 12668
    inst [12669 or "012669 postgres: wal writer process   "] value 12669
    inst [12670 or "012670 postgres: autovacuum launcher process   "] value 12670
    inst [12671 or "012671 postgres: stats collector process   "] value 12671

according to the documentation pmdapostgres “Extracts performance metrics from the PostgreSQL relational database”. after reading it over and over again it became clear what to do: the agent needs to be installed first, so:

cd /var/lib/pcp/pmdas/postgresql
./Install
Perl database interface (DBI) is not installed


yum install -y perl-DBI

next try:

./Install
Postgres database driver (DBD::Pg) is not installed


yum install perl-DBD-Pg

and again:

./Install
You will need to choose an appropriate configuration for installation of
the "postgresql" Performance Metrics Domain Agent (PMDA).

  collector	collect performance statistics on this system
  monitor	allow this system to monitor local and/or remote systems
  both		collector and monitor configuration for this system

Please enter c(ollector) or m(onitor) or b(oth) [b] b
Updating the Performance Metrics Name Space (PMNS) ...
Terminate PMDA if already installed ...
Updating the PMCD control file, and notifying PMCD ...
Waiting for pmcd to terminate ...
Starting pmcd ... 
Check postgresql metrics have appeared ... 15 warnings, 208 metrics and 0 values

much better. let’s see if something is available now:

pminfo -f | grep -i postgres

cool. can I get some values?

pminfo -f postgresql.stat.all_tables.n_tup_upd

No value(s) available!

hm. not really what I expected. looking at the logfile:

tail -100 /var/log/pcp/pmcd/postgresql.log
DBI connect('dbname=postgres','postgres',...) failed: could not connect to server: No such file or directory
	Is the server running locally and accepting
	connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? at /var/lib/pcp/pmdas/postgresql/pmdapostgresql.pl line 252.

ok, it seems connections to postgresql are not possible. the issue is that DBI looks for the socket in “/var/run/postgresql/”. checking my postgresql.conf:

#unix_socket_directories = '/tmp'	# comma-separated list of directories
#unix_socket_group = ''			# (change requires restart)
#unix_socket_permissions = 0777		# begin with 0 to use octal notation
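the socket file name libpq-style clients (and DBD::Pg on top of it) look for is always `.s.PGSQL.<port>` inside the configured directory. a quick sketch of how the path from the error message above is assembled (variable names are mine):

```shell
# assemble the Unix socket path a libpq/DBD::Pg client will try (sketch)
PGSOCKDIR=/var/run/postgresql   # directory the pmdapostgresql log complained about
PGPORT=5432                     # default postgres port
PGSOCKET="${PGSOCKDIR}/.s.PGSQL.${PGPORT}"
echo "${PGSOCKET}"
```

so either the client must be pointed at '/tmp' (the compiled-in default of this build) or the server must create its socket in /var/run/postgresql/, which is what the next step does.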

easy to fix:

su -
mkdir /var/run/postgresql/
chown postgres:postgres /var/run/postgresql/
su - postgres
echo "unix_socket_directories = '/var/run/postgresql/'" >> postgresql.conf

restart postgresql:

install/bin/pg_ctl stop -D data/
install/bin/pg_ctl start -D data/

checking again:

pminfo -f postgresql.stat.all_tables.n_tup_upd

    inst [1261 or "pg_auth_members"] value 0
    inst [2617 or "pg_operator"] value 0
    inst [2600 or "pg_aggregate"] value 0
    inst [1136 or "pg_pltemplate"] value 0
    inst [12529 or "sql_implementation_info"] value 0
    inst [2609 or "pg_description"] value 0
    inst [2612 or "pg_language"] value 0
    inst [12539 or "sql_packages"] value 0
    inst [3601 or "pg_ts_parser"] value 0
    inst [3466 or "pg_event_trigger"] value 0
    inst [3592 or "pg_shseclabel"] value 0
    inst [3118 or "pg_foreign_table"] value 0

much better. but not really nice to read. this is where pmchart comes into play:

pmchart &

(screenshots: metric selection, real-time graphs)

now performance data can be viewed in real time… really cool.
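if you’d rather scrape the numbers than chart them, the `inst [...] value N` lines pminfo prints are easy to parse. a minimal sketch (the sample line is taken verbatim from the output above):

```shell
# extract instance name and value from a pminfo "inst" line (sketch)
line='    inst [2617 or "pg_operator"] value 0'
name=$(echo "${line}" | sed 's/.*or "\([^"]*\)".*/\1/')   # text between the quotes
value=$(echo "${line}" | awk '{print $NF}')               # last field on the line
echo "${name}=${value}"
```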

Oregon Sues Oracle

a German version:
Oregon verklagt Oracle

do not apply the PSU if you are using APEX and the embedded plsql gateway: you might hit bug 18610915, which is internal, so no description is available.

if you hit this you are either not able to bring up the apex login page at all (neither http nor https) or it stops working after a few connections (the dispatchers are dying).

patch “Patch 18610915: INTERMITTENT SEGMENTATION FAULT DURING NZ/PRNG INIT” is only available for

once a session which is doing a transaction gets killed before it commits or rolls back, oracle needs to recover (undo) the work done by that session. one way to watch the progress of this recovery is to use the v$fast_start_servers and v$fast_start_transactions views.

small test case:

drop user u1 cascade;
create user u1 identified by u1
  default tablespace users
  temporary tablespace temp
  quota unlimited on users;
grant create session, create table to u1;
grant select on dba_objects to u1;
connect u1/u1
create table t1 as select * from dba_objects;
insert into t1 select * from t1;

in a second session get the sid and serial# for the first session:

connect / as sysdba
select sid,serial# from v$session where username = 'U1';

back in the first session do some (stupid) work on the table:

begin
  while 1=1 loop
    update t1 set OBJECT_NAME = OBJECT_NAME;
  end loop;
end;
/

this will write undo data without committing the work. now, in the second session, kill the session which is doing the transaction:

alter system kill session '[SID_FROM_ABOVE],[SERIAL#_FROM_ABOVE]';

if your hardware is slow enough, or you are fast enough, or you keep the loop running for some time before killing the session, you can now see the slave(s) doing the recovery:

select * from v$fast_start_servers;

STATE       UNDOBLOCKSDONE        PID XID
----------- -------------- ---------- ----------------
RECOVERING	      9379	   26 020021006D010000

… and the transaction(s) being recovered:

select * from v$fast_start_transactions;

       USN        SLT        SEQ STATE            UNDOBLOCKSDONE UNDOBLOCKSTOTAL        PID    CPUTIME  PARENTUSN  PARENTSLT  PARENTSEQ XID
---------- ---------- ---------- ---------------- -------------- --------------- ---------- ---------- ---------- ---------- ---------- ----------------
PXID             RCVSERVERS
---------------- ----------
	 2	   33	     365 RECOVERED		   19357	   19357		    11					020021006D010000

	 5	    4	     357 RECOVERED		   17220	   17220		    13					0500040065010000
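the two undo block columns also give you a simple progress figure: undoblocksdone out of undoblockstotal. a quick sketch using the values from the first row above (both transactions are already fully recovered here):

```shell
# recovery progress from the first v$fast_start_transactions row above (sketch)
UNDOBLOCKSDONE=19357
UNDOBLOCKSTOTAL=19357
PCT=$((100 * UNDOBLOCKSDONE / UNDOBLOCKSTOTAL))   # integer percentage
echo "${PCT}% recovered"
```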

consider reading this before jumping on this train:

Oracle’s In-Memory Database: The True Cost Of Licensing

tested on Linux x64:

SQLFILE=/var/tmp/subqueries.sql
echo "select * from ( " > ${SQLFILE}
for x in {1..100000}; do
  if [ $x == "100000" ]; then
    echo "select * from dual )" >> ${SQLFILE}
  else
    echo " select * from ( " >> ${SQLFILE}
  fi
done

for x in {1..99999}; do
  echo ")"  >> ${SQLFILE}
done
echo ";" >> ${SQLFILE}
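scaled down, the generator’s output is easy to sanity-check. a sketch (DEPTH and the temp file are my own choices; the original writes 100000 levels to /var/tmp/subqueries.sql):

```shell
# scaled-down version of the subquery generator (sketch)
DEPTH=5
SQLFILE=$(mktemp)

echo "select * from ( " > ${SQLFILE}
for x in $(seq 1 ${DEPTH}); do
  if [ "${x}" -eq "${DEPTH}" ]; then
    echo "select * from dual )" >> ${SQLFILE}
  else
    echo " select * from ( " >> ${SQLFILE}
  fi
done
for x in $(seq 1 $((DEPTH - 1))); do
  echo ")" >> ${SQLFILE}
done
echo ";" >> ${SQLFILE}

# the file should have 2*DEPTH+1 lines and balanced parentheses
LINES=$(wc -l < ${SQLFILE})
OPENS=$(grep -c '(' ${SQLFILE})
CLOSES=$(grep -c ')' ${SQLFILE})
echo "lines=${LINES} opens=${OPENS} closes=${CLOSES}"
rm -f ${SQLFILE}
```

each run emits 1 + DEPTH + (DEPTH-1) + 1 = 2*DEPTH+1 lines, so for DEPTH=100000 the terminating ";" ends up on line 200001, which matches the error output below.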

execute the generated /var/tmp/subqueries.sql script:

200001  ;
select * from (
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 21287
Session ID: 34 Serial number: 2369

I am not saying that using 100000 subqueries is in any way useful :)

maybe this is not related to the number of subqueries but rather to the number of lines or the number of characters in the statement. I did not test this.