Channel: PostgreSQL Archives - dbi Blog

Auditing in PostgreSQL


Today, especially in the pharma and banking sectors, sooner or later you will be faced with auditing requirements. The detailed requirements vary, but usually at least tracking logons to the database is a must. Some companies need more information to pass their internal audits, such as: Who created which objects? Who fired which SQL against the database? Who was given which permissions? And so on. In this post we’ll look at what PostgreSQL can offer here.

PostgreSQL comes with a comprehensive logging system by default. In my 9.5.4 instance there are 28 parameters related to logging:

(postgres@[local]:5438) [postgres] > select count(*) from pg_settings where name like 'log%';
 count 
-------
    28
(1 row)

Not all of them are relevant when it comes to auditing, but some can be used for a minimal auditing setup. For logons and logoffs there are “log_connections” and “log_disconnections”:

(postgres@[local]:5438) [postgres] > alter system set log_connections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > alter system set log_disconnections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select context from pg_settings where name in ('log_disconnections','log_connections');
      context      
-------------------
 superuser-backend
 superuser-backend
(2 rows)
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

From now on, as soon as someone connects to or disconnects from the instance it is reported in the logfile:

2016-09-02 10:35:56.983 CEST - 2 - 13021 - [local] - postgres@postgres LOG:  connection authorized: user=postgres database=postgres
2016-09-02 10:36:04.820 CEST - 3 - 13021 - [local] - postgres@postgres LOG:  disconnection: session time: 0:00:07.837 user=postgres database=postgres host=[local]
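
With both parameters enabled, the logfile itself becomes a simple audit trail that can be post-processed with standard tools. A minimal sketch (the logfile path is made up for illustration; the sample lines are the ones shown above):

```shell
#!/bin/sh
# Illustrative sample of the connection/disconnection lines shown above
cat > /tmp/pg_sample.log <<'EOF'
2016-09-02 10:35:56.983 CEST - 2 - 13021 - [local] - postgres@postgres LOG:  connection authorized: user=postgres database=postgres
2016-09-02 10:36:04.820 CEST - 3 - 13021 - [local] - postgres@postgres LOG:  disconnection: session time: 0:00:07.837 user=postgres database=postgres host=[local]
EOF

# Count authorized connections per user
grep 'connection authorized' /tmp/pg_sample.log \
  | sed 's/.*user=\([^ ]*\).*/\1/' \
  | sort | uniq -c
```

On a real system you would point this at the instance’s log directory instead of the sample file.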

Another parameter that might be useful for auditing is “log_statement”. When you set it to “ddl” all DDL statements are logged; when you set it to “mod” all DDLs plus all statements that modify data are logged. To log all statements there is the value “all”.

(postgres@[local]:5438) [postgres] > alter system set log_statement='all';
ALTER SYSTEM

For new sessions, all statements will be logged from now on:

2016-09-02 10:45:15.859 CEST - 3 - 13086 - [local] - postgres@postgres LOG:  statement: create table t ( a int );
2016-09-02 10:46:44.064 CEST - 4 - 13098 - [local] - postgres@postgres LOG:  statement: insert into t values (1);
2016-09-02 10:47:00.162 CEST - 5 - 13098 - [local] - postgres@postgres LOG:  statement: update t set a = 2;
2016-09-02 10:47:10.606 CEST - 6 - 13098 - [local] - postgres@postgres LOG:  statement: delete from t;
2016-09-02 10:47:22.012 CEST - 7 - 13098 - [local] - postgres@postgres LOG:  statement: truncate table t;
2016-09-02 10:47:25.284 CEST - 8 - 13098 - [local] - postgres@postgres LOG:  statement: drop table t;

Be aware that your logfile can grow significantly if you turn this on and especially if you set the value to “all”.

That’s more or less it when it comes to auditing with the core logging parameters: you can audit logons, logoffs and SQL statements. This might be sufficient for your requirements, but it might also not be. What do you do if you need, for example, to audit on the object level? With the default logging parameters you cannot do this. But, as so often in PostgreSQL, there is an extension: pgaudit.

If you want to install this extension you’ll need the PostgreSQL source code. To show the complete procedure, here is a PostgreSQL setup from source. Obviously the first step is to download and extract the source code:

postgres@pgbox:/u01/app/postgres/software/ [PG953] cd /u01/app/postgres/software/
postgres@pgbox:/u01/app/postgres/software/ [PG953] wget https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
--2016-09-02 09:39:29--  https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 213.189.17.228, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|213.189.17.228|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18496299 (18M) [application/x-bzip-compressed-tar]
Saving to: ‘postgresql-9.5.4.tar.bz2’

100%[==================================================================================>] 18'496'299  13.1MB/s   in 1.3s   

2016-09-02 09:39:30 (13.1 MB/s) - ‘postgresql-9.5.4.tar.bz2’ saved [18496299/18496299]

postgres@pgbox:/u01/app/postgres/software/ [PG953] tar -axf postgresql-9.5.4.tar.bz2 
postgres@pgbox:/u01/app/postgres/software/ [PG953] cd postgresql-9.5.4

Then do the usual configure, make and make install:

postgres@pgbox:/u01/app/postgres/software/ [PG953] PGHOME=/u01/app/postgres/product/95/db_4
postgres@pgbox:/u01/app/postgres/software/ [PG953] SEGSIZE=2
postgres@pgbox:/u01/app/postgres/software/ [PG953] BLOCKSIZE=8
postgres@pgbox:/u01/app/postgres/software/ [PG953] ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-tcl \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=16  \
            --with-extra-version=" dbi services build"
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make world
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] cd contrib
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/ [PG953] make install

Once this is done you can continue with the installation of the pgaudit extension:

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/ [PG953] git clone https://github.com/pgaudit/pgaudit.git
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/ [PG953] cd pgaudit/
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make -s check
============== creating temporary instance            ==============
============== initializing database system           ==============
============== starting postmaster                    ==============
running on port 57736 with PID 8635
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test pgaudit                  ... ok
============== shutting down postmaster               ==============
============== removing temporary instance            ==============

=====================
 All 1 tests passed. 
=====================

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make install
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/lib'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/install -c -m 755  pgaudit.so '/u01/app/postgres/product/95/db_4/lib/pgaudit.so'
/usr/bin/install -c -m 644 ./pgaudit.control '/u01/app/postgres/product/95/db_4/share/extension/'
/usr/bin/install -c -m 644 ./pgaudit--1.0.sql  '/u01/app/postgres/product/95/db_4/share/extension/'

That’s it. Initialize a new cluster:

postgres@pgbox:/u01/app/postgres/software/ [PG954] initdb -D /u02/pgdata/PG954 -X /u03/pgdata/PG954
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /u02/pgdata/PG954 ... ok
creating directory /u03/pgdata/PG954 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG954/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/PG954 -l logfile start

… and install the extension:

postgres@pgbox:/u02/pgdata/PG954/ [PG954] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create extension pgaudit;
ERROR:  pgaudit must be loaded via shared_preload_libraries
Time: 2.226 ms

(postgres@[local]:5438) [postgres] > alter system set shared_preload_libraries='pgaudit';
ALTER SYSTEM
Time: 18.236 ms

##### Restart the PostgreSQL instance

(postgres@[local]:5438) [postgres] > show shared_preload_libraries ;
 shared_preload_libraries 
--------------------------
 pgaudit
(1 row)

Time: 0.278 ms
(postgres@[local]:5438) [postgres] > create extension pgaudit;
CREATE EXTENSION
Time: 4.688 ms

(postgres@[local]:5438) [postgres] > \dx
                   List of installed extensions
  Name   | Version |   Schema   |           Description           
---------+---------+------------+---------------------------------
 pgaudit | 1.0     | public     | provides auditing functionality
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Ready. So, what can you do with it? As the documentation is quite good, here are just a few examples.

To log all statements against a role:

(postgres@[local]:5438) [postgres] > alter system set pgaudit.log = 'ROLE';

Once the configuration is reloaded, altering or creating roles is reported in the logfile as:

2016-09-02 14:50:45.432 CEST - 9 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,2,1,ROLE,CREATE ROLE,,,create user uu login password ,
2016-09-02 14:52:03.745 CEST - 16 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,3,1,ROLE,ALTER ROLE,,,alter user uu CREATEDB;,
2016-09-02 14:52:20.881 CEST - 18 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,4,1,ROLE,DROP ROLE,,,drop user uu;,

Object level auditing can be implemented like this (check the documentation for the meaning of the pgaudit.role parameter):

(postgres@[local]:5438) [postgres] > create user audit;
CREATE ROLE
(postgres@[local]:5438) [postgres] > create table taudit ( a int );
CREATE TABLE
(postgres@[local]:5438) [postgres] > insert into taudit values ( 1 );
INSERT 0 1
(postgres@[local]:5438) [postgres] > grant select,delete on taudit to audit;
GRANT
(postgres@[local]:5438) [postgres] > alter system set pgaudit.role='audit';
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Once we touch the table:

(postgres@[local]:5438) [postgres] > select * from taudit;
 a 
---
 1
(1 row)
(postgres@[local]:5438) [postgres] > update taudit set a = 4;

… the audit information appears in the logfile:

2016-09-02 14:57:10.198 CEST - 5 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,1,1,READ,SELECT,TABLE,public.taudit,select * from taudit;,
2016-09-02 15:00:59.537 CEST - 9 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,2,1,WRITE,UPDATE,TABLE,public.taudit,update taudit set a = 4;,
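
pgaudit writes the audit record after “AUDIT:” as a CSV payload (audit type, session and statement counters, class, command, object type, object name, statement). A minimal sketch of splitting these fields with standard tools (sample lines copied from above; statements containing commas or quotes would need a real CSV parser):

```shell
#!/bin/sh
# Sample pgaudit lines as shown above
cat > /tmp/pgaudit_sample.log <<'EOF'
2016-09-02 14:57:10.198 CEST - 5 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,1,1,READ,SELECT,TABLE,public.taudit,select * from taudit;,
2016-09-02 15:00:59.537 CEST - 9 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,2,1,WRITE,UPDATE,TABLE,public.taudit,update taudit set a = 4;,
EOF

# Print class, command and object name for every audit record
sed -n 's/.*AUDIT: //p' /tmp/pgaudit_sample.log \
  | awk -F',' '{print $4, $5, $7}'
```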

Have fun with auditing …

 

The post Auditing in PostgreSQL appeared first on Blog dbi services.


EDB Failover Manager 2.1, upgrading


Some days ago EnterpriseDB released a new version of its EDB Failover Manager, which brings one feature that really sounds great: “Controlled switchover and switchback for easier maintenance and disaster recovery tests”. This is exactly what you want when you are used to operating Oracle Data Guard: switching back and forth as you like without caring much about the old master. The old master shall just be converted to a standby which follows the new master automatically. This post is about upgrading EFM from version 2.0 to 2.1.

As I still have the environment available which was used for describing the maintenance scenarios with EDB Failover Manager (Maintenance scenarios with EDB Failover Manager (1) – Standby node, Maintenance scenarios with EDB Failover Manager (2) – Primary node and Maintenance scenarios with EDB Failover Manager (3) – Witness node) I will use the same environment to upgrade to the new release. Let’s start …

This is the current status of my failover cluster:

[root@edbbart ~]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C5E0       
	Standby     192.168.22.243       0/3B01C5E0       

	Standby database(s) in sync with master. It is safe to promote.
[root@edbbart ~]$ 

Obviously you have to download the new version to begin the upgrade. Once the rpm is available on all nodes, simply install it:

[root@edbppas tmp]$ yum localinstall efm21-2.1.0-1.rhel7.x86_64.rpm

EFM 2.1 comes with a utility command that helps in upgrading a cluster. You should invoke it on each node:

[root@edbbart tmp]$ /usr/efm-2.1/bin/efm upgrade-conf efm
Processing efm.properties file.
Setting new property node.timeout to 40 (sec) based on existing timeout 5000 (ms) and max tries 8.

Processing efm.nodes file.

Upgrade of files is finished. Please ensure that the new file permissions match those of the template files before starting EFM.
The db.service.name property should be set before starting a non-witness agent.
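
The conversion the utility reports is simple arithmetic: the old per-try timeout multiplied by the number of tries, expressed in seconds:

```shell
#!/bin/sh
old_timeout_ms=5000   # old EFM 2.0 timeout per try (ms)
max_tries=8           # old EFM 2.0 max tries
node_timeout=$(( old_timeout_ms * max_tries / 1000 ))
echo "node.timeout=${node_timeout}"   # node.timeout=40
```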

This created new configuration files in the new directory under /etc which appeared when the new version was installed:

[root@edbbart tmp]$ ls /etc/efm-2.1
efm.nodes  efm.nodes.in  efm.properties  efm.properties.in

All the values from the old EFM cluster should be there in the new configuration files:

[root@edbbart efm-2.1]$ pwd
/etc/efm-2.1
[root@edbbart efm-2.1]$ cat efm.properties | grep daniel
user.email=daniel.westermann...

Before going further, check the new configuration parameters introduced with EFM 2.1:

auto.allow.hosts
auto.resume.period
db.service.name
jvm.options
minimum.standbys
node.timeout
promotable
recovery.check.period
script.notification
script.resumed

I’ll leave everything as it was before for now. Notice that a new service got created:

[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             enabled 
efm-2.1.service                             disabled

Let’s try to shut down the old service on all nodes and then start the new one. Step 1 (on all nodes):

[root@edbppas efm-2.1]$ systemctl stop efm-2.0.service
[root@edbppas efm-2.1]$ systemctl disable efm-2.0.service
rm '/etc/systemd/system/multi-user.target.wants/efm-2.0.service'

Then enable the new service:

[root@edbppas efm-2.1]$ systemctl enable efm-2.1.service
ln -s '/usr/lib/systemd/system/efm-2.1.service' '/etc/systemd/system/multi-user.target.wants/efm-2.1.service'
[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             disabled
efm-2.1.service                             enabled 

Make sure your efm.nodes file contains all the nodes which make up the cluster, in my case:

[root@edbppas efm-2.1]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.243:9998 192.168.22.244:9998 192.168.22.245:9998
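
Since a wrong entry in efm.nodes prevents nodes from joining, a quick sanity check of the file can save time. A sketch (the file path is an example; the contents are the ones from above):

```shell
#!/bin/sh
cat > /tmp/efm.nodes <<'EOF'
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.243:9998 192.168.22.244:9998 192.168.22.245:9998
EOF

# Every non-comment token must look like address:port
if grep -v '^#' /tmp/efm.nodes | tr ' ' '\n' | grep -v '^$' \
   | grep -Evq '^[0-9.]+:[0-9]+$'; then
  echo "invalid entries found"
else
  echo "all entries OK"
fi
```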

Let’s try to start the new service on the witness node first:

[root@edbbart efm-2.1]$ systemctl start efm-2.1.service
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Looks good. Are we really running the new version?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm -v
Failover Manager, version 2.1.0

Looks fine as well. Time to add the other nodes:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.243
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.245
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Proceed on the master:

[root@ppasstandby efm-2.1]$ systemctl start efm-2.1.service
[root@ppasstandby efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:04:11 CEST; 25s ago
  Process: 4020 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 4075 (java)
   CGroup: /system.slice/efm-2.1.service
           └─4075 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/java -cp /usr/e...

Sep 08 12:04:07 ppasstandby systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:04:08 ppasstandby sudo[4087]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4098]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4114]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:08 ppasstandby sudo[4125]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:10 ppasstandby sudo[4165]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...9998
Sep 08 12:04:10 ppasstandby sudo[4176]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...4075
Sep 08 12:04:11 ppasstandby systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

And then continue on the standby:

[root@edbppas efm-2.1]$ systemctl start efm-2.1.service
[root@edbppas efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:05:28 CEST; 3s ago
  Process: 3820 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 3875 (java)
   CGroup: /system.slice/efm-2.1.service
           └─3875 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/jav...

Sep 08 12:05:24 edbppas systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:05:25 edbppas sudo[3887]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3898]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3914]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3925]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3945]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:28 edbppas sudo[3981]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...998
Sep 08 12:05:28 edbppas sudo[3994]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...875
Sep 08 12:05:28 edbppas systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

What is the cluster status now?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C7A0       
	Standby     192.168.22.243       0/3B01C7A0       

	Standby database(s) in sync with master. It is safe to promote.

Cool. Back in operation on the new release. Quite easy.

PS: Remember to re-point your symlinks in /etc and /usr if you created symlinks for ease of use.

 

The post EDB Failover Manager 2.1, upgrading appeared first on Blog dbi services.

EDB Failover Manager 2.1, (two) new features


In the last post we upgraded EDB EFM from version 2.0 to 2.1. In this post we’ll look at two of the new features:

  • Failover Manager now simplifies cluster startup with the auto.allow.hosts property
  • efm promote now includes a -switchover option, which instructs Failover Manager to perform a failover, promoting a standby to master, and then return the old master node to the cluster as a standby node

Let’s go …

My failover cluster status is still fine:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3C000220       
	Standby     192.168.22.243       0/3C000220       

	Standby database(s) in sync with master. It is safe to promote.

The first bit we’re going to change is auto.allow.hosts on the database servers. According to the documentation this should eliminate the need to explicitly allow the hosts to join the cluster; registration should happen automatically. So, let’s change it from “false” to “true” on all nodes:

[root@ppasstandby efm-2.1]$ grep allow.hosts efm.properties
auto.allow.hosts=true

… and then let’s add all nodes to the efm.nodes file on the witness:

[root@edbbart efm-2.1]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.244:9998 192.168.22.243:9998 192.168.22.245:9998

When we now shut down the EFM service on all hosts and bring it up again on the witness, what is the result?

[root@edbbart efm-2.1]$ systemctl stop efm-2.1.service  # do this on all hosts

Let’s start on the witness again:

[root@edbbart efm-2.1]$ systemctl start efm-2.1.service
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

So far so good, all nodes are in the “Allowed” list. What happens when we start EFM on the current primary node?

[root@ppasstandby efm-2.1]$  systemctl start efm-2.1.service
[root@ppasstandby efm-2.1]$ 

We should see the node as a member now without explicitly allowing it to join:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3D000060       

	No standby databases were found.

Cool, same on the standby node:

[root@edbppas edb-efm]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.244:9998
[root@edbppas edb-efm]$ systemctl start efm-2.1.service

What is the status:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3D000060       
	Standby     192.168.22.243       0/3D000060       

	Standby database(s) in sync with master. It is safe to promote.

Perfect. This makes it a bit easier, and there are fewer things to remember when bringing up a failover cluster.

Coming to the “big” new feature (at least in my opinion): switching to the standby and automatically making the old master a standby which follows the new master. According to the docs all we need to do is this:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm promote efm -switchover

Does it really work?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

Hm, let’s check the status:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.243       UP     UP        
	Standby     192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.245

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.243       0/410000D0       
	Standby     192.168.22.245       0/410000D0       

	Standby database(s) in sync with master. It is safe to promote.

It really worked! And backwards:

[root@edbbart ~]$ /usr/edb-efm/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

[root@edbbart ~]$ /usr/edb-efm/bin/efm cluster-status efm
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

    Agent Type  Address              Agent  DB       Info
    --------------------------------------------------------------
    Witness     192.168.22.244       UP     N/A       
    Standby     192.168.22.243       UP     UP        
    Master      192.168.22.245       UP     UP        

Allowed node host list:
    192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
    192.168.22.243

Promote Status:

    DB Type     Address              XLog Loc         Info
    --------------------------------------------------------------
    Master      192.168.22.245       0/480001A8       
    Standby     192.168.22.243       0/480001A8       

    Standby database(s) in sync with master. It is safe to promote.

Cool, that is really a great new feature.

 

The post EDB Failover Manager 2.1, (two) new features appeared first on Blog dbi services.

Securing your connections to PostgreSQL by using SSL


Security is a big topic today and in the news almost every day. As the database usually holds sensitive data, this data must be well protected. In most cases this is done by encrypting critical data inside the database and decrypting it only when requested. But this is not all: when a client reads the data, it is decrypted inside the database and then sent back over the network unencrypted. What do you win with such a setup? The only risk it protects you from is the theft of either your disks or the whole server. Even more important is that all the connections to your database are encrypted, so the traffic from and to your database cannot be read by someone else. In this post we’ll look at how you can do this with PostgreSQL.

Obviously, for securing the connections to the database by using SSL we’ll need a server certificate. As I am on Linux this can be generated very easily by using openssl to create a self-signed certificate. Be aware that your PostgreSQL binaries need to be compiled with “--with-openssl” for the following to work. You can check this using pg_config:

postgres@pgbox:/u01/app/postgres/local/dmk/ [PG960] pg_config | grep CONFIGURE
CONFIGURE = '--prefix=/u01/app/postgres/product/96/db_0' '--exec-prefix=/u01/app/postgres/product/96/db_0' '--bindir=/u01/app/postgres/product/96/db_0/bin' '--libdir=/u01/app/postgres/product/96/db_0/lib' '--sysconfdir=/u01/app/postgres/product/96/db_0/etc' '--includedir=/u01/app/postgres/product/96/db_0/include' '--datarootdir=/u01/app/postgres/product/96/db_0/share' '--datadir=/u01/app/postgres/product/96/db_0/share' '--with-pgport=5432' '--with-perl' '--with-python' '--with-tcl' '--with-openssl' '--with-pam' '--with-ldap' '--with-libxml' '--with-libxslt' '--with-segsize=2' '--with-blocksize=8' '--with-wal-segsize=16' '--with-extra-version= dbi services build'

To create the certificate request with openssl, simply execute the following command:

postgres@pgbox:/home/postgres/ [PG960] openssl req -new -text -out server.req

This creates a new certificate request based on the information you provide. The only important point here (for the scope of this post) is that the “Common Name” must match the server name where your PostgreSQL is running on, e.g.:

Generating a 2048 bit RSA private key
............................................................................................................................................+++
..................................................................................................+++
writing new private key to 'privkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CH
State or Province Name (full name) []:BS
Locality Name (eg, city) [Default City]:Basel
Organization Name (eg, company) [Default Company Ltd]:dbi services
Organizational Unit Name (eg, section) []:dba
Common Name (eg, your name or your server's hostname) []:pgbox
Email Address []:xx@xx@com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

This created two files in the directory where you executed the command (the first one is the certificate request and the second one is the private key):

-rw-r--r--. 1 postgres postgres  3519 Sep  9 13:24 server.req
-rw-r--r--. 1 postgres postgres  1821 Sep  9 13:24 privkey.pem

If you want your PostgreSQL instance to start automatically you should remove the pass phrase from the generated private key:

postgres@pgbox:/home/postgres/ [PG960] openssl rsa -in privkey.pem -out server.key
Enter pass phrase for privkey.pem:
writing RSA key
postgres@pgbox:/home/postgres/ [PG960] rm privkey.pem

The password which is asked for is the one you provided when you generated the certificate request above. The new key is now in “server.key”. Now you can create your certificate:

postgres@pgbox:/home/postgres/ [PG960] openssl req -x509 -in server.req -text -key server.key -out server.crt

If everything went well your brand new certificate should be available:

postgres@pgbox:/home/postgres/ [PG960] ls -l server.crt 
-rw-r--r--. 1 postgres postgres 4473 Sep  9 13:32 server.crt
postgres@pgbox:/home/postgres/ [PG960] cat server.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 12528845138836301488 (0xaddf6645ea37a6b0)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CH, ST=BS, L=Basel, O=dbi services, OU=dba, CN=pgbox/emailAddress=xx@xx@com
        Validity
            Not Before: Sep  9 11:32:42 2016 GMT
            Not After : Oct  9 11:32:42 2016 GMT
        Subject: C=CH, ST=BS, L=Basel, O=dbi services, OU=dba, CN=pgbox/emailAddress=xx@xx@com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:cb:4f:d1:b7:81:c4:83:22:2f:fb:9f:4b:fa:6a:
                    16:77:fd:62:37:91:f1:09:cc:c4:e1:04:e1:de:f2:
                    3f:77:35:ec:e5:8f:5a:03:1d:7b:53:8e:5a:72:76:
                    42:2a:cb:95:9a:35:4a:98:1d:78:3c:21:85:3d:7c:
                    59:f6:e8:7b:20:d0:73:db:42:ff:38:ca:0c:13:f6:
                    cc:3e:bc:b0:8f:41:29:f1:c7:33:45:79:c7:04:33:
                    51:47:0b:23:f8:d6:58:68:2d:95:83:c9:ad:40:7c:
                    95:9a:0c:ff:92:bd:d6:4f:b2:96:6c:41:45:0d:eb:
                    19:57:b3:9a:fc:1c:82:01:9c:2d:e5:2e:1b:0f:47:
                    ab:84:fa:65:ed:80:e7:19:da:ab:89:09:ed:6a:2c:
                    3a:aa:fe:dc:ba:53:e5:52:3f:1c:db:47:4c:4a:d6:
                    e5:0f:76:12:df:f4:6c:fd:5a:fb:a5:70:b4:7b:06:
                    c3:0c:b1:4d:cf:04:8e:5c:b0:05:cb:f2:ac:78:a6:
                    12:44:55:07:f9:88:55:59:23:11:0f:dd:53:14:6a:
                    e8:c4:bb:6a:94:af:1e:54:e8:7d:4f:10:8a:e5:7e:
                    31:3b:cf:28:28:80:37:62:eb:5e:49:26:9d:10:17:
                    33:bc:a7:3f:2a:06:a4:f0:37:a5:b3:07:6d:ce:6a:
                    b7:17
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier: 
                EA:63:B1:7F:07:DF:31:3F:55:28:77:CC:FB:F2:1F:3A:D6:45:3F:55
            X509v3 Authority Key Identifier: 
                keyid:EA:63:B1:7F:07:DF:31:3F:55:28:77:CC:FB:F2:1F:3A:D6:45:3F:55

            X509v3 Basic Constraints: 
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
         18:2b:96:b6:01:d8:3e:7f:bb:35:0c:4b:53:c2:9c:02:22:41:
         25:82:d3:b6:a9:88:6e:0e:5d:5b:d3:ac:00:43:0a:04:f4:12:
         6e:22:fd:3f:77:63:0e:42:28:e3:09:6b:16:67:5f:b7:08:08:
         74:a3:55:1f:49:09:69:96:e8:f6:2e:9c:8a:d6:a0:e2:f7:d8:
         30:62:06:f0:5e:1a:85:fe:ff:2d:39:64:f7:f1:e9:ce:21:02:
         f3:86:5f:3b:f6:12:1d:61:cd:a8:bf:36:e2:98:d4:99:b6:95:
         5e:05:87:8d:ab:2f:30:38:b2:fe:68:ac:50:8d:98:fd:aa:4d:
         79:e2:f5:71:92:d6:e5:1d:59:42:02:49:7a:2e:e0:f3:ba:41:
         4d:f4:15:33:44:36:14:43:3b:7a:41:1b:61:6c:ff:78:fb:13:
         4a:a4:e0:96:6c:45:80:0e:30:e3:63:9d:dc:f1:77:16:22:9c:
         7a:c9:92:96:53:3b:62:87:ca:cb:e8:4a:a4:4f:69:a6:a0:5a:
         a9:eb:be:58:7f:c1:da:d4:d7:41:d4:54:06:fb:5b:8b:ea:46:
         68:f5:e6:1e:2b:6a:0b:65:f9:66:5a:a2:14:ec:eb:05:2f:99:
         46:bc:bb:d8:11:f6:3f:2e:6e:15:48:ac:70:1f:18:2d:e2:78:
         4b:a3:cb:ef
-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIJAK3fZkXqN6awMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
BAYTAkNIMQswCQYDVQQIDAJCUzEOMAwGA1UEBwwFQmFzZWwxFTATBgNVBAoMDGRi
aSBzZXJ2aWNlczEMMAoGA1UECwwDZGJhMQ4wDAYDVQQDDAVwZ2JveDEYMBYGCSqG
SIb3DQEJARYJeHhAeHhAY29tMB4XDTE2MDkwOTExMzI0MloXDTE2MTAwOTExMzI0
MloweTELMAkGA1UEBhMCQ0gxCzAJBgNVBAgMAkJTMQ4wDAYDVQQHDAVCYXNlbDEV
MBMGA1UECgwMZGJpIHNlcnZpY2VzMQwwCgYDVQQLDANkYmExDjAMBgNVBAMMBXBn
Ym94MRgwFgYJKoZIhvcNAQkBFgl4eEB4eEBjb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDLT9G3gcSDIi/7n0v6ahZ3/WI3kfEJzMThBOHe8j93Nezl
j1oDHXtTjlpydkIqy5WaNUqYHXg8IYU9fFn26Hsg0HPbQv84ygwT9sw+vLCPQSnx
xzNFeccEM1FHCyP41lhoLZWDya1AfJWaDP+SvdZPspZsQUUN6xlXs5r8HIIBnC3l
LhsPR6uE+mXtgOcZ2quJCe1qLDqq/ty6U+VSPxzbR0xK1uUPdhLf9Gz9WvulcLR7
BsMMsU3PBI5csAXL8qx4phJEVQf5iFVZIxEP3VMUaujEu2qUrx5U6H1PEIrlfjE7
zygogDdi615JJp0QFzO8pz8qBqTwN6WzB23OarcXAgMBAAGjUDBOMB0GA1UdDgQW
BBTqY7F/B98xP1Uod8z78h861kU/VTAfBgNVHSMEGDAWgBTqY7F/B98xP1Uod8z7
8h861kU/VTAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAYK5a2Adg+
f7s1DEtTwpwCIkElgtO2qYhuDl1b06wAQwoE9BJuIv0/d2MOQijjCWsWZ1+3CAh0
o1UfSQlpluj2LpyK1qDi99gwYgbwXhqF/v8tOWT38enOIQLzhl879hIdYc2ovzbi
mNSZtpVeBYeNqy8wOLL+aKxQjZj9qk154vVxktblHVlCAkl6LuDzukFN9BUzRDYU
Qzt6QRthbP94+xNKpOCWbEWADjDjY53c8XcWIpx6yZKWUztih8rL6EqkT2mmoFqp
675Yf8Ha1NdB1FQG+1uL6kZo9eYeK2oLZflmWqIU7OsFL5lGvLvYEfY/Lm4VSKxw
Hxgt4nhLo8vv
-----END CERTIFICATE-----

For PostgreSQL to accept the key when it starts up you’ll need to modify the permissions:

postgres@pgbox:/home/postgres/ [PG960] chmod 600 server.key
postgres@pgbox:/home/postgres/ [PG960] ls -l server.key
-rw-------. 1 postgres postgres 1675 Sep  9 13:30 server.key
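As an aside, if you want to script this, the interactive steps above (request, passphrase removal, certificate, permissions) can be collapsed into a single non-interactive call. This is just a sketch, assuming the same subject fields and the hostname pgbox from the example above:

```shell
# create a private key without passphrase and a self-signed certificate in one go;
# the CN must match the server name your PostgreSQL instance runs on
openssl req -new -x509 -days 365 -nodes \
  -subj "/C=CH/ST=BS/L=Basel/O=dbi services/OU=dba/CN=pgbox" \
  -keyout server.key -out server.crt
# PostgreSQL refuses to start with a group- or world-readable key
chmod 600 server.key
# sanity check: print subject and validity period of the new certificate
openssl x509 -in server.crt -noout -subject -dates
```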

Both files (server.key and server.crt) need to be copied to your data directory (you can adjust this by using the ssl_cert_file and ssl_key_file configuration parameters):

postgres@pgbox:/home/postgres/ [PG960] mv server.key server.crt $PGDATA/

Now you can turn on ssl…

(postgres@[local]:5432) [postgres] > alter system set ssl='on';
ALTER SYSTEM
Time: 5.427 ms

… and restart your instance:

postgres@pgbox:/home/postgres/ [PG960] pg_ctl -D $PGDATA restart -m fast

How can you test whether SSL connections work? Add the following line to pg_hba.conf for your instance:

hostssl  all             all             127.0.0.1/32            md5

Reload your server and then create a new connection:

postgres@pgbox:/u02/pgdata/PG1/ [PG960] psql -h localhost -p 5432 postgres
psql (9.6rc1 dbi services build)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

Works as expected. For anything beyond testing you’ll need a real certificate, of course. Just in case you expected to configure another port: PostgreSQL listens for normal and SSL connections on the same port. When the client supports SSL, an SSL connection will be established, otherwise a normal one. If you want to force the use of SSL connections you can do so by adjusting your pg_hba.conf (reject all connections which are not SSL).
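Forcing SSL in pg_hba.conf could look like this (a sketch: hostnossl matches only TCP connections that do not use SSL, and entries are evaluated top-down, so the order matters):

```
# reject any TCP connection that does not use SSL
hostnossl  all  all  0.0.0.0/0  reject
# allow SSL connections with password authentication
hostssl    all  all  0.0.0.0/0  md5
```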

 

This article Securing your connections to PostgreSQL by using SSL appeared first on Blog dbi services.

What the hell are these template0 and template1 databases in PostgreSQL?


When people start to work with PostgreSQL, especially when they are used to Oracle, some things might be very confusing. A few of the questions we usually get asked are:

  • Where is the listener and how can I configure it?
  • When you talk about a PostgreSQL cluster where are the other nodes?
  • Why do we have these template databases and what are they used for?
  • …and some others…

In this post we’ll look at the last point: Why do we have two template databases (template0 and template1) and additionally a database called “postgres”? That makes three databases by default. In Oracle we only have one, well, two when you use pluggable databases (the root and pdb$seed). Why does PostgreSQL need three by default? Isn’t that just overhead? Let’s see …

To begin with: Assuming these databases are really not required we can drop them, can’t we?

(postgres@[local]:5432) [postgres] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres

Can we drop the “postgres” database?

(postgres@[local]:5432) [postgres] > drop database postgres;
ERROR:  cannot drop the currently open database
Time: 1.052 ms

Ok, this is the first point to remember: You cannot drop a database to which users are currently connected (in this case it is my own connection). So let’s try to connect to template1 and then drop the “postgres” database:

postgres@pgbox:/home/postgres/ [PG960] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > drop database postgres;
DROP DATABASE
Time: 489.287 ms
(postgres@[local]:5432) [template1] > 

Uh, our default “postgres” database is gone. Does it matter? Not really from a PostgreSQL perspective but probably all your clients (pgadmin, monitoring scripts, …) will have a problem now:

postgres@pgbox:/home/postgres/ [postgres] psql
psql: FATAL:  database "postgres" does not exist

The “postgres” database is meant as a default database for clients to connect to. When you administer a PostgreSQL instance which runs under the postgres operating system user, the default database used for a connection is the same as the username => postgres. Now that this database does not exist anymore you can no longer connect if you do not provide a database name in your connection request. But we can still connect to “template1”:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

Second thing to remember: The “postgres” database is meant as a default database for connections. It is not required; you can drop it, but probably a lot of the tools you use will need to be adjusted because they assume that the “postgres” database is there by default.

Luckily we can easily recover from that:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create database postgres;
CREATE DATABASE
(postgres@[local]:5432) [template1] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

What happened? We connected to template1 again and re-created the “postgres” database. Of course everything we added to the “postgres” database before we dropped it is no longer available. This brings us to the next question: When we create a new database, what or who defines the initial contents?

Third thing to remember: When you create a new database by using the syntax “create database [DB_NAME]” you get an exact copy of template1.

Really? What happens when I modify template1? Let’s add one table and one extension:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create table my_test_tab ( a int );
CREATE TABLE
(postgres@[local]:5432) [template1] > create extension hstore;
CREATE EXTENSION
Time: 123.722 ms
(postgres@[local]:5432) [template1] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

If the above statement is true every new database created with the above syntax should contain the table and the extension, right?

(postgres@[local]:5432) [postgres] > \c db_test
You are now connected to database "db_test" as user "postgres".
(postgres@[local]:5432) [db_test] > \d
            List of relations
 Schema |    Name     | Type  |  Owner   
--------+-------------+-------+----------
 public | my_test_tab | table | postgres
(1 row)

(postgres@[local]:5432) [db_test] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Whatever you put into template1 will be available in a new database if you use the following syntax: “create database [DB_NAME];” This can simplify your deployments a lot if you rely on pre-installed objects for e.g. monitoring or development.

Ok, I got it, but what is template0 for then? To understand this we first take a look at pg_database, especially at two columns: datallowconn and datistemplate:

(postgres@[local]:5432) [db_test] > select datname,datallowconn,datistemplate from pg_database order by 3;
  datname  | datallowconn | datistemplate 
-----------+--------------+---------------
 postgres  | t            | f
 db_test   | t            | f
 template1 | t            | t
 template0 | f            | t
(4 rows)

When you take a look at “datallowconn” the only database that has this set to false is “template0”. Do you remember the beginning of this post when we tried to delete the “postgres” database? You can only delete a database when there are no connections to it. But: you can only create a database from another database if there are no connections to the source, either. Really? Why then can I create a new database while I am connected to template1, when template1 is the source for the new database?

(postgres@[local]:5432) [template1] > create database db_test_2;
CREATE DATABASE

Confusing? This does not work anymore if there is another session connected to template1.

Let’s try to create another new database, but this time we use db_test as the source. Yes, this is possible if you slightly adjust the syntax. But before we create the database we open another connection to db_test:

(postgres@[local]:5432) [template1] > \q
postgres@pgbox:/home/postgres/ [PG960] psql db_test
psql (9.6rc1 dbi services build)
Type "help" for help.

In another session we try to create new database with db_test as the source:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [postgres] > create database db_test_3 template db_test;
ERROR:  source database "db_test" is being accessed by other users
DETAIL:  There is 1 other session using the database.

Fourth point to remember: For creating new databases you can use whatever database you like as the source when you specify the template explicitly.
Fifth point to remember: When you want to drop a database, or create a new database from it, there must be no other connections to that database.
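If a DROP DATABASE or CREATE DATABASE fails because of existing connections, you can check who is connected by querying pg_stat_activity (a quick sketch; these column names exist as of PostgreSQL 9.2 and later):

```sql
-- sessions currently connected to the database in question
select pid, usename, application_name, client_addr
  from pg_stat_activity
 where datname = 'db_test';
```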

Coming back to the “datistemplate” and “datallowconn” settings: template0 is the only database that has “datallowconn” set to false. Why? Because template0 is meant as the default, unmodifiable database. You should never make any changes there. In a brand new PostgreSQL instance template0 and template1 are exactly the same. But why do I need both of them then? Assume you messed up template1 somehow (installed too many objects, for example). Using template0 you can still recover from that easily:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = false where datname = 'template1';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database template1;
DROP DATABASE
(postgres@[local]:5432) [postgres] > create database template1 template template0;
CREATE DATABASE
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'template1';
UPDATE 1

What happened here? I modified template1 to no longer be a template, because you cannot drop a database flagged as a template. Then I dropped and re-created the template1 database using template0 as the template. Now my template1 is an exact copy of template0 again and all the things I messed up are gone. Another use case for this: Imagine you modified template1 to include the stuff you rely on, but at some point in the future you need a new database without your modifications (e.g. for restoring a dump). For this you can always use template0 as the template, because template0 is always clean. And this is why connections are not allowed to template0.

Of course you can create your own template database(s) by setting the “datistemplate” to true:

(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'db_test';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database db_test;
ERROR:  cannot drop a template database

What about the overhead:

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template0' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template1' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'postgres' ));
 pg_size_pretty 
----------------
 7343 kB
(1 row)

This should not really be an issue on today’s hardware. Hopefully this sheds some light on these default databases.

 

This article What the hell are these template0 and template1 databases in PostgreSQL? appeared first on Blog dbi services.

Connecting your PostgreSQL instance to an Oracle database – Debian version


Some time ago I blogged about attaching your PostgreSQL instance to an Oracle database by using the oracle_fdw foreign data wrapper. This resulted in a comment which is the reason for this post: doing the same on a Debian system, where you cannot use the rpm versions of the Oracle Instant Client (at least not directly). Let’s go …

To start with, I downloaded the Debian 8 netinstall ISO and did a minimal installation from there (see the end of this post for screenshots of the installation if you are not sure how to do it).

As I will compile PostgreSQL from source I’ll need to install the required packages:

root@debianpg:~ apt-get install libldap2-dev libpython-dev libreadline-dev libssl-dev bison flex libghc-zlib-dev libcrypto++-dev libxml2-dev libxslt1-dev tcl tclcl-dev bzip2 wget screen ksh git unzip

Create the directory structure for my PostgreSQL binaries and instance:

root@debianpg:~ mkdir -p /u01/app/postgres/product
root@debianpg:~ chown -R postgres:postgres /u01/app 
root@debianpg:~ mkdir -p /u02/pgdata
root@debianpg:~ mkdir -p /u03/pgdata
root@debianpg:~ chown -R postgres:postgres /u0*/pgdata 

Compile and install PostgreSQL from source:

postgres@debianpg:~$ PGHOME=/u01/app/postgres/product/95/db_4
postgres@debianpg:~$ SEGSIZE=2
postgres@debianpg:~$ BLOCKSIZE=8
postgres@debianpg:~$ WALSEGSIZE=16
postgres@debianpg:~$ wget https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
postgres@debianpg:~$ tar -axf postgresql-9.5.4.tar.bz2
postgres@debianpg:~$ cd postgresql-9.5.4/
./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-extra-version=" dbi services build"
postgres@debianpg:~$ make world
postgres@debianpg:~$ make install
postgres@debianpg:~$ cd contrib
postgres@debianpg:~$ make install
postgres@debianpg:~$ cd ../..
postgres@debianpg:~$ rm -rf postgres*

Initialize a new cluster:

postgres@debianpg:~$ /u01/app/postgres/product/95/db_4/bin/initdb -D /u02/pgdata/PG1 -X /u02/pgdata/PG1 --locale=en_US.UTF-8
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /u02/pgdata/PG1 ... ok
fixing permissions on existing directory /u02/pgdata/PG1 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG1/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /u01/app/postgres/product/95/db_4/bin/pg_ctl -D /u02/pgdata/PG1 -l logfile start

Adjust the logging_collector parameter and startup the instance:

postgres@debianpg:~$ sed -i 's/#logging_collector = off/logging_collector = on/g' /u02/pgdata/PG1/postgresql.conf
postgres@debianpg:~$ mkdir /u02/pgdata/PG1/pg_log
postgres@debianpg:~$ /u01/app/postgres/product/95/db_4/bin/pg_ctl start -D /u02/pgdata/PG1/
postgres@debianpg:~$ /u01/app/postgres/product/95/db_4/bin/psql
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=# select version();
                                                   version                                                   
-------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.5.4 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit
(1 row)

postgres=#

Download the Oracle Instant Client zip files from the Oracle website. You’ll need these:

  • instantclient-basic-linux.x64-12.1.0.2.0.zip
  • instantclient-sqlplus-linux.x64-12.1.0.2.0.zip
  • instantclient-sdk-linux.x64-12.1.0.2.0.zip

Extract to a location which fits your needs:

postgres@debianpg:~$ cd /u01/app/
postgres@debianpg:/u01/app$ unzip /home/postgres/instantclient-basic-linux.x64-12.1.0.2.0.zip
postgres@debianpg:/u01/app$ unzip /home/postgres/instantclient-sqlplus-linux.x64-12.1.0.2.0.zip
postgres@debianpg:/u01/app$ unzip /home/postgres/instantclient-sdk-linux.x64-12.1.0.2.0.zip
postgres@debianpg:/u01/app$ ls -l
total 8
drwxr-xr-x 3 postgres postgres 4096 Sep 27 12:04 instantclient_12_1
drwxr-xr-x 4 postgres postgres 4096 Sep 27 10:57 postgres

Do a connection test with sqlplus to be sure the instant client is working in general:

postgres@debianpg:/u01/app$ export ORACLE_HOME=/u01/app/instantclient_12_1
postgres@debianpg:/u01/app$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/u01/app/instantclient_12_1
postgres@debianpg:/u01/app$ export PATH=$PATH:$ORACLE_HOME
postgres@debianpg:/u01/app$ which sqlplus
/u01/app/instantclient_12_1/sqlplus
postgres@debianpg:/u01/app$ sqlplus sh/sh@192.168.22.242:1521/PROD
sqlplus: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory

Oops, easy to fix:

root@debianpg:~ apt-get install libaio1

Again:

postgres@debianpg:/u01/app$ sqlplus sh/sh@192.168.22.242:1521/PROD.local

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 27 12:12:44 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Sep 27 2016 12:09:05 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> 

Perfect, connections to the Oracle instance are working. Continue with the oracle_fdw setup:

postgres@debianpg:~$ wget https://github.com/laurenz/oracle_fdw/archive/master.zip
postgres@debianpg:~$ unzip master.zip 
postgres@debianpg:~$ cd oracle_fdw-master/
postgres@debianpg:~/oracle_fdw-master$ export PATH=/u01/app/postgres/product/95/db_4/bin/:$PATH
postgres@debianpg:~/oracle_fdw-master$ which pg_config 
/u01/app/postgres/product/95/db_4/bin//pg_config
postgres@debianpg:~/oracle_fdw-master$ make
...
/usr/bin/ld: cannot find -lclntsh
collect2: error: ld returned 1 exit status
/u01/app/postgres/product/95/db_4/lib/pgxs/src/makefiles/../../src/Makefile.shlib:311: recipe for target 'oracle_fdw.so' failed
make: *** [oracle_fdw.so] Error 1

This one was unexpected. After some digging, this resolved the issue:

postgres@debianpg:/u01/app/instantclient_12_1$ cd /u01/app/instantclient_12_1
postgres@debianpg:/u01/app/instantclient_12_1$ ln -s libclntsh.so.12.1 libclntsh.so

Not sure if I missed something or this is a bug (you can follow the issue here).

Once the link is there you’ll be able to run “make” and “make install”. This is the result:

postgres@debianpg:~/oracle_fdw-master$ make
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I/u01/app/instantclient_12_1/sdk/include -I/u01/app/instantclient_12_1/oci/include -I/u01/app/instantclient_12_1/rdbms/public -I/usr/include/oracle/12.1/client -I/usr/include/oracle/12.1/client64 -I/usr/include/oracle/11.2/client -I/usr/include/oracle/11.2/client64 -I/usr/include/oracle/11.1/client -I/usr/include/oracle/11.1/client64 -I/usr/include/oracle/10.2.0.5/client -I/usr/include/oracle/10.2.0.5/client64 -I/usr/include/oracle/10.2.0.4/client -I/usr/include/oracle/10.2.0.4/client64 -I/usr/include/oracle/10.2.0.3/client -I/usr/include/oracle/10.2.0.3/client64 -I. -I./ -I/u01/app/postgres/product/95/db_4/include/server -I/u01/app/postgres/product/95/db_4/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o oracle_fdw.o oracle_fdw.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I/u01/app/instantclient_12_1/sdk/include -I/u01/app/instantclient_12_1/oci/include -I/u01/app/instantclient_12_1/rdbms/public -I/usr/include/oracle/12.1/client -I/usr/include/oracle/12.1/client64 -I/usr/include/oracle/11.2/client -I/usr/include/oracle/11.2/client64 -I/usr/include/oracle/11.1/client -I/usr/include/oracle/11.1/client64 -I/usr/include/oracle/10.2.0.5/client -I/usr/include/oracle/10.2.0.5/client64 -I/usr/include/oracle/10.2.0.4/client -I/usr/include/oracle/10.2.0.4/client64 -I/usr/include/oracle/10.2.0.3/client -I/usr/include/oracle/10.2.0.3/client64 -I. -I./ -I/u01/app/postgres/product/95/db_4/include/server -I/u01/app/postgres/product/95/db_4/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o oracle_utils.o oracle_utils.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I/u01/app/instantclient_12_1/sdk/include -I/u01/app/instantclient_12_1/oci/include -I/u01/app/instantclient_12_1/rdbms/public -I/usr/include/oracle/12.1/client -I/usr/include/oracle/12.1/client64 -I/usr/include/oracle/11.2/client -I/usr/include/oracle/11.2/client64 -I/usr/include/oracle/11.1/client -I/usr/include/oracle/11.1/client64 -I/usr/include/oracle/10.2.0.5/client -I/usr/include/oracle/10.2.0.5/client64 -I/usr/include/oracle/10.2.0.4/client -I/usr/include/oracle/10.2.0.4/client64 -I/usr/include/oracle/10.2.0.3/client -I/usr/include/oracle/10.2.0.3/client64 -I. -I./ -I/u01/app/postgres/product/95/db_4/include/server -I/u01/app/postgres/product/95/db_4/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o oracle_gis.o oracle_gis.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -shared -o oracle_fdw.so oracle_fdw.o oracle_utils.o oracle_gis.o -L/u01/app/postgres/product/95/db_4/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/postgres/product/95/db_4/lib',--enable-new-dtags  -L/u01/app/instantclient_12_1 -L/u01/app/instantclient_12_1/bin -L/u01/app/instantclient_12_1/lib -L/u01/app/instantclient_12_1/sdk/include -lclntsh -L/usr/lib/oracle/12.1/client/lib -L/usr/lib/oracle/12.1/client64/lib -L/usr/lib/oracle/11.2/client/lib -L/usr/lib/oracle/11.2/client64/lib -L/usr/lib/oracle/11.1/client/lib -L/usr/lib/oracle/11.1/client64/lib -L/usr/lib/oracle/10.2.0.5/client/lib -L/usr/lib/oracle/10.2.0.5/client64/lib -L/usr/lib/oracle/10.2.0.4/client/lib -L/usr/lib/oracle/10.2.0.4/client64/lib -L/usr/lib/oracle/10.2.0.3/client/lib -L/usr/lib/oracle/10.2.0.3/client64/lib 
postgres@debianpg:~/oracle_fdw-master$ make install
/bin/mkdir -p '/u01/app/postgres/product/95/db_4/lib'
/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/doc/extension'
/usr/bin/install -c -m 755  oracle_fdw.so '/u01/app/postgres/product/95/db_4/lib/oracle_fdw.so'
/usr/bin/install -c -m 644 .//oracle_fdw.control '/u01/app/postgres/product/95/db_4/share/extension/'
/usr/bin/install -c -m 644 .//oracle_fdw--1.1.sql .//oracle_fdw--1.0--1.1.sql  '/u01/app/postgres/product/95/db_4/share/extension/'
/usr/bin/install -c -m 644 .//README.oracle_fdw '/u01/app/postgres/product/95/db_4/share/doc/extension/'

Remember that the PostgreSQL instance needs to find the Oracle libraries, so set the environment before restarting PostgreSQL:
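
One way to make this permanent is to put the variables into the profile of the postgres OS user (a sketch; the instant client location is the one used in this post, adjust it to your installation):

```shell
# ~/.bash_profile of the postgres OS user (instant client path as used in this post)
export ORACLE_HOME=/u01/app/instantclient_12_1
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME:$ORACLE_HOME/sdk/include
```

With that in place the variables are visible in every new session: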

postgres@debianpg:~$ echo $ORACLE_HOME
/u01/app/instantclient_12_1
postgres@debianpg:~$ echo $LD_LIBRARY_PATH
:/u01/app/instantclient_12_1:/u01/app/instantclient_12_1/sdk/include/
postgres@debianpg:~$ pg_ctl -D /u02/pgdata/PG1/ restart -m fast
postgres@debianpg:~$ psql
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=# create extension oracle_fdw;
CREATE EXTENSION
postgres=# \dx
                        List of installed extensions
    Name    | Version |   Schema   |              Description               
------------+---------+------------+----------------------------------------
 oracle_fdw | 1.1     | public     | foreign data wrapper for Oracle access
 plpgsql    | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

All fine. Let's get the foreign data:

postgres=# create schema oracle;
CREATE SCHEMA
postgres=# create server oracle foreign data wrapper oracle_fdw options (dbserver '//192.168.22.242/PROD.local' );
CREATE SERVER
postgres=# create user mapping for postgres server oracle options (user 'sh', password 'sh');
CREATE USER MAPPING
postgres=# import foreign schema "SH" from server oracle into oracle;
IMPORT FOREIGN SCHEMA
postgres=# set search_path='oracle';
SET
postgres=# \d
                       List of relations
 Schema |            Name            |     Type      |  Owner   
--------+----------------------------+---------------+----------
 oracle | cal_month_sales_mv         | foreign table | postgres
 oracle | channels                   | foreign table | postgres
 oracle | costs                      | foreign table | postgres
 oracle | countries                  | foreign table | postgres
 oracle | currency                   | foreign table | postgres
 oracle | customers                  | foreign table | postgres
 oracle | dimension_exceptions       | foreign table | postgres
 oracle | fweek_pscat_sales_mv       | foreign table | postgres
 oracle | products                   | foreign table | postgres
 oracle | profits                    | foreign table | postgres
 oracle | promotions                 | foreign table | postgres
 oracle | sales                      | foreign table | postgres
 oracle | sales_transactions_ext     | foreign table | postgres
 oracle | supplementary_demographics | foreign table | postgres
 oracle | times                      | foreign table | postgres
(15 rows)

postgres=# select count(*) from countries;
 count 
-------
    23
(1 row)

Perfect, works. Hope this helps.

Debian 8 installation screenshots: the original post showed a click-through gallery of the installer screens here (images not reproduced in this text version).

The post Connecting your PostgreSQL instance to an Oracle database – Debian version first appeared on the dbi services Blog.

Running PostgreSQL on ZFS on Linux


ZFS on Solaris has been around for many years now (since 2005). But there is also a project called OpenZFS which makes ZFS available on other operating systems. For Linux, the announcement that ZFS is production ready dates back to 2013. So why not run PostgreSQL on it? ZFS provides many cool features including compression, snapshots and built-in volume management. Let's give it a try and do an initial setup. More details will follow in separate posts.

As usual I am running a CentOS 7 VM for my tests:

[root@centos7 ~] lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.2.1511 (Core) 
Release:	7.2.1511
Codename:	Core

There is a dedicated website for ZFS on Linux where you can find the instructions on how to install it for various distributions. The instructions for CentOS/RHEL are quite easy. Download the repo release package:

[root@centos7 ~] yum install http://download.zfsonlinux.org/epel/zfs-release$(rpm -E %dist).noarch.rpm
Loaded plugins: fastestmirror
zfs-release.el7.centos.noarch.rpm                                                                    | 5.0 kB  00:00:00     
Examining /var/tmp/yum-root-Uv79vc/zfs-release.el7.centos.noarch.rpm: zfs-release-1-3.el7.centos.noarch
Marking /var/tmp/yum-root-Uv79vc/zfs-release.el7.centos.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package zfs-release.noarch 0:1-3.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================
 Package                  Arch                Version                     Repository                                   Size
============================================================================================================================
Installing:
 zfs-release              noarch              1-3.el7.centos              /zfs-release.el7.centos.noarch              2.9 k

Transaction Summary
============================================================================================================================
Install  1 Package

Total size: 2.9 k
Installed size: 2.9 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : zfs-release-1-3.el7.centos.noarch                                                                        1/1 
  Verifying  : zfs-release-1-3.el7.centos.noarch                                                                        1/1 

Installed:
  zfs-release.noarch 0:1-3.el7.centos                                                                                       

Complete!

[root@centos7 ~] gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
pub  2048R/F14AB620 2013-03-21 ZFS on Linux 
      Key fingerprint = C93A FFFD 9F3F 7B03 C310  CEB6 A9D5 A1C0 F14A B620
sub  2048R/99685629 2013-03-21

For the next step it depends on whether you want to go with DKMS or kABI-tracking kmod packages. I'll go with kABI-tracking kmod and therefore disable the DKMS repository and enable the kmod repository:

[root@centos7 ~] cat /etc/yum.repos.d/zfs.repo 
[zfs]
name=ZFS on Linux for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-kmod]
name=ZFS on Linux for EL7 - kmod
baseurl=http://download.zfsonlinux.org/epel/7/kmod/$basearch/
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-source]
name=ZFS on Linux for EL7 - Source
baseurl=http://download.zfsonlinux.org/epel/7/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing]
name=ZFS on Linux for EL7 - dkms - Testing
baseurl=http://download.zfsonlinux.org/epel-testing/7/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing-kmod]
name=ZFS on Linux for EL7 - kmod - Testing
baseurl=http://download.zfsonlinux.org/epel-testing/7/kmod/$basearch/
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

[zfs-testing-source]
name=ZFS on Linux for EL7 - Testing Source
baseurl=http://download.zfsonlinux.org/epel-testing/7/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
[root@centos7 ~] 

Installing ZFS from here on is just a matter of using yum:

[root@centos7 ~] yum install zfs
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch
 * extras: mirror.spreitzer.ch
 * updates: mirror.de.leaseweb.net
zfs-kmod/x86_64/primary_db                                                                           | 231 kB  00:00:01     
Resolving Dependencies
--> Running transaction check
---> Package zfs.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Processing Dependency: zfs-kmod = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: spl = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzpool2 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzfs2 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libuutil1 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libnvpair1 = 0.6.5.8 for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzpool.so.2()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzfs_core.so.1()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libzfs.so.2()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libuutil.so.1()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Processing Dependency: libnvpair.so.1()(64bit) for package: zfs-0.6.5.8-1.el7.centos.x86_64
--> Running transaction check
---> Package kmod-zfs.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Processing Dependency: spl-kmod for package: kmod-zfs-0.6.5.8-1.el7.centos.x86_64
---> Package libnvpair1.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package libuutil1.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package libzfs2.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package libzpool2.x86_64 0:0.6.5.8-1.el7.centos will be installed
---> Package spl.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Running transaction check
---> Package kmod-spl.x86_64 0:0.6.5.8-1.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================
 Package                     Arch                    Version                                Repository                 Size
============================================================================================================================
Installing:
 zfs                         x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  334 k
Installing for dependencies:
 kmod-spl                    x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  110 k
 kmod-zfs                    x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  665 k
 libnvpair1                  x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                   35 k
 libuutil1                   x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                   41 k
 libzfs2                     x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  123 k
 libzpool2                   x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                  423 k
 spl                         x86_64                  0.6.5.8-1.el7.centos                   zfs-kmod                   29 k

Transaction Summary
============================================================================================================================
Install  1 Package (+7 Dependent packages)

Total download size: 1.7 M
Installed size: 5.7 M
Is this ok [y/d/N]: y
Downloading packages:
(1/8): kmod-spl-0.6.5.8-1.el7.centos.x86_64.rpm                                                      | 110 kB  00:00:01     
(2/8): libnvpair1-0.6.5.8-1.el7.centos.x86_64.rpm                                                    |  35 kB  00:00:00     
(3/8): libuutil1-0.6.5.8-1.el7.centos.x86_64.rpm                                                     |  41 kB  00:00:00     
(4/8): kmod-zfs-0.6.5.8-1.el7.centos.x86_64.rpm                                                      | 665 kB  00:00:02     
(5/8): libzfs2-0.6.5.8-1.el7.centos.x86_64.rpm                                                       | 123 kB  00:00:00     
(6/8): libzpool2-0.6.5.8-1.el7.centos.x86_64.rpm                                                     | 423 kB  00:00:00     
(7/8): spl-0.6.5.8-1.el7.centos.x86_64.rpm                                                           |  29 kB  00:00:00     
(8/8): zfs-0.6.5.8-1.el7.centos.x86_64.rpm                                                           | 334 kB  00:00:00     
----------------------------------------------------------------------------------------------------------------------------
Total                                                                                       513 kB/s | 1.7 MB  00:00:03     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libuutil1-0.6.5.8-1.el7.centos.x86_64                                                                    1/8 
  Installing : libnvpair1-0.6.5.8-1.el7.centos.x86_64                                                                   2/8 
  Installing : libzpool2-0.6.5.8-1.el7.centos.x86_64                                                                    3/8 
  Installing : kmod-spl-0.6.5.8-1.el7.centos.x86_64                                                                     4/8 
  Installing : spl-0.6.5.8-1.el7.centos.x86_64                                                                          5/8 
  Installing : libzfs2-0.6.5.8-1.el7.centos.x86_64                                                                      6/8 
  Installing : kmod-zfs-0.6.5.8-1.el7.centos.x86_64                                                                     7/8 
  Installing : zfs-0.6.5.8-1.el7.centos.x86_64                                                                          8/8 
  Verifying  : libnvpair1-0.6.5.8-1.el7.centos.x86_64                                                                   1/8 
  Verifying  : libzfs2-0.6.5.8-1.el7.centos.x86_64                                                                      2/8 
  Verifying  : zfs-0.6.5.8-1.el7.centos.x86_64                                                                          3/8 
  Verifying  : spl-0.6.5.8-1.el7.centos.x86_64                                                                          4/8 
  Verifying  : kmod-zfs-0.6.5.8-1.el7.centos.x86_64                                                                     5/8 
  Verifying  : libzpool2-0.6.5.8-1.el7.centos.x86_64                                                                    6/8 
  Verifying  : libuutil1-0.6.5.8-1.el7.centos.x86_64                                                                    7/8 
  Verifying  : kmod-spl-0.6.5.8-1.el7.centos.x86_64                                                                     8/8 

Installed:
  zfs.x86_64 0:0.6.5.8-1.el7.centos                                                                                         

Dependency Installed:
  kmod-spl.x86_64 0:0.6.5.8-1.el7.centos   kmod-zfs.x86_64 0:0.6.5.8-1.el7.centos  libnvpair1.x86_64 0:0.6.5.8-1.el7.centos 
  libuutil1.x86_64 0:0.6.5.8-1.el7.centos  libzfs2.x86_64 0:0.6.5.8-1.el7.centos   libzpool2.x86_64 0:0.6.5.8-1.el7.centos  
  spl.x86_64 0:0.6.5.8-1.el7.centos       

Complete!
[root@centos7 ~]

Be aware that the kernel modules are not loaded by default, so you have to do this on your own:

[root@centos7 ~] /sbin/modprobe zfs
Last login: Wed Sep 28 11:04:21 2016 from 192.168.22.1
[postgres@centos7 ~]$ lsmod | grep zfs
zfs                  2713912  0 
zunicode              331170  1 zfs
zavl                   15236  1 zfs
zcommon                55411  1 zfs
znvpair                93227  2 zfs,zcommon
spl                    92223  3 zfs,zcommon,znvpair
[root@centos7 ~] zfs list
no datasets available

To load the modules automatically at boot, create a file under /etc/modules-load.d:

[root@centos7 ~] echo "zfs" > /etc/modules-load.d/zfs.conf
[root@centos7 ~] cat /etc/modules-load.d/zfs.conf
zfs

So far so good. Let's create a ZFS file system. I have two disks available for playing with ZFS (sdb and sdc):

[root@centos7 ~] ls -la /dev/sd*
brw-rw----. 1 root disk 8,  0 Sep 28 11:14 /dev/sda
brw-rw----. 1 root disk 8,  1 Sep 28 11:14 /dev/sda1
brw-rw----. 1 root disk 8,  2 Sep 28 11:14 /dev/sda2
brw-rw----. 1 root disk 8, 16 Sep 28 11:14 /dev/sdb
brw-rw----. 1 root disk 8, 32 Sep 28 11:14 /dev/sdc

The first thing you have to do is to create a new ZFS pool (I don't care about the warnings here, which is why I use the “-f” option below):

[root@centos7 ~] zpool create pgpool mirror /dev/sdb /dev/sdc
invalid vdev specification
use '-f' to override the following errors:
/dev/sdb does not contain an EFI label but it may contain partition information in the MBR.
/dev/sdc does not contain an EFI label but it may contain partition information in the MBR.
[root@centos7 ~] zpool create pgpool mirror /dev/sdb /dev/sdc -f
[root@centos7 ~] zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pgpool  9.94G    65K  9.94G         -     0%     0%  1.00x  ONLINE  -
[root@centos7 ~] zpool status pgpool
  pool: pgpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	pgpool      ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0

errors: No known data errors

[root@centos7 ~] df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   49G  1.7G   47G   4% /
devtmpfs                 235M     0  235M   0% /dev
tmpfs                    245M     0  245M   0% /dev/shm
tmpfs                    245M  4.3M  241M   2% /run
tmpfs                    245M     0  245M   0% /sys/fs/cgroup
/dev/sda1                497M  291M  206M  59% /boot
tmpfs                     49M     0   49M   0% /run/user/1000
pgpool                   9.7G     0  9.7G   0% /pgpool

What I did here is to create a mirrored pool over my two disks. The OpenZFS wiki has some performance tips for running PostgreSQL on ZFS, as well as for other workloads. Let's go with the recommendations (the 8k recordsize matches PostgreSQL's 8kB block size):

[root@centos7 ~] zfs create pgpool/pgdata -o recordsize=8192
[root@centos7 ~] zfs set logbias=throughput pgpool/pgdata
[root@centos7 ~] zfs set primarycache=all pgpool/pgdata
[root@centos7 ~] zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
pgpool           82K  9.63G  19.5K  /pgpool
pgpool/pgdata    19K  9.63G    19K  /pgpool/pgdata

My new ZFS file system is ready and already mounted, cool. Let's change the permissions and list all the properties:

[root@centos7 ~] chown postgres:postgres /pgpool/pgdata
[root@centos7 ~] zfs get all /pgpool/pgdata
NAME           PROPERTY              VALUE                  SOURCE
pgpool/pgdata  type                  filesystem             -
pgpool/pgdata  creation              Wed Sep 28 11:31 2016  -
pgpool/pgdata  used                  19K                    -
pgpool/pgdata  available             9.63G                  -
pgpool/pgdata  referenced            19K                    -
pgpool/pgdata  compressratio         1.00x                  -
pgpool/pgdata  mounted               yes                    -
pgpool/pgdata  quota                 none                   default
pgpool/pgdata  reservation           none                   default
pgpool/pgdata  recordsize            8K                     local
pgpool/pgdata  mountpoint            /pgpool/pgdata         default
pgpool/pgdata  sharenfs              off                    default
pgpool/pgdata  checksum              on                     default
pgpool/pgdata  compression           off                    default
pgpool/pgdata  atime                 on                     default
pgpool/pgdata  devices               on                     default
pgpool/pgdata  exec                  on                     default
pgpool/pgdata  setuid                on                     default
pgpool/pgdata  readonly              off                    default
pgpool/pgdata  zoned                 off                    default
pgpool/pgdata  snapdir               hidden                 default
pgpool/pgdata  aclinherit            restricted             default
pgpool/pgdata  canmount              on                     default
pgpool/pgdata  xattr                 on                     default
pgpool/pgdata  copies                1                      default
pgpool/pgdata  version               5                      -
pgpool/pgdata  utf8only              off                    -
pgpool/pgdata  normalization         none                   -
pgpool/pgdata  casesensitivity       sensitive              -
pgpool/pgdata  vscan                 off                    default
pgpool/pgdata  nbmand                off                    default
pgpool/pgdata  sharesmb              off                    default
pgpool/pgdata  refquota              none                   default
pgpool/pgdata  refreservation        none                   default
pgpool/pgdata  primarycache          all                    default
pgpool/pgdata  secondarycache        all                    default
pgpool/pgdata  usedbysnapshots       0                      -
pgpool/pgdata  usedbydataset         19K                    -
pgpool/pgdata  usedbychildren        0                      -
pgpool/pgdata  usedbyrefreservation  0                      -
pgpool/pgdata  logbias               throughput             local
pgpool/pgdata  dedup                 off                    default
pgpool/pgdata  mlslabel              none                   default
pgpool/pgdata  sync                  standard               default
pgpool/pgdata  refcompressratio      1.00x                  -
pgpool/pgdata  written               19K                    -
pgpool/pgdata  logicalused           9.50K                  -
pgpool/pgdata  logicalreferenced     9.50K                  -
pgpool/pgdata  filesystem_limit      none                   default
pgpool/pgdata  snapshot_limit        none                   default
pgpool/pgdata  filesystem_count      none                   default
pgpool/pgdata  snapshot_count        none                   default
pgpool/pgdata  snapdev               hidden                 default
pgpool/pgdata  acltype               off                    default
pgpool/pgdata  context               none                   default
pgpool/pgdata  fscontext             none                   default
pgpool/pgdata  defcontext            none                   default
pgpool/pgdata  rootcontext           none                   default
pgpool/pgdata  relatime              on                     temporary
pgpool/pgdata  redundant_metadata    all                    default
pgpool/pgdata  overlay               off                    default

Ready to deploy a PostgreSQL instance on it:

postgres@centos7:/home/postgres/ [pg954] initdb -D /pgpool/pgdata/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /pgpool/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /pgpool/pgdata/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /pgpool/pgdata/ -l logfile start

Startup:

postgres@centos7:/home/postgres/ [pg954] mkdir /pgpool/pgdata/pg_log
postgres@centos7:/home/postgres/ [pg954] sed -i 's/logging_collector = off/logging_collector = on/g' /pgpool/pgdata/postgresql.conf
postgres@centos7:/home/postgres/ [pg954] pg_ctl -D /pgpool/pgdata/ start
postgres@centos7:/home/postgres/ [pg954] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=#

Ready. Let's reboot and check if the ZFS file system is mounted automatically:

postgres@centos7:/home/postgres/ [pg954] df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   49G  1.8G   47G   4% /
devtmpfs                 235M     0  235M   0% /dev
tmpfs                    245M     0  245M   0% /dev/shm
tmpfs                    245M  4.3M  241M   2% /run
tmpfs                    245M     0  245M   0% /sys/fs/cgroup
/dev/sda1                497M  291M  206M  59% /boot
tmpfs                     49M     0   49M   0% /run/user/1000
postgres@centos7:/home/postgres/ [pg954] lsmod | grep zfs
zfs                  2713912  0 
zunicode              331170  1 zfs
zavl                   15236  1 zfs
zcommon                55411  1 zfs
znvpair                93227  2 zfs,zcommon
spl                    92223  3 zfs,zcommon,znvpair

Gone. The kernel modules are loaded but the file system was not mounted. What to do?

[root@centos7 ~] zpool list
no pools available
[root@centos7 ~] zpool import pgpool
[root@centos7 ~] zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pgpool  9.94G  39.3M  9.90G         -     0%     0%  1.00x  ONLINE  -
[root@centos7 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   49G  1.8G   47G   4% /
devtmpfs                 235M     0  235M   0% /dev
tmpfs                    245M     0  245M   0% /dev/shm
tmpfs                    245M  4.3M  241M   2% /run
tmpfs                    245M     0  245M   0% /sys/fs/cgroup
/dev/sda1                497M  291M  206M  59% /boot
tmpfs                     49M     0   49M   0% /run/user/1000
pgpool                   9.6G     0  9.6G   0% /pgpool
pgpool/pgdata            9.7G   39M  9.6G   1% /pgpool/pgdata

Ok, how to auto mount?

[root@centos7 ~] systemctl enable zfs-mount
[root@centos7 ~] systemctl enable zfs-import-cache
[root@centos7 ~] reboot

I am not sure why this is necessary; it should happen automatically. With the two services enabled, zfs-import-cache imports the pools recorded in /etc/zfs/zpool.cache at boot, and zfs-mount then mounts the datasets.

PS: There is an interesting discussion about PostgreSQL on ZFS on the PostgreSQL performance mailing list currently.

 

The post Running PostgreSQL on ZFS on Linux first appeared on the dbi services Blog.

Running PostgreSQL on ZFS on Linux – Fun with snapshots and clones


In the last post we looked at how to get a ZFS file system up and running on a CentOS 7 host and how to enable the auto mount of the ZFS file systems. In this post we’ll look at two of the features ZFS provides: Snapshots and clones.

A ZFS snapshot is a read-only copy of a file system. How can we benefit from that when it comes to PostgreSQL? There are several scenarios where this can be useful. Imagine you are developing an application and want to test the deployment of a new release on top of a previous release. What you probably want is a production-like PostgreSQL instance with lots of data, so that you can test the upgrade path. In addition it would be great if you could revert in seconds and start from scratch, just in case you run into trouble or missed an important point in the upgrade scripts. Using ZFS snapshots you can have all of this. Let's see.

Currently my PostgreSQL instance from the last post does not contain any user data, so let's generate some:

postgres=# create table my_app_table ( a int, b varchar(50) );
CREATE TABLE
postgres=# with aa as 
postgres-# ( select * 
postgres(#     from generate_series (1,1000000) a
postgres(# )
postgres-# insert into my_app_table
postgres-# select aa.a, md5(aa.a::varchar)
postgres-#   from aa;
INSERT 0 1000000

This is the release we want to test our upgrade scripts from, so let's create a snapshot of the current state of our instance:

[root@centos7 ~] zfs snapshot pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  16.6M      -   202M  -

The “@baserelease” part is the name of the snapshot, or, to be precise, everything after the “@” is the snapshot's name.

Are you worried about consistency? This should not be an issue: PostgreSQL fsyncs the WAL, so on startup the instance will simply replay the WAL records that are not yet in the data files and you're fine. And anyway, this is a testing scenario, so as long as you have a consistent starting point you are fine.
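
If you want to shorten that recovery on startup, one option is to issue a checkpoint right before taking the snapshot, so the data files are as current as possible. A sketch (not required for consistency; the function name is made up, the dataset names are the ones from this post, and psql and zfs are assumed to be in the PATH):

```shell
# Checkpoint first, then snapshot: less WAL to replay when the snapshot is rolled back.
snapshot_pgdata() {
    snap=$1                          # e.g. pgpool/pgdata@baserelease
    psql -c "CHECKPOINT;" postgres   # flush dirty buffers to the data files
    zfs snapshot "$snap"             # point-in-time, read-only copy
}
```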

A simple upgrade script could be:

postgres=# alter table my_app_table add column c date;
ALTER TABLE
postgres=# update my_app_table set c = now();
UPDATE 1000000

What happened to the snapshot?

[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  78.3M      -   202M  -

As soon as you modify data the snapshot will grow, no surprise.

So you ran your tests, discovered some things you could improve, and after implementing the improvements you want to start from the same point again. With a snapshot this is quite easy: just revert to it. Of course you'll need to stop your PostgreSQL instance first:

postgres@centos7:/home/postgres/ [PG1] pg_ctl stop -D /pgpool/pgdata/ -m fast
waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
 done
server stopped

As soon as the instance is down the snapshot can be reverted:

[root@centos7 ~] zfs rollback pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease     1K      -   202M  -

When you check the data after you started the instance again it is exactly as it was before:

postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/pgdata/
postgres@centos7:/home/postgres/ [PG1] LOG:  database system was not properly shut down; automatic recovery in progress
postgres@centos7:/home/postgres/ [PG1] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=# \d my_app_table
        Table "public.my_app_table"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(50) | 

Notice the message about the automatic recovery: that is when the WAL is replayed. Now you can just start your upgrade script again, revert in case of issues, start again, revert again, and so on.
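This test/revert cycle can be wrapped in a small helper script. The sketch below is hypothetical: the dataset, snapshot name and data directory are the ones assumed in this post, and with DRY_RUN=1 (the default here) it only prints what it would do:

```shell
#!/bin/sh
# Dry-run sketch of the "revert to baseline" cycle: stop the instance,
# roll back to the snapshot, start again. Nothing is executed with DRY_RUN=1.
DATASET="pgpool/pgdata"
SNAP="baserelease"
PGDATA="/pgpool/pgdata"
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

reset_to_baseline() {
    run pg_ctl stop -D "$PGDATA" -m fast
    run zfs rollback "${DATASET}@${SNAP}"
    run pg_ctl start -D "$PGDATA"
}

reset_to_baseline
```

Set DRY_RUN=0 (and run the zfs step as root) to actually execute the cycle.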

Another use case: rapid cloning of PostgreSQL instances (clones are writable, snapshots are not). How does that work? To create a clone you need a snapshot, as clones depend on snapshots. Another thing to keep in mind is that you cannot delete a snapshot while a clone still sits on top of it. Let's see how it works:

As said, we need a snapshot:

[root@centos7 ~] zfs snapshot pgpool/pgdata@clonebase

On top of this snapshot we can now create a clone:

[root@centos7 ~] zfs create pgpool/clones
[root@centos7 ~] zfs clone pgpool/pgdata@clonebase pgpool/clones/1
[root@centos7 ~] zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
pgpool            170M  9.46G    21K  /pgpool
pgpool/clones    20.5K  9.46G  19.5K  /pgpool/clones
pgpool/clones/1     1K  9.46G   169M  /pgpool/clones/1
pgpool/pgdata     170M  9.46G   169M  /pgpool/pgdata

Using the new clone we can bring up another PostgreSQL instance in seconds, containing the exact data from the source of the clone:

postgres@centos7:/home/postgres/ [PG1] rm /pgpool/clones/1/*.pid
postgres@centos7:/home/postgres/ [PG1] sed -i 's/#port = 5432/port=5433/g' /pgpool/clones/1/postgresql.conf
postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/clones/1/
postgres@centos7:/home/postgres/ [PG1] psql -p 5433 postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=#

Quite cool and easy.
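The steps above can be condensed into a small helper. Everything in this sketch is an assumption taken from this post (snapshot name, mountpoints, port), and the run wrapper only prints the commands with DRY_RUN=1 so nothing is executed by accident:

```shell
#!/bin/sh
# Dry-run sketch: create a writable clone from a snapshot and bring up a
# throw-away PostgreSQL instance on its own port.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

clone_instance() {
    n="$1"     # clone number, e.g. 1
    port="$2"  # port for the cloned instance, e.g. 5433
    run zfs clone "pgpool/pgdata@clonebase" "pgpool/clones/${n}"
    run rm -f "/pgpool/clones/${n}/postmaster.pid"
    run sed -i "s/#port = 5432/port=${port}/g" "/pgpool/clones/${n}/postgresql.conf"
    run pg_ctl start -D "/pgpool/clones/${n}"
}

clone_instance 1 5433
```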

Conclusion: I am not sure I'd use ZFS for production databases on Linux because I have not tested it enough. But for development and testing purposes there are quite a few benefits such as snapshots and cloning. This can simplify your processes a lot. You could even use snapshots and clones as a basis for your backups, although I'd prefer Barman or BART.

PS: To clean up:

[root@centos7 ~] zfs destroy pgpool/clones/1
[root@centos7 ~] zfs destroy pgpool/clones
 

The article Running PostgreSQL on ZFS on Linux – Fun with snapshots and clones first appeared on the dbi services Blog.


Running PostgreSQL on ZFS on Linux – Compression


In the last posts in this little series we looked at how to get a ZFS file system up and running on a CentOS 7 host and how snapshots and clones can be used to simplify processes such as testing and cloning PostgreSQL instances. In this post we'll look at another feature of ZFS: compression.

The current status of my ZFS file systems is:

[root@centos7 ~] zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
pgpool          170M  9.46G  20.5K  /pgpool
pgpool/pgdata   169M  9.46G   169M  /pgpool/pgdata

To check if compression is enabled:

[root@centos7 ~] zfs get compression pgpool/pgdata
NAME           PROPERTY     VALUE     SOURCE
pgpool/pgdata  compression  off       default

Let's create another file system and enable compression for it:

[root@centos7 ~] zfs create pgpool/pgdatacompressed
[root@centos7 ~] zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pgpool                    170M  9.46G  20.5K  /pgpool
pgpool/pgdata             169M  9.46G   169M  /pgpool/pgdata
pgpool/pgdatacompressed    19K  9.46G    19K  /pgpool/pgdatacompressed
[root@centos7 ~] zfs get compression pgpool/pgdatacompressed
NAME                     PROPERTY     VALUE     SOURCE
pgpool/pgdatacompressed  compression  off       default
[root@centos7 ~] zfs set compression=on pgpool/pgdatacompressed
[root@centos7 ~] zfs get compression pgpool/pgdatacompressed
NAME                     PROPERTY     VALUE     SOURCE
pgpool/pgdatacompressed  compression  on        local

You can ask zfs to report the compression ratio for a file system:

[root@centos7 ~] zfs get compressratio pgpool/pgdatacompressed
NAME                     PROPERTY       VALUE  SOURCE
pgpool/pgdatacompressed  compressratio  1.00x  -
[root@centos7 ~] chown postgres:postgres /pgpool/pgdatacompressed/

The ratio is 1.00x, which is because we do not have any data yet. Let's copy the PostgreSQL cluster from the uncompressed file system into our new compressed file system:

postgres@centos7:/home/postgres/ [PG1] cp -pr /pgpool/pgdata/* /pgpool/pgdatacompressed/
postgres@centos7:/home/postgres/ [PG1] ls -l /pgpool/pgdatacompressed/
total 30
drwx------. 6 postgres postgres     6 Sep 29 14:00 base
drwx------. 2 postgres postgres    54 Sep 29 14:27 global
drwx------. 2 postgres postgres     3 Sep 28 15:11 pg_clog
drwx------. 2 postgres postgres     2 Sep 28 15:11 pg_commit_ts
drwx------. 2 postgres postgres     2 Sep 28 15:11 pg_dynshmem
-rw-------. 1 postgres postgres  4468 Sep 28 15:11 pg_hba.conf
-rw-------. 1 postgres postgres  1636 Sep 28 15:11 pg_ident.conf
drwxr-xr-x. 2 postgres postgres     2 Sep 28 15:11 pg_log
drwx------. 4 postgres postgres     4 Sep 28 15:11 pg_logical
drwx------. 4 postgres postgres     4 Sep 28 15:11 pg_multixact
drwx------. 2 postgres postgres     3 Sep 29 14:27 pg_notify
drwx------. 2 postgres postgres     2 Sep 28 15:11 pg_replslot
drwx------. 2 postgres postgres     2 Sep 28 15:11 pg_serial
drwx------. 2 postgres postgres     2 Sep 28 15:11 pg_snapshots
drwx------. 2 postgres postgres     5 Sep 29 14:46 pg_stat
drwx------. 2 postgres postgres     2 Sep 29 14:46 pg_stat_tmp
drwx------. 2 postgres postgres     3 Sep 28 15:11 pg_subtrans
drwx------. 2 postgres postgres     2 Sep 28 15:11 pg_tblspc
drwx------. 2 postgres postgres     2 Sep 28 15:11 pg_twophase
-rw-------. 1 postgres postgres     4 Sep 28 15:11 PG_VERSION
drwx------. 3 postgres postgres     8 Sep 29 14:26 pg_xlog
-rw-------. 1 postgres postgres    88 Sep 28 15:11 postgresql.auto.conf
-rw-------. 1 postgres postgres 21270 Sep 28 15:11 postgresql.conf
-rw-------. 1 postgres postgres    69 Sep 29 14:27 postmaster.opts

We should already see a difference, shouldn't we?

postgres@centos7:/home/postgres/ [PG1] df -h | grep pgdata
pgpool/pgdata            9.6G  170M  9.4G   2% /pgpool/pgdata
pgpool/pgdatacompressed  9.5G   82M  9.4G   1% /pgpool/pgdatacompressed

Not bad, less than half the size. We should now see a compression ratio other than 1:

[root@centos7 ~] zfs get compressratio pgpool/pgdatacompressed
NAME                     PROPERTY       VALUE  SOURCE
pgpool/pgdatacompressed  compressratio  1.93x  -

Let's generate some data in our two PostgreSQL instances and check the time it takes as well as the size of the file systems afterwards. As in the last post, the second instance just gets a different port; everything else is identical:

postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/pgdata
postgres@centos7:/home/postgres/ [PG1] sed -i 's/#port = 5432/port=5433/g' /pgpool/pgdatacompressed/postgresql.conf
postgres@centos7:/home/postgres/ [PG1] FATAL:  data directory "/pgpool/pgdatacompressed" has group or world access
postgres@centos7:/home/postgres/ [PG1] chmod o-rwx,g-rwx /pgpool/pgdatacompressed/
postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/pgdatacompressed/

This is the script to generate some data:

\timing
\c postgres
drop database if exists dataload;
create database dataload;
\c dataload
create table dataload ( a bigint
                      , b varchar(100)
                      , c timestamp
                      );
with 
  data_generator_num as
     ( select *
         from generate_series ( 1
                              , 1000000 ) nums
     ) 
insert into dataload
select data_generator_num.nums
     , md5(data_generator_num.nums::varchar)
     , current_date+data_generator_num.nums
 from data_generator_num;

I will run the script twice on each instance. For the instance on the uncompressed file system:

-- FIRST RUN
postgres=# \i generate_data.sql
Timing is on.
You are now connected to database "postgres" as user "postgres".
DROP DATABASE
Time: 720.626 ms
CREATE DATABASE
Time: 4631.212 ms
You are now connected to database "dataload" as user "postgres".
CREATE TABLE
Time: 6.517 ms
INSERT 0 1000000
Time: 28668.343 ms
-- SECOND RUN
dataload=# \i generate_data.sql
Timing is on.
You are now connected to database "postgres" as user "postgres".
DROP DATABASE
Time: 774.061 ms
CREATE DATABASE
Time: 2721.169 ms
You are now connected to database "dataload" as user "postgres".
CREATE TABLE
Time: 7.374 ms
INSERT 0 1000000
Time: 32168.043 ms
dataload=# 

For the instance on the compressed file system:

-- FIRST RUN
postgres=# \i generate_data.sql
Timing is on.
You are now connected to database "postgres" as user "postgres".
psql:generate_data.sql:3: NOTICE:  database "dataload" does not exist, skipping
DROP DATABASE
Time: 0.850 ms
CREATE DATABASE
Time: 4281.965 ms
You are now connected to database "dataload" as user "postgres".
CREATE TABLE
Time: 5.120 ms
INSERT 0 1000000
Time: 30606.966 ms
-- SECOND RUN
dataload=# \i generate_data.sql
Timing is on.
You are now connected to database "postgres" as user "postgres".
DROP DATABASE
Time: 2359.120 ms
CREATE DATABASE
Time: 3267.151 ms
You are now connected to database "dataload" as user "postgres".
CREATE TABLE
Time: 8.665 ms
INSERT 0 1000000
Time: 23474.290 ms
dataload=# 

Although the numbers are quite bad (5 seconds to create an empty table), the fastest load was the second one on the compressed file system. So at least it is not slower. I have to admit that I did not do any tuning on the file systems and my VM does not have much memory (512MB), which is far too little if you work with ZFS (ZFS is memory hungry and wants at least 1GB).

So, what about the size of the data? First let's check what PostgreSQL is telling us:

-- instance on the uncompressed file system
dataload=# select * from pg_size_pretty ( pg_relation_size ( 'dataload' ));
 pg_size_pretty 
----------------
 81 MB
(1 row)
-- instance on the compressed file system
dataload=# select * from pg_size_pretty ( pg_relation_size ( 'dataload' ));
 pg_size_pretty 
----------------
 81 MB
(1 row)

Exactly the same, which is not surprising as PostgreSQL sees the files as if they were uncompressed (please be aware that the my_app_table from the last post is still there, which is why the file system usage in total is larger than you might expect). It is quite interesting how the size is reported on the compressed file system, depending on how you ask.

You can use oid2name to map the file name to a table name:

postgres@centos7:/pgpool/pgdatacompressed/base/24580/ [PG1] oid2name -d dataload -p 5433 -f 24581
From database "dataload":
  Filenode  Table Name
----------------------
     24581    dataload

File 24581 is the table we generated. When you ask for the size by using “du” you get:

postgres@centos7:/pgpool/pgdatacompressed/base/24580/ [PG1] du -h 24581
48M	24581

This is the compressed size. When you use “ls” you get the uncompressed size:

postgres@centos7:/pgpool/pgdatacompressed/base/24580/ [PG1] ls -lh 24581
-rw-------. 1 postgres postgres 81M Sep 30 10:43 24581

What does “df” tell us?

postgres@centos7:/home/postgres/ [PG1] df -h | grep pgdata
pgpool/pgdata            9.5G  437M  9.1G   5% /pgpool/pgdata
pgpool/pgdatacompressed  9.2G  165M  9.1G   2% /pgpool/pgdatacompressed

Not bad: 437M of uncompressed data is 165M compressed. So, if you are short on space this really can be an option.
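The ls-versus-du difference can also be checked per file with stat: the apparent size is what PostgreSQL (and ls) sees, while the allocated blocks are what actually sits on disk. A sparse file demonstrates the same effect on any Linux file system, so the demo below does not need ZFS:

```shell
#!/bin/sh
# Compare a file's apparent size with the space actually allocated on disk.
# On a compressed ZFS dataset (or for a sparse file) the two differ.
file_usage() {
    apparent=$(stat -c %s "$1")            # logical size in bytes
    ondisk=$(( $(stat -c %b "$1") * 512 )) # allocated 512-byte blocks
    echo "$1: apparent=${apparent} ondisk=${ondisk}"
}

# Demo with a sparse file: 1 MiB apparent, (almost) nothing allocated.
truncate -s 1M /tmp/sparse_demo
file_usage /tmp/sparse_demo
rm -f /tmp/sparse_demo
```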

 

The article Running PostgreSQL on ZFS on Linux – Compression first appeared on the dbi services Blog.

How to patch Postgres Plus Advanced Server


As with any other software, there comes the time when you need to patch your Postgres Plus Advanced Server instances. Is that different from patching community PostgreSQL? Yes and no :) The difference is that you need a subscription to access the EDB Customer Portal to be able to download the patch. This is pretty much the same as with My Oracle Support, where you need a customer support identifier mapped to your account to be able to download patches, to have access to the knowledge base, and to open cases in case you run into trouble you are not able to solve yourself.

Assuming you have access to the EDB customer portal and you downloaded the patch for your base release, the procedure is pretty simple. For this little demo I am running the 9.5.0.5 base release of Postgres Plus Advanced Server:

postgres@centos7:/home/postgres/ [PG2] psql
psql.bin (9.5.0.5)
Type "help" for help.

postgres=# select version();
                                                   version                                                    
--------------------------------------------------------------------------------------------------------------
 EnterpriseDB 9.5.0.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55), 64-bit
(1 row)

postgres=# 

This is the release you can download from the EDB website for testing. The patch I downloaded from the portal is this one:

postgres@centos7:/home/postgres/ [PG2] cd /u01/app/postgres/software/
postgres@centos7:/u01/app/postgres/software/ [PG2] ls -la
total 27216
drwxrwxr-x. 2 postgres postgres       51 Oct  5 10:16 .
drwxrwxr-x. 5 postgres postgres       47 Jun 15 13:10 ..
-rw-rw-r--. 1 postgres postgres 27868299 Oct  5 10:16 postgresplusas-9.5.4.9-1-linux-x64.run
postgres@centos7:/u01/app/postgres/software/ [PG2] chmod +x postgresplusas-9.5.4.9-1-linux-x64.run 

This should patch my base release to the currently latest release, which is 9.5.4.9-1. How does it work? Let's execute the binary and see what happens:

postgres@centos7:/u01/app/postgres/software/ [PG2] ./postgresplusas-9.5.4.9-1-linux-x64.run 
Language Selection

Please select the installation language
[1] English - English
[2] Japanese - 日本語
[3] Simplified Chinese - 简体中文
[4] Traditional Chinese - 繁体中文
[5] Korean - 한국어
Please choose an option [1] : 1

Error: There has been an error.
This installer requires root privileges. Please become superuser before 
executing the installer
Press [Enter] to continue:

A no-go in most cases. Running installers as root is not a good practice and should be avoided whenever possible. But, luckily, as with the base release installer, the patch itself can be installed in “extract only” mode:

postgres@centos7:/u01/app/postgres/software/ [PG2] ./postgresplusas-9.5.4.9-1-linux-x64.run --extract-only yes --prefix /u01/app/postgres/product/95edb/db_5/9.5AS/
Language Selection

Please select the installation language
[1] English - English
[2] Japanese - 日本語
[3] Simplified Chinese - 简体中文
[4] Traditional Chinese - 繁体中文
[5] Korean - 한국어
Please choose an option [1] : 1
----------------------------------------------------------------------------
Welcome to the Postgres Plus Advanced Server Setup Wizard.

----------------------------------------------------------------------------
Please specify the directory where Postgres Plus Advanced Server will be 
installed.

Installation Directory [/u01/app/postgres/product/95edb/db_5/9.5AS]: 

----------------------------------------------------------------------------
Setup is now ready to begin installing Postgres Plus Advanced Server on your 
computer.

Do you want to continue? [Y/n]: y

----------------------------------------------------------------------------
Please wait while Setup installs Postgres Plus Advanced Server on your computer.

 Installing Database Server
 0% ______________ 50% ______________ 100%
 #########################################

----------------------------------------------------------------------------
Setup has finished installing Postgres Plus Advanced Server on your computer.

postgres@centos7:/u01/app/postgres/software/ [PG2] 

Looks good, but you should never do this while your PostgreSQL instance is running, otherwise you'll get this:

2016-10-05 10:22:43 CEST LOG:  server process (PID 4359) was terminated by signal 11: Segmentation fault
2016-10-05 10:22:43 CEST LOG:  terminating any other active server processes
2016-10-05 10:22:43 CEST WARNING:  terminating connection because of crash of another server process
2016-10-05 10:22:43 CEST DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2016-10-05 10:22:43 CEST HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2016-10-05 10:22:43 CEST LOG:  statistics collector process (PID 3324) was terminated by signal 11: Segmentation fault
2016-10-05 10:22:43 CEST LOG:  all server processes terminated; reinitializing
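A minimal guard in a patch script can prevent exactly this mistake: refuse to extract the new binaries while a postmaster.pid still exists in the data directory. This is a hypothetical sketch, not part of the installer, and note that a stale pid file left behind by a crash will also trigger the check:

```shell
#!/bin/sh
# Refuse to patch while the instance still appears to be running.
# Heuristic only: a stale postmaster.pid after a crash also blocks the patch.
ensure_stopped() {
    if [ -f "$1/postmaster.pid" ]; then
        echo "instance in $1 appears to be running, stop it first" >&2
        return 1
    fi
    return 0
}

# Usage before running the installer:
#   ensure_stopped /pgpool/pgdata || exit 1
#   ./postgresplusas-9.5.4.9-1-linux-x64.run --extract-only yes --prefix ...
```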

Always shut down before you begin to patch. In my case I just started the instance again and I am on the current release:

postgres@centos7:/u01/app/postgres/software/ [PG2] sqh
psql.bin (9.5.4.9)
Type "help" for help.

postgres=# select version();
                                                   version                                                    
--------------------------------------------------------------------------------------------------------------
 EnterpriseDB 9.5.4.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55), 64-bit
(1 row)

postgres=# 

Simple and fast. If you prepare this well, your downtime will be around one minute. I can already hear the question: Can I switch over to my standby, apply the patch on the master, switch back and then proceed on the standby to reduce the downtime even more? This will be a topic for another post.

PS: Of course you can also prepare a brand new home for the patched binaries, then shut down your instance, switch to the new binaries and start again from there.
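A sketch of that variant could look like the following. All paths, including the symlink idea itself, are assumptions modelled on this post; the commands are only printed, not executed:

```shell
#!/bin/sh
# Dry-run sketch of the "new binary home" approach: extract the patched
# release next to the old one and flip a symlink during the stop/start window.
NEW="/u01/app/postgres/product/95edb/db_5/9.5.4.9"
LINK="/u01/app/postgres/product/95edb/db_5/current"
PGDATA="/pgpool/pgdata"

run() { echo "would run: $*"; }   # print only in this sketch

run ./postgresplusas-9.5.4.9-1-linux-x64.run --extract-only yes --prefix "$NEW"
run pg_ctl stop -D "$PGDATA" -m fast
run ln -sfn "$NEW" "$LINK"        # atomically point "current" to the new home
run pg_ctl start -D "$PGDATA"
```

The downtime window is then just the stop, the symlink flip, and the start.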

PS2: Just in case you are not aware: as we have established a partnership with EnterpriseDB, you can obtain EDB subscriptions easily from us. Of course we would review what you really need beforehand. It is not always required to go for the Postgres Plus version: community PostgreSQL works very well in most cases and can be backed by an EDB subscription as well, if required.

 

The article How to patch Postgres Plus Advanced Server first appeared on the dbi services Blog.

How to patch Postgres Plus Advanced Server in a Standby configuration


In the last post we looked at how you can patch a Postgres Plus Advanced Server. Wouldn't it be nice, in a standby configuration, to patch the standby first without touching the master, then do a controlled switchover and finally patch the old master? In a configuration with EDB Failover Manager, the only downtime would be the relocation of the VIP from one node to another (if you use a VIP). Without a VIP but with pgpool-II the downtime is even less. Let's see if it works, starting from my usual EDB Failover Manager configuration.

This is the current status of my failover cluster:

[root@edbbart efm-2.1]# /usr/efm-2.1/bin/efm cluster-status efm
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.243       UP     UP        
	Standby     192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	192.168.22.245

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.243       0/4A000140       
	Standby     192.168.22.245       0/4A000140       

	Standby database(s) in sync with master. It is safe to promote.

All is fine: I have one master, one standby and one witness. Going straight forward, let's shut down the standby (please notice that I have disabled auto failover).

Shut down the standby database:

postgres@edbppasstandby:/home/postgres/ [pg950] pg_ctl -D /u02/pgdata/PGSITE2 stop -m fast
waiting for server to shut down.... done
server stopped

What happened to my cluster?

[root@edbbart efm-2.1]# /usr/efm-2.1/bin/efm cluster-status efm
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.243       UP     UP        
	Standby     192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	192.168.22.245

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.243       0/4A000140       
	Unknown     192.168.22.245       UNKNOWN          Connection to 192.168.22.245:4445 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

	One or more standby databases are not in sync with the master database.
[root@edbbart efm-2.1]# 

Not surprisingly, EFM complains that the standby is not reachable anymore. That's fine. Let's patch the standby:

postgres@edbppasstandby:/u01/app/postgres/software/ [pg950] chmod +x postgresplusas-9.5.4.9-1-linux-x64.run
postgres@edbppasstandby:/u01/app/postgres/software/ [pg950] ./postgresplusas-9.5.4.9-1-linux-x64.run --extract-only true --prefix /u01/app/postgres/product/95/db_1/9.5AS/
Language Selection

Please select the installation language
[1] English - English
[2] Japanese - 日本語
[3] Simplified Chinese - 简体中文
[4] Traditional Chinese - 繁体中文
[5] Korean - 한국어
Please choose an option [1] : 1
----------------------------------------------------------------------------
Welcome to the Postgres Plus Advanced Server Setup Wizard.

----------------------------------------------------------------------------
Please specify the directory where Postgres Plus Advanced Server will be 
installed.

Installation Directory [/u01/app/postgres/product/95/db_1/9.5AS]: 

----------------------------------------------------------------------------
Setup is now ready to begin installing Postgres Plus Advanced Server on your 
computer.

Do you want to continue? [Y/n]: Y

----------------------------------------------------------------------------
Please wait while Setup installs Postgres Plus Advanced Server on your computer.

 Installing Database Server
 0% ______________ 50% ______________ 100%
 #########################################

----------------------------------------------------------------------------
Setup has finished installing Postgres Plus Advanced Server on your computer.

postgres@edbppasstandby:/u01/app/postgres/software/ [pg950] 

… bring it up again:

postgres@edbppasstandby:/home/postgres/ [pg950] pg_ctl -D /u02/pgdata/PGSITE2 start
server starting

… checking the PostgreSQL log file, all is fine; streaming restarted:

2016-10-05 11:35:25.745 GMT - 2 - 4984 -  - @ LOG:  entering standby mode
2016-10-05 11:35:25.751 GMT - 3 - 4984 -  - @ LOG:  consistent recovery state reached at 0/4A000108
2016-10-05 11:35:25.751 GMT - 4 - 4984 -  - @ LOG:  redo starts at 0/4A000108
2016-10-05 11:35:25.751 GMT - 5 - 4984 -  - @ LOG:  invalid record length at 0/4A000140
2016-10-05 11:35:25.751 GMT - 4 - 4982 -  - @ LOG:  database system is ready to accept read only connections
2016-10-05 11:35:25.755 GMT - 1 - 4988 -  - @ LOG:  started streaming WAL from primary at 0/4A000000 on timeline 8

What is the status of EFM?

postgres@edbppasstandby:/home/postgres/ [pg950] efmstat
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.243       UP     UP        
	Idle        192.168.22.245       UP     UNKNOWN   

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	(List is empty.)

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.243       0/4A000140       

	No standby databases were found.

Idle Node Status (idle nodes ignored in XLog location comparisons):

	Address              XLog Loc         Info
	--------------------------------------------------------------
	192.168.22.245       0/4A000140       DB is in recovery.

Status “Idle” for the standby, which is fine; just resume:

postgres@edbppasstandby:/home/postgres/ [pg950] sudo /usr/efm-2.1/bin/efm resume efm
Resume command successful on local agent.
postgres@edbppasstandby:/home/postgres/ [pg950] efmstat
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.245       UP     UP        
	Master      192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	192.168.22.245

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.243       0/4A000140       
	Standby     192.168.22.245       0/4A000140       

	Standby database(s) in sync with master. It is safe to promote.

… and everything is back as it should be. Time to switch over:

postgres@edbppasstandby:/home/postgres/ [PGSITE2] sudo /usr/efm-2.1/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

The master and the standby should have switched their roles:

postgres@edbppasstandby:/home/postgres/ [PGSITE2] efmstat
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/4B0001A8       
	Standby     192.168.22.243       0/4B0001A8       

	Standby database(s) in sync with master. It is safe to promote.
postgres@edbppasstandby:/home/postgres/ [PGSITE2] 

Same procedure again, stop the standby:

postgres@edbppas:/home/postgres/ [PGSITE1] pg_ctl -D /u02/pgdata/PGSITE1 stop -m fast
waiting for server to shut down.... done
server stopped
postgres@edbppasstandby:/home/postgres/ [PGSITE2] efmstat
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     192.168.22.243       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/4B0001A8       
	Unknown     192.168.22.243       UNKNOWN          Connection to 192.168.22.243:4445 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

	One or more standby databases are not in sync with the master database.

Apply the patch:

postgres@edbppas:/u01/app/postgres/software/ [PGSITE1] ./postgresplusas-9.5.4.9-1-linux-x64.run --extract-only true --prefix /u01/app/postgres/product/95/db_1/9.5AS/
Language Selection

Please select the installation language
[1] English - English
[2] Japanese - 日本語
[3] Simplified Chinese - 简体中文
[4] Traditional Chinese - 繁体中文
[5] Korean - 한국어
Please choose an option [1] : 1
----------------------------------------------------------------------------
Welcome to the Postgres Plus Advanced Server Setup Wizard.

----------------------------------------------------------------------------
Please specify the directory where Postgres Plus Advanced Server will be 
installed.

Installation Directory [/u01/app/postgres/product/95/db_1/9.5AS]: 

----------------------------------------------------------------------------
Setup is now ready to begin installing Postgres Plus Advanced Server on your 
computer.

Do you want to continue? [Y/n]: y

----------------------------------------------------------------------------
Please wait while Setup installs Postgres Plus Advanced Server on your computer.

 Installing Database Server
 0% ______________ 50% ______________ 100%
 #########################################

----------------------------------------------------------------------------
Setup has finished installing Postgres Plus Advanced Server on your computer.

Start it up again:

postgres@edbppas:/u01/app/postgres/software/ [PGSITE1] pg_ctl -D /u02/pgdata/PGSITE1 start
server starting

Streaming restarted:

2016-10-05 11:45:36.807 GMT - 2 - 4883 -  - @ LOG:  entering standby mode
2016-10-05 11:45:36.810 GMT - 3 - 4883 -  - @ LOG:  consistent recovery state reached at 0/4B0000C8
2016-10-05 11:45:36.810 GMT - 4 - 4883 -  - @ LOG:  redo starts at 0/4B0000C8
2016-10-05 11:45:36.810 GMT - 5 - 4883 -  - @ LOG:  invalid record length at 0/4B0001A8
2016-10-05 11:45:36.810 GMT - 4 - 4881 -  - @ LOG:  database system is ready to accept read only connections
2016-10-05 11:45:36.815 GMT - 1 - 4887 -  - @ LOG:  started streaming WAL from primary at 0/4B000000 on timeline 9

Same status “Idle” as before:

postgres@edbppasstandby:/home/postgres/ [PGSITE2] efmstat
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Idle        192.168.22.243       UP     UNKNOWN   
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	(List is empty.)

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/4B0001A8       

	No standby databases were found.

Idle Node Status (idle nodes ignored in XLog location comparisons):

	Address              XLog Loc         Info
	--------------------------------------------------------------
	192.168.22.243       0/4B0001A8       DB is in recovery.

Resume:

postgres@edbppas:/home/postgres/ [PGSITE1] sudo /usr/efm-2.1/bin/efm resume efm
Resume command successful on local agent.

Fully back:

postgres@edbppasstandby:/home/postgres/ [PGSITE2] efmstat
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     192.168.22.243       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.245

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/4B0001A8       
	Standby     192.168.22.243       0/4B0001A8       

	Standby database(s) in sync with master. It is safe to promote.

Works like a charm. The organizational overhead is far bigger than the technical work itself: technically, this is a task of a few minutes.

 

The article How to patch Postgres Plus Advanced Server in a Standby configuration first appeared on the dbi services Blog.

Momentum16 – Day 1 – Feelings


This post is about the first day at Momentum 2016.

Strictly speaking I should call it the second one, as we already started yesterday with a partner session where we got some news. One piece of news was that EMC had more than 400 partners a few years ago; today this has been reduced to fewer than 80, and dbi services is still one of them. For us this is good news, and I hope it is for our current and future customers too.

 

Today's sessions, apart from the keynote held by Rohit Ghai, were more related to customer experience, solutions ECD partners can propose, business presentations, and descriptions of particular challenges companies had to face and how they dealt with them, without going into technical details.
As I am more on the technical side, this was more for my general culture, I would say.

 

In the keynote we learned that Documentum 7.3 will increase cost savings. For instance, PostgreSQL can be used with Documentum 7.3, the upgrade will be faster, and so on. Since time is money…
PostgreSQL is an interesting subject, as dbi services is also active in this database. I will have to work with our DB experts to see what we have to test and how, and to find out the pros and cons of using PostgreSQL from a technical point of view, as the license cost will certainly decrease. I planned, no, I have, to go to tomorrow's technical session about “What’s new in Documentum 7.3″.

 

I also took the opportunity to talk with some Dell EMC partners to learn more about the solutions they propose. For instance, I was able to talk with the Neotys people to understand what their product can bring us compared to JMeter or LoadRunner, which we or our customers are using for load tests. Having a better view of the possible solutions in this area can help me when customers with specific requirements need help choosing the best tool.
I also had a chat with Aerow, and they showed me how ARender4Documentum works and how fast “big” documents can be displayed in their HTML5 viewer. So even if the first day cannot be viewed as a technical day, I actually learned a lot.
What I also find cool at this kind of event is that you can meet people, for instance at lunch time around a table, and start talking about your/their experiences, concerns, solutions, and so on. Today, for example, we had a talk about the cloud (private, public) and what it means when you have a validated system.

 

So let’s see what will happen tomorrow, the day when more technical information will be shared.

Note: read Morgan’s blog, where you can find the technical stuff. I felt Morgan was frustrated today as he could not “eat” any technical food :-)

 

The article Momentum16 – Day 1 – Feelings first appeared on the dbi services Blog.

Can I do it with PostgreSQL? – 1 – Restore points


When discussing PostgreSQL with customers we often hear that there are things they can do in other databases that they cannot do in PostgreSQL. Most of the time this is not true: you actually can do it in PostgreSQL, maybe just not in exactly the same way, which is not surprising as PostgreSQL implements features differently than other vendors do.

To start this series we’ll talk about restore points. Of course you can create restore points in PostgreSQL and then restore up to such a point in case you need to (e.g. after a failed schema or application upgrade, or just for testing purposes). Let’s go…

We’ll use the latest version of PostgreSQL, which currently is 9.6.1:

postgres@pgbox:/home/postgres/ [PG961] sqh
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 47.119 ms
(postgres@[local]:5439) [postgres] > 

When we want to do point in time recovery we need to set up archiving. Without going into the details (as this is out of scope here) these are the parameters which need to be adjusted (if not already done):

(postgres@[local]:5439) [postgres] > alter system set wal_level = 'replica';
ALTER SYSTEM
Time: 28.056 ms
(postgres@[local]:5439) [postgres] > alter system set archive_command='test ! -f /u90/pgdata/PG961/%f && cp %p /u90/pgdata/PG961/%f'; 
ALTER SYSTEM
Time: 20.925 ms
(postgres@[local]:5439) [postgres] > alter system set archive_mode ='on';
ALTER SYSTEM
Time: 5.307 ms
(postgres@[local]:5439) [postgres] > select name,context from pg_settings where name in ('archive_mode','archive_command','wal_level');
      name       |  context   
-----------------+------------
 archive_command | sighup
 archive_mode    | postmaster
 wal_level       | postmaster
(3 rows)

Time: 1.460 ms

Be sure to restart your instance before you continue: changing archive_mode and wal_level cannot be done online. Once you have restarted, make sure that your archive_command really succeeds:

(postgres@[local]:5439) [postgres] > create database test1;
CREATE DATABASE
Time: 1705.539 ms
(postgres@[local]:5439) [postgres] > drop database test1;
DROP DATABASE
Time: 107.283 ms
(postgres@[local]:5439) [restore] > select pg_switch_xlog();
 pg_switch_xlog 
----------------
 0/22001798
(1 row)

Time: 214.216 ms
(postgres@[local]:5439) [postgres] > \! ls -l /u90/pgdata/PG961/
total 16384
-rw-------. 1 postgres postgres 16777216 Nov 24 17:34 000000020000000000000022

If you cannot see an archived WAL segment in the last step, something went wrong. The next thing you need for point in time recovery with PostgreSQL is a base backup:

postgres@pgbox:/u02/pgdata/PG961/ [PG961] mkdir /u90/pgdata/PG961/basebackups
postgres@pgbox:/u02/pgdata/PG961/ [PG961] pg_basebackup -x -D /u90/pgdata/PG961/basebackups/
postgres@pgbox:/u02/pgdata/PG961/ [PG961] ls /u90/pgdata/PG961/basebackups/
backup_label  pg_commit_ts   pg_log        pg_replslot   pg_stat_tmp  PG_VERSION
base          pg_dynshmem    pg_logical    pg_serial     pg_subtrans  pg_xlog
global        pg_hba.conf    pg_multixact  pg_snapshots  pg_tblspc    postgresql.auto.conf
pg_clog       pg_ident.conf  pg_notify     pg_stat       pg_twophase  postgresql.conf

Fine. Let’s generate some test data with this simple script:

(postgres@[local]:5439) [postgres] > \! cat a.sql
\c postgres
drop database if exists restore;
create database restore;
\c restore
create table t1 ( a int );
insert into t1 (a)
       values (generate_series(1,1000000));
select count(*) from t1;
\d t1

When you run this you’ll get a table (t1) containing 1 million rows:

(postgres@[local]:5439) [postgres] > \i a.sql
You are now connected to database "postgres" as user "postgres".
DROP DATABASE
Time: 114.000 ms
CREATE DATABASE
Time: 1033.245 ms
You are now connected to database "restore" as user "postgres".
CREATE TABLE
Time: 5.917 ms
INSERT 0 1000000
Time: 2226.599 ms
  count  
---------
 1000000
(1 row)

Time: 65.864 ms
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

Ok, fine. Now we are ready for testing restore points. Let’s say you want to do some modifications to your table and, to be on the safe side, you want to create a restore point beforehand. No problem:

(postgres@[local]:5439) [postgres] > select pg_create_restore_point('RP1');
 pg_create_restore_point 
-------------------------
 0/28D50EF8
(1 row)

Time: 0.825 ms

Quite easy and fast. Now let’s play with our table:

(postgres@[local]:5439) [restore] > select count(*) from t1;
  count  
---------
 1000010
(1 row)

Time: 66.214 ms
(postgres@[local]:5439) [restore] > \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5439) [restore] > alter table t1 add column b varchar(10);
ALTER TABLE
Time: 1.810 ms
(postgres@[local]:5439) [restore] > update t1 set b='b';
UPDATE 1000010
Time: 11004.972 ms
(postgres@[local]:5439) [restore] > drop table t1;
DROP TABLE
Time: 238.329 ms

Oops, the table is gone. How can we go back to the restore point created above? Quite easy:

Shut down your instance and copy back the base backup:

postgres@pgbox:/u02/pgdata/PG961/ [PG961] rm -rf pg_xlog
postgres@pgbox:/u02/pgdata/PG961/ [PG961] cp -pr /u90/pgdata/PG961/basebackups/* $PGDATA
cp: cannot overwrite non-directory ‘/u02/pgdata/PG961/pg_xlog’ with directory ‘/u90/pgdata/PG961/basebackups/pg_xlog’
postgres@pgbox:/u02/pgdata/PG961/ [PG961] ln -s /u03/pgdata/PG961/ pg_xlog

Then create a recovery.conf file (for telling PostgreSQL to go into recovery mode when it comes up) and specify the restore point you created above:

postgres@pgbox:/home/postgres/ [PG961] echo "restore_command = 'cp /u90/pgdata/PG961/%f %p'
> recovery_target_name = 'RP1'" > $PGDATA/recovery.conf
postgres@pgbox:/home/postgres/ [PG961] cat $PGDATA/recovery.conf
restore_command = 'cp /u90/pgdata/PG961/%f %p'
recovery_target_name = 'RP1'

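As a side note: when the named restore point is reached, recovery pauses by default if hot_standby is enabled. If you want the instance to open automatically instead, a sketch of a recovery.conf using recovery_target_action (available from 9.5 on) could look like this:

```ini
restore_command = 'cp /u90/pgdata/PG961/%f %p'
recovery_target_name = 'RP1'
# open the instance automatically once the restore point is reached
recovery_target_action = 'promote'
```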
Start the instance and check the log file:

LOG:  database system was interrupted; last known up at 2016-11-24 17:36:28 CET
LOG:  creating missing WAL directory "pg_xlog/archive_status"
LOG:  starting point-in-time recovery to "RP1"

If everything went fine your table should be back without the additional column:

(postgres@[local]:5439) [restore] > \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5439) [restore] > select count(*) from t1;
  count  
---------
 1000000
(1 row)

Time: 82.797 ms

So, yes, you can definitely use restore points with PostgreSQL :)

If you want me to blog about any feature you are not sure exists in PostgreSQL, let me know.

 

The article Can I do it with PostgreSQL? – 1 – Restore points first appeared on the dbi services Blog.

Can I do it with PostgreSQL? – 2 – Dual


In the first post of this series we talked about restore points. Another question that pops up from time to time is how you can do things in PostgreSQL that would require the dual table in Oracle. Let’s go…

The question is: when do you need the dual table in Oracle? Well, every time you have nothing to select from, meaning there is no table you could provide in the from clause but you need exactly one row. This can be the case when you want to do math:

SQL> select 1+2+3*4/2 from dual;

 1+2+3*4/2
----------
	 9

This can be the case when you want to generate test data:

SQL> select 'a' from dual connect by level <= 5;

'
-
a
a
a
a
a

This can be the case when you want to select from a PL/SQL function, such as:

SQL> create table ta (a number);

Table created.

SQL> select dbms_metadata.get_ddl('TABLE','TA',USER) from dual;

DBMS_METADATA.GET_DDL('TABLE','TA',USER)
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."TA"
   (	"A" NUMBER
   ) PCTFREE 10 PCTUSED 40 INITRANS

… and many more.

The easy answer to the question whether you can do this in PostgreSQL is: you don’t need to. Why? Because you can do things like this:

(postgres@[local]:5439) [postgres] > select 'Time for a beer';
    ?column?     
-----------------
 Time for a beer
(1 row)

… or this:

(postgres@[local]:5439) [postgres] > select 1+2+3*4/2;
 ?column? 
----------
        9
(1 row)

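Even the Oracle connect by trick from above for generating rows has a direct counterpart: the set returning function generate_series, which again needs no from table. A quick sketch:

```sql
select 'a' from generate_series(1,5);
-- returns five rows, each containing 'a'
```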
The same is true for getting the results of a function:

(postgres@[local]:5439) [postgres] > create function f1 (integer,integer) returns integer
as 'select $1 * $2;'
language sql;
CREATE FUNCTION
Time: 249.499 ms
(postgres@[local]:5439) [postgres] > select f1(5,5);
 f1 
----
 25
(1 row)

PostgreSQL does not force you to provide a table to select from; you can completely skip the from clause. This looks strange when you are used to working with Oracle, I know, but it is much easier: why provide a from clause when it is not necessary?

If you really, really can’t live without dual:

(postgres@[local]:5439) [postgres] > create table dual (dummy varchar(1));
CREATE TABLE
(postgres@[local]:5439) [postgres] > insert into dual (dummy) values ('a');
INSERT 0 1
(postgres@[local]:5439) [postgres] > select 'I can not live without dual' from dual;
          ?column?           
-----------------------------
 I can not live without dual
(1 row)
(postgres@[local]:5439) [postgres] > select 1+2+3*4/2 from dual;
 ?column? 
----------
        9
(1 row)

And here you go …

 

The article Can I do it with PostgreSQL? – 2 – Dual first appeared on the dbi services Blog.

Can I do it with PostgreSQL? – 3 – Tablespaces


In the last posts of this series we talked about restore points and how you could do things that would require the dual table in Oracle. In this post we’ll look at tablespaces. This is one of the features that is available in PostgreSQL but is totally different from what you know from Oracle. In Oracle you need to create a datafile which is attached to a tablespace. Once you have this you can start creating tables in there if you have the permissions to do so. How does this work in PostgreSQL?

Before we start playing with our own tablespaces you need to know that there are two default tablespaces in each PostgreSQL instance:

(postgres@[local]:5439) [postgres] > \db+
                                  List of tablespaces
    Name    |  Owner   | Location | Access privileges | Options |  Size  | Description 
------------+----------+----------+-------------------+---------+--------+-------------
 pg_default | postgres |          |                   |         | 21 MB  | 
 pg_global  | postgres |          |                   |         | 497 kB | 
(2 rows)

When you create a table and do not specify in which tablespace you want it to be created, it will be created in the pg_default tablespace (this is the default tablespace for template0 and template1 and therefore will be the default for every user-created database if not overwritten). pg_global contains the shared system catalog.

This means, whenever you create a table without specifying a tablespace in the create table statement it will go to the pg_default tablespace:

(postgres@[local]:5439) [postgres] > create table t1 ( a int );
CREATE TABLE
Time: 99.609 ms
(postgres@[local]:5439) [postgres] > select tablespace from pg_tables where tablename = 't1';
 tablespace 
------------
 NULL
(1 row)

NULL, in this case, means default. If you want to know where exactly the files that make up the table are located, you can use oid2name:

postgres@pgbox:/home/postgres/ [PG961] oid2name -t t1
From database "postgres":
  Filenode  Table Name
----------------------
     24592          t1
postgres@pgbox:/home/postgres/ [PG961] find $PGDATA -name 2459*
/u02/pgdata/PG961/base/13322/24592

In addition, oid2name tells you more about the databases and the default tablespace associated with them:

postgres@pgbox:/home/postgres/ [PG961] oid2name 
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  13322       postgres  pg_default
  13321      template0  pg_default
      1      template1  pg_default

So much for the basics. Time to create our own tablespace. When you look at the syntax:

(postgres@[local]:5439) [postgres] > \h create tablespace
Command:     CREATE TABLESPACE
Description: define a new tablespace
Syntax:
CREATE TABLESPACE tablespace_name
    [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ]
    LOCATION 'directory'
    [ WITH ( tablespace_option = value [, ... ] ) ]

… this is quite different from what you know from Oracle. The important point for now is the “LOCATION”. This refers to a directory the PostgreSQL OS user has write access to. This can be a local directory, a directory on any storage the host has access to, and it can even be on a ramdisk. It really doesn’t matter, as long as the PostgreSQL OS user has write permissions on it.

Let’s create our first tablespace:

(postgres@[local]:5439) [postgres] > \! mkdir /var/tmp/tbs1
(postgres@[local]:5439) [postgres] > create tablespace tbs1 location '/var/tmp/tbs1';
CREATE TABLESPACE
Time: 26.362 ms
(postgres@[local]:5439) [postgres] > \db+
                                     List of tablespaces
    Name    |  Owner   |   Location    | Access privileges | Options |  Size   | Description 
------------+----------+---------------+-------------------+---------+---------+-------------
 pg_default | postgres |               |                   |         | 21 MB   | 
 pg_global  | postgres |               |                   |         | 497 kB  | 
 tbs1       | postgres | /var/tmp/tbs1 |                   |         | 0 bytes | 
(3 rows)

What happened? The first thing to notice is that we can now see the “Location” column populated when we display all the tablespaces, and that the size of our new tablespace is zero (not surprising, as nothing has been created in it yet). Did PostgreSQL already create data files in this location, you might ask?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/
total 0
drwx------. 2 postgres postgres 6 Nov 25 11:03 PG_9.6_201608131

At least a directory named after the PostgreSQL version was created. What is inside this directory?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/
total 0

Nothing, so let’s create a table in this brand new tablespace:

(postgres@[local]:5439) [postgres] > create table t1 ( a int ) tablespace tbs1;
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d+ t1
                          Table "public.t1"
 Column |  Type   | Modifiers | Storage | Stats target | Description 
--------+---------+-----------+---------+--------------+-------------
 a      | integer |           | plain   |              | 
Tablespace: "tbs1"

How does the directory look like now?:

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/
total 0
drwx------. 2 postgres postgres 18 Nov 25 12:02 13322

Ok, 13322 is the OID of the database which the table belongs to:

(postgres@[local]:5439) [postgres] > \! oid2name
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  13322       postgres  pg_default
  13321      template0  pg_default
      1      template1  pg_default

And below that?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/13322/
total 0
-rw-------. 1 postgres postgres 0 Nov 25 12:02 24596

This is the OID of the table. So in summary this is the layout you get per tablespace:

|
|---[LOCATION]
|       |
|       | ----- [FIXED_VERSION_DIRECTORY]
|       |                  |
|       |                  |---------[DATABASE_OID]
|       |                  |              |
|       |                  |              |-----------[TABLE_AND_INDEX_FILES_OID]

One point that is often forgotten is that you can set various parameters at the tablespace level:

CREATE TABLESPACE tablespace_name
    [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ]
    LOCATION 'directory'
    [ WITH ( tablespace_option = value [, ... ] ) ]

What you can set per tablespace is:

  1. seq_page_cost
  2. random_page_cost
  3. effective_io_concurrency

This can be very helpful when you have tablespaces on disks (a ramdisk?) that have very different performance characteristics.

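A minimal sketch of setting such options (the ramdisk path /mnt/ramdisk is purely an assumption; any directory writable by the PostgreSQL OS user works):

```sql
-- hypothetical fast storage mounted at /mnt/ramdisk
create tablespace tbs_fast location '/mnt/ramdisk/tbs_fast'
  with (random_page_cost = 1.0, effective_io_concurrency = 100);

-- the options can also be adjusted later
alter tablespace tbs_fast set (random_page_cost = 1.1);
```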
A very important point to keep in mind: each tablespace you create in PostgreSQL creates a symlink in the cluster’s data directory:

postgres@pgbox:/home/postgres/ [PG961] ls -l $PGDATA/pg_tblspc 
total 0
lrwxrwxrwx. 1 postgres postgres 13 Nov 25 11:03 24595 -> /var/tmp/tbs1

Again, the number (24595) is the OID, in this case of the tablespace:

(postgres@[local]:5439) [postgres] > select oid,spcname from pg_tablespace where spcname = 'tbs1';
  oid  | spcname 
-------+---------
 24595 | tbs1
(1 row)

This is important to know because when you do backups of your PostgreSQL instance it is critical that you back up the tablespaces as well. You can find all the pointers/symlinks in the pg_tblspc directory.

What else can you do with tablespaces? Of course you can change the default tablespace for the whole instance:

(postgres@[local]:5439) [postgres] > alter system set default_tablespace='tbs1';
ALTER SYSTEM
Time: 120.406 ms

(postgres@[local]:5439) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Time: 4.279 ms
(postgres@[local]:5439) [postgres] > show default_tablespace ;
 default_tablespace 
--------------------
 tbs1
(1 row)

You can assign a tablespace to a database:

(postgres@[local]:5439) [postgres] > create database db1 TABLESPACE = tbs1;
CREATE DATABASE
Time: 1128.020 ms
(postgres@[local]:5439) [postgres] > \l+
                                                                    List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   | Tablespace |                Description                 
-----------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
 db1       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7233 kB | tbs1       | 
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7343 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7233 kB | pg_default | unmodifiable empty database
           |          |          |             |             | postgres=CTc/postgres |         |            | 
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7233 kB | pg_default | default template for new databases
           |          |          |             |             | postgres=CTc/postgres |         |            | 

You can make someone else the owner of a tablespace:

(postgres@[local]:5439) [postgres] > create user u1 password 'u1';
CREATE ROLE
Time: 31.414 ms
(postgres@[local]:5439) [postgres] > ALTER TABLESPACE tbs1 OWNER TO u1;
ALTER TABLESPACE
Time: 2.072 ms
(postgres@[local]:5439) [postgres] > \db
          List of tablespaces
    Name    |  Owner   |   Location    
------------+----------+---------------
 pg_default | postgres | 
 pg_global  | postgres | 
 tbs1       | u1       | /var/tmp/tbs1
(3 rows)

And finally you can set one or more tablespaces to be used as temporary tablespaces:

(postgres@[local]:5439) [postgres] > alter system set temp_tablespaces='tbs1';
ALTER SYSTEM
Time: 4.175 ms

(postgres@[local]:5439) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Time: 3.638 ms
(postgres@[local]:5439) [postgres] > show temp_tablespaces ;
 temp_tablespaces 
------------------
 tbs1
(1 row)

Conclusion: yes, you can have tablespaces in PostgreSQL, and they give you great flexibility in how you organize your PostgreSQL files on disk. The implementation is very different from other vendors, though.

 

The article Can I do it with PostgreSQL? – 3 – Tablespaces first appeared on the dbi services Blog.


Can I do it with PostgreSQL? – 4 – External tables


In the last posts of this series we talked about restore points, how you can do things that would require the dual table in Oracle and how you can make use of tablespaces in PostgreSQL. In this post we’ll look at what my colleague Clemens thinks is one of the greatest features in Oracle. Can you do external tables in PostgreSQL?

The easy answer is: yes, of course you can, and you can do it in various ways. To start with we’ll need a sample file we can load data from. For the test here we’ll use this one. Note that this file uses Windows line endings, which you’ll need to convert to Unix style if you are working on Linux like me. You can use vi to do this.

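If you prefer the command line over vi for the conversion, a small sed sketch does the job as well (the file sample_crlf.csv below is just a hypothetical demo file; run the same sed command against FL_insurance_sample.csv):

```shell
# create a small demo file with Windows (CRLF) line endings
printf 'policyID,statecode\r\n119736,FL\r\n' > sample_crlf.csv

# strip the trailing carriage returns in place (GNU sed)
sed -i 's/\r$//' sample_crlf.csv
```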
Once you extracted the file the content looks like this:

postgres@pgbox:/home/postgres/ [PG961] head -2 FL_insurance_sample.csv
policyID,statecode,county,eq_site_limit,hu_site_limit,fl_site_limit,fr_site_limit,tiv_2011,tiv_2012,eq_site_deductible,hu_site_deductible,fl_site_deductible,fr_site_deductible,point_latitude,point_longitude,line,construction,point_granularity
119736,FL,CLAY COUNTY,498960,498960,498960,498960,498960,792148.9,0,9979.2,0,0,30.102261,-81.711777,Residential,Masonry,1

So, we have a total of 18 columns and 36634 rows to test with. Should be fine :)

How can we bring that into PostgreSQL? Clemens talked about SQL*Loader in his post. There is a similar project for PostgreSQL called pg_bulkload which we’ll not be talking about. We will look at two options you can use to load data from files into PostgreSQL which are available by default:

  1. copy
  2. file_fdw

No matter which option we go with, the first thing we need is the definition of the table. These are the columns we need:

postgres@pgbox:/home/postgres/ [PG961] head -1 FL_insurance_sample.csv | sed 's/,/,\n/g'
policyID,
statecode,
county,
eq_site_limit,
hu_site_limit,
fl_site_limit,
fr_site_limit,
tiv_2011,
tiv_2012,
eq_site_deductible,
hu_site_deductible,
fl_site_deductible,
fr_site_deductible,
point_latitude,
point_longitude,
line,
construction,
point_granularity

So the create table statement will look something like this:

(postgres@[local]:5439) [postgres] > create table exttab ( policyID int,
                                                           statecode varchar(2),
                                                           county varchar(50),
                                                           eq_site_limit numeric,
                                                           hu_site_limit numeric,
                                                           fl_site_limit numeric,
                                                           fr_site_limit numeric,
                                                           tiv_2011 numeric,
                                                           tiv_2012 numeric,
                                                           eq_site_deductible numeric,
                                                           hu_site_deductible numeric,
                                                           fl_site_deductible numeric,
                                                           fr_site_deductible numeric,
                                                           point_latitude numeric,
                                                           point_longitude numeric,
                                                           line varchar(50),
                                                           construction varchar(50),
                                                           point_granularity int);
CREATE TABLE

Now that we have the table we can use copy to load the data:

(postgres@[local]:5439) [postgres] > copy exttab from '/home/postgres/FL_insurance_sample.csv' with csv header;
COPY 36634
(postgres@[local]:5439) [postgres] > select count(*) from exttab;
 count 
-------
 36634
(1 row)

Quite fast. But there is a downside to this approach. As Clemens mentions in his post, one of the benefits of external tables in Oracle is that you can access the file via standard SQL and do transformations before the data arrives in the database. Can you do the same with PostgreSQL? Yes, if you use the file_fdw foreign data wrapper.

The file_fdw is available by default:

(postgres@[local]:5439) [postgres] > create extension file_fdw;
CREATE EXTENSION
Time: 442.777 ms
(postgres@[local]:5439) [postgres] > \dx
                        List of installed extensions
   Name   | Version |   Schema   |                Description                
----------+---------+------------+-------------------------------------------
 file_fdw | 1.0     | public     | foreign-data wrapper for flat file access
 plpgsql  | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [postgres] > create server srv_file_fdw foreign data wrapper file_fdw;
CREATE SERVER
(postgres@[local]:5439) [postgres] > create foreign table exttab2  ( policyID int,
                                statecode varchar(2),
                                county varchar(50),
                                eq_site_limit numeric,     
                                hu_site_limit numeric,     
                                fl_site_limit numeric,     
                                fr_site_limit numeric,     
                                tiv_2011 numeric,          
                                tiv_2012 numeric,          
                                eq_site_deductible numeric,
                                hu_site_deductible numeric,
                                fl_site_deductible numeric,
                                fr_site_deductible numeric,
                                point_latitude numeric,    
                                point_longitude numeric,   
                                line varchar(50),          
                                construction varchar(50),  
                                point_granularity int)     
server srv_file_fdw options ( filename '/home/postgres/FL_insurance_sample.csv', format 'csv', header 'true' );
CREATE FOREIGN TABLE

(postgres@[local]:5439) [postgres] > select count(*) from exttab2;
 count 
-------
 36634
(1 row)

From now on you can work with the file by accessing it using standard SQL, and all the options you have with SQL are available. Very much as Clemens states in his post: “Because external tables can be accessed through SQL. You have all possibilities SQL-queries offer. Parallelism, difficult joins with internal or other external tables and of course all complex operations SQL allows. ETL became much easier using external tables, because it allowed to process data through SQL joins and filters already before it was loaded in the database.”

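As a quick sketch of what this enables (using the columns defined above; the actual numbers depend on the data set), you can filter and aggregate the file as if it were a regular table:

```sql
select county
     , count(*) as policies
     , round(avg(tiv_2012), 2) as avg_tiv_2012
  from exttab2
 where tiv_2012 > tiv_2011
 group by county
 order by policies desc
 limit 5;
```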
 

The article Can I do it with PostgreSQL? – 4 – External tables first appeared on the dbi services Blog.

Can I do it with PostgreSQL? – 5 – Generating DDL commands


From time to time it is very useful to be able to generate the DDL commands for existing objects (tables, indexes, a whole schema, …). In Oracle you can either use the dbms_metadata PL/SQL package for this or use expdp/impdp to generate the statements out of a dump file. What options do you have in PostgreSQL? Note: we’ll not look at any third party tools you could use for this, only plain PostgreSQL.

As always we’ll need some objects to test with, so here we go:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1
CREATE FUNCTION add(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;

PostgreSQL comes with a set of administration functions which can be used to query various kinds of information. Some of them return the definitions of your objects.

You can get the definition of a view:

(postgres@[local]:5439) [ddl] > select pg_get_viewdef('v1'::regclass, true);
 pg_get_viewdef 
----------------
  SELECT t1.a  +
    FROM t1;
(1 row)

You can get the definition of a constraint:

(postgres@[local]:5439) [ddl] > SELECT conname
                                     , pg_get_constraintdef(r.oid, true) as definition
                                  FROM pg_constraint r
                                 WHERE r.conrelid = 't1'::regclass;
 conname |     definition      
---------+---------------------
 con1    | CHECK (a < 2000000)

You can get the definition of a function:

(postgres@[local]:5439) [ddl] > SELECT proname
     , pg_get_functiondef(a.oid)
  FROM pg_proc a
 WHERE a.proname = 'add';
 proname |                   pg_get_functiondef                    
---------+---------------------------------------------------------
 add     | CREATE OR REPLACE FUNCTION public.add(integer, integer)+
         |  RETURNS integer                                       +
         |  LANGUAGE sql                                          +
         |  IMMUTABLE STRICT                                      +
         | AS $function$select $1 + $2;$function$                 +
         | 
--OR
(postgres@[local]:5439) [ddl] > SELECT pg_get_functiondef(to_regproc('add'));
                   pg_get_functiondef                    
---------------------------------------------------------
 CREATE OR REPLACE FUNCTION public.add(integer, integer)+
  RETURNS integer                                       +
  LANGUAGE sql                                          +
  IMMUTABLE STRICT                                      +
 AS $function$select $1 + $2;$function$                 +

You can get the definition of an index:

(postgres@[local]:5439) [ddl] > select pg_get_indexdef('i1'::regclass);
            pg_get_indexdef            
---------------------------------------
 CREATE INDEX i1 ON t1 USING btree (a)
(1 row)

But, surprisingly, you cannot get the DDL for a table: there is just no function available to do this. How can you do it without manually concatenating the definitions you can get out of the PostgreSQL catalog? The only option I am aware of is pg_dump:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -s -t t1 ddl | egrep -v "^--|^$"
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
CREATE TABLE t1 (
    a integer,
    b integer,
    CONSTRAINT con1 CHECK ((a < 2000000))
);
ALTER TABLE t1 OWNER TO postgres;
CREATE INDEX i1 ON t1 USING btree (a);
CREATE UNIQUE INDEX i2 ON t1 USING btree (b);

Using the “-s” (schema only) and “-t” (tables) options you get the DDL for the complete table. Not as handy as in Oracle, where you can do this directly in sqlplus, but it works and produces a result you can work with.

Of course you can always create the DDLs on your own by querying the catalog, e.g. pg_attribute, which holds the column definitions for all tables:

    Table "pg_catalog.pg_attribute"
    Column     |   Type    | Modifiers 
---------------+-----------+-----------
 attrelid      | oid       | not null
 attname       | name      | not null
 atttypid      | oid       | not null
 attstattarget | integer   | not null
 attlen        | smallint  | not null
 attnum        | smallint  | not null
 attndims      | integer   | not null
 attcacheoff   | integer   | not null
 atttypmod     | integer   | not null
 attbyval      | boolean   | not null
 attstorage    | "char"    | not null
 attalign      | "char"    | not null
 attnotnull    | boolean   | not null
 atthasdef     | boolean   | not null
 attisdropped  | boolean   | not null
 attislocal    | boolean   | not null
 attinhcount   | integer   | not null
 attcollation  | oid       | not null
 attacl        | aclitem[] | 
 attoptions    | text[]    | 
 attfdwoptions | text[]    | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

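As a minimal sketch of that approach (assuming the t1 table created above), the column list can be reconstructed from pg_attribute together with the format_type() helper; system columns (attnum < 1) and dropped columns must be filtered out:

```sql
-- sketch: reconstruct the column definitions of t1 from the catalog
SELECT a.attname
     , format_type(a.atttypid, a.atttypmod) AS data_type
     , a.attnotnull
  FROM pg_attribute a
 WHERE a.attrelid = 't1'::regclass
   AND a.attnum > 0
   AND NOT a.attisdropped
 ORDER BY a.attnum;
```

This gives you the raw material for a CREATE TABLE statement, but you would still have to assemble constraints, defaults and indexes yourself.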
One nasty way, which is even documented on the PostgreSQL wiki, is this:

(postgres@[local]:5439) [ddl] > create extension plperlu;
CREATE EXTENSION
Time: 90.074 ms
(postgres@[local]:5439) [ddl] > \dx
                      List of installed extensions
  Name   | Version |   Schema   |              Description               
---------+---------+------------+----------------------------------------
 plperlu | 1.0     | pg_catalog | PL/PerlU untrusted procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [ddl] > CREATE OR REPLACE FUNCTION system(text) RETURNS text 
AS 'my $cmd=shift; return `cd /tmp;$cmd`;' LANGUAGE plperlu;
CREATE FUNCTION

(postgres@[local]:5439) [ddl] > select system('pg_dump -s -t t1 ddl | egrep -v "^--|^$"');
                    system                     
-----------------------------------------------
 SET statement_timeout = 0;                   +
 SET lock_timeout = 0;                        +
 SET idle_in_transaction_session_timeout = 0; +
 SET client_encoding = 'UTF8';                +
 SET standard_conforming_strings = on;        +
 SET check_function_bodies = false;           +
 SET client_min_messages = warning;           +
 SET row_security = off;                      +
 SET search_path = public, pg_catalog;        +
 SET default_tablespace = '';                 +
 SET default_with_oids = false;               +
 CREATE TABLE t1 (                            +
     a integer,                               +
     b integer,                               +
     CONSTRAINT con1 CHECK ((a < 2000000))    +
 );                                           +
 ALTER TABLE t1 OWNER TO postgres;            +
 CREATE INDEX i1 ON t1 USING btree (a);       +
 CREATE UNIQUE INDEX i2 ON t1 USING btree (b);+
 

This can be a workaround. Hope this helps…


This article Can I do it with PostgreSQL? – 5 – Generating DDL commands appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 6 – Server programming


Today we’ll continue this series with another topic: What does PostgreSQL provide when it comes to server programming, that is: Writing functions and triggers to support your application? In Oracle you can either use PL/SQL or Java, in MariaDB you can use stored procedures written in SQL, MS SQL Server provides Transact SQL and with DB2 you can write stored procedures in a host language or SQL.

We’ll use the same sample data as in the last post:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1

So, what can you do? To begin with you can create functions containing pure SQL commands. These are called “query language functions”. You can for example do things like this (although this function is not very useful as you can do the same by just selecting the whole table):

CREATE FUNCTION select_all_from_t1() RETURNS SETOF t1 AS '
  SELECT * 
    FROM t1;
' LANGUAGE SQL;

There are two important points here: the “LANGUAGE SQL” part, which means that the function is written in pure SQL, and the keyword “SETOF”, which means that we want to return a whole set of the rows of t1. Once the function is created you can use it in SQL:

(postgres@[local]:5439) [ddl] > select select_all_from_t1();
 select_all_from_t1 
--------------------
 (1,1)
 (2,2)
 (3,3)
...

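Since the function returns SETOF t1 you can also put it into the FROM clause and treat the result like a table, including filters and joins (a small sketch using the function created above):

```sql
-- the set-returning function used in the FROM clause yields real columns
SELECT a, b
  FROM select_all_from_t1()
 WHERE a < 3;
-- returns the rows (1,1) and (2,2)
```
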
When you want to do something where it does not make sense to return anything you can do it by using the “VOID” keyword:

CREATE FUNCTION update_t1() RETURNS VOID AS '
  UPDATE t1
     SET a = 5
   WHERE a < 10
' LANGUAGE SQL;

When you execute this you do not get a result:

(postgres@[local]:5439) [ddl] > select update_t1();
 update_t1 
-----------
 NULL
(1 row)
(postgres@[local]:5439) [ddl] > select count(*) from t1 where a = 5;
 count 
-------
     9
(1 row)

What about parameters? You can do this as well:

CREATE FUNCTION do_the_math(anumber1 numeric, anumber2 numeric ) RETURNS numeric AS '
  SELECT do_the_math.anumber1 * do_the_math.anumber2;
' LANGUAGE SQL;

Execute it:

(postgres@[local]:5439) [ddl] > select do_the_math(1.1,1.2);
 do_the_math 
-------------
        1.32

Another great feature is that you can have a variable number of input parameters when you declare the input parameter as a VARIADIC array:

CREATE FUNCTION dynamic_input(VARIADIC arr numeric[]) RETURNS int AS $$
    SELECT array_length($1,1);
$$ LANGUAGE SQL;

(postgres@[local]:5439) [ddl] > select dynamic_input( 1,2,3,4 );
 dynamic_input 
---------------
             4

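The elements of the VARIADIC array can also be processed individually, e.g. with unnest; the function name sum_all below is made up for illustration:

```sql
CREATE FUNCTION sum_all(VARIADIC arr numeric[]) RETURNS numeric AS $$
    -- unnest turns the argument array into rows which can then be aggregated
    SELECT sum(x) FROM unnest($1) AS t(x);
$$ LANGUAGE SQL;

SELECT sum_all(1, 2, 3, 4);
-- returns 10
```
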
So much for the SQL functions. What can you do when you need more than SQL? Then you can use the so-called “procedural language functions”. One of these, available by default, is PL/pgSQL:

(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

By using PL/pgSQL you can add control structures around your SQL very much as you can do it in PL/SQL (except that you cannot create packages).

CREATE FUNCTION f1(int,int) RETURNS text AS $$
DECLARE
    t_row t1%ROWTYPE;
    result text;
BEGIN
    SELECT * 
      INTO t_row
      FROM t1
     WHERE a = 99;
    IF t_row.b > 0
    THEN
        result := 'aaaaaa';
    ELSE
        result := 'bbbbbb';
    END IF;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
(postgres@[local]:5439) [ddl] > select f1(1,1);
   f1   
--------
 aaaaaa

You can also use anonymous blocks:

(postgres@[local]:5439) [ddl] > DO $$
BEGIN
  FOR i IN 1..10
  LOOP
    raise notice 'blubb';
  END LOOP;
END$$ LANGUAGE plpgsql;
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
DO

Of course there is more than IF-THEN-ELSE which is documented here.
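As a small sketch of one of those control structures, here is a FOR loop over a query result (using the t1 table created above; the WHERE condition is arbitrary):

```sql
DO $$
DECLARE
    r record;
BEGIN
    -- iterate over the result of a query, row by row
    FOR r IN SELECT b FROM t1 WHERE b < 4 ORDER BY b
    LOOP
        RAISE NOTICE 'b is %', r.b;
    END LOOP;
END$$ LANGUAGE plpgsql;
```
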

So by now we know two options to write functions in PostgreSQL. Is there more we can do? Of course: You prefer to write your functions in Perl?

(postgres@[local]:5439) [ddl] > create extension plperl;
CREATE EXTENSION
(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plperl  | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language


CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    my ($x, $y) = @_;
    if (not defined $x) {
        return undef if not defined $y;
        return $y;
    }
    return $x if not defined $y;
    return $x if $x > $y;
    return $y;
$$ LANGUAGE plperl;

(postgres@[local]:5439) [ddl] > select perl_max(1,2);
 perl_max 
----------
        2

You prefer python?

(postgres@[local]:5439) [ddl] > create extension plpythonu;
CREATE EXTENSION
Time: 327.434 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language

CREATE FUNCTION pymax (a integer, b integer)
  RETURNS integer
AS $$
  if a > b:
    return a
  return b
$$ LANGUAGE plpythonu;

(postgres@[local]:5439) [ddl] > select pymax(1,1);
 pymax 
-------
     1

… or better Tcl?

(postgres@[local]:5439) [ddl] > create extension pltclu;
CREATE EXTENSION
Time: 382.982 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language
 pltclu    | 1.0     | pg_catalog | PL/TclU untrusted procedural language

And these are only the default extensions. There is much more you can do:

  • Java
  • PHP
  • R
  • Ruby
  • Scheme
  • Unix shell

You see: PostgreSQL gives you the maximum flexibility :)


This article Can I do it with PostgreSQL? – 6 – Server programming appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 7 – Partitioning


PostgreSQL supports tables up to 32TB in size. Do you want to be the one responsible for managing such a table? I guess not. Usually you start to partition your tables when they grow fast and consume hundreds of gigabytes or more. Can PostgreSQL do this? Do you know what table inheritance is? No? PostgreSQL implements partitioning by using table inheritance and constraint exclusion. Sounds strange? Let's have a look …

As usual I am running the currently latest version of PostgreSQL:

postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 0.513 ms
(postgres@[local]:5439) [postgres] > 

So, what is table inheritance? In PostgreSQL you can do things like this:

(postgres@[local]:5439) [postgres] > create table databases ( name varchar(10), vendor varchar(10) );
CREATE TABLE
Time: 20.477 ms
(postgres@[local]:5439) [postgres] > create table databases_rdbms ( rdbms boolean ) inherits (databases);
CREATE TABLE
Time: 20.080 ms
(postgres@[local]:5439) [postgres] > create table databases_nosql ( nosql boolean ) inherits (databases);
CREATE TABLE
Time: 22.048 ms

What we’ve done here is: We created three tables in total. The “databases_rdbms” and “databases_nosql” tables inherit from the “databases” table. What does that mean? Let's insert some data into the two tables that inherit from the “databases” table:

(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('PostgreSQL','Community',true);
INSERT 0 1
Time: 20.215 ms
(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('MariaDB','MariaDB',true);
INSERT 0 1
Time: 1.666 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('MongoDB','MongoDB',true);
INSERT 0 1
Time: 1.619 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('Cassandra','Apache',true);
INSERT 0 1
Time: 0.833 ms

Note that we did not insert any data into the “databases” table, but when we query the “databases” table we get this result:

(postgres@[local]:5439) [postgres] > select * from databases;
    name    |  vendor   
------------+-----------
 PostgreSQL | Community
 MariaDB    | MariaDB
 MongoDB    | MongoDB
 Cassandra  | Apache
(4 rows)

All the data from all child tables has been retrieved (of course without the additional columns of the child tables). We can still query the child tables directly:

(postgres@[local]:5439) [postgres] > select * from databases_rdbms;
    name    |  vendor   | rdbms 
------------+-----------+-------
 PostgreSQL | Community | t
 MariaDB    | MariaDB   | t
(2 rows)

Time: 0.224 ms
(postgres@[local]:5439) [postgres] > select * from databases_nosql;
   name    | vendor  | nosql 
-----------+---------+-------
 MongoDB   | MongoDB | t
 Cassandra | Apache  | t
(2 rows)

But when we query only the master table, using the ONLY keyword, there is no result:

(postgres@[local]:5439) [postgres] > select * from only databases;
 name | vendor 
------+--------
(0 rows)

Of course for this specific example it would be better to add an additional column to the master table which specifies whether a database is a NoSQL database or not. This is just to show how it works. There is a good example for another use case in the documentation.
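Under the hood the planner handles such inherited tables with an Append node over the parent and all of its children, which a quick EXPLAIN makes visible (costs omitted, the exact plan may vary):

```sql
EXPLAIN SELECT * FROM databases;
-- Append
--   ->  Seq Scan on databases
--   ->  Seq Scan on databases_rdbms
--   ->  Seq Scan on databases_nosql
```
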

What does all this have to do with partitioning? When you want to partition your tables in PostgreSQL you’ll do exactly the same thing:

(postgres@[local]:5439) [postgres] > create table log_data ( id int, some_data varchar(10), ts date );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2016() inherits ( log_data );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2015() inherits ( log_data );
CREATE TABLE

We want to partition our log data by year, so we create a child table for each year we know we have data for. What we additionally need is a check constraint on each of the child tables:

(postgres@[local]:5439) [postgres] > \d+ log_data_2016
                             Table "public.log_data_2016"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2016-01-01'::date AND ts < '2017-01-01'::date)
Inherits: log_data

(postgres@[local]:5439) [postgres] > \d+ log_data_2015
                             Table "public.log_data_2015"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2015-01-01'::date AND ts < '2016-01-01'::date)
Inherits: log_data

This guarantees that the child tables only get data for a specific year. So far so good. But how does PostgreSQL know that inserts into the master table should get routed to the corresponding child table? This is done by using triggers:

(postgres@[local]:5439) [postgres] > CREATE OR REPLACE FUNCTION log_data_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( NEW.ts >= DATE '2015-01-01' AND
         NEW.ts <  DATE '2016-01-01' ) THEN
        INSERT INTO log_data_2015 VALUES (NEW.*);
    ELSIF ( NEW.ts >= DATE '2016-01-01' AND
            NEW.ts <  DATE '2017-01-01' ) THEN
        INSERT INTO log_data_2016 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Date out of range!';
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_log_data_trigger
BEFORE INSERT ON log_data
FOR EACH ROW EXECUTE PROCEDURE log_data_insert_trigger();

When there are inserts against the master table, from now on these go to the corresponding child table:

(postgres@[local]:5439) [postgres] > insert into log_data values ( 1, 'aaaa', date('2016.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > insert into log_data values ( 2, 'aaaa', date('2015.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > select * from log_data;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
  2 | aaaa      | 2015-03-03
(2 rows)
(postgres@[local]:5439) [postgres] > select * from log_data_2015;
 id | some_data |     ts     
----+-----------+------------
  2 | aaaa      | 2015-03-03
(1 row)

(postgres@[local]:5439) [postgres] > select * from log_data_2016;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
(1 row)

Selects against the master table that use the ts column in the where condition now scan only the corresponding child table (the empty master table itself still shows up in the plan):

(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2016.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.006..0.006 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2016-03-03'::date)
   ->  Seq Scan on log_data_2016  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2016-03-03'::date)
 Planning time: 0.131 ms
 Execution time: 0.019 ms
(7 rows)
(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2015.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.007..0.007 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2015-03-03'::date)
   ->  Seq Scan on log_data_2015  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2015-03-03'::date)
 Planning time: 0.102 ms
 Execution time: 0.019 ms
(7 rows)

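This pruning only happens because of constraint exclusion; the default value of the constraint_exclusion parameter, “partition”, already covers inheritance-based partitioning, but you can verify the effect by switching it off (a sketch):

```sql
SET constraint_exclusion = off;
-- now the plan contains sequential scans on all child tables again
EXPLAIN SELECT * FROM log_data WHERE ts = DATE '2016-03-03';
SET constraint_exclusion = partition;   -- back to the default
```
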
Of course you can create indexes on the child tables as well. This is how partitioning basically works in PostgreSQL. To be honest, this is not the most beautiful way to do partitioning and this can become tricky to manage. But as always there are projects that assist you, e.g. pg_partman or pg_pathman.
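Adding a partition for a new year then means creating another child table with a matching check constraint and extending the trigger function; a sketch for the year 2017:

```sql
CREATE TABLE log_data_2017 (
    CHECK ( ts >= DATE '2017-01-01' AND ts < DATE '2018-01-01' )
) INHERITS (log_data);
-- the insert trigger function needs an additional ELSIF branch
-- that routes rows with a ts in 2017 to log_data_2017
```
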

Wouldn’t it be nice to have a SQL syntax to do table partitioning? Exactly this was committed yesterday and will probably be there in PostgreSQL 10 next year. The development documentation already describes the syntax:

[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]

This article Can I do it with PostgreSQL? – 7 – Partitioning appeared first on Blog dbi services.

Getting started with Docker – 1 – overview and installation


Everybody is talking about Docker nowadays. What is it about? Do you remember Solaris Zones or Containers? It is more or less the same, although the development of Docker during the last years made Linux containers the de-facto standard for deploying applications in a standardized and isolated way. Docker is built in a classical client/server model: there is the docker server (or daemon) which serves the requests of docker clients, and the client is the one you'll use to tell the server what you want to do. The main difference from the classical client/server model is that docker uses the same binary for the server as well as for the client; it is just a matter of how you invoke the docker binary that makes it a server or a client application. In contrast to Solaris Zones, Docker containers are stateless by default, that means: when you shut down a docker container you'll lose everything that happened from the moment the container was started until it got destroyed (although there are ways to avoid that). This is important to remember.

When you start a docker container on a host, the host's resources are shared with the container (although you can limit that). It is not like firing up a virtual machine (which brings up an instance of a whole operating system) but more like starting a process that shares resources with the host it is running on. This might be as simple as running a “wget” command or as complicated as bringing up a whole infrastructure that serves your service desk. Docker containers should be lightweight.

So what makes docker unique then? It is the concept of a layered filesystem. We’ll come to that soon. Let's start by installing everything we need to run a docker daemon. As always we’ll begin with a CentOS 7 minimal installation:

[root@centos7 ~]$ cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core) 
[root@centos7 ~]$ 

The easiest way to get docker installed is to add the official docker yum repository (for CentOS in this case):

[root@centos7 ~]$ echo "[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg" > /etc/yum.repos.d/docker.repo

Working directly as root is never a good idea, so let's create a user for that and let this user do everything via sudo (not a good practice, I know :) ):

[root@centos7 ~]$ groupadd docker
[root@centos7 ~]$ useradd -g docker docker
[root@centos7 ~]$ passwd docker
[root@centos7 ~]$ echo "docker ALL=(ALL)   NOPASSWD: ALL" >> /etc/sudoers
[root@centos7 ~]$ su - docker
[docker@centos7 ~]$ sudo ls

Ready to install:

[docker@centos7 ~]$ sudo yum install docker-engine

This will install the docker engine and these additional packages:

======================================================================================================================================
 Package                                Arch                   Version                               Repository                  Size
======================================================================================================================================
Installing:
 docker-engine                          x86_64                 1.12.3-1.el7.centos                   dockerrepo                  19 M
Installing for dependencies:
 audit-libs-python                      x86_64                 2.4.1-5.el7                           base                        69 k
 checkpolicy                            x86_64                 2.1.12-6.el7                          base                       247 k
 docker-engine-selinux                  noarch                 1.12.3-1.el7.centos                   dockerrepo                  28 k
 libcgroup                              x86_64                 0.41-8.el7                            base                        64 k
 libseccomp                             x86_64                 2.2.1-1.el7                           base                        49 k
 libsemanage-python                     x86_64                 2.1.10-18.el7                         base                        94 k
 libtool-ltdl                           x86_64                 2.4.2-21.el7_2                        updates                     49 k
 policycoreutils-python                 x86_64                 2.2.5-20.el7                          base                       435 k
 python-IPy                             noarch                 0.75-6.el7                            base                        32 k
 setools-libs                           x86_64                 3.3.7-46.el7                          base                       485 k

Transaction Summary
======================================================================================================================================

Enable the service:

[docker@centos7 ~]$ sudo systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Start the service:

[docker@centos7 ~]$ sudo systemctl start docker
[docker@centos7 ~]$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2016-12-10 12:26:46 CET; 6s ago
     Docs: https://docs.docker.com
 Main PID: 2957 (dockerd)
   Memory: 12.9M
   CGroup: /system.slice/docker.service
           ├─2957 /usr/bin/dockerd
           └─2960 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --...

Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481380483+01:00" level=info msg="Graph migration to co...conds"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481751429+01:00" level=warning msg="mountpoint for pid...found"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481751451+01:00" level=info msg="Loading containers: start."
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.574330143+01:00" level=info msg="Firewalld running: false"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.822997195+01:00" level=info msg="Default bridge (docke...dress"
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.201798804+01:00" level=info msg="Loading containers: done."
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.201984648+01:00" level=info msg="Daemon has completed ...ation"
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.202003760+01:00" level=info msg="Docker daemon" commit...1.12.3
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.207416263+01:00" level=info msg="API listen on /var/ru....sock"
Dec 10 12:26:46 centos7.local systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

And we’re done. Lets check if docker is working as expected:

[docker@centos7 ~]$ sudo docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

https://hub.docker.com

For more examples and ideas, visit:

https://docs.docker.com/engine/userguide/

What happened here is that we already executed our first docker image: “hello-world”. The “--rm” flag tells docker to automatically remove the container once it exits. As the image was not available on our host it was automatically downloaded from the docker hub:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
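You can see the effect of “--rm” yourself by comparing the container list after a run with and after a run without the flag. A small sketch, assuming a working Docker installation and access to the Docker Hub:

```shell
# run with --rm: the exited container is cleaned up automatically
docker run --rm hello-world > /dev/null
docker ps -a --filter ancestor=hello-world   # no leftover container

# run without --rm: the exited container sticks around until you remove it
docker run hello-world > /dev/null
docker ps -a --filter ancestor=hello-world   # one exited container listed
docker rm "$(docker ps -aq --filter ancestor=hello-world)"
```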

You can browse the Docker Hub for many, many other images using your favorite browser, or you can use the command line:

[docker@centos7 ~]$ docker search postgres
NAME                      DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
postgres                  The PostgreSQL object-relational database ...   2939                 [OK]       
kiasaki/alpine-postgres   PostgreSQL docker image based on Alpine Linux   28                   [OK]
abevoelker/postgres       Postgres 9.3 + WAL-E + PL/V8 and PL/Python...   10                   [OK]
onjin/alpine-postgres     PostgreSQL / v9.1 - v9.6 / size <  50MB      9                    [OK]
macadmins/postgres        Postgres that accepts remote connections b...   8                    [OK]
jamesbrink/postgres       Highly configurable PostgreSQL container.       5                    [OK]
eeacms/postgres           Docker image for PostgreSQL (RelStorage re...   4                    [OK]
cptactionhank/postgres                                                    4                    [OK]
azukiapp/postgres         Docker image to run PostgreSQL by Azuki - ...   2                    [OK]
kampka/postgres           A postgresql image build on top of an arch...   2                    [OK]
clkao/postgres-plv8       Docker image for running PLV8 1.4 on Postg...   2                    [OK]
2020ip/postgres           Docker image for PostgreSQL with PLV8           1                    [OK]
steenzout/postgres        Steenzout's docker image packaging for Pos...   1                    [OK]
blacklabelops/postgres    Postgres Image for Atlassian Applications       1                    [OK]
buker/postgres            postgres                                        0                    [OK]
kobotoolbox/postgres      Postgres image for KoBo Toolbox.                0                    [OK]
vrtsystems/postgres       PostgreSQL image with added init hooks, bu...   0                    [OK]
timbira/postgres          Postgres  containers                            0                    [OK]
coreroller/postgres       official postgres:9.4 image but it adds 2 ...   0                    [OK]
livingdocs/postgres       Postgres v9.3 with the plv8 extension inst...   0                    [OK]
1maa/postgres             PostgreSQL base image                           0                    [OK]
opencog/postgres          This is a configured postgres database for...   0                    [OK]
khipu/postgres            postgres with custom uids                       0                    [OK]
travix/postgres           A container to run the PostgreSQL database.     0                    [OK]
beorc/postgres            Ubuntu-based PostgreSQL server                  0                    [OK]

The first one is the official PostgreSQL image. How do I run it?

[docker@centos7 ~]$ docker run -it postgres
Unable to find image 'postgres:latest' locally
latest: Pulling from library/postgres
386a066cd84a: Pull complete 
e6dd80b38d38: Pull complete 
9cd706823821: Pull complete 
40c17ac202a9: Pull complete 
7380b383ba3d: Pull complete 
538e418b46ce: Pull complete 
c3b9d41b7758: Pull complete 
dd4f9522dd30: Pull complete 
920e548f9635: Pull complete 
628af7ef2ee5: Pull complete 
004275e6f5b5: Pull complete 
Digest: sha256:e761829c4b5ec27a0798a867e5929049f4cbf243a364c81cad07e4b7ac2df3f1
Status: Downloaded newer image for postgres:latest
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

****************************************************
WARNING: No password has been set for the database.
         This will allow anyone with access to the
         Postgres port to access your database. In
         Docker's default configuration, this is
         effectively any other container on the same
         system.

         Use "-e POSTGRES_PASSWORD=password" to set
         it in "docker run".
****************************************************
waiting for server to start....LOG:  database system was shut down at 2016-12-10 11:42:01 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
 done
server started
ALTER ROLE


/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

LOG:  database system was shut down at 2016-12-10 11:42:04 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

And ready. As with the “hello-world” image, Docker had to download the image as it was not available locally. Once that was done the image was started and a new PostgreSQL instance was created automatically. Here you can see what the layered filesystem is about:

386a066cd84a: Pull complete 
e6dd80b38d38: Pull complete 
9cd706823821: Pull complete 
40c17ac202a9: Pull complete 
7380b383ba3d: Pull complete 
538e418b46ce: Pull complete 
c3b9d41b7758: Pull complete 
dd4f9522dd30: Pull complete 
920e548f9635: Pull complete 
628af7ef2ee5: Pull complete 
004275e6f5b5: Pull complete 

Each of these lines represents a filesystem layer stacked on top of the previous one. This is an important concept: when you change something, only the layer that contains the change needs to be rebuilt, not the layers below. In other words, you could build an image based on a CentOS 7 image and deploy your changes on top of that. You deliver that image and, some time later, need to make some modifications: the only thing you need to deliver are the layers containing your modifications, because the layers below did not change.
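You can inspect the layer stack of an image that is already on your host with “docker history”; each line corresponds to one build step and shows the size that step adds. A quick sketch, assuming the postgres image has been pulled:

```shell
# show the layer stack of the postgres image, base layer at the bottom
docker history postgres
# the same information with the full, untruncated build commands
docker history --no-trunc postgres
```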

You will notice that you cannot type any command once the image is started. As soon as you press “CTRL-C” the container will shut down (this is because of the “-it” switch, which stands for “interactive” plus “pseudo terminal”):

^CLOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down

Everything that happened inside the container is now gone. The correct way to launch it is:

[docker@centos7 ~]$ docker run --name my-first-postgres -e POSTGRES_PASSWORD=postgres -d postgres
d51abc52108d3040817474fa8c85ab15020c12cb753515543c2d064143277155

The “-d” switch tells Docker to detach, so we get our shell back. The magic string Docker returns is the container ID:

[docker@centos7 ~]$ docker ps --no-trunc
CONTAINER ID                                                       IMAGE               COMMAND                            CREATED             STATUS              PORTS               NAMES
d51abc52108d3040817474fa8c85ab15020c12cb753515543c2d064143277155   postgres            "/docker-entrypoint.sh postgres"   3 minutes ago       Up 3 minutes        5432/tcp            my-first-postgres
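The full container ID is a 64-character hex string; most commands also accept the 12-character short form that “docker ps” displays by default, or the name you assigned with “--name”. A small sketch (the ID below is the one from the run above, reused for illustration):

```shell
# the full id returned by "docker run -d"
cid=d51abc52108d3040817474fa8c85ab15020c12cb753515543c2d064143277155
# docker ps shows only the first 12 characters by default
echo "${cid:0:12}"   # → d51abc52108d

# id and name are interchangeable in most commands, e.g.:
# docker logs my-first-postgres
# docker stop "${cid:0:12}"
```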

When you want to know which images are available locally you can ask Docker:

[docker@centos7 ~]$ docker images 
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
postgres            latest              78e3985acac0        2 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

How do you now connect to the PostgreSQL image?

[docker@centos7 ~]$ docker run -it --rm --link my-first-postgres:postgres postgres psql -h postgres -U postgres
Password for user postgres: 
psql (9.6.1)
Type "help" for help.

postgres=# \l+
                                                                   List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   |  Size   | Tablespace |                Description                 
-----------+----------+----------+------------+------------+-----------------------+---------+------------+--------------------------------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |                       | 7063 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +| 6953 kB | pg_default | unmodifiable empty database
           |          |          |            |            | postgres=CTc/postgres |         |            | 
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +| 6953 kB | pg_default | default template for new databases
           |          |          |            |            | postgres=CTc/postgres |         |            | 
(3 rows)
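Instead of linking a second, throw-away container you can also run psql directly inside the container that is already running, using “docker exec”. A minimal sketch against the container started above:

```shell
# open an interactive psql session inside the running container
docker exec -it my-first-postgres psql -U postgres
# or run a single statement non-interactively
docker exec my-first-postgres psql -U postgres -c 'select version();'
```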

Or to get bash:

[docker@centos7 ~]$ docker run -it --rm --link my-first-postgres:postgres postgres bash
root@f8c3b3738336:/$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

OK, this PostgreSQL image is based on Debian 8. Let’s say this is not what I want because I’d like my PostgreSQL image based on CentOS. That is the topic of the next post: we’ll build our own CentOS image and go deeper into what the stacked filesystem is about. Once we have that available we’ll use it as the base for a PostgreSQL image.
The article Getting started with Docker – 1 – overview and installation first appeared on the dbi services blog.
