Re: [GENERAL] template_postgis issue

2015-05-15 Thread Giuseppe Broccolo
Hi Sachin,

2015-05-15 11:35 GMT+02:00 Sachin Srivastava :

> Dear Concern,
>
> When I am installing PostgreSQL version 9.1 with PostGIS 1.5 then it's
> creating "template_postgis" database by default.
>
> But when I am installing below PostgreSQL version 9.3 with PostGIS 2.1.7
>
> postgresql-9.3.6-2-windows-x64
> postgis-bundle-pg93x64-setup-2.1.7-1
>
> And PostgreSQL version 9.4 with PostGIS 2.1.7
>
> postgresql-9.4.1-3-windows-x64
> postgis-bundle-pg94x64-setup-2.1.7-1
>
> It's not creating template_postgis by default. Kindly confirm what
> the problem is.
>
> Note: I am installing these on Windows 7 machine (64 bit)
>

If you are running PostgreSQL 9.1+ you don't need to bother with the
template database: use the EXTENSION mechanism introduced in that
release instead. Please consider the following link:

http://www.bostongis.com/PrinterFriendly.aspx?content_name=postgis_tut01

In particular, consider this part:

If you are running PostgreSQL 9.1+, we recommend you
don't even bother with the template database and just use

   CREATE EXTENSION postgis

in the database of your choosing or use PgAdmin
Extensions install feature which we will cover in this tutorial.
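Putting it together, a minimal sketch for 9.3/9.4 might look like this in psql (the database name 'gisdb' is just an example):

```sql
-- Create a plain database, then enable PostGIS in it through the
-- extension mechanism (PostgreSQL 9.1+); no template database needed.
CREATE DATABASE gisdb;
\c gisdb
CREATE EXTENSION postgis;
-- Check that the extension is in place:
SELECT PostGIS_Version();
```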
Regards,

-- 
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it


Re: [GENERAL] BDR compilation error on Ubuntu 12.04 LTS

2014-12-23 Thread Giuseppe Broccolo
Hi Stefano,

I used the same version of gcc (Ubuntu/Linaro 4.6.3-1ubuntu5), and I
followed your steps, compiling without problems. I also used the same
git branch (bdr/0.7.1) as in your case. Looking at the error you
obtained, it seems to be a problem in the code itself, regardless of
the gcc version used. I would suggest that you:

* be sure you have properly installed the dependencies (build-essential
libreadline-dev zlib1g-dev libssl-dev bison flex)
* try to compile the code in the bdr-REL9_4_STABLE git branch

Anyway, I followed the same procedure you reported in your first email
and it worked for me.

Giuseppe.

2014-12-23 8:45 GMT+01:00 stefano bonnin :

> I have upgraded GCC to:
>
> gcc (Ubuntu 4.8.1-2ubuntu1~12.04) 4.8.1
>
> Now the error is:
>
> make -C ../../src/interfaces/libpq all
> make[1]: Entering directory
> `/usr/src/meteosmit/postgresql-BDR/2ndquadrant_bdr/src/interfaces/libpq'
> make[1]: Nothing to be done for `all'.
> make[1]: Leaving directory
> `/usr/src/meteosmit/postgresql-BDR/2ndquadrant_bdr/src/interfaces/libpq'
> sed '0,/BDR_VERSION_DATE/s,\(BDR_VERSION_DATE\).*,\1
> "2014-12-23",;0,/BDR_VERSION_GITHASH/s,\(BDR_VERSION_GITHASH\).*,\1
> "a63674e",' bdr_version.h.in >bdr_version.h
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard
> -fpic -I../../src/interfaces/libpq -I. -I. -I../../src/include
> -D_GNU_SOURCE   -c -o bdr.o bdr.c
> bdr.c: In function ‘bdr_perdb_worker_main’:
> bdr.c:1057:10: warning: ‘rc’ may be used uninitialized in this function
> [-Wmaybe-uninitialized]
>if (rc & WL_POSTMASTER_DEATH)
>   ^
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard
> -fpic -I../../src/interfaces/libpq -I. -I. -I../../src/include
> -D_GNU_SOURCE   -c -o bdr_apply.o bdr_apply.c
> In file included from ../../src/include/postgres.h:48:0,
>  from bdr_apply.c:15:
> bdr_apply.c: In function ‘process_remote_commit’:
> bdr_apply.c:288:22: error: invalid operands to binary >> (have
> ‘TimestampTz’ and ‘int’)
> (uint32) (end_lsn >> 32), (uint32) end_lsn,
>   ^
> ../../src/include/utils/elog.h:254:23: note: in definition of macro ‘elog’
>elog_finish(elevel, __VA_ARGS__); \
>^
> bdr_apply.c:358:22: error: invalid operands to binary >> (have
> ‘TimestampTz’ and ‘int’)
>   (uint32)(end_lsn>>32), (uint32)end_lsn,
>   ^
> ../../src/include/utils/elog.h:117:14: note: in definition of macro
> ‘ereport_domain’
> errfinish rest; \
>   ^
> bdr_apply.c:355:3: note: in expansion of macro ‘ereport’
>ereport(LOG,
>^
> make: *** [bdr_apply.o] Error 1
>
> Regards.
>
> 2014-12-23 8:11 GMT+01:00 stefano bonnin :
>>
>> Hi Giuseppe,
>>
>> gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
>>
>> a newer version needed?
>>
>> 2014-12-22 14:00 GMT+01:00 Giuseppe Broccolo <
>> giuseppe.brocc...@2ndquadrant.it>:
>>>
>>> Hi Stefano,
>>>
>>> Which version of gcc are you using?
>>>
>>> Giuseppe.
>>>
>>> 2014-12-22 9:52 GMT+01:00 stefano bonnin :
>>>
>>>>
>>>> Hi all, I have followed the wiki about postgresql BDR:
>>>>
>>>> OS: Ubuntu 12.04.5 LTS
>>>>
>>>>
>>>>- git clone git://git.postgresql.org/git/2ndquadrant_bdr.git
>>>>- cd 2ndquadrant_bdr
>>>>- git checkout bdr/0.7.1
>>>>- ./configure --prefix=$HOME/bdr --with-openssl
>>>>- make
>>>>- make install
>>>>
>>>> To install the BDR extension you'd then run:
>>>>
>>>>- (cd contrib/btree_gist && make && make install)
>>>>- (cd contrib/bdr && make && make install)
>>>>
>>>> or to instead install all contribs (useful to include widely used
>>>> extensions like hstore):
>>>>
>>>>- (cd contrib && make all)
>>>>- (cd contrib && make install)
>>>>
>>>>
>>>> BUT during cd contrib/bdr && make I obtain the following error:
>>>>
>>>> make -C ../../src/interfaces/libpq all
>>>> make[1]: Entering directory
>>>> `/usr/src/meteosmit/postgresql-BDR/2n

Re: [GENERAL] BDR compilation error on Ubuntu 12.04 LTS

2014-12-22 Thread Giuseppe Broccolo
Hi Stefano,

Which version of gcc are you using?

Giuseppe.

2014-12-22 9:52 GMT+01:00 stefano bonnin :

>
> Hi all, I have followed the wiki about postgresql BDR:
>
> OS: Ubuntu 12.04.5 LTS
>
>
>- git clone git://git.postgresql.org/git/2ndquadrant_bdr.git
>- cd 2ndquadrant_bdr
>- git checkout bdr/0.7.1
>- ./configure --prefix=$HOME/bdr --with-openssl
>- make
>- make install
>
> To install the BDR extension you'd then run:
>
>- (cd contrib/btree_gist && make && make install)
>- (cd contrib/bdr && make && make install)
>
> or to instead install all contribs (useful to include widely used
> extensions like hstore):
>
>- (cd contrib && make all)
>- (cd contrib && make install)
>
>
> BUT during cd contrib/bdr && make I obtain the following error:
>
> make -C ../../src/interfaces/libpq all
> make[1]: Entering directory
> `/usr/src/meteosmit/postgresql-BDR/2ndquadrant_bdr/src/interfaces/libpq'
> make[1]: Nothing to be done for `all'.
> make[1]: Leaving directory
> `/usr/src/meteosmit/postgresql-BDR/2ndquadrant_bdr/src/interfaces/libpq'
> sed '0,/BDR_VERSION_DATE/s,\(BDR_VERSION_DATE\).*,\1
> "2014-12-22",;0,/BDR_VERSION_GITHASH/s,\(BDR_VERSION_GITHASH\).*,\1
> "a63674e",' bdr_version.h.in >bdr_version.h
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard
> -fpic -I../../src/interfaces/libpq -I. -I. -I../../src/include
> -D_GNU_SOURCE   -c -o bdr.o bdr.c
> bdr.c: In function ‘bdr_perdb_worker_main’:
> bdr.c:1057:10: warning: ‘rc’ may be used uninitialized in this function
> [-Wuninitialized]
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard
> -fpic -I../../src/interfaces/libpq -I. -I. -I../../src/include
> -D_GNU_SOURCE   -c -o bdr_apply.o bdr_apply.c
> bdr_apply.c: In function ‘process_remote_commit’:
> bdr_apply.c:286:2: error: invalid operands to binary >> (have
> ‘TimestampTz’ and ‘int’)
> bdr_apply.c:355:3: error: invalid operands to binary >> (have
> ‘TimestampTz’ and ‘int’)
> make: *** [bdr_apply.o] Error 1
>
>
> Any help appreciated. Thanks.
>



-- 
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it


Re: [GENERAL] GiST index question

2014-09-04 Thread Giuseppe Broccolo
Hi Eric,

As Michael said, the path data type has no GiST operator class support.
However, you could redefine your columns using 'ltree' instead of
'path'. Take a look at the following link:

http://www.postgresql.org/docs/9.1/static/ltree.html

and see whether it fits your use case.

Cheers,

Giuseppe.


2014-09-04 6:31 GMT+02:00 Michael Paquier :

> On Thu, Sep 4, 2014 at 8:35 AM, Eric Fleming  wrote:
> > I have a table that I have defined as:
> >
> > CREATE TABLE test (
> > "id" SERIAL PRIMARY KEY,
> > "first_path" path NOT NULL,
> > "second_path" path NOT NULL
> > );
> >
> > I am attempting to create a GiST index on the two "path" columns using:
> >
> > CREATE INDEX  "idx_test_first_path" ON test USING gist(first_path);
> > CREATE INDEX  "idx_test_second_path" ON test USING gist(second_path);
> >
> > I am getting this error:
> >
> > Error : ERROR:  data type path has no default operator class for access
> > method "gist"
> >
> > Is it that I am not allowed to create an index on the path data type or
> do I
> > need to enable an extension for this to work? Thanks in advance for your
> > help.
> In-core contrib modules (and core) do not have yet support for gist
> operator classes for the datatype path as far as I recall.
> Regards,
> --
> Michael
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it


Re: [GENERAL] How to get PG 9.3 for a RaspberryPI (Debian Wheezy)?

2014-08-05 Thread Giuseppe Broccolo
Hi Andreas,

Maybe the content of this blog could be useful: http://raspberrypg.org/

Cheers,
Giuseppe.


2014-08-05 6:32 GMT+02:00 Michael Paquier :

> On Tue, Aug 5, 2014 at 12:32 PM, Andreas  wrote:
> > I then even tried to remove the local repository and install PG 9.1.
> > That doesn't work either now. Same problem with the socket.
> > Is there a way to get a working PG 9.3 on a Raspberry?
> This seems like a problem inherent to your OS or your RPMs as two
> buildfarm machines are Raspberry PIs and are able to run Postgres:
> hamster and chipmunk. So I don't see any reason why Postgres would not
> run on it. Why don't you give a try to Raspbian? You could even update
> their packages to 9.3:
>
> http://www.raspberryconnect.com/raspbian-packages-list/item/56-raspbian-database
> Regards,
> --
> Michael
>
>



-- 
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it


Re: [GENERAL] How to turn off DEBUG statements from psql commends

2014-02-10 Thread Giuseppe Broccolo
Il 10/02/2014 16:43, peterlen ha scritto:
> We are using PostgreSQL 9.3.  Something seems to have changed with
> our psql command-line output since we first installed it.  When I
> run commands at my plsql prompt, I am getting a lot of debug
> statements which I was not getting before.  I am just trying to
> find out how to tell psql not to display this output.  As an
> example, if I were to create a new 'test' schema, the output looks
> like:
> 
> gis_demo=# create schema test; DEBUG:  StartTransactionCommand 
> DEBUG:  StartTransaction DEBUG:  name: unnamed; blockState:
> DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1,
> children: LOG:  statement: create schema test; DEBUG:
> ProcessUtility DEBUG:  CommitTransactionCommand DEBUG:
> CommitTransaction DEBUG:  name: unnamed; blockState:   STARTED;
> state: INPROGR, xid/subid/cid: 15099/1/1, nestlvl: 1, children: 
> CREATE SCHEMA gis_demo=#

You have set "client_min_messages" to the "debug" level. If you run

SET client_min_messages TO notice;

you should get just the "CREATE SCHEMA" message.
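For instance, a psql session along these lines should silence the debug chatter (a sketch; 'gis_demo' is the database from your example):

```sql
SHOW client_min_messages;           -- inspect the current level
SET client_min_messages TO notice;  -- session-local change
CREATE SCHEMA test2;                -- now prints only: CREATE SCHEMA
-- To persist the change for the whole database instead of one session:
ALTER DATABASE gis_demo SET client_min_messages TO notice;
```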

Giuseppe.
-- 
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it




Re: [GENERAL] Why the size (PQ_BUFFER_SIZE) of backend send buffers is 8192 ?

2014-01-03 Thread Giuseppe Broccolo
Il 03/01/2014 09:47, xbzhang ha scritto:
> Why the size (PQ_BUFFER_SIZE) of backend send buffers  is 8192 ?
> Can i set it to other value ?

In principle, it is just an arbitrary amount by which to increase the
I/O buffer size. Of course, it has to take into account the buffering
behaviour of the system kernel and the TCP stack.

> If i extend or decrease it ,   can it affect the performance of
> sending message?

PQ_BUFFER_SIZE handles the low-level details of communication between
frontend and backend. I would not decrease its value: in my opinion
there is no reason to expect any advantage from splitting the 8kB
buffers (i.e. the size of a page's content).

Giuseppe.
-- 
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it




Re: [GENERAL] vacuum_freeze_table_age for 9.3.2

2013-12-05 Thread Giuseppe Broccolo
Il 05/12/2013 17:16, Andy Colson ha scritto:
> The docs say vacuum, but the param is vacuum_freeze_table_age, so 
> do I need to "vacuum freeze" all the tables, or is vacuum enough?
> 
> Also, will "set vacuum_freeze_table_age = 0; vacuum freeze;" work, 
> or do I need to modify the postgresql.conf and reload?

Setting vacuum_freeze_table_age to 0 forces VACUUM to always scan all
pages, effectively ignoring the visibility map. In this way the whole
table is scanned, ensuring all old XIDs are replaced by FrozenXID.

vacuum_freeze_table_age is a parameter with 'user' context, meaning
that you can set it during a session and run a "vacuum freeze" with
the modified setting; no postgresql.conf change or reload is needed.
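Putting the two points together, a session-local full-table freeze could be sketched like this (the table name is illustrative):

```sql
SET vacuum_freeze_table_age = 0;  -- per-session, no reload required
VACUUM FREEZE;                    -- freeze every table in the database
VACUUM FREEZE my_big_table;       -- or limit the work to one table
RESET vacuum_freeze_table_age;    -- restore the configured value
```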

Giuseppe.
-- 
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it






Re: [GENERAL] pgbench

2013-10-03 Thread Giuseppe Broccolo

Il 03/10/2013 16:11, Simeó Reig ha scritto:

A 2013-10-03 15:51, Adrian Klaver escrigué:

On 10/03/2013 06:21 AM, Simeó Reig wrote:

  Hello

  I was doing a performance test with pgbench with some pretty long
queries and I got the following error:

  $ pgbench -n -c1 -T 3 -f veins_pgbench.sql pdn
  Client 0 aborted in state 0: ERROR:  syntax error at end of input
  LINE 1: ...(abc.persones.provincia = abc.poblacions.cod_provinc
  ^
  transaction type: Custom query
  scaling factor: 1
  query mode: simple
  number of clients: 1
  number of threads: 1
  duration: 3 s
  number of transactions actually processed: 0
  tps = 0.00 (including connections establishing)
  tps = 0.00 (excluding connections establishing)

  I believe pgbench has a very low limit on the queries you can put
inside a file with the 't' option.
  Am I right? How can I avoid it?



Well first you say 't' option but show 'T' option, they are different.

Second the error is reporting a syntax error in your script, so I
would look there first.


Yes, I made a mistake. I meant the 'f' option (file option), not the
't' option, sorry. But no, there is no mistake in the script; I can do:


# psql -d pdn < veins_pgbench.sql

and it works perfectly

thanks Adrian, I'm almost sure the problem is that the query is too
long for pgbench (1600 characters)


The format of the script file has to be one SQL command per line;
multi-line SQL commands are not supported, and empty lines are ignored.
This could lead to errors. Could this be your case?
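For example, a statement that spans several lines in the script file would have to be collapsed onto a single line (table and column names below are only placeholders):

```sql
-- Invalid in a pgbench script file, because it spans several lines:
--   SELECT count(*)
--   FROM abc.persones
--   WHERE provincia = 1;
-- Valid: the whole command on one line.
SELECT count(*) FROM abc.persones WHERE provincia = 1;
```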


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it





Re: [GENERAL] CREATE DATABASE Š [ TEMPLATE [=] template ] [ TABLESPACE [=] tablespace_name ]

2013-09-26 Thread Giuseppe Broccolo

Il 26/09/2013 13:27, Tim Kane ha scritto:


I have a question regarding the behaviour of CREATE DATABASE when used 
with TEMPLATE and TABLESPACE options together.



The documentation describes the tablespace parameter as:

The name of the tablespace that will be associated with the new
database, or DEFAULT to use the template database's tablespace.
This tablespace will be the default tablespace used for objects
created in this database. See CREATE TABLESPACE
<http://www.postgresql.org/docs/9.3/static/sql-createtablespace.html> for
more information.


I would take that to mean that all objects from the template will be 
created in the new database within the specified tablespace.

This is possible by setting the 'default_tablespace' parameter:

SET default_tablespace = space1;

In this way, an implicit TABLESPACE clause is applied when objects are
created, regardless of whether they are created from a template or not.
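A sketch of the whole sequence, with illustrative names and paths:

```sql
-- Create the tablespace, create the database from the template, then
-- make the tablespace the default for objects created afterwards.
CREATE TABLESPACE space1 LOCATION '/mnt/disk1/pgdata';
CREATE DATABASE newdb TEMPLATE mytemplate TABLESPACE space1;
\c newdb
SET default_tablespace = space1;
```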


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



Re: [GENERAL] remove everything before the period

2013-09-17 Thread Giuseppe Broccolo

Il 17/09/2013 04:21, karinos57 ha scritto:

Hi,
I am trying to remove everything before the period in other words i only
want to show the values that starts from the period.  For instance
897.78 ==> 78
74.25 ==> 25
3657.256 ==> 256

well the code below only shows everything before the period but i want to
show everything after the period

select volume, substring(volume from 1 for position('.' in volume) - 1) as
column
from MyTable;
Try with:

SELECT volume, substring(volume from position('.' in volume) + 1) AS "column"
FROM MyTable;

(note that "column" is a reserved word, so it must be double-quoted to
be used as an alias).
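As an alternative, assuming 'volume' is text containing exactly one period, split_part() gives the same result:

```sql
-- split_part splits on '.' and returns the requested field:
SELECT volume, split_part(volume, '.', 2) AS after_period
FROM MyTable;
-- e.g. '897.78' -> '78', '3657.256' -> '256'
```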


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it





Re: [GENERAL] problem with query

2013-09-13 Thread Giuseppe Broccolo

Il 12/09/2013 22:34, Roberto Scattini ha scritto:
hi, today we discovered that this query doesn't return the expected 
values:


SELECT DISTINCT
p.id, p.apellido AS "Apellido", p.nombre AS "Nombre",
pf.nombre AS "Funcion", to_char(da.f_ingreso_pg, 'dd/mm/') AS
"Fecha Ingreso PG", e.estado AS "Estado", to_char(pe.fecha,
'dd/mm/') AS "Fecha Estado"

 FROM personal.personas AS p
  LEFT JOIN personal.personas_da_pg AS da ON p.id=da.id_persona
  LEFT JOIN personal.personas_estado AS pe ON da.estado_id=pe.id
  LEFT JOIN personal.estados AS e ON pe.id_estado=e.id
  LEFT JOIN procu_departamento AS d ON d.id=da.id_departamento
  LEFT JOIN procu_departamento AS dto ON left(d.c_organigrama, 4)||'000'=dto.c_organigrama
  LEFT JOIN procu_departamento AS dir ON left(d.c_organigrama, 3)||''=dir.c_organigrama
  LEFT JOIN procu_departamento AS dg ON left(d.c_organigrama, 2)||'0'=dg.c_organigrama
  LEFT JOIN personal.funciones AS pf ON pf.id=da.funcion_id
  LEFT JOIN personal.profesiones AS pp ON pp.id=p.id_profesion
 WHERE p.apellido ilike '%nuñez%'
 ORDER BY "Apellido"
The problem is the encoding: SQL_ASCII encodes only the first 128
characters, so 'ñ' and 'Ñ' cannot be encoded in ASCII. If you insert
the text 'ñ' or 'Ñ' into a table inside a database (e.g. 'ascii_test')
with SQL_ASCII encoding, each will be seen as two distinct characters:


ascii_test=# SELECT length(E'ñ');
 length
--------
      2
(1 row)

ascii_test=# SELECT length(E'Ñ');
 length
--------
      2
(1 row)

ascii_test=# SELECT 'ñ'::bytea;
 bytea
--------
 \xc3b1
(1 row)

ascii_test=# SELECT 'Ñ'::bytea;
 bytea
--------
 \xc391
(1 row)

The bytes 'c3', 'b1' and '91' do not decode to anything in ASCII, even
if the terminal shows the pairs as 'ñ' or 'Ñ'. So the ILIKE operator
cannot apply case-insensitive pattern matching to data which does not
encode any string (in the SQL_ASCII encoding), and it behaves like a
plain LIKE.


Even if the client which inserts the data uses an 8-bit encoding (such
as UTF8, where 'ñ' and 'Ñ' are defined), a SQL_ASCII database cannot
convert the strings anyway, and the problem remains.


The best approach is to work with a UTF8-encoded database. Is there a
particular reason to use the SQL_ASCII encoding?
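For comparison, a UTF8 database (the name here is just an example) handles these characters as single characters:

```sql
-- template0 is required when the encoding differs from template1's.
CREATE DATABASE utf8_test ENCODING 'UTF8' TEMPLATE template0;
-- In such a database, length(E'ñ') returns 1, and ILIKE matches
-- 'ñ' and 'Ñ' case-insensitively.
```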


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



Re: [GENERAL] Risk of set system wise statement_timeout

2013-09-12 Thread Giuseppe Broccolo

Il 11/09/2013 22:02, Alex Lai ha scritto:
I have been reading a few posted comments about the autovacuum risk
for older postgres, like version 8.
I am currently running 9.2.4.  We have a need to terminate any query 
running longer than 2 hours.  Most of our query should finish within 
15 minutes.  We don't have very large amount of changes in the system 
and we run autovacuum daily.  Running the larger table for autovacuum 
should be fast.  Under my situation, setting statement_timeout = 
720 which is 2 hours seems very low risk trigger fail to 
autovacuum.  Any one have any idea not to do it or any workaround to 
decrease the risk of fail autovacuum 
Setting statement_timeout in postgresql.conf is not recommended, for
many reasons. Since you are only interested in terminating your own
queries, I suggest using the pg_stat_activity view to find queries
running longer than 2 hours, and then cancelling them with the
pg_cancel_backend() function. Here is a simple function which loops
over the long-running query processes and cancels each one whose
running time exceeds 2 hours:


CREATE OR REPLACE FUNCTION cancel_after_2hours() RETURNS VOID AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_stat_activity
             WHERE query_start < CURRENT_TIMESTAMP - interval '120 minutes'
    LOOP
        -- in PL/pgSQL, a call that discards its result must use PERFORM
        PERFORM pg_cancel_backend(r.pid);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

then add a line like the following to your crontab

0 * * * * psql  -c "SELECT cancel_after_2hours();"

so that it is executed automatically.

Hope it can help,

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it





Re: [GENERAL] FW: Single Line Query Logging

2013-09-10 Thread Giuseppe Broccolo

From: emreozt...@outlook.com
To: /pgsql-general@postgresql.org/
Subject: Single Line Query Logging
Date: Wed, 10 Jul 2013 13:16:13 +0300

Hello all,

Is there a parameter to log any SQL query as a single line in audit 
logs? I have some problems in my SIEM application. If a DBA sends the 
query as a single line I can gather the whole query, but if he enters like


UPDATE x  ...
  y=Z ..
  where ..

I only get the line that starts with UPDATE, so I cannot see what
really changed in my SIEM logs. I have heard that there is a parameter
to do what I mean. Do you agree?


No, there is no parameter that makes log parsing simple, especially
for multi-line queries. A possible solution is to use syslog instead
of stderr: syslog stores each log entry as an independent item.

You can try setting

log_destination = 'syslog'
redirect_stderr = off

Then you have to set up syslog by editing your syslog configuration
(on Debian: /etc/rsyslog.d/50-default.conf): add this new line
(assuming your log directory is /var/log/pgsql/, and that 'local0' is
set in the "syslog_facility" parameter of your postgresql.conf - do
check)


LOCAL0.*    -/var/log/pgsql

and in the "catch all log files" area add

LOCAL0.none

then restart syslog (sudo /etc/init.d/rsyslog restart). I've tried it, 
and it works!


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



Re: [GENERAL] plpgsql code doen't work

2013-09-10 Thread Giuseppe Broccolo

Il 10/09/2013 10:46, Beena Emerson ha scritto:

Hello,

Try changing the variable left to something other like left_val. It 
will work.

Maybe the problem is because LEFT is a keyword.
Yes: left() is a built-in function returning text, so there's a
conflict when you declare a variable with that name as an integer...


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it





Re: [GENERAL] FETCH FORWARD 0 and "cursor can only scan forward" error

2013-09-04 Thread Giuseppe Broccolo

Hi Trigve,

Il 04/09/2013 15:06, Trigve Siver ha scritto:

Hi,
I'm on PostgreSQL 9.2.2 and trying to use no scroll cursor for some data fetch. But when 
moving cursor ahead one record and the fetching the actual record the error "cursor 
can only scan forward" is returned. I don't know if I'm doing something wrong but I 
don't think I'm going backward with cursor. Here is the code that demonstrate it

BEGIN;

DECLARE CUR_1 NO SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM 
GENERATE_SERIES(1, 2);

FETCH FORWARD 0 FROM CUR_1; -- 0

MOVE FORWARD FROM CUR_1;

FETCH FORWARD 0 FROM CUR_1; -- 1

ABORT;

The line marked as "-- 0" is working OK, the line marked as "-- 1" is throwing 
error:

ERROR:  cursor can only scan forward
HINT:  Declare it with SCROLL option to enable backward scan.

** Error **

ERROR: cursor can only scan forward
SQL state: 55000
Hint: Declare it with SCROLL option to enable backward scan.

I want to iterate all records with cursor from beginning to end. This sample 
could be rewritten using FETCH FORWARD 1 ... without using MOVE but I'm 
interested with solution which throws error.


When you fetch a record, the cursor inherently moves to the position
just after the last fetched record. Consider this example:


postgres=# BEGIN;
BEGIN
postgres=# DECLARE CUR_1 CURSOR WITHOUT HOLD FOR SELECT * FROM 
GENERATE_SERIES(1, 10);

DECLARE CURSOR
postgres=# FETCH FORWARD 1 FROM CUR_1;
 generate_series
-----------------
               1
(1 row)

postgres=# FETCH FORWARD 1 FROM CUR_1;
 generate_series
-----------------
               2
(1 row)

postgres=# FETCH FORWARD 0 FROM CUR_1;
 generate_series
-----------------
               2
(1 row)


If you specify "FORWARD 0" you move ahead zero places instead of one;
therefore you obtain the same record *that was already fetched* by the
previous FETCH statement. If the cursor is declared with the NO SCROLL
option, re-reading a row counts as a backward scan, so the operation
is forbidden and an error is raised, as in your case.
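Declaring the cursor with SCROLL, as the HINT suggests, makes the re-read legal (a sketch of the same sequence):

```sql
BEGIN;
DECLARE cur_2 SCROLL CURSOR WITHOUT HOLD FOR
    SELECT * FROM GENERATE_SERIES(1, 2);
MOVE FORWARD FROM cur_2;
FETCH FORWARD 0 FROM cur_2;  -- re-reads the current row, no error
COMMIT;
```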


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



Re: [GENERAL] Query on a record variable

2013-08-19 Thread Giuseppe Broccolo

Hi Janek,


Hi,

ok :)


I suppose you have a table 'table' with 'col' (text), 'dede' (text) and 
'vectors' (tsvector) as fields. In this case, you can do


SELECT levenshtein(col, 'string') AS lev FROM table WHERE 
levenshtein(col, 'string') < 10 AND LENGTH(dede) BETWEEN x AND y AND 
plainto_tsquery('string') @@ vectors;


Hope it can help.

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Alter table never finishes

2013-07-30 Thread Giuseppe Broccolo

How can I force-disconnect all clients so I can alter that table?

Regards,


There are two ways. The first is based on the pg_terminate_backend() 
function and the 'pg_stat_activity' catalog, and kills idle processes.

So in a psql session type (tried on PostgreSQL 8.4, where idle backends show 
'<IDLE>' as their current query):

=# SELECT procpid, pg_terminate_backend(procpid) AS killed FROM 
pg_stat_activity WHERE current_query = '<IDLE>';


A more heavy-handed approach can then be used from the terminal, forcing the 
kill of idle processes by their pid (note that SIGKILL on a single backend 
makes the postmaster reset all other connections):


:$ for x in `ps -ef | grep -e "postgres.*idle" | awk '{print $2}'`; do 
kill -9 $x; done
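On PostgreSQL 9.2 and later the catalog columns were renamed (procpid became 
pid, current_query became query) and an explicit state column was added, so a 
sketch of the same idea on those versions is:

```sql
SELECT pid, pg_terminate_backend(pid) AS killed
FROM pg_stat_activity
WHERE state = 'idle'
  AND pid <> pg_backend_pid();  -- don't terminate our own session
```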


Hope it can help.

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



Re: [GENERAL] postgres FDW cost estimation options unrecognized in 9.3-beta1

2013-07-29 Thread Giuseppe Broccolo

Hi Lonni,


Greetings,
I have a postgresql-9.3-beta1 cluster setup (from the
yum.postgresql.org RPMs), where I'm experimenting with the postgres
FDW extension.  The documentation (
http://www.postgresql.org/docs/9.3/static/postgres-fdw.html )
references three Cost Estimation Options which can be set for a
foreign table or a foreign server.  However when I attempt to set
them, I always get an error that the option is not found:
###
nightly=# show SERVER_VERSION ;
 server_version
----------------
 9.3beta1
(1 row)
nightly=# \des+
                                      List of foreign servers
   Name    |   Owner   | Foreign-data wrapper | Access privileges | Type | Version |                    FDW Options                     | Description
-----------+-----------+----------------------+-------------------+------+---------+----------------------------------------------------+-------------
 cuda_db10 | lfriedman | postgres_fdw         |                   |      |         | (host 'cuda-db10', dbname 'nightly', port '5432')  |
(1 row)

nightly=# ALTER SERVER cuda_db10 OPTIONS (SET use_remote_estimate 'true') ;
ERROR:  option "use_remote_estimate" not found
###

Am I doing something wrong, or is this a bug?


You got this error because, in ALTER SERVER, you can't use SET on an option 
which is not yet defined for the server.
You could do it this way if your server had already been created with the 
option 'use_remote_estimate' set, for instance, to 'false':


nightly=# \des+
                                                  List of foreign servers
   Name    |   Owner   | Foreign-data wrapper | Access privileges | Type | Version |                                   FDW Options                                   | Description
-----------+-----------+----------------------+-------------------+------+---------+---------------------------------------------------------------------------------+-------------
 cuda_db10 | lfriedman | postgres_fdw         |                   |      |         | (host 'cuda-db10', dbname 'nightly', port '5432', use_remote_estimate 'false') |
(1 row)

nightly=# ALTER SERVER cuda_db10 OPTIONS (SET use_remote_estimate 'true') ;
ALTER SERVER

nightly=# \des+
                                                  List of foreign servers
   Name    |   Owner   | Foreign-data wrapper | Access privileges | Type | Version |                                   FDW Options                                  | Description
-----------+-----------+----------------------+-------------------+------+---------+--------------------------------------------------------------------------------+-------------
 cuda_db10 | lfriedman | postgres_fdw         |                   |      |         | (host 'cuda-db10', dbname 'nightly', port '5432', use_remote_estimate 'true') |
(1 row)

If your server was not created with any 'use_remote_estimate' option, 
you have to add it in this way:


nightly=# \des+
                                      List of foreign servers
   Name    |   Owner   | Foreign-data wrapper | Access privileges | Type | Version |                    FDW Options                     | Description
-----------+-----------+----------------------+-------------------+------+---------+----------------------------------------------------+-------------
 cuda_db10 | lfriedman | postgres_fdw         |                   |      |         | (host 'cuda-db10', dbname 'nightly', port '5432')  |
(1 row)

nightly=# ALTER SERVER cuda_db10 OPTIONS (use_remote_estimate 'true') ;
ALTER SERVER

nightly=# \des+
                                                  List of foreign servers
   Name    |   Owner   | Foreign-data wrapper | Access privileges | Type | Version |                                   FDW Options                                  | Description
-----------+-----------+----------------------+-------------------+------+---------+--------------------------------------------------------------------------------+-------------
 cuda_db10 | lfriedman | postgres_fdw         |                   |      |         | (host 'cuda-db10', dbname 'nightly', port '5432', use_remote_estimate 'true') |
(1 row)


To create your server with 'use_remote_estimate' option already set to 
'true' you have to do:


nightly=# CREATE SERVER cuda_db10 FOREIGN DATA WRAPPER postgres_fdw 
OPTIONS(host 'cuda-db10', dbname 'nightly', port '5432', use_remote_estimate 
'true');
CREATE SERVER

nightly=# \des+
List of foreign servers
   Name|   Owner   | Foreign-data wrapper | Access privileges | Type | 
Version |FDW Options
| Description
---+---+--+---+

Re: [GENERAL] How to do incremental / differential backup every hour in Postgres 9.1?

2013-07-26 Thread Giuseppe Broccolo

Hi Neil,

Il 26/07/2013 00:24, Neil McGuigan ha scritto:
Trying to do an hourly hot incremental backup of a single postgres 
server (windows).


I have the following setup in postgresql.conf:

max_wal_senders=2
wal_level=archive
archive_mode=on
archive_command='copy "%p" "c:\\postgres\\archive\\%f"'
I did a base backup with pg_basebackup -U postgres -D ..\archive -Ft 
-l 2013-07-07 -x


Which made a big base.tar file in the archive folder and added some 
long file name files, which I assume are the WALs.


Yes, they are WAL files. A WAL filename follows a fixed format, made up of 24 
hexadecimal digits.




pg_start_backup('label') and pg_stop_backup() seem to create the WAL 
files in pg_xlog, and then copy them to the archive folder.


Questions:

1. what command(s) do I run to do a new incremental backup 
(pg_basebackup does a new base backup which I don't want right now)? 
do I just run select pg_start_backup('label'); select 
pg_stop_backup(); on a schedule?


Yes, you can schedule pg_start_backup() and pg_stop_backup() in the Windows 
Scheduler whenever needed, without taking a new base backup each time. 
pg_start_backup() begins an on-line backup and pg_stop_backup() finishes it; 
between the two calls you copy the data directory yourself, so the functions 
take care of the WAL handling around the copy, not of the copy itself. Also 
schedule an occasional pg_basebackup so you always have an updated base backup.
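A minimal sketch of that sequence (the label and the copy step are purely 
illustrative):

```sql
-- run before copying the data directory
SELECT pg_start_backup('hourly-backup');
-- ...copy the data directory with OS tools (e.g. robocopy) while the backup is open...
-- run after the copy has finished
SELECT pg_stop_backup();
```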




2. What does the label in pg_basebackup and pg_start_backup() do exactly?


The label in pg_start_backup() can be any arbitrary user-defined label. 
A good practice is to use the name under which the backup dump file will 
be stored. The label is not used later by any other PostgreSQL command.




3. WAL Files don't seem to be removed from pg_xlog. What should I do 
about that? It seems to keep 5 or so WALs in pg_xlog. Is that to be 
expected?


Which PostgreSQL version are you using? Starting from 8.2, WALs should be 
removed automatically once the archive_command has succeeded. Anyway, check 
whether pg_xlog/archive_status/ contains a matching *.backup.done file.



4. Do I need to backup the .backup files in the archive folder or just 
the 16,384KB WAL files?


The .backup file is very small (<1 kB) and contains some information about 
the backup. I'm not sure it is strictly necessary to move it into the archive 
folder, but since it is so small it can cheaply be copied anyway.




5. should I use the --xlog parameter and if so do I need to change 
wal_keep_segments from 0?


You're already using the -x option; this is why WAL files are included in the 
backup when you launch pg_basebackup. Notice that the WALs are collected at 
the end of the backup, so you need to set the wal_keep_segments parameter high 
enough that the log segments are not removed before the end of the backup.
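For instance, a postgresql.conf fragment along these lines — the 
wal_keep_segments value is illustrative and should be sized to how much WAL 
the server produces during one backup:

```
wal_level = archive
archive_mode = on
archive_command = 'copy "%p" "c:\\postgres\\archive\\%f"'
wal_keep_segments = 64    # keep the last 64 x 16 MB = 1 GB of WAL segments
```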


Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rule Question

2013-07-25 Thread Giuseppe Broccolo


Unrelated to the OP's question, the suggestion above could be more 
simply rewritten as


TG_OP = 'UPDATE'
AND NEW.b IS DISTINCT FROM OLD.b

You're right! :)
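For the record, with that simplification the suggested update_column() trigger 
function collapses to (a sketch, assuming the same cats table):

```sql
CREATE OR REPLACE FUNCTION update_column() RETURNS trigger AS $update_column$
BEGIN
    IF TG_OP = 'INSERT'
       OR (TG_OP = 'UPDATE' AND NEW.b IS DISTINCT FROM OLD.b) THEN
        NEW.a = NEW.b;   -- mirror column b into column a
    END IF;
    RETURN NEW;
END;
$update_column$ LANGUAGE plpgsql;
```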

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rule Question

2013-07-25 Thread Giuseppe Broccolo



I am trying to do something like this

create table cats (a text,b text);

create rule cats_test as on update to cats do set a = new.b;

Can i manipulate column "a" sort of like this...  or is  there a 
better way.

I think the easiest way to do this is to use a trigger like this:

CREATE FUNCTION update_column() RETURNS TRIGGER AS $update_column$
BEGIN
IF TG_OP = 'INSERT' OR
(TG_OP = 'UPDATE' AND
(NEW.b != OLD.b OR
(NEW.b IS NULL AND OLD.b IS NOT NULL) OR
(NEW.b IS NOT NULL AND OLD.b IS NULL)
)
) THEN
NEW.a = NEW.b;
END IF;
RETURN NEW;
END;
$update_column$ LANGUAGE plpgsql;

CREATE TRIGGER update_column BEFORE INSERT OR UPDATE ON cats
FOR EACH ROW
EXECUTE PROCEDURE update_column();

So for instance, if you insert a new "column b" value

INSERT INTO cats (b) VALUES ('byebye');

you'll get a='byebye' and b='byebye', and if you update this value

UPDATE cats SET b = 'goodbye' WHERE a = 'byebye';

you'll get a='goodbye' and b='goodbye'. Anyway, this is just an example. 
I suggest that you look at the CREATE TRIGGER page in the documentation


http://www.postgresql.org/docs/9.2/static/sql-createtrigger.html

as you can also consider conditional triggers to be executed, for 
example, only when the b column is updated.


Hope it can help.

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] process deadlocking on its own transactionid?

2013-07-24 Thread Giuseppe Broccolo

Hi Kevin,

Il 23/07/2013 21:54, Kevin Goess ha scritto:
We're seeing a problem with some of our processes hanging on locks. 
 The select below makes it look like it's *waiting* for a ShareLock on 
transactionid, but it *has* an ExclusiveLock on the same value in 
virtualxid.


You are looking at a 'blocked' (not a 'blocking') process, so you can see the 
lock mode the blocked process is waiting for, but you can't say anything about 
the lock mode actually held. You have to compare two (different) processes 
with the same transaction id to match blocked and blocking processes.


Following http://wiki.postgresql.org/wiki/Lock_Monitoring it is possible to 
monitor locks properly. For instance, I created the following query, which 
lists all blocked processes together with the respective blocking processes 
and the lock modes requested on each side.


SELECT bl.pid AS locked_pid,
       a.usename AS locked_user,
       a.current_query AS locked_query,
       bl.virtualtransaction AS locked_vxid,
       bl.transactionid AS locked_xid,
       kl.pid AS locking_pid,
       ka.usename AS locking_user,
       ka.current_query AS locking_query,
       kl.virtualtransaction AS locking_vxid,
       kl.transactionid AS locking_xid,
       bl.mode AS locked_expected_lock,
       kl.mode AS locking_expected_lock
FROM pg_catalog.pg_locks bl
     JOIN pg_catalog.pg_stat_activity a ON bl.pid = a.procpid
     JOIN (pg_catalog.pg_locks kl
           JOIN pg_catalog.pg_stat_activity ka ON kl.pid = ka.procpid)
       ON bl.transactionid = kl.transactionid AND bl.pid != kl.pid
WHERE NOT bl.granted;
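As an aside — not applicable to the versions discussed here — PostgreSQL 9.6 
and later ship a built-in helper, pg_blocking_pids(), that makes this much 
shorter; a sketch:

```sql
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```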

Hope it can help.

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index for Levenshtein distance (better format)

2013-07-22 Thread Giuseppe Broccolo

Hi Janek,

Il 21/07/2013 16:46, Janek Sendrowski ha scritto:

Hi,

I'm searching for a suitable index for my query, which compares strings by 
Levenshtein distance.

I read that a prefix index would fit, but I don't know how to build it. I only 
know that it's supported by GiST.

I couldn't find any instructions, so I hope someone can help me.


Consider this simple example: suppose you have a table "table1" with a 
column "string" of type TEXT, and you are interested in finding "string" 
values equal to 'ciao', basing your query on Levenshtein distance. Then 
improve your query by creating a GiST-based index.


First of all, create the extensions that provide the levenshtein() function 
and GiST indexing for plain types (i.e. other than PostGIS objects):


CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION btree_gist;

You can check that everything works by trying a query like:

SELECT string FROM table1 WHERE levenshtein(string,'ciao') = 0;

which finds the "string" values equal to 'ciao'. Now create the index to 
improve this query:


CREATE INDEX lev_idx ON table1 USING GIST(levenshtein(string,'ciao'));

Then relaunch the same query. I made a simple check on a very simple table1 
with ten rows, using EXPLAIN ANALYZE to examine the performance: the query 
time dropped from 1.077 ms to 0.677 ms after the GiST index creation.
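Note that such an expression index only helps for the constant it was built 
with. For bounded-distance searches, fuzzystrmatch (9.1 and later) also 
provides levenshtein_less_equal(), which can stop computing early once the 
given bound is exceeded — a sketch on the same assumed table:

```sql
-- find strings within Levenshtein distance 2 of 'ciao'
SELECT string
FROM table1
WHERE levenshtein_less_equal(string, 'ciao', 2) <= 2;
```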


Hope it helps,

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it  |www.2ndQuadrant.it



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] last_vacuum field is not updating

2013-07-15 Thread Giuseppe Broccolo

Hi Al,

Il 15/07/2013 16:58, AI Rumman ha scritto:
Why is vacuuming a table not updating the last_vacuum field of 
pg_stat_user_tables?


To vacuum a table, one must ordinarily be the table's owner. However, 
database owners are allowed to vacuum all tables in their databases.
VACUUM will skip over any tables that the calling user does not have 
permission to vacuum.


Are you sure you are the table's owner?
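A quick way to check (the table name is a hypothetical placeholder):

```sql
SELECT tablename, tableowner
FROM pg_tables
WHERE tablename = 'your_table';  -- substitute your table's name
```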

Regards,

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg recovery

2013-07-11 Thread Giuseppe Broccolo

Hi Jayadevan,

Il 11/07/2013 09:39, Jayadevan M ha scritto:


Hi,

I have postgresql streaming replication set up. I forgot to add an 
entry for trigger_file in recovery.conf. So I added that entry and did 
a pg_ctl reload. Is there a way to confirm that the entry has been 
read by the server? Any view/function?




recovery.conf is read only at server start, so pg_ctl reload does not pick up 
the new trigger_file entry. The only way to be sure your configuration file 
has been read is to stop and restart the PostgreSQL service.


Regards,

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it



Re: [GENERAL] Removing duplicates

2013-07-10 Thread Giuseppe Broccolo

Dear Johann,

I tried (with PostgreSQL 9.2) to run the two DELETE statements you 
describe in your mail (the first based on the "id" field, the second on 
the ctid) and they work! I have to point out that the DELETE based on the 
"id" field DELETES ALL RECORDS having at least one duplicate: the condition 
"q1.id != q.id" is symmetric, so every row in a duplicate group matches it, 
without exception, and no copy is kept.
If you use the DELETE based on the ctid ordering, you delete only the later 
copies of records with the same "rart_id", keeping the record with the 
lowest ctid.


Remember that if you run the first DELETE query you remove more data 
than you expect.
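A toy demonstration of the ctid-based variant, reduced to a single key column 
on a hypothetical temp table:

```sql
CREATE TEMP TABLE t (rart_id text);
INSERT INTO t VALUES ('A'), ('A'), ('B');

DELETE FROM t q
WHERE EXISTS (SELECT 1 FROM t q1
              WHERE q1.ctid < q.ctid
                AND q1.rart_id = q.rart_id);
-- one 'A' and one 'B' remain
```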


Regards,

Giuseppe.

Il 10/07/2013 09:11, Johann Spies ha scritto:

I have used this method successfully on another table but this one is not
working.

I have a table of nearly 800 million records with some duplicates in.

Here is an example:

select rart_id, r9, ra, ry, rw, rv, rp, rs, ri
from isi.rcited_ref
where rart_id = 'A1986D733500019';

renders a result of 72 records. When I do

select count(id), rart_id, r9, ra, ry, rw, rv, rp, rs, ri
from isi.rcited_ref
where rart_id = 'A1986D733500019'
group by rart_id, r9, ra, ry, rw, rv, rp, rs, ri;

It shows that there are 36 duplicates with this rart_id.

So as a test I did the following (the id-field is the primary key):

DELETE FROM isi.rcited_ref q
WHERE EXISTS (SELECT 1 FROM isi.rcited_ref q1
              WHERE q1.id != q.id
                AND q.rart_id = q1.rart_id
                AND q.r9 = q1.r9
                AND q.ra = q1.ra
                AND q.ry = q1.ry
                AND q.rw = q1.rw
                AND q.rv = q1.rv
                AND q.rp = q1.rp
                AND q.rs = q1.rs
                AND q.rart_id = 'A1986D733500019'
             );

But that deletes none.  And I cannot see what went wrong.

I have also tried the same query with ctid without success:

DELETE FROM isi.rcited_ref q
WHERE EXISTS (SELECT 1 FROM isi.rcited_ref q1
              WHERE q1.ctid < q.ctid
                AND q.rart_id = q1.rart_id
                AND q.r9 = q1.r9
                AND q.ra = q1.ra
                AND q.ry = q1.ry
                AND q.rw = q1.rw
                AND q.rv = q1.rv
                AND q.rp = q1.rp
                AND q.rs = q1.rs
                AND q.rart_id = 'A1986D733500019'
             );


The size of the table makes it difficult to use a 'group by' method to
delete all duplicates.

What am I doing wrong?

Regards
Johann
--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)