Re: [GENERAL] Shared memory changes in 9.4?

2014-05-27 Thread Maciek Sakrejda
On Tue, May 27, 2014 at 8:23 PM, Robert Haas  wrote:
>
> I think it would be good to understand why initdb isn't getting this
> right.  Did you run initdb outside the LXC container, where /dev/shm
> would have worked, but then run postgres inside the LXC container,
> where /dev/shm does not work?  I ask because initdb is supposed to be
> doing the same thing that postgres does, so it really ought to come to
> the same conclusion about what will and won't work.

You're absolutely right--I thought initdb was containerized as well, but
I looked at our code and this is exactly what's happening.

> We've already fixed a bunch of DSM-related issues
> as a result of the fact that the default *isn't* none, and I dunno how
> many of those we would have found if the default had been none.

For what it's worth, +1. I'm not sure whether or not we had a good reason
for doing initdb outside the container, but it's definitely an aberrant
configuration, and should not be taken as evidence that the current default
is a problem.
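
For anyone else hitting the same mismatch, a minimal sketch of checking and
overriding the value inside the container ('sysv' is simply what happened to
work for us, not a general recommendation):

    -- run inside the container to see what the server is actually using
    SHOW dynamic_shared_memory_type;

and in postgresql.conf:

    dynamic_shared_memory_type = sysv    # or 'none' to disable the facility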


Re: [GENERAL] Shared memory changes in 9.4?

2014-05-27 Thread Robert Haas
On Tue, May 27, 2014 at 8:22 PM, Maciek Sakrejda  wrote:
> On Mon, May 26, 2014 at 12:24 AM, Andres Freund 
> wrote:
>>
>> Any chance you're using a 9.3 configuration file instead of the one
>> generated by initdb?
>> dynamic_shared_memory_type defaults to 'posix' if not specified in the
>> config file (on platforms supporting it). If initdb detects that 'posix'
>> can't be used it'll emit a different value. If you're copying the config
>> from 9.3 and your environment doesn't support posix shm that'll cause
>> the above error.
>> I still think dynamic_shared_memory_type should default to 'none'
>> because of such problems
>
> It works with 'none' and 'sysv'--I think the issue is that technically our
> environment does support 'posix', but '/dev/shm' is indeed not mounted in
> the LXC container, leading to a discrepancy between what initdb decides and
> what's actually possible. Thanks for your help.

I think it would be good to understand why initdb isn't getting this
right.  Did you run initdb outside the LXC container, where /dev/shm
would have worked, but then run postgres inside the LXC container,
where /dev/shm does not work?  I ask because initdb is supposed to be
doing the same thing that postgres does, so it really ought to come to
the same conclusion about what will and won't work.

With regard to Andres' proposal, I'm not that keen on setting
dynamic_shared_memory_type='none' by default.  Would we leave it that
way until we get in-core users of the facility, and then change it?  I
guess that'd be OK, but frankly if enabling dynamic_shared_memory_type
by default is causing us many problems, then we'd better reconsider
the design of the facility now, before we start adding more
dependencies on it.  We've already fixed a bunch of DSM-related issues
as a result of the fact that the default *isn't* none, and I dunno how
many of those we would have found if the default had been none.  I
tend to think DSM is an important facility that we're going to be
wanting to build on in future releases, so I'm keen to have it
available by default so that we can iron out any kinks before we get
too far down that path.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] pgtune and massive shared_buffers recommendation

2014-05-27 Thread Shaun Thomas

On 05/21/2014 09:39 AM, Stuart Bishop wrote:


I've got some boxes with 128GB of RAM and up to 750 connections, just
upgraded to 9.3 so I'm revising my tuning. I'm getting a
recommendation from pgtune to bump my shared_buffers up to 30GB and
work_mem to 80MB. Is a shared_buffers this high now sane?


The pgtune utility is from Greg Smith, so it can generally be considered 
sane. However, it's based very much on "rule of thumb" calculations that 
aren't really valid for recent hardware. It was designed back in the days 
when a 32GB machine was "big". It has been updated since, but you'll notice 
it hasn't seen a significant update since PostgreSQL 9.1.


It basically boils down to this:

1. The previous rule of "8GB is the max" reflected an actual limitation at the time.
2. With the limitation removed, people found greater values didn't 
necessarily increase performance.
3. Larger values can be dangerous due to increased demand on 
checkpoints, and more potential for dirty memory to cause write IO storms.

4. The OS is usually much better about managing cached pages.
5. PG currently cannot protect itself from double-buffering, so every 
block in shared_buffers may *also* be present in the OS page cache.


Given all of that, I could not personally recommend anything over 8GB 
for all but the edgiest of edge-cases. The only real way to tell for 
sure is with simulated and real-world tests. I'd start at 4GB, and try 
adding 1GB at a time until benchmarks stop increasing with your working 
data size. I suspect your results will top out before you even hit 8GB 
anyway.
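
As a concrete, deliberately conservative starting point along those lines for 
a 128GB box (the numbers are only illustrative, not a recommendation for your 
workload):

    shared_buffers = 4GB    # raise in 1GB steps while benchmarking, per the above
    work_mem = 80MB         # the pgtune figure; remember it is multiplied across
                            # sorts/hashes in up to 750 connections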


--
Shaun Thomas
OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] Receiving many more rows than expected

2014-05-27 Thread Vincent de Phily
On Monday 12 May 2014 10:10:36 David G Johnston wrote:
> Did you try rewriting the query to avoid using an IN expression?
> 
> UPDATE foo SET processing = 't'
> FROM (
> SELECT id FROM foo WHERE processing = 'f' ORDER BY id ASC LIMIT 5000 FOR
> UPDATE
> ) src
> WHERE foo.id = src.id;
> 
> The workaround I mentioned above said that a CTE was needed but I'm thinking
> that a simple FROM would be just as effective.  Otherwise:
> 
> UPDATE foo SET processing = 't'
> FROM (
> WITH ids_to_update AS (
> SELECT id FROM foo WHERE processing = 'f' ORDER BY id ASC LIMIT 5000 FOR
> UPDATE
> )
> SELECT id FROM ids_to_update
> ) src
> WHERE foo.id = src.id;

As it happens, I've done a fair bit of refactoring in my code (but not the 
actual query), and now I cannot reproduce the bug anymore :/ The refactoring 
had to do with taking status queries to a different connection, and changing 
the timing of calling the problematic query and interruption by other threads, 
to increase throughput.


-- 
Vincent de Phily





[GENERAL] Code for user-defined type

2014-05-27 Thread Paul Jones
I have written a user-defined type that allows direct import and printing of
DB2 timestamps. It does correctly import and export DB2 timestamps, but
I'm wondering if someone could tell me if I made any mistakes in
the C code, particularly w.r.t. memory leaks or non-portable constructs.


I'm doing this on 9.3.4.

Thanks,
PJ


-- SQL ---


CREATE TYPE db2tstz;

CREATE FUNCTION db2tstzin(cstring) RETURNS db2tstz
    AS '/home/paul/src/pgproj/types/db2tstz.so', 'db2tstzin'
LANGUAGE C STRICT;

CREATE FUNCTION db2tstzout(db2tstz) RETURNS cstring
    AS '/home/paul/src/pgproj/types/db2tstz.so', 'db2tstzout'
LANGUAGE C STRICT;

CREATE TYPE db2tstz (
    INPUT = db2tstzin,
    OUTPUT = db2tstzout,
    LIKE = timestamptz
);

CREATE TABLE foo
(
    aaa    INT,
    bbb    DB2TSTZ
);

INSERT INTO foo VALUES (1, '2013-10-03-17.22.18.823581'),
            (2, '2010-04-06-13.16.47.174372');

 C Code 

#include "postgres.h"
#include 
#include "fmgr.h"
#include "utils/formatting.h"
#include "utils/datetime.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

/* by value */

PG_FUNCTION_INFO_V1(db2tstzin);

Datum
db2tstzin(PG_FUNCTION_ARGS)
{
    char       *date_txt = PG_GETARG_CSTRING(0);
    Timestamp   tsp;

    tsp = DirectFunctionCall2(to_timestamp,
        cstring_to_text(date_txt),
        cstring_to_text("YYYY-MM-DD-HH24.MI.SS.US"));

    PG_RETURN_DATUM(tsp);
}

PG_FUNCTION_INFO_V1(db2tstzout);

Datum
db2tstzout(PG_FUNCTION_ARGS)
{
    TimestampTz tsp = PG_GETARG_TIMESTAMPTZ(0);
    text       *result;

    result = (text *)DirectFunctionCall2(timestamp_to_char,
        (int)&tsp,
        cstring_to_text("YYYY-MM-DD-HH24.MI.SS.US"));

    PG_RETURN_CSTRING(text_to_cstring(result));
}




Re: [GENERAL] Downtime-free 'alter table set tablespace'

2014-05-27 Thread Vincent de Phily
On Friday 23 May 2014 15:12:47 Chris Ernst wrote:
> On 05/23/2014 08:57 AM, Vincent de Phily wrote:
> > I need to reduce downtime to a minimum, so I can't afford to let "alter
> > table set tablespace" take an exclusive lock on the table for the 2h
> > it'll take to copy the data.
> 
> You might look at pg_repack (https://github.com/reorg/pg_repack).  The
> most recent version added the ability to repack a table (or just
> indexes) to a new tablespace.  It won't be fast as it will essentially
> rebuild the entire table.  But it only needs an exclusive lock for a
> brief moment, so there's virtually zero down time and no data loss.
> 
>   - Chris

That's pretty much what I was looking for, thanks. It's not perfect because it 
still requires a fair amount of temporary space on the origin tablespace, but 
it does the job in a cleaner way than what I was attempting.
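
For the archives, the invocation this implies looks roughly like the following
(option names as of pg_repack 1.2 -- verify them against your installed
version; the table, tablespace and database names are placeholders):

    $ pg_repack --table=big_table --tablespace=new_ts --moveidx mydb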

Thanks Jerry too for the slony suggestion, I didn't think slony (which I've 
used a bit) supported replicating to the same db in a different table name.

It'd still be nice to get support in core for "set tablespace concurrently" 
because it has the potential to be much more efficient, but beggars can't be 
choosers :p

-- 
Vincent de Phily





Re: [GENERAL] "Interrupt requested" in postgresql-DAY.log

2014-05-27 Thread Louis Battuello

On May 27, 2014, at 10:52 AM, Tom Lane  wrote:

> Louis Battuello  writes:
>> I have a PostgreSQL 9.3.4 database running on CentOS 6 with PostGIS 2.1.2. 
>> Each night, I run a cron job to dump (pg_dump) a few schemas for development 
>> snapshots. Everything runs without error.
> 
>> However, it seems that the dump process consistently results in a single 
>> line in the postgresql-*.log file. Oddly enough, this entry seems to ignore 
>> the log_line_prefix configuration parameter.
> 
>> postgres@db:/var/lib/pgsql/9.3/data $ cat pg_log/postgresql-Fri.log 
>> Interrupt requested
> 
> That probably represents some non-Postgres bit of code deciding to bleat
> to stdout or stderr.  The logging-collector mechanism is designed to catch
> such output coming from a backend process, but it can't stick a
> log_line_prefix on it.
> 
> What exactly is bleating, I can't say.  A quick grep confirms that there
> is no such string in the Postgres sources, but I dunno about PostGIS.
> If you've got any code in plperl, plpython, etc, the culprit might lurk
> somewhere there.  glibc might even be to blame, though I don't think it
> ordinarily prints error messages.
> 
>   regards, tom lane
> 
> 

Thanks, Tom. 

This does in fact look like something new bubbling up from PostGIS 2.1.2 and 
2.1.3.

https://github.com/postgis/postgis/blob/svn-2.1/postgis/postgis_module.c

I don’t see it triggered in 2.1.1, so I’ll take my observation over to the GIS 
folks.

Regards,
Louis



Re: [GENERAL] Conversion from CHAR HEX

2014-05-27 Thread David G Johnston
saqibrafique wrote
> hi guys,
> I am trying to convert a simple char value to HEX but I am getting Error.
> 
> MyTable:
> 
> CREATE TABLE mytable (from_ip CHAR(20), to_ip CHAR(20)); 
> 
> I have below values in the Table:
> 
> 
> fm_db_Server1=# select * from operator;
>      from_ip       |      to_ip
> -------------------+-------------------
>  202.153.087.128   | 202.153.087.159 
>  196.192.015.000   | 196.192.015.063 
> (2 rows)
> 
> 
> I am trying to extract the 1st part of the IP and convert to the HEX as
> below:
> 
> 
> fm_db_Server1=# select to_hex(to_number(substr(from_ip, 1, 3), '999')) as price from
> operator;
> 
> ERROR:  function to_hex(numeric) does not exist
> LINE 1: select to_hex(to_number(substr(from_ip, 1, 3), '999'))  as p...
>                ^
> HINT:  No function matches the given name and argument types. You might
> need to add explicit type casts.
> 
> 
> 
>  I am not able to figure out what is wrong here. Can anybody point out
> what is wrong?
> Thanks in advance.

The error says that the function to_hex(numeric) does not exist.

Looking at the documentation the only to_hex function I found requires
either integer or bigint input.

Since numeric can be cast to integer/bigint the solution is simple.
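
For example, against the table definition quoted above, an explicit cast
makes it work:

    select to_hex(to_number(substr(from_ip, 1, 3), '999')::integer) as price
    from operator;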

David J.









Re: [GENERAL] Need pg_dump not to dump extension-created triggers

2014-05-27 Thread Shaun Thomas

On 05/21/2014 10:18 PM, Moshe Jacobson wrote:


I'd like to know if the following changes can be made, as I think they
will fix this issue:

 1. Add command ALTER EXTENSION ... ADD TRIGGER.
 2. Modify pg_dump not to dump triggers that are dependent on an extension.


Haha. I remember this from the Unconference day at PGCon on Saturday.

For anyone who wasn't there, I think the consensus was that the 
extension author could "fix" this by creating the triggers as not owned 
by the extension, but owned by an extension role. That way, dropping the 
role would cascade to the triggers, but the triggers would still be 
created normally as dependent objects of the extension, hence no errors.


However, it would be a lot nicer if we had some explicit syntax to avoid 
this kind of situation.


--
Shaun Thomas
OptionsHouse, LLC | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] pg_upgrade from 8.3 to 9.1 and Flag --disable-integer-datetimes

2014-05-27 Thread Bruce Momjian
On Tue, May 27, 2014 at 04:29:19PM +0200, Meik Weißbach wrote:
> Hello all,
> 
> we want to upgrade our database from Postgres 8.3.23 to 9.1.12 using
> pg_upgrade. The documentation on pg_upgrade
> (http://www.postgresql.org/docs/9.1/static/pgupgrade.html) states
> the following:
> 
> "Also, the default datetime storage format changed to integer after
> PostgreSQL 8.3. pg_upgrade will check that the datetime storage
> format used by the old and new clusters match. Make sure your new
> cluster is built with the configure flag
> --disable-integer-datetimes."
> 
> We have a SLES 11 system. We installed Postgres 9.1.12 using Yast.
> We assume that our installation was built WITHOUT
> --disable-integer-datetimes.
> 
> The pg_upgrade is running without any complaints. Since we assume
> that our 9.1-server is built without disable-integer-datetimes, we
> expect pg_upgrade to fail or at least give some kind of notice.
> 
> What is the expected behavior of pg_upgrade in the case that
> 9.1-server is not built with disable-integer-datetimes?
> 
> How do we determine, whether or not a server is built with
> disable-integer-datetimes?

pg_upgrade --check will definitely complain about a timestamp storage
mismatch.  Odds are your packager built 8.3 with integer timestamps. 
Run pg_controldata on the 8.3 cluster and look at:

Date/time type storage:   64-bit integers

This shows integer timestamps.
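
For example (the binary and data-directory paths below are only illustrative
-- adjust them for your packaging):

    $ /usr/lib/postgresql-8.3/bin/pg_controldata /var/lib/pgsql/8.3/data | grep 'Date/time'
    Date/time type storage:               64-bit integers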

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [GENERAL] "Interrupt requested" in postgresql-DAY.log

2014-05-27 Thread Tom Lane
Louis Battuello  writes:
> I have a PostgreSQL 9.3.4 database running on CentOS 6 with PostGIS 2.1.2. 
> Each night, I run a cron job to dump (pg_dump) a few schemas for development 
> snapshots. Everything runs without error.

> However, it seems that the dump process consistently results in a single line 
> in the postgresql-*.log file. Oddly enough, this entry seems to ignore the 
> log_line_prefix configuration parameter.

> postgres@db:/var/lib/pgsql/9.3/data $ cat pg_log/postgresql-Fri.log 
> Interrupt requested

That probably represents some non-Postgres bit of code deciding to bleat
to stdout or stderr.  The logging-collector mechanism is designed to catch
such output coming from a backend process, but it can't stick a
log_line_prefix on it.

What exactly is bleating, I can't say.  A quick grep confirms that there
is no such string in the Postgres sources, but I dunno about PostGIS.
If you've got any code in plperl, plpython, etc, the culprit might lurk
somewhere there.  glibc might even be to blame, though I don't think it
ordinarily prints error messages.
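
If you want to chase it yourself, a recursive grep over whichever source
trees you have checked out is the quickest test (paths here are only
examples):

    $ grep -rn "Interrupt requested" postgresql-9.3.4/src/ postgis-2.1.2/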

regards, tom lane




[GENERAL] "Interrupt requested" in postgresql-DAY.log

2014-05-27 Thread Louis Battuello
Hi,

I have a PostgreSQL 9.3.4 database running on CentOS 6 with PostGIS 2.1.2. Each 
night, I run a cron job to dump (pg_dump) a few schemas for development 
snapshots. Everything runs without error.

However, it seems that the dump process consistently results in a single line 
in the postgresql-*.log file. Oddly enough, this entry seems to ignore the 
log_line_prefix configuration parameter.

postgres@db:/var/lib/pgsql/9.3/data $ cat pg_log/postgresql-Fri.log 
Interrupt requested
postgres@db:/var/lib/pgsql/9.3/data $ grep log_line_prefix  postgresql.conf 
log_line_prefix = '< %t > ' # special values:
postgres@db:/var/lib/pgsql/9.3/data $ psql 
psql (9.3.4)
Type "help" for help.

postgres=# show log_line_prefix;
 log_line_prefix 
-----------------
 < %t > 
(1 row)

postgres=# \q

Any ideas where this could be coming from and why it isn’t prefixed?

Thanks,
Louis

Re: [GENERAL] pg_upgrade from 8.3 to 9.1 and Flag --disable-integer-datetimes

2014-05-27 Thread Adrian Klaver

On 05/27/2014 07:29 AM, Meik Weißbach wrote:

Hello all,

we want to upgrade our database from Postgres 8.3.23 to 9.1.12 using
pg_upgrade. The documentation on pg_upgrade
(http://www.postgresql.org/docs/9.1/static/pgupgrade.html) states the
following:

"Also, the default datetime storage format changed to integer after
PostgreSQL 8.3. pg_upgrade will check that the datetime storage format
used by the old and new clusters match. Make sure your new cluster is
built with the configure flag --disable-integer-datetimes."

We have a SLES 11 system. We installed Postgres 9.1.12 using Yast. We
assume that our installation was built WITHOUT --disable-integer-datetimes.

The pg_upgrade is running without any complaints. Since we assume that
our 9.1-server is built without disable-integer-datetimes, we expect
pg_upgrade to fail or at least give some kind of notice.

What is the expected behavior of pg_upgrade in the case that 9.1-server
is not built with disable-integer-datetimes?

How do we determine, whether or not a server is built with
disable-integer-datetimes?


As the postgres user do something like:

pg_controldata /usr/local/pgsql/data/


where the path is $DATA/ for your Postgres install

In the output should be:

Date/time type storage:   64-bit integers



Best regards

Meik Weißbach





--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] pg_upgrade from 8.3 to 9.1 and Flag --disable-integer-datetimes

2014-05-27 Thread Meik Weißbach

Hello all,

we want to upgrade our database from Postgres 8.3.23 to 9.1.12 using 
pg_upgrade. The documentation on pg_upgrade 
(http://www.postgresql.org/docs/9.1/static/pgupgrade.html) states the 
following:


"Also, the default datetime storage format changed to integer after 
PostgreSQL 8.3. pg_upgrade will check that the datetime storage format 
used by the old and new clusters match. Make sure your new cluster is 
built with the configure flag --disable-integer-datetimes."


We have a SLES 11 system. We installed Postgres 9.1.12 using Yast. We 
assume that our installation was built WITHOUT --disable-integer-datetimes.


The pg_upgrade is running without any complaints. Since we assume that 
our 9.1-server is built without disable-integer-datetimes, we expect 
pg_upgrade to fail or at least give some kind of notice.


What is the expected behavior of pg_upgrade in the case that 9.1-server 
is not built with disable-integer-datetimes?


How do we determine, whether or not a server is built with 
disable-integer-datetimes?


Best regards

Meik Weißbach




Re: Fwd: [GENERAL] libpq: indefinite block on poll during network problems

2014-05-27 Thread Tom Lane
Dmitry Samonenko  writes:
> On Tue, May 27, 2014 at 2:35 PM, Albe Laurenz wrote:
>> In PostgreSQL you can handle the problem of dying connections by setting
>> the
>> tcp_keepalives_* parameters (see
>> http://www.postgresql.org/docs/current/static/runtime-config-connection.html
>> ).
>> 
>> That should take care of the problem, right?

> I am afraid it won't help:

> 1. AFAIK, in Linux TCP keepalive is used on idle connections only. If not
> all data is transmitted - connection is not idle - keep alive timer is not
> started.
> 2. POLLHUP mask is used (while setting poll fds) to catch keep alive
> timeout. Sadly, libpq sets (POLLIN | POLLERR).

Would you provide some evidence for these claims?  If the keepalive stuff
didn't work, somebody would certainly have noticed by now.

Our general approach to network-error handling is that dropping a
connection is a last resort, and thus it's usually inappropriate to try to
force the network stack to fail more quickly than it was designed to do.
While you can override the keepalive timing if you insist, we won't
consider a patch that would make PG use something other than the network
stack's default settings by default, if you get my drift.
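
For reference, the client-side knobs are libpq's keepalives* connection
parameters; the values below are only an example of tightening them, not a
recommendation:

    host=dbserver dbname=mydb keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=5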

regards, tom lane




Re: [GENERAL] hidden junk files in ...data/base/oid/

2014-05-27 Thread Alvaro Herrera
Andrej Vanek wrote:
> Hello,
> 
> solved.
> This is not a postgres issue.
> 
> The system was used in HA-cluster with streaming replications.
> The hidden files I asked for were created probably by broken (killed)
> rsync. It uses such file-format for temporary files used during copying.
> 
> This rsync is used by master to slave database synchronization (full
> on-line backup of master database to slave node) before starting postgres
> in hot-standby mode on the slave node...

You not only have leftover first-order rsync temp files (.N.uvwxyz);
those temp files were later copied over by another rsync run, which
created temp files for the first-order temp files, leaving you with
second-order temp files (..N.uvwxyz.opqrst).  Not nice.  I wonder if
this is anywhere near sanity -- it looks like you're
copying stuff from one direction first, then failed over, then copied in
the opposite direction.  I would have your setup reviewed real closely,
to avoid data-corrupting configuration mistakes.  I have seen people
make subtle mistakes in their configuration, causing their whole HA
setups to be completely broken.
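
If you want to see how much space the leftovers are eating, something along
these lines will list them (the path is the per-database directory from the
original report; hidden regular files in there are not something Postgres
creates):

    $ cd /var/lib/pgsql/9.1/data/base/27610
    $ find . -maxdepth 1 -type f -name '.*' -exec du -ch {} +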

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] hidden junk files in ...data/base/oid/

2014-05-27 Thread Andrej Vanek
Hello,

solved.
This is not a postgres issue.

The system was used in HA-cluster with streaming replications.
The hidden files I asked for were created probably by broken (killed)
rsync. It uses such file-format for temporary files used during copying.

This rsync is used for master-to-slave database synchronization (a full
on-line backup of the master database to the slave node) before starting
postgres in hot-standby mode on the slave node...

Best Regards, Andrej


Fwd: [GENERAL] libpq: indefinite block on poll during network problems

2014-05-27 Thread Dmitry Samonenko
On Tue, May 27, 2014 at 2:35 PM, Albe Laurenz wrote:

> In PostgreSQL you can handle the problem of dying connections by setting
> the
> tcp_keepalives_* parameters (see
> http://www.postgresql.org/docs/current/static/runtime-config-connection.html
> ).
>
> That should take care of the problem, right?
>
> Yours,
> Laurenz Albe
>

I am afraid it won't help:

1. AFAIK, in Linux TCP keepalive is used on idle connections only. If not
all the data has been transmitted, the connection is not idle, so the
keepalive timer is not started.
2. The POLLHUP mask must be set (when building the poll fds) to catch a
keepalive timeout. Sadly, libpq only sets (POLLIN | POLLERR).

With regards,
   Dmitry Samonenko


Re: [GENERAL] Delete trigger and data integrity

2014-05-27 Thread Alban Hertroys
On 27 May 2014 12:25, Yvonne Zannoun
 wrote:
> CREATE OR REPLACE FUNCTION delete_records()
> RETURNS TRIGGER AS $$
> BEGIN
> delete from "TABLE";
> RETURN NEW;
> END;
> $$
> LANGUAGE plpgsql;

You can't return NEW in an ON DELETE trigger - there is no NEW record.
Since you're going with a STATEMENT trigger instead, that's not really
relevant anymore (no NEW _or_ OLD record, since statements aren't
necessarily involved with single records), but I thought I'd mention
that slight oversight ;)

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




[GENERAL] memory issues with BYTEAs in JSON and hstore

2014-05-27 Thread Felix Kunde
Hey,
I've developed a database auditing/versioning tool using the JSON data type 
(http://github.com/fxku/audit) and I'm doing some tests now. 
Unfortunately I'm facing some problems when dealing with tables that store 
images as BYTEA. Some of them are around 15 MB in size.

My tool logs changes to JSON and can populate it back to views with 
json_populate_recordset and json_agg. When performing this procedure on tables 
containing BYTEAs I receive a 54000 error ("Cannot enlarge string buffer"). Is 
this because of json_agg or because of single rows? 

Executing to_json on the whole column that contains the binary data (size above 
500 MB) led to out-of-memory errors. The same goes for hstore. Executing these 
functions only on the biggest image was successful but froze my pgAdmin. When 
I encoded the BYTEA to TEXT before transforming it to JSON or hstore it worked. But 
trying json_populate_recordset still runs into memory problems (though EXPLAIN 
worked). 
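
For reference, the encoding step I mean is just wrapping the column before it
goes into to_json/json_agg; a sketch with made-up table and column names:

    SELECT json_agg(t)
    FROM (SELECT id, encode(img, 'base64') AS img FROM images) t;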

Do you think JSONB will solve my problems in the future?

Here is also a size comparison between the bytea column (and versions encoded 
to TEXT) and the JSON / hstore output, which I found kind of interesting:

operation | bytea | bytea->'escape' | bytea->'hex' | bytea->'base64'
----------|-------|-----------------|--------------|----------------
(raw)     | 15 MB | 40 MB           | 31 MB        | 21 MB
to_json() | 57 MB | 57 MB           | 31 MB        | 21 MB
hstore()  | 46 MB | 40 MB           | 31 MB        | 21 MB

Thanks in advance for any hints.
Cheers,
Felix




Re: [GENERAL] Delete trigger and data integrity

2014-05-27 Thread Yvonne Zannoun
Thank you very much, that answers my question.

And yes, I think you are right with the FOR EACH ROW/STATEMENT, I didn't
think that through for this example.
Thank you for your help!

Kind regards,

*Yvonne Zannoun*
Graduate Technical Consultant
Snowflake Software

*Tel: +44 (0) 23 80238 232*

Email: yvonne.zann...@snowflakesoftware.com
Website: www.snowflakesoftware.com
Twitter: @sflakesoftware 
Follow us on LinkedIn 

Registered in England & Wales. Registered Number: 4294244
-



On 27 May 2014 11:44, Albe Laurenz  wrote:

> Yvonne Zannoun wrote:
> > I have this question regarding delete triggers and how it affects data
> integrity.
> > So here goes: I have this trigger which deletes everything before I
> insert new rows.
> >
> > CREATE OR REPLACE FUNCTION delete_records()
> > RETURNS TRIGGER AS $$
> > BEGIN
> > delete from "TABLE";
> > RETURN NEW;
> > END;
> > $$
> > LANGUAGE plpgsql;
> >
> > CREATE TRIGGER delete_on_insert
> > BEFORE INSERT ON "TABLE"
> > FOR EACH ROW EXECUTE PROCEDURE delete_records();
> >
> > My question is what happens while this function is executed? Is there
> any chance the table can return
> > empty data between the delete and insert commands? Or does a trigger
> like this block this possibility
> > somehow?
>
> Since the trigger has to run in the same transaction as the INSERT, no
> concurrent transaction will be able to see the "dirty" state between
> the DELETE and the INSERT.
>
> Are you sure that you want the trigger FOR EACH ROW and not FOR EACH
> STATEMENT?
> If the INSERT statement inserts more than one row, the trigger will run
> multiple
> times and you will end up with only one row in the table.
>
> Yours,
> Laurenz Albe
>



Re: [GENERAL] Delete trigger and data integrity

2014-05-27 Thread Albe Laurenz
Yvonne Zannoun wrote:
> I have this question regarding delete triggers and how it affects data 
> integrity.
> So here goes: I have this trigger which deletes everything before I insert 
> new rows.
> 
> CREATE OR REPLACE FUNCTION delete_records()
> RETURNS TRIGGER AS $$
> BEGIN
> delete from "TABLE";
> RETURN NEW;
> END;
> $$
> LANGUAGE plpgsql;
> 
> CREATE TRIGGER delete_on_insert
> BEFORE INSERT ON "TABLE"
> FOR EACH ROW EXECUTE PROCEDURE delete_records();
> 
> My question is what happens while this function is executed? Is there any 
> chance the table can return
> empty data between the delete and insert commands? Or does a trigger like 
> this block this possibility
> somehow?

Since the trigger has to run in the same transaction as the INSERT, no
concurrent transaction will be able to see the "dirty" state between
the DELETE and the INSERT.

Are you sure that you want the trigger FOR EACH ROW and not FOR EACH STATEMENT?
If the INSERT statement inserts more than one row, the trigger will run multiple
times and you will end up with only one row in the table.
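
That is, something along these lines (a sketch: the function returns NULL
because a statement-level trigger has no NEW row and its return value is
ignored anyway):

    CREATE OR REPLACE FUNCTION delete_records()
    RETURNS TRIGGER AS $$
    BEGIN
        DELETE FROM "TABLE";
        RETURN NULL;  -- ignored for statement-level triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER delete_on_insert
    BEFORE INSERT ON "TABLE"
    FOR EACH STATEMENT EXECUTE PROCEDURE delete_records();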

Yours,
Laurenz Albe



Re: [GENERAL] libpq: indefinite block on poll during network problems

2014-05-27 Thread Albe Laurenz
Dmitry Samonenko wrote:
> I have an application which uses libpq for interaction with remote PostgreSQL 
> 9.2.4 server. Clients
> and Server nodes are running Linux and connection is established using TCPv4. 
> The client application
> has some small fault-tolerance features, which are activated when server 
> related problems are
> encountered.
> 
> One day some bad things happened with network layer hardware and, long story 
> short, host with PSQL
> server got isolated. All TCP messages routed to server node were NOT 
> delivered or acknowledged in any
> way. Client application got blocked in libpq code according to debugger.
> 
> I have successfully reproduced the problem in the laboratory environment. 
> These iptables commands
> should be run on the server node after some period of client <-> server 
> interaction:
> 
> # iptables -A OUTPUT -p tcp --sport 5432 -j DROP
> # iptables -A INPUT  -p tcp --dport 5432 -j DROP
> 
> 
> I made a glimpse over master branch of libpq sources and some questions 
> arose. Namely:
> 
> 1. Connection to PSQL server is made without an option to specify SO_RCVTIMEO 
> and SO_SNDTIMEO. Why is
> that? Is setting socket timeouts considered harmful?
> 
> 2. PQexec ultimately leads to PQwait, which after some function calls "lands" 
> in pqSocketCheck and
> pqSocketPoll. These 2 functions have parameter end_time. It is set (-1) for 
> PQexec scenario, which
> leads to infinite poll timeout in pqSocketPoll. Is it possible to implement 
> configurable timeout for
> PQexec calls? Is there some implemented features, which should be used to 
> handle situation like this?
> 
> Currently, I have changed Linux kernel tcp4 stack counters responsible for 
> retransmission, so OS
> actually closes socket after some period. This is detected by pqSocketPoll's 
> poll and libpq handles
> situation correctly - error is reported to my application. But it's just a 
> workaround.
> 
> So, this infinite poll situation looks like imperfection to me and I think it 
> should be considered as
> a bug. Is it?

In PostgreSQL you can handle the problem of dying connections by setting the
tcp_keepalives_* parameters (see 
http://www.postgresql.org/docs/current/static/runtime-config-connection.html).

That should take care of the problem, right?
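
The relevant postgresql.conf entries, with illustrative values (0 means "use
the operating system default"):

    tcp_keepalives_idle = 60        # seconds of idle time before sending probes
    tcp_keepalives_interval = 10    # seconds between probes
    tcp_keepalives_count = 5        # lost probes before the connection is considered dead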

Yours,
Laurenz Albe



[GENERAL] Delete trigger and data integrity

2014-05-27 Thread Yvonne Zannoun
Hello everyone,



I have this question regarding delete triggers and how it affects data
integrity.

So here goes: I have this trigger which deletes everything before I insert
new rows.



CREATE OR REPLACE FUNCTION delete_records()
RETURNS TRIGGER AS $$
BEGIN
delete from "TABLE";
RETURN NEW;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER delete_on_insert
BEFORE INSERT ON "TABLE"
FOR EACH ROW EXECUTE PROCEDURE delete_records();



My question is what happens while this function is executed? Is there any
chance the table can return empty data between the delete and insert
commands? Or does a trigger like this block this possibility somehow?


Thank you

Kind regards,
*Yvonne Zannoun*



[GENERAL] hidden junk files in ...data/base/oid/

2014-05-27 Thread Andrej Vanek
Hello,

I ran an upgrade from 9.1 to 9.3 on CentOS using pg_upgrade and the database
cluster size dropped from 77GB to 4.3GB. I wonder how this could happen. No
data was lost. This means I had about 70GB of junk files in my database...

I checked the top 10 biggest database tables: nearly the same size on the old
and new clusters, the biggest table about 2GB, the next 1GB, all other tables
smaller and smaller...
I checked the files of the biggest database (/var/lib/pgsql/9.1/data/base/27610):

# du  -sm * |awk '{s+=$1} END {print s}'
7107
# du  -sm . |sort -n
75264   .

So size of non-hidden database files is 1/10th of the whole database size.

Question: Where do the 70GB of hidden files in ...data/base/oid/ come
from? Which postgres process could generate them? Some missed
maintenance on my side? A bug? Has anybody else experienced such an issue?

Thanks, Andrej

-details: ...data/base/oid file listing shortened:

# ls -la
total 77069960
drwx--. 2 postgres postgres  32768 May 12 18:50 .
drwxr-xr-x. 5 postgres postgres   4096 May 12 11:47 ..
-rw---. 1 postgres postgres  354156544 Feb 25 12:06 .27623.17rLmT
-rw---. 1 postgres postgres  338952192 Feb 25 14:21 .27623.6dH1b6
-rw---. 1 postgres postgres5767168 Mar  7 16:00 ..27623.6dH1b6.6PrU4B
-rw---. 1 postgres postgres  411041792 Feb 25 15:07 .27623.aN42DG
-rw---. 1 postgres postgres  342884352 Mar  8 15:16 ..27623.aN42DG.0U5xfj
-rw---. 1 postgres postgres  343146496 Mar  8 13:13 ..27623.aN42DG.2WFmNo
-rw---. 1 postgres postgres  343408640 Mar  8 10:43 ..27623.aN42DG.384SXU
-rw---. 1 postgres postgres  357302272 Mar  8 05:26 ..27623.aN42DG.3hHjZ8
-rw---. 1 postgres postgres  360185856 Mar  7 18:19 ..27623.aN42DG.5lWta4
-rw---. 1 postgres postgres  343146496 Mar  8 10:12 ..27623.aN42DG.64lNVQ

...shortened...

-rw---. 1 postgres postgres 1005322240 Feb 25 15:38 .27731.2.JKYXGW
-rw---. 1 postgres postgres  359661568 Mar  9 14:52 ..27731.2.JKYXGW.3h8RuF
-rw---. 1 postgres postgres  331087872 Mar  9 07:37 ..27731.2.JKYXGW.3hK5aF
-rw---. 1 postgres postgres  359923712 Mar  9 09:29 ..27731.2.JKYXGW.3KA5Cq
-rw---. 1 postgres postgres  359923712 Mar  9 16:55 ..27731.2.JKYXGW.45nQei
-rw---. 1 postgres postgres  137363456 Mar  9 04:47 ..27731.2.JKYXGW.4zya2Z

...shortened...

-rw---. 1 postgres postgres  769916928 Feb 25 15:53 .27902.YboxvS
-rw---. 1 postgres postgres  671612928 Feb 20 10:01 .27902.YMEtoS
-rw---. 1 postgres postgres  159645696 Feb 25 16:24 .59866.Lkyxgs
-rw---. 1 postgres postgres  272629760 Feb 20 18:37 .59866.RTcUkC
-rw---. 1 postgres postgres  505151488 Feb 25 16:40 .59961.5BcZpK
-rw---. 1 postgres postgres   91750400 Feb 25 16:55 .60194.gUqSdJ
-rw---. 1 postgres postgres   8192 Apr  7 05:20 60592
-rw---. 1 postgres postgres   8192 Jan 31 13:03 60594
-rw---. 1 postgres postgres   8192 Apr  7 02:01 60596
-rw---. 1 postgres postgres   8192 Feb 28 14:44 60598
-rw---. 1 postgres postgres   8192 Apr  7 11:55 60600

...shortened...

-rw---. 1 postgres postgres 139264 May 12 12:08 702364
-rw---. 1 postgres postgres  24576 May  9 12:42 702364_fsm
-rw---. 1 postgres postgres   8192 May  9 12:40 702364_vm
-rw---. 1 postgres postgres  0 May  9 10:10 702369
-rw---. 1 postgres postgres 860160 May 12 12:08 702372
-rw---. 1 postgres postgres  24576 May  9 12:37 702372_fsm
-rw---. 1 postgres postgres   8192 May  9 12:42 702372_vm
-rw---. 1 postgres postgres   8192 May  9 10:10 702377
-rw---. 1 postgres postgres 499712 May 12 12:08 702381

...shortened...

-rw---. 1 postgres postgres  16384 May  9 14:34 704207
-rw---. 1 postgres postgres  16384 May  9 14:34 704208
-rw---. 1 postgres postgres   8192 May  9 14:34 704209
-rw---. 1 postgres postgres  16384 May  9 14:34 704210
-rw---. 1 postgres postgres  16384 May  9 14:34 704211
-rw---. 1 postgres postgres512 May  9 14:34 pg_filenode.map
-rw---. 1 postgres postgres 106804 May 12 18:50 pg_internal.init
-rw---. 1 postgres postgres  4 Jan 28 13:52 PG_VERSION


[GENERAL] libpq: indefinite block on poll during network problems

2014-05-27 Thread Dmitry Samonenko
I have an application which uses libpq for interaction with remote
PostgreSQL 9.2.4 server. Clients and Server nodes are running Linux and
connection is established using TCPv4. The client application has some
small fault-tolerance features, which are activated when server related
problems are encountered.

One day some bad things happened with network layer hardware and, long
story short, host with PSQL server got isolated. All TCP messages routed to
server node were NOT delivered or acknowledged in any way. Client
application got blocked in libpq code according to debugger.

I have successfully reproduced the problem in the laboratory environment.
These iptables commands should be run on the server node after some period
of client <-> server interaction:

# iptables -A OUTPUT -p tcp --sport 5432 -j DROP
# iptables -A INPUT  -p tcp --dport 5432 -j DROP


I made a glimpse over master branch of libpq sources and some questions
arose. Namely:

1. Connection to PSQL server is made without an option to specify
SO_RCVTIMEO and SO_SNDTIMEO. Why is that? Is setting socket timeouts
considered harmful?

2. PQexec ultimately leads to PQwait, which after some function calls
"lands" in pqSocketCheck and pqSocketPoll. These 2 functions have parameter
end_time. It is set (-1) for PQexec scenario, which leads to infinite poll
timeout in pqSocketPoll. Is it possible to implement configurable timeout
for PQexec calls? Is there some implemented features, which should be used
to handle situation like this?

Currently, I have changed Linux kernel tcp4 stack counters responsible for
retransmission, so OS actually closes socket after some period. This is
detected by pqSocketPoll's poll and libpq handles situation correctly -
error is reported to my application. But it's just a workaround.

So, this infinite poll situation looks like imperfection to me and I think
it should be considered as a bug. Is it?

With regards,
   Dmitry Samonenko