[GENERAL] AT TIME ZONE and interval arguments

2013-01-31 Thread Craig Ringer
Hi all

I'm mildly thrown by this:

regress=> SELECT TIME '04:00' AT TIME ZONE '01:00';
  timezone
-------------
 19:00:00-01
(1 row)

regress=> SELECT TIME '04:00' AT TIME ZONE (INTERVAL '01:00');
  timezone
-------------
 21:00:00+01
(1 row)

regress=> SELECT TIME '04:00' AT TIME ZONE (TEXT '01:00');
  timezone
-------------
 19:00:00-01
(1 row)


and was wondering if anyone knows why the sense of the offset is
reversed for typed intervals vs bare literal or text. Is this another
one of the issues caused by the various standards' disagreements about +
vs - time offsets?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Optimizing query?

2013-01-31 Thread Pavel Stehule
2013/1/31 Vincent Veyron vv.li...@wanadoo.fr:
 On Wednesday 30 January 2013 at 11:08 +, wolfg...@noten5.maas-noten.de
 wrote:
 Hi,

 I am trying to match items from 2 tables based on a common string.
 One is a big table which has one column with entries like XY123, ABC44, etc
 The table has an index on that column.
 The second table is, typically, much smaller

 select  from tab1, tab2 where tab1.code = tab2.code;

 This works fine and fast.
 Now, as a variant, I have some entries like XY423A, XY423B, GF55A, GF55D in 
 the
 big table and want them to match XY423, GF55 in the second table

 Variants I have tried

 select   from tab1, tab2 where tab1.code ~ (tab2.code||'($|[A-Z])');
 select   from tab1, tab2 where tab1.code ~ ('^'||tab2.code||'($|[A-Z])');


 Have you tried the substring function?

 select   from tab1, tab2 where substring(tab1.code from 1 for 5) =
 tab2.code


 both take an enormous time. In the better case that I can subset (e.g. all 
 candidates in table 2
 share initial AX) I get back to manageable times by adding
   and tab1.code ~ '^AX'
 into the recipe. Actual runtime with about a million entries in tab1 and 800 
 entries in tab2
 is about 40 seconds.

Any join whose condition depends on a function result can be very
slow, because the estimates will be off and repeated function
evaluation is just expensive.

You can try using a functional index:

create index on tab1 ((substring(code from 1 for 5)));
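To make that concrete: the expression index has to live on the big table (tab1), and the join must use exactly the same expression for the planner to consider the index. A sketch only, assuming for illustration a fixed 5-character prefix (the sample codes XY423 vs GF55 suggest the real prefix length varies, so the expression would need to match your actual key rule):

```sql
-- Expression index on the big table; the join condition below
-- uses the identical expression so the index can be matched.
CREATE INDEX tab1_code_prefix_idx ON tab1 ((substring(code from 1 for 5)));
ANALYZE tab1;

SELECT t1.*
FROM tab2 t2
JOIN tab1 t1 ON substring(t1.code from 1 for 5) = t2.code;
```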

Regards

Pavel Stehule


 Regards
 Wolfgang Hamann







 --
 Regards, Vincent Veyron
 http://marica.fr/site/demonstration
 Software for managing legal disputes and insurance claims







Re: [GENERAL] naming of wal-archives

2013-01-31 Thread Neil Worden
 If your command does overwrite, then the server currently emitting the
 8D files will become unrecoverable once those files start getting
 overwritten.  If it refuses to overwrite, but returns a zero status,
 then the server currently emitting 6D would become unrecoverable once
 it reaches 8D and its archived files are not actually being archived
 but are getting deleted from the local pg_xlog anyway.


 Would it not be easier to archive the different servers to different
 directories and eliminate the possibility of name collision between
servers?

Easier?  I would say that that is the only sane way of doing it.  I
was pointing out the consequences of messing it up.  A proper
archive_command will save you from some self-inflicted disasters, but
that does not mean I'm recommending that you should invite those
disasters on yourself.

If the original author is in a production environment, he desperately
needs to figure out what is going on, especially so if archive_command
is not tested and verified to obey its contract.

Cheers,

Jeff

Thanks for your answers.

Yes, we are in a production environment and there are definitely no two
masters writing to the wal-archive directory, at least none that I am aware
of. And I can assure you that we are not intentionally working on inviting
disasters either :-).

Point is, the ..6D.. line started showing up exactly after I set up the
archive-command (which is, btw: archive_command = 'test ! -f
/data/backup/walbackup/%f && cp %p /data/backup/walbackup/%f', running on
Ubuntu Linux Server)
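For what it's worth, the behaviour of that command pattern can be checked in isolation. This sketch uses throwaway paths under /tmp and a hypothetical segment name (not the real archive directory) to show the contract: the second attempt refuses to overwrite and exits non-zero, which is what makes the server retry instead of losing an archived segment.

```shell
# Simulate the archive_command contract with throwaway paths.
DST=/tmp/walbackup_demo
SRC=/tmp/xlog_demo
mkdir -p "$DST" "$SRC"
f=000000010000008E0054               # hypothetical segment name
echo "segment data" > "$SRC/$f"

# First run: file absent in DST, so the command copies and succeeds.
if test ! -f "$DST/$f" && cp "$SRC/$f" "$DST/$f"; then
    echo "first attempt: archived (exit 0)"
fi

# Second run: the file already exists, so test fails, cp never runs,
# and the command as a whole exits non-zero.
if ! { test ! -f "$DST/$f" && cp "$SRC/$f" "$DST/$f"; }; then
    echo "second attempt: refused (non-zero exit)"
fi
```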

The situation is as follows:

All concerned machines are running 9.2.2 64-bit on Ubuntu Linux Server
12.10, installed from source, all following exactly the same procedure. We
have had a hot-standby running at a different location over a rather thin
line since version 9.1 came out. That worked flawlessly; we were only
bitten by autovacuums to prevent XID wraparound, which generated relatively
high wal-volume, and we were not sure whether the network connection could
keep up before wal-files were deleted. Since we had to physically transfer
a backup once for other reasons, we set wal_keep_segments to 8192 to have
enough fallback-time.

The total size of the database is currently at 313 GB and we are in the
process of rethinking our backup/emergency-strategy (currently daily full
dumps + hot-standby for read-only queries in the other location + one
hot-standby in the same office).

So we set up another machine, intended to become another hot-standby
later, but for now we just use it to experiment with pg_receivexlog. Since
our current backup-strategy does not allow for PIT-recovery and we have
been thinking about wal-archiving all along, we decided to implement it,
and that's what we did. Now we have the two lines in the archive-directory.

Could the high number of wal_keep_segments have an impact?
Does the fact that there already were a lot of existing wal-files when I
set up archiving and the archive-command have an impact?

Jeff, you wrote:

 And how would I restore the needed file names for recovery
 if I decide to keep one base-backup and then a very long chain of
 wal-files ?

There should be no need for that.

When you said there would be no need for that, did you mean restoring the
files for recovery, or keeping a base-backup and the chain of wal-files?

I understand that the archive-command is responsible for not overwriting
wal-files. But if that situation occurs, and if I understand you correctly
it will, what do I do?
If the wal-archive files will be overwritten at some point in time, how is
it possible to keep only one single base-backup at time-point t and from
then on store only the following wal-files (however inconvenient that may
be) to be able to restore to any point in time after time t?
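(For reference, the usual shape of such a restore on 9.2 is a base backup plus a recovery.conf in the restored data directory; the paths and target time below are placeholders, not taken from this setup:)

```
# recovery.conf (PostgreSQL 9.2) -- hypothetical paths and target
restore_command = 'cp /data/backup/walbackup/%f %p'
recovery_target_time = '2013-02-01 12:00:00'
```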

Thanks,

Neil


Re: [GENERAL] Optimizing select count query which often takes over 10 seconds

2013-01-31 Thread Alexander Farber
On Wed, Jan 30, 2013 at 2:06 PM, Kevin Grittner kgri...@ymail.com wrote:
 update pref_users set medals = 0 where medals > 0;

Thank you all for your insightful comments

This has cured my cronjob




Re: [GENERAL] naming of wal-archives

2013-01-31 Thread Neil Worden
Btw, ps shows:

postgres@darkblue:/data/pgdata/pg_xlog$ ps aux | grep post
postgres 11496  0.1  0.9 161018232 3696076 ?   SJan29   2:49 postmaster
-i -D /data/pgdata
postgres 11499  0.0  1.6 161097088 6450616 ?   Ss   Jan29   1:39 postgres:
checkpointer process
postgres 11500  0.0  0.3 161095036 1414612 ?   Ss   Jan29   0:12 postgres:
writer process
postgres 11501  0.0  0.0 161095036 17328 ? Ss   Jan29   0:05 postgres:
wal writer process
postgres 11502  0.0  0.0 161096724 3112 ?  Ss   Jan29   0:11 postgres:
autovacuum launcher process
postgres 11503  0.0  0.0  20136   884 ?Ss   Jan29   0:10 postgres:
archiver process   last was 0001006E0034
postgres 11504  0.0  0.0  20816  1412 ?Ss   Jan29   0:53 postgres:
stats collector process
postgres 11507  0.0  0.0 161096264 2652 ?  Ss   Jan29   1:08 postgres:
wal sender process postgres 192.168.30.65(45640) streaming 8E/5544E650
postgres 11864  0.0  0.0 161096136 2656 ?  Ss   Jan29   0:51 postgres:
wal sender process postgres 192.168.10.95(37378) streaming 8E/5544E650
...

The archiver process says last was 0001006E0034 and when i
look into my wal-archive-directory i see:

-rw--- 1 postgres postgres   16777216 Jan 31 10:24
0001006E0033
-rw--- 1 postgres postgres   16777216 Jan 31 10:24
0001006E0034
-rw--- 1 postgres postgres   16777216 Jan 29 16:03
0001008C008E
-rw--- 1 postgres postgres   16777216 Jan 29 16:32
0001008C008F

The 6E..34 file was just written by the archiver process. But further down
at the same time this file was written:

...
-rw--- 1 postgres postgres   16777216 Jan 31 10:24
0001008E0054

which seems to match the position of the streaming wal-senders .

Any ideas ?

Thanks, Neil.






Re: [GENERAL] naming of wal-archives

2013-01-31 Thread Neil Worden
And a few minutes later the archiver-process with the same process-id has
written a file from ..8.. line:

postgres 11502  0.0  0.0 161096724 3112 ?  Ss   Jan29   0:12 postgres:
autovacuum launcher process
postgres 11503  0.0  0.0  20136   884 ?Ss   Jan29   0:10 postgres:
archiver process   last was 0001008E0058
postgres 11504  0.0  0.0  20816  1412 ?Ss   Jan29   0:54 postgres:
stats collector process

Am i missing something ?

Thanks, Neil




Re: [GENERAL] Optimizing query?

2013-01-31 Thread Vincent Veyron
On Thursday 31 January 2013 at 09:49 +0100, Pavel Stehule wrote:

 any join where result is related to some function result can be very
 slow, because estimation will be out and any repeated function
 evaluation is just expensive.
 

Hi Pavel,

Thank you for the correction. Since we're at it, I have a question
regarding functions in a query :

Suppose I have a query of the form 

SELECT my_function(column_1), column_2 
FROM my_table 
GROUP BY my_function(column_1)
ORDER BY my_function(column_1);

where my_function is a user defined function.
 
How many times is the function computed?




-- 
Regards, Vincent Veyron
http://marica.fr/site/demonstration
Software for managing legal disputes and insurance claims





Re: [GENERAL] Optimizing query?

2013-01-31 Thread Pavel Stehule
2013/1/31 Vincent Veyron vv.li...@wanadoo.fr:
 On Thursday 31 January 2013 at 09:49 +0100, Pavel Stehule wrote:

 any join where result is related to some function result can be very
 slow, because estimation will be out and any repeated function
 evaluation is just expensive.


 Hi Pavel,

 Thank you for the correction. Since we're at it, I have a question
 regarding functions in a query :

 Suppose I have a query of the form

 SELECT my_function(column_1), column_2
 FROM my_table
 GROUP BY my_function(column_1)
 ORDER BY my_function(column_1);

 where my_function is a user defined function.

 How many times is the function computed?

If the function is stable or immutable, then once per row.

Pavel





 --
 Regards, Vincent Veyron
 http://marica.fr/site/demonstration
 Software for managing legal disputes and insurance claims





Re: [GENERAL] AT TIME ZONE and interval arguments

2013-01-31 Thread Albe Laurenz
Craig Ringer wrote:
 I'm mildly thrown by this:
 
 regress=> SELECT TIME '04:00' AT TIME ZONE '01:00';
   timezone
 -------------
  19:00:00-01
 (1 row)
 
 regress=> SELECT TIME '04:00' AT TIME ZONE (INTERVAL '01:00');
   timezone
 -------------
  21:00:00+01
 (1 row)
 
 regress=> SELECT TIME '04:00' AT TIME ZONE (TEXT '01:00');
   timezone
 -------------
  19:00:00-01
 (1 row)
 
 
 and was wondering if anyone knows why the sense of the offset is
 reversed for typed intervals vs bare literal or text. Is this another
 one of the issues caused by the various standards' disagreements about +
 vs - time offsets?

The function that implements the text case has this comment:

/*  timestamp_zone()
 *  Encode timestamp type with specified time zone.
 *  This function is just timestamp2timestamptz() except instead of
 *  shifting to the global timezone, we shift to the specified timezone.
 *  This is different from the other AT TIME ZONE cases because instead
 *  of shifting _to_ a new time zone, it sets the time to _be_ the
 *  specified timezone.
 */

The comment was introduced in commits 5ddeffb676e6bb64b82fc98576f3fe54f8671527
and 3df8e98329e1eea9920436defc64af3594d3, there is a discussion
on -patches:
http://www.postgresql.org/message-id/slrnde2134.2k2r.andrew+non...@trinity.supernews.net

I can't say if that's correct though.
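A plausible way to see the flipped sign side by side (a sketch, not authoritative): POSIX-style zone strings count positive offsets as west of Greenwich, while SQL/ISO 8601 intervals count positive as east, so the same "+1" means opposite directions:

```sql
-- POSIX-style zone specification: positive offset = west of Greenwich.
SELECT TIME '04:00' AT TIME ZONE 'UTC+1';

-- SQL/ISO interval offset: positive offset = east of Greenwich.
SELECT TIME '04:00' AT TIME ZONE INTERVAL '+01:00';
```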

Yours,
Laurenz Albe




Re: [GENERAL] Windows Phone/Postgresql

2013-01-31 Thread Torello Querci
Have you tried npgsql?

I used it, with minor changes, with Mono for Android, and I was able to
connect to a PG database.


Best Regards


2013/1/30 Bret Stern bret_st...@machinemanagement.com

 I'm thinking about picking up a new windows phone, and
 would like to connect to a postgresql server from the
 phone.

 Anyone have some how-to links.

 I've done some initial Google searches, and found some
 leads, but hoping to draw on some experience in the group.

 Regards,

 Bret Stern






Re: [GENERAL] Optimizing query?

2013-01-31 Thread Vincent Veyron
On Thursday 31 January 2013 at 11:06 +0100, Pavel Stehule wrote:
 2013/1/31 Vincent Veyron vv.li...@wanadoo.fr:
 
  Suppose I have a query of the form
 
  SELECT my_function(column_1), column_2
  FROM my_table
  GROUP BY my_function(column_1)
  ORDER BY my_function(column_1);
 
  where my_function is a user defined function.
 
  How many times is the function computed?
 
 if function is stable or immutable, then once per row
 

In this post (watch for line-wrap) :

http://www.postgresql.org/message-id/CAFj8pRAdYL1-hCxH
+qszqkht9ynoaoigkfx4cnc9mzutimc...@mail.gmail.com

you wrote that it is usually better not to mark SQL functions (as
opposed to plpgsql functions).

So should I mark SQL functions stable/immutable if I use them in a query
like the one above, or is it unnecessary?


-- 
Regards, Vincent Veyron
http://marica.fr/site/demonstration
Software for managing legal disputes and insurance claims





Re: [GENERAL] Optimizing query?

2013-01-31 Thread Pavel Stehule
2013/1/31 Vincent Veyron vv.li...@wanadoo.fr:
 On Thursday 31 January 2013 at 11:06 +0100, Pavel Stehule wrote:
 2013/1/31 Vincent Veyron vv.li...@wanadoo.fr:
 
  Suppose I have a query of the form
 
  SELECT my_function(column_1), column_2
  FROM my_table
  GROUP BY my_function(column_1)
  ORDER BY my_function(column_1);
 
  where my_function is a user defined function.
 
  How many times is the function computed?

 if function is stable or immutable, then once per row


 In this post (watch for line-wrap) :

 http://www.postgresql.org/message-id/CAFj8pRAdYL1-hCxH
 +qszqkht9ynoaoigkfx4cnc9mzutimc...@mail.gmail.com

 you wrote that it is usually better not to mark SQL functions (as
 opposed to plpgsql functions).

 So should I mark SQL functions stable/immutable if I use them in a query
 like the one above, or is it unnecessary?


It should not be marked

Regards

Pavel
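As a sketch of why (hedged, reusing the hypothetical names from the question): a simple single-SELECT SQL function left unmarked is a candidate for inlining, after which the planner works with the underlying expression directly, so the repeated references do not imply separately-costed function calls:

```sql
-- Unmarked LANGUAGE sql function; the planner can inline the body
-- into the calling query.
CREATE FUNCTION my_function(t text) RETURNS text
AS $$ SELECT upper(t) $$ LANGUAGE sql;

SELECT my_function(column_1), count(*)
FROM my_table
GROUP BY my_function(column_1)
ORDER BY my_function(column_1);
```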


 --
 Regards, Vincent Veyron
 http://marica.fr/site/demonstration
 Software for managing legal disputes and insurance claims





Re: [GENERAL] Version numbers for binaries

2013-01-31 Thread Magnus Hagander
On Thu, Jan 31, 2013 at 12:20 AM, deepak deepak...@gmail.com wrote:
 Hi !

 We bundle Postgres into a Windows MSI, Postgres built with VS2008.

 One of the issues we ran into recently is Windows getting confused with the
 file
 versions of Postgres binaries, confused meaning, it was treating newer
 binaries
 as older, and hence skipping copying certain files during an upgrade.

 Looking further, I came across the file where version numbers for some of
 the
 binaries are generated (including pg_ctl.exe), and it used to use 2 digit
 year
 followed by day of year, without any padding.  We need to pad the day of
 year
 with leading zeros to make the version non-decreasing.
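A toy illustration of the ordering problem (not the actual build-script logic; dates and the version layout here are made up to show the effect of padding):

```python
from datetime import date

def file_version(d, padded):
    """Build '2-digit year + day-of-year' as one integer field.

    Without zero-padding the day, a late day in an earlier year
    (e.g. day 100 of 2013 -> 13100) compares greater than an early
    day in a later year (day 5 of 2014 -> 145).
    """
    doy = d.timetuple().tm_yday
    day = "%03d" % doy if padded else str(doy)
    return int("%02d%s" % (d.year % 100, day))

older = date(2013, 4, 10)   # day 100 of 2013
newer = date(2014, 1, 5)    # day 5 of 2014

# Unpadded: the older build gets the larger version number -- wrong order.
assert file_version(older, padded=False) > file_version(newer, padded=False)
# Padded: versions are non-decreasing over time, as an upgrade needs.
assert file_version(older, padded=True) < file_version(newer, padded=True)
```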

 I've included a patch below, could this be patched into Postgres?

Thanks, I've applied a change like this (though with the description
mainly in the commit message and not in a code comment, so not using
your patch) for head and backpatched it into the supported branches,
as it's clearly wrong.

For future reference, please post patches to the pgsql-hackers
mailing list instead of pgsql-general, to make sure they get the
attention of the developers. It worked this time, but that's the
better route for next time.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[GENERAL] Parsing COPY ... WITH BINARY

2013-01-31 Thread Leonardo M . Ramé
I'm using this:

COPY( select field1, field2, field3 from table ) TO 'C://Program 
Files/PostgreSql//8.4//data//output.dat' WITH BINARY

To export some fields to a file, one of them is a ByteA field. Now, I
need to read the file with a custom made program.

How can I parse this file?

BTW: I cannot export as CSV, because the binary data is modified when I
do that.
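The binary COPY layout is documented: an 11-byte signature, a 32-bit flags field and a 32-bit header-extension length, then per tuple a 16-bit field count followed by 32-bit length-prefixed field values (-1 marks NULL), with a field count of -1 as the trailer. A minimal reader sketch in Python; it returns raw bytes per field and leaves type decoding (int4 is big-endian, bytea is verbatim) to the caller:

```python
import struct

PGCOPY_SIGNATURE = b"PGCOPY\n\xff\r\n\x00"

def parse_pgcopy(data):
    """Parse a COPY ... TO ... WITH BINARY stream into rows of raw
    bytes values, with None standing in for SQL NULL."""
    if not data.startswith(PGCOPY_SIGNATURE):
        raise ValueError("not a PGCOPY binary stream")
    pos = len(PGCOPY_SIGNATURE)
    flags, ext_len = struct.unpack_from(">II", data, pos)
    pos += 8 + ext_len                     # skip the header extension
    rows = []
    while True:
        (nfields,) = struct.unpack_from(">h", data, pos)
        pos += 2
        if nfields == -1:                  # -1 field count = file trailer
            break
        row = []
        for _ in range(nfields):
            (flen,) = struct.unpack_from(">i", data, pos)
            pos += 4
            if flen == -1:                 # NULL column
                row.append(None)
            else:
                row.append(data[pos:pos + flen])
                pos += flen
        rows.append(row)
    return rows
```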

-- 
Leonardo M. Ramé
Medical IT - Griensu S.A.
Av. Colón 636 - Piso 8 Of. A
X5000EPT -- Córdoba
Tel.: +54(351)4246924 +54(351)4247788 +54(351)4247979 int. 19
Cel.: +54 9 351 6629292





Re: [GENERAL] naming of wal-archives

2013-01-31 Thread Adrian Klaver

On 01/31/2013 01:48 AM, Neil Worden wrote:


Btw, ps shows:





The archiver process says last was 0001006E0034 and when i
look into my wal-archive-directory i see:

-rw--- 1 postgres postgres   16777216 Jan 31 10:24
0001006E0033
-rw--- 1 postgres postgres   16777216 Jan 31 10:24
0001006E0034
-rw--- 1 postgres postgres   16777216 Jan 29 16:03
0001008C008E
-rw--- 1 postgres postgres   16777216 Jan 29 16:32
0001008C008F

The 6E..34 file was just written by the archiver process. But further
down at the same time this file was written:

...
-rw--- 1 postgres postgres   16777216 Jan 31 10:24
0001008E0054

which seems to match the position of the streaming wal-senders .

Any ideas ?


Yes, I think you are going to have to map this out for us.

Something like:

  pg_receivexlog to ?   
 /
Machine A (Master) Archiving on  /some_archive_dir
  \
Machine B (standby)
  \
Machine C (standby)

I cannot follow what is going on from the written description.



Thanks, Neil.




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Version numbers for binaries

2013-01-31 Thread deepak
Hi! Sounds good, thanks.


--
Deepak




Re: [GENERAL] naming of wal-archives

2013-01-31 Thread Jeff Janes
On Thu, Jan 31, 2013 at 12:50 AM, Neil Worden nworden1...@gmail.com wrote:

 The situation is as follows:

 All concerned machines are running 9.2.2 64-bit on Ubuntu Linux Server
 12.10, installed from source, all following exactly the same procedure. We
 have a hot-standby running to a different location over a rather thin line
 running since version 9.1 came out. That worked
 flawlessly, we only were bitten by autovacuums to prevent XID wraparounds
 that generated relatively high wal-volume and we
 were not sure whether the network connection could keep up with it before
 deleting wal-files. Since we had to physically transfer a backup once for
 other reasons, we set wal_keep_segments to 8192 to have enough
 fallback-time.

Ah.

...

 Could the the high number of wal_keep_segments have an impact ?
 Does the fact that there already were a lot of existing wal-files when i set
 up archiving and the archive-command have an impact ?

Yes.  It is doing something like archiving the newly-finished log
files as they are completed, and interleaving that with working off
the wal_keep_segments backlog.  So everything seems normal.  At some
point they should converge without difficulty.


 Jeff, you wrote:

 And how would I restore the needed file names for recovery
 if I decide to keep one base-backup and then a very long chain of
 wal-files ?

There should be no need for that.

 When you said there would be no need for that, did you mean restoring the
 files for recovery or keeping a base-backup and the chain of wal-files ?

No, you need both of those.  There should be no need to restore the
*names* of the files.  It sounded like you were planning to invent
some scheme to rename files and rename them back.


 I understand that the archive-command is responsible for not overwriting
 wal-files. But if that situation occurs, and if i understand you correctly
 it will, what do i do ?

If it attempts to overwrite, refuses and returns with a non-zero
status, then your server will accumulate unarchived log files in
pg_xlog and you will get warnings in the log file something like:

LOG:  archive command failed with exit code 1

It will keep trying, but of course also keep failing, until you
manually intervene.

The risks are that pg_xlog might fill up, or that if the hard drive
that holds pg_xlog crashes you will lose log files that were scheduled
to have been archived but never made it there.

But, this should be a moot point if you indeed only have one server
archiving to that directory.

Although this has happened to me a couple of times, and I just renamed
the offending archived file to something else (i.e. added .bak to the
name) to unblock the process. I then compared the moved file to the
newly arrived archival copy and verified that they were identical (they
were). Apparently what happened was that a network glitch caused the
file copy to think it had failed when it had not. Then future attempts
failed because the file already existed.

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Fwd: [GENERAL] naming of wal-archives

2013-01-31 Thread Neil Worden
Sorry, this one should have been sent to the group.


-- Forwarded message --



Hi,

Master M -> streaming via pg_receivexlog -> TEST R (same location,
currently for testing and experimenting)
   -> streaming to hot standby via dsl -> HOT1 (other location,
hot-standby and used for read-queries)
   -> archiving to /data/backup/walbackup on Master M via
archive_command (as of 2 days ago)

Up until 2 days ago, archiving on the master was not enabled, but wal-files
were streamed to the hot-standby HOT1 and TESTR  and wal_keep_segments was
all the time set to 8192 (which currently amounts to a timeframe of about 3
weeks). Everything was fine. Then, two days ago, i set archive_mode = on
and archive_command to 'test ! -f /data/backup/walbackup/%f && cp %p
/data/backup/walbackup/%f'.

This is a current extract from the master pg_xlog directory (with ls -la |
less):

-rw---  1 postgres postgres 16777216 Jan  6 00:46
0001006E0061
-rw---  1 postgres postgres 16777216 Jan  6 00:46
0001006E0062
-rw---  1 postgres postgres 16777216 Jan  6 00:46
0001006E0063
-rw---  1 postgres postgres 16777216 Jan  6 00:46
0001006E0064
...
-rw---  1 postgres postgres 16777216 Jan 31 15:06
0001008E007F
-rw---  1 postgres postgres 16777216 Jan 31 15:30
0001008E0080
-rw---  1 postgres postgres 16777216 Jan 31 15:59
0001008E0081
-rw---  1 postgres postgres 16777216 Jan 31 16:13
0001008E0082
-rw---  1 postgres postgres 16777216 Jan  5 21:01
0001008E0083
-rw---  1 postgres postgres 16777216 Jan  5 21:01
0001008E0084
-rw---  1 postgres postgres 16777216 Jan  5 21:01
0001008E0085
-rw---  1 postgres postgres 16777216 Jan  5 21:01
0001008E0086

As you can see, the oldest wal-files in pg_xlog on the master date back to
the 5th of january and are subsequently overwritten. Wal-files from the
...6E... range were last modified around 6th of january.

Now this is a current extract from masters-wal-archive directory
/data/backup/walbackup, where the archive-command is copying files to:

...
-rw--- 1 postgres postgres   16777216 Jan 31 15:21
0001006E005F
-rw--- 1 postgres postgres   16777216 Jan 31 15:30
0001008E0080
-rw--- 1 postgres postgres   16777216 Jan 31 15:32
0001006E0060
-rw--- 1 postgres postgres   16777216 Jan 31 15:59
0001008E0081
-rw--- 1 postgres postgres   16777216 Jan 31 16:03
0001006E0061
-rw--- 1 postgres postgres   16777216 Jan 31 16:20
0001008E0082

As you can see, the filenames alternate between ...6E... and ...8E...(in
the middle) but they are created by the same archiver-process. Why are
files name ...6E... appearing in my wal-archive when the last ...6E.. file
in my pg_xlog was modified more than 3 weeks ago ? Also the ...6E...
wal-files never appear on the pg_receivexlog-machine.

Some of the comments suggested there were multiple masters writing to the
same archive-directory. That is not the case, there is only one master. The
receivers and standbys are all on different physical machines or even in
other locations.

I put the ps-output in my two previous mails to show that it is indeed the
same archiver process (same process-id in both mails) that is writing the
files

postgres 11503  0.0  0.0  20136   884 ?Ss   Jan29   0:10 postgres:
archiver process   last was 0001006E0034

and a few seconds later the same process is writing:

postgres 11503  0.0  0.0  20136   884 ?Ss   Jan29   0:10 postgres:
archiver process   last was 0001008E0058

(note the 6E vs the 8E in the middle)

So it seems to be archiving two lines or sequences of wal-archives, which
i do not understand.  All this started exactly 3 minutes after i set the
archive_command and enabled archiving.

Is this normal behaviour, or am i getting something fundamentally wrong ?

Thanks, Neil

ps.:  These things cast aside, i still do not understand how to implement
the "one base-backup + an endless stream of wal-files" backup strategy if
the files in my wal archive get overwritten and i don't save them to
another place before that happens.







2013/1/31 Adrian Klaver adrian.kla...@gmail.com

 On 01/31/2013 01:48 AM, Neil Worden wrote:


 Btw, ps shows:



 The archiver process says last was 0001006E0034 and when i
 look into my wal-archive-directory i see:

 -rw--- 1 postgres postgres   16777216 Jan 31 10:24
 0001006E0033
 -rw--- 1 postgres postgres   16777216 Jan 31 10:24
 0001006E0034
 -rw--- 1 postgres postgres   16777216 Jan 29 16:03
 0001008C008E
 -rw--- 1 postgres postgres   16777216 Jan 29 16:32
 0001008C008F

 The 6E..34 file was just written by the archiver process. But further
 down at the same time this file was written:

 ...
 

Re: [GENERAL] AT TIME ZONE and interval arguments

2013-01-31 Thread Tom Lane
Craig Ringer cr...@2ndquadrant.com writes:
 I'm mildly thrown by this:
 regress= SELECT TIME '04:00' AT TIME ZONE '01:00';
 [ zone is taken as GMT-1 ]
 regress= SELECT TIME '04:00' AT TIME ZONE (INTERVAL '01:00');
 [ zone is taken as GMT+1 ]

 and was wondering if anyone knows why the sense of the offset is
 reversed for typed intervals vs bare literal or text.

Well, it's interval vs not interval, and you'll get the same
behavior from SET TIME ZONE:

regression=# set time zone '01:00';
SET
regression=# select now();
  now  
---
 2013-01-31 14:57:44.707581-01
(1 row)

regression=# set time zone interval '01:00';
SET
regression=# select now();
  now  
---
 2013-01-31 16:57:54.707828+01
(1 row)

so at least AT TIME ZONE isn't off in the weeds by itself.

The sign interpretation in the SET TIME ZONE INTERVAL case is dictated
by the SQL standard.  In the non-interval case, we treat the string as
a time zone name, and I think it must be reading it as a POSIX timezone
spec since there is certainly no entry for '01:00' in the Olson tz
database.  So yeah, this is another facet of the SQL-vs-POSIX conflict.

BTW, I happened to notice while looking at timetz_izone() that it does
this:

if (zone->month != 0)
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("\"interval\" time zone \"%s\" not valid",
                    DatumGetCString(DirectFunctionCall1(interval_out,
                                                        PointerGetDatum(zone))))));

It looks to me like the intent is to reject day-or-more intervals,
as is done in variable.c's check_timezone().  Probably this got missed
when we added the separate day field to intervals.  Will fix ...

regards, tom lane




[GENERAL] SQL sentence to insert where updated rows is zero...

2013-01-31 Thread Glus Xof
Hi guys,

It's just to ask you if there is a single SQL sentence to INSERT
values in case of UPDATE WHERE returns 0 rows updated. I mean, If I
write something like,

# update my_table set column1='value' where column2='key_value';

And the system replies with "UPDATE 0" because there is no row at
column2 with value 'key_value'... is it possible to say here: in this
case, create one new row with 'value' - 'key_value'...?  Like

# insert into my_table (column1, column2) values ('value', 'key_value');

Glus




[GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Rich Shepard

  I'd like to understand what I'm missing when I COPY a table to a .csv file
but the text is not quoted.

  Here's an example of a command:

copy chemistry to '/home/postgres/emapchem.csv' with (format CSV, header True, quote
'"', delimiter '|');

and here're part of the header and following row of the output:

siteid|sampdate|analdate|ph|cond|turb|tss|doc|dic|p|se|ca|mg|na|k|nh4|so4|no3
1993SBOIA049|2004-06-28|2004-06-28|7.67|117|4.21|6.1|1.13|11.96|40|0.2|503.5|206.2

  I'm sure it's something simple in the syntax I keep getting wrong, but I
don't see it.

Rich





Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Adrian Klaver

On 01/31/2013 08:37 AM, Rich Shepard wrote:

   I'd like to understand what I'm missing when I COPY a table to a .csv
file
but the text is not quoted.

   Here's an example of a command:

copy chemistry to '/home/postgres/emapchem.csv' with (format CSV, header
True, quote '"', delimiter '|');

and here're part of the header and following row of the output:

siteid|sampdate|analdate|ph|cond|turb|tss|doc|dic|p|se|ca|mg|na|k|nh4|so4|no3

1993SBOIA049|2004-06-28|2004-06-28|7.67|117|4.21|6.1|1.13|11.96|40|0.2|503.5|206.2


   I'm sure it's something simple in the syntax I keep getting wrong, but I
don't see it.


http://www.postgresql.org/docs/9.2/interactive/sql-copy.html

CSV Format

...
The values in each record are separated by the DELIMITER character. If 
the value contains the delimiter character, the QUOTE character, the 
NULL string, a carriage return, or line feed character, then the whole 
value is prefixed and suffixed by the QUOTE character, and any 
occurrence within the value of a QUOTE character or the ESCAPE character 
is preceded by the escape character. You can also use FORCE_QUOTE to 
force quotes when outputting non-NULL values in specific columns. ...



FORCE_QUOTE
Forces quoting to be used for all non-NULL values in each specified 
column. NULL output is never quoted. If * is specified, non-NULL values 
will be quoted in all columns. This option is allowed only in COPY TO, 
and only when using CSV format.
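Putting the two together: applied to the command from the original post, a variant that quotes every non-NULL value might look like this (FORCE_QUOTE * is the catch-all; a column list such as FORCE_QUOTE (siteid) would quote only specific columns):

```sql
COPY chemistry TO '/home/postgres/emapchem.csv'
    WITH (FORMAT CSV, HEADER true, DELIMITER '|', FORCE_QUOTE *);
```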





Rich






--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Rich Shepard

On Thu, 31 Jan 2013, Adrian Klaver wrote:


http://www.postgresql.org/docs/9.2/interactive/sql-copy.html


Adrian,

  I've read this.


CSV Format
...
The values in each record are separated by the DELIMITER character. If
the value contains the delimiter character, the QUOTE character, the NULL
string, a carriage return, or line feed character, then the whole value is
prefixed and suffixed by the QUOTE character, and any occurrence within
the value of a QUOTE character or the ESCAPE character is preceded by the
escape character.  You can also use FORCE_QUOTE to force quotes when
outputting non-NULL values in specific columns. ...


  So, if the column attribute is text of some flavor I still need to force
quotes column-by-column?

  Guess I still need to import the output into a spreadsheet, format each
text and date column, then write it out again as a .csv file ... or, better
yet, write a one-line awk program that quotes the appropriate tokens! That's
what I'll do.

Thanks,

Rich





Re: [GENERAL] psql question

2013-01-31 Thread Little, Douglas
Thanks to steve and Al for the suggestions.

I did get the variable  concatenation solution to work.
\set env `echo $TARGETSERVER`
\echo env :env
\set envfile 'P1GP1_ETL_STAGE_TBLS_BIO6113_100.':env'.sql'
\echo envfile :envfile
  -- envfile P1GP1_ETL_STAGE_TBLS_BIO6113_100.DEV.sql

However, it seems that I can't use the psql variables within SQL.
Without quotes, the variable name is used in the SQL and the table is created,
but since it's created with the variable name and not the value, it won't execute.
With quotes, the variable gets resolved but the syntax is invalid.  The
syntax requires that the file location be in quotes.

I'm using greenplum's external table feature and the external filename needs to 
change by environment.

Create external table  (
Col_a 
)
LOCATION (
'gphdfs://':filepath'/DimSiteVisit/part*'
)


Runtime log
ERROR:  syntax error at or near ":"
LINE 44: 'gphdfs://':filepath'/DimSiteVisit/part*'

I think I'm going to shift down to using shell features.
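For completeness, recent psql (PostgreSQL 9.0 or later) can interpolate a variable as a correctly quoted SQL literal with the :'variable' syntax, which may sidestep the quoting clash above; whether Greenplum's bundled psql supports this would need checking, so treat it as a sketch:

```sql
\set loc 'gphdfs://' :filepath '/DimSiteVisit/part*'
-- :loc now holds the concatenated string; :'loc' injects it as a quoted literal
CREATE EXTERNAL TABLE ... LOCATION ( :'loc' );
```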


Thanks again for the help.

From: Steve Crawford [mailto:scrawf...@pinpointresearch.com]
Sent: Wednesday, January 30, 2013 4:49 PM
To: Little, Douglas
Cc: PostgreSQL General (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] psql question

On 01/30/2013 01:51 PM, Little, Douglas wrote:
I'm looking for a way where I can tailor DDL scripts for deployment with 
environment variables.
Suppose I have a requirement to prefix table names with dev_, fqa_, or prod_

I'd like to have a file for each env with their own unique settings - host, 
dbname
Dev.sql
\set env dev
Fqa
\set env fqa

prod
\set env prod

and then

my deployment script would have
ddl.sql
\i :env.sql
Create table schema.:env_tablename


I tried it and didn't work.
p1gp1= \set env dev
p1gp1= \i P1GP1_ETL_STAGE_TBLS_BIO6113_100.:env.sql
P1GP1_ETL_STAGE_TBLS_BIO6113_100.:env.sql: No such file or directory

Any thoughts on how I might get this to work?

Perhaps try concatenating variables then executing the result. For example, 
given a file foo.psql containing "select now();" and bar.psql containing
"select 'Hello world';"

steve@[local] = \set env foo
steve@[local] = \set scriptname :env '.psql'
steve@[local] = \i :scriptname
  now
---
 2013-01-30 14:45:36.423836-08

steve@[local] = \set env bar
steve@[local] = \set scriptname :env '.psql'
steve@[local] = \i :scriptname
  ?column?
-
 Hello world

Cheers,
Steve


Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Steve Crawford

On 01/31/2013 08:46 AM, Adrian Klaver wrote:

On 01/31/2013 08:37 AM, Rich Shepard wrote:

   I'd like to understand what I'm missing when I COPY a table to a .csv
file
but the text is not quoted.
...

http://www.postgresql.org/docs/9.2/interactive/sql-copy.html

CSV Format

...
The values in each record are separated by the DELIMITER character. 
If the value contains the delimiter character, the QUOTE character, 
the NULL string, a carriage return, or line feed character, then the 
whole value is prefixed and suffixed by the QUOTE character, and any 
occurrence within the value of a QUOTE character or the ESCAPE 
character is preceded by the escape character. You can also use 
FORCE_QUOTE to force quotes when outputting non-NULL values in 
specific columns. ...



FORCE_QUOTE
Forces quoting to be used for all non-NULL values in each specified 
column. NULL output is never quoted. If * is specified, non-NULL 
values will be quoted in all columns. This option is allowed only in 
COPY TO, and only when using CSV format.




Adrian beat me to answering so I'll just add that different systems 
generate and require slightly different CSV formats and though RFC4180 
attempts to offer guidance, even that document states:


Definition of the CSV Format:

While there are various specifications and implementations for the CSV 
format...there is no formal specification in existence, which allows for 
a wide variety of interpretations of CSV files.


Cheers,
Steve






Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Rich Shepard

On Thu, 31 Jan 2013, Steve Crawford wrote:


Definition of the CSV Format:

While there are various specifications and implementations for the CSV
format...there is no formal specification in existence, which allows for a
wide variety of interpretations of CSV files.


Steve,

  It's been this way since the early 1980s.

Rich





Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Steve Crawford

On 01/31/2013 09:03 AM, Rich Shepard wrote:

On Thu, 31 Jan 2013, Steve Crawford wrote:


Definition of the CSV Format:

While there are various specifications and implementations for the CSV
format...there is no formal specification in existence, which allows 
for a

wide variety of interpretations of CSV files.


Steve,

  It's been this way since the early 1980s.

I know. I just find it an amusing contrast to the usual liberal 
sprinkling of SHALL, MUST, SHOULD, ... that pepper most RFCs. And I've 
encountered too many people who believed there actually *was* a solid 
definition.


Cheers,
Steve





Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Adrian Klaver

On 01/31/2013 08:53 AM, Rich Shepard wrote:

On Thu, 31 Jan 2013, Adrian Klaver wrote:


http://www.postgresql.org/docs/9.2/interactive/sql-copy.html


Adrian,

   I've read this.


CSV Format
...
The values in each record are separated by the DELIMITER character. If
the value contains the delimiter character, the QUOTE character, the NULL
string, a carriage return, or line feed character, then the whole
value is
prefixed and suffixed by the QUOTE character, and any occurrence within
the value of a QUOTE character or the ESCAPE character is preceded by the
escape character.  You can also use FORCE_QUOTE to force quotes when
outputting non-NULL values in specific columns. ...


   So, if the column attribute is text of some flavor I still need to force
quotes column-by-column?


Or from:

FORCE_QUOTE
Forces quoting to be used for all non-NULL values in each specified 
column. NULL output is never quoted. If * is specified, non-NULL values 
will be quoted in all columns. This option is allowed only in COPY TO, 
and only when using CSV format.



Use FORCE_QUOTE * to quote all columns.





   Guess I still need to import the output into a spreadsheet, format each
text and date column, then write it out again as a .csv file ... or, better
yet, write a one-line awk program that quotes the appropriate tokens!
That's
what I'll do.

Thanks,

Rich






--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Rich Shepard

On Thu, 31 Jan 2013, Adrian Klaver wrote:


FORCE_QUOTE Forces quoting to be used for all non-NULL values in each
specified column.  NULL output is never quoted. If * is specified,
non-NULL values will be quoted in all columns. This option is allowed only
in COPY TO, and only when using CSV format.



Use FORCE_QUOTE * to quote all columns.


Adrian,

  That's correct. Most of the columns are numeric so each text column needs
to be specified.

Thanks,

Rich





Re: [GENERAL] COPY table to file missing quotation marks

2013-01-31 Thread Adrian Klaver

On 01/31/2013 10:17 AM, Rich Shepard wrote:

On Thu, 31 Jan 2013, Adrian Klaver wrote:


FORCE_QUOTE Forces quoting to be used for all non-NULL values in each
specified column.  NULL output is never quoted. If * is specified,
non-NULL values will be quoted in all columns. This option is allowed
only
in COPY TO, and only when using CSV format.



Use FORCE_QUOTE * to quote all columns.


Adrian,

   That's correct. Most of the columns are numeric so each text column
needs
to be specified.


Any particular reason why?



Thanks,

Rich






--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Windows Phone/Postgresql

2013-01-31 Thread NickJonas
IDK, but I want to know what the answer is, so, bump!



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Windows-Phone-Postgresql-tp5743052p5743106.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Pg & Tcl - is it dying out?

2013-01-31 Thread Carlo Stonebanks
As our production systems are being upgraded to Windows 7 64-bit we are
finding that our Tcl apps which use the PgTcl client libraries are now
failing ("couldn't load library "libpgtcl.dll": invalid argument"). We have
tried downloading the latest binaries (which are 32 bit) but the problems
persist. There is little activity on the sourceforge page for the lib. There
is little activity on the related forums and I am not optimistic about being
able to get timely responses from the developers.

 

Are we seeing the demise of PG & Tcl? Should I advise my client to NOT
upgrade any systems running Tcl apps and stick with legacy OS's?

 

Carlo

 



Re: [GENERAL] Optimizing query?

2013-01-31 Thread hamann . w

Pavel Stehule wrote:

  Hi,
 
  I am trying to match items from 2 tables based on a common string.
  One is a big table which has one column with entries like XY123, ABC44, etc.
  The table has an index on that column.
  The second table is, typically, much smaller
 
  select ... from tab1, tab2 where tab1.code = tab2.code;
 
  This works fine and fast.
  Now, as a variant, I have some entries like XY423A, XY423B, GF55A, GF55D in the
  big table and want them to match XY423, GF55 in the second table
 
  Variants I have tried
 
  select ... from tab1, tab2 where tab1.code ~ (tab2.code||'($|[A-Z])');
  select ... from tab1, tab2 where tab1.code ~ ('^'||tab2.code||'($|[A-Z])');
 
 
  Have you tried the substring function?
 
  select ... from tab1, tab2 where substring(tab1.code from 1 for 5) =
  tab2.code
 
 

Hi Pavel, it was just by chance that a fixed size substring would match the
data at hand. It is more common to have a digit/letter (or vice versa) boundary
or a hyphen there

 
  both take an enormous time. In the better case that I can subset (e.g. all candidates in table 2
  share initial AX) I get back to manageable times by adding
     and tab1.code ~ '^AX'
  into the recipe. Actual runtime with about a million entries in tab1 and 800 entries in tab2
  is about 40 seconds.
 
 any join where result is related to some function result can be very
 slow, because estimation will be out and any repeated function
 evaluation is just expensive.

I see the problem: obviously the ~ operator with a non-constant
pattern is constantly recompiling the pattern.

I wonder whether it would be possible to invent a prefix-match operator that 
approaches
the performance of string equality. I noted in the past (not sure whether 
anything
has changed in regex matching) that a constant leading part of regex would 
improve
performance, i.e. use an index scan to select possible candidates.
  
 You can try use a functional index.
 
 create index on tab2 ((substring(tab1.code from 1 for 5))
 

What kind of trick is that - mixing two tables into a functional index?
What would the exact syntax be for that?

Regards
Wolfgang Hamann
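(If the suffix to strip is known to be a single optional trailing letter, as in the XY423A example above, one workable pattern is an expression index on the big table plus an equality join on the same expression. The index has to be on tab1 and use tab1's own column; the tab1/tab2 mix in the quoted snippet looks like a typo. A sketch, not from the thread:)

```sql
-- Index the base code: tab1.code minus one optional trailing A-Z letter
CREATE INDEX tab1_basecode_idx
    ON tab1 ((regexp_replace(code, '[A-Z]$', '')));

ANALYZE tab1;

-- Equality join on the identical expression, so the index is usable
SELECT t1.*
FROM tab1 t1
JOIN tab2 t2
  ON regexp_replace(t1.code, '[A-Z]$', '') = t2.code;
```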








Re: [GENERAL] Pg & Tcl - is it dying out?

2013-01-31 Thread David Fetter
On Wed, Jan 30, 2013 at 05:45:06PM -0500, Carlo Stonebanks wrote:
 As our production system are being upgraded to Windows 7 64-bit we are
 finding that our Tcl apps which use the PgTcl client libraries are now
 failing (couldn't load library libpgtcl.dll: invalid argument). We have
 tried downloading the latest binaries (which are 32 bit) but the problems
 persist. There is little activity on the sourceforge page for the lib.

This is pretty strictly a problem for the PgTcl project and not for
the PostgreSQL project or any other associated projects.

Was there something constructive you might have been asking about that
you forgot to include in this email?  If so, what was it?

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [GENERAL] Pg & Tcl - is it dying out?

2013-01-31 Thread Carlo Stonebanks
 Was there something constructive you might have been asking about that
you forgot to include in this email?  If so, what was it?

I don't believe I forgot anything, as there was a question in the original
post (which you did not include in your snip):

Are we seeing the demise of PG & Tcl? Should I advise my client to NOT
upgrade any systems running Tcl apps and stick with legacy OS's?

Not the demise of PG in general, but specifically of concern for the PG &
Tcl developers.

Is this not a constructive question, and is it NOT about PG in general? I
had asked in 2006 where to direct PG & Tcl questions and the answer was
PGSQL general. See:
http://www.postgresql.org/message-id/725602300608011149i2a88a5b2t96817b85318
1e...@mail.gmail.com

Is there now somewhere else I should be asking about this?

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David Fetter
Sent: January 31, 2013 1:50 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Pg & Tcl - is it dying out?

On Wed, Jan 30, 2013 at 05:45:06PM -0500, Carlo Stonebanks wrote:
 As our production system are being upgraded to Windows 7 64-bit we are 
 finding that our Tcl apps which use the PgTcl client libraries are now 
 failing (couldn't load library libpgtcl.dll: invalid argument). We 
 have tried downloading the latest binaries (which are 32 bit) but the 
 problems persist. There is little activity on the sourceforge page for the
lib.

This is pretty strictly a problem for the PgTcl project and not for the
PostgreSQL project or any other associated projects.

Was there something constructive you might have been asking about that you
forgot to include in this email?  If so, what was it?

Cheers,
David.
--
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate







[GENERAL] CREATE TEMPORARY TABLE does not work in SQL language function?

2013-01-31 Thread David Johnston
The only difference between the following two CREATE FUNCTION commands is
the language in which they are written.

 

I do not recall, and cannot seem to find after a quick search, any limitation
regarding the use of CREATE TEMPORARY TABLE in an SQL language function.
It is not one of BEGIN, COMMIT, ROLLBACK, SAVEPOINT, which are the only
explicit ones listed.  DDL commands are neither explicitly prohibited nor
allowed, but the phrase "as well as other SQL commands" is too vague to be
authoritative.

 

version

PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit

 

--SQL Language Function Fails

--SQL Error: ERROR:  relation "temptbl" does not exist

--LINE 10: SELECT (one || two)::varchar AS result FROM temptbl;

 

SET LOCAL search_path = public;

CREATE FUNCTION temptabletest()

RETURNS varchar

AS $$

 

CREATE TEMPORARY TABLE temptbl 

   ON COMMIT DROP 

   AS SELECT '1'::varchar AS one, 'A'::varchar AS two;

   

SELECT (one || two)::varchar AS result FROM temptbl;

$$

LANGUAGE sql

STRICT

VOLATILE

;

 

While the following pl/pgsql function works:

 

SET LOCAL search_path = public;

CREATE FUNCTION temptabletest()

RETURNS varchar

AS $$

BEGIN

CREATE TEMPORARY TABLE temptbl 

   ON COMMIT DROP 

   AS SELECT '1'::varchar AS one, 'A'::varchar AS two;

   

RETURN (SELECT (one || two)::varchar AS result FROM temptbl);

END;

$$

LANGUAGE plpgsql

STRICT

VOLATILE

;

 

Thanks!

 

David J.

 



Re: [GENERAL] SQL sentence to insert where updated rows is zero...

2013-01-31 Thread Vincent Veyron
Le jeudi 31 janvier 2013 à 17:25 +0100, Glus Xof a écrit :

 It's just to ask you if there is a single SQL sentence to INSERT
 values in case of UPDATE WHERE returns 0 rows updated. I mean, If I
 write something like,
 
 # update my_table set column1='value' where column2='key_value';
 
 And system replies me with one UPDATE 0 because there is no row at
 column2 with value 'key_value'... is it possible to say here: in this
 case, create one new row with 'value' - 'key_value'...?  like
 
 # insert into my_table (column1, column2) values ('value', 'key_value');
 

You might want to have a look at this page : 

http://www.depesz.com/2012/06/10/why-is-upsert-so-complicated/
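For reference, here is a sketch of the classic pre-9.5 retry loop (adapted from the merge_db example in the PostgreSQL documentation, with the questioner's table and column names substituted in); it assumes a unique constraint on column2, without which concurrent inserts cannot be detected:

```sql
CREATE FUNCTION upsert_my_table(v text, k text) RETURNS void AS $$
BEGIN
    LOOP
        -- try the update first
        UPDATE my_table SET column1 = v WHERE column2 = k;
        IF found THEN
            RETURN;
        END IF;
        -- no row yet: try to insert; another session may beat us to it
        BEGIN
            INSERT INTO my_table (column1, column2) VALUES (v, k);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- row appeared concurrently; loop back and update it
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```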



-- 
Salutations, Vincent Veyron
http://marica.fr/site/demonstration
Logiciel de gestion des contentieux juridiques et des sinistres d'assurance





Re: [GENERAL] CREATE TEMPORARY TABLE does not work in SQL language function?

2013-01-31 Thread Tom Lane
David Johnston pol...@yahoo.com writes:
 The only difference between the following two CREATE FUNCTION commands is
 the language in which they are written.

The reason the SQL function doesn't work is that SQL functions parse and
analyze their whole bodies before beginning execution --- so temptbl
doesn't exist yet when the SELECT is analyzed.  It's not about temp
tables as such, but rather that any DDL that affects the meaning of
later statements is problematic.

IIRC, there have been a few discussions about changing this behavior,
but nothing's been done, partly out of fear of breaking existing
applications.

regards, tom lane




Re: [GENERAL] Pg & Tcl - is it dying out?

2013-01-31 Thread Kevin Grittner
Carlo Stonebanks stonec.regis...@sympatico.ca wrote:

 Are we seeing the demise of PG & Tcl? Should I advise my client
 to NOT upgrade any systems running Tcl apps and stick with legacy
 OS's?

 Not the demise of PG in general, but specifically of concern for
 the PG & Tcl developers.

Your best bet to sort out the answer would probably be to look at
the github site for it that the developers created in October of
2012:

http://flightaware.github.com/Pgtcl/

It seems to be available from the Ubuntu Software Center, for both
PostgreSQL and Postgres-XC.

You could always build it from source, if there hasn't been
sufficient interest in the OS platforms in which you're interested
for anyone to have put a package out there.

-Kevin




Re: [GENERAL] Pg & Tcl - is it dying out?

2013-01-31 Thread Adrian Klaver

On 01/31/2013 10:59 AM, Carlo Stonebanks wrote:

Was there something constructive you might have been asking about that
you forgot to include in this email?  If so, what was it?

I don't believe I forgot anything, as there was a question in the original
post (which you did not include in your snip):

Are we seeing the demise of PG & Tcl? Should I advise my client to NOT
upgrade any systems running Tcl apps and stick with legacy OS's?

Not the demise of PG in general, but specifically of concern for the PG &
Tcl developers.

Is this not a constructive question, and is it NOT about PG in general? I
had asked in 2006 where to direct PG & Tcl questions and the answer was
PGSQL general. See:
http://www.postgresql.org/message-id/725602300608011149i2a88a5b2t96817b85318
1e...@mail.gmail.com

Is there now somewhere else I should be asking about this?


Well I think part of the problem is that we may be talking apples and 
oranges.  If I am following correctly there are two PG/Tcl software 
packages in play here.


1) PL/Tcl

http://www.postgresql.org/docs/9.2/interactive/pltcl.html

Working with Tcl in the database.

2) PGtcl

http://flightaware.github.com/Pgtcl/

Working with PG from Tcl

--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Parsing COPY ... WITH BINARY

2013-01-31 Thread Merlin Moncure
On Thu, Jan 31, 2013 at 8:47 AM, Leonardo M. Ramé l.r...@griensu.com wrote:
 I'm using this:

 COPY( select field1, field2, field3 from table ) TO 'C://Program Files/PostgreSql//8.4//data//output.dat' WITH BINARY

 To export some fields to a file, one of them is a ByteA field. Now, I
 need to read the file with a custom made program.

 How can I parse this file?

 BTW: I cannot export as CSV, because the binary data is modified when I
 do that.

This type of thing is probably better handled with a libpq-based C
application, if you're willing to write one.  All of this assumes you
don't want to decode your bytea from an encoded format such as hex.  Why
do you specifically need to dump in binary?
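If you do need to stay with binary, the file layout is fixed and documented (an 11-byte PGCOPY signature, then flags and a header extension, then per-tuple field counts and big-endian field lengths). As a rough sketch of the kind of custom reader being asked about, in Python rather than the libpq C route Merlin suggests (function and variable names are made up):

```python
import struct

SIGNATURE = b'PGCOPY\n\xff\r\n\x00'  # 11-byte file signature

def parse_copy_binary(data):
    """Parse bytes produced by COPY ... TO ... WITH BINARY.

    Returns a list of rows; each row is a list of raw field values
    (bytes objects), with None for SQL NULL.  Fields are not decoded
    further -- a bytea field arrives here as its raw bytes.
    """
    if data[:11] != SIGNATURE:
        raise ValueError('not a COPY BINARY stream')
    flags, ext_len = struct.unpack_from('>ii', data, 11)
    pos = 11 + 8 + ext_len          # skip flags + header extension
    rows = []
    while True:
        (nfields,) = struct.unpack_from('>h', data, pos)
        pos += 2
        if nfields == -1:           # a -1 field count marks the trailer
            break
        row = []
        for _ in range(nfields):
            (flen,) = struct.unpack_from('>i', data, pos)
            pos += 4
            if flen == -1:          # a -1 length means NULL
                row.append(None)
            else:
                row.append(data[pos:pos + flen])
                pos += flen
        rows.append(row)
    return rows
```

Reading the dump is then just `parse_copy_binary(open(path, 'rb').read())`; decoding each field (int4, text, bytea, ...) from its raw bytes is up to the caller.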

merlin




Re: [GENERAL] Pg & Tcl - is it dying out?

2013-01-31 Thread Carlo Stonebanks
 If I am following correctly there are two PG/Tcl software packages in
play here.

Yep, but I am JUST talking about PgTcl. 

There are three possible PgTcl options; two are from the same developer -
one in C, which doesn't work on Win 7 x64 (and possibly other Win platforms),
and a second in pure Tcl, which is platform-independent but which my initial
tests have running quite a bit slower.

The third is the flightaware package, which does not have a win32 DLL
binary. There is a Visual Studio project in the Win directory (looks like an
old Visual C 6.0 project which will not build) and I'm afraid I'm not
familiar enough with other C compilers to know how to get the make to work (in any
case, it appears to depend on a complete PostgreSQL source install - normal
for 'NIX, not for Windows). 

I cannot seem to get anything that will work on a Win 7 x64 client. Perhaps
the problem is elsewhere, but right now the same code, Tcl install, and libs
DO run on Windows Server 2003 and 2008 x64 machines, which is what makes
me question the OS. If anyone has PgTcl working on this platform, I'd love
to know about it.



-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Adrian Klaver
Sent: January 31, 2013 6:01 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Pg & Tcl - is it dying out?

On 01/31/2013 10:59 AM, Carlo Stonebanks wrote:
 Was there something constructive you might have been asking about 
 that
 you forgot to include in this email?  If so, what was it?

 I don't believe I forgot anything, as there was a question in the 
 original post (which you did not include in your snip):

 Are we seeing the demise of PG & Tcl? Should I advise my client to 
 NOT upgrade any systems running Tcl apps and stick with legacy OS's?

 Not the demise of PG in general, but specifically of concern for the 
 PG & Tcl developers.

 Is this not a constructive question, and is it NOT about PG in 
 general? I had asked in 2006 where to direct PG & Tcl questions and 
 the answer was PGSQL general. See:
 http://www.postgresql.org/message-id/725602300608011149i2a88a5b2t96817
 b85318
 1e...@mail.gmail.com

 Is there now somewhere else I should be asking about this?

Well I think part of the problem is that we may be talking apples and
oranges.  If I am following correctly there are two PG/Tcl software packages
in play here.

1) PL/Tcl

http://www.postgresql.org/docs/9.2/interactive/pltcl.html

Working with Tcl in the database.

2) PGtcl

http://flightaware.github.com/Pgtcl/

Working with PG from Tcl

--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Optimizing query?

2013-01-31 Thread Pavel Stehule
Hello

2013/1/31  haman...@t-online.de:

 Pavel Stehule wrote:

  Hi,
 
  I am trying to match items from 2 tables based on a common string.
  One is a big table which has one column with entries like XY123, ABC44, etc
  The table has an index on that column.
  The second table is, typically, much smaller
 
  select  from tab1, tab2 where tab1.code = tab2.code;
 
  This works fine and fast.
  Now, as a variant, I have some entries like XY423A, XY423B, GF55A, GF55D in the
  big table and want them to match XY423, GF55 in the second table
 
  Variants I have tried
 
  select   from tab1, tab2 where tab1.code ~ (tab2.code||'($|[A-Z])');
  select   from tab1, tab2 where tab1.code ~ ('^'||tab2.code||'($|[A-Z])');
 
 
  Have you tried the substring function?
 
  select   from tab1, tab2 where substring(tab1.code from 1 for 5) =
  tab2.code
 

 Hi Pavel, it was just by chance that a fixed-size substring would match the
 data at hand. It is more common to have a digit/letter (or vice versa)
 boundary or a hyphen there

 
  both take an enormous time. In the better case that I can subset (e.g. all
  candidates in table 2 share initial AX) I get back to manageable times by adding
    and tab1.code ~ '^AX'
  into the recipe. Actual runtime with about a million entries in tab1 and
  800 entries in tab2 is about 40 seconds.

 any join where result is related to some function result can be very
 slow, because estimation will be out and any repeated function
 evaluation is just expensive.

 I see the problem: apparently the ~ operator with a non-constant
 pattern is constantly recompiling the pattern.

 I wonder whether it would be possible to invent a prefix-match operator that
 approaches the performance of string equality. I noted in the past (not sure
 whether anything has changed in regex matching) that a constant leading part
 of a regex would improve performance, i.e. use an index scan to select
 possible candidates.

 You can try to use a functional index.
 
 create index on tab2 ((substring(tab1.code from 1 for 5)))


 What kind of trick is that - mixing two tables into a functional index?

no, that is not possible - you can instead create an auxiliary table and
build indexes over that table

but maybe https://github.com/dimitri/prefix can help
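One concrete shape of the auxiliary-table suggestion, as a hedged sketch (it assumes the codes are letters followed by digits, as in XY423A; the table and column names here are made up):

```sql
-- Materialize the variable-length prefix once, then join on plain
-- equality, which can use a btree index instead of evaluating a
-- regex per row pair.
CREATE TABLE tab1_aux AS
  SELECT code, substring(code from '^[A-Z]+[0-9]+') AS prefix
  FROM tab1;

CREATE INDEX ON tab1_aux (prefix);

SELECT a.code
FROM tab1_aux a
JOIN tab2 t ON a.prefix = t.code;
```

The auxiliary table has to be refreshed (or maintained by trigger) when tab1 changes, but the join itself then behaves like the fast equality join from the original question.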

Regards

Pavel

 What would the exact syntax be for that?

 Regards
 Wolfgang Hamann







