Re: [GENERAL] Convert mysql to postgresql

2011-08-11 Thread Håvard Wahl Kongsgård
try
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL


On Thu, Aug 11, 2011 at 7:32 AM, AI Rumman rumman...@gmail.com wrote:
 I have to convert some mysql queries to postgresql.
 Are there any good tools for this task?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres on SSD

2011-08-11 Thread Amitabh Kant
There have been several discussions about SSDs in recent months, although not
specific to Fusion-io drives.

See http://archives.postgresql.org/pgsql-general/2011-04/msg00460.php . You
can search the archives for more such references.


Amitabh

2011/8/11 Ondrej Ivanič ondrej.iva...@gmail.com

 Hi,

 2011/8/10 Tomas Vondra t...@fuzzy.cz:
  On 10 Srpen 2011, 1:17, Ondrej Ivanič wrote:
  - What needs to be changed at Postgres/Operating system level? The
  obvious one is to change random_page_cost (now: 2) and seq_page_cost
  (now: 4). What else should I look at?
 
  Are you sure about this? I'm not quite sure setting seq_page_cost=4 and
  random_page_cost=2 makes sense. Usually seq_page_cost is lower than
  random_page_cost, so I wonder why you have set it like that.

 Oops! Well spotted, Tomas! The actual values are:
 random_page_cost = 2
 seq_page_cost = 1
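
 For reference: on SSD storage these two settings are usually kept close
 together, since random reads are nearly as cheap as sequential ones. An
 illustrative postgresql.conf fragment (example values only, not a tuning
 recommendation):

     seq_page_cost = 1.0
     random_page_cost = 1.5    # lower than the HDD-oriented default of 4.0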

 --
 Ondrej Ivanic
 (ondrej.iva...@gmail.com)




[GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Siva Palanisamy
Hi All,

I understand that the copy and \copy commands in PostgreSQL work only for tables.
I want to export the data from various tables. Instead, I can create a view over
the list of tables. Can the copy or \copy commands be utilized to operate on
views directly? Please let me know.

Thanks and Regards,
Siva.



::DISCLAIMER::
---

The contents of this e-mail and any attachment(s) are confidential and intended 
for the named recipient(s) only.
It shall not attach any liability on the originator or HCL or its affiliates. 
Any views or opinions presented in
this email are solely those of the author and may not necessarily reflect the 
opinions of HCL or its affiliates.
Any form of reproduction, dissemination, copying, disclosure, modification, 
distribution and / or publication of
this message without the prior written consent of the author of this e-mail is 
strictly prohibited. If you have
received this email in error please delete it and notify the sender 
immediately. Before opening any mail and
attachments please check them for viruses and defect.

---


Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Andreas Kretschmer
Siva Palanisamy siv...@hcl.com wrote:

 Hi All,
 
  
 
 I understand that the copy and \copy commands in PostgreSQL work only for
 tables. I want to export the data from various tables. Instead, I can create
 a view over the list of tables. Can the copy or \copy commands be utilized to
 operate on views directly? Please let me know.

Sure, you can do that (with recent versions) with:

copy (select * from your_view) to ...
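
For example, a sketch (your_view and the paths are placeholders; a server-side
path must be writable by the postgres server process):

    -- server-side export; the path is resolved on the server
    COPY (SELECT * FROM your_view) TO '/tmp/your_view.csv' CSV HEADER;

    -- client-side equivalent from psql, written relative to the client
    \copy (SELECT * FROM your_view) TO 'your_view.csv' CSV HEADER

Note that the COPY (SELECT ...) form requires PostgreSQL 8.2 or later.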


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] Can't Bring the Former Primary up as a Standby

2011-08-11 Thread Sergey Levchenko
Hello!

I got:

root@reactor:~# invoke-rc.d postgresql start
Starting PostgreSQL 9.1 database server: main
The PostgreSQL server failed to start. Please check the log output:
2011-08-11 12:12:42 EEST LOG: database system was interrupted; last known up
at 2011-08-11 12:04:21 EEST
2011-08-11 12:12:42 EEST LOG: could not open file
pg_xlog/0001004A (log file 0, segment 74): No such
file or directory
2011-08-11 12:12:42 EEST LOG: invalid checkpoint record
2011-08-11 12:12:42 EEST FATAL: could not locate requir

This happens unless I:
1. cp recovery.done recovery.conf
2. change the host to the new primary in recovery.conf

Is that OK? Do I have to do that to bring the former primary up as a standby?
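
For reference, a minimal recovery.conf for a streaming standby might look
like this (host and user here are guesses based on the repmgr command below):

    standby_mode = 'on'
    primary_conninfo = 'host=10.0.1.123 port=5432 user=eps'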

But it doesn't help; I can't connect to PostgreSQL. Last log:

2011-08-11 12:46:02 EEST LOG:  shutting down
2011-08-11 12:46:02 EEST LOG:  restartpoint starting: shutdown
immediate
2011-08-11 12:46:02 EEST LOG:  restartpoint complete: wrote 0 buffers
(0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;
write=0.001 s, sync=0.000 s, total=0.029 s; sync files=0,
longest=0.000 s, average=0.000 s
2011-08-11 12:46:02 EEST LOG:  recovery restart point at 0/5320
2011-08-11 12:46:02 EEST LOG:  database system is shut down
2011-08-11 12:46:18 EEST LOG:  database system was shut down in
recovery at 2011-08-11 12:46:02 EEST
2011-08-11 12:46:18 EEST LOG:  entering standby mode
2011-08-11 12:46:18 EEST LOG:  consistent recovery state reached at
0/5378
2011-08-11 12:46:18 EEST LOG:  record with zero length at 0/5378
2011-08-11 12:46:18 EEST LOG:  streaming replication successfully
connected to primary
2011-08-11 12:46:18 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:18 EEST LOG:  incomplete startup packet
2011-08-11 12:46:19 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:19 EEST FATAL:  the database system is starting up
[... the same "connection received" / "FATAL: the database system is
starting up" pair repeats about twice per second through 12:46:24 ...]
2011-08-11 12:46:24 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:24 EEST FATAL:  the database system is starting up
2011-08-11 12:46:24 EEST LOG:  connection received: host=[local]
2011-08-11 12:46:24 EEST LOG:  incomplete startup packet

processes:

postgres 18696  1.2  1.0 926428 40688 ?S12:54   0:00 /usr/
lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main -c
config_file=/etc/postgresql/9.1/main/postgresql.conf
postgres 18697  0.0  0.0 926896  1832 ?Ss   12:54   0:00
postgres: startup process   waiting for
00010053
postgres 18698  0.0  0.0 926832  1812 ?Ss   12:54   0:00
postgres: writer
process
postgres 18699  0.0  0.0 937440  2848 ?Ss   12:54   0:00
postgres: wal receiver process   streaming
0/5378


All this happens after:

postgres@reactor:~$ repmgr -D /var/lib/postgresql/9.1/main -d pgbench -
p 5432 -U eps -R postgres --verbose --force standby clone 10.0.1.123
Opening configuration file: ./repmgr.conf
repmgr: directory /var/lib/postgresql/9.1/main exists but is not
empty
repmgr connecting to master database
repmgr connected to master, checking its state
Succesfully connected to primary. Current installation size is 182 MB
Starting backup...
standby clone: master control file '/media/postgresql/9.1/data/global/
pg_control'
rsync command line:  'rsync --archive --checksum --compress --progress
--rsh=ssh --delete postg...@10.0.1.123:/media/postgresql/9.1/data/
global/pg_control /var/lib/postgresql/9.1/main/global/.'
receiving incremental file list
pg_control
8192 100%7.81MB/s0:00:00 (xfer#1, to-check=0/1)
sent 102 bytes  received 234 bytes  672.00 bytes/sec
total size is 8192  speedup is 24.38
standby clone: master data directory '/media/postgresql/9.1/data'
rsync command line:  'rsync --archive --checksum --compress --progress
--rsh=ssh --delete --exclude=pg_xlog* --exclude=pg_control --

Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Siva Palanisamy
Hi Andreas,

I tried the command below. It failed. Please correct me.

\copy (select * from view1) to '/sample.csv' delimiters ',' csv header;
ERROR: \copy: parse error at "select"

Thanks and Regards,
Siva.


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Andreas Kretschmer
Sent: Thursday, August 11, 2011 2:23 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Copy command to handle view for my export requirement

Siva Palanisamy siv...@hcl.com wrote:

 Hi All,



 I understand that the copy and \copy commands in PostgreSQL work only for
 tables. I want to export the data from various tables. Instead, I can create
 a view over the list of tables. Can the copy or \copy commands be utilized to
 operate on views directly? Please let me know.

Sure, you can do that (with recent versions) with:

copy (select * from your_view) to ...


Andreas


Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Siva Palanisamy
Hi Andreas,

FYI, I am using PostgreSQL 8.1.4. 

Thanks and Regards,
Siva.


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Siva Palanisamy
Sent: Thursday, August 11, 2011 4:48 PM
To: Andreas Kretschmer; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Copy command to handle view for my export requirement

Hi Andreas,

I tried the command as below. It failed. Please correct me.

\copy (select * from view1) to '/sample.csv' delimiters ',' csv header;
ERROR: \copy: parse error at select

Thanks and Regards,
Siva.




Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Raghavendra
No, you need to be on a more recent version, as Andreas said.
---
 Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Thu, Aug 11, 2011 at 4:51 PM, Siva Palanisamy siv...@hcl.com wrote:

 Hi Andreas,

 FYI, I am using PostgreSQL 8.1.4.

 Thanks and Regards,
 Siva.





Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Boszormenyi Zoltan
Hi,

COPY (SELECT ...) appeared in 8.2.x so you need to upgrade.

Best regards,
Zoltán Böszörményi

On 2011-08-11 13:21, Siva Palanisamy wrote:
 Hi Andreas,

 FYI, I am using PostgreSQL 8.1.4. 

 Thanks and Regards,
 Siva.





-- 
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Siva Palanisamy
Hi,

I have installed the Windows version of PostgreSQL 9.0.4 on my Windows machine
to test the new copy command as detailed in the e-mails below. When I run the
command in the SQL editor, I get a permission error, even though I am running
as an administrator.

COMMAND: copy (select * from employee) to 'C:/emp.csv'
ERROR:  could not open file "C:/emp.csv" for writing: Permission denied
** Error **
ERROR: could not open file "C:/emp.csv" for writing: Permission denied
SQL state: 42501

COMMAND: \copy (select * from employee) to 'C:/emp.csv'
ERROR:  syntax error at or near "\"
LINE 1: \copy (select * from employee) to 'C:/emp.csv'
        ^
** Error **
ERROR: syntax error at or near "\"
SQL state: 42601

Please point out where I am going wrong. FYI, I am running under administrator
accounts for both the Windows login and PostgreSQL.

Thanks and Regards,
Siva.


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Boszormenyi Zoltan
Sent: Thursday, August 11, 2011 5:11 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Copy command to handle view for my export requirement

Hi,

COPY (SELECT ...) appeared in 8.2.x so you need to upgrade.

Best regards,
Zoltán Böszörményi



Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Raghavendra

 COMMAND: copy (select * from employee) to 'C:/emp.csv'
 ERROR:  could not open file C:/emp.csv for writing: Permission denied
 ** Error **
 ERROR: could not open file C:/emp.csv for writing: Permission denied
 SQL state: 42501

 COMMAND: \copy (select * from employee) to 'C:/emp.csv'
 ERROR:  syntax error at or near \
 LINE 1: \copy (select * from employee) to 'C:/emp.csv'
^
 ** Error **
 ERROR: syntax error at or near \
 SQL state: 42601

 Please correct me where I am going wrong. FYI, I am running under the
 administrator accounts of both Windows Login and PostgreSQL.


Two things:
1. You need proper permissions on the directory where the .csv file is being
created.
2. On Windows, use the form below:
postgres=# \copy (select * from employee) to 'C:\\emp.sql'

 Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Craig Ringer

On 11/08/2011 7:56 PM, Siva Palanisamy wrote:

 FYI, I am using PostgreSQL 8.1.4.

Argh, oh god, why?!

That version is *totally* unsupported on Windows. Not only that, but 
you're running an ancient point release - you are missing *19* patch 
releases' worth of bug fixes. The latest point release is 8.1.23!


Here is a list of all the fixes you are missing out on:

  http://www.postgresql.org/docs/8.1/static/release.html


I have installed Windows version of Postgres 9.0.4 in my windows machine to 
test the new copy command as detailed in the below e-mails. When I run the 
command in SQL Editor, I got permission error. But I am running as an 
administrator.

COMMAND: copy (select * from employee) to 'C:/emp.csv'
ERROR:  could not open file C:/emp.csv for writing: Permission denied
** Error **
ERROR: could not open file C:/emp.csv for writing: Permission denied
SQL state: 42501


The COPY command (as distinct from \copy) runs on the server side, so it 
has the permissions of the postgres user. You must save the file 
somewhere the postgres user has write access: either create a folder 
and give full control to the postgres user, or write the export 
within the existing PostgreSQL data directory.
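
For example (the folder name is illustrative): create C:\pgexport, grant the
postgres service account write access to it, and then:

    COPY (SELECT * FROM employee) TO 'C:/pgexport/emp.csv' CSV HEADER;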



COMMAND: \copy (select * from employee) to 'C:/emp.csv'
ERROR:  syntax error at or near \
LINE 1: \copy (select * from employee) to 'C:/emp.csv'


You are not using psql. \copy is a psql command. I don't think it's 
supported by PgAdmin III, though I could be wrong.
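
From psql itself the export runs client-side, so server file permissions do
not matter; a sketch (the connection options are illustrative):

    psql -U postgres -d yourdb -c "\copy (select * from employee) to 'emp.csv' csv header"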




--
Craig Ringer



Re: [GENERAL] Copy command to handle view for my export requirement

2011-08-11 Thread Raghavendra


 You are not using psql. \copy is a psql command. I don't think it's
 supported by PgAdmin III, though I could be wrong.


Right, '\copy' is not supported in PgAdmin III.

--Raghav


Re: [GENERAL] streaming replication: one problem several questions

2011-08-11 Thread Pedro Sam
Do your machines have the same architecture?  (64 bit vs 32 bit)

-
This transmission (including any attachments) may contain confidential 
information, privileged material (including material protected by the 
solicitor-client or other applicable privileges), or constitute non-public 
information. Any use of this information by anyone other than the intended 
recipient is prohibited. If you have received this transmission in error, 
please immediately reply to the sender and delete this information from your 
system. Use, dissemination, distribution, or reproduction of this transmission 
by unintended recipients is not authorized and may be unlawful.



Re: [GENERAL] streaming replication: one problem several questions

2011-08-11 Thread Lonni J Friedman
On Thu, Aug 11, 2011 at 8:17 AM, Pedro Sam pe...@rim.com wrote:
 Do your machines have the same architecture?  (64 bit vs 32 bit)

Yes, they're all Fedora15-x86_64.


-- 
~
L. Friedman                                    netll...@gmail.com
LlamaLand                       https://netllama.linux-sxs.org



[GENERAL] Final 48 for PgWest CFP

2011-08-11 Thread Joshua D. Drake

Hello,

We are in the final 48 hours of the CFP for PgWest. Let's get those 
talks in.


https://www.postgresqlconference.org/talk_types

Sincerely,

Joshua D. Drake

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579



Re: [GENERAL] is max connections in a database table somewhere

2011-08-11 Thread Diego Augusto Molina
2011/8/10, Greg Smith g...@2ndquadrant.com:
 On 08/10/2011 02:46 PM, Geoffrey Myers wrote:
 Is the max connections value in a system table somewhere?

The per-database connection limit lives in pg_catalog.pg_database: the column
is named datconnlimit and is of type int4. See
http://www.postgresql.org/docs/9.0/interactive/catalog-pg-database.html.
Note that datconnlimit is distinct from the server-wide max_connections
setting, which is a configuration parameter rather than a catalog value.

But this would be the appropriate way of getting the value:
 SELECT CAST(current_setting('max_connections') AS integer);

You should use that form rather than digging through the catalogs.
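
For example, the two values side by side (a datconnlimit of -1 means no
per-database limit):

    SELECT current_setting('max_connections')::integer AS server_limit,
           datconnlimit AS db_limit
    FROM pg_catalog.pg_database
    WHERE datname = current_database();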
-- 
Diego Augusto Molina
diegoaugustomol...@gmail.com

ES: Por favor, evite adjuntar documentos de Microsoft Office. Serán
desestimados.
EN: Please, avoid attaching Microsoft Office documents. They shall be discarded.
LINK: http://www.gnu.org/philosophy/no-word-attachments.html



[GENERAL] FK violation on (emtpy) parent table

2011-08-11 Thread Louis-David Mitterrand
Hi,

I have an empty parent 'price' table with several partitioned child
tables that contain the actual data.

How can I reference the parent 'price' table in a FK? When I try I get a
FK violation. Is that expected behavior? Is there another way to do it?

Thanks,



[GENERAL] Filling Missing Primary Key Values

2011-08-11 Thread Rich Shepard

  I've a table (from a client, not created here) with a column that should
be the primary key, but not all rows have a value for this attribute. The
column format is VARCHAR(12) and has a variety of values, such as 96-A000672
and 9612-0881 (probably assigned by different analytical laboratories).

  A simple sequence of numbers would do the job of replacing NULL values.
What is the most parsimonious way to replace NULLs with unique values for
this column? I also need to add such values for a new set of data that I'm
in the process of translating from spreadsheet format to the table
structure.

Rich




Re: [GENERAL] Filling Missing Primary Key Values

2011-08-11 Thread Chris Travers
On Thu, Aug 11, 2011 at 11:47 AM, Rich Shepard rshep...@appl-ecosys.com wrote:
  I've a table (from a client, not created here) with a column that should
 be the primary key, but not all rows have a value for this attribute. The
 column format is VARCHAR(12) and has a variety of values, such as 96-A000672
 and 9612-0881 (probably assigned by different analytical laboratories).

  A simple sequence of numbers would do the job of replacing NULL values.
 What is the most parsimonious way to replace NULLs with unique values for
 this column? I also need to add such values for a new set of data that I'm
 in the process of translating from spreadsheet format to the table
 structure.

The simplest seems to me to be a sequence and use nextval() to
populate the null values.  The major advantage would be that the
sequence could stay around in case you need it again.  So for example:

create sequence my_varchar_values;

UPDATE my_table set my_varchar =
nextval('my_varchar_values')::varchar(12) where my_varchar IS NULL;

You could also use windowing functions to get rid of the sequence, but
the queries become a lot more complicated.  For example, see
http://stackoverflow.com/questions/4358613/using-window-functions-in-an-update-statement

Best Wishes,
Chris Travers



Re: [GENERAL] Filling Missing Primary Key Values

2011-08-11 Thread Rich Shepard

On Thu, 11 Aug 2011, Chris Travers wrote:


The simplest seems to me to be a sequence and use nextval() to populate
the null values. The major advantage would be that the sequence could stay
around in case you need it again. So for example:

create sequence my_varchar_values;



UPDATE my_table set my_varchar =
nextval('my_varchar_values')::varchar(12) where my_varchar IS NULL;


Chris,

  I was wondering if this was the best approach since I have new data to add
to the table. Don't need a starting value, eh?

Many thanks,

Rich



Re: [GENERAL] Filling Missing Primary Key Values

2011-08-11 Thread Chris Travers
On Thu, Aug 11, 2011 at 12:34 PM, Rich Shepard rshep...@appl-ecosys.com wrote:
 On Thu, 11 Aug 2011, Chris Travers wrote:

 The simplest seems to me to be a sequence and use nextval() to populate
 the null values. The major advantage would be that the sequence could stay
 around in case you need it again. So for example:

 create sequence my_varchar_values;

 UPDATE my_table set my_varchar =
 nextval('my_varchar_values')::varchar(12) where my_varchar IS NULL;

 Chris,

  I was wondering if this was the best approach since I have new data to add
 to the table. Don't need a starting value, eh?


TBH, it's the approach I would use.  It creates one additional
database object but the queries are simpler and thus more
maintainable.

Best Wishes,
Chris Travers



Re: [GENERAL] Filling Missing Primary Key Values

2011-08-11 Thread David Johnston
On Aug 11, 2011, at 15:08, Chris Travers chris.trav...@gmail.com wrote:

 On Thu, Aug 11, 2011 at 11:47 AM, Rich Shepard rshep...@appl-ecosys.com 
 wrote:
  I've a table (from a client, not created here) with a column that should
 be the primary key, but not all rows have a value for this attribute. The
 column format is VARCHAR(12) and has a variety of values, such as 96-A000672
 and 9612-0881 (probably assigned by different analytical laboratories).
 
  A simple sequence of numbers would do the job of replacing NULL values.
 What is the most parsimonious way to replace NULLs with unique values for
 this column? I also need to add such values for a new set of data that I'm
 in the process of translating from spreadsheet format to the table
 structure.
 

The technical aspect is covered, but consider using one or two characters as a 
prefix related to the data source.  You've already taken the hit for using a 
text data type so you might as well take advantage of it.  Even if you have a 
source field this can be useful.  The only constraint is that you limit the 
number of sequence values you can use (per source).

If you use the sequence you can always reset it between imports.
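A sketch of that idea (the table, column, sequence, and 'LB' prefix here are 
all hypothetical; adjust to your own schema):

```sql
CREATE SEQUENCE import_seq;

-- 'LB' is a hypothetical two-character prefix identifying the source lab.
UPDATE my_table
   SET sample_id = 'LB-' || nextval('import_seq')::text
 WHERE sample_id IS NULL;

-- Reset before importing the next batch from a different source:
SELECT setval('import_seq', 1, false);
```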

David J.


Re: [GENERAL] Filling Missing Primary Key Values

2011-08-11 Thread Steve Atkins

On Aug 11, 2011, at 12:34 PM, Rich Shepard wrote:

 On Thu, 11 Aug 2011, Chris Travers wrote:
 
 The simplest seems to me to be a sequence and use nextval() to populate
 the null values. The major advantage would be that the sequence could stay
 around in case you need it again. So for example:
 
 create sequence my_varchar_values;
 
 UPDATE my_table set my_varchar =
 nextval('my_varchar_values')::varchar(12) where my_varchar IS NULL;
 
 Chris,
 
  I was wondering if this was the best approach since I have new data to add
 to the table. Don't need a starting value, eh?

This will fail if any of the existing values are integers in the range that
you're inserting - and it may fail in the future, as you add new records
if they clash with existing entries.

It's still a good way to go, but might need some care or some tweaking -
adding a prefix, maybe.

Cheers,
  Steve




Re: [GENERAL] Is max connections in a table somewhere?

2011-08-11 Thread Geoffrey Myers

Adrian Klaver wrote:

On Wednesday, August 10, 2011 11:47:25 am Geoffrey Myers wrote:

Is max connections in any table in the database I can access?


SELECT current_setting('max_connections');
 current_setting 
-----------------
 100



Thanks for all the responses, folks.  Obviously, there's more than one 
way to skin this cat.



--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson



Re: [GENERAL] is max connections in a database table somewhere

2011-08-11 Thread Geoffrey Myers

Greg Smith wrote:

On 08/10/2011 02:46 PM, Geoffrey Myers wrote:

Is the max connections value in a system table somewhere?


If you intend to do anything with the value you probably want one of 
these forms:


SELECT CAST(current_setting('max_connections') AS integer);
SELECT CAST(setting AS integer) FROM pg_settings WHERE 
name='max_connections';


The setting comes back as a text field when using current_setting or the 
pg_settings view (which isn't a real table; under the hood it's calling 
a system function).

Actually, just pulling it out of the database to display it in a report.


--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson



Re: [GENERAL] Filling Missing Primary Key Values

2011-08-11 Thread Rich Shepard

On Thu, 11 Aug 2011, Steve Atkins wrote:


This will fail if any of the existing values are integers in the range that
you're inserting - and it may fail in the future, as you add new records
if they clash with existing entries.


Steve/Chris/Dave:

  I had not looked in detail at that column before. Having just done this, I
see that it's really a laboratory number, not a unique sample ID. So, I
renamed sample_id to lab_nbr, added a sample_id column, created the sequence
sample_id_seq, updated the table with it, then added the constraint that
sample_id is the primary key.
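In SQL, those steps might look roughly like this (a sketch; the actual table
name and column types will differ):

```sql
ALTER TABLE samples RENAME COLUMN sample_id TO lab_nbr;
ALTER TABLE samples ADD COLUMN sample_id integer;

CREATE SEQUENCE sample_id_seq;
UPDATE samples SET sample_id = nextval('sample_id_seq');

-- New rows pick up the sequence automatically from now on:
ALTER TABLE samples
    ALTER COLUMN sample_id SET DEFAULT nextval('sample_id_seq');
ALTER TABLE samples ADD PRIMARY KEY (sample_id);
```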

  Thank you all very much!

Rich



[GENERAL] Regex Query Index question

2011-08-11 Thread Naoko Reeves
Hello,
I have phone numbers stored in the database as follows:

[123) 456-7890

(123) 456-7890

When I query like this:

SELECT * FROM phone

WHERE phone_number ~ ('^\[123' || '[-\s\)]{0,2}' || '456' || '[-\s\)]{0,2}'
|| '7890')

it uses the index, but if I query like this (notice the first character is
an open parenthesis instead of an open square bracket):

SELECT phn_fk_key FROM phn WHERE

phn_fk_table = 14

AND llx_decrypt(phn_phone_enc) ~ ('^\(123' || '[-\s\)]{0,2}' || '456' ||
'[-\s\)]{0,2}' || '7890')

It doesn't use the index.

A co-worker suggested using chr(40) instead, so I tried this:

SELECT phn_fk_key FROM phn WHERE

phn_fk_table = 14

AND llx_decrypt(phn_phone_enc) ~ ('^\' || chr(40) || '123' || '[-\s\)]{0,2}'
|| '456' || '[-\s\)]{0,2}' || '7890')

No success...

Also, { and the period don't seem to use the index either, but } ) [ ] $ #
do.

Could you point me in the right direction, please?


Thank you very much for your time in advance.


Naoko Reeves


Re: [HACKERS] [GENERAL] Dropping extensions

2011-08-11 Thread Marc Munro
On Sat, 2011-07-30 at 22:46 +0200, Dimitri Fontaine wrote:
 Tom Lane t...@sss.pgh.pa.us writes:
  Hmm.  I don't think we have any code in there to prohibit the same
  object from being made a member of two different extensions ... but this
  example suggests that maybe we had better check that.
 
 I see you did take care of that, thank you!
 
   
 http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=988620dd8c16d77f88ede167b22056176324

I thought I'd document how I fixed Veil's drop extension issue.

The problem is that veil_init() needs to be able to do different things
depending on how Veil has been extended.  In the past, we simply
re-wrote veil_init() for the application.  Now that we have proper
extensions this is no longer viable.

So, I have modified veil_init() to call functions that have been defined
in a configuration table.  An extension can now register its own init
functions by inserting their details into the config table.

This is almost perfect, except that when an extension is dropped, the
inserted records must be deleted.

We achieve this by creating a new config table for each extension, which
inherits from the veil config table.  When veil queries its config
table, it sees the inherited tables too, and can find their init
functions.  When the extension is dropped, the inherited table is also
dropped and veil_init() reverts to its previous behaviour.

Yay.
__
Marc




Re: [GENERAL] Regex Query Index question

2011-08-11 Thread Tom Lane
Naoko Reeves naokoree...@gmail.com writes:
 I have query phone number in database as follows:
 [123) 456-7890
 (123) 456-7890

 When I query like this:

 SELECT * FROM phone

 WHERE phone_number ~ ('^\[123' || '[-\s\)]{0,2}' || '456' || '[-\s\)]{0,2}'
 || '7890')

 it use Index but if I query like this (notice first character is
 open parenthesis instead of open square blacket ) :

 SELECT phn_fk_key FROM phn WHERE

 phn_fk_table = 14

 AND llx_decrypt(phn_phone_enc) ~ ('^\(123' || '[-\s\)]{0,2}' || '456' ||
 '[-\s\)]{0,2}' || '7890')

 It doesn't use Index

Probably it thinks the index isn't selective enough for that case.  How
many entries are there starting with (123?

(BTW, I assume you've got standard_conforming_strings turned on, else
there are some other issues with these backslashes ...)

regards, tom lane



Re: [GENERAL] Regex Query Index question

2011-08-11 Thread Naoko Reeves
Tom,
Thank you for your quick reply. Data starting with (123 returns only 28
records, whereas phone numbers starting with [123 return 1.
I changed the data so that both return 1 row.

The query with (999 takes about 30 seconds (30983 ms) without the index.
The one with [999 takes about 28 ms with the index.

Yes, standard_conforming_strings is ON.
Also, I forgot to mention the version:
select version(): PostgreSQL 8.4.6 on i386-apple-darwin, compiled by GCC
i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5370),
32-bit

Thank you very much for your time.

Naoko Reeves

On Thu, Aug 11, 2011 at 3:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Naoko Reeves naokoree...@gmail.com writes:
  I have query phone number in database as follows:
  [123) 456-7890
  (123) 456-7890

  When I query like this:

  SELECT * FROM phone

  WHERE phone_number ~ ('^\[123' || '[-\s\)]{0,2}' || '456' ||
 '[-\s\)]{0,2}'
  || '7890')

  it use Index but if I query like this (notice first character is
  open parenthesis instead of open square blacket ) :

  SELECT phn_fk_key FROM phn WHERE

  phn_fk_table = 14

  AND llx_decrypt(phn_phone_enc) ~ ('^\(123' || '[-\s\)]{0,2}' || '456' ||
  '[-\s\)]{0,2}' || '7890')

  It doesn't use Index

 Probably it thinks the index isn't selective enough for that case.  How
 many entries are there starting with (123?

 (BTW, I assume you've got standard_conforming_strings turned on, else
 there are some other issues with these backslashes ...)

regards, tom lane




-- 
Naoko Reeves
http://www.anypossibility.com/


[GENERAL] COPY from .csv File and Remove Duplicates

2011-08-11 Thread Rich Shepard

  A table has a sequence to generate a primary key for inserted records that
have NULLs in that column.

  I have a .csv file of approximately 10k rows to copy into this table. My
two questions which have not been answered by reference to my postgres
reference book or Google searches are:

  1) Will the sequence automatically add the nextval() to each new record as
the copy command runs?

  2) Many of these rows almost certainly are already in the table. I would
like to remove duplicates either during the COPY command or immediately
after. I'm considering copying the new data into a clone of the table then
running a SELECT to add only those rows in the new cloned table to the
existing table.

  Suggestions on getting these data in without duplicating existing rows
will be very helpful.

TIA,

Rich



[GENERAL] suggestions for archive_command to a remote standby

2011-08-11 Thread John DeSoi
rsync seems to be suggested in a number of references for the archive_command 
when copying WAL files to another server. But the documentation states in bold 
letters that the command should refuse to overwrite existing files, *and that 
it returns nonzero status in this case*. You can keep rsync from overwriting 
files, but I don't see any options for generating an error if the file exists.
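Something like the following is what I have in mind: a small wrapper that
refuses to overwrite and returns nonzero itself (an untested sketch;
ARCHIVE_DIR stands in for the real archive location):

```shell
# archive_command wrapper logic as a shell function: refuse to overwrite
# an existing archived WAL file and return nonzero status in that case.
# In postgresql.conf this would be wired up roughly as:
#   archive_command = 'archive_wal.sh %p %f'
# ARCHIVE_DIR is a placeholder for the real archive destination.
archive_wal() {
    src="$1"                      # %p: path of the WAL file to archive
    dest="$ARCHIVE_DIR/$2"        # %f: file name only
    if [ -e "$dest" ]; then
        echo "archive_wal: $dest already exists, refusing to overwrite" >&2
        return 1                  # nonzero tells PostgreSQL archiving failed
    fi
    cp "$src" "$dest"
}
```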

Anyone care to share a method or script for handling this correctly with rsync 
or some other common utility?

Thanks!


John DeSoi, Ph.D.







Re: [GENERAL] suggestions for archive_command to a remote standby

2011-08-11 Thread Rich Shepard

On Thu, 11 Aug 2011, John DeSoi wrote:


rsync seems to be suggested in a number of references for the
archive_command when copying WAL files to another server. But the
documentation states in bold letters that the command should refuse to
overwrite existing files, *and that it returns nonzero status in this
case*. You can keep rsync from overwriting files, but I don't see any
options for generating an error if the file exists.


John,

  I've not followed this thread, but I think you misunderstand rsync. I use
it (via dirvish) to back up files every night to an external drive (all
files on the server) and to copy files between my server/workstation and my
laptop computer.

  If I change a file on the workstation, then want the latest version on the
laptop I run rsync with the -avz options and I get the file revised.

  Of course, I may be completely off base here with your needs.

Rich




Re: [GENERAL] Regex Query Index question

2011-08-11 Thread David Johnston


On Aug 11, 2011, at 18:26, Naoko Reeves naokoree...@gmail.com wrote:

 Hello,
 I have query phone number in database as follows:
 [123) 456-7890
 
 (123) 456-7890
 
Store phone numbers without formatting... the data is the numbers themselves; 
the formatting is presentation.
 When I query like this:
 
 SELECT * FROM phone 
 
 WHERE phone_number ~ ('^\[123' || '[-\s\)]{0,2}' || '456' || '[-\s\)]{0,2}' 
 || '7890')
 
 it use Index but if I query like this (notice first character is open 
 parenthesis instead of open square blacket ) :
 
 SELECT phn_fk_key FROM phn WHERE 
 
 phn_fk_table = 14 
 
 AND llx_decrypt(phn_phone_enc) ~ ('^\(123' || '[-\s\)]{0,2}' || '456' || 
 '[-\s\)]{0,2}' || '7890')
 
 It doesn't use Index
 
The left sides of the two where clauses are different fields/expressions.  Since 
you do not specify what your table and indexes look like, your problem is 
impossible to solve, but it likely has nothing to do with the regex.  Keep in 
mind, however, that an index can only be used if the pattern is fully anchored. 
With alternation in the regex you want the ^ outside of the part that uses |, 
otherwise only the first expression ends up being anchored.  E.g., '^(a|b)' 
!= '^a|b'. The first one matches a string that starts with a or b, whereas the 
second matches a string that starts with a or contains b anywhere in the 
string.  The second one cannot use the index since it is not guaranteed to be 
anchored at the start of the string.
 co-worker suggested me to use chr(40) instead so I tried this:
 
 SELECT phn_fk_key FROM phn WHERE 
 
 phn_fk_table = 14 
 
 AND llx_decrypt(phn_phone_enc) ~ ('^\' || chr(40) || '123' || '[-\s\)]{0,2}' 
 || '456' || '[-\s\)]{0,2}' || '7890')
 
 No success...
 
 Also { and period doesn't seems to use index either but } ) [ ] $ # 
 works.Could you guide me to right direction for me please?
 
Particularly with RegEx you want to tell people what you are trying to do and 
not just give the expressions themselves.

Not testing here but... and ignore whitespace

'^( [ \[\( ]? \s* \d{3} \s* [ -\s\]\) ] \d{3} [ -\s ] \d{4} )$'

The above should match both of your samples and use the index on the regular 
phone column. If you want to store encrypted and search the unencrypted you 
have to create a functional index.  See documentation for syntax and 
requirements.

In this case you can replace the \d{n} with your desired search strings.

It would be a lot simpler if you strip out the non-numbers, via functional 
index if needed, and perform an equality string search.  The question becomes, 
using the example data above, what happens if two people have the same phone 
number with only the format being different.  The answer is the difference 
between a unique index and a non-unique one...

example: create index name on table (clean_and_decrypt_phone(enc_phone))

Where clean_and_decrypt_phone(enc_phone) = clean_phone( search_string )

This can be done without changing columns, only indexes and queries.
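A sketch of that approach (clean_phone is a hypothetical helper; a functional
index also requires llx_decrypt itself to be declared IMMUTABLE):

```sql
-- Hypothetical helper that strips everything but digits:
CREATE FUNCTION clean_phone(text) RETURNS text AS
$$ SELECT regexp_replace($1, '[^0-9]', '', 'g') $$
LANGUAGE sql IMMUTABLE;

-- Functional index on the cleaned, decrypted number:
CREATE INDEX phn_clean_idx
    ON phn (clean_phone(llx_decrypt(phn_phone_enc)));

-- Equality search instead of a regex; formatting no longer matters:
SELECT phn_fk_key FROM phn
 WHERE phn_fk_table = 14
   AND clean_phone(llx_decrypt(phn_phone_enc)) = clean_phone('(123) 456-7890');
```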

David J.

Re: [GENERAL] COPY from .csv File and Remove Duplicates

2011-08-11 Thread David Johnston
On Aug 11, 2011, at 19:13, Rich Shepard rshep...@appl-ecosys.com wrote:

  A table has a sequence to generate a primary key for inserted records with
 NULLs in that column.
 
  I have a .csv file of approximately 10k rows to copy into this table. My
 two questions which have not been answered by reference to my postgres
 reference book or Google searches are:
 
  1) Will the sequence automatically add the nextval() to each new record as
 the copy command runs?
 
  2) Many of these rows almost certainly are already in the table. I would
 like to remove duplicates either during the COPY command or immediately
 after. I'm considering copying the new data into a clone of the table then
 running a SELECT to add only those rows in the new cloned table to the
 existing table.
 
  Suggestions on getting these data in without duplicating existing rows
 will be very helpful.
 

If you have duplicates with matching real keys, inserting into a staging table 
and then moving new records to the final table is your best option (in general 
it is better to do a two-step with a staging table, since you can readily use 
PostgreSQL to perform any intermediate translations).  As for the import itself, 
I believe that if the column with the sequence is present in the csv, the 
sequence will not be used and, if no value is present, a null will be stored 
for that column, causing any not-null constraint to throw an error.  In this 
case I would just import the data to a staging table without any kind of 
artificial key, just the true key, and then during the merge with the live 
table you simply omit the pk field from the insert statement and the sequence 
will kick in at that point.
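For instance (a sketch; the table, column, and file names are placeholders):

```sql
-- Staging table mirrors the CSV layout, without the serial pk column:
CREATE TEMP TABLE staging (LIKE maintable INCLUDING DEFAULTS);
ALTER TABLE staging DROP COLUMN id;   -- 'id' stands in for the serial pk

COPY staging FROM '/path/to/data.csv' WITH CSV HEADER;

-- The pk is omitted from the insert, so the sequence default kicks in:
INSERT INTO maintable (col1, col2)
SELECT col1, col2 FROM staging;
```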

David J.




Re: [GENERAL] Regex Query Index question

2011-08-11 Thread Tom Lane
Naoko Reeves naokoree...@gmail.com writes:
 Also forgot to mentioned the version:
 select version()  PostgreSQL 8.4.6 on i386-apple-darwin, compiled by GCC
 i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5370),

Oh --- there's your problem.  In 8.4 and earlier, we don't trust \( to be
a literal character in a regex pattern, because it's not a literal if
you have regex_flavor set to 'basic'.  9.0 and up removed that option,
so it works as-expected in newer versions.

regards, tom lane



Re: [GENERAL] COPY from .csv File and Remove Duplicates

2011-08-11 Thread Rich Shepard

On Thu, 11 Aug 2011, David Johnston wrote:


If you have duplicates with matching real keys inserting into a staging
table and then moving new records to the final table is your best option
(in general it is better to do a two-step with a staging table since you
can readily use Postgresql to perform any intermediate translations) As
for the import itself,


David,

  I presume what you call a staging table is what I refer to as a copy of
the main table, but with no key attribute.

  Writing the SELECT statement to delete from the staging table those rows
that already exist in the main table is where I'm open to suggestions.


In this case I would just import the data to a staging table without any
kind of artificial key, just the true key,


  There is no true key, only an artificial key so I can ensure that rows are
unique. That's in the main table with the 50K rows. No key column in the
.csv file.

Thanks,

Rich





Re: [GENERAL] Regex Query Index question

2011-08-11 Thread David Johnston
Now that I read more closely, the alternation is actually concatenation.  My 
point still stands, but your issue is that you have not created a functional 
index on the decryption result of the encrypted phone number.  PostgreSQL does 
not know that the decrypted phone number is equivalent to the unencrypted 
field.  It can only look at expressions to determine whether an index is usable, 
not values.  A table can have more than one index.

David J. 

On Aug 11, 2011, at 19:58, David Johnston pol...@yahoo.com wrote:

 
 
 Not testing here but... and ignore whitespace
 
 '^( [ \[\( ]? \s* \d{3} \s* [ -\s\]\) ] \d{3} [ -\s ] \d{4} )$'
 
 Some tweaks needed but seriously consider dropping RegEx and going the 
 functional index route.
 
 '^( [ \[\( ]? \s* \d{3} \s* [ -\s\]\) ] \s* \d{3} \s* [ -\s ] \s* \d{4} )$'
 
 Added some extra white-space checking but again not tested.
 
 You can probably find better/more flexible expressions online.
 
 David J.



Re: [GENERAL] Regex Query Index question

2011-08-11 Thread David Johnston

 
 Not testing here but... and ignore whitespace
 
 '^( [ \[\( ]? \s* \d{3} \s* [ -\s\]\) ] \d{3} [ -\s ] \d{4} )$'
 
Some tweaks needed but seriously consider dropping RegEx and going the 
functional index route.

 '^( [ \[\( ]? \s* \d{3} \s* [ -\s\]\) ] \s* \d{3} \s* [ -\s ] \s* \d{4} )$'

Added some extra white-space checking but again not tested.

You can probably find better/more flexible expressions online.

David J.


Re: [GENERAL] Postgres on SSD

2011-08-11 Thread Ondrej Ivanič
Hi,

2011/8/11 Amitabh Kant amitabhk...@gmail.com:
 There have been several discussions for SSD in recent months although not
 specific to Fusion IO drives.

 See http://archives.postgresql.org/pgsql-general/2011-04/msg00460.php . You
 can search the archives for more such reference.

I read this one several days ago, but the discussion turned into a
flamewar about SSD longevity...

Thanks,
-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



Re: [GENERAL] COPY from .csv File and Remove Duplicates

2011-08-11 Thread Craig Ringer

On 12/08/2011 7:13 AM, Rich Shepard wrote:


I have a .csv file of approximately 10k rows to copy into this table. My
two questions which have not been answered by reference to my postgres
reference book or Google searches are:

1) Will the sequence automatically add the nextval() to each new record as
the copy command runs?


No, AFAIK COPY input needs to match the table structure and can't have 
default fields etc.



2) Many of these rows almost certainly are already in the table. I would
like to remove duplicates either during the COPY command or immediately
after. I'm considering copying the new data into a clone of the table then
running a SELECT to add only those rows in the new cloned table to the
existing table.


Rather than removing them after, I'd recommend COPYing into a temporary 
staging table, then doing an


INSERT INTO realtable SELECT stagingtable.* FROM stagingtable LEFT OUTER JOIN 
realtable ON (conditions) WHERE realtable.primarykey IS NULL;


... where conditions are whatever rules you use to decide that a row 
in the real table is the same as a row in the staging table.


In other words: Only insert a row into the final destination table if it 
does not already exist in the final destination table.


--
Craig Ringer



Re: [GENERAL] COPY from .csv File and Remove Duplicates

2011-08-11 Thread David Johnston
  There is no true key, only an artificial key so I can ensure that rows
are
 unique. That's in the main table with the 50K rows. No key column in the
 .csv file.

If you have no true key then you have no way to ensure uniqueness.  By
adding an artificial key, two records that are otherwise duplicates would now
be considered unique.

Since you have not given data/structure for either the CSV or Main Table
more specific help is not possible but when using a serialized PK in almost
every case the table should also have a candidate key with a UNIQUE index
defined.  If you cannot explain why yours does not, and why it cannot, I
would offer that you need to gain further understanding of your data model.
It is generally wise to create a UNIQUE index on a candidate key and risk
being wrong.  At least you will be given an actual error and, in the worst
case, can always drop the UNIQUE index if indeed the duplicate record
should be valid; though in that situation you now have more data to input
into you model analysis and should be able to correctly modify the table to
create a new candidate key.
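For example (the columns here are hypothetical; use whatever combination
uniquely identifies a row in your model):

```sql
CREATE UNIQUE INDEX maintable_candidate_key
    ON maintable (lab_nbr, sample_date, analyte);
```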

Slight tangent but I have an external accounting source where I know that,
with respect to the available data, duplicates can occur (a PK field is not
available).  Since I have no candidate key I am forced to use an artificial
(serial) key and take extra precautions to ensure I do not inadvertently
introduce unintentional duplicate data during import.  In my case I handle
data at the day level.  My source gives me every transaction for a given
date and I then modify my live table to add only the correct number of
records so that, after the merge process, I have an exact duplicate of the
data in the source file.  Thus, since I trust the source file (and cannot
enter data via any other method), I know immediately after processing that
any duplicates on a given date are expected duplicates as opposed to, say,
me accidentally importing the same file twice and thus having twice as many
records.  I also understand that if, say for reconciliation purposes, I need
to choose one of a duplicate record it does not matter, initially, which one
I choose but afterwards, if I only add records, I can ensure that I always
pick the same record in the future.  However, if I am forced to DELETE a
record, from a practical perspective I DELETE BOTH/ALL of the records and
then ADD back the correct number of records for that date.  Any data that
cared about the original records will now need to decide how to handle the
fact that their record may no longer be present (instead of deleting only
some of the existing records at random without knowing which ones are the
correct ones to delete).

This is one example I've come across where the data I am working with has
absolutely NO inherent PK that I can see but where I can trust that, for a
given dataset, I only have valid data.  I did have to assign a SERIAL PK to
my copy of the data but I also recognized where problems could occur and
mitigated them via specific processing routines.  One alternative solution
would be to simply DELETE everything for a given date and then import every
record from the source file into the main table.  I rejected that solution
because I could not afford to continually delete the existing records as
other tables claimed FK relationships to them and continually breaking (ON
DELETE SET NULL) them was unacceptable.  I still have to do so when I need
to delete a record (rare given this is accounting data) but simply adding a
new record does not affect existing records.

Whether this situation mirrors yours I do not know but I hope this brief
description is at least informative and educational for you and others.
Feedback/Thoughts are greatly welcomed.

  I presume what you call a staging table is what I refer to as a copy of
 the main table, but with no key attribute.

 Writing the SELECT statement to delete from the staging table those rows
 that already exist in the main table is where I'm open to suggestions.

The big question to ask is how you would be able to identify a record in the
CSV file as already being on the main table (either directly or, as my above
example, indirectly)?

My use of staging table reflects the fact that the structure of the table
should roughly match the CSV file and NOT the main table.  The SQL you
issue to move records from the staging table to the main table will then
account for any differences between the two.

The general idea is to load up the staging table, optionally update
matching records on the main table, insert non-matching records, then
truncate/clear the staging table.

The general structure for the insert would be:

INSERT INTO maintable (cols)
SELECT cols FROM staging WHERE staging.idcols NOT IN (SELECT
maintable.idcols FROM maintable);

There may be more efficient ways to write the query but the idea is the
same.
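Put together, the whole cycle might look roughly like this sketch, where
staging, maintable, and the id/val columns are placeholders for the real
schema and the file path is illustrative:

```sql
-- Load the CSV into the staging table (structure matches the file).
COPY staging FROM '/path/to/import.csv' WITH (FORMAT csv, HEADER true);

-- Optionally refresh rows that already exist in the main table.
UPDATE maintable m
SET val = s.val
FROM staging s
WHERE m.id = s.id;

-- Insert the rows not yet present.
INSERT INTO maintable (id, val)
SELECT s.id, s.val
FROM staging s
WHERE s.id NOT IN (SELECT id FROM maintable);

-- Clear the staging table for the next load.
TRUNCATE staging;
```

The UPDATE step is optional, and on a busy system you would want the whole
sequence inside a single transaction.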

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] COPY from .csv File and Remove Duplicates

2011-08-11 Thread Craig Ringer

On 12/08/2011 10:32 AM, David Johnston wrote:


The general structure for the insert would be:

INSERT INTO maintable (cols)
SELECT cols FROM staging WHERE staging.idcols NOT IN (SELECT
maintable.idcols FROM maintable);

There may be more efficient ways to write the query but the idea is the
same.


Yeah... I'd favour an EXISTS test or a join.

INSERT INTO maintable (cols)
SELECT cols FROM staging WHERE NOT EXISTS (SELECT
1 FROM maintable WHERE maintable.idcol = staging.idcol);

... as the NOT IN(...) test can have less than lovely behavior for large 
key sets.
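For reference, the join form would be an anti-join, sketched here with the
same placeholder names as the earlier examples:

```sql
-- Anti-join variant: keep only staging rows with no match in maintable.
-- staging, maintable, and idcol are placeholders for the real schema.
INSERT INTO maintable (cols)
SELECT s.cols
FROM staging s
LEFT JOIN maintable m ON m.idcol = s.idcol
WHERE m.idcol IS NULL;
```

Unlike NOT IN, both this form and NOT EXISTS behave predictably when idcol
contains NULLs, and recent PostgreSQL versions plan them as an efficient
anti-join.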


--
Craig Ringer



Re: [GENERAL] Postgres on SSD

2011-08-11 Thread Greg Smith

On 08/09/2011 07:17 PM, Ondrej Ivanič wrote:

I'm about to evaluate this SSD card: FusionIO ioDrive Duo [1]. The
main reason for this experiment is to see if SSD can significantly
improve query performance
Database size is around ~1.4TB. Main tables occupied around 1/3
(450GB, ~220mil rows) and aggregated data occupied 2/3 (900GB). All
indexes are on separate table space (~550GB)
   


ioDrive hardware is fast at executing all sorts of I/O, but compared to 
normal drives it particularly excels at highly random workloads.  For 
example, I recently tested them in two different 
systems, both head to head against regular 20 disk RAID10 arrays (Dell 
MD units).  At sequential reads and writes, all drives were basically 
the same; 1.2GB/s reads, 600MB/s writes.  The regular drive array was 
actually a bit faster on sequential writes, which is common with SSD 
showdowns.


Your tables are pretty big; not much of them will fit in memory.  If 
your aggregate queries end up executing a lot of sequential scans of the 
data set, either to compute the aggregates or to use them, you will 
probably discover this is barely faster on FusionIO.  And you certainly 
could speed that up for far less money spent on other hardware.


Is there a component to your workload that does a lot of random read or 
write requests?  If so, is that chunk of the data set bigger than RAM, 
but small enough to fit on the FusionIO drive?  Only when all those 
conditions are true does that hardware really make sense.  For example, 
running a 300GB pgbench test on a system with 128GB of RAM, the FusionIO 
drive was almost 10X as fast as the 20 disk array.  And its raw seek 
rate was 20X as fast at all concurrency levels.


But at the same time, tests on database sizes that fit into RAM were 
slower on FusionIO than the regular disk array.  When there's no random 
I/O to worry about, the slower sequential read/write speed of the SSD 
meant it lost on the small database tests.


You really need to measure your current system carefully to figure out 
just what it's doing as far as I/O goes to make this sort of decision.  
Given what ioDrives cost, if you're not sure how to do that yourself 
it's surely worth hiring a storage oriented database consultant for a 
few days to help figure it out.



XEN host with 16 CPU (Intel(R) Xeon(R) CPU L5520  @ 2.27GHz). CentOS 5.6
80GB RAM
Storage: some Hitachi Fibre channel SAN with two LUNs:
1st LUN has *everything* under $PG_DATA (used 850 GB)
2nd LUN has *all* indexes (index table space) (used 550GB)
   


Make sure you do basic benchmarks of all this hardware before you start 
adding even more pieces into the mix.  Both Xen hosts and SANs can cause 
all sorts of performance bottlenecks.  It's possible you won't even be 
able to fully utilize the hardware you've already got if it's running 
with a virtual machine layer in there.  I have no idea how a FusionIO 
drive will work in that environment, but I wouldn't expect it to be 
great.  They need a fast CPU to run well, and some processing is done in 
the driver rather than on the card.



  checkpoint_segments  | 48
  maintenance_work_mem | 256MB
  shared_buffers   | 9GB
  wal_buffers  | 50MB
  work_mem | 256MB
   


checkpoint_segments should be higher, at least 64 and probably 128 to 
256.  shared_buffers should be lower (at most 8GB, maybe even less).  
maintenance_work_mem should be 1 to 2GB on a server with 80GB of RAM.  
There's no proven benefit to increasing wal_buffers over 16MB.
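Pulled together, those suggestions amount to something like this
postgresql.conf fragment; the values are starting points for an 80GB-RAM
machine, not definitive settings:

```ini
# Suggested starting points for this 80GB-RAM server (benchmark and re-tune):
checkpoint_segments  = 128      # was 48; 64 to 256 is a more typical range
shared_buffers       = 8GB      # was 9GB; at most 8GB, maybe even less
maintenance_work_mem = 1GB      # was 256MB; 1 to 2GB on this much RAM
wal_buffers          = 16MB     # was 50MB; no proven benefit above 16MB
work_mem             = 64MB     # was 256MB; total usage scales per operation
```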


This setting for work_mem can easily allow your server to allocate over 
250GB of RAM for query working memory if all 100 connections do 
something: work_mem is allocated per sort or hash operation rather than 
per connection, so 100 connections each running a query with several 
such operations at 256MB apiece adds up quickly.  Either reduce that a 
lot, or decrease max_connections, if you want this server to run safely.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

