Re: [GENERAL] Primary key Index Error

2011-10-25 Thread Manoj K P
*Server log*

Oct  1 00:06:59 server_host_name postgres[1453]: [5-1] 2011-10-01
00:06:59.831 EDT 1453 4e869041.5ad postgres [local] postgres LOG:  duration:
418583.238 ms  statement: select pg_start_backup('fortnightly');

Oct  2 03:03:18 server_host_name postgres[1453]: [6-1] 2011-10-02
03:03:18.243 EDT 1453 4e869041.5ad postgres [local] postgres LOG:  duration:
8034.385 ms  statement: select pg_stop_backup();


Between pg_start_backup and pg_stop_backup, server_host_name is receiving all
types of DML and DDL and generating new WAL files.

The base backup is taken between the start and stop calls.

*Client Log Details*

Oct 25 05:16:18 client_server_name postgres[28858]: [2-1] 2011-10-25
05:16:18.202 BST 28858 LOG:  could not open file
pg_xlog/271047B1008C (log file 18353, segment 140): No such
file or directory
Oct 25 05:16:18 client_server_name postgres[28858]: [3-1] 2011-10-25
05:16:18.203 BST 28858 LOG:  invalid checkpoint record
Oct 25 05:16:18 client_server_name postgres[28858]: [4-1] 2011-10-25
05:16:18.203 BST 28858 FATAL:  could not locate required checkpoint record
Oct 25 05:16:18 client_server_name postgres[28858]: [4-2] 2011-10-25
05:16:18.203 BST 28858 HINT:  If you are not restoring from a backup, try
removing the file /mnt/new_cluster/backup_label.
Oct 25 05:16:18 client_server_name postgres[28857]: [1-1] 2011-10-25
05:16:18.205 BST 28857 LOG:  startup process (PID 28858) exited with exit
code 1
Oct 25 05:16:18 client_server_name postgres[28857]: [2-1] 2011-10-25
05:16:18.205 BST 28857 LOG:  aborting startup due to startup process failure


Oct 25 05:20:53 client_server_name postgres[29030]: [2-1] 2011-10-25
05:20:53.630 BST 29030 LOG:  could not open file
pg_xlog/271047B10068 (log file 18353, segment 104): No such
file or directory
Oct 25 05:20:53 client_server_name postgres[29030]: [3-1] 2011-10-25
05:20:53.630 BST 29030 FATAL:  could not find redo location referenced by
checkpoint record
Oct 25 05:20:53 client_server_name postgres[29030]: [3-2] 2011-10-25
05:20:53.630 BST 29030 HINT:  If you are not restoring from a backup, try
removing the file /mnt/new_cluster/backup_label.
Oct 25 05:20:53 client_server_name postgres[29029]: [1-1] 2011-10-25
05:20:53.633 BST 29029 LOG:  startup process (PID 29030) exited with exit
code 1
Oct 25 05:20:53 client_server_name postgres[29029]: [2-1] 2011-10-25
05:20:53.633 BST 29029 LOG:  aborting startup due to startup process failure



Manually copied the following files to the pg_xlog folder:

   271047B1008C
   271047B10068

Afterwards I can start Postgres and access the database, but the same error recurs.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Primary-key-Index-Error-tp4931714p4935172.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL]

2011-10-25 Thread 飞机
Hi, friends.
 Please tell me how to build a Debian package in detail; I would like to build a
Debian package for PostgreSQL.
 Thanks!
 fei

Re: [GENERAL]

2011-10-25 Thread John R Pierce

On 10/24/11 11:17 PM, 飞机 wrote:
Please tell me how to build a Debian package in detail; I would like to
build a Debian package for PostgreSQL.


http://lmgtfy.com?q=how+to+build+debian+package



--
john r pierceN 37, W 122
santa cruz ca mid-left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL]

2011-10-25 Thread Craig Ringer
On 25/10/11 14:17, 飞机 wrote:
 Hi, friends.
 Please tell me how to build a Debian package in detail; I would like
 to build a Debian package for PostgreSQL.

Why?

To get a good answer, you will need to explain more about what you are
trying to achieve and why you want it.

There are already Debian packages for PostgreSQL. What is wrong with them?

Do you want to build a package of a newer PostgreSQL for an older
version of Debian/Ubuntu? If so, check backports.org before trying to
build your own. If there isn't a backport to the version you want, I
recommend that you get the source to the latest package for the latest
debian release and backport that yourself rather than trying to make a
new package from scratch.

Do you actually want to *compile* an existing Debian package of
PostgreSQL from source into a binary .deb? If so, use dpkg-buildpackage.


--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can someone help explain what's going on from the attached logs?

2011-10-25 Thread Simon Riggs
On Tue, Oct 18, 2011 at 6:39 PM, Samuel Hwang sam...@replicon.com wrote:

 The log is from PostgreSQL 9.0.4.
 Basically we set up a streaming-replication hot-standby slave while the master
 is under heavy load.
 The slave started but was not accepting read-only queries;
 every request will trigger the FATAL "the database system is starting up"
 error.

 The slave will eventually be able to accept read-only queries after the load
 on the master is removed.
 We don't see the same problem if we set up the hot-standby slave while the
 master is not under heavy load.

 the logs follow:

 2011-10-18 10:34:06 MDT [17570]: [13-1] LOG:  consistent state delayed
 because recovery snapshot incomplete

Caveat #2 applies here
http://developer.postgresql.org/pgdocs/postgres/hot-standby.html#HOT-STANDBY-CAVEATS

The consistent state is delayed until your long-running transactions
end, which is workload-dependent but transient.

It's possible we will find another way of doing this in a future release.

Until then, I suggest taking the base backup that creates the standby when you
are not running long transactions or transactions with many subtransactions.
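One way to check for such workloads before taking the base backup is to inspect pg_stat_activity on the master. A sketch for 9.0 (the column names procpid and current_query were renamed in later releases):

```sql
-- List open transactions ordered by age; long-lived ones delay the
-- standby's consistent state (9.0 column names: procpid, current_query).
SELECT procpid, now() - xact_start AS xact_age, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC;
```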

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Primary key Index Error

2011-10-25 Thread Raghavendra
On Tue, Oct 25, 2011 at 11:40 AM, Manoj K P ma...@comodo.com wrote:

 [server and client logs quoted in full; snipped, identical to the original
 message above]


As per the logs, do you see the missing XLOG files in the archive destination?
In these situations the missing files are usually found in the WAL archive
location; you need to copy them to the pg_xlog directory and start the instance.

As Merlin said, you need to dig deeper into why it is crashing by increasing
the log level to DEBUG. A DEBUG level may occupy a good amount of space at the
log location, so make sure you have enough room for the logs to capture what
exactly happens at backup time. I am not sure whether it is safe to attach a
***backtrace*** to the instance for more information.
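Raising the server's log verbosity is a postgresql.conf change. An illustrative sketch (the values are examples, not recommendations):

```
# postgresql.conf, temporary diagnostic settings (illustrative)
log_min_messages = debug2      # much noisier than the default 'warning'
log_error_verbosity = verbose  # include source file/line in messages
```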

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


[GENERAL] unnest and string_to_array on two columns

2011-10-25 Thread Michael Graham
Hi all,

I'm trying to migrate an old (and sucky) schema to a new one and I'm
having some difficulties coming up with a sane select.

I have basically id, a, and b where a and b contain a list of flags like

id | a    | b      |
---+------+--------+
1  | abc  | abcdef |

and want to convert this to multiple ids with single flags, like:

id | a    | b |
---+------+---+
1  | a    | a |
1  | b    | b |
1  | c    | c |
1  | NULL | d |
1  | NULL | e |
1  | NULL | f |

My first attempt was

SELECT id, unnest(string_to_array(a,NULL)),
unnest(string_to_array(b,NULL)) FROM foo;

But this causes the shorter string to be repeated until it is the same
length as the longer string.  In the end I have managed to get the
behaviour that I want, but the select is horrible:

SELECT COALESCE(aa.id,bb.id) AS id,
aa.unnest AS a,
bb.unnest AS b FROM
(
SELECT *, row_number() OVER() FROM
(
 SELECT id,unnest(string_to_array(a,NULL)) FROM foo
) AS a
) AS aa  
FULL JOIN 
(
SELECT *, row_number() OVER() FROM
(
SELECT id,unnest(string_to_array(b,NULL)) FROM foo
) AS b
) AS bb
ON aa.row_number=bb.row_number AND aa.id=bb.id;

So I was wondering if anyone had any better solutions.
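For reference, the pairing behaviour being asked for matches Python's itertools.zip_longest. A sketch of the target semantics only, not a PostgreSQL solution:

```python
from itertools import zip_longest

def explode(row_id, a, b):
    """Split two flag strings into per-character rows, padding the
    shorter side with None (the behaviour the FULL JOIN query produces)."""
    return [(row_id, x, y) for x, y in zip_longest(a or "", b or "")]

rows = explode(1, "abc", "abcdef")
# rows -> [(1,'a','a'), (1,'b','b'), (1,'c','c'),
#          (1, None,'d'), (1, None,'e'), (1, None,'f')]
```

Each output row carries the id, one flag from a (or None), and one flag from b.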

Thanks,
-- 
Michael Graham mgra...@bloxx.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with copy (loading TSV file into table as text)

2011-10-25 Thread Albe Laurenz
Allan Kamau wrote:
 #COPY a.t(raw_data) FROM '/data/tmp/t.txt' WITH FORMAT text;

 yields ERROR:  syntax error at or near "FORMAT"

You'll have to use the syntax as documented:

COPY ... FROM ... WITH (FORMAT 'text');
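Applied to the original command, that gives (table and file path as in the poster's example):

```sql
COPY a.t (raw_data) FROM '/data/tmp/t.txt' WITH (FORMAT 'text');
```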

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostGIS in a commercial project

2011-10-25 Thread Mark Cave-Ayland

On -10/01/37 20:59, Thomas Kellerer wrote:


Now PostGIS is licensed under the GPL and I wonder if we can use it
in a commercial (customer specific) project then. The source code
will not be made open source, but of course the customer will get
the source code.

Is it still OK to use the GPL licensed PostGIS in this case? Is
that then considered a derivative work because the application will
not work without PostGIS?


If it's pure GPL, then PostgreSQL is automagically relicensed to GPL,
because PostgreSQL allows relicensing and the GPL forces it to be GPL.
Your source code must be GPL too. Remember, it's a viral licence
and has the same problem that King Midas had.


Thanks for the answer.

I think we'd better be safe than sorry, and we will not use PostGIS then.

Regards
Thomas


Hi Thomas,

As Robert has suggested, you have misunderstood the GPL license - if you 
make changes to the *PostGIS* source code AND you distribute the 
modified code to your customer (rather than offering a managed service), 
you would need to make the changes available to your *customer* upon 
request but there is no obligation to make them available to anyone 
else. But then if your application connects remotely to the PostgreSQL 
server then your application isn't linking directly to the PostGIS 
libraries, so then this becomes a non-issue anyway.


I guess strictly speaking you could call using stored procedures with 
PostGIS functions a GPL violation, but I don't believe anyone 
associated with the project would have a problem with this. The aim of 
the GPL license for PostGIS was to ensure that code was contributed back 
to the project core, not because we want to claim ownership on 
everyone's GIS application code.


If you have any further questions related to licensing, we would be glad 
to discuss this further on the postgis-users mailing list.



Kind regards,

Mark.
(Member of the PostGIS PSC)

--
Mark Cave-Ayland - Senior Technical Architect
PostgreSQL - PostGIS
Sirius Corporation plc - control through freedom
http://www.siriusit.co.uk
t: +44 870 608 0063

Sirius Labs: http://www.siriusit.co.uk/labs

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostGIS in a commercial project

2011-10-25 Thread John R Pierce

On 10/25/11 3:51 AM, Mark Cave-Ayland wrote:
As Robert has suggested, you have misunderstood the GPL license - if 
you make changes to the *PostGIS* source code AND you distribute the 
modified code to your customer (rather than offering a managed 
service), you would need to make the changes available to your 
*customer* upon request but there is no obligation to make them 
available to anyone else. But then if your application connects 
remotely to the PostgreSQL server then your application isn't linking 
directly to the PostGIS libraries, so then this becomes a non-issue 
anyway.


I guess strictly speaking you could call using stored procedures with 
PostGIS functions a GPL violation, but I don't believe anyone 
associated with the project would have a problem with this. The aim of 
the GPL license for PostGIS was to ensure that code was contributed 
back to the project core, not because we want to claim ownership on 
everyone's GIS application code.


If you have any further questions related to licensing, we would be 
glad to discuss this further on the postgis-users mailing list.


as I read the GPL, if he's distributing his software bundled on a 
turnkey computer with linux(GPL) and PostGIS(GPL) then the GPL license 
wants to encompass the whole package, and he has to make FULL source 
code available to his customers, who can freely redistribute said source 
any way they want.   the reality is, this is rather unenforceable.


if he's distributing his application software separately, and the user 
has to install linux and postgis etc and integrate his application, then 
this doesn't apply at all.





--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostGIS in a commercial project

2011-10-25 Thread Thomas Kellerer

Mark Cave-Ayland, 25.10.2011 12:51:

As Robert has suggested, you have misunderstood the GPL license - if
you make changes to the *PostGIS* source code AND you distribute the
modified code to your customer (rather than offering a managed
service), you would need to make the changes available to your
*customer* upon request but there is no obligation to make them
available to anyone else. But then if your application connects
remotely to the PostgreSQL server then your application isn't linking
directly to the PostGIS libraries, so then this becomes a non-issue
anyway.

I guess strictly speaking you could call using stored procedures with
PostGIS functions a GPL violation, but I don't believe anyone
associated with the project would have a problem with this. The aim
of the GPL license for PostGIS was to ensure that code was
contributed back to the project core, not because we want to claim
ownership on everyone's GIS application code.

If you have any further questions related to licensing, we would be
glad to discuss this further on the postgis-users mailing list.


Thank you very much for the detailed explanation.

I always have a hard time understanding the GPL, especially the dividing line
between using, linking, and creating a derived work.

Kind regards
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] List Permissions

2011-10-25 Thread Maton, Brett
Hi,

  How can I list a user's permissions, table by table?

  i.e.  User Joe
 has read/write on table1
 has read on table2
 no access on table3

Or something

Thanks for any help!


Re: [GENERAL] List Permissions

2011-10-25 Thread Raghavendra
You can get it from the psql terminal:

postgres=# \z table-name

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Tue, Oct 25, 2011 at 4:50 PM, Maton, Brett mat...@ltresources.co.ukwrote:

 Hi,

   How can I list a users permissions table by table?

   i.e.  User Joe
  has read/write on table1
  has read on table2
  no access on table 3

 Or something

 Thanks for any help!



Re: [GENERAL] hi, friends. are there any performance tuning materials for postgreSQL recommended?

2011-10-25 Thread Scott Marlowe
On Mon, Oct 24, 2011 at 9:38 PM, Craig Ringer ring...@ringerc.id.au wrote:
 On 25/10/11 11:00, sunpeng wrote:

 Hi, friends.
 Are there any performance tuning resources for PostgreSQL recommended, such
 as ppt, books or articles?
 Thanks!
 peng

 http://wiki.postgresql.org/wiki/Performance_Optimization

 http://www.postgresql.org/docs/books/

 ... of which the most recent I know of is:
 http://www.postgresql.org/about/news.1249

I'll second the last one.  Great resource.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] List Permissions

2011-10-25 Thread Raghavendra
Forgot to post the reference manual link. Here you go.

http://www.postgresql.org/docs/9.0/static/sql-grant.html

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/



On Tue, Oct 25, 2011 at 5:21 PM, Raghavendra 
raghavendra@enterprisedb.com wrote:

 You can get it from psql terminal.

 postgres=# \z  table-name

 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/



 On Tue, Oct 25, 2011 at 4:50 PM, Maton, Brett mat...@ltresources.co.ukwrote:

 Hi,

   How can I list a users permissions table by table?

   i.e.  User Joe
  has read/write on table1
  has read on table2
  no access on table 3

 Or something

 Thanks for any help!





Re: [GENERAL] List Permissions

2011-10-25 Thread Venkat Balaji
My answers are inline below (in red in the original HTML) -

  How can I list a users permissions table by table?

   i.e.  User Joe
  has read/write on table1

 has read on table2
  no access on table 3


For a particular user you can use the function below. You can also write a SQL
query or script that takes table names from pg_tables one by one.

has_table_privilege(user, table, privilege)

Example :

I am checking if user postgres has select privilege on table1.

postgres=# select has_table_privilege('postgres','public.table1','select');

 has_table_privilege
---------------------
 t
(1 row)


For the current user (the user you logged in as) you can use the following
function:

has_table_privilege(table, privilege)

I am checking if the current_user has select privilege on table1.

Example:

postgres=# select current_user;

 current_user
--------------
 postgres
(1 row)

postgres=# select has_table_privilege('public.table1','select');

 has_table_privilege
---------------------
 t
The link below lists all the other permission-checking functions:

http://www.postgresql.org/docs/9.0/static/functions-info.html
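The per-table loop can also be collapsed into a single query over pg_tables. A sketch, where 'joe' is a placeholder role name:

```sql
-- One row per table, with a boolean per privilege of interest.
SELECT schemaname, tablename,
       has_table_privilege('joe',
           quote_ident(schemaname) || '.' || quote_ident(tablename),
           'SELECT') AS can_select,
       has_table_privilege('joe',
           quote_ident(schemaname) || '.' || quote_ident(tablename),
           'INSERT') AS can_insert
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
```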

Hope this helps !

Thanks
VB


Re: [GENERAL] List Permissions

2011-10-25 Thread Maton, Brett
Thanks for the replies.
  Actually this was a question posed by one of my colleagues; what he really
wants to know is whether there is an equivalent of Oracle's

select * from all_tab_privs_recd where grantee = 'your user'

Thanks again,
Brett

On 25 October 2011 13:21, Venkat Balaji venkat.bal...@verse.in wrote:

 [Venkat's reply quoted in full; snipped]



Re: [GENERAL] List Permissions

2011-10-25 Thread Raghavendra
On Tue, Oct 25, 2011 at 6:04 PM, Maton, Brett mat...@ltresources.co.ukwrote:

 Thanks for the replies.
   Actually this was question posed by one of my colleagues, what he really
 wants to know is if there is the equivalent of MySQL's

 select * from all_tab_privs_recd where grantee = 'your user'

 Thanks again,
 Brett



You have that too:

select * from information_schema.role_table_grants where grantee='your user';
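Selecting specific columns brings the output closer to the per-table listing asked for. A sketch, with 'joe' as a placeholder role:

```sql
SELECT grantee, table_schema, table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'joe'
ORDER BY table_schema, table_name;
```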

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/





 On 25 October 2011 13:21, Venkat Balaji venkat.bal...@verse.in wrote:

 [Venkat's reply quoted in full; snipped]





Re: [GENERAL] List Permissions

2011-10-25 Thread Maton, Brett
Bingo!

  Thanks very much

On 25 October 2011 13:47, Raghavendra raghavendra@enterprisedb.comwrote:

 On Tue, Oct 25, 2011 at 6:04 PM, Maton, Brett mat...@ltresources.co.ukwrote:

 Thanks for the replies.
   Actually this was question posed by one of my colleagues, what he really
 wants to know is if there is the equivalent of MySQL's

 select * from all_tab_privs_recd where grantee = 'your user'

 Thanks again,
 Brett



 You have that too...

  select * from information_schema.role_table_grants where grantee='your
 user';

 ---
 Regards,
 Raghavendra
 EnterpriseDB Corporation
 Blog: http://raghavt.blogspot.com/





 On 25 October 2011 13:21, Venkat Balaji venkat.bal...@verse.in wrote:

 [Venkat's reply quoted in full; snipped]






Re: [GENERAL] Primary key Index Error

2011-10-25 Thread Merlin Moncure
On Tue, Oct 25, 2011 at 2:41 AM, Raghavendra
raghavendra@enterprisedb.com wrote:
 On Tue, Oct 25, 2011 at 11:40 AM, Manoj K P ma...@comodo.com wrote:

 [server and client logs quoted in full; snipped, identical to the original
 message above]

 As per the logs, do you see the missing XLOG files in the archive destination?
 In these situations the missing files are usually found in the WAL archive
 location; you need to copy them to the pg_xlog directory and start the
 instance.

 As Merlin said, you need to dig deeper into why it is crashing by increasing
 the log level to DEBUG. A DEBUG level may occupy a good amount of space at
 the log location, so make sure you have enough room for the logs to capture
 what exactly happens at backup time. I am not sure whether it is safe to
 attach a ***backtrace*** to the instance for more information.

yeah. also, what's the setting of archive_command (or is it even set)?
 taking a 'hot' filesystem backup without having an archive_command
and not doing any other intervention to guarantee the necessary WAL
segments are present will not give you a complete backup.  my money is
on you having an invalid backup procedure.  the only way to take a
filesystem snapshot without dealing with WAL files is to bring the
database down.
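For reference, continuous archiving during a hot base backup requires settings along these lines in postgresql.conf (the archive path is a placeholder):

```
wal_level = archive                            # 9.0+; 'hot_standby' also works
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'  # %p = WAL path, %f = file name
```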

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unnest and string_to_array on two columns

2011-10-25 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Michael Graham
Sent: Tuesday, October 25, 2011 4:36 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] unnest and string_to_array on two columns

[Michael's original message quoted in full; snipped]

----- End of Original Message -----

Same solution, but using CTEs:

WITH val_src AS (
    SELECT * FROM (VALUES (1, 'a,b,c', 'a,b,c,d,e')) vals (id, a, b)
), a_expanded AS (
    SELECT *, ROW_NUMBER() OVER () AS row_index FROM (
        SELECT id, unnest(string_to_array(a, ',')) AS a_item FROM val_src
    ) a_src
), b_expanded AS (
    SELECT *, ROW_NUMBER() OVER () AS row_index FROM (
        SELECT id, unnest(string_to_array(b, ',')) AS b_item FROM val_src
    ) b_src
)
SELECT *
FROM b_expanded NATURAL FULL OUTER JOIN a_expanded;

The only other thought would be to limit your first query to [pseudocode]
WHERE ROW_NUMBER() OVER () <= MIN(b.length, a.length) and then UNION the
remainder of A and B where the row number is > MIN(b.length, a.length).  This
seems worse, though.

David J.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] strange java query behaviour

2011-10-25 Thread Marti Raudsepp
On Mon, Oct 24, 2011 at 23:23, Szymon Guz mabew...@gmail.com wrote:
 String query1 = "SELECT * FROM information_schema.schemata WHERE schema_name
 = ?";

 When I query the database using psql, both queries return sensible data
 (even when I prepare statements in postgres).
 I'd like to use information_schema rather than using pg_catalog.

The documentation says: "The view schemata contains all schemas in the
current database that are owned by a currently enabled role."

In other words: this view only displays schemas that are *owned* by
your user, or roles that your current user inherits from (superuser
sees everything of course). Sadly it doesn't list visible/accessible
schemas.
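If you only need the schemas your role can actually use, one workaround (a sketch, not part of information_schema) is to ask pg_catalog directly:

```sql
-- Visible/usable schemas for the current role, via pg_catalog
SELECT n.nspname
FROM pg_catalog.pg_namespace n
WHERE has_schema_privilege(current_user, n.nspname, 'USAGE');
```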

I think this is pretty surprising; not sure if it's just bad legacy or
if there is some good reason for this behavior. I couldn't find any
justification in the source code.

I think we should add a TODO item for fixing this?

Regards,
Marti



Re: [GENERAL] strange java query behaviour

2011-10-25 Thread Szymon Guz
On 25 October 2011 17:04, Marti Raudsepp ma...@juffo.org wrote:

 On Mon, Oct 24, 2011 at 23:23, Szymon Guz mabew...@gmail.com wrote:
  String query1 = "SELECT * FROM information_schema.schemata WHERE
 schema_name = ?";

  When I query the database using psql, both queries return sensible data
  (even when I prepare statements in postgres).
  I'd like to use information_schema rather than using pg_catalog.

 The documentation says: "The view schemata contains all schemas in the
 current database that are owned by a currently enabled role."

 In other words: this view only displays schemas that are *owned* by
 your user, or roles that your current user inherits from (superuser
 sees everything of course). Sadly it doesn't list visible/accessible
 schemas.

 I think this is pretty surprising; not sure if it's just bad legacy or
 if there is some good reason for this behavior. I couldn't find any
 justification in the source code.

 I think we should add a TODO item for fixing this?

 Regards,
 Marti


Hi,
thanks for the answer. It was really my fault: I've been using user "test" in
the java test and the "postgres" user in psql. I've changed the user to
"postgres" in java and the results are OK now.

regards
Szymon


[GENERAL] GPU and pgcrypto

2011-10-25 Thread Wim Bertels
Gc,

For passwords the use of 
http://www.postgresql.org/docs/8.4/interactive/pgcrypto.html
is advised.

But what about GPU cracking?
Is crypt-bf/8 strong enough?

cf bcrypt in php:
http://stackoverflow.com/questions/6791126/how-is-bcrypt-more-future-proof-than-increasing-the-number-of-sha-iterations
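For reference, pgcrypto lets you raise the Blowfish work factor beyond the /8 in crypt-bf/8: gen_salt('bf', n) uses 2^n rounds (default 6), so a higher n slows GPU brute force proportionally. A quick sketch:

```sql
-- bcrypt ("bf") hash with cost factor 12 (2^12 rounds) instead of the
-- pgcrypto default of 6; verification re-crypts with the stored hash
-- as the salt.
SELECT crypt('my secret password', gen_salt('bf', 12));
```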

Kind regards,
Wim




Re: [GENERAL] strange java query behaviour

2011-10-25 Thread Tom Lane
Marti Raudsepp ma...@juffo.org writes:
 The documentation says: The view schemata contains all schemas in the
 current database that are owned by a currently enabled role.

 In other words: this view only displays schemas that are *owned* by
 your user, or roles that your current user inherits from (superuser
 sees everything of course). Sadly it doesn't list visible/accessible
 schemas.

 I think this is pretty surprising; not sure if it's just bad legacy or
 if there is some good reason for this behavior. I couldn't find any
 justification in the source code.

The justification is that the SQL standard requires the view to act that
way.

 20.46  SCHEMATA view

 Function

 Identify the schemata in a catalog that are owned by a given user.

 Definition

 CREATE VIEW SCHEMATA AS
 SELECT CATALOG_NAME, SCHEMA_NAME, SCHEMA_OWNER,
        DEFAULT_CHARACTER_SET_CATALOG, DEFAULT_CHARACTER_SET_SCHEMA,
        DEFAULT_CHARACTER_SET_NAME, SQL_PATH
 FROM DEFINITION_SCHEMA.SCHEMATA
 WHERE ( SCHEMA_OWNER = CURRENT_USER
         OR
         SCHEMA_OWNER IN
         ( SELECT ROLE_NAME
           FROM ENABLED_ROLES ) )
   AND
   CATALOG_NAME
     = ( SELECT CATALOG_NAME
         FROM INFORMATION_SCHEMA_CATALOG_NAME );

 GRANT SELECT ON TABLE SCHEMATA
 TO PUBLIC WITH GRANT OPTION;

 I think we should add a TODO item for fixing this?

Waste of breath.  You could try lobbying the SQL committee to change the
standard, perhaps.

regards, tom lane



Re: [GENERAL] Problem installing PostgreSQL 9.0 via Macports on OS X Server 10.6

2011-10-25 Thread Basil Bourque
 By default the postgres user's group is 'daemon' on my Mac.  This is probably 
 true for you as well.  Change occurrences of 'postgres:postgres' in your 
 chown commands to 'postgres:daemon'.

For what it's worth, here's the user groups and permissions I see on my Mac. I 
used the installer from EnterpriseDB.com for my Snow Leopard MacBook.

Running the 'id' command-line tool:
id postgres

Renders this:
uid=502(postgres) gid=1(daemon) 
groups=1(daemon),403(com.apple.sharepoint.group.2),61(localaccounts),12(everyone),402(com.apple.sharepoint.group.1)

So, daemon is the primary group, along with 4 other groups.

/Library/PostgreSQL
+--------------+----------------+
|    system    |  Read & Write  |
+--------------+----------------+
|    admin     |   Read only    |
+--------------+----------------+
|   everyone   |   Read only    |
+--------------+----------------+

/Library/PostgreSQL/9.0
/Library/PostgreSQL/9.1
/Library/PostgreSQL/9.1/bin
/Library/PostgreSQL/9.1/lib
/Library/PostgreSQL/9.1/pgAdmin3.app
+--------------+----------------+
|    system    |  Read & Write  |
+--------------+----------------+
|    daemon    |   Read only    |
+--------------+----------------+
|   everyone   |   Read only    |
+--------------+----------------+

/Library/PostgreSQL/9.1/data
+--------------+----------------+
|   postgres   |  Read & Write  |
+--------------+----------------+
|   everyone   |   No Access    |
+--------------+----------------+

--Basil Bourque



[GENERAL] Saving score of 3 players into a table

2011-10-25 Thread Alexander Farber
Hello,

I'm trying to save results of card game with 3 players into a table.

It is bad enough, that I had to introduce
3 columns for user ids: id0, id1, id2 and
3 columns for their scores: money0, money1, money2 -

create table pref_results (
id0 varchar(32) references pref_users,
id1 varchar(32) references pref_users,
id2 varchar(32) references pref_users,
money0 integer not null,
money1 integer not null,
money2 integer not null,
rounds integer not null,
finished timestamp default current_timestamp
);

But now I've also realized that I don't know
how to join that table with pref_users,
so that I get the first_name for each of the 3 players -

$sth = $db->prepare("
select
 id0,
 id1,
 id2,
 money0,
 money1,
 money2,
 rounds,
 to_char(finished,'DD.MM.') as day
from pref_results
where finished > now() - interval '1 week'
 and (id0=? or id1=? or id2=?)
");
$sth->execute(array($id, $id, $id));

while ($row = $sth->fetch(PDO::FETCH_ASSOC)) {
   # XXX print the table with day, first_names and money
}

I'm probably doing something wrong here?

Thank you
Alex



Re: [GENERAL] Saving score of 3 players into a table

2011-10-25 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alexander Farber
Sent: Tuesday, October 25, 2011 3:33 PM
To: pgsql-general
Subject: [GENERAL] Saving score of 3 players into a table

Hello,

I'm trying to save results of card game with 3 players into a table.

It is bad enough, that I had to introduce
3 columns for user ids: id0, id1, id2 and
3 columns for their scores: money0, money1, money2 -

create table pref_results (
id0 varchar(32) references pref_users,
id1 varchar(32) references pref_users,
id2 varchar(32) references pref_users,
money0 integer not null,
money1 integer not null,
money2 integer not null,
rounds integer not null,
finished timestamp default current_timestamp
);

But now I've also realized, that I don't know, how to join that table with
the pref_users, so that I get first_name for each of 3 players -


[...]

I'm probably doing something wrong here?

Thank you
Alex

----- End Original Message -----

Yes, you are creating multiple columns to hold data for each of the players.
Each player should go into a separate row.

You want something like:

CREATE TABLE pref_results (
Game_id varchar,
Player_id varchar,
Player_winnings numeric,
Player_position integer -- not truly required but useful for
generating columns later
);

CREATE TABLE pref_games (
Game_id varchar,
Game_rounds integer,
Game_finished_ts timestamptz
);

It is almost always wrong to have columns where you are simply adding a
sequential integer to the same base name.

However, to answer your question, you would need to JOIN the pref_users
table to the pref_results table THREE TIMES, once for each of (id0, id1,
id2).

SELECT *
FROM pref_results
JOIN pref_users user_0 ON (id0 = user_0.id)
JOIN pref_users user_1 ON (id1 = user_1.id)
JOIN pref_users user_2 ON (id2 = user_2.id)

Note the aliases for the pref_users table, and you would want to alias any
columns you end up pulling into the SELECT list.

Then you hope you never need to add a 4th player.

If you still want to present the data using 3 sets of columns for the
players you would need to perform a limited self-join:

SELECT
    Game_id,
    p1.Player_id AS P1_ID,
    p2.Player_id AS P2_ID,
    p3.Player_id AS P3_ID
FROM (SELECT ... FROM pref_results WHERE Player_position = 1) p1
JOIN (SELECT ... FROM pref_results WHERE Player_position = 2) p2 USING
(Game_id)
JOIN (SELECT ... FROM pref_results WHERE Player_position = 3) p3 USING
(Game_id)

Then add whatever columns and JOIN you need to get all the desired fields
into the output.

In this way you have a database model that is easy to query and insert data
into while still having the ability to view the data in a natural way
(horizontally).  And should you want to track a game with four players, you
can still use the same data model and simply add a VIEW similar to the
three-person view but with a fourth set of columns for the fourth player.
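To answer the original question under this model (a first_name per player), you no longer need three copies of pref_users; a sketch, with column names assumed from the tables above:

```sql
-- One pref_users join covers every player; rows come back one per
-- player per game (column names here are illustrative, not canonical).
SELECT g.game_id,
       to_char(g.game_finished_ts, 'DD.MM.') AS day,
       u.first_name,
       r.player_winnings
FROM pref_games g
JOIN pref_results r USING (game_id)
JOIN pref_users u ON u.id = r.player_id
WHERE g.game_finished_ts > now() - interval '1 week'
ORDER BY g.game_id, r.player_position;
```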

David J.






Re: [GENERAL] Saving score of 3 players into a table

2011-10-25 Thread Michael Glaesemann

On Oct 25, 2011, at 15:32, Alexander Farber wrote:

 Hello,
 
 I'm trying to save results of card game with 3 players into a table.
 
 It is bad enough, that I had to introduce
 3 columns for user ids: id0, id1, id2 and
 3 columns for their scores: money0, money1, money2 -
 
create table pref_results (
id0 varchar(32) references pref_users,
id1 varchar(32) references pref_users,
id2 varchar(32) references pref_users,
money0 integer not null,
money1 integer not null,
money2 integer not null,
rounds integer not null,
finished timestamp default current_timestamp
);
 
 But now I've also realized, that I don't know,
 how to join that table with the pref_users,
 so that I get first_name for each of 3 players -
 
$sth = $db->prepare("
select
 id0,
 id1,
 id2,
 money0,
 money1,
 money2,
 rounds,
 to_char(finished,'DD.MM.') as day
from pref_results
where finished > now() - interval '1 week'
 and (id0=? or id1=? or id2=?)
");
$sth->execute(array($id, $id, $id));

while ($row = $sth->fetch(PDO::FETCH_ASSOC)) {
   # XXX print the table with day, first_names and money
}
 
 I'm probably doing something wrong here?

Likely. Are you only ever going to have three players per table? Seems unlikely.

Without knowing anything else about your application, I suspect you need more 
tables:
a games table
a games_players table with each row associating a single player with a game. A 
three-player game has three rows per game. A four-player game would have four.

Depending on your app, you might also have finished_games and 
game_player_results tables.

Michael Glaesemann
grzm seespotcode net




[GENERAL] writing a foreign data wrapper for hdfs, but getting and undefined symbol error for hdfsConnect

2011-10-25 Thread Rob_pg
I've been working on creating a foreign data wrapper for hdfs using version
9.1.0. This is my first time creating C functions against Postgres, so
hopefully this falls under the 'newbie' category and is easy to solve.

The source code does compile, resulting in a shared library:

file mylibrary.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV),
dynamically linked, not stripped

ldd mylibrary.so
linux-vdso.so.1 =>  (0x7fff40fff000)
libc.so.6 => /lib/libc.so.6 (0x7f3adb8cc000)
/lib64/ld-linux-x86-64.so.2 (0x7f3adbe72000)

But, the library fails to load when I use the LOAD statement:

LOAD 'mylibrary.so';

The error is:

 ERROR:  could not load library "mylibrary.so": mylibrary.so: undefined symbol:
hdfsConnect

I already figured it needs to recognize the hadoop shared library, libhdfs.so.0 
but loading hdfs directly, of course, results in the following error:

ERROR:  incompatible library "/home/robert/lib/libhdfs.so.0": missing magic
block
HINT:  Extension libraries are required to use the PG_MODULE_MAGIC macro.

So how do I manage to load hdfs? All that is required at this point for the 
data wrapper to do is to open and read a very small file.

Here's the snippet in question:

/
#include <hdfs.h>

#include <string.h>
#include <stdio.h>

#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"
#include "foreign/fdwapi.h"
#include "foreign/foreign.h"
#include "commands/explain.h"
#include "commands/defrem.h"
#include "catalog/pg_foreign_table.h"

PG_MODULE_MAGIC;

typedef struct {
  char *connection,
       *filename,
       *limit,
       *offset;
  hdfsFS   fs;
  hdfsFile fp;
} hdfsFdwExecutionState;

extern Datum hdfs_fdw_handler(PG_FUNCTION_ARGS);
.
.
.
// here's where the function hdfsConnect is first called

static void hdfsBeginForeignScan(ForeignScanState *node, int eflags) {
  hdfsFdwExecutionState *festate;
  char  *recordset  = malloc(LINE_LENGTH*sizeof(*recordset)),
        *connection = "default",
        *filename,
        *limit,
        *offset;
  hdfsFS   fs;
  hdfsFile fp;

  if (eflags & EXEC_FLAG_EXPLAIN_ONLY) return;

  hdfsGetOptions(RelationGetRelid(node->ss.ss_currentRelation), &filename,
                 &limit, &offset);

  festate   = (hdfsFdwExecutionState *)
              palloc(sizeof(hdfsFdwExecutionState));

  fs        = hdfsConnect(connection, 0);
  fp        = setFILEoffset(fs, filename, offset);

  festate->filename = filename;
  festate->limit    = limit;
  festate->offset   = offset;
  festate->fs       = fs;
  festate->fp       = fp;

  node->fdw_state = (void *) festate;
}
/






Re: [GENERAL] writing a foreign data wrapper for hdfs, but getting and undefined symbol error for hdfsConnect

2011-10-25 Thread Tom Lane
Rob_pg robert7...@comcast.net writes:
 I've been working on creating a foreign data wrapper for hdfs using version
 9.1.0. This is my first time creating C functions against Postgres, so
 hopefully this falls under the 'newbie' category and is easy to solve.

 The source code does compile, resulting in a shared library:

 file mylibrary.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV),
 dynamically linked, not stripped

 ldd mylibrary.so
 linux-vdso.so.1 =>  (0x7fff40fff000)
 libc.so.6 => /lib/libc.so.6 (0x7f3adb8cc000)
 /lib64/ld-linux-x86-64.so.2 (0x7f3adbe72000)

The reason it's not working is that libhdfs.so isn't listed as a
requirement for mylibrary.so.  You did not show us your link command
for mylibrary.so, but most likely there needs to be a -lhdfs in it.

regards, tom lane



[GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
I found the following error message in my logfiles.

Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
user=nutricate,db=admin_production ERROR:  missing chunk number 0 for
toast value
2411466 in pg_toast_2619
Oct 24 04:05:57 db-app02 postgres[24640]: [2-2]
user=nutricate,db=admin_production STATEMENT:  SELECT devices.id,
devices.name, devices.info, devices.pos_id,
devices.device_group_id, devices.header_id, devices.footer_id,
devices.device_type_id, devices.auth_code, devices.pos_comm_id,
devices.printer_comm_id, devices.sw_version, devices.hw_version,
devices.device_status, devices.entity_id, devices.address,
devices.created_by, devices.create_method, devices.created_on,
devices.modified_by, devices.updated_on, devices.version_id,
devices.unique_id, devices.hw_key, devices.config_status,
devices.activated_on, devices.store_id,
devices.last_status_update, devices.loaded_on,
devices.header2_id, devices.footer2_id, devices.timezone_id,
devices.scanner_comm_id, devices.public_address,
devices.hostname, devices.update_sw_version,
devices.proxy_address, devices.proxy_type_id,
devices.build_error,
Oct 24 04:05:57 db-app02 postgres[24640]: [2-3]  devices.local_info,
devices.associated_on FROM devices WHERE (devices.entity_id IN
(SELECT U0.id FROM entities U0 WHERE ((U0.lft > 280  AND U0.rgt <
2597 ) OR U0.id = 140 )) AND devices.auth_code = E'0063' )

I tried reproducing it with

SELECT * FROM devices WHERE (devices.entity_id IN (SELECT U0.id FROM
entities U0 WHERE ((U0.lft > 280  AND U0.rgt < 2597 ) OR U0.id = 140
)) AND devices.auth_code = E'0063' )

This returned no rows and did not reproduce the error. I tried reproducing
with an even broader approach

CREATE TEMP TABLE foo AS SELECT * FROM devices;
CREATE TEMP TABLE bar AS SELECT * FROM entities;

Still no repro. This is the second time I've seen this error, and the last
time it also did not reproduce. I'm wondering a couple of things. First, is
there some other way to reproduce this error? Second, would simply deleting
the row cause the problem to go away? I wouldn't think so, but why else is
it not reproducing? Third, what is the recommended course of action here. We
have hourly backups, but this is a production database, so I would prefer
not to have to restore. I tested a restore from an hour ago and it ran
successfully.

A


Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Merlin Moncure
On Tue, Oct 25, 2011 at 3:47 PM, Andrew Hammond
andrew.george.hamm...@gmail.com wrote:
 I found the following error message in my logfiles.
 Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
 user=nutricate,db=admin_production ERROR:  missing chunk number 0
 for toast value 2411466 in pg_toast_2619
 Oct 24 04:05:57 db-app02 postgres[24640]: [2-2]
 user=nutricate,db=admin_production STATEMENT:  SELECT devices.id,
 devices.name, devices.info, devices.pos_id,
 devices.device_group_id, devices.header_id, devices.footer_id,
 devices.device_type_id, devices.auth_code, devices.pos_comm_id,
 devices.printer_comm_id, devices.sw_version, devices.hw_version,
 devices.device_status, devices.entity_id, devices.address,
 devices.created_by, devices.create_method, devices.created_on,
 devices.modified_by, devices.updated_on, devices.version_id,
 devices.unique_id, devices.hw_key, devices.config_status,
 devices.activated_on, devices.store_id,
 devices.last_status_update, devices.loaded_on,
 devices.header2_id, devices.footer2_id, devices.timezone_id,
 devices.scanner_comm_id, devices.public_address,
 devices.hostname, devices.update_sw_version,
 devices.proxy_address, devices.proxy_type_id,
 devices.build_error,
 Oct 24 04:05:57 db-app02 postgres[24640]: [2-3]  devices.local_info,
 devices.associated_on FROM devices WHERE (devices.entity_id IN
 (SELECT U0.id FROM entities U0 WHERE ((U0.lft > 280  AND U0.rgt <
 2597 ) OR U0.id = 140 )) AND devices.auth_code = E'0063' )
 I tried reproducing it with
 SELECT * FROM devices WHERE (devices.entity_id IN (SELECT U0.id FROM
 entities U0 WHERE ((U0.lft > 280  AND U0.rgt < 2597 ) OR U0.id = 140
 )) AND devices.auth_code = E'0063' )
 This returned no rows and did not reproduce the error. I tried reproducing
 with an even broader approach
 CREATE TEMP TABLE foo AS SELECT * FROM devices;
 CREATE TEMP TABLE bar AS SELECT * FROM entities;
 Still no repro. This is the second time I've seen this error, and the last
 time it also did not reproduce. I'm wondering a couple of things. First, is
 there some other way to reproduce this error? Second, would simply deleting
 the row cause the problem to go away? I wouldn't think so, but why else is
 it not reproducing? Third, what is the recommended course of action here. We
 have hourly backups, but this is a production database, so I would prefer
 not to have to restore. I tested a restore from an hour ago and it ran
 successfully.

hrm, that's bizarre -- version etc?

merlin



Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Tom Lane
Andrew Hammond andrew.george.hamm...@gmail.com writes:
 Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
 user=nutricate,db=admin_production ERROR:  missing chunk number 0 for
 toast value
 2411466 in pg_toast_2619

Hmm ... pg_toast_2619 is pg_statistic's toast table.  Is this 9.0.x,
and are you by any chance in the habit of running CLUSTER or VACUUM FULL
on your system catalogs?  Could one have been running in the background
when this happened?

regards, tom lane



Re: [GENERAL] writing a foreign data wrapper for hdfs, but getting and undefined symbol error for hdfsConnect

2011-10-25 Thread Rob_pg
Hi Tom,

Thanks for the tip, I altered the gcc invocation as follows:

Here are the two gcc invocations originally creating the shared library:

gcc -Wall -fPIC -c mylibrary.c -o mylibrary.o \
-I $(A) -I $(B) -I $(C) -I $(E) -lhdfs

gcc -I $(A) -I $(B) -I $(C) -I $(E) -shared\
-Wl,-soname,mylibrary.so -o mylibrary.so mylibrary.o

#
Here's the new invocations: I added -lhdfs to the second gcc invocation.

gcc -I $(A) -I $(B) -I $(C) -I $(E) -shared\
 -lhdfs -Wl,-soname,mylibrary.so -o mylibrary.so mylibrary.o

Now I can see the libraries!

ldd mylibrary.so
linux-vdso.so.1 =>  (0x7fff499c5000)
libhdfs.so.0 => /usr/lib/libhdfs.so.0 (0x7f44e4739000)
libc.so.6 => /lib/libc.so.6 (0x7f44e43b6000)
libjvm.so => /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server/libjvm.so
(0x7f44e3866000)
libdl.so.2 => /lib/libdl.so.2 (0x7f44e3662000)
libpthread.so.0 => /lib/libpthread.so.0 (0x7f44e3445000)
/lib64/ld-linux-x86-64.so.2 (0x7f44e4b66000)
libm.so.6 => /lib/libm.so.6 (0x7f44e31c1000)


 Rob_pg robert7...@comcast.net writes:
  I've been working on creating a foreign data wrapper for hdfs on using
  version 9.1.0. This is my first time creating C functions against
  postgres, so hopefully this falls under the 'newbie' category and is
  easy to solve.
  
  The source code code does compile resulting in a shared library:
  
  file mylibrary.so: ELF 64-bit LSB shared object, x86-64, version 1
  (SYSV), dynamically linked, not stripped
  
  ldd mylibrary.so
  
  linux-vdso.so.1 =>  (0x7fff40fff000)
  libc.so.6 => /lib/libc.so.6 (0x7f3adb8cc000)
  /lib64/ld-linux-x86-64.so.2 (0x7f3adbe72000)
 
 The reason it's not working is that libhdfs.so isn't listed as a
 requirement for mylibrary.so.  You did not show us your link command
 for mylibrary.so, but most likely there needs to be a -lhdfs in it.
 
   regards, tom lane



[GENERAL] explicit deadlock-victim-priority mechanism

2011-10-25 Thread david.sahagian
ref =  [ e1qzdjc-xv...@gemulon.postgresql.org ]

I note with interest that [deadlock_timeout] can be used as "... a
poor-man's deadlock priority mechanism: a transaction with a high
[deadlock_timeout] is less likely to be chosen as the victim than one
with a low [deadlock_timeout]".

I for one, would definitely be interested in a more explicit priority 
mechanism.

My use case is:
We have some batch work processes whose deadlock-victim-priority I would
like to set to HIGH
(at the expense of smaller transactions, which I would like to be the victims).
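The poor-man's version described above can be sketched like this (note: SET deadlock_timeout is a superuser-only setting):

```sql
-- High-priority batch session: a large deadlock_timeout makes this
-- transaction less likely to be chosen as the deadlock victim
-- (superuser-only setting; SET LOCAL scopes it to the transaction).
BEGIN;
SET LOCAL deadlock_timeout = '10s';
-- ... long-running batch statements ...
COMMIT;
```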

-dvs-






Re: [GENERAL] writing a foreign data wrapper for hdfs, but getting and undefined symbol error for hdfsConnect

2011-10-25 Thread Tom Lane
Rob_pg robert7...@comcast.net writes:
 Thanks for the tip, I altered the gcc invocation as follows:

 Here are the two gcc invocations originally creating the shared library:

 gcc -Wall -fPIC -c mylibrary.c -o mylibrary.o \
 -I $(A) -I $(B) -I $(C) -I $(E) -lhdfs

 gcc -I $(A) -I $(B) -I $(C) -I $(E) -shared\
 -Wl,-soname,mylibrary.so -o mylibrary.so mylibrary.o

 Here's the new invocations: I added -lhdfs to the second gcc invocation.

Yeah, -l is useless when building a .o file; gcc will just ignore it.
(Conversely, there's not much point in -I switches in a link step.)

regards, tom lane



Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
On Tue, Oct 25, 2011 at 2:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andrew Hammond andrew.george.hamm...@gmail.com writes:
  Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
  user=nutricate,db=admin_production ERROR:  missing chunk number 0 for
  toast value
  2411466 in pg_toast_2619

 Hmm ... pg_toast_2619 is pg_statistic's toast table.  Is this 9.0.x,
 and are you by any chance in the habit of running CLUSTER or VACUUM FULL
 on your system catalogs?  Could one have been running in the background
 when this happened?


This is 9.0.4.

I am not aware of any VACUUM FULL or CLUSTER operations that are scheduled.
To the best of my knowledge this DB has never been either clustered or
vacuum full'd. The following occur in the log file immediately before the
error message above.

Oct 24 04:05:57 db-app02 postgres[24639]: [2-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_catalog.pg_statistic: index scans: 0
Oct 24 04:05:57 db-app02 postgres[24639]: [2-2] #011pages: 0 removed, 150
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [2-3] #011tuples: 254 removed, 925
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [2-4] #011system usage: CPU
0.01s/0.00u sec elapsed 0.02 sec
Oct 24 04:05:57 db-app02 postgres[24639]: [3-1] user=,db= LOG:  automatic
vacuum of table admin_production.public.devices: index scans: 0
Oct 24 04:05:57 db-app02 postgres[24639]: [3-2] #011pages: 0 removed, 353
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [3-3] #011tuples: 824 removed,
2261 remain
Oct 24 04:05:57 db-app02 postgres[24639]: [3-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.08 sec
Oct 24 04:05:57 db-app02 postgres[24639]: [4-1] user=,db= LOG:  automatic
analyze of table admin_production.public.devices system usage: CPU
0.00s/0.08u sec elapsed 0.17 sec


Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Tom Lane
Andrew Hammond andrew.george.hamm...@gmail.com writes:
 The following occur in the log file immediately before the
 error message above.

Hmm, that's pretty interesting ... are there any nearby autovacuums of
pg_toast_2619?

regards, tom lane



Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
On Tue, Oct 25, 2011 at 4:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andrew Hammond andrew.george.hamm...@gmail.com writes:
  The following occur in the log file immediately before the
  error message above.

 Hmm, that's pretty interesting ... are there any nearby autovacuums of
 pg_toast_2619?


Several, both before and after the error message:

Oct 24 03:49:57 db-app02 postgres[23554]: [4-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 03:49:57 db-app02 postgres[23554]: [4-2] #011pages: 0 removed, 206
remain
Oct 24 03:49:57 db-app02 postgres[23554]: [4-3] #011tuples: 220 removed, 608
remain
Oct 24 03:49:57 db-app02 postgres[23554]: [4-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.10 sec

Oct 24 03:53:57 db-app02 postgres[23800]: [5-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 03:53:57 db-app02 postgres[23800]: [5-2] #011pages: 0 removed, 206
remain
Oct 24 03:53:57 db-app02 postgres[23800]: [5-3] #011tuples: 220 removed, 608
remain
Oct 24 03:53:57 db-app02 postgres[23800]: [5-4] #011system usage: CPU
0.01s/0.00u sec elapsed 0.12 sec

Oct 24 03:57:57 db-app02 postgres[24059]: [4-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 03:57:57 db-app02 postgres[24059]: [4-2] #011pages: 0 removed, 206
remain
Oct 24 03:57:57 db-app02 postgres[24059]: [4-3] #011tuples: 220 removed, 608
remain
Oct 24 03:57:57 db-app02 postgres[24059]: [4-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.04 sec

Oct 24 04:01:57 db-app02 postgres[24394]: [4-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:01:57 db-app02 postgres[24394]: [4-2] #011pages: 0 removed, 206
remain
Oct 24 04:01:57 db-app02 postgres[24394]: [4-3] #011tuples: 220 removed, 608
remain
Oct 24 04:01:57 db-app02 postgres[24394]: [4-4] #011system usage: CPU
0.00s/0.01u sec elapsed 0.04 sec

Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
user=nutricate,db=admin_production ERROR:  missing chunk number 0 for toast
value 2411466 in pg_toast_2619

Oct 24 04:05:57 db-app02 postgres[24639]: [6-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:05:57 db-app02 postgres[24639]: [6-2] #011pages: 0 removed, 206
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [6-3] #011tuples: 216 removed, 608
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [6-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.08 sec

Oct 24 04:09:57 db-app02 postgres[24877]: [5-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:09:57 db-app02 postgres[24877]: [5-2] #011pages: 0 removed, 206
remain
Oct 24 04:09:57 db-app02 postgres[24877]: [5-3] #011tuples: 220 removed, 608
remain
Oct 24 04:09:57 db-app02 postgres[24877]: [5-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.10 sec

Oct 24 04:13:57 db-app02 postgres[25116]: [6-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:13:57 db-app02 postgres[25116]: [6-2] #011pages: 0 removed, 206
remain
Oct 24 04:13:57 db-app02 postgres[25116]: [6-3] #011tuples: 220 removed, 608
remain
Oct 24 04:13:57 db-app02 postgres[25116]: [6-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.12 sec


Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Tom Lane
Andrew Hammond andrew.george.hamm...@gmail.com writes:
 On Tue, Oct 25, 2011 at 4:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Hmm, that's pretty interesting ... are there any nearby autovacuums of
 pg_toast_2619?

 Several, both before and after the error message:

Well, it seems clear that somehow the vacuum deleted a toast tuple that
the other statement was about to fetch, but it's not clear how this
could be.  The pg_statistic fetch must have come from the planner, which
should always be called with a transaction snapshot established, and
that ought to protect it against vacuum deleting anything that could be
visible to SnapshotNow.  Weird.

[ pokes around for a bit ... ]  Hmm, can you say how the failing query
was submitted, exactly?  I'm wondering if it came in via simple Query
(PQexec) or extended-query protocol (anything with parameters).

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
On Tue, Oct 25, 2011 at 4:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andrew Hammond andrew.george.hamm...@gmail.com writes:
  On Tue, Oct 25, 2011 at 4:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Hmm, that's pretty interesting ... are there any nearby autovacuums of
  pg_toast_2619?

  Several, both before and after the error message:

 Well, it seems clear that somehow the vacuum deleted a toast tuple that
 the other statement was about to fetch, but it's not clear how this
 could be.  The pg_statistic fetch must have come from the planner, which
 should always be called with a transaction snapshot established, and
 that ought to protect it against vacuum deleting anything that could be
 visible to SnapshotNow.  Weird.

 [ pokes around for a bit ... ]  Hmm, can you say how the failing query
 was submitted, exactly?  I'm wondering if it came in via simple Query
 (PQexec) or extended-query protocol (anything with parameters).


The command was sent from a Python application via django-1.2.1 using psycopg2-2.2.1.

Andrew


[GENERAL] Large Rows

2011-10-25 Thread Lee Hachadoorian
I need some advice on storing/retrieving data in large rows. Invariably 
someone points out that very long rows are probably poorly normalized, 
but I have to deal with how to store a dataset which cannot be changed, 
specifically the ~23,000 column US Census American Community Survey.


The Census releases these data in 117 sequences of < 256 columns (in 
order to be read by spreadsheet applications with a 256 column limit). I 
have previously stored each sequence in its own table, which is pretty 
straightforward.


My problem is that some of the demographic researchers I work with want 
a one-table dump of the entire dataset. This would primarily be for data 
transfer. This is of limited actual use in analysis, but nonetheless, 
that's what we want to be able to do.


Now, I can't join all the sequences in one SQL query for export because 
of the 1600 column limit. So based on previous list activity (Tom Lane: 
"Perhaps you could collapse multiple similar columns into an array 
column?" 
http://archives.postgresql.org/pgsql-admin/2008-05/msg00211.php), I 
decided to try to combine all the sequences into one table using array 
columns. (This would actually make querying easier since the users 
wouldn't have to constantly JOIN the sequences in their queries.) Next 
problem: I run into the 8k row size limit once about half the columns 
are populated. As far as I can understand, even though a row 
theoretically supports a 1.6TB (!) row size, this only works for 
TOASTable data types (primarily text?). The vast majority of the 23k 
columns I'm storing are bigint.
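
As a back-of-the-envelope check (a sketch only: 8192 bytes is PostgreSQL's 
default page size, and the byte widths below are the documented sizes of the 
fixed-length integer types, ignoring per-row header and alignment overhead):

```python
PAGE_SIZE = 8192   # PostgreSQL's default block size, in bytes
N_COLS = 23_000

# Fixed-width integer types cannot be moved out of line by TOAST;
# their on-disk sizes in bytes:
sizes = {"smallint": 2, "integer": 4, "bigint": 8}

for typ, nbytes in sizes.items():
    row_bytes = N_COLS * nbytes
    print(f"{N_COLS} x {typ}: {row_bytes} bytes; "
          f"fits in an {PAGE_SIZE}-byte page: {row_bytes <= PAGE_SIZE}")
```

Even all-smallint comes to 46,000 bytes before any per-row overhead, so the 
page-size limit is exceeded regardless of integer width.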


Questions:

1) Is there any way to solve problem 1, which is to export the 23k 
columns from the database as it is, with 117 linked tables?
2) Is there any way to store the data all in one row? If numeric types 
are un-TOASTable, 23k columns will necessarily break the 8k limit even 
if they were all smallint, correct?


Regards,
--Lee

--
Lee Hachadoorian
PhD, Earth & Environmental Sciences (Geography)
Research Associate, CUNY Center for Urban Research
http://freecity.commons.gc.cuny.edu




Re: [GENERAL] Large Rows

2011-10-25 Thread David Johnston
On Oct 25, 2011, at 22:17, Lee Hachadoorian lee.hachadoor...@gmail.com wrote:

 I need some advice on storing/retrieving data in large rows. Invariably 
 someone points out that very long rows are probably poorly normalized, but I 
 have to deal with how to store a dataset which cannot be changed, 
 specifically the ~23,000 column US Census American Community Survey.
 
 The Census releases these data in 117 sequences of < 256 columns (in order 
 to be read by spreadsheet applications with a 256 column limit). I have 
 previously stored each sequence in its own table, which is pretty 
 straightforward.
 
 My problem is that some of the demographic researchers I work with want a 
 one-table dump of the entire dataset. This would primarily be for data 
 transfer. This is of limited actual use in analysis, but nonetheless, that's 
 what we want to be able to do.
 
 Now, I can't join all the sequences in one SQL query for export because of 
 the 1600 column limit. So based on previous list activity (Tom Lane: "Perhaps 
 you could collapse multiple similar columns into an array column?" 
 http://archives.postgresql.org/pgsql-admin/2008-05/msg00211.php), I decided 
 to try to combine all the sequences into one table using array columns. (This 
 would actually make querying easier since the users wouldn't have to 
 constantly JOIN the sequences in their queries.) Next problem: I run into the 
 8k row size limit once about half the columns are populated. As far as I can 
 understand, even though a row theoretically supports a 1.6TB (!) row size, 
 this only works for TOASTable data types (primarily text?). The vast majority 
 of the 23k columns I'm storing are bigint.
 
 Questions:
 
 1) Is there any way to solve problem 1, which is to export the 23k columns 
 from the database as it is, with 117 linked tables?
 2) Is there any way to store the data all in one row? If numeric types are 
 un-TOASTable, 23k columns will necessarily break the 8k limit even if they 
 were all smallint, correct?
 
 Regards,
 --Lee
 

You can brute-force a 23k-column CSV output file using a programming language, 
but if you need to keep it in a database, the fact that we are talking about 
being over the column limit by a factor of roughly fourteen means you are 
basically SOL with PostgreSQL.
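
A minimal sketch of that brute-force approach, assuming each sequence has 
already been exported to its own CSV with a shared key column (the file 
layout and the `geoid` key name here are hypothetical):

```python
import csv

def merge_on_key(paths, key, out_path):
    """Merge per-sequence CSV exports into one wide CSV, matching rows on
    a shared key column. Assumes every input file has the key column and
    the same set of key values."""
    merged = {}        # key value -> accumulated wide row
    header = [key]
    for path in paths:
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            cols = [c for c in reader.fieldnames if c != key]
            header.extend(cols)
            for row in reader:
                wide = merged.setdefault(row[key], {key: row[key]})
                for c in cols:
                    wide[c] = row[c]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=header)
        writer.writeheader()
        for k in sorted(merged):
            writer.writerow(merged[k])
```

With 117 sequence files this reads each file once and only holds the merged 
rows in memory, which should be manageable for a typical ACS extract.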

Even if such a table were possible, how it, in its entirety, would be useful 
is beyond me.

There are few things that cannot be changed, and this requirement is unlikely 
to be one of them. Your problems are more political than technical, and those 
are hard to advise on in an e-mail.

If you need technical solutions there may be another tool out there that can 
get you what you want but stock PostgreSQL isn't going to cut it.

Not having any idea what those 23k columns are doesn't help either; the census 
questionnaire isn't that big...

Instead of giving them what they think they want, talk to them and then try to 
provide what they actually need given the limitations of your current toolset, 
or resolve to find a more suitable tool if the needs are valid but cannot be 
met with the existing tools.

David J.


Re: [GENERAL] Large Rows

2011-10-25 Thread Royce Ausburn

On 26/10/2011, at 1:17 PM, Lee Hachadoorian wrote:

 I need some advice on storing/retrieving data in large rows. Invariably 
 someone points out that very long rows are probably poorly normalized, but I 
 have to deal with how to store a dataset which cannot be changed, 
 specifically the ~23,000 column US Census American Community Survey.
 
 The Census releases these data in 117 sequences of < 256 columns (in order 
 to be read by spreadsheet applications with a 256 column limit). I have 
 previously stored each sequence in its own table, which is pretty 
 straightforward.
 
 My problem is that some of the demographic researchers I work with want a 
 one-table dump of the entire dataset. This would primarily be for data 
 transfer. This is of limited actual use in analysis, but nonetheless, that's 
 what we want to be able to do.
 
 Now, I can't join all the sequences in one SQL query for export because of 
 the 1600 column limit. So based on previous list activity (Tom Lane: "Perhaps 
 you could collapse multiple similar columns into an array column?" 
 http://archives.postgresql.org/pgsql-admin/2008-05/msg00211.php), I decided 
 to try to combine all the sequences into one table using array columns. (This 
 would actually make querying easier since the users wouldn't have to 
 constantly JOIN the sequences in their queries.) Next problem: I run into the 
 8k row size limit once about half the columns are populated. As far as I can 
 understand, even though a row theoretically supports a 1.6TB (!) row size, 
 this only works for TOASTable data types (primarily text?). The vast majority 
 of the 23k columns I'm storing are bigint.
 
 Questions:
 
 1) Is there any way to solve problem 1, which is to export the 23k columns 
 from the database as it is, with 117 linked tables?
 2) Is there any way to store the data all in one row? If numeric types are 
 un-TOASTable, 23k columns will necessarily break the 8k limit even if they 
 were all smallint, correct?
 


Perhaps hstore would help? 
http://www.postgresql.org/docs/9.0/static/hstore.html  

I'm not sure if they're TOASTable, though.  
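
For what it's worth, hstore is a variable-length type, so its values should 
be TOASTable; the trade-off is that every value is stored as text. A minimal 
sketch of building an hstore literal from Python (the variable codes below 
are made-up examples):

```python
def to_hstore_literal(d):
    """Render a dict as an hstore literal: "k1"=>"v1", "k2"=>"v2".
    Backslashes and double quotes inside keys/values are escaped."""
    def esc(s):
        return '"' + str(s).replace('\\', '\\\\').replace('"', '\\"') + '"'
    return ', '.join(f'{esc(k)}=>{esc(v)}' for k, v in d.items())

# Hypothetical ACS-style variable codes and values for one sequence:
row = {"B01001_001": 5000, "B01001_002": 2400}
print(to_hstore_literal(row))
# prints: "B01001_001"=>"5000", "B01001_002"=>"2400"
```

Such a literal could then be cast with `::hstore` in an INSERT, e.g. with one 
hstore column per sequence.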

--Royce

