[GENERAL] Excessive planner time for some queries with high statistics

2011-11-04 Thread Stuart Bishop
I'm seeing some queries, possibly to do with using a UNIQUE index,
that have a fast time reported by EXPLAIN ANALYZE but an actual time,
as reported by \timing, 150ms+ higher. PostgreSQL 8.4.9.

Simple example queries: http://paste.ubuntu.com/726131/

Table definitions: http://paste.ubuntu.com/726193/

Rewriting the query to use common table expressions worked around the
original problem: http://paste.ubuntu.com/726141/ ('fixed' version of
the original more complex query).

We also found this problem did not occur on one of our staging
systems, which had a default statistics target of 100. Lowering the
statistics target on the relevant columns from 1000 to 100 and
re-analyzing made the overhead unnoticeable.
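
For reference, the statistics change described above can be made per
column; a sketch with placeholder table and column names:

```sql
-- Hypothetical names: lower the per-column statistics target back to
-- 100, then re-analyze so the planner works from the smaller sample.
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE some_table;
```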

The thinking on IRC was that this might be a regression in 8.4.9, but
I haven't got earlier versions to test with at the moment. I was asked
to obtain some traces but have not yet been able to arrange getting
the tools onto a suitable server.

-- 
Stuart Bishop stu...@stuartbishop.net
http://www.stuartbishop.net/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR from pg_restore - From OS X to Ubuntu

2011-11-04 Thread Magnus Hagander
On Fri, Nov 4, 2011 at 05:40, Naoko Reeves naokoree...@gmail.com wrote:
 I dumped from:
 OS: OS X 10.5.8
 pg version: PostgreSQL 9.0.4 on x86_64-apple-darwin, compiled by GCC
 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), 64-bit
 Installation Method: EDB installer
 to:
 OS: Ubuntu 10.04.3 64bit
 pg version: PostgreSQL 9.1.1 on x86_64-pc-linux-gnu, compiled by
 gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit
 installation Method: apt-get install postgresql-9.1 postgresql-contrib-9.1
 During the restoration I got the following errors:
 ERROR:  could not access file $libdir/targetinfo: No such file or
 directory
 ERROR:  function public.pldbg_get_target_info(text, char) does not exist
 ERROR:  could not access file $libdir/plugins/plugin_debugger: No such
 file or directory
 ERROR:  function public.plpgsql_oid_debug(oid) does not exist

 My question is:
 1. Is this safe to ignore? I don't recall using any of these functions.
 2. If not, how can I install those missing libraries?
 Thank you very much for your time in advance.

It looks like it's the pl/pgsql debugger that has those functions. If
you're not using it then you can safely ignore them, but if you do use
it, expect it to be broken.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] ERROR from pg_restore - From OS X to Ubuntu

2011-11-04 Thread Albe Laurenz
Naoko Reeves wrote:
 I dumped from:
 [...] PostgreSQL 9.0.4 [...]

 to:
 [...] PostgreSQL 9.1.1 [...]

 During the restoration I got the following errors:
 
 ERROR:  could not access file $libdir/targetinfo: No such file or
directory
 ERROR:  function public.pldbg_get_target_info(text, char) does not
exist
 ERROR:  could not access file $libdir/plugins/plugin_debugger: No
such file or directory
 ERROR:  function public.plpgsql_oid_debug(oid) does not exist
 
 My question is:
 1. Is this safe to ignore? I don't recall using any of these functions.
 2. If not, how can I install those missing libraries?

I'd say it is safe to ignore.
You must have the EDB debugger installed in the 9.0.4 database, but not
in the destination database.

You'll probably end up with some garbage (types etc.) in the public
schema that you should remove if you ever want to install the EDB
debugger in the target database, but other than that they should not
bother you.
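
If you ever do want to clean up before reinstalling the debugger, any
objects that did make it across can be dropped explicitly. A sketch
based only on the signatures shown in the error messages above (check
your schema for other leftovers such as types):

```sql
-- Drop any orphaned pl/pgsql debugger functions, if present.
-- Signatures taken from the pg_restore errors; "char" is the
-- single-byte internal type, quoted to avoid mapping to character(1).
DROP FUNCTION IF EXISTS public.pldbg_get_target_info(text, "char");
DROP FUNCTION IF EXISTS public.plpgsql_oid_debug(oid);
```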

Yours,
Laurenz Albe



Re: [GENERAL] Recommendations for SSDs in production?

2011-11-04 Thread Thomas Mieslinger

On 03.11.2011 18:59, Robert Treat wrote:

 On Wed, Nov 2, 2011 at 11:02 PM, Benjamin Smith
 li...@benjamindsmith.com wrote:

 On Wednesday, November 02, 2011 11:39:25 AM Thomas Strunz wrote:

 I guess go the Intel route or some other crazy expensive enterprise stuff.

 It's advice about some of the crazy expensive enterprise stuff that I'm
 seeking...? I don't mind spending some money if I get to keep up this
 level of performance, but also am not looking to make somebody's private
 plane payment, either.

 There's a pretty varied mix of speed, durability, and price with any
 SSD-based architecture, but the two that have proven best in our
 testing and production use (for ourselves and our clients) seem to be
 Intel (mostly the 320 series, iirc) and Fusion-IO. I'd start by looking
 at those.

 Robert Treat
 conjecture: xzilla.net
 consulting: omniti.com

Stec (http://stec-inc.com/) and Texas Memory Systems
(http://www.ramsan.com/) make the kind of SSDs you want for enterprise
applications. Reading the specs for the Intel 320 and 710, you can
calculate how long the SSDs will live when loaded with a maximum random
I/O workload:

intel 320  80GB   10TB written  1 4k IOPS
 about 3 days to the end of the design lifetime

intel 710 100GB  500TB written  2700 4k IOPS
 about 575 days to the end of the design lifetime

If you are using Linux you can use the values in /proc/diskstats to get a
rough idea of what your system is doing and how many TB get written per day.

Stec offers a wear-resistant SSD composed of 8GB of RAM, a big
capacitor, 8GB of flash, and some logic to write the RAM contents into
flash when the power goes out.

See:
http://www.intel.com/content/dam/doc/product-brief/ssd-320-brief.pdf

http://www.intel.com/content/dam/doc/product-specification/ssd-710-series-specification.pdf

http://embeddedcomputingsystems.com/wp-content/uploads/2011/05/STEC_AVNET_SSD_Spring2011.pdf
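
The per-disk counters live in /proc/diskstats on Linux. A small sketch
(the sample line is made up) that pulls the sectors-written counter,
field 10 of each line, and converts it to MiB:

```python
# Parse a /proc/diskstats-style line and report how much data was written.
# Per-device field layout: major minor name reads reads_merged
# sectors_read ms_reading writes writes_merged sectors_written ...
# A "sector" in this file is always 512 bytes, regardless of the device.

def mib_written(diskstats_line: str) -> float:
    fields = diskstats_line.split()
    sectors_written = int(fields[9])
    return sectors_written * 512 / (1024 * 1024)

# Made-up sample line for illustration:
sample = "8 0 sda 1000 200 50000 3000 2000 400 8000000 9000 0 5000 12000"
print(mib_written(sample))  # 3906.25 MiB written
```

Sampling this counter twice, a day apart, and taking the difference
gives the TB-written-per-day figure needed for the lifetime estimate.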






Re: [GENERAL] Hint for a query

2011-11-04 Thread Uwe Schroeder


 I have this tables
 
 
 Table: Contact
 IdContact
 First Name
 Second Name
 … other columns
 
 Table: Employee
 IdEmployee
 IdContact, related to Contact table
 … other columns
 
 Table: Salesman
 IdSaleman
 IdEmployee, if salesman is employee, related to Employee table
 IdContact, if salesman is not an employee, related to Contact table
 
 
 I need a query
 
 Id Salesman - Second name - First name
 
 But I can't figure out how to do it; can someone give me some advice?
 
 Thanks


Needless to say, this is a bit of an odd table layout. You always end up at the
contact table, but the layout makes it harder to query. Personally I'd have a
foreign key from the contact table to the employee table, rather than from the
employee table to the contact table; that would also eliminate the employee
foreign key in the salesman table. It would also allow you to just join the
salesman table to the contact table and then figure out whether the contact is
an employee.

well, that said. Here's a quick one without a lot of deep thought...

select a.idsalesman, b.firstname, b.secondname
  from salesman a
  join contact b on b.idcontact = a.idcontact
union
select c.idsalesman, d.firstname, d.secondname
  from salesman c
  join employee e on e.idemployee = c.idemployee
  join contact d on d.idcontact = e.idcontact;

No guarantees though. It's midnight here and I had a long day...
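
A single-query alternative (a sketch, reusing the hypothetical column
names from the tables above) pushes the employee lookup into a LEFT JOIN
and lets COALESCE pick whichever contact reference is populated:

```sql
-- Go through the employee only when salesman.idcontact is NULL;
-- COALESCE selects the direct contact if present, else the
-- employee's contact.
SELECT s.idsalesman, c.firstname, c.secondname
  FROM salesman s
  LEFT JOIN employee e ON e.idemployee = s.idemployee
  JOIN contact c ON c.idcontact = COALESCE(s.idcontact, e.idcontact);
```

This avoids the UNION's duplicate-elimination pass, at the cost of
assuming exactly one of the two references is set per salesman row.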



Re: [GENERAL] ERROR from pg_restore - From OS X to Ubuntu

2011-11-04 Thread Naoko Reeves
Got it. Thank you very much!

On Fri, Nov 4, 2011 at 2:06 AM, Albe Laurenz laurenz.a...@wien.gv.atwrote:

 Naoko Reeves wrote:
  I dumped from:
  [...] PostgreSQL 9.0.4 [...]

  to:
  [...] PostgreSQL 9.1.1 [...]

  During the restoration I got the following errors:
 
  ERROR:  could not access file $libdir/targetinfo: No such file or
 directory
  ERROR:  function public.pldbg_get_target_info(text, char) does not
 exist
  ERROR:  could not access file $libdir/plugins/plugin_debugger: No
 such file or directory
  ERROR:  function public.plpgsql_oid_debug(oid) does not exist
 
  My question is:
  1. Is this safe to ignore? I don't recall using any of these functions.
  2. If not, how can I install those missing libraries?

 I'd say it is safe to ignore.
 You must have the EDB debugger installed in the 9.0.4 database, but not
 in the destination database.

 You'll probably end up with some garbage (types etc.) in the public
 schema
 that you should remove if you ever want to install the EDB debugger in
 the
 target database, but other than that they should not bother you.

 Yours,
 Laurenz Albe




-- 
Naoko Reeves
http://www.anypossibility.com/


Re: [GENERAL] 9.1 replication on different arch

2011-11-04 Thread Hannes Erven
Am 2011-11-03 02:40, schrieb Martín Marqués:
 
 Sad thing is that it's not so easy on Debian. With Fedora all I had to
 do is select the arch type and that's all.

Have a look at dpkg --force-architecture .


-hannes



Re: [GENERAL] Memory Issue

2011-11-04 Thread Ioana Danes
Hi Scott,

I followed your advice and ran the test with the changes suggested in
points 1, 2 and 3 below, and my performance test ran for 18 hours without
swapping. I did see a 40% drop in performance, but I think that is a different
problem. I will run more tests and post the results if anyone is interested.

BTW, I read PostgreSQL 9.0 High Performance by Greg Smith yesterday,
and he does cover these parameters. I should have done that earlier; it
is a great book.

Thank you for your help,
Ioana




- Original Message -
From: Scott Marlowe scott.marl...@gmail.com
To: Ioana Danes ioanasoftw...@yahoo.ca
Cc: PostgreSQL General pgsql-general@postgresql.org
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue

On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes ioanasoftw...@yahoo.ca wrote:

 After another half an hour almost the entire swap is used and the system
 performs really badly: 100 TPS or lower.
 It never runs out of memory, though!

 I would like to ask for your opinion on this issue.
 My concerns are why the memory is not reused earlier and why it resorts to
 swapping when the system does only these 2 inserts.
 Is this an OS issue, a postgres issue, a configuration issue?
 Your advice is greatly appreciated.

You can try a few things.

1: Lower your shared_buffers. It's unlikely you really need 4G for
them. A few hundred megs is probably plenty for the type of work
you're doing. Let the kernel cache the data you're not hitting right
this second.

2: Set swappiness to 0. I.e. edit /etc/sysctl.conf and add a line
like vm.swappiness = 0, then run sudo sysctl -p

3: Turn off overcommit. Same as number 2: set vm.overcommit_memory =
2, which will turn off the ability of Linux to overcommit memory and
should then turn off the OOM killer.

4: Just turn off swap. With only 16 gigs this is a tad dangerous,
especially if you haven't turned off the OOM killer in step 3. Memory
is cheap; throw at least 32G into the machine. With 1200 users, you
really need plenty of memory. To turn off swap, add something like
/sbin/swapoff -a to the /etc/rc.local file (before the exit line,
natch)
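
Taken together, points 2 and 3 amount to a small /etc/sysctl.conf
fragment like the following (the values are the ones suggested above;
whether they suit a given box depends on its workload):

```
# /etc/sysctl.conf additions:
# discourage the kernel from swapping, and disable memory overcommit
# (which also keeps the OOM killer out of the picture)
vm.swappiness = 0
vm.overcommit_memory = 2
```

Load the new values with: sudo sysctl -p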

Re: [GENERAL] equivalent to replication_timeout on standby server

2011-11-04 Thread Samba
Thanks, Fujii, for that hint...

I searched around on the internet for that trick, and it looks like we can
make the standby close its connection to the master much earlier than it
otherwise would; that is good enough for me now.

But there still seem to be two problem areas that could be improved over
time:

   - Although both the master (with replication_timeout) and the slave (with
   the tcp keepalive options in the primary_conninfo parameter) close the
   connection quickly (based on the tcp idle-connection timeout), as of now
   they do not log this information. It would be really helpful if such
   disconnects were logged with appropriate severity, so that the problem can
   be identified early and to help keep track of patterns and history of such
   issues.
   - Presently, neither the master nor the standby server attempts
   to resume streaming replication when they happen to see each other again
   after a prolonged disconnect. It would be better if the master, the slave,
   or both made periodic checks to find out whether the other is reachable
   and resumed replication (if possible, or else logged a message that a
   full sync may be required).


Thanks and Regards,
Samba

--
On Fri, Nov 4, 2011 at 7:25 AM, Fujii Masao masao.fu...@gmail.com wrote:

 On Thu, Nov 3, 2011 at 12:25 AM, Samba saas...@gmail.com wrote:
  The postgres manual explains the replication_timeout to be used to
 
  Terminate replication connections that are inactive longer than the
  specified number of milliseconds. This is useful for the primary server
 to
  detect a standby crash or network outage
 
  Is there a similar configuration parameter that helps the WAL receiver
  processes to terminate the idle connections on the standby servers?

 No.

 But setting keepalive libpq parameters in primary_conninfo might be useful
 to detect the termination of connection from the standby server.

 Regards,

 --
 Fujii Masao
 NIPPON TELEGRAPH AND TELEPHONE CORPORATION
 NTT Open Source Software Center



Re: [GENERAL] Recommendations for SSDs in production?

2011-11-04 Thread Yeb Havinga

On 2011-11-04 04:21, Kurt Buff wrote:
Oddly enough, Tom's Hardware has a review of the Intel offering today 
- might be worth your while to take a look at it. Kurt 


Thanks for that link! Seeing media wearout comparisons between 'consumer 
grade' and 'enterprise' disks was enough for me to stop thinking about 
the vertex 3 and intel 510 behind hardware raid: I'm going to stick with 
Intel 710 and Vertex 2 Pro on onboard SATA.


Tom's Hardware also showed how to test wearout using the workload
indicator, so I thought, let's do that with a pgbench workload.


First, if you're interested in doing a test like this yourself: I'm
testing on Ubuntu 11.10, but even though this is a brand-new
distribution, the smart database was a few months old. Running
'update-smart-drivedb' had the effect that the names of the values turned
into something useful: instead of #LBAs written, it now shows #32MiBs
written. Also, there are now three 'workload'-related parameters.


225 Host_Writes_32MiB       0x0032  100  100  000  Old_age   Always  -  108551
226 Workld_Media_Wear_Indic 0x0032  100  100  000  Old_age   Always  -  17
227 Workld_Host_Reads_Perc  0x0032  100  100  000  Old_age   Always  -  0
228 Workload_Minutes        0x0032  100  100  000  Old_age   Always  -  211
232 Available_Reservd_Space 0x0033  100  100  010  Pre-fail  Always  -  0
233 Media_Wearout_Indicator 0x0032  100  100  000  Old_age   Always  -  0
241 Host_Writes_32MiB       0x0032  100  100  000  Old_age   Always  -  108551
242 Host_Reads_32MiB        0x0032  100  100  000  Old_age   Always  -  21510


Tom's hardware on page 
http://www.tomshardware.com/reviews/ssd-710-enterprise-x25-e,3038-4.html 
shows how to turn these numbers into useful values.


The numbers above were taken 211 minutes after I cleared the workload 
values with smartctl -t vendor,0x40 /dev/sda. If you do that, the 
workload values become 0, then after a few minutes they all become 65535 
and not before 60 minutes of testing you'll see some useful values returned.


During the test, I did two one hour pgbench runs on a md raid1 with the 
intel 710 and vertex 2 pro, wal in ram.

pgbench -i -s 300 t (fits in ram)
pgbench -j 20 -c 20 -M prepared -T 3600 -l  t  (two times)

% mediawear by workload is Workld_Media_Wear_Indic / 1024
17/1024 = .0166015625 %

Let's turn this into a number of days. I take the most pessimistic
number of 120 minutes of actual pgbench testing, instead of the 211
minutes since the workload reset.

120/(17/1024/100)/60/24 = 501.9608599031 days

The Host_Writes_32MiB value was 91099 before the test; now it is at 108551.
(108551-91099)*32/1024 = 545 GB written during the test.

(108551-91099)*32/1024/1024/(17/1024/100) = 3208 TB before media wearout.

This number fits between Tom's hardware's calculated wearout numbers, 
7268 TB for sequential and 1437 TB for random load.
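
The arithmetic above can be double-checked with a short script (a
sketch plugging in the raw SMART values quoted above):

```python
# Wearout arithmetic from the SMART values above.
wear_raw = 17                # Workld_Media_Wear_Indic raw value
writes_before = 91099        # Host_Writes_32MiB before the test
writes_after = 108551        # Host_Writes_32MiB after the test
test_minutes = 120           # pessimistic: actual pgbench time,
                             # not the 211 minutes since the reset

wear_fraction = wear_raw / 1024 / 100      # raw value is 1/1024 of a percent
days_to_wearout = test_minutes / wear_fraction / 60 / 24
gb_written = (writes_after - writes_before) * 32 / 1024
tb_before_wearout = gb_written / 1024 / wear_fraction

print(round(days_to_wearout))    # 502 days at this load
print(round(gb_written))         # 545 GB written during the test
print(round(tb_before_wearout))  # 3208 TB before media wearout
```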


-- Yeb


PS: info on test setup
Model Number:   INTEL SSDSA2BZ100G3  Firmware Revision:  6PB10362
Model Number:   OCZ-VERTEX2 PRO  Firmware Revision:  1.35

partitions aligned on 512kB boundary.
workload on ~20GB software raid mirror (drives are 100GB).

Linux client46 3.0.0-12-generic #20-Ubuntu SMP Fri Oct 7 14:56:25 UTC 
2011 x86_64 x86_64 x86_64 GNU/Linux
PostgreSQL 9.2devel on x86_64-unknown-linux-gnu, compiled by gcc 
(Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1, 64-bit


/proc/sys/vm/dirty_background_bytes set to 17850

non standard parameters of pg are:
maintenance_work_mem = 1GB # pgtune wizard 2011-10-28
checkpoint_completion_target = 0.9 # pgtune wizard 2011-10-28
effective_cache_size = 16GB # pgtune wizard 2011-10-28
work_mem = 80MB # pgtune wizard 2011-10-28
wal_buffers = 8MB # pgtune wizard 2011-10-28
checkpoint_segments = 96
shared_buffers = 5632MB # pgtune wizard 2011-10-28
max_connections = 300 # pgtune wizard 2011-10-28

Latency and tps graphs of *one* of the 20 clients during the second 
pgbench test are here: http://imgur.com/a/jjl13 - note that max latency 
has dropped from ~ 3 seconds from earlier tests to ~ 1 second - this is 
mainly due to an increase of checkpoint segments from 16 to 96.







[GENERAL] Streaming Replication woes

2011-11-04 Thread Sean Patronis

I am running Postgres 9.1

I have followed the howto here:
http://wiki.postgresql.org/wiki/Streaming_Replication

I am attempting to replicate an existing database.

On the Master, I get the following error in the postgres log file:

FATAL:  must be replication role to start walsender


On the slave I get this:
FATAL:  could not connect to the primary server: FATAL:  must be 
replication role to start walsender


I have googled both of those log entries to no avail.

note that the sender process on the master is not running.

What simple step am I missing?




Re: [GENERAL] Streaming Replication woes

2011-11-04 Thread Raghavendra
On Fri, Nov 4, 2011 at 8:20 PM, Sean Patronis spatro...@add123.com wrote:

  I am running Postgres 9.1

 I have followed the howto here:
 http://wiki.postgresql.org/wiki/Streaming_Replication

 I am attempting to replicate an existing database.

 On the Master, I get the following error in the postgres log file:

 FATAL:  must be replication role to start walsender



On the slave I get this:
 FATAL:  could not connect to the primary server: FATAL:  must be
 replication role to start walsender

 I have googled both of those log entries to no avail.

 note that the sender process on the master is not running.

 What simple step am I missing?



Step 3 from the wiki, and reload the PG instance:


   - 3. Set up connections and authentication so that the standby server
   can successfully connect to the replication pseudo-database on the
   primary.


$ $EDITOR postgresql.conf

listen_addresses = '192.168.0.10'

$ $EDITOR pg_hba.conf

# The standby server must have superuser access privileges.
host  replication  postgres  192.168.0.20/22  trust


---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


Re: [GENERAL] Streaming Replication woes

2011-11-04 Thread Sean Patronis

On 11/04/2011 10:59 AM, Thom Brown wrote:

On 4 November 2011 16:50, Sean Patronisspatro...@add123.com  wrote:

I am running Postgres 9.1

I have followed the howto here:
http://wiki.postgresql.org/wiki/Streaming_Replication

I am attempting to replicate an existing database.

On the Master, I get the following error in the postgres log file:

FATAL:  must be replication role to start walsender


On the slave I get this:
FATAL:  could not connect to the primary server: FATAL:  must be replication
role to start walsender

I have googled both of those log entries to no avail.

note that the sender process on the master is not running.

What simple step am I missing?

What have you got primary_conninfo set to on the standby in
recovery.conf?  Are you trying to use a regular user?  If so, you will
have to grant it REPLICATION permissions on the primary, which was
introduced in 9.1.




The primary_conninfo in the recovery.conf is set to :
primary_conninfo  = 'host=192.168.127.12 port=5432 user=postgres'

So I should just have to grant the postgres user REPLICATION 
permissions, and be good?







Re: [GENERAL] Recommendations for SSDs in production?

2011-11-04 Thread David Boreham

On 11/4/2011 8:26 AM, Yeb Havinga wrote:


First, if you're interested in doing a test like this yourself: I'm
testing on Ubuntu 11.10, but even though this is a brand-new
distribution, the smart database was a few months old. Running
'update-smart-drivedb' had the effect that the names of the values
turned into something useful: instead of #LBAs written, it now shows
#32MiBs written. Also there are now three 'workload'-related parameters.


I submitted the patch for these to smartmontools a few weeks ago and it
is now in the current db, but not yet in any of the distro update
packages. I probably forgot to mention in my post here that you need the
latest db for the 710. Also, if you pull the trunk source code and build
it yourself, it has the ability to decode the drive stats log data
(example pasted below). I haven't yet found a use for this myself, but
it does seem to have a little more information than the SMART
attributes. (Thanks to Christian Franke of the smartmontools project for
implementing this feature.)


Your figures from the workload wear roughly match mine. In production we 
don't expect to subject the drives to anything close to 100% of the 
pgbench workload (probably around 1/10 of that on average), so the 
predicted wear life of the drive is 10+ years in our estimates, under 
production loads.


The big question, of course, is: can the drive's wearout estimate be
trusted? A little more information from Intel about how it is
calculated would help allay concerns in this area.


# ./smartctl -l devstat,0 /dev/sda
smartctl 5.42 2011-10-10 r3434 [x86_64-linux-2.6.32-71.29.1.el6.x86_64] (local 
build)
Copyright (C) 2002-11 by Bruce Allen,http://smartmontools.sourceforge.net

Device Statistics (GP Log 0x04) supported pages
Page Description
  0  List of supported log pages
  1  General Statistics
  4  General Errors Statistics
  5  Temperature Statistics
  6  Transport Statistics
  7  Solid State Device Statistics

# ./smartctl -l devstat /dev/sda
smartctl 5.42 2011-10-10 r3434 [x86_64-linux-2.6.32-71.29.1.el6.x86_64] (local 
build)
Copyright (C) 2002-11 by Bruce Allen,http://smartmontools.sourceforge.net

Device Statistics (GP Log 0x04)
Page Offset Flg Size   Value  Description
  1  =  ==  =  =  == General Statistics (rev 2) ==
  1  0x008  V-  4 10  Lifetime Power-On Resets
  1  0x010  V-  4200  Power-on Hours
  1  0x018  V-  6 3366822529  Logical Sectors Written
  1  0x020  V-  6  248189788  Number of Write Commands
  1  0x028  V-  6   54653524  Logical Sectors Read
  1  0x030  V-  62626204  Number of Read Commands
  4  =  ==  =  =  == General Errors Statistics (rev 1) ==
  4  0x008  V-  4  0  Number of Reported Uncorrectable Errors
  4  0x010  V-  4  0  Resets Between Cmd Acceptance and Completion
  5  =  ==  =  =  == Temperature Statistics (rev 1) ==
  5  0x008  V-  1 21  Current Temperature
  5  0x010  V-  1 20  Average Short Term Temperature
  5  0x018  --  1 20  Average Long Term Temperature
  5  0x020  V-  1 30  Highest Temperature
  5  0x028  V-  1 17  Lowest Temperature
  5  0x030  V-  1 23  Highest Average Short Term Temperature
  5  0x038  V-  1 18  Lowest Average Short Term Temperature
  5  0x040  --  1   -128  Highest Average Long Term Temperature
  5  0x048  --  1   -128  Lowest Average Long Term Temperature
  5  0x050  V-  4  0  Time in Over-Temperature
  5  0x058  V-  1 70  Specified Maximum Operating Temperature
  5  0x060  V-  4  0  Time in Under-Temperature
  5  0x068  V-  1  0  Specified Minimum Operating Temperature
  6  =  ==  =  =  == Transport Statistics (rev 1) ==
  6  0x008  V-  4 77  Number of hardware resets
  6  0x010  V-  4 22  Number of ASR Events
  6  0x018  V-  4  0  Number of Interface CRC Errors
  7  =  ==  =  =  == Solid State Device Statistics (rev 1) ==
  7  0x008  V-  1  0  Percentage Used Endurance Indicator
||_ N normalized
|__ V valid





Re: [GENERAL] 9.1 replication on different arch

2011-11-04 Thread Martín Marqués
2011/11/4 Hannes Erven h...@gmx.at:
 Am 2011-11-03 02:40, schrieb Martín Marqués:

 Sad thing is that it's not so easy on Debian. With Fedora all I had to
 do is select the arch type and that's all.

 Have a look at dpkg --force-architecture .

I'm having a lot of trouble with this. The server has an application
written in perl, which connects to the master PG server.

The thing is that perl needs libdbd-pg-perl to connect, which needs
libpq5, all this in amd64, but the i386 of postgresql-9.1 needs an
i386 version of libpq5 (which debian doesn't distinguish between
packages of different archs), and I can't install both versions.

What a mess!

-- 
Martín Marqués
select 'martin.marques' || '@' || 'gmail.com'
DBA, Programador, Administrador



Re: [GENERAL] Streaming Replication woes

2011-11-04 Thread Thom Brown
On 4 November 2011 17:19, Sean Patronis spatro...@add123.com wrote:
 On 11/04/2011 10:59 AM, Thom Brown wrote:

 On 4 November 2011 16:50, Sean Patronisspatro...@add123.com  wrote:

 I am running Postgres 9.1

 I have followed the howto here:
 http://wiki.postgresql.org/wiki/Streaming_Replication

 I am attempting to replicate an existing database.

 On the Master, I get the following error in the postgres log file:

 FATAL:  must be replication role to start walsender


 On the slave I get this:
 FATAL:  could not connect to the primary server: FATAL:  must be
 replication
 role to start walsender

 I have googled both of those log entries to no avail.

 note that the sender process on the master is not running.

 What simple step am I missing?

 What have you got primary_conninfo set to on the standby in
 recovery.conf?  Are you trying to use a regular user?  If so, you will
 have to grant it REPLICATION permissions on the primary, which was
 introduced in 9.1.



 The primary_conninfo in the recovery.conf is set to :
 primary_conninfo      = 'host=192.168.127.12 port=5432 user=postgres'

 So I should just have to grant the postgres user REPLICATION permissions,
 and be good?

Well, the postgres user will be a superuser, so it doesn't need to be
granted such a permission.

Have you got the necessary entry in pg_hba.conf as Raghavendra
highlighted?  It will need configuring to accept a connection from the
IP address of the standby server.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] Streaming Replication woes

2011-11-04 Thread Simon Riggs
On Fri, Nov 4, 2011 at 2:56 PM, Raghavendra
raghavendra@enterprisedb.com wrote:

 # The standby server must have superuser access privileges.
 host  replication  postgres  192.168.0.20/22  trust

I strongly recommend you don't use those settings, since they result
in no security at all.

It won't block you from getting replication working, but it won't
block anyone else either.
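
In that spirit, a more locked-down sketch (the role name, password, and
network are placeholders) would use a dedicated non-superuser role
carrying only the REPLICATION attribute (available from 9.1), plus md5
authentication instead of trust:

```sql
-- On the primary: a dedicated replication role, not a superuser.
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'choose-a-password';
```

and in pg_hba.conf, restricted to the standby's address only:

host  replication  replicator  192.168.0.20/32  md5

The standby's primary_conninfo would then use user=replicator and the
matching password.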

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Streaming Replication woes

2011-11-04 Thread Sean Patronis

On 11/04/2011 11:25 AM, Thom Brown wrote:

On 4 November 2011 17:19, Sean Patronisspatro...@add123.com  wrote:

On 11/04/2011 10:59 AM, Thom Brown wrote:

On 4 November 2011 16:50, Sean Patronisspatro...@add123.comwrote:

I am running Postgres 9.1

I have followed the howto here:
http://wiki.postgresql.org/wiki/Streaming_Replication

I am attempting to replicate an existing database.

On the Master, I get the following error in the postgres log file:

FATAL:  must be replication role to start walsender


On the slave I get this:
FATAL:  could not connect to the primary server: FATAL:  must be
replication
role to start walsender

I have googled both of those log entries to no avail.

note that the sender process on the master is not running.

What simple step am I missing?

What have you got primary_conninfo set to on the standby in
recovery.conf?  Are you trying to use a regular user?  If so, you will
have to grant it REPLICATION permissions on the primary, which was
introduced in 9.1.



The primary_conninfo in the recovery.conf is set to :
primary_conninfo  = 'host=192.168.127.12 port=5432 user=postgres'

So I should just have to grant the postgres user REPLICATION permissions,
and be good?

Well the postgres user will be a superuser, so doesn't need to be
granted such a permission.

Have you got the necessary entry in pg_hba.conf as Raghavendra
highlighted?  It will need configuring to accept a connection from the
IP address of the standby server.


I have both these entries on the pg_hba.conf Master server:
hostreplication all 192.168.127.6/32 trust
hostall all 192.168.127.6/32 trust

and still cannot get replication to start.

I can make normal postgresql database connections fine to the master 
database from the slave with these pg_hba.conf settings, so it is surely 
not a firewall issue.





Re: [BULK] Re: [GENERAL] Streaming Replication woes

2011-11-04 Thread Sean Patronis

On 11/04/2011 11:31 AM, Simon Riggs wrote:

On Fri, Nov 4, 2011 at 2:56 PM, Raghavendra
raghavendra@enterprisedb.com  wrote:


# The standby server must have superuser access privileges.
host  replication  postgres  192.168.0.20/22  trust

I strongly recommend you don't use those settings, since they result
in no security at all.

It won't block you from getting replication working, but it won't
block anyone else either.

I agree that trust settings like that offer no security. But at this
point, I just want it to replicate in this test environment. I can lock
it down after it is working.




Re: [GENERAL] 9.1 replication on different arch

2011-11-04 Thread Hannes Erven
2011-11-04 16:24, Martín Marqués:
 Have a look at dpkg --force-architecture .
 
 The thing is that perl needs libdbd-pg-perl to connect, which needs
 libpq5, all this in amd64, but the i386 of postgresql-9.1 needs an
 i386 version of libpq5

Oh, I see, that's a mess. Probably there really isn't a way to solve
this... maybe you could do a parallel install of a complete 32bit Perl?
Or run your Perl app on a different machine?

In my case, I forced a 32bit libpq5 onto the system, which is required
for the postgresql-client to work, which in turn is used by the init
scripts to detect successful starting. But I don't need DBD and thus do
not have other dependencies on libpq...


-hanney



[GENERAL] Replication Across Two Servers?

2011-11-04 Thread Carlos Mennens
We had a 8.4.8 production server of PostgreSQL on a Dell blade server
which ran for 3 years fine. The server housed all our database needs
perfectly but sadly the entire machine died. The drives were dead and
the motherboard was fried but we did have daily full backups of the
entire machine. Today I received our new blade servers which will run
VMware and I get to create two new PostgreSQL servers. I wanted to make
a master database server and a slave in case the master dies. My
question is: does PostgreSQL 8.4 or 9.1 support synchronization between
two physical machines over Ethernet? I've never replicated any kind of
database before so I don't know if that's possible and the more I
search this on my own, the more confused I am. It appears in
PostgreSQL, the word replication has several different meanings.

If you had to stand up two individual Debian Linux servers running a
specific version of PostgreSQL, could / would you be able to have the
master also synchronize all data to a slave server?

Thanks for any info!



Re: [GENERAL] Replication Across Two Servers?

2011-11-04 Thread Brandon Phelps

Carlos,

Streaming replication was introduced in PostgreSQL 9.0 and should do 
what you want.


http://wiki.postgresql.org/wiki/Streaming_Replication
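At a high level, a 9.0-style streaming setup comes down to a handful of settings; a rough sketch follows (hostnames, role name, and addresses are placeholders — see the wiki page above for the full procedure):

```ini
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3

# pg_hba.conf on the master (standby's address, "replication" pseudo-database)
#   host  replication  replicator  192.168.0.20/32  md5

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```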


On 11/04/2011 11:47 AM, Carlos Mennens wrote:

We had a 8.4.8 production server of PostgreSQL on a Dell blade server
which ran for 3 years fine. The server housed all our database needs
perfectly but sadly the entire machine died. The drives were dead and
the motherboard was fried but we did have daily full backups of the
entire machine. Today I received our new blade servers which will run
VMware and I get to create two new PostgreSQL servers. I wanted to make
a master database server and a slave in case the master dies. My
question is: does PostgreSQL 8.4 or 9.1 support synchronization between
two physical machines over Ethernet? I've never replicated any kind of
database before so I don't know if that's possible and the more I
search this on my own, the more confused I am. It appears in
PostgreSQL, the word replication has several different meanings.

If you had to stand up two individual Debian Linux servers running a
specific version of PostgreSQL, could / would you be able to have the
master also synchronize all data to a slave server?

Thanks for any info!





Re: [GENERAL] Replication Across Two Servers?

2011-11-04 Thread Carlos Mennens
On Fri, Nov 4, 2011 at 11:52 AM, Brandon Phelps bphe...@gls.com wrote:
 Carlos,

 Streaming replication was introduced in PostgreSQL 9.0 and should do what
 you want.

 http://wiki.postgresql.org/wiki/Streaming_Replication

Oh great! I didn't see that in the 8.4 manual since that is what
Debian 6 has as the most stable version in its package manager.
Anyone know of a stable Linux distribution that offers 9.0+? I know
Debian Wheezy (testing) has 9.1 but sadly it's testing and not
recommended for production utilization. RHEL is years behind as far as
packages go which makes them stable to an annoying degree.



Re: [GENERAL] Replication Across Two Servers?

2011-11-04 Thread Carlos Mennens
On Fri, Nov 4, 2011 at 11:56 AM, Prashant Bharucha 
prashantbharu...@yahoo.ca wrote:

 Hi Carlos

 Use Slony master to multiple slaves replication system for PostgreSQL
 http://www.postgresql.org/ supporting cascading (*e.g.* - a node can
 feed another node which feeds another node...) and failover.
 http://slony.info/


I'm not sure I see the point of using a third party application to do
something PostgreSQL can do natively. Am I missing something here?


Re: [GENERAL] Foreign Keys and Deadlocks

2011-11-04 Thread David Kerr
On Thu, Nov 03, 2011 at 03:30:20PM -0700, David Kerr wrote:
- Howdy,
- 
- We have a process that's deadlocking frequently. It's basically multiple 
threads inserting data into a single table.
- 
- That table has FK constraints to 3 other tables. 
- 
- I understand how an FK check will cause a sharelock to be acquired on the 
reference table and in some instances that
- leads to or at least participates in a deadlock.
- 
- I don't think that's the case here, (or at least not the entire case) but I 
could use some assistance in helping 
- to convince my developers of that ;). They'd like to just remove the FK and 
be done with it.

[snip]

So it appears that I'm the big dummy, and that you can deadlock with just 
inserts. 

I did more digging and found some good discussions on the subject in general, 
but most of the examples out there contain explicit updates (which is why I 
was confused). It looks like it's being addressed, though. 


http://justatheory.com/computers/databases/postgresql/fk-locks-project.html
http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg158205.html
http://www.commandprompt.com/blogs/alvaro_herrera/2010/11/fixing_foreign_key_deadlocks/

Attached is the script to reproduce it with only inserts (for posterity's sake):

drop table a;
drop table b;
drop table c;
drop table d;

create table b ( bref int, description text);
alter table b add primary key (bref);

create table c ( cref int, description text);
alter table c add primary key (cref);

create table d ( dref int, description text);
alter table d add primary key (dref);

create table a ( bref int, cref int, dref int, description text);
alter table a add primary key (bref, cref);
alter table a add foreign key (bref) REFERENCES b(bref);
alter table a add foreign key (cref) REFERENCES c(cref);
alter table a add foreign key (dref) REFERENCES d(dref);


insert into b values (1,'hello');
insert into b values (2,'hello2');
insert into b values (3,'hello3');
insert into b values (4,'hello4');


insert into c values (1,'hello');
insert into c values (2,'hello2');
insert into c values (3,'hello3');
insert into c values (4,'hello4');

insert into d values (1,'hello');
insert into d values (2,'hello2');
insert into d values (3,'hello3');
insert into d values (4,'hello4');


Fire up 2 psqls
#SESSION1
## STEP1
begin;
insert into a values (1,1,1,'hello');
##STEP3
insert into a values (1,2,1,'hello2');


#SESSION2
## STEP2
begin;
insert into a values (1,2,1,'hello2');
## STEP4
insert into a values (1,1,1,'hello');


You'll get:
ERROR:  deadlock detected
DETAIL:  Process 8382 waits for ShareLock on transaction 7222455; blocked by 
process 6981.
Process 6981 waits for ShareLock on transaction 7222456; blocked by process 
8382.
HINT:  See server log for query details
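One common way to avoid this particular deadlock (a sketch, not the only fix) is to make every transaction insert its rows in a consistent key order, so the FK share locks on the parent tables are always acquired in the same sequence and a circular wait can't form:

```sql
-- Using the tables from the script above: if BOTH sessions order their
-- inserts by (bref, cref), one session simply blocks until the other
-- commits, instead of each waiting on a lock the other already holds.
BEGIN;
INSERT INTO a VALUES (1,1,1,'hello');
INSERT INTO a VALUES (1,2,1,'hello2');
COMMIT;
```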



Re: [GENERAL] Replication Across Two Servers?

2011-11-04 Thread Scott Marlowe
On Fri, Nov 4, 2011 at 9:59 AM, Carlos Mennens carlos.menn...@gmail.com wrote:

 On Fri, Nov 4, 2011 at 11:56 AM, Prashant Bharucha 
 prashantbharu...@yahoo.ca wrote:

 Hi Carlos

 Use Slony master to multiple slaves replication system for PostgreSQL 
 supporting cascading (e.g. - a node can feed another node which feeds 
 another node...) and failover.
 http://slony.info/


 I'm not sure I see the point of using a third party application to do 
 something PostgreSQL can do natively. Am I missing something here?

Whether it's third party is immaterial, really; the real issue is what
your requirements are and which method best meets them.  For certain
more complex replication setups, Slony is the better method.  For
instance, you can create interesting indexes on a Slony slave that are
independent of the master, or create views, materialized or otherwise,
on a reporting server, and so on.  While streaming replication is
easier to set up and maintain, and generally a bit more efficient, it
also has a more limited scope of operation.

Also, if you want to run 8.4 for now, which you've tested against, and
move from 8.4 to 9.1 or 9.2 at a later date, Slony is built to do just
that; replicating from one major version to another is one of the
things it's really good at.

The real answer then is that it comes down to which meets your
requirements the best.  Both are well tested and supported.



Re: [GENERAL] 9.1 replication on different arch

2011-11-04 Thread Scott Marlowe
2011/11/4 Hannes Erven h...@gmx.at:
 2011-11-04 16:24, Martín Marqués:
 Have a look at dpkg --force-architecture .

 The thing is that perl needs libdbd-pg-perl to connect, which needs
 libpq5, all this in amd64, but the i386 of postgresql-9.1 needs an
 i386 version of libpq5

 Oh, I see, that's a mess. Probably there really isn't a way to solve
 this... maybe you could do a parallel install of a complete 32bit Perl?
 Or run your Perl app on a different machine?

 In my case, I forced a 32bit libpq5 onto the system, which is required
 for the postgresql-client to work, which in turn is used by the init
 scripts to detect successful starting. But I don't need DBD and thus do
 not have other dependencies on libpq...


I'd install postgresql in a 32 bit VM then.



Re: [GENERAL] Replication Across Two Servers?

2011-11-04 Thread Brandon Phelps

Carlos,

I would recommend you simply stick with Debian 6 and add the Debian 
backports repository.  Add the following to your /etc/apt/sources.list:


deb http://backports.debian.org/debian-backports squeeze-backports main

After adding that just do an 'apt-get update' and you will be able to 
install the postgresql-9.1 package.
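Since backports are pinned so they don't install automatically, the backports release usually has to be named explicitly with -t when installing (a sketch; the package name is as available in squeeze-backports):

```ini
# /etc/apt/sources.list addition
#   deb http://backports.debian.org/debian-backports squeeze-backports main
#
# then:
#   apt-get update
#   apt-get -t squeeze-backports install postgresql-9.1
```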


If you would prefer to use Aptitude to install your packages, then do a 
search for ^postgres, then when you find the postgresql-8.4 package just 
hit the V key to view other available versions, 9.1 should be one of the 
additional versions available.


-Brandon

On 11/04/2011 11:58 AM, Carlos Mennens wrote:

On Fri, Nov 4, 2011 at 11:52 AM, Brandon Phelpsbphe...@gls.com  wrote:

Carlos,

Streaming replication was introduced in PostgreSQL 9.0 and should do what
you want.

http://wiki.postgresql.org/wiki/Streaming_Replication


Oh great! I didn't see that in the 8.4 manual since that is what
Debian 6 has as the most stable version in its package manager.
Anyone know of a stable Linux distribution that offers 9.0+? I know
Debian Wheezy (testing) has 9.1 but sadly it's testing and not
recommended for production utilization. RHEL is years behind as far as
packages go which makes them stable to an annoying degree.





[GENERAL] Dump Error Message

2011-11-04 Thread Bob Pawley
Hi

I am attempting to dump a database using PostgreDAC.

I am getting the following error message which I don’t understand. 

Can someone shed some light on this?

“Error message from server: ERROR:  column tgisconstraint does not exist
LINE 1: ...c AS tgfname, tgtype, tgnargs, tgargs, tgenabled, tgisconstr...
 ^
FileHandler: The command was: SELECT tgname, tgfoid::pg_catalog.regproc AS 
tgfname, tgtype, tgnargs, tgargs, tgenabled, tgisconstraint, tgconstrname, 
tgdeferrable, tgconstrrelid, tginitdeferred, tableoid, oid, 
tgconstrrelid::pg_catalog.regclass AS tgconstrrelname FROM 
pg_catalog.pg_trigger t WHERE tgrelid = '19069'::pg_catalog.oid AND 
tgconstraint = 0”

When I execute this command in the SQL pane of pgAdmin and remove “AND 
tgconstraint = 0”, the select shows a row which is unfamiliar to me.

Bob

Re: [GENERAL] Dump Error Message

2011-11-04 Thread John R Pierce

On 11/04/11 10:22 AM, Bob Pawley wrote:

I am attempting to dump a database using PostgreDAC.


this postgresDAC?
http://www.microolap.com/products/connectivity/postgresdac/

thats a commercial product, you probably should contact them for support.


--
john r pierceN 37, W 122
santa cruz ca mid-left coast




[GENERAL] psql is too slow to connect

2011-11-04 Thread Ing.Edmundo.Robles.Lopez
Hi, I have a problem with psql: it is very slow to connect. I checked the 
status of my network, and both the server and the client respond OK.


Any other ideas?
Perhaps I should review connections, shared memory, or whether a big 
table is being accessed?




[GENERAL] Explicitly adding a table to a schema.

2011-11-04 Thread thatsanicehatyouhave
Hi,

I have a database where I wasn't explicitly using schemas when I started it 
(i.e. everything was simply under public). I've since created several schemas 
and renamed the public schema to something else. When I look at the 
definitions (in PGAdmin III), the CREATE statement for the old tables look like 
this:

CREATE TABLE foo ( ...

whereas for my newer tables the full schema path is there:

CREATE TABLE myschema.bar ( ...

Is there a way that the explicit schema can be added to my older tables? The 
reason I want to do this is that I'm having a problem with external code 
(SQLAlchemy) when trying to get foreign key information through reflection-- 
the table doesn't seem to be found.

Does this even make sense since the tables are definitely in the new schema 
anyway?

Cheers,
Demitri



Re: [GENERAL] Dump Error Message

2011-11-04 Thread Tom Lane
Bob Pawley rjpaw...@shaw.ca writes:
 I am attempting to dump a database using PostgreDAC.

 I am getting the following error message which I don’t understand. 
 Can someone shed some light on this?

 Error message from server: ERROR:  column tgisconstraint does not exist

The pg_trigger.tgisconstraint column was removed in 9.0.  You evidently
need a newer version of PostgreDAC that has heard of 9.0.
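For anyone adapting such a query by hand, a rough 9.0+ equivalent of the old check looks like this ('mytable' is a placeholder; tgconstraint links a trigger to its pg_constraint row, 0 meaning "not a constraint trigger"):

```sql
-- tgisconstraint and tgconstrname were dropped in 9.0;
-- tgisinternal was added in 9.0 to hide FK implementation triggers.
SELECT tgname,
       tgfoid::pg_catalog.regproc AS tgfname,
       tgtype, tgnargs, tgargs, tgenabled,
       (tgconstraint <> 0) AS is_constraint_trigger
FROM pg_catalog.pg_trigger
WHERE tgrelid = 'mytable'::pg_catalog.regclass
  AND NOT tgisinternal;
```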

regards, tom lane



Re: [GENERAL] Explicitly adding a table to a schema.

2011-11-04 Thread Adam Cornett
On Fri, Nov 4, 2011 at 1:57 PM, thatsanicehatyouh...@mac.com wrote:

 Hi,

 I have a database where I wasn't explicitly using schemas when I started
 it (i.e. everything was simply under public). I've since created several
 schemas and renamed the public schema to something else. When I look at
 the definitions (in PGAdmin III), the CREATE statement for the old tables
 look like this:

 CREATE TABLE foo ( ...

 whereas for my newer tables the full schema path is there:

 CREATE TABLE myschema.bar ( ...

 Is there a way that the explicit schema can be added to my older tables?
 The reason I want to do this is that I'm having a problem with external
 code (SQLAlchemy) when trying to get foreign key information through
 reflection-- the table doesn't seem to be found.

 Does this even make sense since the tables are definitely in the new
 schema anyway?

 Cheers,
 Demitri




You can use ALTER TABLE (
http://www.postgresql.org/docs/current/static/sql-altertable.html) to set
the schema of existing tables:
ALTER TABLE foo SET SCHEMA bar

-Adam


Re: [GENERAL] psql is too slow to connect

2011-11-04 Thread Tom Lane
Ing.Edmundo.Robles.Lopez erob...@sensacd.com.mx writes:
 Hi, I have a problem with psql, is very slow to connect. I Checked the 
 status of my network and the server and the client respond ok.

First thing that comes to mind is DNS lookup problems.  It's hard to
speculate more than that on such little information.
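One quick way to check the DNS theory is to time the name lookup by itself, outside PostgreSQL entirely ("localhost" below just stands in for whatever host name appears in the connection string):

```shell
# Time only the name resolution step; if this takes seconds rather than
# milliseconds, the resolver (not the PostgreSQL server) is the bottleneck.
time getent hosts localhost
```

On glibc systems, getent goes through the same NSS resolution path that libpq's host lookup does, so a stall here would show up in psql connects too.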

regards, tom lane



Re: [GENERAL] Excessive planner time for some queries with high statistics

2011-11-04 Thread Tom Lane
Stuart Bishop stu...@stuartbishop.net writes:
 We also found this problem did not occur on one of our staging
 systems, which had a default statistics target of 100. Lowering the
 statistics on the relevant columns from 1000 to 100 and reanalyzing
 made the overhead unnoticeable.

eqjoinsel() is O(N^2) in the number of entries in the MCV lists.
I wouldn't expect this to be an issue unless comparison is pretty
expensive, but maybe those are string not integer columns?
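The workaround described above can be applied per column rather than instance-wide; a sketch (table and column names are placeholders):

```sql
-- Check current per-column targets; -1 means "use default_statistics_target".
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'mytable'::regclass
  AND attnum > 0 AND NOT attisdropped;

-- Shrink the MCV lists on just the expensive join columns, then re-analyze.
ALTER TABLE mytable ALTER COLUMN join_col SET STATISTICS 100;
ANALYZE mytable;
```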

 Thoughts on IRC was this might be a regression in 8.4.9, but I haven't
 got earlier versions to test with at the moment.

eqjoinsel has worked like that for many years.  Are you claiming you
didn't see this behavior in a prior release?  If so, which one?

regards, tom lane



Re: [GENERAL] 9.1 replication on different arch

2011-11-04 Thread Martín Marqués
El día 4 de noviembre de 2011 13:15, Scott Marlowe
scott.marl...@gmail.com escribió:

 I'd install postgresql in a 32 bit VM then.


We're looking into it. Looks like the only option available for now,
at least for using WAL replication.

-- 
Martín Marqués
select 'martin.marques' || '@' || 'gmail.com'
DBA, Programador, Administrador



Re: [GENERAL] Explicitly adding a table to a schema.

2011-11-04 Thread Demitri Muna
Hi,

On Nov 4, 2011, at 2:09 PM, Adam Cornett wrote:

 You can use ALTER TABLE 
 (http://www.postgresql.org/docs/current/static/sql-altertable.html) to set 
 the schema of existing tables:
 ALTER TABLE foo SET SCHEMA bar

Thanks. I did try that, but that command moves the table to a different schema, 
which is not what I'm trying to do. It struck me to try to move it to another 
schema (where the definition then explicitly included the schema prefix) and 
then move it back, but it still doesn't have the schema prefix.

Cheers,
Demitri



Re: [GENERAL] Explicitly adding a table to a schema.

2011-11-04 Thread Adam Cornett
On Fri, Nov 4, 2011 at 2:32 PM, Demitri Muna
thatsanicehatyouh...@mac.comwrote:

 Hi,

 On Nov 4, 2011, at 2:09 PM, Adam Cornett wrote:

  You can use ALTER TABLE (
 http://www.postgresql.org/docs/current/static/sql-altertable.html) to set
 the schema of existing tables:
  ALTER TABLE foo SET SCHEMA bar

 Thanks. I did try that, but that command moves the table to a different
 schema, which is not what I'm trying to do. It struck me to try to move it
 to another schema (where the definition then explicitly included the schema
 prefix) and then move it back, but it still doesn't have the schema prefix.

 Cheers,
 Demitri


What you might be looking for then is the search_path:
http://www.postgresql.org/docs/current/static/ddl-schemas.html#DDL-SCHEMAS-PATH
When you specify an unqualified table, Postgres uses the search path to
look for it, and when creating tables, unqualified names go into the
first schema in the search path, which is 'public' by default.
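A quick sketch of inspecting and changing the search path (schema, database, and role names are placeholders):

```sql
SHOW search_path;                       -- typically: "$user",public

-- Current session only:
SET search_path TO myschema, public;

-- Persist for new sessions:
ALTER DATABASE mydb SET search_path TO myschema, public;
ALTER ROLE someuser SET search_path TO myschema, public;
```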

I'm not sure you can get pgAdmin to add 'public' to the CREATE TABLE
statements; pg_dump might, though.

-Adam


Re: [GENERAL] Explicitly adding a table to a schema.

2011-11-04 Thread Pavel Stehule
2011/11/4 Demitri Muna thatsanicehatyouh...@mac.com:
 Hi,

 On Nov 4, 2011, at 2:09 PM, Adam Cornett wrote:

 You can use ALTER TABLE 
 (http://www.postgresql.org/docs/current/static/sql-altertable.html) to set 
 the schema of existing tables:
 ALTER TABLE foo SET SCHEMA bar

 Thanks. I did try that, but that command moves the table to a different 
 schema, which is not what I'm trying to do. It struck me to try to move it to 
 another schema (where the definition then explicitly included the schema 
 prefix) and then move it back, but it still doesn't have the schema prefix.

What do you want?

Any table lives in exactly one schema. You can do something like a
symlink via a view, and you can set a search_path - a list of schemas
that are accessible by default.

Regards

Pavel


 Cheers,
 Demitri





Re: [GENERAL] Explicitly adding a table to a schema.

2011-11-04 Thread Guillaume Lelarge
On Fri, 2011-11-04 at 14:32 -0400, Demitri Muna wrote:
 Hi,
 
 On Nov 4, 2011, at 2:09 PM, Adam Cornett wrote:
 
  You can use ALTER TABLE 
  (http://www.postgresql.org/docs/current/static/sql-altertable.html) to set 
  the schema of existing tables:
  ALTER TABLE foo SET SCHEMA bar
 
 Thanks. I did try that, but that command moves the table to a different 
 schema, which is not what I'm trying to do. It struck me to try to move it to 
 another schema (where the definition then explicitly included the schema 
 prefix) and then move it back, but it still doesn't have the schema prefix.
 

pgAdmin doesn't add the schema name if the object is visible within your
search_path. So, some objects will have their name prefixed with the
schema name, and others won't.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] psql is too slow to connect

2011-11-04 Thread Rob Sargent


On 11/04/2011 12:08 PM, Tom Lane wrote:
 Ing.Edmundo.Robles.Lopez erob...@sensacd.com.mx writes:
 Hi, I have a problem with psql, is very slow to connect. I Checked the 
 status of my network and the server and the client respond ok.

 First thing that comes to mind is DNS lookup problems.  It's hard to
 speculate more than that on such little information.

   regards, tom lane

Others have found their .psqlrc file had expensive look-ups




Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread hubert depesz lubaczewski
On Thu, Nov 03, 2011 at 11:03:45PM +0100, hubert depesz lubaczewski wrote:
 looking for some other info. will post as soon as i'll gather it, but
 that will be in utc morning :(


I looked closer at the rows that got -1 xobject_id.
$ select magic_id, count(*) from qqq where xobject_id = -1 group by 1 order by 1;

 magic_id | count
----------+-------
 30343295 | 2
 30408831 | 3
 30539903 | 1
 30605439 | 2
 30670975 | 3
 30802047 | 1
 30867583 | 1
 30933119 | 1
 31195263 | 2
 31260799 | 1
 31326335 | 1
 31588479 | 3
 31654015 | 1
 31719551 | 1
 31785087 | 3
 31850623 | 4
 31981695 | 2
 32047231 | 2
 32112767 | 1
 32309375 | 1
 32374911 | 1
 32440447 | 1
 32505983 | 2
(23 rows)

So, I checked original counts for these magic_id:

$ select magic_id, count(*) from sss.xobjects where magic_id in 
(30343295,30408831,30539903,30605439,30670975,30802047,30867583,30933119,31195263,31260799,31326335,31588479,31654015,31719551,31785087,31850623,31981695,32047231,32112767,32309375,32374911,32440447,32505983)
 group by 1 order by 1;
 magic_id | count 
----------+-------
 30343295 | 1
 30408831 | 1
 30539903 | 1
 30605439 | 1
 30670975 | 1
 30802047 | 1
 30867583 | 1
 30933119 | 1
 31195263 | 1
 31260799 | 1
 31326335 | 1
 31588479 | 1
 31654015 | 1
 31719551 | 1
 31785087 | 1
 31850623 | 1
 31981695 | 1
 32047231 | 1
 32112767 | 1
 32309375 | 1
 32374911 | 1
 32440447 | 1
 32505983 | 1
(23 rows)

$ select min(magic_id), max(magic_id), count(distinct magic_id), sum( case when 
magic_id between 30343295 and 32505983 then 1 else 0 end ) as count_in_range 
from sss.xobjects;
 min  |   max    |  count   | count_in_range
------+----------+----------+----------------
 1000 | 37830834 | 32030523 |        2079327
(1 row)

So, the ids are not clustered.

At least not the magic_id. But since these are more or less chronological, it 
means that these rows were added to the db quite some time apart:

 magic_id | creation_tsz
----------+------------------------
 30343295 | 2011-05-28 00:57:36+00
 30408831 | 2011-05-30 01:51:09+00
 30539903 | 2011-06-02 04:06:20+00
 30605439 | 2011-06-03 18:23:06+00
 30670975 | 2011-06-05 16:49:49+00
 30802047 | 2011-06-08 16:46:22+00
 30867583 | 2011-06-10 01:39:41+00
 30933119 | 2011-06-11 19:48:07+00
 31195263 | 2011-06-18 00:33:24+00
 31260799 | 2011-06-20 01:49:46+00
 31326335 | 2011-06-21 17:53:41+00
 31588479 | 2011-06-28 07:07:19+00
 31654015 | 2011-06-29 20:30:52+00
 31719551 | 2011-07-01 09:50:54+00
 31785087 | 2011-07-03 03:42:02+00
 31850623 | 2011-07-05 05:02:27+00
 31981695 | 2011-07-08 04:49:21+00
 32047231 | 2011-07-09 21:59:25+00
 32112767 | 2011-07-11 16:53:10+00
 32309375 | 2011-07-15 21:52:31+00
 32374911 | 2011-07-17 19:20:34+00
 32440447 | 2011-07-19 03:13:21+00
 32505983 | 2011-07-20 16:15:38+00

So, let's see how the bad rows (-1) look, in comparison to good ones, in the 
copy of the table:

$ select xobject_id, magic_id, creation_tsz from qqq where magic_id in 
(30343295,30408831,30539903,30605439,30670975,30802047,30867583,30933119,31195263,31260799,31326335,31588479,31654015,31719551,31785087,31850623,31981695,32047231,32112767,32309375,32374911,32440447,32505983)
 order by 2, 1;
 xobject_id | magic_id |      creation_tsz
------------+----------+------------------------
 -1 | 30343295 | 2011-05-28 17:24:58+00
 -1 | 30343295 | 2011-05-28 04:57:23+00
   33695980 | 30343295 | 2011-05-28 00:57:36+00
 -1 | 30408831 | 2011-05-31 04:25:44+00
 -1 | 30408831 | 2011-05-31 01:03:03+00
 -1 | 30408831 | 2011-05-30 14:52:29+00
   33761515 | 30408831 | 2011-05-30 01:51:09+00
 -1 | 30539903 | 2011-06-02 05:05:08+00
   33892588 | 30539903 | 2011-06-02 04:06:20+00
 -1 | 30605439 | 2011-06-05 05:13:01+00
 -1 | 30605439 | 2011-06-04 03:22:08+00
   33958124 | 30605439 | 2011-06-03 18:23:06+00
 -1 | 30670975 | 2011-06-05 17:54:06+00
 -1 | 30670975 | 2011-06-06 13:59:01+00
 -1 | 30670975 | 2011-06-06 14:44:22+00
   34023662 | 30670975 | 2011-06-05 

Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread Adrian Klaver

On 11/04/2011 01:17 PM, hubert depesz lubaczewski wrote:

On Thu, Nov 03, 2011 at 11:03:45PM +0100, hubert depesz lubaczewski wrote:

looking for some other info. will post as soon as i'll gather it, but
that will be in utc morning :(



I looked closer at the rows that got -1 xobject_id.




Does it tell you anything?


You are very thorough.
I don't know enough about Postgres internals to be much help there. All 
I can point out is the problem seemed to appear over roughly a two month 
period 5/28/11-7/20/11.

Any new code or procedures rolled out then?
Any bug reports?

Given the turnover on this table it seems important everything is 
'normal' since 7/20/11.




Best regards,

depesz




--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread hubert depesz lubaczewski
On Fri, Nov 04, 2011 at 01:43:55PM -0700, Adrian Klaver wrote:
 Does it tell you anything?
 You are very thorough.

I hate mysteries. Especially the ones that break stuff.

 I don't know enough about Postgres internals to be much help there.
 All I can point out is the problem seemed to appear over roughly a
 two month period 5/28/11-7/20/11.
 Any new code or procedures rolled out then?
 Any bug reports?

Not as far as I know, but I'll check with the guys. Still - app changes
shouldn't cause something like this in the db, should they?

 Given the turnover on this table it seems important everything is
 'normal' since 7/20/11.

yes, but we can't do pg_reorg of the table (it has quite a lot of bloat)

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread Adrian Klaver

On 11/04/2011 01:47 PM, hubert depesz lubaczewski wrote:

> On Fri, Nov 04, 2011 at 01:43:55PM -0700, Adrian Klaver wrote:
>>> Does it tell you anything?
>>
>> You are very thorough.
>
> I hate mysteries. Especially the ones that break stuff.

Know the feeling.

>> I don't know enough about Postgres internals to be much help there.
>> All I can point out is the problem seemed to appear over roughly a
>> two month period 5/28/11-7/20/11.
>> Any new code or procedures rolled out then?
>> Any bug reports?
>
> not as far as I know, but I'll check with the guys. Still - app changes
> shouldn't cause something like this in the db?

Then it wouldn't be a bug :) Seriously, what should happen and what does
happen is very often different.

>> Given the turnover on this table it seems important everything is
>> 'normal' since 7/20/11.
>
> yes, but we can't do pg_reorg of the table (it has quite a lot of bloat)

Probably should have been clearer here. I would look at what information
is available for changes on 7/20-21/11.

> Best regards,
>
> depesz




--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread Tom Lane
hubert depesz lubaczewski dep...@depesz.com writes:
> OK. So based on it all, it looks like for some rows, first two columns got
> mangled.

Good detective work.  So now we at least have a believable theory about
*what* is happening (something is stomping the first 8 data bytes of
these particular rows), if not *why*.

You said that pg_dump does not show the corruption.  That could be
because the data is coming out through the COPY code path instead of
the SELECT code path.  Could you try a pg_dump with --inserts (which
will fetch the data with SELECTs) and see if it shows corrupt data?

Another thing we probably should ask at this point is whether you have
any nonstandard software loaded into your server, like auto_explain
or pg_stat_statements or some such.

regards, tom lane



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread hubert depesz lubaczewski
On Fri, Nov 04, 2011 at 05:49:44PM -0400, Tom Lane wrote:
> You said that pg_dump does not show the corruption.  That could be
> because the data is coming out through the COPY code path instead of
> the SELECT code path.  Could you try a pg_dump with --inserts (which
> will fetch the data with SELECTs) and see if it shows corrupt data?

Sure. Testing.

> Another thing we probably should ask at this point is whether you have
> any nonstandard software loaded into your server, like auto_explain
> or pg_stat_statements or some such.

No. Nothing like this. Just base Pg.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



[GENERAL] inconsistent interval normalization

2011-11-04 Thread mark
hi all,

pgsql version: 9.0.5
intervalstyle: postgres



I am stumped as to why I am seeing inconsistent interval normalization
with a given query.


select date_trunc('week', datetime_submitted), avg(datetime_modified -
datetime_submitted)
FROM interval_test
group by 1 order by 1;


Returned rows that don't make sense to me look like:

2011-10-03 00:00:00-06 | 26:27:26.471216

I would expect these to be normalized into something like...
2011-10-03 00:00:00-06 | 1 day 02:27:26.471216


I do have other rows in the result set that are normalizing to N days



Thoughts? Any ideas on how I can make the normalization consistent?



  (I can upload some dummy data and a dummy ddl if needed)

Mark
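The behaviour Mark reports is consistent with how PostgreSQL stores intervals: subtracting timestamps yields intervals whose days and hours are kept as separate fields, and avg() does not fold excess hours back into days. Wrapping the aggregate in justify_hours() (a standard PostgreSQL function) should make the output consistent; this sketch simply reuses the poster's query:

```sql
-- justify_hours() converts each block of 24 hours into 1 day, so a raw
-- average of 26:27:26.471216 is reported as 1 day 02:27:26.471216.
SELECT date_trunc('week', datetime_submitted),
       justify_hours(avg(datetime_modified - datetime_submitted))
FROM interval_test
GROUP BY 1
ORDER BY 1;
```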



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread Tom Lane
I wrote:
> Good detective work.  So now we at least have a believable theory about
> *what* is happening (something is stomping the first 8 data bytes of
> these particular rows), if not *why*.

Scratch that: something is stomping the first *six* bytes of data.
On a hunch I converted the original and bad xobject_id and magic_id
values to hex, and look at the pattern that pops out:

   badxi   |  badmi  | origxi  | origmi  
-----------+---------+---------+---------
  ffffffff | 1e2007f | 215a0f2 | 1e27862
  ffffffff | 1e2007f | 215da81 | 1e2b1f1
  ffffffff | 1e2007f | 215330e | 1e20a86
  ffffffff | 1e5007f | 2184b11 | 1e52281
  ffffffff | 1e5007f | 218597f | 1e530ef
  ffffffff | 1e5007f | 2184e4f | 1e525bf
  ffffffff | 1e6007f | 21939f6 | 1e61166
  ffffffff | 1e6007f | 21a1054 | 1e6e7c4
  ffffffff | 1e6007f | 219d7de | 1e6af4e
  ffffffff | 1e6007f | 219d9f6 | 1e6b166
  ffffffff | 1e8007f | 21b3861 | 1e80fd1
  ffffffff | 1e8007f | 21b361d | 1e80d8d
  ffffffff | 1f0007f | 223bde0 | 1f09528
  ffffffff | 1f0007f | 223a81a | 1f07f62

I'm assuming this is little-endian hardware, so the low-order half of
magic_id is adjacent to xobject_id.  We can see that in each case the
first six bytes are being overwritten with ff ff ff ff 7f 00, while the
high-order half of magic_id remains unchanged.
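As a cross-check of that pattern, the badmi values can be reproduced arithmetically from origmi: keep the high 16 bits and replace the low 16 bits with 0x007f. This sketch (runnable on PostgreSQL, using three sample values from the table above) is not part of the thread, just an illustration of the bit arithmetic:

```sql
-- Mask out the low 16 bits of each origmi and OR in 0x007f; the results
-- match the observed badmi values: 1e2007f, 1e5007f, 1f0007f.
SELECT to_hex(origmi) AS origmi,
       to_hex((origmi & ~x'0000ffff'::int) | x'007f'::int) AS badmi
FROM (VALUES (x'01e27862'::int),
             (x'01e52281'::int),
             (x'01f09528'::int)) AS t(origmi);
```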

Not sure what it means yet, but this seems like confirmation of the
idea that something's stomping on the data while it passes through
CREATE TABLE AS.

BTW, did you try the separate INSERT/SELECT yet?  Does that show
corruption?

regards, tom lane



[GENERAL] PostgreSQL table history tracking

2011-11-04 Thread Ivan Mincik
Dear PostgreSQL users,

I have created a set of functions which adds the ability to store the
full editing history of your database tables, recover their state at any
point in time, visualize diffs, and place tags to mark a particular
table state.
I would be very happy if somebody would give it a try and/or review the
code and post some feedback.

More information:
https://github.com/imincik/pghistory-tracker/blob/master/README
Source code: https://github.com/imincik/pghistory-tracker


Thanks,
Ivan



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread hubert depesz lubaczewski
On Fri, Nov 04, 2011 at 06:18:55PM -0400, Tom Lane wrote:
> BTW, did you try the separate INSERT/SELECT yet?  Does that show
> corruption?

pg_dump --inserts is still running.

I did CREATE TABLE (LIKE ...) and INSERT INTO ... SELECT, and it also
shows the problem, as I showed (with other data) in email:
2003200312.ga4...@depesz.com
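Spelled out, the reproduction depesz describes looks roughly like this (a sketch only: the copy table's name is an invention for illustration; xobjects is the table from the thread):

```sql
-- Copy the table row-by-row through the SELECT code path, mirroring the
-- CREATE TABLE AS test; per the thread, the copy shows the same corruption.
CREATE TABLE xobjects_copy (LIKE xobjects INCLUDING ALL);
INSERT INTO xobjects_copy SELECT * FROM xobjects;
```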

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread hubert depesz lubaczewski
On Fri, Nov 04, 2011 at 05:49:44PM -0400, Tom Lane wrote:
> You said that pg_dump does not show the corruption.  That could be
> because the data is coming out through the COPY code path instead of
> the SELECT code path.  Could you try a pg_dump with --inserts (which
> will fetch the data with SELECTs) and see if it shows corrupt data?

I'm running the pg_dump (it will take some time, so don't hold your
breath), but at the same time I can SELECT these rows correctly with a
normal SELECT from the table (the xobjects table). Doesn't that disprove
this theory?

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-04 Thread Tom Lane
hubert depesz lubaczewski dep...@depesz.com writes:
> On Fri, Nov 04, 2011 at 05:49:44PM -0400, Tom Lane wrote:
>> You said that pg_dump does not show the corruption.  That could be
>> because the data is coming out through the COPY code path instead of
>> the SELECT code path.  Could you try a pg_dump with --inserts (which
>> will fetch the data with SELECTs) and see if it shows corrupt data?
>
> I'm running the pg_dump (it will take some time, so don't hold your
> breath), but at the same time I can SELECT these rows correctly with a
> normal SELECT from the table (the xobjects table). Doesn't that
> disprove this theory?

Well, we don't know what the triggering condition is for the corruption,
so it's premature to draw conclusions on what will or won't cause it.
I was wondering if selecting the entire table was necessary.

A different line of thought is that there's something about these
specific source rows, and only these rows, that makes them vulnerable to
corruption during INSERT/SELECT.  Do they by any chance contain any
values that are unusual elsewhere in your table?  One thing I'm
wondering about right now is the nulls bitmap --- so do these rows have
nulls (or not-nulls) in any place that's unusual elsewhere?
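Tom's question about the nulls bitmap could be checked with a frequency count of NULL patterns. This is only a sketch; col_a and col_b are placeholder names (not columns from the thread) standing in for the table's actual nullable columns:

```sql
-- Count how often each NULL/NOT NULL combination occurs; if the corrupted
-- rows share a pattern that is rare table-wide, that supports the
-- nulls-bitmap theory. col_a and col_b are hypothetical column names.
SELECT (col_a IS NULL) AS col_a_is_null,
       (col_b IS NULL) AS col_b_is_null,
       count(*)
FROM xobjects
GROUP BY 1, 2
ORDER BY count(*) DESC;
```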

regards, tom lane
