[GENERAL] Create View from command line

2010-05-10 Thread OisinJK
Hi

 

I'm trying to build a Windows Form application that needs to alter the
definition of a View, depending on user input/selection. Essentially,
the user is selecting a property of known coordinates ('x','y') and
setting a 'buffer' distance. I then want to retrieve the records which
represent the map features within the specified distance of the
property. The WHERE clause of my view is:

 

st_dwithin(mytable.geom, st_setsrid(st_point(x, y), 27700), buffer);

 

I'm thinking this could be achieved either by making x, y and buffer
parameters or by simply ALTERing the View statement with literal values.
Either way, I need to do this from a command line attached to a Windows
Form event, e.g. button_click.

 

I cannot work out how to do this. Can anyone point me in the right
direction? I've looked at psql.exe, but can't work out if this holds the
answer.

 

Thanks

 

Oisin

 



Re: [GENERAL] Finding rows with text columns beginning with other text columns

2010-05-10 Thread Alban Hertroys
On 10 May 2010, at 24:01, Christoph Zwerschke wrote:

 We want to find all entries in b where txt begins with an
 existing txt entry in a:
 
 select * from b join a on b.txt like a.txt||'%'
 
 On the first glance you would expect that this is performant
 since it can use the index, but sadly it doesn't work.
 The problem seems to be that Postgres can not guarantee that
 column a.txt does not contain a '%', so it cannot optimize.
 
 I feel there should be a performant way to query these entries,
 but I can't come up with anything. Can anybody help me?


Have you tried using substring instead of like?

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query that produces index information for a Table

2010-05-10 Thread Alban Hertroys
On 10 May 2010, at 6:02, Boyd, Craig wrote:

 I have been using PostgreSQL for a short while, but I have not had to use the 
 pg_catalog tables before and the columns are a little cryptic to me.  I think 
 it ties to pg_class, but I am not sure how to relate them.  Also, I have not 
 had a chance to us the \dt function.  Can you give me some pointers or point 
 me to the docs?

Why didn't you just try it?

dalroi:solfertje> psql -E development
Welcome to psql 8.3.9, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

development=> \dt
********* QUERY **********
SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
  r.rolname as "Owner"
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_roles r ON r.oid = c.relowner
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','')
      AND n.nspname <> 'pg_catalog'
      AND n.nspname !~ '^pg_toast'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**************************

No relations found.

 SPAMfighter has removed 1388 of my spam emails to date. 

You shouldn't be sending spam, you know ;) Or isn't that what's meant here?
That's a pretty useless number anyhow; the spam filter I use (dspam) catches 
about that much in a week's time for my account alone. Without a total or a 
starting date it's just a meaningless number.

Do you have any influence on what it prints under your messages?

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question about unique indexes

2010-05-10 Thread Alban Hertroys
On 10 May 2010, at 2:09, Jonathan Vanasco wrote:

 i was given a unique index on
   (country_id, state_id, city_id, postal_code_id)

 in the two records below, only country_id and state_id are assigned  ( aside 
 from the serial )
 
 geographic_location_id | coordinates_latitude | coordinates_longitude | country_id | state_id | city_id | postal_code_id
 ------------------------+----------------------+-----------------------+------------+----------+---------+----------------
                     312 |                      |                       |        233 |       65 |         |
                     443 |                      |                       |        233 |       65 |         |
 
 i was under the expectation that the unique constraint would apply in this 
 place.
 
 from the docs:
   When an index is declared unique, multiple table rows with equal 
 indexed values are not allowed. Null values are not considered equal. A 
 multicolumn unique index will only reject cases where all indexed columns are 
 equal in multiple rows.


As the docs state and as others already mentioned, Null values are not 
considered equal.

You're about to encounter an interesting problem. You have several optional 
foreign keys, so they have to be declared nullable, yet your constraints are 
such that you can't enforce uniqueness because nulls can't be compared.

You could (as mentioned by Al Rumman) create a unique index using coalesce() 
for each nullable column, but that index wouldn't be usable for normal queries 
- your query expressions won't match the index's expressions ('WHERE city_id=7' 
vs. 'COALESCE(city_id, 0)=7'). I don't think the query planner sees the 
similarity between those expressions.
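
For reference, a minimal sketch of that coalesce() approach (the table name 
geographic_location is an assumption, inferred from the geographic_location_id 
column earlier in the thread):

-- Sketch only: "geographic_location" is an assumed table name.
CREATE UNIQUE INDEX geographic_location_uq
    ON geographic_location (COALESCE(country_id, 0),
                            COALESCE(state_id, 0),
                            COALESCE(city_id, 0),
                            COALESCE(postal_code_id, 0));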

It is possible to add another index over those columns, without the coalesces, 
but it would cost that same amount of disk/memory space again, and the planner 
likely wouldn't be aware of the uniqueness of the data in it and hence wouldn't 
plan for that.

You could also change all your queries to use coalesce() for each of those 
columns. Hiding that in a view (maybe an updatable one so that CRUD operations 
can use the index too) would be a solution.
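
A sketch of that view idea, under the same naming assumptions as the index 
above; exposing the coalesced values means a query like WHERE city_id = 7 
against the view matches the indexed expression COALESCE(city_id, 0):

CREATE VIEW geographic_location_v AS
SELECT geographic_location_id,
       COALESCE(country_id, 0)     AS country_id,
       COALESCE(state_id, 0)       AS state_id,
       COALESCE(city_id, 0)        AS city_id,
       COALESCE(postal_code_id, 0) AS postal_code_id
FROM geographic_location;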

An alternative solution is to actually use 0 for those foreign keys and create 
a special record for that in the related tables. It depends on the table 
definitions how easy that is to do, you don't want to end up with all kinds of 
dummy-data in your client application, but you also don't want to remove any 
constraints that guarantee sensibility of the data in those tables.

None of these solutions are pretty. It should be quite a common problem though, 
how do people normally solve this?

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] hstore problem with UNION?

2010-05-10 Thread Ivan Voras

I've encountered the following problem:

ivoras=# create table htest2(id integer, t hstore);
CREATE TABLE

ivoras=# create table htest3(id integer, t2 hstore);
CREATE TABLE

ivoras=# select id, t from htest2 union select id,t2 as t from htest3;
ERROR:  could not identify an ordering operator for type hstore
HINT:  Use an explicit ordering operator or modify the query.

I think it has something to do with UNION being the type of operation 
that could, as a variation, include DISTINCT, which would require 
sorting, but still... UNION by itself doesn't.


How to get around this? I really don't care how hstores get sorted and, 
what's more, would like to avoid sorting them at all as they could get big.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Documentation availability as a single page of text

2010-05-10 Thread Geoffrey

Bruce Momjian wrote:

Bruce Momjian wrote:

John Gage wrote:
Is the documentation available anywhere as a single page text file?   
This would be enormously helpful for searching using regular  
expressions in Vim, for example, or excerpting pieces for future  
reference.

Uh, no, and no one has ever asked for that.  There must be some tool
that will dump an HTML tree as a single text file.


Or maybe convert the PDF file to text.


On Linux:

/usr/bin/pdftotext


--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] hstore problem with UNION?

2010-05-10 Thread Jayadevan M
 How to get around this? I really don't care how hstores get sorted and, 
 what's more, would like to avoid sorting them at all as they could get big.
union all 
seems to work. Would that serve the purpose?
Regards,
Jayadevan 











Re: [GENERAL] hstore problem with UNION?

2010-05-10 Thread Jayadevan M
When we do a union, the database has to get rid of duplicates and get 
distinct values. To achieve this, it probably does a sort. Just 
guesswork.
Regards,
Jayadevan











Re: [GENERAL] hstore problem with UNION?

2010-05-10 Thread Ivan Voras

On 05/10/10 14:10, Jayadevan M wrote:

When we do a union, the database has to get rid of duplicates and get
distinct values. To achieve this, probably it does a sort. Just
guesswork


You are right, it looks like I have inverted the logic of UNION and 
UNION ALL - I actually needed UNION ALL here, thanks!
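
For the archives, the working form of the query from the start of the thread:

-- UNION ALL keeps duplicates, so no DISTINCT step (and hence no hstore
-- ordering operator) is needed:
SELECT id, t FROM htest2
UNION ALL
SELECT id, t2 AS t FROM htest3;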




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Create View from command line

2010-05-10 Thread Andy Colson

On 5/10/2010 4:43 AM, OisinJK wrote:

Hi

I’m trying to build a Windows Form application that needs to alter the
definition of a View, depending on user input/selection. Essentially,
the user is selecting a property of known coordinates (‘x’,’y’) and
setting a ‘buffer’ distance . I then want to retrieve the records which
represent the map features within the specified distance of the
property. The WHERE clause of my view is:

st_dwithin(mytable.geom, st_setsrid(st_point(x, y), 27700), buffer);

I’m thinking this could be achieved either by making x, y and buffer
parameters or, I simply ALTER the View statement with literal values.
Whichever, I need to do this from a command line attached to a windows
form event, e.g. button_click.

I cannot work out how to do this. Can anyone point me in the right
direction. I’ve looked at psql.exe, but can’t work out if this holds the
answer.

Thanks

Oisin




Changing the view on the fly is a bad idea.  Multiple users would step 
all over themselves.


I see two options:

1) don't do the where inside the view.  Have the view return the columns 
and have the client generate the where, so the client would run:


select * from view where st_dwithin(geom, st_setsrid(st_point(x, y), 
27700), buffer);


2) convert it to a stored procedure, which can take arguments, then have 
the client run something like:


select * from mapFeat(42, 42, 27700)
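
A minimal sketch of option 2, assuming PostGIS and the table/column names from 
the original post (mytable, geom); the function name matches the example call 
above, and on 8.x an SQL-language body references its parameters positionally:

CREATE OR REPLACE FUNCTION mapFeat(double precision,   -- x
                                   double precision,   -- y
                                   double precision)   -- buffer
RETURNS SETOF mytable AS $$
    SELECT *
    FROM mytable
    WHERE st_dwithin(geom, st_setsrid(st_point($1, $2), 27700), $3);
$$ LANGUAGE sql STABLE;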



-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question about unique indexes

2010-05-10 Thread Jonathan Vanasco


On May 10, 2010, at 6:29 AM, Alban Hertroys wrote:
As the docs state and as others already mentioned, Null values are  
not considered equal.



Ah.  I interpreted that wrong.  I thought it applied to indexes  
differently.  I'll have to experiment now...



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Vick Khera
On Sat, May 8, 2010 at 12:12 AM, Mike Christensen m...@kitchenpc.com wrote:
 What's the best way to do this?  Looks like something like pgPool
 might be what I want, but I haven't looked into it deeply yet.

I don't think your requirement and postgres are consistent with each
other. Unless your data volume is *so* tiny that copying it takes
just a few seconds, this concept just won't work.  Besides the fact
that I don't think there is a master-master solution that does not
impose a lot of overhead and will deal gracefully with nodes
disappearing and appearing arbitrarily.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Documentation availability as a single page of text

2010-05-10 Thread Alvaro Herrera
Excerpts from John Gage's message of Sat May 08 05:06:35 -0400 2010:
 Is the documentation available anywhere as a single page text file?   
 This would be enormously helpful for searching using regular  
 expressions in Vim, for example, or excerpting pieces for future  
 reference.

There's a texinfo output that could perhaps be useful.  Try
"make postgres.info" in the doc/src/sgml directory; while it's tagged
experimental and outputs a boatload of warnings, it does work for me and the
text it produces is plain enough.
-- 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Documentation availability as a single page of text

2010-05-10 Thread Alvaro Herrera
Excerpts from Alvaro Herrera's message of Mon May 10 12:01:22 -0400 2010:

 There's a texinfo output that could perhaps be useful.  Try
 make postgres.info in the doc/src/sgml directory; while it's tagged
 experimental and outputs a boatload of warnings, it does work for me and the
 text it produces is plain enough.

Ah, it's also *a lot* faster to produce than either PDF or HTML.
-- 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Scott Marlowe
On Fri, May 7, 2010 at 10:12 PM, Mike Christensen m...@kitchenpc.com wrote:
 I'm considering using a cloud hosting solution for my website.  It
 will probably be either Amazon, Rackspace or Hosting.com.  I'm still
 comparing.  Either way, my site will consist of multiple virtual
 server instances that I can create and destroy as needed.  Each
 virtual machine instance will be self contained, meaning it'll run the
 website and its own instance of postgres.  The website will only talk
 to the local DB instance.  However, if I'm running several machine
 instances, I want all the databases to keep in sync preferably with as
 little lag as possible.

 This is not a master/slave replication issue where there's one big DB
 that's always up and everything syncs to, this is basically total
 peer-to-peer replication where any time data is updated on one server,
 an update command gets sent to all the other servers.  I would also
 have to address the issue when I provision a new virtual server, I'd
 have to import the current data into the DB seamlessly.

 What's the best way to do this?

I think right now you're stuck coding it up yourself.  No small task.

  Looks like something like pgPool
 might be what I want, but I haven't looked into it deeply yet.
 Thanks!!

The only thing that gets close is bucardo.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Kynn Jones
I would like to replicate the following Unix pipe within a Perl script,
perhaps using DBD::Pg:

% pg_dump -Z9 -Fc -U DB_USER FROM_DB | pg_restore -v -d TO_DB -p
SSH_TUNNEL_PORT -h localhost -U DB_USER

Of course, I can try to use Perl's system, and the like, to run this pipe
verbatim, but I see this as a last-resort approach.

Is there a more direct way?

Thanks!

~K


[GENERAL] PostgreSQL 9.0 - support for RANGE value PRECEDING window functions

2010-05-10 Thread Daniel Scott
Hi,

I have a question about a feature in PostgreSQL 9.0.

I am looking for support for windowing functions when using: RANGE
BETWEEN value PRECEDING/FOLLOWING AND value
PRECEDING/FOLLOWING

The latest documentation:

http://www.postgresql.org/docs/9.0/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS

Says "The value PRECEDING and value FOLLOWING cases are currently only
allowed in ROWS mode."

However, I have found this post:

http://archives.postgresql.org/message-id/e08cc0400912310149me7150cek3c9aa92e4d396...@mail.gmail.com

Which appears to provide a patch supporting "allow all of RANGE
BETWEEN value PRECEDING/FOLLOWING AND value
PRECEDING/FOLLOWING". However, I cannot find any further information
related to this feature.

Can anyone confirm whether or not this feature will be available in
PostgreSQL 9.0?

Thanks,

Dan Scott

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Documentation availability as a single page of text

2010-05-10 Thread John Gage
I am using the Mac and, although the Mac does not ship with this, the  
Zotero add-on to Firefox includes it:


/Users/johngage/Library/Application Support/Firefox/Profiles/ 
m35vu1ez.default/zotero/pdftotext-MacIntel


Will try it out.  Thanks very much,

John



On May 10, 2010, at 1:58 PM, Geoffrey wrote:


Bruce Momjian wrote:

Bruce Momjian wrote:

John Gage wrote:
Is the documentation available anywhere as a single page text  
file?   This would be enormously helpful for searching using  
regular  expressions in Vim, for example, or excerpting pieces  
for future  reference.

Uh, no, and no one has ever asked for that.  There must be some tool
that will dump an HTML tree as a single text file.

Or maybe convert the PDF file to text.


On Linux:

/usr/bin/pdftotext


--
Until later, Geoffrey

I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them.
- Thomas Jefferson

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


 I would like to replicate the following Unix pipe within a Perl script,
 perhaps using DBD::Pg:

 
 % pg_dump -Z9 -Fc -U DB_USER FROM_DB | pg_restore -v -d TO_DB -p
 SSH_TUNNEL_PORT -h localhost -U DB_USER

 Of course, I can try to use Perl's system, and the like, to run this pipe
 verbatim, but I see this as a last-resort approach.

 Is there a more direct way?

If you mean emulate the pg_dump, no. pg_dump does a *lot* of stuff behind 
the scenes, and trying to rewrite all that in Perl would be madness. And I 
say that having written some serious madness into DBD::Pg already :). Stick 
with the shell script, even if it means calling system.

If you simply want to avoid the pipes, you can think about calling pg_dump 
from the remote box, using an authorized_keys with a specific command in it, 
and other tricks, but nothing will be as straightforward and error proof 
as the line you gave, I suspect.

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201005101331
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAkvoQ20ACgkQvJuQZxSWSsiaqQCgv6698Bo37q7cVuVngZJez11M
4nEAoOmYW8EFDbjBFtAR4qDZLmHRhNPa
=NUq2
-END PGP SIGNATURE-



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 9.0 - support for RANGE value PRECEDING window functions

2010-05-10 Thread Alvaro Herrera
Excerpts from Daniel Scott's message of Mon May 10 13:20:06 -0400 2010:

 Says "The value PRECEDING and value FOLLOWING cases are currently only
 allowed in ROWS mode."
 
 However, I have found this post:
 
 http://archives.postgresql.org/message-id/e08cc0400912310149me7150cek3c9aa92e4d396...@mail.gmail.com
 
 Which appears to provide a patch supporting: - allow all of RANGE
 BETWEEN value PRECEDING/FOLLOWING AND value
 PRECEDING/FOLLOWING. However, I cannot find any further information
 related to this feature.

It was ripped out of the patch before commit because the implementation was not
acceptable.

 Can anyone confirm whether or not this feature will be available in
 PostgreSQL 9.0?

No.
-- 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting with materialized paths

2010-05-10 Thread Peter Hunsberger
On Sun, May 9, 2010 at 8:33 AM, Ovid curtis_ovid_...@yahoo.com wrote:
 My apologies. This isn't PG-specific, but since this is running on PostgreSQL 
 8.4, maybe there are specific features which might help.

 I have a tree structure in a table and it uses materialized paths to allow me 
 to find children quickly. However, I also need to sort the results 
 depth-first, as one would expect with threaded forum replies.

  id | parent_id | matpath |          created
 ----+-----------+---------+----------------------------
   2 |         1 | 1       | 2010-05-08 15:18:37.987544
   3 |         1 | 1       | 2010-05-08 17:38:14.125377
   4 |         1 | 1       | 2010-05-08 17:38:57.26743
   5 |         1 | 1       | 2010-05-08 17:43:28.211708
   7 |         1 | 1       | 2010-05-08 18:18:11.849735
   6 |         2 | 1.2     | 2010-05-08 17:50:43.288759
   9 |         5 | 1.5     | 2010-05-09 14:02:43.818646
   8 |         6 | 1.2.6   | 2010-05-09 14:01:17.632695

 So the final results should actually be sorted like this:

  id | parent_id | matpath |          created
 ----+-----------+---------+----------------------------
   2 |         1 | 1       | 2010-05-08 15:18:37.987544
   6 |         2 | 1.2     | 2010-05-08 17:50:43.288759
   8 |         6 | 1.2.6   | 2010-05-09 14:01:17.632695
   3 |         1 | 1       | 2010-05-08 17:38:14.125377
   4 |         1 | 1       | 2010-05-08 17:38:57.26743
   5 |         1 | 1       | 2010-05-08 17:43:28.211708
   9 |         5 | 1.5     | 2010-05-09 14:02:43.818646
   7 |         1 | 1       | 2010-05-08 18:18:11.849735

 Rationale:  this is for a threaded forum and id 6 is a reply to id 2, so it 
 needs to show up after that one.  Here's the rough structure of what the 
 output would look like (imagine an HTML forum):

 * id 1 (root post)
    * id 2
        * id 6
            * id 8
    * id 3
    * id 4
    * id 5
        * id 9
    * id 7

 How would I work that out? Can I do that in straight SQL or should additional 
 information be added to this table?


This is (once more) a flat query if you use a set / subset tree
implementation.  Joe Celko's book Trees and Hierarchies in SQL for
Smarties might be the fastest way to get up to speed on this, but you
can also figure it out if you spend a bit of time with Google.
Basically, every node in the tree is a table row with two columns, say
left and right. All children are contained within the left and right
of the parent.  Pre-order tree traversal gives the algorithm for
assigning left and right.  Once done, your problem is solved by
ordering on left.
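
A minimal sketch of that layout (lft/rgt are illustrative column names; every 
child's interval nests inside its parent's):

CREATE TABLE forum_post (
    id  integer PRIMARY KEY,
    lft integer NOT NULL,   -- set by pre-order traversal
    rgt integer NOT NULL,
    CHECK (lft < rgt)
);

-- Depth-first order then falls out of a plain sort:
SELECT id FROM forum_post ORDER BY lft;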



-- 
Peter Hunsberger

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting with materialized paths

2010-05-10 Thread Greg Stark
On Sun, May 9, 2010 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Ovid curtis_ovid_...@yahoo.com writes:
 My apologies. This isn't PG-specific, but since this is running on 
 PostgreSQL 8.4, maybe there are specific features which might help.
 I have a tree structure in a table and it uses materialized paths to allow 
 me to find children quickly. However, I also need to sort the results 
 depth-first, as one would expect with threaded forum replies.

 I think contrib/ltree might help you here.  However, it seems to sort
 node names textually rather than numerically, so you might need to
 change it a bit for your own purposes.


That's rather unfortunate. Ltree is awfully convenient and it would be
nice to be able to use it.

If you just used plain Postgres arrays of integers you would get the
sorting you want. But you lose all the useful ltree operators for
trees.
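
A quick illustration of that ordering, using paths from the example tree:

-- Integer arrays compare element by element, with a prefix sorting
-- before its extensions - exactly depth-first order:
SELECT p
FROM (VALUES (ARRAY[1]), (ARRAY[1,5]), (ARRAY[1,2]), (ARRAY[1,2,6])) AS t(p)
ORDER BY p;
-- yields {1}, {1,2}, {1,2,6}, {1,5}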

-- 
greg

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Joshua D. Drake
On Mon, 2010-05-10 at 17:33 +, Greg Sabino Mullane wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: RIPEMD160
 
 
  I would like to replicate the following Unix pipe within a Perl script,
  perhaps using DBD::Pg:
 
  
  % pg_dump -Z9 -Fc -U DB_USER FROM_DB | pg_restore -v -d TO_DB -p
  SSH_TUNNEL_PORT -h localhost -U DB_USER
 
  Of course, I can try to use Perl's system, and the like, to run this pipe
  verbatim, but I see this as a last-resort approach.
 
  Is there a more direct way?
 
 If you mean emulate the pg_dump, no. pg_dump does a *lot* of stuff behind 
 the scenes, and trying to rewrite all that in Perl would be madness. And I 
 say that having written some serious madness into DBD::Pg already :). Stick 
 with the shell script, even if it means calling system.
 
 If you simply want to avoid the pipes, you can think about calling pg_dump 
 from the remote box, using a authorized_keys with a specific command in it, 
 and other tricks, but nothing will be as straightforward and error proof 
 as the line you gave, I suspect.

With one minor exception. I don't think he needs -Z9 since he is using
SSH which will compress anyway.

Joshua D. Drake



 
 - -- 
 Greg Sabino Mullane g...@turnstep.com
 End Point Corporation http://www.endpoint.com/
 PGP Key: 0x14964AC8 201005101331
 http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
 -BEGIN PGP SIGNATURE-
 
 iEYEAREDAAYFAkvoQ20ACgkQvJuQZxSWSsiaqQCgv6698Bo37q7cVuVngZJez11M
 4nEAoOmYW8EFDbjBFtAR4qDZLmHRhNPa
 =NUq2
 -END PGP SIGNATURE-
 
 
 


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 9.0 - support for RANGE value PRECEDING window functions

2010-05-10 Thread Daniel Scott
Hi,

On Mon, May 10, 2010 at 13:35, Alvaro Herrera alvhe...@alvh.no-ip.org wrote:
 It was ripped out of the patch before commit because the implementation was 
 not
 acceptable.

That's strange - the CommitFest says that it was committed and I can't
find any mention of it being removed. Is there somewhere I can see a
discussion of this?

https://commitfest.postgresql.org/action/commitfest_view?id=5

Also, the documentation

http://www.postgresql.org/docs/9.0/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS

shows:

[ RANGE | ROWS ] BETWEEN frame_start AND frame_end

Can you point me to the right place for getting this changed to remove
RANGE? Maybe the developer mailing list?

Thanks,

Dan Scott

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Finding rows with text columns beginning with other text columns

2010-05-10 Thread Christoph Zwerschke

On 10.05.2010 11:50, Alban Hertroys wrote:
 On 10 May 2010, at 24:01, Christoph Zwerschke wrote:

 select * from b join a on b.txt like a.txt||'%'

 I feel there should be a performant way to query these entries,
 but I can't come up with anything. Can anybody help me?

 Have you tried using substring instead of like?

How exactly? I tried this:

substr(b.txt, 1, length(a.txt)) = a.txt

but it cannot be optimized and results in a nested loop, too.

It only works with a fixed length:

substr(b.txt, 1, 3) = a.txt

So theoretically I could do something like

select * from b join a
on substr(b.txt, 1, 1) = a.txt and length(a.txt) = 1
union select * from b join a
on substr(b.txt, 1, 2) = a.txt and length(a.txt) = 2
union select * from b join a
on substr(b.txt, 1, 3) = a.txt and length(a.txt) = 3
union ...

... up to the maximum possible string length in a.txt. Not very elegant.

If the question is not "text cols in b starting with text cols in a", but 
"text cols in b starting with text cols in a as their first word", then 
the following join condition works very well:


split_part(b.txt, ' ', 1) = a.txt
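
If that first-word variant fits, a functional index on b should make the join 
indexable (a sketch; the index name is made up):

CREATE INDEX b_txt_first_word_idx ON b (split_part(txt, ' ', 1));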

But I'm still looking for a simple solution to the original problem.

-- Christoph

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Kynn Jones
On Mon, May 10, 2010 at 1:33 PM, Greg Sabino Mullane g...@turnstep.comwrote:


 -BEGIN PGP SIGNED MESSAGE-
 Hash: RIPEMD160


  I would like to replicate the following Unix pipe within a Perl script,
  perhaps using DBD::Pg:
 
 
  % pg_dump -Z9 -Fc -U DB_USER FROM_DB | pg_restore -v -d TO_DB -p
  SSH_TUNNEL_PORT -h localhost -U DB_USER
 
  Of course, I can try to use Perl's system, and the like, to run this pipe
   verbatim, but I see this as a last-resort approach.
 
  Is there a more direct way?

 ...

 If you simply want to avoid the pipes, you can think about calling pg_dump
  from the remote box, using an authorized_keys with a specific command in it,
 and other tricks...




I can work with pg_dump, I think.  What I'm trying to avoid is the
SSH-tunneling, which I find too fragile for reliable automated operation.

My script can use DBI::connect to provide a password when connecting to the
remote host, so I can run regular SQL on the remote host via Perl DBI
without SSH-tunneling.

But I have not found a way for my script to provide a password when it
runs commands like dropdb, createdb, and pg_restore with the -h REMOTE
HOST flag.  So I end up resorting to SSH-tunneling.  This is what I'm
trying to avoid.

Your idea of having the remote host run the pg_dump is worth looking into,
although I'm reluctant because involving the remote host like this would
significantly complicate my whole set up.

Anyway, thanks!

~K


Re: [GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Kynn Jones
On Mon, May 10, 2010 at 2:59 PM, Joshua D. Drake j...@commandprompt.comwrote:

 On Mon, 2010-05-10 at 17:33 +, Greg Sabino Mullane wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: RIPEMD160
 
 
   I would like to replicate the following Unix pipe within a Perl script,
   perhaps using DBD::Pg:
  
  
   % pg_dump -Z9 -Fc -U DB_USER FROM_DB | pg_restore -v -d TO_DB -p
   SSH_TUNNEL_PORT -h localhost -U DB_USER
  
   Of course, I can try to use Perl's system, and the like, to run this
 pipe
    verbatim, but I see this as a last-resort approach.
  
   Is there a more direct way?
 
  If you mean emulate the pg_dump, no. pg_dump does a *lot* of stuff behind
  the scenes, and trying to rewrite all that in Perl would be madness. And
 I
  say that having written some serious madness into DBD::Pg already :).
 Stick
  with the shell script, even if it means calling system.
 
  If you simply want to avoid the pipes, you can think about calling
 pg_dump
  from the remote box, using an authorized_keys with a specific command in
 it,
  and other tricks, but nothing will be as straightforward and error proof
  as the line you gave, I suspect.

 With one minor exception. I don't think he needs -Z9 since he is using
 SSH which will compress anyway.


Actually, that was a mistake on my part.  That should have been -Ft rather
than -Z9 -Fc, since I *don't* want compression (most of the data being
transmitted consists of highly incompressible blobs anyway).  Regarding SSH,
my understanding is that to get compression one needs to pass to it the -C
flag at the time of creating the tunnel.  But my grasp of these details is
tenuous at best.

~K


Re: [GENERAL] PostgreSQL 9.0 - support for RANGE value PRECEDING window functions

2010-05-10 Thread Tom Lane
Daniel Scott djsc...@mit.edu writes:
 On Mon, May 10, 2010 at 13:35, Alvaro Herrera alvhe...@alvh.no-ip.org wrote:
 It was ripped out of the patch before commit because the implementation was 
 not
 acceptable.

 That's strange - the CommitFest says that it was committed and I can't
 find any mention of it being removed. Is there somewhere I can see a
 discussion of this?

Look into the pgsql-hackers thread about it.  (The commitfest notes are
not meant to be a complete record of the mailing list discussions.)
I think the relevant part starts here:
http://archives.postgresql.org/pgsql-hackers/2010-02/msg00540.php

 Also, the documentation
 http://www.postgresql.org/docs/9.0/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS
 shows:
 [ RANGE | ROWS ] BETWEEN frame_start AND frame_end
 Can you point me to the right place for getting this changed to remove
 RANGE. Maybe the developer mailing list?

There's nothing to remove there, since RANGE is in fact valid with many
of the alternatives for frame_start and frame_end.  It would be very
awkward to try to show this limitation as part of the syntax diagram,
so it's just specified in the text.
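
For example, a value offset does work in ROWS mode; a hedged illustration (the 
payments table and its columns are invented):

SELECT id,
       avg(amount) OVER (ORDER BY id
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg
FROM payments;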

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Tom Lane
Kynn Jones kyn...@gmail.com writes:
 But I have not found a way for my script to provide a password when it
 runs commands like dropdb, createdb, and pg_restore with the -h REMOTE
 HOST flag.  So I end up resorting to SSH-tunneling.  This is what I'm
 trying to avoid.

You don't really want to embed a password in the script anyway.
Consider using a ~/.pgpass file, or look at non-password-based
authentication mechanisms.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question about unique indexes

2010-05-10 Thread Tim Landscheidt
Alban Hertroys dal...@solfertje.student.utwente.nl wrote:

 [...]
 None of these solutions are pretty. It should be quite a common problem 
 though, how do people normally solve this?

Partial indexes? Doesn't look pretty either though:

| tim=# \d DE_Postcodes
|       Table "public.de_postcodes"
|   Column  |   Type  | Modifiers
| ----------+---------+-----------
|  postcode | integer | not null
|  city     | text    | not null
|  suffix   | text    |
|  street   | text    | not null
|  first    | integer |
|  last     | integer |
| Indexes:
|     "de_postcodes_key1" UNIQUE, btree (postcode, city, suffix, street, first, last) WHERE suffix IS NOT NULL AND first IS NOT NULL AND last IS NOT NULL
|     "de_postcodes_key2" UNIQUE, btree (postcode, city, suffix, street, first) WHERE suffix IS NOT NULL AND first IS NOT NULL AND last IS NULL
|     "de_postcodes_key3" UNIQUE, btree (postcode, city, suffix, street, last) WHERE suffix IS NOT NULL AND first IS NULL AND last IS NOT NULL
|     "de_postcodes_key4" UNIQUE, btree (postcode, city, suffix, street) WHERE suffix IS NOT NULL AND first IS NULL AND last IS NULL
|     "de_postcodes_key5" UNIQUE, btree (postcode, city, street, first, last) WHERE suffix IS NULL AND first IS NOT NULL AND last IS NOT NULL
|     "de_postcodes_key6" UNIQUE, btree (postcode, city, street, first) WHERE suffix IS NULL AND first IS NOT NULL AND last IS NULL
|     "de_postcodes_key7" UNIQUE, btree (postcode, city, street, last) WHERE suffix IS NULL AND first IS NULL AND last IS NOT NULL
|     "de_postcodes_key8" UNIQUE, btree (postcode, city, street) WHERE suffix IS NULL AND first IS NULL AND last IS NULL
|     "de_postcodes_postcodecity" btree (postcode, city)

| tim=#

Tim


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Tom Lane
Kynn Jones kyn...@gmail.com writes:
 Actually, that was a mistake on my part.  That should have been -Ft rather
 than -Z9 -Fc, since I *don't* want compression (most of the data being
 transmitted consists of highly incompressible blobs anyway).  Regarding SSH,
 my understanding is that to get compression one needs to pass to it the -C
 flag at the time of creating the tunnel.  But my grasp of these details is
 tenuous as best.

Actually, I'd suggest -Fc -Z0, or maybe plain text dump, if your
motivation is to avoid compression.  -Ft has its own issues that
make it a less-than-desirable choice; you shouldn't pick it unless
you really specifically need a tar-compatible dump format.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] files stored in the database

2010-05-10 Thread jus...@magwerks.com
as it keeps coming up on the list off and on, I decided to write a wiki article,

comments suggestions
http://wiki.postgresql.org/wiki/BinaryFilesInDB

I also read over the 9.0 beta release notes, bytea type now allows hex values??
http://developer.postgresql.org/pgdocs/postgres/release-9-0.html#AEN98905







Re: [GENERAL] initdb fails on Centos 5.4 x64

2010-05-10 Thread Valentin Hocher
  The solution is very simple and can be done in the cPanel configuration, 
  just disable Shell Fork Bomb Protection in the security center. That's 
  all. The ulimit restrictions are removed!

 Huh, that's interesting.  With a name like that, I'd have thought it
 would set limits on number of processes, not virtual memory size.
 What ulimit settings did it change exactly?


Shell Fork Bomb Protection
Description:
Fork Bomb Protection will prevent users with terminal access (ssh/telnet) from 
using up all the resources on the server. Unchecked resource allocation can 
potentially lead to a server crash.
It is recommended that this protection be enabled for servers providing 
terminal access.


The only thing I discovered (if enabled) is the following entry in /etc/profile 
(if disabled, there is no ulimit set):


#cPanel Added Limit Protections -- BEGIN

#unlimit so we can run the whoami
ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 100 -v unlimited 2>/dev/null

LIMITUSER=$USER
if [ -e /usr/bin/whoami ]; then
LIMITUSER=`/usr/bin/whoami`
fi
if [ "$LIMITUSER" != "root" ]; then
ulimit -n 100 -u 20 -m 200000 -d 200000 -s 8192 -c 20 -v 200000 2>/dev/null
else
ulimit -n 4096 -u 14335 -m unlimited -d unlimited -s 8192 -c 100 -v unlimited 2>/dev/null
fi
#cPanel Added Limit Protections -- END


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to do pg_dump + pg_restore within Perl script?

2010-05-10 Thread Andy Colson

On 5/10/2010 2:46 PM, Kynn Jones wrote:

On Mon, May 10, 2010 at 1:33 PM, Greg Sabino Mullane g...@turnstep.com
mailto:g...@turnstep.com wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


  I would like to replicate the following Unix pipe within a Perl
script,
  perhaps using DBD::Pg:
 
 
  % pg_dump -Z9 -Fc -U DB_USER FROM_DB | pg_restore -v -d
TO_DB -p
  SSH_TUNNEL_PORT -h localhost -U DB_USER
 
  Of course, I can try to use Perl's system, and the like, to run
this pipe
  verbatim, but I this as a last-resort approach.
 
  Is there a more direct way?

...

If you simply want to avoid the pipes, you can think about calling
pg_dump
from the remote box, using a authorized_keys with a specific command
in it,
and other tricks...




I can work with pg_dump, I think.  What I'm trying to avoid is the
SSH-tunneling, which I find too fragile for reliable automated operation.

My script can use DBI::connect to provide a password when connecting to
the remote host, so I can run regular SQL on the remote host via Perl
DBI without SSH-tunneling.

But I have not found a way for my script to provide a password when it
runs commands like dropdb, createdb, and pg_restore with the -h REMOTE
HOST flag.  So I end up resorting to SSH-tunneling.  This is what I'm
trying to avoid.

Your idea of having the remote host run the pg_dump is worth looking
into, although I'm reluctant because involving the remote host like this
would significantly complicate my whole set up.

Anyway, thanks!

~K




Ah, this one I have hit too.  I have very large database updates to send 
to the web boxes... and I'd sometimes lose connection midway.


I changed the process to dump to file, then rsync the file to the dest, 
then remote exec the restore via ssh.


-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting with materialized paths

2010-05-10 Thread Alban Hertroys
On 10 May 2010, at 20:06, Greg Stark wrote:

 On Sun, May 9, 2010 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Ovid curtis_ovid_...@yahoo.com writes:
 My apologies. This isn't PG-specific, but since this is running on 
 PostgreSQL 8.4, maybe there are specific features which might help.
 I have a tree structure in a table and it uses materialized paths to allow 
 me to find children quickly. However, I also need to sort the results 
 depth-first, as one would expect with threaded forum replies.
 
 I think contrib/ltree might help you here.  However, it seems to sort
 node names textually rather than numerically, so you might need to
 change it a bit for your own purposes.
 
 
 That's rather unfortunate. Ltree is awfully convenient and it would be
 nice to be able to use it.
 
 If you just used plain Postgres arrays of integers you would get the
 sorting you want. But you lose all the useful ltree operators for
 trees.


I recall from the docs that you can create arrays of ltrees. It has some 
special operators for that. I couldn't figure out what the use case for those 
ltree-arrays was (the docs are rather sparse), but it might just be what you're 
looking for.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Finding rows with text columns beginning with other text columns

2010-05-10 Thread Alban Hertroys

On 10 May 2010, at 21:24, Christoph Zwerschke wrote:

 Am 10.05.2010 11:50 schrieb Alban Hertroys:
  On 10 May 2010, at 24:01, Christoph Zwerschke wrote:
 
  select * from b join a on b.txt like a.txt||'%'
 
  I feel there should be a performat way to query these entries,
  but I can't come up with anything. Can anybody help me?
 
  Have you tried using substring instead of like?
 
 How exactly? I tried this:
 
substr(b.txt, 1, length(a.txt)) = a.txt
 
 but it cannot be optimized and results in a nested loop, too.


I feared as much, but it was worth a try.

Thinking more on the issue, I don't see a way to prevent the nested loop as 
there's no way to decide beforehand what part of the string to index for b.txt. 
It depends on a.txt after all.

You would basically need a cross-table index, and those are not supported. If it 
were, you could create a functional index of substrings of b.txt with string 
lengths from a.txt (eeps, that'd be a table product!).

Your best solution is probably to add a column to b that contains the substring 
of b.txt that would match a.txt.
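
A sketch of that extra-column approach, assuming (as in Christoph's 
split_part() variant) that the first word is the prefix of interest; the 
column and index names are made up:

ALTER TABLE b ADD COLUMN txt_prefix text;
UPDATE b SET txt_prefix = split_part(txt, ' ', 1);
CREATE INDEX b_txt_prefix_idx ON b (txt_prefix);
-- the join then becomes a plain indexable equality:
-- SELECT * FROM b JOIN a ON b.txt_prefix = a.txt;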

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting with materialized paths

2010-05-10 Thread Thomas Kellerer

Ovid wrote on 09.05.2010 15:33:

My apologies. This isn't PG-specific, but since this is running on
PostgreSQL 8.4, maybe there are specific features which might help.

I have a tree structure in a table and it uses materialized paths to
allow me to find children quickly. However, I also need to sort the
results depth-first, as one would expect with threaded forum
replies.

  id | parent_id | matpath |          created
 ----+-----------+---------+----------------------------
   2 |         1 | 1       | 2010-05-08 15:18:37.987544
   3 |         1 | 1       | 2010-05-08 17:38:14.125377
   4 |         1 | 1       | 2010-05-08 17:38:57.26743
   5 |         1 | 1       | 2010-05-08 17:43:28.211708
   7 |         1 | 1       | 2010-05-08 18:18:11.849735
   6 |         2 | 1.2     | 2010-05-08 17:50:43.288759
   9 |         5 | 1.5     | 2010-05-09 14:02:43.818646
   8 |         6 | 1.2.6   | 2010-05-09 14:01:17.632695

So the final results should actually be sorted like this:

  id | parent_id | matpath |          created
 ----+-----------+---------+----------------------------
   2 |         1 | 1       | 2010-05-08 15:18:37.987544
   6 |         2 | 1.2     | 2010-05-08 17:50:43.288759
   8 |         6 | 1.2.6   | 2010-05-09 14:01:17.632695
   3 |         1 | 1       | 2010-05-08 17:38:14.125377
   4 |         1 | 1       | 2010-05-08 17:38:57.26743
   5 |         1 | 1       | 2010-05-08 17:43:28.211708
   9 |         5 | 1.5     | 2010-05-09 14:02:43.818646
   7 |         1 | 1       | 2010-05-08 18:18:11.849735



Try this:

with recursive thread_display (id, parent_id, matpath, created, sort_key)
as
(
   select id, parent_id, matpath, created, array[id] as sort_key
   from threads
   where id = 1
   union all
   select c.id, c.parent_id, c.matpath, c.created, p.sort_key||array[c.id]
   from threads c
 join thread_display p on c.parent_id = p.id
)
select id, parent_id, matpath, created
from thread_display
order by sort_key;

Thomas




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Crazy looking actual row count from explain analyze

2010-05-10 Thread Gordon Shannon

Running 8.4.3, I have a table with 43 million rows. Two of the columns are
(topic_id int not null) and (status message_status_enum not null), where
message_status_enum is defined as
CREATE TYPE message_status_enum AS ENUM ( 'V', 'X', 'S', 'R', 'U', 'D' ); 

Among the indexes there is this:
m_20100201_topic_multi btree (topic_id, status, source_category_id,
alg_ci_rank_rollup)

..see that topic_id and status are the leading edge of the index.
Fact: there are no rows with status 'S' or 'X'
Fact: there are no rows with topic_id = 1

Consider, then...

explain analyze select count(*) from m_20100201 where status in ('S','X');
 QUERY
PLAN  
-
 Aggregate  (cost=987810.75..987810.76 rows=1 width=0) (actual
time=2340.193..2340.194 rows=1 loops=1)
   ->  Bitmap Heap Scan on m_20100201  (cost=987806.75..987810.75 rows=1
width=0) (actual time=2340.191..2340.191 rows=0 loops=1)
 Recheck Cond: (status = ANY ('{S,X}'::message_status_enum[]))
         ->  Bitmap Index Scan on m_20100201_topic_multi 
(cost=0.00..987806.75 rows=1 width=0) (actual time=2334.371..2334.371
rows=126336 loops=1)
   Index Cond: (status = ANY ('{S,X}'::message_status_enum[]))

What I don't understand is the actual rows of 126,336 in the bitmap index
scan.  I would expect it to have to scan every index entry, but doesn't this
output mean that it's *returning* 126K rows from that scan?  Whereas I think
it should return zero.

I have already fixed this query by adding a better index.  But the point of
this post is simply to understand this explain analyze output.  Thanks!

--gordon 


-- 
View this message in context: 
http://old.nabble.com/Crazy-looking-actual-row-count-from-explain-analyze-tp28517643p28517643.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Crazy looking actual row count from explain analyze

2010-05-10 Thread Tom Lane
Gordon Shannon gordo...@gmail.com writes:
->  Bitmap Heap Scan on m_20100201  (cost=987806.75..987810.75 rows=1
 width=0) (actual time=2340.191..2340.191 rows=0 loops=1)
  Recheck Cond: (status = ANY ('{S,X}'::message_status_enum[]))
  ->  Bitmap Index Scan on m_20100201_topic_multi 
 (cost=0.00..987806.75 rows=1 width=0) (actual time=2334.371..2334.371
 rows=126336 loops=1)
Index Cond: (status = ANY ('{S,X}'::message_status_enum[]))

 What I don't understand is the actual rows of 126,336 in the bitmap index
 scan.  I would expect it to have to scan every index entry, but doesn't this
 output mean that it's *returning* 126K rows from that scan?  Whereas I think
 it should return zero.

Well, it does return zero rows from the actual heapscan.  What the above
is telling you is that a whole lot of rows are being returned by the
index and then filtered out at the table scan stage.  My first suspicion
is that those are unvacuumed dead rows ... what's your vacuuming policy
on this database?
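
A hedged way to check that suspicion (the table name is from the thread; the 
counters come from the statistics collector):

SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'm_20100201';

VACUUM ANALYZE m_20100201;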

 I have already fixed this query by adding a better index.

I think the new index might have fixed things largely by not bothering
to index already-dead rows.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Archiver not picking up changes to archive_command

2010-05-10 Thread bricklen
Hi,

I'm stumped by an issue we are experiencing at the moment. We have
been successfully archiving logs to two standby sites for many months
now using the following command:

rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
--bwlimit=1250 -az %p postg...@14.121.70.98:/WAL_Archive/

Due to some heavy processing today, we have been falling behind on
shipping log files (by about a 1000 logs or so), so wanted to up our
bwlimit like so:

rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
--bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/


The db is showing the change.
SHOW archive_command:
rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
--bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/


Yet, the running processes never get above the original bwlimit of
1250. Have I missed a step? Would "kill -HUP <archiver pid>" help?
(I'm leery of trying that untested though)

ps aux | grep rsync
postgres 27704  0.0  0.0  63820  1068 ?        S    16:55   0:00 sh -c
rsync -a pg_xlog/000107170070
postg...@192.168.80.174:/WAL_Archive/ && rsync --bwlimit=1250 -az
pg_xlog/000107170070 postg...@14.121.70.98:/WAL_Archive/
postgres 27714 37.2  0.0  68716  1612 ?        S    16:55   0:01 rsync
--bwlimit=1250 -az pg_xlog/000107170070
postg...@14.121.70.98:/WAL_Archive/
postgres 27715  3.0  0.0  60764  5648 ?        S    16:55   0:00 ssh
-l postgres 14.121.70.98 rsync --server -logDtprz --bwlimit=1250 .
/WAL_Archive/


Thanks,

bricklen

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] initdb fails on Centos 5.4 x64

2010-05-10 Thread Tom Lane
valentin.hoc...@kabelbw.de (Valentin Hocher) writes:
 [ cPanel's Shell Fork Bomb Protection actually does this: ]
 ulimit -n 100 -u 20 -m 200000 -d 200000 -s 8192 -c 20 -v 200000 2>/dev/null

Just to annotate that: some experimentation I did confirms that on
RHEL5 x86_64, PG 8.4.3 falls over with the mentioned error when run
under ulimit -v in the vicinity of 200000 (ie 200MB).  It's kind of
surprising that initdb eats that much virtual memory space, although
certainly loading all the encoding translation libraries simultaneously
is a bit of a stress test.  But the actual memory footprint is surely a
lot less than that.  Apparently there is a good deal of inefficiency in
address-space consumption when loading a bunch of .so's on this
platform.  I'd be interested to know if people can reproduce similar
problems on other Linux variants.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Archiver not picking up changes to archive_command

2010-05-10 Thread bricklen
Sorry, version: PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled
by GCC gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit


On Mon, May 10, 2010 at 5:01 PM, bricklen brick...@gmail.com wrote:
 Hi,

 I'm stumped by an issue we are experiencing at the moment. We have
 been successfully archiving logs to two standby sites for many months
 now using the following command:

 rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
 --bwlimit=1250 -az %p postg...@14.121.70.98:/WAL_Archive/

 Due to some heavy processing today, we have been falling behind on
 shipping log files (by about a 1000 logs or so), so wanted to up our
 bwlimit like so:

 rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
 --bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/


 The db is showing the change.
 SHOW archive_command:
 rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
 --bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/


 Yet, the running processes never get above the original bwlimit of
 1250. Have I missed a step? Would "kill -HUP <archiver pid>" help?
 (I'm leery of trying that untested though)

 ps aux | grep rsync
 postgres 27704  0.0  0.0  63820  1068 ?        S    16:55   0:00 sh -c
 rsync -a pg_xlog/000107170070
 postg...@192.168.80.174:/WAL_Archive/ && rsync --bwlimit=1250 -az
 pg_xlog/000107170070 postg...@14.121.70.98:/WAL_Archive/
 postgres 27714 37.2  0.0  68716  1612 ?        S    16:55   0:01 rsync
 --bwlimit=1250 -az pg_xlog/000107170070
 postg...@14.121.70.98:/WAL_Archive/
 postgres 27715  3.0  0.0  60764  5648 ?        S    16:55   0:00 ssh
 -l postgres 14.121.70.98 rsync --server -logDtprz --bwlimit=1250 .
 /WAL_Archive/


 Thanks,

 bricklen


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Archiver not picking up changes to archive_command

2010-05-10 Thread Tom Lane
bricklen brick...@gmail.com writes:
 Due to some heavy processing today, we have been falling behind on
 shipping log files (by about a 1000 logs or so), so wanted to up our
 bwlimit like so:

 rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
 --bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/

 The db is showing the change.
 SHOW archive_command:
 rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
 --bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/

 Yet, the running processes never get above the original bwlimit of
 1250. Have I missed a step? Would kill -HUP <archiver pid> help?
 (I'm leery of trying that untested though)

A look at the code shows that the archiver only notices SIGHUP once
per outer loop, so the change would only take effect once you catch up,
which is not going to help much in this case.  Possibly we should change
it to check for SIGHUP after each archive_command execution.

If you kill -9 the archiver process, the postmaster will just start
a new one, but realize that that would result in two concurrent
rsync's.  It might work ok to kill -9 the archiver and the current
rsync in the same command.

regards, tom lane
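
(Concretely, the single-command variant Tom describes would look
something like the sketch below; the pids are placeholders to take from
ps output such as the listing above.)

    # kill the archiver and its in-flight rsync in one command
    kill -9 <archiver_pid> <rsync_pid>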



Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Mike Christensen
Thanks for the advice.  In that case, I'll stick with the standard
approach of having a single SQL server and several web frontends and
employ a caching mechanism such as memcache as well.  Thanks!

Mike

On Mon, May 10, 2010 at 9:30 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Fri, May 7, 2010 at 10:12 PM, Mike Christensen m...@kitchenpc.com wrote:
 I'm considering using a cloud hosting solution for my website.  It
 will probably be either Amazon, Rackspace or Hosting.com.  I'm still
 comparing.  Either way, my site will consist of multiple virtual
 server instances that I can create and destroy as needed.  Each
 virtual machine instance will be self contained, meaning it'll run the
 website and its own instance of postgres.  The website will only talk
 to the local DB instance.  However, if I'm running several machine
 instances, I want all the databases to keep in sync preferably with as
 little lag as possible.

 This is not a master/slave replication issue where there's one big DB
 that's always up and everything syncs to, this is basically total
 peer-to-peer replication where any time data is updated on one server,
 an update command gets sent to all the other servers.  I would also
 have to address the issue when I provision a new virtual server, I'd
 have to import the current data into the DB seamlessly.

 What's the best way to do this?

 I think right now you're stuck coding it up yourself.  No small task.

  Looks like something like pgPool
 might be what I want, but I haven't looked into it deeply yet.
 Thanks!!

 The only thing that gets close is bucardo.




Re: [GENERAL] Crazy looking actual row count from explain analyze

2010-05-10 Thread Gordon Shannon


Tom Lane wrote:
 My first suspicion
 is that those are unvacuumed dead rows ... what's your vacuuming policy
 on this database?

Ah, I didn't know that number included dead tuples.  That probably explains
it.  pg_stat_user_tables says the table has 370,269 dead tuples.  On this
table, I have autovacuum_vacuum_scale_factor set to 0.02, so I believe the
table will have to have 869K dead tuples before vacuum will kick in.
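
(For reference, the per-table override and autovacuum's trigger
condition look roughly as sketched below; the table name is a
placeholder, and the row count is back-computed from the 869K figure.)

    -- per-table reloption (8.4-style; table name is a placeholder)
    ALTER TABLE mytable SET (autovacuum_vacuum_scale_factor = 0.02);
    -- autovacuum vacuums a table once:
    --   dead_tuples > autovacuum_vacuum_threshold + scale_factor * reltuples
    --   e.g. 50 + 0.02 * 43,450,000 ~= 869K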

 I have already fixed this query by adding a better index.


Tom Lane wrote:
 I think the new index might have fixed things largely by not bothering
 to index already-dead rows.

Actually, I put a partial index on status, where != 'V'.  That fits our
usage pattern of 99% of the records being 'V', so it's a tiny index and
satisfies this type of query very quickly.
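
(A sketch of that kind of partial index, with placeholder table and
column names:)

    CREATE INDEX records_status_not_v_idx ON records (status)
        WHERE status <> 'V';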

Thanks,

--gordon





Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Scott Marlowe
On Mon, May 10, 2010 at 7:04 PM, Mike Christensen m...@kitchenpc.com wrote:
 Thanks for the advice.  In that case, I'll stick with the standard
 approach of having a single SQL server and several web frontends and
 employ a caching mechanism such as memcache as well.  Thanks!

And with 9.0 it will be pretty easy to set up hot read PITR slaves so
you can build a pretty simple failover system.
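
(For reference, the 9.0 setup Scott mentions boils down to a few
settings, sketched below with placeholder hosts; see the 9.0 docs for
the full procedure.)

    # master, postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3

    # standby, postgresql.conf
    hot_standby = on

    # standby, recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com port=5432 user=replication'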



Re: [GENERAL] Archiver not picking up changes to archive_command

2010-05-10 Thread Greg Smith

Tom Lane wrote:

A look at the code shows that the archiver only notices SIGHUP once
per outer loop, so the change would only take effect once you catch up,
which is not going to help much in this case.  Possibly we should change
it to check for SIGHUP after each archive_command execution.
  


I never considered this a really important issue to sort out because I 
tell everybody it's unwise to put something complicated directly into 
archive_command.  Much better to call a script that gets passed %f/%p, 
then let that script do all the work; don't even have to touch the 
server config if you need to fix something then.  The lack of error 
checking that you get when just writing some shell commands directly in 
the archive_command itself horrifies me in a production environment.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us
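
(A minimal version of the kind of script Greg describes, built from the
commands in this thread, might look like the sketch below; the path,
hosts and bandwidth limit are placeholders to adapt.)

    #!/bin/bash
    # called from postgresql.conf as:
    #   archive_command = '/usr/local/bin/archive_wal.sh %p'
    set -e   # any failed transfer returns nonzero, so the archiver retries

    WAL="$1"   # %p: path of the WAL segment, relative to the data directory

    rsync -a "$WAL" postgres@192.168.80.174:/WAL_Archive/
    rsync --bwlimit=1875 -az "$WAL" postgres@14.121.70.98:/WAL_Archive/

With set -e, a failure anywhere makes the whole command fail, so
PostgreSQL keeps the segment and retries instead of discarding it; and
tuning the script later needs no change to the server config at all.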




Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Scott Marlowe
On Mon, May 10, 2010 at 7:21 PM, Mike Christensen m...@kitchenpc.com wrote:
 Man that sounds awesome.  I need that now.  So does that mean you'd
 have one beefy SQL server for all the updates and everything writes to
 that, and then you'd have a bunch of read-only servers and new data
 trickles into them from the master continuously?

Yep.  You can also do something similar but less efficient now with
slony or some other replication engine.  But they're less simple to
set up and usually less efficient than log shipping.



Re: [GENERAL] Archiver not picking up changes to archive_command

2010-05-10 Thread bricklen
On Mon, May 10, 2010 at 5:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 A look at the code shows that the archiver only notices SIGHUP once
 per outer loop, so the change would only take effect once you catch up,
 which is not going to help much in this case.  Possibly we should change
 it to check for SIGHUP after each archive_command execution.

 If you kill -9 the archiver process, the postmaster will just start
 a new one, but realize that that would result in two concurrent
 rsync's.  It might work ok to kill -9 the archiver and the current
 rsync in the same command.

                        regards, tom lane


I think I'll just wait it out, then sighup.

Thanks for looking into this.



Re: [GENERAL] Archiver not picking up changes to archive_command

2010-05-10 Thread bricklen
On Mon, May 10, 2010 at 6:12 PM, Greg Smith g...@2ndquadrant.com wrote:
 Tom Lane wrote:

 A look at the code shows that the archiver only notices SIGHUP once
 per outer loop, so the change would only take effect once you catch up,
 which is not going to help much in this case.  Possibly we should change
 it to check for SIGHUP after each archive_command execution.


 I never considered this a really important issue to sort out because I tell
 everybody it's unwise to put something complicated directly into
 archive_command.  Much better to call a script that gets passed %f/%p, then
 let that script do all the work; don't even have to touch the server config
 if you need to fix something then.  The lack of error checking that you get
 when just writing some shell commands directly in the archive_command itself
 horrifies me in a production environment.

 --
 Greg Smith  2ndQuadrant US  Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com   www.2ndQuadrant.us

Thanks Greg, that's a good idea. I'll revise that series of commands
into a script, and add some error handling as you suggest.


Cheers,

Bricklen



Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Mike Christensen
The concept of updating one database and doing all your reads from
another database is kinda confusing to me.  Does that mean you have to
design your whole app around that concept, have a different connection
string and what not for your writable database and read-only
databases?  I'm using Castle ActiveRecord which I'm not even sure
supports that (without a ton of custom code anyway).

Is there any sort of abstraction layer (like in the driver level) that
can abstract that and just make updates go to one DB and reads
round-robin to other DBs?  Hopefully there's a way to make this design
simple to implement.

Mike

On Mon, May 10, 2010 at 6:23 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Mon, May 10, 2010 at 7:21 PM, Mike Christensen m...@kitchenpc.com wrote:
 Man that sounds awesome.  I need that now.  So does that mean you'd
 have one beefy SQL server for all the updates and everything writes to
 that, and then you'd have a bunch of read-only servers and new data
 trickles into them from the master continuously?

 Yep.  You can also do something similar but less efficient now with
 slony or some other replication engine.  But they're less simple to
 set up and usually less efficient than log shipping.




Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Scott Marlowe
On Mon, May 10, 2010 at 8:00 PM, Mike Christensen m...@kitchenpc.com wrote:
 The concept of updating one database and doing all your reads from
 another database is kinda confusing to me.  Does that mean you have to
 design your whole app around that concept, have a different connection
 string and what not for your writable database and read-only
 databases?  I'm using Castle ActiveRecord which I'm not even sure
 supports that (without a ton of custom code anyway).

 Is there any sort of abstraction layer (like in the driver level) that
 can abstract that and just make updates go to one DB and reads
 round-robin to other DBs?  Hopefully there's a way to make this design
 simple to implement.

Pretty sure pgpool can do the read from these dbs, write to this one.
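
(Roughly, the relevant pgpool-II knobs are the ones sketched below;
this is only a sketch, since the exact semantics depend on the pgpool
mode and version, so check the pgpool documentation.)

    # pgpool.conf (sketch)
    load_balance_mode = true    # spread SELECTs across the backends
    master_slave_mode = true    # send writes only to the master
    replicate_select = false    # replication mode: don't run SELECTs on all nodes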



Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Mike Christensen
Man that sounds awesome.  I need that now.  So does that mean you'd
have one beefy SQL server for all the updates and everything writes to
that, and then you'd have a bunch of read-only servers and new data
trickles into them from the master continuously?

Mike

On Mon, May 10, 2010 at 6:09 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Mon, May 10, 2010 at 7:04 PM, Mike Christensen m...@kitchenpc.com wrote:
 Thanks for the advice.  In that case, I'll stick with the standard
 approach of having a single SQL server and several web frontends and
 employ a caching mechanism such as memcache as well.  Thanks!

 And with 9.0 it will be pretty easy to set up hot read PITR slaves so
 you can build a pretty simple failover system.




Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread John R Pierce

Scott Marlowe wrote:
 Is there any sort of abstraction layer (like in the driver level) that
 can abstract that and just make updates go to one DB and reads
 round-robin to other DBs?  Hopefully there's a way to make this design
 simple to implement.

 Pretty sure pgpool can do the read from these dbs, write to this one.

how would it know if you're going to do updates later on in a transaction?





Re: [GENERAL] peer-to-peer replication with Postgres

2010-05-10 Thread Scott Marlowe
On Mon, May 10, 2010 at 8:59 PM, John R Pierce pie...@hogranch.com wrote:
 Scott Marlowe wrote:

 Is there any sort of abstraction layer (like in the driver level) that
 can abstract that and just make updates go to one DB and reads
 round-robin to other DBs?  Hopefully there's a way to make this design
 simple to implement.


 Pretty sure pgpool can do the read from these dbs, write to this one.



 how would it know if you're going to do updates later on in a transaction?

Go here:
http://pgpool.projects.postgresql.org/pgpool-II/doc/pgpool-en.html
look for replicate_select



Re: [GENERAL] Archiver not picking up changes to archive_command

2010-05-10 Thread Fujii Masao
On Tue, May 11, 2010 at 9:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 bricklen brick...@gmail.com writes:
 Due to some heavy processing today, we have been falling behind on
 shipping log files (by about a 1000 logs or so), so wanted to up our
 bwlimit like so:

 rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
 --bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/

 The db is showing the change.
 SHOW archive_command:
 rsync -a %p postg...@192.168.80.174:/WAL_Archive/ && rsync
 --bwlimit=1875 -az %p postg...@14.121.70.98:/WAL_Archive/

 Yet, the running processes never get above the original bwlimit of
 1250. Have I missed a step? Would kill -HUP <archiver pid> help?
 (I'm leery of trying that untested though)

 A look at the code shows that the archiver only notices SIGHUP once
 per outer loop, so the change would only take effect once you catch up,
 which is not going to help much in this case.  Possibly we should change
 it to check for SIGHUP after each archive_command execution.

+1

Here is the simple patch to do so.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


pgarch_check_sighup_v1.patch
Description: Binary data



[GENERAL] Run Vacuum Through JDBC

2010-05-10 Thread Yan Cheng CHEOK
I was wondering, how can I check whether the Vacuum operation had been
executed without problems?

I use the following Java code to execute the Vacuum operation:

final Statement st2 = connection.createStatement();
st2.executeUpdate("VACUUM FULL ANALYZE VERBOSE");
st2.close();

Nothing prints out at the console.

I checked the server status through pgAdmin, and also got no hint of
whether the Vacuum operation had been executed.

Thanks and Regards
Yan Cheng CHEOK
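
(One way to surface what the server reported is sketched below. It
assumes the stock PostgreSQL JDBC driver, which chains server
NOTICE/INFO output, including the VACUUM VERBOSE lines, onto the
statement as SQLWarnings; the connection details are placeholders.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLWarning;
    import java.sql.Statement;

    public class VacuumCheck {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
            conn.setAutoCommit(true); // VACUUM cannot run inside a transaction block
            Statement st = conn.createStatement();
            st.execute("VACUUM FULL ANALYZE VERBOSE");
            // the driver chains the server's INFO/NOTICE output as warnings
            for (SQLWarning w = st.getWarnings(); w != null; w = w.getNextWarning()) {
                System.out.println(w.getMessage());
            }
            st.close();
            conn.close();
        }
    }

If the VACUUM fails, execute() throws an SQLException, so an
exception-free run plus the VERBOSE output is the confirmation asked
for above.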


  

