Re: [GENERAL] Merge replication with Postgresql on Windows?

2010-09-28 Thread novnovice

That's a surprising response. But it makes sense, at least as one
perspective. I have written light duty sync systems but figured that there
would be some battle tested postgresql solution that was more robust than I
could cobble together. As in, if I invest 40 hours learning replication
system X, I'd be further along than if I'd invested the same 40 hours
writing my own system from scratch. It's not simple stuff. It would still be
good to eval whatever canned solutions are out there. I have googled this
topic of course; among the candidates none seemed to be a great match up
with what I hoped to find.
-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Merge-replication-with-Postgresql-on-Windows-tp2856176p2856288.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Killing stuck queries and preventing queries from getting stuck

2010-09-28 Thread Craig Ringer
On 28/09/10 11:25, Tim Uckun wrote:
 On Tue, Sep 28, 2010 at 3:39 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Tim Uckun timuc...@gmail.com writes:
 Is there a way to tell postgres to stop any query that runs longer
 than a specified amount of time? Say an hour?

 Setting statement_timeout would do that.  You ought to figure out
 what's causing the performance problem, though, instead of just
 zapping things ...
 
 Well the query is pretty heavy but it gets run a lot. There is a
 distinct in there which seems to be the cause of most of the headaches
 but it's going to take a while to redo the application to not use
 distinct.
 
 The query gets run a lot and 99.99% of the time it runs successfully
 and the daemon goes on its merry way. Occasionally it seems to get
 stuck and killing the daemon does not unstick it.

Useful things to try when you have a stuck backend:

- attach strace to it and see if it's doing anything
  that involves system calls

- attach gdb to it and get a backtrace to see what
  it's up to. If it's using CPU, do this multiple times
  to see if it's in some kind of infinite loop, as you'll
  get a snapshot of different stacks if so. See:

http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

- (on linux; you didn't mention your OS):
  cat /proc/$pid/stack , where $pid is the process id
  of the stuck backend, to see what the backend process is
  up to in the kernel.


... then post the output of all those tests here, along with the
contents of select * from pg_stat_activity, select * from pg_locks
and anything from the postgresql log files that looks possibly relevant.
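
For the statement_timeout suggestion above, a minimal sketch; it can be
set per session, per role (the role name here is hypothetical) or in
postgresql.conf:

SET statement_timeout = '1h';                         -- this session only
ALTER ROLE daemon_user SET statement_timeout = '1h';  -- applies at next login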

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Implicit CAST is not working in Postgresql 8.4

2010-09-28 Thread AI Rumman
I migrated data from Postgresql 8.1 to Postgresql 8.4 using pg_dump.
But now I found that most of the queries in my application are failing.
Investigating the problem, I found that no function is available in
the DB to CAST INT to TEXT etc.
Most of the queries fail because implicit casting is not working
properly.
Any idea how to solve it?


Re: [GENERAL] Implicit CAST is not working in Postgresql 8.4

2010-09-28 Thread Thom Brown
On 28 September 2010 07:37, AI Rumman rumman...@gmail.com wrote:
 I migrated data from Postgresql 8.1 to Postgresql 8.4 using pg_dump.
 But now I found that most of the queries in my application are failing.
 Investigating the problem, I found that no function is available in
 the DB to CAST INT to TEXT etc.
 Most of the queries fail because implicit casting is not working
 properly.
 Any idea how to solve it?

As of 8.3 non-text types don't automatically cast to text.  You'll
need to use a cast.

So:

WHERE my_int_col = my_text_col

becomes

WHERE my_int_col::text = my_text_col

Regards
-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Implicit CAST is not working in Postgresql 8.4

2010-09-28 Thread Pavel Stehule
Hello

see 
http://petereisentraut.blogspot.com/2008/03/readding-implicit-casts-in-postgresql.html
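
The approach in that post amounts to explicitly recreating the old
casts. A sketch of one of them (run as superuser; note that re-adding
implicit casts can mask real bugs):

CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
    LANGUAGE sql AS 'SELECT textin(int4out($1));';
CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;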

Regards

Pavel Stehule

2010/9/28 AI Rumman rumman...@gmail.com:
 I migrated data from Postgresql 8.1 to Postgresql 8.4 using pg_dump.
 But now I found that most of the queries in my application are failing.
 Investigating the problem, I found that no function is available in
 the DB to CAST INT to TEXT etc.
 Most of the queries fail because implicit casting is not working
 properly.
 Any idea how to solve it?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Merge replication with Postgresql on Windows?

2010-09-28 Thread John R Pierce

 On 09/27/10 11:18 PM, novnovice wrote:

That's a surprising response. But it makes sense, at least as one
perspective. I have written light duty sync systems but figured that there
would be some battle tested postgresql solution that was more robust than I
could cobble together. As in, if I invest 40 hours learning replication
system X, I'd be further along than if I'd invested the same 40 hours
writing my own system from scratch. It's not simple stuff. It would still be
good to eval whatever canned solutions are out there. I have googled this
topic of course; among the candidates none seemed to be a great match up
with what I hoped to find.


the general case of asynchronous offline replication fundamentally 
breaks one of the tenets of SQL: that COMMIT only returns true if the 
data is validly and reliably committed to the Truth.


multimaster databases create a lot of problems for which there are no 
good answers that don't compromise data integrity.  delaying the 
synchronization by indeterminate intervals via offline updatable 
replicas aggravates this enormously.




btw, I don't speak for the 'postgresql community', i'm just a database 
user who happens to be on this list.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query to get the next available unique suffix for a name

2010-09-28 Thread Arjen Nienhuis
 4) Do a SELECT on each row that starts with MikeChristensen and then
 trying to append the row count to the end, this might not be exact but
 it's somewhat intelligent as a starting point.  However, this might
 require some special indexes on this table to quickly scan rows that
 start with a certain name.  However, if I get to the point where this
 becomes slow then I can worry about it at that point since this would
 only be run on new account creation and not super speed critical.


CREATE TABLE test (a text PRIMARY KEY);

INSERT INTO test (a)
SELECT * FROM (SELECT 'MikeChristensen' || generate_series AS t FROM
generate_series(1,1)) x
WHERE t NOT in (SELECT a FROM test) LIMIT 1 RETURNING *;

Groeten, Arjen

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query to get the next available unique suffix for a name

2010-09-28 Thread Mike Christensen
On Tue, Sep 28, 2010 at 12:44 AM, Arjen Nienhuis a.g.nienh...@gmail.com wrote:
 4) Do a SELECT on each row that starts with MikeChristensen and then
 trying to append the row count to the end, this might not be exact but
 it's somewhat intelligent as a starting point.  However, this might
 require some special indexes on this table to quickly scan rows that
 start with a certain name.  However, if I get to the point where this
 becomes slow then I can worry about it at that point since this would
 only be run on new account creation and not super speed critical.


 CREATE TABLE test (a text PRIMARY KEY);

 INSERT INTO test (a)
 SELECT * FROM (SELECT 'MikeChristensen' || generate_series AS t FROM
 generate_series(1,1)) x
 WHERE t NOT in (SELECT a FROM test) LIMIT 1 RETURNING *;

Now that's an interesting approach, I will play around with that - thanks!!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Scaling PostgreSQL-9

2010-09-28 Thread sandeep prakash dhumale
Hello All,

Need some help in scaling PostgreSQL:

I have a table with 400M records with 5 int columns, having an index on only 1 
column.

Rows are updated by a perl script which takes 10k numbers in one transaction 
and fires single updates in a loop on the database, keeping track of the 
result returned. If zero is returned then at a later stage it does an insert. 
In short, if the record is present in the DB then it gets updated, and if not 
then it gets inserted. > 80% of the records are always there in the DB, so 
updates are more common.

We need to speed up this process as it takes about 150 sec to complete a 10k 
batch. From the database logs, on average each update takes about 15ms.

I tried to do a bulk delete of 1M numbers and a copy of the same but no luck so 
far. Delete and copy also take a long time, more than 1 hour each.

Few Details:

PostgreSQL 9.0.0 on 11.0 openSuSE 64-bit, shared_buffers=2.5GB, 
effective_cache_size=20GB, checkpoint_segments=200 (raised for 
bulkloading)

Server: Dell dual quadcore, 32GB RAM, DB partition on RAID 10 and pg_xlog on 
RAID 1.

p.s. Previously we had a slony read-only slave on 8.4.2, where delete was 
fast, about 7 min, and copy 6 min. We moved to PostgreSQL 9 for a read-only 
standby slave to remove the overhead caused by slony due to triggers (also the 
slave was always lagging in case of bulkloads on the master) in the hope of 
speeding up the process.

Any help would be much appreciated ...

With Regards
sandy



 

[GENERAL] huge difference in performance between MS SQL and pg 8.3 on UPDATE with full text search

2010-09-28 Thread Ivan Sergio Borgonovo
I know I'm comparing apples and orange but still the difference in
performance was quite astonishing.

I've 2 tables that look like:

create table products(
  id bigint,
  price double precision, /* legacy, don't ask */
  sometextfield1 varchar(128),
  sometextfield2 varchar(128),
  ...
);

one on a MS SQL 2005 and another one on pg 8.3.

MS SQL has full text search on the text fields (I don't know the
details).

pg product table has a tsvector field and a gin index defined on it, plus
a trigger that updates the tsvector field when the text fields change.
The trigger is written so that it actually updates the tsvector
only if the text fields have changed.
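
A minimal sketch of such a trigger, using the hypothetical column names
from the table above and a tsvector column called tsv:

CREATE OR REPLACE FUNCTION products_tsv_update() RETURNS trigger AS $$
DECLARE
    changed boolean := (TG_OP = 'INSERT');
BEGIN
    IF NOT changed THEN
        -- only recompute when one of the text fields actually changed
        changed := NEW.sometextfield1 IS DISTINCT FROM OLD.sometextfield1
                OR NEW.sometextfield2 IS DISTINCT FROM OLD.sometextfield2;
    END IF;
    IF changed THEN
        NEW.tsv := to_tsvector('english',
                       coalesce(NEW.sometextfield1, '') || ' ' ||
                       coalesce(NEW.sometextfield2, ''));
    END IF;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_tsv_trg
    BEFORE INSERT OR UPDATE ON products
    FOR EACH ROW EXECUTE PROCEDURE products_tsv_update();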

The hardware on the 2 machines is a bit different.
MS SQL 2005 runs on a box that is 2 years old, 2 SATA drives on RAID
1 hw, 2 Xeon dual core (I can't check details right now)
PG runs on a box that is more than 5 years old, 3 SCSI drives on RAID 5
hw, 2 Xeon single core HT (Intel(R) Xeon(TM) CPU 3.20GHz, cpu fam
14, model 4)

Both have 4Gb of ram.
shared_buffers is 240Mb.
Both share a similar workload.
Both boxes were in the same price class when they were bought.

In both tables I've to update price.
VACUUM FULL was performed just before updating the prices.

MS SQL receives a large sql file that contain all the UPDATE
statements.
PG receives a csv file that is loaded into a table with COPY and then
does the update as
update products set price=p.price from temp_price p where id=p.id and
price<>p.price;

MS SQL ingurgitates the whole sql file in around 10 sec.
pg takes more than 5 min to just run the single update statement.

I'd like to know if such a large difference can be justified just by
HW difference or by a difference in the process on how data are
loaded [1] or by the difference in performance of the 2 servers on
this kind of workload or by some postgres config before I decide how
to manage my time to redesign the import procedure.

If HW can justify such huge difference I'll devote my time to other
problems.

I'd say that a potential culprit could be the gin index. No matter
whether the tsvector is updated or not, if the row is changed I think the
index is going to be updated anyway.
Somehow MS SQL circumvents this problem, possibly by building the
equivalent of a tsvector column in a hidden table that
automatically joins to the text table.
This adds a join but reduces the cost of table modification since
simpler (btree) indexes are faster to update.

Still huge updates are rare and that table is mostly read and very
rarely written. During unusually huge updates I may consider to drop
the gin index.

[1] I'd expect that, excluding the time it takes to load the csv, a
single update should run faster than a huge list of single-statement
updates

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] huge difference in performance between MS SQL and pg 8.3 on UPDATE with full text search

2010-09-28 Thread Allan Kamau
On Tue, Sep 28, 2010 at 1:49 PM, Ivan Sergio Borgonovo
m...@webthatworks.it wrote:
 I know I'm comparing apples and oranges but still the difference in
 performance was quite astonishing.

 I've 2 tables that look like:

 create table products(
  id bigint,
  price double precision, /* legacy, don't ask */
  sometextfield1 varchar(128),
  sometextfield2 varchar(128),
  ...
 );

 one on a MS SQL 2005 and another one on pg 8.3.

 MS SQL has full text search on the text fields (I don't know the
 details).

 pg product table has a tsvector field and a gin index defined on it, plus
 a trigger that updates the tsvector field when the text fields change.
 The trigger is written so that it actually updates the tsvector
 only if the text fields have changed.

 The hardware on the 2 machines is a bit different.
 MS SQL 2005 runs on a box that is 2 years old, 2 SATA drives on RAID
 1 hw, 2 Xeon dual core (I can't check details right now)
 PG runs on a box that is more than 5 years old, 3 SCSI drives on RAID 5
 hw, 2 Xeon single core HT (Intel(R) Xeon(TM) CPU 3.20GHz, cpu fam
 14, model 4)

 Both have 4Gb of ram.
 shared_buffers is 240Mb.
 Both share a similar workload.
 Both boxes were in the same price class when they were bought.

 In both tables I've to update price.
 VACUUM FULL was performed just before updating the prices.

 MS SQL receives a large sql file that contain all the UPDATE
 statements.
 PG receives a csv file that is loaded into a table with COPY and then
 does the update as
 update products set price=p.price from temp_price p where id=p.id and
 price<>p.price;

 MS SQL ingurgitates the whole sql file in around 10 sec.
 pg takes more than 5 min to just run the single update statement.

 I'd like to know if such a large difference can be justified just by
 HW difference or by a difference in the process on how data are
 loaded [1] or by the difference in performance of the 2 servers on
 this kind of workload or by some postgres config before I decide how
 to manage my time to redesign the import procedure.

 If HW can justify such huge difference I'll devote my time to other
 problems.

 I'd say that a potential culprit could be the gin index. No matter
 whether the tsvector is updated or not, if the row is changed I think the
 index is going to be updated anyway.
 Somehow MS SQL circumvents this problem, possibly by building the
 equivalent of a tsvector column in a hidden table that
 automatically joins to the text table.
 This adds a join but reduces the cost of table modification since
 simpler (btree) indexes are faster to update.

 Still huge updates are rare and that table is mostly read and very
 rarely written. During unusually huge updates I may consider to drop
 the gin index.

 [1] I'd expect that, excluding the time it takes to load the csv, a
 single update should run faster than a huge list of single-statement
 updates

 --
 Ivan Sergio Borgonovo
 http://www.webthatworks.it


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Restarting the computer on which PG is running may help. I have access
to a server running PG 8.4 on Ubuntu and I have noticed that after a
day of intense use PG slows down significantly; free -g reports
almost no free memory available (something seems to leak memory on
this Ubuntu box). But when I restart the OS (Ubuntu), PG executes
my queries in good time. I don't seem to have similar problems on the
other servers running Fedora 12 and 13. But it could be my
configuration(s) on the Ubuntu box at fault; I am still investigating.

Allan.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scaling PostgreSQL-9

2010-09-28 Thread Vick Khera
On Tue, Sep 28, 2010 at 6:31 AM, sandeep prakash dhumale 
sandy9...@rediffmail.com wrote:

 I have a table with 400M records with 5 int columns having index only on 1
 column.


How is your data used?  Is the update done by the primary key?  Are the
queries segmented in some way that may divide the data based on one of the
other columns?

You should investigate using partitions to hold your data.  I'd recommend at
least 100 partitions.  I've done this with great success by dividing some
tables along one of the foreign keys.  My table was just a pure relation
relating the PKs of two other tables.  After analyzing the queries that were
most often run, we decided to split along the one which resulted in the
fewest partitions being referenced per search.

By splitting, we reduced a 200M+ row table into 100 tables of ~2M rows, and
the constraint exclusion (or altering the queries to directly access the
proper partition) reduced our query times dramatically.


Re: [GENERAL] PostgreSQL 9 Mac OS X one-click install - PL/perl broken

2010-09-28 Thread Dave Page
On Mon, Sep 27, 2010 at 11:35 PM, Larry Leszczynski
lar...@emailplus.org wrote:
 Hi -

 I use Dave Page's one-click installers for Mac OS X:

    http://www.enterprisedb.com/products/pgdownload.do#osx

 I recently installed PostgreSQL 9.0.0 on Mac OS X 10.5.8.  PL/perl will
 not load because it is looking for Perl 5.10 in the System dirs and I
 only have 5.8.8:

grumble. That's a PITA. We build on Snow Leopard now, because we
were getting more requests for x86_64 support than PPC.

 even though the plperl.so I built looks ok:

    $ file plperl.so
    plperl.so: Mach-O bundle i386

 Has anyone else run into this?  Anybody have any suggestions?

I could understand that if it's running the 64 bit image in the
binary, but that shouldn't be the case on Leopard I don't think -
unless this is Leopard Server? If so, you could try building the 64
bit binary:

CFLAGS=-isysroot /Developer/SDKs/MacOSX10.5.sdk
-mmacosx-version-min=10.5 -O2 -arch x86_64 ./configure

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Re: [GENERAL] Scaling PostgreSQL-9

2010-09-28 Thread Sandy


On Tue, 28 Sep 2010 17:45:16 +0530  wrote
On Tue, Sep 28, 2010 at 6:31 AM, sandeep prakash dhumale  wrote:

I have a table with 400M records with 5 int columns having index only on 1 
column.

How is your data used?  Is the update done by the primary key?  Are the queries 
segmented in some way that may divide the data based on one of the other 
columns?

You should investigate using partitions to hold your data.  I'd recommend at 
least 100 partitions.  I've done this with great success by dividing some 
tables along one of the foreign keys.  My table was just a pure relation 
relating the PKs of two other tables.  After analyzing the queries that were 
most often run, we decided to split along the one which resulted in the fewest 
partitions being referenced per search.

By splitting, we reduced a 200M+ row table into 100 tables of ~2M rows, and the 
constraint exclusion (or altering the queries to directly access the proper 
partition) reduced our query times dramatically.

Thanks Vick for your reply,

We did investigate partitioning, but it doesn't look like this would be an ideal 
candidate; perhaps you might be able to shed some more light on it.

The table contains unique mobile numbers and updates are based on this mobile 
number. Initially we thought of partitioning by range of mobile series and ended 
up with about 50 partitions (which can be increased as per your suggestion to 
100). The problem we faced was that updates were also slow, since updates were 
based on the mobile number while the constraint was on the mobile series. 
Moreover, SELECT queries with an IN clause containing random mobile numbers may 
end up scanning all the tables.

The table has mobile number, status and expiry date. I cannot partition on expiry 
date as all SELECTs are on mobile number.


Please suggest...

Sandy



Re: [GENERAL] Scaling PostgreSQL-9

2010-09-28 Thread Vick Khera
On Tue, Sep 28, 2010 at 8:56 AM, Sandy sandy9...@rediffmail.com wrote:

 Table has mobile number,status and expiry date. I can not partition on
 expiry date as all SELECT's are on mobile number.


Then partition on the mobile number.  If your updates and queries are all
tied to that, then it is the ideal candidate.

You should also make sure that you have proper indexes on each partition to
let the query executor quickly decide that a given table is not of any use
(you really don't want a sequence scan on each partition).

You will get speedups if you can achieve one or both of getting the query
planner to use the constraint exclusions to eliminate partitions and getting
the index scans to quickly skip over partitions which will not have the data
you want.
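
For the archives, the usual inheritance-based layout looks roughly like
this (a sketch with hypothetical names and number ranges; note each
child needs its own index, since indexes aren't inherited):

CREATE TABLE subscribers (mobile bigint NOT NULL, status int, expiry date);

CREATE TABLE subscribers_p0 (
    CHECK (mobile >= 9000000000 AND mobile < 9100000000)
) INHERITS (subscribers);
CREATE UNIQUE INDEX subscribers_p0_mobile ON subscribers_p0 (mobile);
-- ... more children covering the rest of the number space ...

SET constraint_exclusion = on;  -- lets the planner skip non-matching children
SELECT * FROM subscribers WHERE mobile = 9012345678;  -- scans only subscribers_p0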


Re: [GENERAL] Implicit CAST is not working in Postgresql 8.4

2010-09-28 Thread David Fetter
On Tue, Sep 28, 2010 at 12:37:46PM +0600, AI Rumman wrote:
 I migrated data from Postgresql 8.1 to Postgresql 8.4 using
 pg_dump.  But now I found that most of the queries in my
 application are failing.  Investigating the problem, I found
 that no function is available in the DB to CAST INT to TEXT etc.

This is due to sloppy coding in your code base, which was, unknown to
you, capable of producing surprising, and by surprising, I mean
glaringly wrong answers.  You need to find the places where your
code base contains this slop and clean it up.

I can't really recommend that you put in workarounds, as they don't
actually fix the bugs you've found.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] is there a 'replace' for tables/views?

2010-09-28 Thread Michael Enke
Hello group,

is there functionality like 'create or REPLACE table/view'
to circumvent drop with 'cascade' and recreation of dependent objects?

I have a table schema1.x (col1, col2)
and a view schema1.v -> schema1.x(col1)

Another table is schema2.x(col1, col2) with the same structure as x in schema1.

I have to drop the table schema1.x and create a view with the same name:
schema1.x (view) -> schema2.x (table)

The drop will work only with cascade, and after creation of the view schema1.x
I have to recreate the view schema1.v -> schema1.x(col1)
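
Spelled out, today's workaround looks like this (a sketch using the
names above):

BEGIN;
DROP TABLE schema1.x CASCADE;  -- also drops the dependent view schema1.v
CREATE VIEW schema1.x AS SELECT col1, col2 FROM schema2.x;
CREATE VIEW schema1.v AS SELECT col1 FROM schema1.x;
COMMIT;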

If there is no way today, would it be a good idea for a future release? ;-)

Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Visualize GiST Index

2010-09-28 Thread paolo

I first tried to solve the problem by deleting the second parameter from all
the calls to the stringToQualifiedNameList function. I wouldn't have expected
it, but it worked out; of course it was not the most elegant way.


Oleg Bartunov wrote:
 
 Get gevel from cvs, address is on http://www.sai.msu.su/~megera/wiki/Gevel
 

Thanks, the version from cvs compiles fine; there are some differences in
the output when you run installcheck, but it works.
I noticed that the expected output was generated several months ago, so
I suppose the differences come from using different versions of
postgresql.


Oleg Bartunov wrote:
 
 btw, have you seen http://www.sai.msu.su/~megera/wiki/Rtree_Index ?
 

This also helps.

Paolo Fogliaroni
-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Visualize-GiST-Index-tp2849197p2857004.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Merge replication with Postgresql on Windows?

2010-09-28 Thread Joshua D. Drake
On Mon, 2010-09-27 at 20:08 -0700, novnovice wrote:
 Can anyone recommend a relatively simple merge replication package that would
 work well on windows and which relies on one of the current postgresql
 versions? 9 would be fine for my needs. I'm a fairly unsophisticated
 postgresql user; and not very experienced with replication on other
 databases. So the simplest to configure and maintain solution would be best.
 I am not sure but it sounds like the new replication features that come
 baked in to postgres 9 are not so oriented towards merge replication - is
 that right?
 
 My use case would involve a primary postgresql database and several
 postgresql databases on disconnected notebook computers. All dbs need to be
 able to support updates inserts etc; and then hopefully the replication
 software would help with things like conflict resolution.

As far as I know, this does not exist for Postgresql.

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Merge replication with Postgresql on Windows?

2010-09-28 Thread Randal L. Schwartz
 novnovice == novnovice  novnov...@gmail.com writes:

novnovice That's a surprising response. But it makes sense, at least as
novnovice one perspective. I have written light duty sync systems but
novnovice figured that there would be some battle tested postgresql
novnovice solution that was more robust than I could cobble
novnovice together. As in, if I invest 40 hours learning replication
novnovice system X, I'd be further along than if I'd invested the same
novnovice 40 hours writing my own system from scratch. It's not simple
novnovice stuff. It would still be good to eval whatever canned
novnovice solutions are out there. I have googled this topic of course;
novnovice among the candidates none seemed to be a great match up with
novnovice what I hoped to find.

CouchDB is open source, and was designed with this sort of disconnected
syncing in mind.  You can hear my interview with Jan Lehnardt at
http://twit.tv/floss36.

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
mer...@stonehenge.com URL:http://www.stonehenge.com/merlyn/
Smalltalk/Perl/Unix consulting, Technical writing, Comedy, etc. etc.
See http://methodsandmessages.vox.com/ for Smalltalk and Seaside discussion

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Merge replication with Postgresql on Windows?

2010-09-28 Thread novnovice

Joshua, you're with command prompt...you had/have a product called mammoth
replicator which I looked at. It seemed approx what I was after but the
project didn't seem very alive. Was my use case not what mammoth was about?
Or is it just that mammoth is basically gone?
-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Merge-replication-with-Postgresql-on-Windows-tp2856176p2857075.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Merge replication with Postgresql on Windows?

2010-09-28 Thread Joshua D. Drake
On Tue, 2010-09-28 at 09:32 -0700, novnovice wrote:
 Joshua, you're with command prompt...you had/have a product called mammoth
 replicator which I looked at. It seemed approx what I was after but the
 project didn't seem very alive. Was my use case not what mammoth was about?
 Or is it just that mammoth is basically gone?

Oh it's alive. We are down to the last feature of the 1.9 release before
going into the wild again. That said, no -- what we offer would not help
you.

We offer Master-N-Slave replication. 

Sincerely,

Joshua D. Drake

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Tom Lane
Brian Hirt bh...@me.com writes:
 I'm testing pg_upgrade out and ran into a couple of problems.   First when I 
 did pg_upgrade --check I got the tsearch2 tables preventing the upgrade from 
 happening:
 Database:  testdatabase
   public.pg_ts_dict.dict_init
   public.pg_ts_dict.dict_lexize
   public.pg_ts_parser.prs_start
   public.pg_ts_parser.prs_nexttoken
   public.pg_ts_parser.prs_end
   public.pg_ts_parser.prs_headline
   public.pg_ts_parser.prs_lextype

 For testing, at this point I really didn't care about tsearch, so I simply 
 dropped those tables so I could revisit them later -- however, I'm confused 
 about these tables in general, both pg_catalog.pg_ts_parser and 
 public.pg_ts_parser exist with different, albeit similar, schemas.   I think 
 that the table in public is no longer used and was a remnant from pre-8.3 
 when tsearch2 wasn't part of the distribution, can anyone confirm this?

Correct, you should just drop the ones that aren't in pg_catalog.


 Anyway, after removing the tsearch tables, I did pg_upgrade --check again and 
 it said the clusters were compatible. I proceeded to run the upgrade command 
 and it bombed out in the Restoring user relation files section.

That sure looks like a bug, but there's not enough info here to
diagnose.  Is there actually a pg_toast.pg_toast_2147483647 table
in the 8.4 cluster?  (I'm betting not.)  Could you try extracting
a test case?  I wonder whether pg_dump -s from the 8.4 database,
loaded into a fresh 8.4 database, would be enough to reproduce.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG 9.0 large object permissions

2010-09-28 Thread Tom Lane
Andy Colson a...@squeakycode.net writes:
 Having a little problem with my large objects.  In 8.4 the db owner was 
 'andy', but my web connected as payuser (which had appropriate permissions).

 The backup/restore to pg9 made all the large objects owned by 'andy', 
 and I'm guessing payuser does not have select rights.

If you just want it to work like it did before, I believe there's a server
parameter you can change to disable permissions checks on large objects.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A note on pg_upgrade and missing pg_upgrade_support.so

2010-09-28 Thread Bruce Momjian
Ian Barwick wrote:
  Well, that is step #4:
 
        http://www.postgresql.org/docs/9.0/static/pgupgrade.html
        4.
 
        Install pg_upgrade
 
        Install pg_upgrade and pg_upgrade_support in the new PostgreSQL 
  cluster
 
  Was that not clear enough?
 
  I hope my comment didn't sound insulting.  I really want to know how
  that doc item can be made clearer.
 
 No insult taken :). With the benefit of hindsight it's plenty clear;
 the problem was:
 
 a) I was doing this in a hurry (had a small amount of time to kill and
 a dev machine with an older beta on it)
 b) got sidetracked by this thread which appears to describe the same
 problem: http://archives.postgresql.org/pgsql-testers/2010-06/msg0.php
 and which was popping up pretty high in Google.

Yep, we need to address this.

 Looking over the doc page again, if scanning over it, it's a bit easy
 to misread it as something like Install pg_upgrade for pg_upgrade
 support in the new PostgreSQL cluster.

The big problem was that the title said Install pg_upgrade, but the
detail had you installing two things, one of which was pg_upgrade.  It
was using pg_upgrade in both a generic sense, and in the
/contrib/pg_upgrade sense, which is bound to confuse, as you said.

I have attached a doc diff, backpatched to 9.0, that includes
pg_upgrade_support in the section title. This should avoid future
confusion.  Thanks for your report.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/doc/src/sgml/pgupgrade.sgml b/doc/src/sgml/pgupgrade.sgml
index 2a806b5..7bc939c 100644
--- a/doc/src/sgml/pgupgrade.sgml
+++ b/doc/src/sgml/pgupgrade.sgml
@@ -199,7 +199,7 @@ gmake prefix=/usr/local/pgsql.new install
 </step>
  
 <step>
-<title>Install pg_upgrade</title>
+<title>Install pg_upgrade and pg_upgrade_support</title>
 
 <para>
  Install <application>pg_upgrade</> and

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 9 Mac OS X one-click install - PL/perl broken

2010-09-28 Thread Larry Leszczynski

On Tue, 28 Sep 2010 13:35 +0100, Dave Page dp...@pgadmin.org wrote:
 
  I recently installed PostgreSQL 9.0.0 on Mac OS X 10.5.8.  PL/perl will
  not load because it is looking for Perl 5.10 in the System dirs and I
  only have 5.8.8:
 
 grumble. That's a PITA. We build on Snow Leopard now, because we
 were getting more requests for x86_64 support than PPC.
 
 [snip]
 
 you could try building the 64 bit binary:
 
 CFLAGS=-isysroot /Developer/SDKs/MacOSX10.5.sdk
 -mmacosx-version-min=10.5 -O2 -arch x86_64 ./configure

Excellent!  Looks like that worked fine.  I just added the --with-perl
option to configure.

Thanks Dave!

Larry

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 9 Mac OS X one-click install - PL/perl broken

2010-09-28 Thread Dave Page
On Tue, Sep 28, 2010 at 6:46 PM, Larry Leszczynski lar...@emailplus.org wrote:

 On Tue, 28 Sep 2010 13:35 +0100, Dave Page dp...@pgadmin.org wrote:

  I recently installed PostgreSQL 9.0.0 on Mac OS X 10.5.8.  PL/perl will
  not load because it is looking for Perl 5.10 in the System dirs and I
  only have 5.8.8:

 grumble. That's a PITA. We build on Snow Leopard now, because we
 were getting more requests for x86_64 support than PPC.

 [snip]

 you could try building the 64 bit binary:

 CFLAGS=-isysroot /Developer/SDKs/MacOSX10.5.sdk
 -mmacosx-version-min=10.5 -O2 -arch x86_64 ./configure

 Excellent!  Looks like that worked fine.  I just added the --with-perl
 option to configure.

 Thanks Dave!

You're welcome. I guess it is running the 64bit image - is your
machine Leopard Server?

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Killing stuck queries and preventing queries from getting stuck

2010-09-28 Thread Alban Hertroys
On 28 Sep 2010, at 1:41, Tim Uckun wrote:

 Sometimes some queries get stuck in that they run for hours and
 hours. They never stop running.  Killing the daemon does not stop the
 query from running.


You really should try to find out why they get stuck. Killing stuck clients 
isn't going to solve your problem (aside from the fact that you probably 
shouldn't be using kill -9 on them, that's like using a jackhammer on a jammed 
door).

Some things to look into: Are those queries waiting on a lock by another daemon 
maybe? Are some of them keeping transactions open for a long time without 
committing them (or rolling them back)?

I recall you were having another problem (with deleting records). This all 
smells like you either are waiting for locks on records or that the statistics 
used for query planning aren't reflecting the actual situation.

Have a look in pg_locks and check the query plans of some of your more 
problematic queries (see: explain analyse) to see what's going on there. 
Posting the results of those here would allow more eyes to look into your 
issues, in case it isn't obvious to you. There's some good documentation on 
these subjects too.
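
A starting point for the lock check might be (a sketch against the
8.4-era catalogs, where pg_stat_activity still has procpid and
current_query):

SELECT l.pid, l.locktype, l.mode, l.granted,
       now() - a.xact_start AS xact_age, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE NOT l.granted;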

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Exclusion constraint issue

2010-09-28 Thread Eric McKeeth
On Fri, Sep 24, 2010 at 3:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Eric McKeeth eldi...@gmail.com writes:
  why would I get the following error, since the period() function is in
 fact
  declared as immutable?

  test=# ALTER TABLE test3 ADD exclude using
  gist(period(effect_date::timestamptz, expire_date::timestamptz) with
  &&);
  ERROR:  functions in index expression must be marked IMMUTABLE

 period() might be immutable, but those casts from date to timestamptz
 are not, because they depend on the TimeZone parameter.

regards, tom lane



Thanks for pointing out what I was overlooking. After a bit of further
investigation and testing it seems like the period type I found isn't going
to work without modification for my constraint, so I ended up with the
following to get the semantics I need:

alter table test3 add exclude using gist(
box(
point(
case when effect_date = '-Infinity'::date
then '-Infinity'::double precision
else date_part('epoch'::text, effect_date)
end,
1
),
point(
case when expire_date = 'Infinity'::date
then 'Infinity'::double precision
else date_part('epoch', expire_date) - 1
end,
1
)
)
with &&
);

This is ugly, but it does seem to enforce the constraint I need, of
non-overlapping dates where sharing an endpoint is not considered an
overlap. The case blocks are because the date_part bit always returns 0 for
infinite dates, which seemed a bit counter-intuitive. Any suggestions on how
I could improve on it?
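
For what it's worth, a quick sanity check of the intended semantics
could look like this (hypothetical data, assuming effect_date and
expire_date are date columns of test3):

INSERT INTO test3 (effect_date, expire_date) VALUES ('2010-01-01', '2010-06-30');
INSERT INTO test3 (effect_date, expire_date) VALUES ('2010-06-30', '2010-12-31');  -- shared endpoint: allowed
INSERT INTO test3 (effect_date, expire_date) VALUES ('2010-03-01', '2010-04-01');  -- overlaps the first row: rejected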


Re: [GENERAL] huge difference in performance between MS SQL and pg 8.3 on UPDATE with full text search

2010-09-28 Thread Alban Hertroys
On 28 Sep 2010, at 12:49, Ivan Sergio Borgonovo wrote:
 The hardware on the 2 machines is a bit different.
 MS SQL 2005 runs on a box that is 2 years old, 2 SATA drives on RAID
 1 hw, 2 Xeon dual core (I can't check details right now)
 PG runs on a box that has more than 5 years, 3 SCSI drives on RAID 5
 hw, 2 Xeon single core HT (Intel(R) Xeon(TM) CPU 3.20GHz, cpu fam
 14, model 4)

RAID-5 isn't ideal for databases, the RAID-1 in the other box is probably 
faster. Especially since it's on more modern hardware.

 In both tables I've to update price.
 VACUUM FULL was performed just before updating the prices.

VACUUM FULL? Was that really necessary? You did REINDEX after that, didn't you? 
If not, your indexes became bloated.
If the table wasn't empty before you probably meant to do a VACUUM ANALYSE, but 
if it was, just ANALYSE would have been sufficient.

 MS SQL receives a large sql file that contain all the UPDATE
 statements.
 PG receive a csv file that is loaded into a table with COPY and then
 does the update as
 update products set price=p.price from temp_price where id=p.id and
 pricep.price;

Did you ANALYSE between loading the data and updating?

Also, executing the trigger on each copied line is likely to be a little slow, 
due to the overhead of calling a stored procedure (especially if it's plpgsql). 
It's probably quite a bit faster to disable the trigger and create the 
gin-index after loading the data.
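
Roughly, with hypothetical trigger and index names:

ALTER TABLE products DISABLE TRIGGER products_tsv_trg;
DROP INDEX products_tsv_gin;
-- ... bulk COPY / UPDATE here ...
ALTER TABLE products ENABLE TRIGGER products_tsv_trg;
CREATE INDEX products_tsv_gin ON products USING gin (tsv);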

 MS SQL ingurgitate the whole sql file in around 10sec.
 pg takes more than 5 min to just run the single update statement.

An EXPLAIN ANALYSE of that statement would tell what it's doing and what's 
taking so long. A simple EXPLAIN would probably be sufficient to see what query 
plan it thinks it needs though.

Since an update means PG has to insert and delete records, the fact that the 
database is on RAID-5 is probably a factor here. How much? No idea.

 I'd like to know if such a large difference can be justified just by
 HW difference or by a difference in the process on how data are
 loaded [1] or by the difference in performance of the 2 servers on
 this kind of workload or by some postgres config before I decide how
 to manage my time to redesign the import procedure.

Did you tune that database? Several options (work_mem for example) could 
significantly improve your performance if you can set them higher (or reduce 
them if you set them too high). You can do that per session too.

 If HW can justify such huge difference I'll devote my time to other
 problems.

Partially, yes, but not that much I think.

 I'd say that a potential culprit could be the gin index. No matter
 if the tsvector is updated or not, if the row is changed I think the
 index is going to be updated anyway.

gin indexes require relatively large amounts of RAM. If you didn't assign much 
in your settings then it's quite possible that the database can't keep the 
index in memory or that things have to spill to disk. Leave enough room for the 
OS's disk cache though; Postgres benefits from that as well.

Is there any particular reason you went with a gin index and not a gist one? 
Gin can be faster, but consumes (much) more memory, while gist is also quite 
good with text searches and doesn't require quite as much memory.

 [1] I'd expect that excluding the time it takes to load the csv a
 single update should run faster than a huge list of single statement
 update


Correct.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 9 Mac OS X one-click install - PL/perl broken

2010-09-28 Thread Larry Leszczynski
Hi Dave -

  you could try building the 64 bit binary:
 
  CFLAGS=-isysroot /Developer/SDKs/MacOSX10.5.sdk
  -mmacosx-version-min=10.5 -O2 -arch x86_64 ./configure
 
  Excellent!  Looks like that worked fine.  I just added the --with-perl
  option to configure.
 
  Thanks Dave!
 
 You're welcome. I guess it is running the 64bit image - is your
 machine Leopard Server?

Not sure how I would check...   sw_vers gives me:

ProductName:Mac OS X
ProductVersion: 10.5.8
BuildVersion:   9L31a


Thanks!
Larry

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query to get the next available unique suffix for a name

2010-09-28 Thread Sam Mason
On Mon, Sep 27, 2010 at 06:36:25PM -0700, Mike Christensen wrote:
 Thus, the users table already has:
 
 MikeChristensen1
 MikeChristensen2
 MikeChristensen3
 MikeChristensen4
 
 I want to write a SQL query that figures out that MikeChristensen5 is
 the next available username and thus suggest it.

Why not do something like:

  SELECT max(nullif(substring(username FROM '[0-9]*$'),'')::numeric) AS lastnum
  FROM users
  WHERE username ~ '^MikeChristensen[0-9]*$';

It's a pretty direct translation from what I'd do in any imperative
language.
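
Building on that, suggesting the next free name directly could be done
in one statement (a sketch; as with the other suggestions in this
thread, two concurrent signups can still race for the same suffix):

  SELECT 'MikeChristensen'
         || (coalesce(max(nullif(substring(username FROM '[0-9]*$'), '')::numeric), 0) + 1)::int
    FROM users
   WHERE username ~ '^MikeChristensen[0-9]*$';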

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 9 Mac OS X one-click install - PL/perl broken

2010-09-28 Thread Scott Ribe
On Sep 28, 2010, at 11:50 AM, Dave Page wrote:

 You're welcome. I guess it is running the 64bit image - is your
 machine Leopard Server?

That's irrelevant. The 32-bit vs 64-bit default is for the kernel and 
extensions, not for applications. On 64-bit hardware, apps can be run as 
64-bit, and will be if there's a 64-bit executable, regardless of which mode 
the kernel is booted into.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] huge difference in performance between MS SQL and pg 8.3 on UPDATE with full text search

2010-09-28 Thread Sam Mason
On Tue, Sep 28, 2010 at 02:35:09PM +0300, Allan Kamau wrote:
 I have access
 to a server running PG 8.4 on Ubuntu and I have noticed that after a
 day of intense use the PG slows down significantly, free -g reports
 almost no free memory available (something seems to leak memory on
 this Ubuntu box).

I'm not sure which values you're looking at in free, but you generally
want there to be very little free memory--you want the memory to be used
for caching the disk.  As long as the cached data isn't dirty (i.e.
unwritten data) then it can be released very quickly and made available
for whatever is needed, or better serve as a useful cache.

Rebooting normally just hides other issues.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scaling PostgreSQL-9

2010-09-28 Thread Igor Neyman
 

 -Original Message-
 From: sandeep prakash dhumale [mailto:sandy9...@rediffmail.com] 
 Sent: Tuesday, September 28, 2010 6:32 AM
 To: pgsql-general@postgresql.org
 Subject: Scaling PostgreSQL-9
 
 Hello All,
 
 Need some help in scaling PostgreSQL:
 
 I have a table with 400M records with 5 int columns having 
 index only on 1 column.
 
 Rows are updated by a perl script which takes 10k numbers in 
 one transaction and fires single updates in a loop on the 
 database, keeping track of the result returned. If zero is 
 returned then at a later stage it does an insert. In short, if 
 the record is present in the DB then it gets updated and if 
 not then it gets inserted. > 80% of the records are always there in 
 the DB so updates are more.
 
 We need to speed up this process as it takes about 150 sec to 
 complete 10k batch. From database logs on the avg each update 
 takes about 15ms.
 

Your problem is that you process one record at a time in your loop,
meaning you have to make 10k trips to the database to process 10k
records.

Try creating a staging table in the database, 
copy all the records from your source into the staging table, 
i.e. using the COPY command if your source is a file.
Then, using a couple of sql statements: 
insert ... where not exists (select ...)
update ... where exists ...

insert new and update existing records.
Here you work with data sets inside the database,
which should be much faster than a procedural perl script.
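
A sketch of those two statements, assuming hypothetical tables
subscribers(mobile, status, expiry) and a staging table of the same
shape filled via COPY:

UPDATE subscribers s
   SET status = st.status, expiry = st.expiry
  FROM staging st
 WHERE s.mobile = st.mobile;

INSERT INTO subscribers (mobile, status, expiry)
SELECT st.mobile, st.status, st.expiry
  FROM staging st
 WHERE NOT EXISTS (SELECT 1 FROM subscribers s WHERE s.mobile = st.mobile);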


 I tried to do a bulk delete of 1M numbers and copy of the 
 same but no luck so far. Delete and copy also take a longer 
 time more than 1 hour each.


How did you do the copy? Again using a perl script to loop through 1M records
one at a time?


 
 Few Details:
 
 PostgreSQL 9.0.0 on 11.0 open SuSe-64bit,Shared 
 buffer=2.5GB,effective_cache_size=20GB,checkpoint_segments=200
  (raised for bulkloading)
 
 Server: dell dual quadcore ,32GB RAM, DB partition on RAID 10 
 and pg_xlog on RAID 1.
 
 p.s. Previously we were having slony read only slave on 8.4.2 
 , where delete was fast about 7 min and copy 6 min, we moved 
 to PostgreSQL 9 for read only Stand by slave to remove 
 overhead caused by slony due to triggers (also the slave was 
 always lagging in case of bulkloads on master)  in the hope 
 of speeding up the process.
 
 Any help would be much appriciated ...
 
 With Regards
 sandy
 
 


Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Brian Hirt
Looks like pg_upgrade is using 32-bit oids.  2147483647 is the max signed 32-bit 
int, but the oids for my tables are clearly larger than that. 

== output from pg_upgrade ==
Database: basement84_dev
relname: mit.company: reloid: 2147483647 reltblspace: 
relname: mit.company_history: reloid: 2147483647 reltblspace: 

== output from catalog query ==
basement84_dev=# select c.oid,c.relname from pg_catalog.pg_namespace n, 
pg_catalog.pg_class c where n.oid = c.relnamespace and n.nspname = 'mit';
oid |  relname   
+
 3000767630 | company
 3000767633 | company_history
(22 rows)


On Sep 28, 2010, at 10:51 AM, Tom Lane wrote:

 Brian Hirt bh...@me.com writes:
 I'm testing pg_upgrade out and ran into a couple of problems.   First when I 
 did pg_upgrade --check I got the tsearch2 tables preventing the upgrade from 
 happening:
 Database:  testdatabase
  public.pg_ts_dict.dict_init
  public.pg_ts_dict.dict_lexize
  public.pg_ts_parser.prs_start
  public.pg_ts_parser.prs_nexttoken
  public.pg_ts_parser.prs_end
  public.pg_ts_parser.prs_headline
  public.pg_ts_parser.prs_lextype
 
 For testing, at this point I really didn't care about tsearch, so I simply 
 dropped those tables so I could revisit them later -- however, I'm confused 
 about these tables in general, both pg_catalog.pg_ts_parser and 
 public.pg_ts_parser exist with different, albeit similar, schemas.   I think 
 that the table in public is no longer used and was a remnant from pre-8.3 
 when tsearch2 wasn't part of the distribution, can anyone confirm this?
 
 Correct, you should just drop the ones that aren't in pg_catalog.
 
 
 Anyway, after removing the tsearch tables, I did pg_upgrade --check again 
 and it said the clusters were compatible. I proceeded to run the upgrade 
 command and it bombed out in the Restoring user relation files section.
 
 That sure looks like a bug, but there's not enough info here to
 diagnose.  Is there actually a pg_toast.pg_toast_2147483647 table
 in the 8.4 cluster?  (I'm betting not.)  Could you try extracting
 a test case?  I wonder whether pg_dump -s from the 8.4 database,
 loaded into a fresh 8.4 database, would be enough to reproduce.
 
   regards, tom lane
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Autovacuum settings between systems

2010-09-28 Thread Chris Barnes

I am using 8.4.2 on centos 5.4 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 
2009 x86_64 x86_64 x86_64 GNU/Linux


I have autovacuum processes that appear to have been running most of the day.

There aren't any messages in the log, but there must be something wrong for it 
to take this long?



 datname  |           relname           |           mode           | granted | usename  |                            substr                             |          query_start          |       age       | procpid
----------+-----------------------------+--------------------------+---------+----------+---------------------------------------------------------------+-------------------------------+-----------------+---------
 database | table_pkey                  | AccessShareLock          | t       | postgres | autovacuum: VACUUM ANALYZE database.table                     | 2010-09-28 10:38:23.217668-04 | 04:55:14.134574 |   13494
 database | t8040_monthly_price_min_max | ShareUpdateExclusiveLock | t       | postgres | autovacuum: VACUUM ANALYZE database.t8040_monthly_price_min_ | 2010-09-28 10:38:23.217668-04 | 04:55:14.134574 |   13494
 database |                             | ExclusiveLock            | t       | postgres | autovacuum: VACUUM ANALYZE database.table                     | 2010-09-28 10:38:23.217668-04 | 04:55:14.134574 |   13494


This table has this many rows.
 count 
-------
 67083
(1 row)


#vacuum_cost_delay = 0  # 0-1000 milliseconds
#vacuum_cost_page_hit = 1   # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20# 0-10000 credits
#vacuum_cost_limit = 200# 1-10000 credits
autovacuum = on # Enable autovacuum subprocess?  'on' 
#log_autovacuum_min_duration = -1   # -1 disables, 0 logs all actions and
# their durations
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
#autovacuum_naptime = 1min  # time between autovacuum runs
#autovacuum_vacuum_threshold = 50   # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50  # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
#autovacuum_vacuum_cost_delay = 20  # default vacuum cost delay for
# autovacuum, -1 means use
# vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
vacuum_freeze_min_age = 10

  

Re: [GENERAL] pg_upgrade

2010-09-28 Thread Bruce Momjian
Brian Hirt wrote:
 Looks like pg_upgrade is using 32bit oids.  2147483647 is the max signed 32 
 bit int, but the oids for my tables are clearly larger than that. 
 
 == output from pg_upgrade ==
 Database: basement84_dev
 relname: mit.company: reloid: 2147483647 reltblspace: 
 relname: mit.company_history: reloid: 2147483647 reltblspace: 
 
 == output from catalog query ==
 basement84_dev=# select c.oid,c.relname from pg_catalog.pg_namespace n, 
 pg_catalog.pg_class c where n.oid = c.relnamespace and n.nspname = 'mit';
 oid |  relname   
 +
  3000767630 | company
  3000767633 | company_history
 (22 rows)
 

Interesting.  Odd it would report the max 32-bit signed int.  I wonder
if it somehow is getting set to -1.  I looked briefly at the pg_upgrade
code and it appears to put all oids in unsigned ints.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Killing stuck queries and preventing queries from getting stuck

2010-09-28 Thread Tim Uckun

 You really should try to find out why they get stuck. Killing stuck clients 
 isn't going to solve your problem (aside from the fact that you probably 
 shouldn't be using kill -9 on them, that's like using a jackhammer on a 
 jammed door).

Well, I didn't use kill -9; I used the pg_cancel_backend command.


 Some things to look into: Are those queries waiting on a lock by another 
 daemon maybe? Are some of them keeping transactions open for a long time 
 without committing them (or rolling them back)?

I'll take a look at that. It certainly would be simpler than attaching
a gdb session to the pid and getting a stacktrace.

 I recall you were having another problem (with deleting records). This all 
 smells like you either are waiting for locks on records or that the 
 statistics used for query planning aren't reflecting the actual situation.


I am having some performance issues with the database. I am also
trying to clean a lot of records out of the system.  Once all the
records I want to delete are gone, perhaps the problem will go away. I
am also looking at how the application can be refactored not to use
this particular DISTINCT query.

 Have a look in pg_locks and check the query plans of some of your more 
 problematic queries (see: explain analyse) to see what's going on there. 
 Posting the results of those here would allow more eyes to look into your 
 issues, in case it isn't obvious to you. There's some good documentation on 
 these subjects too.


I did look at the analyze, and basically postgres is saying the
distinct is killing me. If I remove that, the query is fine.  I
didn't look at the locks because the queries are read-only, so I didn't
think they would be affected by locks, but I will look at them and post
them here.
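
For what it's worth, a minimal starting point for the lock check (8.4 catalog
names; this only shows who is waiting, not who holds the lock):

  -- backends waiting on a lock, with what they are waiting for
  SELECT w.pid AS waiting_pid, w.locktype, w.relation::regclass AS locked_rel,
         w.mode, a.current_query
    FROM pg_locks w
    JOIN pg_stat_activity a ON a.procpid = w.pid
   WHERE NOT w.granted;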

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Documentation enhancement

2010-09-28 Thread Thomas Kellerer

Hi,

I would like to suggest to enhance the documentation of the CREATE VIEW 
statement.

I think the fact that a SELECT * is internally stored as the expanded column 
list (valid at the time the view was created) should be documented together with the 
CREATE VIEW statement, especially because the example uses SELECT * to create the 
view.
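
A small script that demonstrates the behavior in question (safe to run in a
scratch database):

  CREATE TABLE t (a int, b int);
  CREATE VIEW v AS SELECT * FROM t;
  ALTER TABLE t ADD COLUMN c int;
  SELECT * FROM v;   -- still returns only a and b: the * was expanded at CREATE VIEW time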

Regards
Thomas






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Interesting.  Odd it would report the max 32-bit signed int.  I wonder
 if it somehow is getting set to -1.  I looked briefly at the pg_upgrade
 code and it appears to put all oids in unsigned ints.

On some platforms, that's what you'll get if you feed a value larger
than 2^31 to atoi() and related functions.  I will bet lunch that this
behavior reflects an attempt to use signed-integer input functions on
OID values.  You need to check the string conversion code itself, not
just the declared type of the result variables.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Brian Hirt
It looks like it's related to atol

$ cat test-atol.c 
#include <stdlib.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
  unsigned int test1;
  long test2;
  long long test3;
  unsigned int test4;

  test1 = (unsigned int)atol("3000767169"); 
  test2 = (long)atol("3000767169"); 
  test3 = atoll("3000767169"); 
  test4 = (unsigned int)atoll("3000767169"); 

  fprintf(stderr,"%u %ld %lld %u\n",test1,test2,test3,test4);
}

$ make test-atol
cc test-atol.c   -o test-atol
$ ./test-atol 
2147483647 2147483647 3000767169 3000767169


I think C90 and C99 specify different behaviors for atol.

Is there some standard way postgresql parses integer strings?  Maybe that 
method should be used instead of duplicating the functionality so at least the 
two behave consistently.

--brian

On Sep 28, 2010, at 2:00 PM, Bruce Momjian wrote:

 Brian Hirt wrote:
 Looks like pg_upgrade is using 32bit oids.  2147483647 is the max signed 32 
 bit int, but the oids for my tables are clearly larger than that. 
 
 == output from pg_upgrade ==
 Database: basement84_dev
 relname: mit.company: reloid: 2147483647 reltblspace: 
 relname: mit.company_history: reloid: 2147483647 reltblspace: 
 
 == output from catalog query ==
 basement84_dev=# select c.oid,c.relname from pg_catalog.pg_namespace n, 
 pg_catalog.pg_class c where n.oid = c.relnamespace and n.nspname = 'mit';
oid |  relname   
 ------------+-----------------
 3000767630 | company
 3000767633 | company_history
 (22 rows)
 
 
 Interesting.  Odd it would report the max 32-bit signed int.  I wonder
 if it somehow is getting set to -1.  I looked briefly at the pg_upgrade
 code and it appears to put all oids in unsigned ints.
 
 -- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com
 
  + It's impossible for everything to be true. +
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Bruce Momjian
Brian Hirt wrote:
 It looks like it's related to atol

Yep, I found the use of atol in the pg_upgrade code too.  Working on a
patch now.

---


 $ cat test-atol.c 
 #include <stdlib.h>
 #include <stdio.h>
 
 int
 main(int argc, char **argv)
 {
   unsigned int test1;
   long test2;
   long long test3;
   unsigned int test4;
 
   test1 = (unsigned int)atol("3000767169"); 
   test2 = (long)atol("3000767169"); 
   test3 = atoll("3000767169"); 
   test4 = (unsigned int)atoll("3000767169"); 
 
   fprintf(stderr,"%u %ld %lld %u\n",test1,test2,test3,test4);
 }
 
 $ make test-atol
 cc test-atol.c   -o test-atol
 $ ./test-atol 
 2147483647 2147483647 3000767169 3000767169
 
 
 I think C90 and C99 specify different behaviors for atol.
 
 Is there some standard way postgresql parses integer strings?  Maybe that 
 method should be used instead of duplicating the functionality so at least 
 the two behave consistently.
 
 --brian
 
 On Sep 28, 2010, at 2:00 PM, Bruce Momjian wrote:
 
  Brian Hirt wrote:
  Looks like pg_upgrade is using 32bit oids.  2147483647 is the max signed 
  32 bit int, but the oids for my tables are clearly larger than that. 
  
  == output from pg_upgrade ==
  Database: basement84_dev
  relname: mit.company: reloid: 2147483647 reltblspace: 
  relname: mit.company_history: reloid: 2147483647 reltblspace: 
  
  == output from catalog query ==
  basement84_dev=# select c.oid,c.relname from pg_catalog.pg_namespace n, 
  pg_catalog.pg_class c where n.oid = c.relnamespace and n.nspname = 'mit';
 oid |  relname   
  ------------+-----------------
  3000767630 | company
  3000767633 | company_history
  (22 rows)
  
  
  Interesting.  Odd it would report the max 32-bit signed int.  I wonder
  if it somehow is getting set to -1.  I looked briefly at the pg_upgrade
  code and it appears to put all oids in unsigned ints.
  
  -- 
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com
  
   + It's impossible for everything to be true. +
  
  -- 
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Commitfest: The Good, The Bad, and the Ugly

2010-09-28 Thread David Fetter
Folks,

We're almost half way through the commitfest, and so I'll start with:

The Good:

- Most patches still in play have a reviewer.

- It's possible for one person to post 5 reviews in a day.  Robert
  Haas actually did this on his own time yesterday.

- New people have been reviewing patches, at least up to the
  Submission criteria.

The Bad:

- There is 1 (one) patch marked Committed or Ready for Committer,
  where neither the author nor reviewer is a committer.  This
  basically means we have approximately one RRReviewer.

The Ugly:

- Patches are not getting even basic QA.

The Bad and the Ugly are fixable, and here's how.

At the moment, we've got 7 basic review criteria
http://wiki.postgresql.org/wiki/Reviewing_a_Patch, 5 of which can be
accomplished with C skills somewhere between 0 and tiny.  These are:

  1. Submission review (skills needed: patch, English comprehension)
  2. Usability review (skills needed: test-fu, ability to find and read spec)
  3. Feature test (skills needed: patch, configure, make, pipe errors to log)
  4. Performance review (skills needed: ability to time performance)
  5. Coding review (skills needed: guideline comparison, experience with 
portability issues, minor C-reading skills)

I'd like to set as a goal that every patch in this commitfest get
those levels of review.  You do not need C skills[1].  You do not need
to be a genius database engine hacker[2].  You just need to be
diligent and want to move the project ahead.

If you haven't yet, get signed in and start reviewing patches.  Sign
in with your community login, and let's get going :)
https://commitfest.postgresql.org/action/commitfest_view?id=7

In case you were wondering, what I'm doing here is part of step 7.

If you think that getting all outstanding patches through step 5 is
not doable, let me know.  If you think it is, this is your chance to
help make it happen.  Write back either way.

Cheers,
David.

[1] If you do have them, help out with step 6, too.
[2] If you are one, help out with step 6, too.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Behavior of parameter holders in query containing a '$1'

2010-09-28 Thread Eddy Hahn
Hi,

1) I'm reading the API documentation and I'm wondering how the client library 
would handle the following statement

INSERT INTO test (value1, value2) VALUES ('$1', $1)

Would it handle it incorrectly and think that '$1' is the parameter, or 
would it skip it because it knows that it's a string value enclosed in ''?

2) Can $1 be used more than once, for example if you want to use the same 
value for multiple columns?
UPDATE test set value1=$1,value2=$1

Thanks,

Eddy



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] huge difference in performance between MS SQL and pg 8.3 on UPDATE with full text search

2010-09-28 Thread Ivan Sergio Borgonovo
On Tue, 28 Sep 2010 20:19:10 +0200
Alban Hertroys dal...@solfertje.student.utwente.nl wrote:

 On 28 Sep 2010, at 12:49, Ivan Sergio Borgonovo wrote:
  The hardware on the 2 machines is a bit different.
  MS SQL 2005 runs on a box that is 2 years old, 2 SATA drives on
  RAID 1 hw, 2 Xeon dual core (I can't check details right now)
  PG runs on a box that has more than 5 years, 3 SCSI drives on
  RAID 5 hw, 2 Xeon single core HT (Intel(R) Xeon(TM) CPU 3.20GHz,
  cpu fam 14, model 4)

  In both tables I've to update price.
  VACUUM FULL was performed just before updating the prices.

 VACUUM FULL? Was that really necessary? You did REINDEX after
 that, didn't you? If not, your indexes became bloated. If the
 table wasn't empty before you probably meant to do a VACUUM
 ANALYSE, but if it was, just ANALYSE would have been sufficient.

...

 Did you ANALYSE between loading the data and updating?

I thought VACUUM FULL was more magical and implied a REINDEX.
Am I wrong?
The index that would need to be reindexed is the one on the pk, a simple
btree; that's not going to be as slow as rebuilding a gin... still
I really thought that VACUUM FULL implied a lot of things
(including ANALYZE as well).

 Also, executing the trigger on each copied line is likely to be a
 little slow, due to the overhead of calling a stored procedure
 (especially if it's plpgsql). It's probably quite a bit faster to
 disable the trigger and create the gin-index after loading the
 data.

I'll try dropping the trigger. I'm not expecting it to be the biggest
factor, but adding something here and something there may add up to
the huge difference between the two.
Anyway, MS SQL seems to overcome all these nuisances auto-magically.
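
For what it's worth, a sketch of the disable-the-trigger variant for a bulk
price update (table and trigger names are made up):

  -- assumes the tsvector column does not depend on price, so skipping the
  -- trigger for this particular update is safe
  ALTER TABLE product DISABLE TRIGGER product_tsv_update;
  UPDATE product SET price = price * 1.10;
  ALTER TABLE product ENABLE TRIGGER product_tsv_update;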

 An EXPLAIN ANALYSE of that statement would tell what it's doing
 and what's taking so long. A simple EXPLAIN would probably be
 sufficient to see what query plan it thinks it needs though.

I'll post the EXPLAIN. Before I run EXPLAIN ANALYZE I have to take
some precautions so the DB doesn't explode.

 Did you tune that database? Several options (work_mem for example)
 could significantly improve your performance if you can set them
 higher (or reduce it if you set them too high). You can do that
 per session too.

pg is generally faster than the other MS SQL box on what's normally
done on a daily basis. Just large updates to the product table seem
to be a pain. Other INSERT/UPDATE operations are seldom performed;
they involve smaller tables with no gin index.

  If HW can justify such huge difference I'll devote my time to
  other problems.

 Partially, yes, but not that much I think.

That's my worry... but still, in many circumstances pg performs
better than the MS SQL box... yeah, on a pretty different
workload... but while on other workloads pg is a bit faster (20%
to 100% faster) even though it is on an older box, on this one it
is very slow.

  I'd say that a potential culprit could be the gin index. No
  matter if the tsvector is updated or not, if the row is changed
  I think the index is going to be updated anyway.

 gin indexes require relatively much RAM. If you didn't assign much
 in your settings then it's quite possible that the database can't
 keep the index in memory or that things have to spill to disk.
 Leave enough room for the OS's disk cache though, Postgres
 benefits from that as well.

 Is there any particular reason you went with a gin index and not a
 gist one? Gin can be faster, but consumes (much) more memory, but
 gist is also quite good with text searches and doesn't require
 quite as much memory.

The gin index is doing very good work, and full text searches are
the typical workload of that box and the one where it is most
important to be fast.

I'd say that if gin were occupying so much memory, performance
wouldn't be so good on a daily basis. 

I'll post an excerpt of my postgres.conf (what's important here?) to
see if anything can be improved for *this* workload and to temporarily
tune the DB for this exceptional update. Still, I'm going to remain
a bit surprised by such a huge difference, even if it turns out
that it was actually a .conf not suited for this workload.

BTW the box is running Apache and php.

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Bruce Momjian
Bruce Momjian wrote:
 Brian Hirt wrote:
  It looks like it's related to atol
 
 Yep, I found the use of atol in the pg_upgrade code too.  Working on a
 patch now.

I have applied the attached patch to HEAD and 9.0.X.  Odd I had never
received a bug report about this before.  Good thing it didn't silently
fail, but it is designed to be very picky.

This patch will appear in the next 9.0.X release.  Thanks for the
report.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/controldata.c b/contrib/pg_upgrade/controldata.c
index f36c2c1..c0fe821 100644
--- a/contrib/pg_upgrade/controldata.c
+++ b/contrib/pg_upgrade/controldata.c
@@ -155,7 +155,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: pg_resetxlog problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.ctrl_ver = (uint32) atol(p);
+			cluster->controldata.ctrl_ver = str2uint(p);
 		}
 		else if ((p = strstr(bufin, "Catalog version number:")) != NULL)
 		{
@@ -165,7 +165,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.cat_ver = (uint32) atol(p);
+			cluster->controldata.cat_ver = str2uint(p);
 		}
 		else if ((p = strstr(bufin, "First log file ID after reset:")) != NULL)
 		{
@@ -175,7 +175,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.logid = (uint32) atol(p);
+			cluster->controldata.logid = str2uint(p);
 			got_log_id = true;
 		}
 		else if ((p = strstr(bufin, "First log file segment after reset:")) != NULL)
@@ -186,7 +186,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.nxtlogseg = (uint32) atol(p);
+			cluster->controldata.nxtlogseg = str2uint(p);
 			got_log_seg = true;
 		}
 		else if ((p = strstr(bufin, "Latest checkpoint's TimeLineID:")) != NULL)
@@ -197,7 +197,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.chkpnt_tli = (uint32) atol(p);
+			cluster->controldata.chkpnt_tli = str2uint(p);
 			got_tli = true;
 		}
 		else if ((p = strstr(bufin, "Latest checkpoint's NextXID:")) != NULL)
@@ -211,7 +211,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			op++;			/* removing ':' char */
-			cluster->controldata.chkpnt_nxtxid = (uint32) atol(op);
+			cluster->controldata.chkpnt_nxtxid = str2uint(op);
 			got_xid = true;
 		}
 		else if ((p = strstr(bufin, "Latest checkpoint's NextOID:")) != NULL)
@@ -222,7 +222,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.chkpnt_nxtoid = (uint32) atol(p);
+			cluster->controldata.chkpnt_nxtoid = str2uint(p);
 			got_oid = true;
 		}
 		else if ((p = strstr(bufin, "Maximum data alignment:")) != NULL)
@@ -233,7 +233,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.align = (uint32) atol(p);
+			cluster->controldata.align = str2uint(p);
 			got_align = true;
 		}
 		else if ((p = strstr(bufin, "Database block size:")) != NULL)
@@ -244,7 +244,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.blocksz = (uint32) atol(p);
+			cluster->controldata.blocksz = str2uint(p);
 			got_blocksz = true;
 		}
 		else if ((p = strstr(bufin, "Blocks per segment of large relation:")) != NULL)
@@ -255,7 +255,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata retrieval problem\n", __LINE__);
 
 			p++;			/* removing ':' char */
-			cluster->controldata.largesz = (uint32) atol(p);
+			cluster->controldata.largesz = str2uint(p);
 			got_largesz = true;
 		}
 		else if ((p = strstr(bufin, "WAL block size:")) != NULL)
@@ -266,7 +266,7 @@ get_control_data(migratorContext *ctx, ClusterInfo *cluster, bool live_check)
 				pg_log(ctx, PG_FATAL, "%d: controldata 

Re: [GENERAL] Behavior of parameter holders in query containing a '$1'

2010-09-28 Thread Tom Lane
Eddy Hahn ed...@creightonedward.com writes:
 1) I'm reading the API documentation and I'm wondering how the client library 
 would handle the following statement

 INSERT INTO test (value1, value2) VALUES ('$1', $1)

 Would it handle it incorrectly and think that '$1' is the parameter, or 
 would it skip it because it knows that it's a string value enclosed in ''?

It should think that that's the literal constant dollarsign-one.  That
will definitely work as expected if the client library is relying on
server-side parameter substitution.  There have been versions of some
client libraries that did their own textual parameter substitution,
before the server-side facility existed.  It's at least theoretically
possible that one of those would've got it wrong, but I've not heard
of actual cases.

 2) Can $1 be used more than once, for example if you want to use the same 
 value for multiple columns?
 UPDATE test set value1=$1,value2=$1

Sure.
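
For what it's worth, both behaviors are easy to check from plain SQL with a
server-side prepared statement (a sketch, assuming the test table from the
question exists):

  PREPARE ins(text) AS
    INSERT INTO test (value1, value2) VALUES ('$1', $1);
  EXECUTE ins('joe');   -- value1 gets the literal string '$1', value2 gets 'joe'
  DEALLOCATE ins;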

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Exclusion constraint issue

2010-09-28 Thread Jeff Davis
On Tue, 2010-09-28 at 12:18 -0600, Eric McKeeth wrote:

 This is ugly, but it does seem to enforce the constraint I need, of
 non-overlapping dates where sharing an endpoint is not considered an
 overlap.

The period type supports different inclusivity/exclusivity combinations.
So, the period:

   '[2009-01-02, 2009-01-03)'

Does not overlap with:

   '[2009-01-03, 2009-01-04)'

Because [ or ] means inclusive and ( or ) means exclusive.
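
For what it's worth, a sketch of how that combines with an exclusion
constraint (assumes 9.0's EXCLUDE syntax and the period type from the
temporal project; all table and column names here are made up):

  CREATE TABLE reservation (
      room    integer,
      during  period,
      -- reject overlapping periods for the same room
      EXCLUDE USING gist (room WITH =, during WITH &&)
  );
  -- these two rows can coexist: the shared endpoint is exclusive on one side
  INSERT INTO reservation VALUES (1, period('2009-01-02', '2009-01-03'));
  INSERT INTO reservation VALUES (1, period('2009-01-03', '2009-01-04'));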

For further discussion, you can join the temporal-gene...@pgfoundry.org
mailing list (sign up at
http://pgfoundry.org/mailman/listinfo/temporal-general ). If this still
does not solve your use case, I'd like to see if it can be modified to
do so.

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE/DELETE with ORDER BY and LIMIT

2010-09-28 Thread Bartlomiej Korupczynski
On Sat, 25 Sep 2010 12:32:55 +0200
Alban Hertroys dal...@solfertje.student.utwente.nl wrote:

 On 24 Sep 2010, at 21:20, Bartłomiej Korupczyński wrote:
 
  Hi guys,
  
  I've found some posts from 2007 about UPDATE/DELETE ... LIMIT N syntax
  and I'd like to raise it again. Current PostgreSQL of UPDATE/DELETE
  implement RETURNING statement, so extending by ORDER and LIMIT would be
  really useful.
 
  All that with just one query. In this specific example, the ORDER BY
  statement could be even omitted if we don't care how slots are
  distributed between users.
 
 This probably came up in the discussion from back then as well, but what 
 stops you from using a sub-select?
 
 UPDATE slots
    SET "user" = 'joe'
   FROM (SELECT id FROM slots WHERE "user" IS NULL
         ORDER BY id LIMIT 1) AS available
  WHERE slots.id = available.id
  RETURNING *;
 
 Admittedly that's longer and would be slightly less efficient, but it is 
 available now (and has been for a while) and it's still in one query.

Well, it's not that anything can stop me ;)

It's just a suggestion. I think that:
1. UPDATE ... LIMIT approach is more obvious (= more clear for people to read)
2. as you said -- it's shorter and more efficient, even if it's just a little 
bit (one index scan less, if id was indexed).
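
For what it's worth, a variant of the workaround that also reduces the race
between two concurrent callers (a sketch; "user" is quoted because it is a
reserved word, and under READ COMMITTED a losing session can still come back
with zero rows updated and need to retry):

  UPDATE slots
     SET "user" = 'joe'
   WHERE id = (SELECT id FROM slots
                WHERE "user" IS NULL
                ORDER BY id
                LIMIT 1
                  FOR UPDATE)
  RETURNING *;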


 Also:
  CREATE TABLE slots (
  id INTEGER UNIQUE NOT NULL,
  "user" VARCHAR(32),
  expires TIMESTAMP WITH TIME ZONE,
  -- some other columns
  );
 
 
 I'd declare a primary key as what it is, not as some generic UNIQUE NOT NULL 
 column ;) It won't make much difference in practice, but for example, that 
 way it's intended use is immediately clear from the table definition if 
 people look it up.

It was just a quick and dirty example, but of course you're right :)


Regards,
BK

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to handle results with column names clash

2010-09-28 Thread Bartlomiej Korupczynski
Hi,

I'm curious how do you handle results from multiple tables with
repeated column names. For example:

# CREATE TABLE c1 (id integer PRIMARY KEY, address inet);
# CREATE TABLE c2 (id integer PRIMARY KEY, address text);
# SELECT * FROM c1 JOIN c2 USING (id);
 id | address | address 
----+---------+---------
(0 rows)
or:
# SELECT * FROM c1, c2 WHERE c1.id=c2.id;
 id | address | id | address 
----+---------+----+---------
(0 rows)

Now let's say we want to access the results from PHP/perl/etc using column
names. We have address from c1, and the same from c2. We can't even
distinguish which one is from which table.

I see two available possibilities:
1. rename one or each column (eg. prefix with table name), but it's not
always acceptable and makes JOIN ... USING syntax useless (and it's
messy to change to JOIN .. ON for many columns), it would also not work
if we join on the same table twice or more,
2. select each column explicitly:
  SELECT c1.id, c1.address AS c1_address, c2.address AS c2_address
but this is nightmare for tables with many columns, especially if the
schema changes frequently.

Someone could say, that if we JOIN on some column, then it's the same
value, but it does not need to be always true -- we can join on
different columns in different queries.

Any other ideas?


3. Suggestion, but it would probably be hard to implement: to make the SQL
engine prefix each returned column with the table alias. Of course it would
not be a default behavior, but it would be enabled by some session wide
setting.

# SELECT * FROM c1, c2 WHERE c1.id=c2.id;
 c1.id | c1.address | c2.id | c2.address
[...]
# SELECT * FROM c1 JOIN c2 USING (id);
 ??id | c1.address | c2.address

As JOIN returns only one copy of id, it would be hard to decide about
results (could return one copy for each alias like above).

4. Probably also hard to implement, something like:
# SELECT c1.* AS c1_*, c2.* AS c2_* FROM ...


Or maybe 3 or 4 are already there?


Regards,
BK

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Brian Hirt
Bruce,

The applied patch has the same behavior on i686 Ubuntu 10.04.   It looks like 
atol() is just a macro for strtol() in stdlib.h.   I think you want strtoul() 
instead of strtol().

when i change str2uint() to use strtoul() pg_upgrade completes without a 
problem (I still haven't tested the upgrade database, but I expect that will be 
just fine).

I think it's pretty uncommon for the OID to be that big, which is why nobody 
stumbled onto this.   This particular installation has pretty much been 
reloading development databases non-stop for the last year.  Also, people tend 
to initdb a lot when testing and doing development, which will keep resetting 
the oid low.
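
For anyone wondering whether their own cluster is anywhere near that point, a
rough probe (OID comparison is unsigned, so the largest values sort last):

  -- the biggest relation OIDs currently in use in this database
  SELECT oid, relname FROM pg_class ORDER BY oid DESC LIMIT 5;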

Thanks for getting this one fixed

--brian

On Sep 28, 2010, at 3:49 PM, Bruce Momjian wrote:
 I have applied the attached patch to HEAD and 9.0.X.  Odd I had never
 received a bug report about this before.  Good thing it didn't silently
 fail, but it is designed to be very picky.
 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Exclusion constraint issue

2010-09-28 Thread Eric McKeeth
On Tue, Sep 28, 2010 at 4:07 PM, Jeff Davis pg...@j-davis.com wrote:

 On Tue, 2010-09-28 at 12:18 -0600, Eric McKeeth wrote:

  This is ugly, but it does seem to enforce the constraint I need, of
  non-overlapping dates where sharing an endpoint is not considered an
  overlap.

 The period type supports different inclusivity/exclusivity combinations.
 So, the period:

   '[2009-01-02, 2009-01-03)'

 Does not overlap with:

   '[2009-01-03, 2009-01-04)'

 Because [ or ] means inclusive and ( or ) means exclusive.


My problem wasn't with getting the period type to represent overlaps with
the correct inclusivity/exclusivity, but in getting it to work with my
exclusion constraint. Can you show an example of how I could get that
working perhaps?



 For further discussion, you can join the temporal-gene...@pgfoundry.org
 mailing list (sign up at
 http://pgfoundry.org/mailman/listinfo/temporal-general ). If this still
 does not solve your use case, I'd like to see if it can be modified to
 do so.

 Regards,
 Jeff Davis


I've subscribed to the temporal-general list, so we can move this discussion
there if that's more appropriate.

Thanks,
Eric


[GENERAL] PostgreSQL 9.0 (x86-64) and Windows 7 (x86-64) - Unable to install

2010-09-28 Thread Dr. Peter Voigt
I cannot install PostgreSQL 9.0 (x86-64) under Windows 7 (x86-64). The
installer fails right after starting the installation process with the
message:

An error occurred executing the Microsoft VC++ runtime installer.

I am using the installer from EnterpriseDB
http://www.enterprisedb.com/products/pgdownload.do. Installation file
is postgresql-9.0.0-1-windows_x64.exe.

Unfortunately there is no %TEMP%\install-postgresql.log.

When scanning the mailing lists under
http://www.postgresql.org/community/lists/ and under
http://forums.enterprisedb.com/forums/show/9.page I can see that this
error has been described several times with PostgreSQL 8.3 and 8.4
under different Windows variants. A common hint was to activate the
Windows Script Host (WSH), although it obviously does not help in
all cases. On my machine the WSH is activated and working.

Under
http://www.enterprisedb.com/learning/pginst_guide.do#troubleshooting
you can read about the command line options of the EnterpriseDB
PostgreSQL Installer. An attempt with --install_runtimes 0 fails again
but with the different error message:

Unknown error while running C:\Users\Administrator\Lokale
Einstellungen\postgres_installer\getlocales.exe

Again there is no %TEMP%\install-postgresql.log.

As the second message suggests, I am working as local
Administrator while installing PostgreSQL.

Maybe it is worth mentioning that I have installed Microsoft
Visual Studio 2008 Pro DE. Therefore the installation of the VC++
runtime should not be necessary.

I have been using MySQL for several years now and would like to compare it
with a current PostgreSQL version. The installation of PostgreSQL
under Windows is really disappointing but the same worked without
problems under Linux x86-64 (openSUSE 11.0). Under Linux I have used
the EnterpriseDB Installer of PostgreSQL 9.0 (x86-64) as well. The
installation file is postgresql-9.0.0-1-linux-x64.bin.

Is this problem already known and is there a solution for it?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to handle results with column names clash

2010-09-28 Thread Raymond O'Donnell

On 28/09/2010 23:53, Bartlomiej Korupczynski wrote:

Hi,

I'm curious how do you handle results from multiple tables with
repeated column names. For example:

# CREATE TABLE c1 (id integer PRIMARY KEY, address inet);
# CREATE TABLE c2 (id integer PRIMARY KEY, address text);
# SELECT * FROM c1 JOIN c2 USING (id);
  id | address | address
----+---------+---------
(0 rows)
or:
# SELECT * FROM c1, c2 WHERE c1.id=c2.id;
  id | address | id | address
----+---------+----+---------
(0 rows)

Now let's say we want to access the results from PHP/perl/etc using column
names. We have address from c1, and the same from c2. We can't even
distinguish which one is from which table.

I see two available possibilities:
1. rename one or each column (eg. prefix with table name), but it's not
always acceptable and makes JOIN ... USING syntax useless (and it's
messy to change to JOIN .. ON for many columns), it would also not work
if we join on the same table twice or more,
2. select each column explicitly:
   SELECT c1.id, c1.address AS c1_address, c2.address AS c2_address
but this is nightmare for tables with many columns, especially if the
schema changes frequently.



In PHP you can access columns by index, using pg_fetch_array().

However, I think it's better to embrace the pain and use aliases for the 
columns with duplicated names - makes your code much easier to read.


You could also create a view which defines the aliases for the columns, 
presenting a consistent interface to the PHP code.
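
For what it's worth, a sketch of that view, using the tables from the
original post:

  -- bake the aliases in once; application code then never sees duplicate names
  CREATE VIEW c1_c2 AS
  SELECT c1.id, c1.address AS c1_address, c2.address AS c2_address
    FROM c1 JOIN c2 USING (id);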


Ray.


--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Restore problem from 8.4 backup to 9.0

2010-09-28 Thread Adam Wizon
I installed v9.0 on my Mac Pro.  Dumped the 8.4 database using 'pg_dump -Fc -d 
dbname --username=xyz > backup_file_name' using the pg_dump from the 8.4 
installation.  I restored the database using 'pg_restore -d dbname 
backup_file_name'  using the 9.0 restore and after creating a new database 
under 9.0.  Under version 9.0 the database looks ok, but I had a lot of the 
following errors (132) during the restore:

pg_restore: [archiver (db)] could not execute query: ERROR:  relation "xyz" 
already exists ...
pg_restore: [archiver (db)] could not execute query: ERROR:  multiple primary 
keys for 
pg_restore: [archiver (db)] COPY failed: ERROR:  duplicate key value violates 
unique constraint ...

I tried restoring the 8.4 restore file to a new 8.4 database and there were no 
errors.  It almost looks like the schema already existed when I went to do the 
restore, but I was careful to create the new database in the admin tool under 
the correct server.  I thought the admin tool created the new database in the 
data folder of the selected installation.  Any ideas on what might have 
happened?  Thanks.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [HACKERS] Commitfest: The Good, The Bad, and the Ugly

2010-09-28 Thread Itagaki Takahiro
On Wed, Sep 29, 2010 at 6:03 AM, David Fetter da...@fetter.org wrote:
 The Good:
 - Most patches still in play have a reviewer.

As far as I remember, there were discussions about the issue
"a patch has a reviewer, but stays in Needs Review state for several weeks"
during 9.0 development.

Do we have any plans for it? According to the commitfest app, one patch
has only one reviewer at once. A new reviewer might avoid reviewing
a patch that already has another reviewer.

-- 
Itagaki Takahiro

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [RRR] [HACKERS] Commitfest: The Good, The Bad, and the Ugly

2010-09-28 Thread Robert Haas
On Tue, Sep 28, 2010 at 9:11 PM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Wed, Sep 29, 2010 at 6:03 AM, David Fetter da...@fetter.org wrote:
 The Good:
 - Most patches still in play have a reviewer.

 As far as I remember, there were discussions about the issue
 "a patch has a reviewer, but stays in Needs Review state for several weeks"
 during 9.0 development.

 Do we have any plans for it? According to the commitfest app, one patch
 has only one reviewer at once. A new reviewer might avoid reviewing
 a patch that already has another reviewer.

No, the column is very clearly labelled "Reviewers", not "Reviewer".
And we have certainly had patches with more than one person's name in
that field in the past.  The issue is rather that we don't have enough
people reviewing.  We haven't had enough people volunteer to do
reviews to even assign ONE person to each patch, let alone two.  There
are, as of this writing, SEVEN patches that have no reviewer at all.

Of course, several of the committers, including you, me, and Tom, have
been working our way through the patches.  And that is great.  But the
point of the CommitFest process is that everyone is supposed to pitch
in and help out, not just the committers.  That is not happening, and
it's a problem.  This process does not work and will not scale if the
committers are responsible for doing all the work on every patch from
beginning to end.  That has never worked, and the fact that we have a
few more committers now doesn't change that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [RRR] [HACKERS] Commitfest: The Good, The Bad, and the Ugly

2010-09-28 Thread Itagaki Takahiro
On Wed, Sep 29, 2010 at 10:18 AM, Robert Haas robertmh...@gmail.com wrote:
 No, the column is very clearly labelled "Reviewers", not "Reviewer".
 And we have certainly had patches with more than one person's name in
 that field in the past.  The issue is rather that we don't have enough
 people reviewing.  We haven't had enough people volunteer to do
 reviews to even assign ONE person to each patch, let alone two.  There
 are, as of this writing, SEVEN patches that have no reviewer at all.

Some of them might be too difficult to review. For example, replication
or snapshot management requires special skills to review.

I'm worried that new reviewers will hesitate to review a patch that has
a previous reviewer, and that, if they think the remaining patches are
too difficult for them, they will just leave the commitfest page.

-- 
Itagaki Takahiro

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restore problem from 8.4 backup to 9.0

2010-09-28 Thread Tom Lane
Adam Wizon adamwi...@mac.com writes:
 I installed v9.0 on my Mac Pro.  Dumped the 8.4 database using 'pg_dump -Fc 
 -d dbname --username=xyz > backup_file_name' using the pg_dump from the 8.4 
 installation.  I restored the database using 'pg_restore -d dbname 
 backup_file_name'  using the 9.0 restore and after creating a new database 
 under 9.0.  Under version 9.0 the database looks ok, but I had a lot of the 
 following errors (132) during the restore:
 pg_restore: [archiver (db)] could not execute query: ERROR:  relation "xyz" 
 already exists ...
 pg_restore: [archiver (db)] could not execute query: ERROR:  multiple primary 
 keys for 
 pg_restore: [archiver (db)] COPY failed: ERROR:  duplicate key value violates 
 unique constraint ...

 I tried restoring the 8.4 restore file to a new 8.4 database and there
 were no errors.  It almost looks like the schema already existed when I
 went to do the restore, but I was careful to create the new database in
 the admin tool under the correct server.

Restoring twice is almost certainly the explanation.
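
For what it's worth, a quick sanity check that the target database really is
empty before restoring into it:

  -- any user relations already present will show up here
  SELECT n.nspname, c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
   WHERE n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast');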

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to handle results with column names clash

2010-09-28 Thread Darren Duncan

Bartlomiej Korupczynski wrote:

I'm curious how do you handle results from multiple tables with
repeated column names. For example:

# CREATE TABLE c1 (id integer PRIMARY KEY, address inet);
# CREATE TABLE c2 (id integer PRIMARY KEY, address text);
# SELECT * FROM c1 JOIN c2 USING (id);
 id | address | address 
----+---------+---------

(0 rows)
or:
# SELECT * FROM c1, c2 WHERE c1.id=c2.id;
 id | address | id | address 
----+---------+----+---------

(0 rows)

Now let's say we want to access the results from PHP/perl/etc using column
names. We have address from c1, and the same from c2. We can't even
distinguish which one is from which table.


The only proper solution is for every resultset column to have a distinct 
unqualified name, full-stop.


If you are joining tables that use the same name for different things, then you 
have two good options:


1.  Rename the table columns to be unique, such as using inet_addr and 
street_addr.


2.  Use AS in your query to give the result columns unique names.

Similarly, id columns should be more descriptive, saying what they are the id of 
(eg, artist_id, track_id, etc). Use the same name for columns containing the 
same data, and different names for different data; that is approach #1. The main 
time to deviate from this is if you have several columns with the same kind of 
data, and then you use #2.



I see two available possibilities:
1. rename one or each column (eg. prefix with table name), but it's not
always acceptable and makes JOIN ... USING syntax useless (and it's
messy to change to JOIN .. ON for many columns), it would also not work
if we join on the same table twice or more,


Don't prefix with the table name if that doesn't make sense.  In your case, you 
could call the field c_id in both tables for example.


Generally speaking, you *do* want a situation that lets you use JOIN ... USING 
wherever possible.



2. select each column explicitly:
  SELECT c1.id, c1.address AS c1_address, c2.address AS c2_address
but this is nightmare for tables with many columns, especially if the
schema changes frequently.


If you give the table columns good names, you generally won't have to do that.


Someone could say, that if we JOIN on some column, then it's the same
value, but it does not need to be always true -- we can join on
different columns in different queries.


Yes you can, but with a well designed schema you would be joining on same-named 
columns most of the time, and for the rest, you can use AS.



Any other ideas?


I've given mine.


3. Suggestion, but it would be probably hard to implement: to make SQL
engine prefix each returned column with table alias. Of course it would
not be a default behavior, but it would be enabled by some session wide
setting.

# SELECT * FROM c1, c2 WHERE c1.id=c2.id;
 c1.id | c1.address | c2.id | c2.address
[...]
# SELECT * FROM c1 JOIN c2 USING (id);
 ??id | c1.address | c2.address

As JOIN returns only one copy of id, it would be hard to decide about
results (could return one copy for each alias like above).

4. Probably also hard to implement, something like:
# SELECT c1.* AS c1_*, c2.* AS c2_* FROM ...


Some DBMSs already do this, and it is a *bad* idea.

The fact that SQL lets you have a rowset with column names either duplicated or 
missing is a horrible misfeature, and one shouldn't rely on it.


-- Darren Duncan

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [RRR] [HACKERS] Commitfest: The Good, The Bad, and the Ugly

2010-09-28 Thread Andy Colson

On 9/28/2010 8:33 PM, Itagaki Takahiro wrote:

On Wed, Sep 29, 2010 at 10:18 AM, Robert Haasrobertmh...@gmail.com  wrote:

No, the column is very clearly labelled "Reviewers", not "Reviewer".
And we have certainly had patches with more than one person's name in
that field in the past.  The issue is rather that we don't have enough
people reviewing.  We haven't had enough people volunteer to do
reviews to even assign ONE person to each patch, let alone two.  There
are, as of this writing, SEVEN patches that have no reviewer at all.


Some of them might be too difficult to review. For example, replication
or snapshot management requires special skills to review.

I'm worried that new reviewers will hesitate to review a patch that has
a previous reviewer, and that, if they think the remaining patches are
too difficult for them, they will just leave the commitfest page.



If I might...  I think it would be good to have new reviewers teamed 
with an experienced reviewer on a single patch.  Let the newbie have a 
crack at it while having a safety net too.  Good for the newbie, good 
for the project.

You just need a way to assign them.  Messages are already sent out 
saying "if you wanna help, email xyz to get started".  A message like 
that could be added to the web page.  The commitfest overlord could 
assign the newbie a patch and a sponsor, saying "dear newbie, you'll be 
working with bob on this patch; ask him any questions about the process. 
You'll both do the review, and you can compare your work to his and 
learn the process along the way."

The sponsor could answer any dumb questions off list, and every once in 
a while say "Hey, that's a great thing to post to the mailing list for 
everyone to review."

And/Or/Also, maybe a snippet on the page saying "don't feel like you have 
to do a full review; if you can only do a part, do it, and someone else 
can do the rest".  And/Or: "feel free to review something someone else 
is reviewing, more eyes = better prize."


-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [RRR] [HACKERS] Commitfest: The Good, The Bad, and the Ugly

2010-09-28 Thread Robert Haas
On Tue, Sep 28, 2010 at 9:33 PM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 On Wed, Sep 29, 2010 at 10:18 AM, Robert Haas robertmh...@gmail.com wrote:
 No, the column is very clearly labelled Reviewers, not Reviewer.
 And we have certainly had patches with more than one person's name in
 that field in the past.  The issue is rather that we don't have enough
 people reviewing.  We haven't had enough people volunteer to do
 reviews to even assign ONE person to each patch, let alone two.  There
 are, as of this writing, SEVEN patches that have no reviewer at all.

 Some of them might be too difficult to review. For example, replication
 or snapshot management requires special skills to review.

 I'm worried that new reviewers will hesitate to review a patch that has
 a previous reviewer, and that, if they think the remaining patches are
 too difficult for them, they will just leave the commitfest page.

That's a legitimate concern, but I am not sure how much of a problem
it is in practice.  Most people who become round-robin reviewers are
getting pulled into the process a little more than just stumbling
across the CF page by happenstance, or at least I hope they are.  Not
all patches can benefit from multiple reviewers, but CF managers can
and should encourage multiple reviews of those that can.  However, at
the moment, the problem is that regardless of who is assigned to do
what, we're not getting enough reviews done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to handle results with column names clash

2010-09-28 Thread Darren Duncan

Darren Duncan wrote:

3. Suggestion, but it would be probably hard to implement: to make SQL
engine prefix each returned column with table alias. Of course it would
not be a default behavior, but it would be enabled by some session wide
setting.

# SELECT * FROM c1, c2 WHERE c1.id=c2.id;
 c1.id | c1.address | c2.id | c2.address
[...]
# SELECT * FROM c1 JOIN c2 USING (id);
 ??id | c1.address | c2.address

As JOIN returns only one copy of id, it would be hard to decide about
results (could return one copy for each alias like above).

4. Probably also hard to implement, something like:
# SELECT c1.* AS c1_*, c2.* AS c2_* FROM ...


Some DBMSs already do this, and it is a *bad* idea.


Actually, I should clarify that it is the top 2 examples that some DBMSs already 
do, and that's a bad idea.


What you proposed in #4 looks unique and might actually be useful, that just 
being a shorthand for mass regular AS renames.


But what would be *more* useful in general is if SQL supported an "all-but" 
syntax, where you explicitly named the columns you don't want when that is a 
shorter list.  I know I've proposed this before.


-- Darren Duncan

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade

2010-09-28 Thread Bruce Momjian
Brian Hirt wrote:
 Bruce,
 
 The applied patch has the same behavior on i686 Ubuntu 10.04.   It
 looks like atol() is just a macro for strtol() in stdio.h.   I think
 you want strtoul() instead of strtol()

Yes, thanks.  I have now applied that fix in HEAD and 9.0.X.

 when i change str2uint() to use strtoul() pg_upgrade completes without
 a problem (I still haven't tested the upgrade database, but I expect
 that will be just fine).

Yep.

 I think it's pretty uncommon for the OID to be that big which is why
 nobody stumbled onto this.   This particular installation has pretty
 much been reloading development databases non stop for the last year.
 Also, people tend to initdb a lot when testing and doing development
 which will keep resetting the oid low.

Yes, seems >2^31 oids are rarer than I thought.

--
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Fwd: [GENERAL] Restore problem from 8.4 backup to 9.0

2010-09-28 Thread Adam Wizon
Thanks for the fast reply.  I must have still been connected to the older 
database somehow.  I cleaned up my installation and restored the database.  No 
error messages this time.  I need to change the pg_hba.conf file.  I read the 
documentation and it's supposed to be in the data directory (which is locked), 
but it doesn't seem to be there.  Is there an easy way to create the file in 
the data directory (without overriding access privileges) at this point?
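
For what it's worth, if the server is running you can ask it directly where
those files live:

  SHOW data_directory;
  SHOW hba_file;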

Begin forwarded message:

 From: Tom Lane t...@sss.pgh.pa.us
 Date: September 28, 2010 9:58:10 PM EDT
 To: Adam Wizon adamwi...@mac.com
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Restore problem from 8.4 backup to 9.0
 
 Adam Wizon adamwi...@mac.com writes:
 I installed v9.0 on my Mac Pro.  Dumped the 8.4 database using 'pg_dump -Fc 
  -d dbname --username=xyz > backup_file_name' using the pg_dump from the 8.4 
 installation.  I restored the database using 'pg_restore -d dbname 
 backup_file_name'  using the 9.0 restore and after creating a new database 
 under 9.0.  Under version 9.0 the database looks ok, but I had a lot of the 
 following errors (132) during the restore:
  pg_restore: [archiver (db)] could not execute query: ERROR:  relation "xyz" 
 already exists ...
 pg_restore: [archiver (db)] could not execute query: ERROR:  multiple 
 primary keys for 
 pg_restore: [archiver (db)] COPY failed: ERROR:  duplicate key value 
 violates unique constraint ...
 
 I tried restoring the 8.4 restore file to a new 8.4 database and there
 were no errors.  It almost looks like the schema already existed when I
 went to do the restore, but I was careful to create the new database in
 the admin tool under the correct server.
 
 Restoring twice is almost certainly the explanation.
 
   regards, tom lane
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Slony-I installation Help

2010-09-28 Thread Vishnu S.
Hi,

 

Yes, we are using a proxy. The XML file mentioned in the URL can be opened
in a browser. Even after setting the proxy values in IE
(Tools -> Internet Options -> Connections -> LAN settings), the same message
box is shown.

 

 

 

 

Thanks & Regards,

Vishnu S

 

From: Sachin Srivastava [mailto:sachin.srivast...@enterprisedb.com] 
Sent: Tuesday, September 28, 2010 11:15 AM
To: Vishnu S.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Slony-I installation Help

 

[Please keep the thread on the mailing list]

 

For the stackbuilder, are you behind any proxy? Can you open the url
mentioned in the error message via your browser? You can set the
appropriate proxy values on the first page of stackbuilder.

 

Slony-I Path will be where your slony binaries reside.

 

On Sep 28, 2010, at 9:24 AM, Vishnu S. wrote:





Hi,

 

When I tried to install Stackbuilder, the following error message was
shown. How can I recover from this?

 

image001.png

 

I have already set the Slony-I path as C:\Program
Files\PostgreSQL\8.4\share. Is this path correct?

 

 

 

Thanks & Regards,

Vishnu S

 

From: Sachin Srivastava [mailto:sachin.srivast...@enterprisedb.com] 
Sent: Monday, September 27, 2010 7:47 PM
To: Vishnu S.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Slony-I installation Help

 

Hello,

 

If you have used EnterpriseDB's One Click Installer to install
PostgreSQL 8.4, then you can use Stackbuilder to install Slony for PG
8.4.

 

For making Slony work with pgAdmin, you need to put the Slony-I path in
the options file of pgAdmin. Open the File menu -> Options -> [General
tab] and set the Slony-I path.

 

 

On Sep 27, 2010, at 3:54 PM, Vishnu S. wrote:






Hi,

I have downloaded Slony-I (slony-I-2.0.2R-pg84.zip) from the site 
http://developer.pgadmin.org/~hiroshi/Slony-I/. But when I tried to
create a new Replication cluster using pgAdmin (after doing all the other
steps mentioned in http://www.pgadmin.org/docs/1.8/slony-example.html),
the 'OK' button in the dialog is disabled, and a message
'Slony-I creation scripts not available; only joining possible' is shown.
When I tried to join the cluster, it also failed because no cluster
had been created at all.

Please provide the address from which a Complete Installation package
for Slony-I, compatible with the PostgreSQL 8.4 will be available.

Please note that I am using Windows XP.

 

 

 

Thanks & Regards,

Vishnu S

 

 

--

Regards,

Sachin Srivastava

EnterpriseDB http://www.enterprisedb.com , the Enterprise Postgres
http://www.enterprisedb.com  company.

 

 

--

Regards,

Sachin Srivastava

EnterpriseDB http://www.enterprisedb.com , the Enterprise Postgres
http://www.enterprisedb.com  company.

 

image001.png

[GENERAL] FTS GIN Index Question

2010-09-28 Thread Christian Ramseyer

Hi List

I have a largish partitioned table, it has ~60 million records in each 
of 12 partitions. It appears that a Full Text Index could speed up some 
user queries a lot.


A quick test with an additional tsvector column revealed that this would 
take up around 35 GB of space for this column, plus maybe 5 more for 
the gin index on it, per partition. As that adds up to a lot of space 
(~480 GB over all 12 partitions), I'm a bit tempted to use a gin index 
without the separate tsvector column. However, the docs say that this 
will be slower.


Does anyone have an idea of how much slower we're talking here? The 
index definition would be a concatenation of two setweight() calls, i.e.:


... using gin(
  (setweight(to_tsvector('config',coalesce(col1,'')), 'A') ||
   setweight(to_tsvector('config',coalesce(col2,'')), 'B')))

Also, general recommendations regarding full text search configurations 
of that size are very welcome.
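
For what it's worth, one practical caveat with the expression-only approach:
a query has to repeat the indexed expression verbatim for the planner to
match it to the index (a sketch with made-up table and column names):

  SELECT count(*)
    FROM big_table
   WHERE (setweight(to_tsvector('config', coalesce(col1,'')), 'A') ||
          setweight(to_tsvector('config', coalesce(col2,'')), 'B'))
         @@ to_tsquery('config', 'someword');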


Christian




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slony-I installation Help

2010-09-28 Thread Sachin Srivastava
What are the values of:

HKEY_CURRENT_USER\Software\PostgreSQL\StackBuilder\HTTP Proxy Host and
HKEY_CURRENT_USER\Software\PostgreSQL\StackBuilder\HTTP Proxy Port ?

Can you make sure both of them match the correct values?

On Sep 29, 2010, at 10:27 AM, Vishnu S. wrote:

 Hi,
  
 Yes, we are using a proxy. The XML file mentioned in the URL can be opened in 
 a browser. Even after setting the proxy values in IE 
 (Tools -> Internet Options -> Connections -> LAN settings), the same message box 
 is shown.
  
 image001.png
  
  
 Thanks & Regards,
 Vishnu S
  
 From: Sachin Srivastava [mailto:sachin.srivast...@enterprisedb.com] 
 Sent: Tuesday, September 28, 2010 11:15 AM
 To: Vishnu S.
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Slony-I installation Help
  
 [Please keep the thread on the mailing list]
  
 For the stackbuilder, are you behind any proxy? Can you open the url 
 mentioned in the error message via your browser? You can set the appropriate 
 proxy values on the first page of stackbuilder.
  
 Slony-I Path will be where your slony binaries reside.
  
 On Sep 28, 2010, at 9:24 AM, Vishnu S. wrote:
 
 
 Hi,
  
 When I tried to install Stackbuilder the following error message is shown. 
 How can I recover from this?
  
 image001.png
  
 I have already set the Slony-I path as “C:\Program 
 Files\PostgreSQL\8.4\share”. Is this path correct?
  
  
  
 Thanks & Regards,
 Vishnu S
  
 From: Sachin Srivastava [mailto:sachin.srivast...@enterprisedb.com] 
 Sent: Monday, September 27, 2010 7:47 PM
 To: Vishnu S.
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Slony-I installation Help
  
 Hello,
  
 If you have used EnterpriseDB's One Click Installer to install PostgreSQL 
 8.4, then you can use Stackbuilder to install Slony for PG 8.4.
  
 For making Slony work with pgAdmin, you need to put the Slony-I path in the 
 options file of pgAdmin. Open the File menu -> Options -> [General tab] and 
 set the Slony-I path.
  
  
 On Sep 27, 2010, at 3:54 PM, Vishnu S. wrote:
 
 
 
 Hi,
 I have downloaded Slony-I (slony-I-2.0.2R-pg84.zip) from the site 
 http://developer.pgadmin.org/~hiroshi/Slony-I/. But when I tried to create a 
 new Replication cluster using pgAdmin (after doing all the other steps mentioned 
 in http://www.pgadmin.org/docs/1.8/slony-example.html), the ‘OK’ button in 
 the dialog is disabled, and a message ‘Slony-I creation scripts 
 not available; only joining possible’ is shown. When I tried to join the cluster, it 
 also failed because no cluster had been created at all.
 Please provide the address from which a Complete Installation package for 
 Slony-I, compatible with the PostgreSQL 8.4 will be available.
 Please note that I am using Windows XP.
  
  
  
 Thanks & Regards,
 Vishnu S
  
  
 --
 Regards,
 Sachin Srivastava
 EnterpriseDB, the Enterprise Postgres company.
  
  
 --
 Regards,
 Sachin Srivastava
 EnterpriseDB, the Enterprise Postgres company.
  

--
Regards,
Sachin Srivastava
EnterpriseDB, the Enterprise Postgres company.