Re: [ADMIN] [GENERAL] Data Not replicating

2010-10-05 Thread Vishnu S.
Hi,


The cluster name is defined in the preamble.sk file like this:

cluster name=TestCLuster;
node 1 admin conninfo = 'service=MasterSlonik host=10.2.26.53 port=5432 
user=postgres password=* dbname=master11';
node 2 admin conninfo = 'service=SlaveSlonik1 host=10.2.26.54 port=5433 
user=postgres password=* dbname=slave11';




Thanks & Regards,
Vishnu S

-Original Message-
From: Devrim GÜNDÜZ [mailto:dev...@gunduz.org] 
Sent: Tuesday, October 05, 2010 5:52 PM
To: Vishnu S.
Cc: pgsql-ad...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [ADMIN] [GENERAL] Data Not replicating

On Tue, 2010-10-05 at 11:03 +0530, Vishnu S. wrote:
> 
> Yes. I have written a script file like the following.
> 
> include ;
> init cluster (id=1, comment='hostname=10.2.26.53 port=5432');

It does not give me the $CLUSTER_NAME information.
--
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer PostgreSQL RPM 
Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr 
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Basic SQL question and kudos

2010-10-05 Thread Dennis Gearon
Thanks for the start. I'll start on it.


Dennis Gearon

Signature Warning

EARTH has a Right To Life,
  otherwise we all die.

Read 'Hot, Flat, and Crowded'
Laugh at http://www.yert.com/film.php


--- On Tue, 10/5/10, Craig Ringer  wrote:

> From: Craig Ringer 
> Subject: Re: [GENERAL] Basic SQL question and kudos
> To: "Dennis Gearon" 
> Cc: pgsql-general@postgresql.org
> Date: Tuesday, October 5, 2010, 9:13 PM
> On 10/06/2010 12:31 AM, Dennis Gearon
> wrote:
> > I need to get a join across about 8 tables to only
> return rows for the MAX date in one of the tables for all
> otherwise identical return results. I am trying to get the
> most recent edit of normalized records that are composed of
> elements from all the tables. There could be 1 to 100+
> successive edits to the normalized/multi-table record. There
> will be approximately 65,000 total records for now,
> 100,000,000s later.
> > 
> > If I make one of the values in the select statement be
> MAX(the date), does that automatically do grouping on 'the
> date' or will I only get ONE row total returned?
> 
> 
> If I understand what you're proposing correctly, I don't
> think it'll work.
> 
> What you need to do is use a subquery to obtain the max of
> the field of interest, and filter in your WHERE clause so
> you only return results when the selected value is the
> maximum for that field.
> 
> It's hard to be more specific with a fairly general
> question like this, but if I'm understanding you right this
> is how I'd start.
> 
> --
> Craig Ringer
> 



Re: [GENERAL] Is there a logical reason for 25P02?

2010-10-05 Thread Konstantin Izmailov
Craig,
I think that failure of a previous command is the most likely reason. It
is an unexpected discovery, since I keep track of statements that were
already prepared on the connection (to avoid preparing a statement twice).
However, the code might be flawed somehow. Anyway, I decided to use PQexec
for the time being.

Cédric, I'm basically using code similar to that in pgloader. I'm trying to
find a way to efficiently handle an integrity violation error when inserting
a large amount of data. Graceful handling means that the transaction should not
be aborted when the code tries to insert a duplicate row. Is using a
SAVEPOINT the only solution?
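For reference, a minimal sketch of the per-row SAVEPOINT approach (table and column names here are hypothetical, not from pgloader):

```sql
-- Hypothetical table with a unique constraint:
--   CREATE TABLE items (id integer PRIMARY KEY, val text);
BEGIN;

SAVEPOINT row_insert;
INSERT INTO items (id, val) VALUES (1, 'a');
-- On success, release the savepoint and move on:
RELEASE SAVEPOINT row_insert;

SAVEPOINT row_insert;
INSERT INTO items (id, val) VALUES (1, 'b');  -- duplicate key
-- The driver sees a unique_violation; rolling back to the savepoint
-- discards only this INSERT and leaves the transaction usable:
ROLLBACK TO SAVEPOINT row_insert;

COMMIT;  -- rows inserted before the failure are kept
```

The per-row savepoint costs some overhead, so batching (one savepoint per chunk, retrying the chunk row-by-row on failure) is a common compromise.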

Thank you
Konstantin


On Tue, Oct 5, 2010 at 4:13 AM, Craig Ringer wrote:

> On 10/05/2010 12:39 PM, Konstantin Izmailov wrote:
>
>> Howdy,
>> I've noticed that there is a difference in result of execution of the
>> following statement:
>>   INSERT INTO testtable(col1) VALUES(NULL);
>> depending on whether the command is prepared or not.
>>
>> If I call PQprepare/PQexecPrepared for the statement, the error
>> "transaction aborted" is returned with SQL State = "25P02".
>>
>
> Specifically, I suspect the message should be:
>
> ERROR:  current transaction is aborted, commands ignored until end of
> transaction block
>
> If that's what you're getting, the problem was with an earlier command that
> returned an error you didn't notice, not with the command you just ran. I'm
> unsure if this could cause PQexecPrepared to return sqlstate 25P02 if
> PQprepare fails, but would want to investigate the possibility.
>
> --
> Craig Ringer
>


Re: [GENERAL] Basic SQL question and kudos

2010-10-05 Thread Craig Ringer

On 10/06/2010 12:31 AM, Dennis Gearon wrote:

I need to get a join across about 8 tables to only return rows for the MAX date 
in one of the tables for all otherwise identical return results. I am trying to 
get the most recent edit of normalized records that are composed of elements 
from all the tables. There could be 1 to 100+ successive edits to the 
normalized/multi-table record. There will be approximately 65,000 total records 
for now, 100,000,000s later.

If I make one of the values in the select statement be MAX(the date), does that 
automatically do grouping on 'the date' or will I only get ONE row total 
returned?



If I understand what you're proposing correctly, I don't think it'll work.

What you need to do is use a subquery to obtain the max of the field of 
interest, and filter in your WHERE clause so you only return results 
when the selected value is the maximum for that field.


It's hard to be more specific with a fairly general question like this, 
but if I'm understanding you right this is how I'd start.
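A sketch of that approach, with hypothetical table and column names (`record_key` identifies the logical record, `edit_date` the edit timestamp):

```sql
SELECT r.*
FROM records r
JOIN details d ON d.record_id = r.id   -- ...plus the other ~7 joins...
WHERE r.edit_date = (
    -- correlated subquery: the latest edit for this logical record
    SELECT MAX(r2.edit_date)
    FROM records r2
    WHERE r2.record_key = r.record_key
);
```

An index on `(record_key, edit_date)` would keep the correlated subquery cheap at the row counts mentioned above.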


--
Craig Ringer



Re: [GENERAL] How do I write this query? Distinct, Group By, Order By?

2010-10-05 Thread Josh Kupershmidt
On Tue, Oct 5, 2010 at 10:26 PM, Min Yin  wrote:
>  Hi There,
>
> I have a table that looks like this:
>
> (order_id, user_id, order_time)
>
> One user_id can have multiple orders, with order_id as the primary key. Now I
> want to get a list of users, ordered by their latest order. For
> example, if user A has two orders, one today and the other a month ago, and
> user B has one order a week ago, then the result should be
>
> A
> B
>
> how do I do it? I tried various ways of SELECT with Distinct, Group By,
> Order By, but was hit by either "column must appear in the GROUP BY clause
> or be used in an aggregate function", or "for SELECT DISTINCT, ORDER BY
> expressions must appear in select list" every time.
>
> Is it possible to do it? Is it possible to do it in one non-nested query?


If all you need is the user_id, sorted by the timestamp of the user's
most recent order, I think this should work:

SELECT user_id FROM orders GROUP BY user_id ORDER BY MAX(order_time) DESC;

Josh



[GENERAL] How do I write this query? Distinct, Group By, Order By?

2010-10-05 Thread Min Yin

 Hi There,

I have a table that looks like this:

(order_id, user_id, order_time)

One user_id can have multiple orders, with order_id as the primary key. 
Now I want to get a list of users, ordered by their latest order 
respectively. For example, if user A has two orders, one today and the 
other a month ago, and user B has one order a week ago, then the result 
should be


A
B

how do I do it? I tried various ways of SELECT with Distinct, Group By, 
Order By, but was hit by either "column must appear in the GROUP BY 
clause or be used in an aggregate function", or "for SELECT DISTINCT, 
ORDER BY expressions must appear in select list" every time.


Is it possible to do it? Is it possible to do it in one non-nested query?


Many Thanks!




Re: [GENERAL] error messages

2010-10-05 Thread Lumber Cartel, local 42
On Oct 5, 4:19 am, amikl...@freemail.hu (Miklosi Attila) wrote:
> Dear Postgres!
>
> We have just moved our database to a more powerful pc and since then we
> keep getting this strange error message below:
> "PGRES_FATAL_ERROR FATAL:  invalid frontend message type 87"
>
> What does this message mean? What could cause this error?

Are you using the correct version of your psql command-line tool?
This error seems familiar from back when I tried to do this once.

--
The Lumber Cartel, local 42 (Canadian branch)
Beautiful British Columbia, Canada
http://www.lumbercartel.ca/



[GENERAL] anyone have Movable Type 5 working with Postgres?

2010-10-05 Thread Matthew Hixson
I've seen mention that it might be possible for MT5 to regain Postgres support 
(they dropped it in version 5) through the use of plugins.  Does anyone know of 
such a thing and/or have an installation working themselves?  Would greatly 
appreciate any info you can share.
  Thanks,
-M@


Re: [GENERAL] Postgres, DB2 and Materialized Query Tables (MQTs - DB2s Materialized Views)

2010-10-05 Thread Josh Kupershmidt
On Tue, Oct 5, 2010 at 3:28 PM, Sandeep Srinivasa  wrote:
> Below is an excerpt from a comment over at Reddit. It seemed quite
> interesting to read about the difference in PG and DB2's query optimizer.

The entire post makes it sound like the author believes that Postgres
has materialized views built into its core. It doesn't -- what you
can have are materialized views that you populate using triggers.
There was a GSoC project to try to add snapshot materialized views
into core a few months ago, though I think it would be fairly
rudimentary compared to the DB2 capabilities your post describes.

[snip]
> if you're in PostgreSQL (or MySQL, or pretty much any DB but Oracle), then
> the query optimizer will build a plan that involves a table scan
> over sometable. for a large table, that sucks. (a clustered index
> over somedate on sometable can make this less painful, but still. bleh.)

Naturally, since the Postgres optimizer has no knowledge about any
materialized views you may have constructed by hand.

The flip side of this is that when you build a materialized view by
hand, you can fine-tune it for performance pretty well: you can
control exactly how it gets refreshed, and use tricks involving your
knowledge of the base tables to take shortcuts in the view refreshing
(e.g. if you know that several types of updates to your base tables
wouldn't get reflected in the view anyway, you can safely ignore
them).
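As a concrete illustration of the trigger-maintained approach, here is a minimal sketch for the year-count example from the quoted post (table and function names are hypothetical):

```sql
-- Base table and a hand-built "materialized view" of counts per year:
CREATE TABLE sometable (somedate date NOT NULL);
CREATE TABLE yearly_counts (yr integer PRIMARY KEY, cnt bigint NOT NULL);

-- Incrementally maintain the summary on INSERT (DELETE/UPDATE would
-- need similar handlers; concurrency issues are ignored here):
CREATE OR REPLACE FUNCTION bump_yearly_count() RETURNS trigger AS $$
BEGIN
    UPDATE yearly_counts SET cnt = cnt + 1
     WHERE yr = EXTRACT(year FROM NEW.somedate)::int;
    IF NOT FOUND THEN
        INSERT INTO yearly_counts
        VALUES (EXTRACT(year FROM NEW.somedate)::int, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sometable_count_trg
AFTER INSERT ON sometable
FOR EACH ROW EXECUTE PROCEDURE bump_yearly_count();
```

The application then queries `yearly_counts` directly instead of aggregating over `sometable` -- the "no application changes" convenience is exactly what DB2's optimizer-aware MQTs add on top of this.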

> if you're in DB2, the DB2 query optimizer is smart enough to realize that
> you have that data laying around in your MQT, so it just does a lookup
> for 2010 in the MQT and returns the corresponding COUNT(*) value. if the MQT
> has an index over YEAR(somedate), then this is blazing fast. Point DB2.

I guess this is nice. But again, you could just build the mat. view by
hand on Postgres, and your query will be just as fast, if you've coded
your application to pull from the view instead of the base table.

> and
> because the query optimizer considers MQTs out-of-the-box with no changes to
> existing applications querying the database, if a client's aggregation query
> is running slow and you can define an MQT that backs that query and a
> suitable update schedule for the MQT, the client's query will go 10x's
> faster and require no application changes at all. Another point DB2.

> also, DB2 (and Oracle too, pretty sure) lets you update MQTs incrementally.

You can do this with a hand-built mat. view in Postgres as well.

> in DB2, when you want to add records to an MQT, you can update the MQT's
> contents with just those records as opposed to re-running the query that
> backs it. (for most aggregation functions, anyway.) so, whereas in
> PostgreSQL you'd have to re-run SELECT COUNT(*) FROM sometable WHERE
> YEAR(somedate)=2010 to update your MQT if you add records,

Not true, unless you don't understand the difference between a
snapshot materialized view and the other types.

> in DB2 you can
> just pass those records through the existing MQT and update aggregate values
> where appropriate. (the syntax for that escapes me at the moment,
> unfortunately. any DB2-heads lurking about that can help with that?)
>
> anyway, DB2 is excellent at managing enterprise-level data sets. MQTs are an
> awesome feature it has, and they really change the kinds of applications you
> can write against the database.

Don't get me wrong, I'm not saying it wouldn't be cool to have the
capabilities the author is describing built into core, but doing
things by hand really isn't so bad, especially for
performance-critical views. I haven't used DB2, but I would think you
give up some performance and flexibility by using their automated
MQTs. Note that perhaps the definitive source for materialized views on
PostgreSQL specifically says: "I do not recommend that PostgreSQL add
Materialized Views in its core.", with good reasons:


Josh



Re: [GENERAL] help modeling a schedule system

2010-10-05 Thread Scott Ribe
Well, when I did something similar a very long time ago, I put a "next time to 
fire" column in the table. Each time an event is fired, the application updates 
that column.

Was a lot simpler than baroque searches for events that would need firing 
between the last check and now. Was a lot cleaner than creating tons of 
instances out through the future.

You could create a stored procedure, if you want, to calculate the next time 
based on all the possible criteria.

You could have a "last time fired" column that keeps track of when the event 
was most recently fired, for logging/debugging/info. You could have a trigger 
that updates that "next time to fire" column each time the "last time fired" 
column is updated.
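A minimal sketch of that design (all names are hypothetical; `compute_next_fire()` stands in for the stored procedure that applies the scheduling rules):

```sql
CREATE TABLE events (
    id         serial PRIMARY KEY,
    last_fired timestamptz,
    next_fire  timestamptz NOT NULL
);
-- The poller becomes a cheap indexed range scan:
--   SELECT id FROM events WHERE next_fire <= now();
CREATE INDEX events_next_fire_idx ON events (next_fire);

-- Trigger that recomputes next_fire whenever last_fired is updated:
CREATE OR REPLACE FUNCTION set_next_fire() RETURNS trigger AS $$
BEGIN
    IF NEW.last_fired IS DISTINCT FROM OLD.last_fired THEN
        NEW.next_fire := compute_next_fire(NEW.id, NEW.last_fired);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_next_fire_trg
BEFORE UPDATE ON events
FOR EACH ROW EXECUTE PROCEDURE set_next_fire();
```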

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







[GENERAL] help modeling a schedule system

2010-10-05 Thread chris h
I'm not sure if this is the correct list for this, but I'm hoping to get
some help with a data model I'm struggling with.  Let me know if this should
be elsewhere! :)


We have an event schedule system that basically tells an event handler when
an event should be fired, and keeps a history of fired events. An event
is fired at most once per day and can have the following settings.

a) Time of day (granularity of minutes, we're not worried about seconds):
e.g. 09:34, 17:00, etc.

b) Day of week (an event can be fired or not fired any/all days of the
week):
e.g. sun-sat; mon-fri; mon,wed,thr; mon; etc.

c) Ratio of fired-event-days to all-event-days (an event may only fire on
some of the days that it is supposed to):
e.g. 1:2 - would fire every other day, 1:1 would fire every day, etc.

d) Exceptions (events can have periods of set inactivity):
e.g. 20110201-20110210 would disable the event from February 1st, 2011
through February 10th, 2011, after which it would resume as normal.

note: we are expecting the number of events to be in the tens of thousands, but
I would like a solution that can easily scale to a few million.
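For discussion's sake, one hypothetical way to model settings (a)-(d) fairly directly (names invented here, not from the original post):

```sql
CREATE TABLE schedule (
    event_id     integer PRIMARY KEY,
    fire_time    time  NOT NULL,  -- (a) time of day, minute granularity
    days_of_week int[] NOT NULL,  -- (b) 0=Sun .. 6=Sat
    fire_ratio   integer NOT NULL DEFAULT 1
                                  -- (c) fire on 1 of every N eligible days
);

CREATE TABLE schedule_exception (  -- (d) periods of set inactivity
    event_id   integer NOT NULL REFERENCES schedule,
    start_date date NOT NULL,
    end_date   date NOT NULL,
    CHECK (start_date <= end_date)
);
```

Evaluating these rules at fire time is what tends to get complex; precomputing a single "next fire" timestamp per event (as suggested elsewhere in this thread) keeps the hot query simple.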


The models I've come up with to satisfy these requirements seem bloated and
overly complex to me.  Having these events fire on time is the biggest
priority here.  With a complex temporal system the probability of
logic errors seems high; I want to avoid that.

Thanks for any help everyone!

Chris H.


[GENERAL] Submissions for a PostgreSQL track at MySQL Conf 2011: Due October 25

2010-10-05 Thread Selena Deckelmann
Hi!

I'm on the committee for MySQL Conf this year, and the committee is
specifically seeking talks about PostgreSQL. The idea is to broaden
the scope of the conference to include a lot of different open source
database technology, including a bunch more about Postgres. The
organizing committee is specifically seeking PostgreSQL talks - some
of you may have heard from Baron or Robin already about this.

The CFP is here: http://en.oreilly.com/mysql2011/public/cfp/126

The submission deadline for all proposals is October 25, 2010!

Topics for consideration include:

* Innovative uses of Postgres
* Data warehousing
* Architectures based on Postgres
* PostGIS
* Government + Postgres
* [your favorite web framework] + Postgres
* Performance and optimization
* Security and database administration
* "In the cloud"
* Business and case studies

Please contact me directly for feedback, help with submissions or help
generating ideas.  And if you're submitting, please just drop me a
line to let me know! I'd love to hear from all that are interested.

-selena

-- 
http://chesnok.com/daily - me



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Magnus Hagander
On Tue, Oct 5, 2010 at 22:53, Greg Sabino Mullane  wrote:
>
>>> Er...yes it will match exact...because we[1] get the location of the
>>> library files from pg_config as well. :) Handy little utility that.
>>
>> How do you get pg_config to tell you which of the multiple versions of
>> libpq that may be installed on the machine you are actually linking
>> against?
>
> Not sure what you mean. pg_config *drives* the compilation and linking,
> we don't blindly compile and simply take pg_config's word for it.
> pg_config --libdir and pg_config --includedir.

But that's build-time, not run-time.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Tim Uckun
> what's the storage like on linode?  Is that some sort of virtualized cloudy
> thing?   wild guess says committed random writes are really slow.


Yes, it's some sort of a virtual cloudy thing.  I can't tell you more
than that though. They sell virtual machines and this project runs on
them.

Does anybody else have any suggestions for how I can make this
database behave a little better? Is there some tweak I am missing?



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Tom Lane
"Greg Sabino Mullane"  writes:
>> The real problem is that neither of these can be trusted to tell you the
>> *library* version.  PQserverVersion() is something else altogether,
>> and I wouldn't want to assume that pg_config exactly matches the library
>> you're linked to --- if it's even present at all.

> Er...yes it will match exact...because we[1] get the location of the 
> library files from pg_config as well. :) Handy little utility that.

[ shrug... ]  It's not at all hard to think of scenarios where that will
give you misleading results, even granted that you're willing to fall
over when pg_config isn't present.  But regardless of that, the
technique simply isn't available to ordinary C code using the standard
dynamic linker.

regards, tom lane



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Greg Sabino Mullane



>> Er...yes it will match exact...because we[1] get the location of the
>> library files from pg_config as well. :) Handy little utility that.
>
> How do you get pg_config to tell you which of the multiple versions of
> libpq that may be installed on the machine you are actually linking
> against?

Not sure what you mean. pg_config *drives* the compilation and linking, 
we don't blindly compile and simply take pg_config's word for it.
pg_config --libdir and pg_config --includedir.

-- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201010051651
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Magnus Hagander
On Tue, Oct 5, 2010 at 22:39, Greg Sabino Mullane  wrote:
>
>
>> The real problem is that neither of these can be trusted to tell you the
>> *library* version.  PQserverVersion() is something else altogether,
>> and I wouldn't want to assume that pg_config exactly matches the library
>> you're linked to --- if it's even present at all.
>
> Er...yes it will match exact...because we[1] get the location of the
> library files from pg_config as well. :) Handy little utility that.

How do you get pg_config to tell you which of the multiple versions of
libpq that may be installed on the machine you are actually linking
against?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Dmitriy Igrishin
Hey all,

Could you call it PQlibVersion() or even PQversion() instead of
PQlibpqVersion()?

2010/10/5 Tom Lane 

> Peter Geoghegan  writes:
> > On 5 October 2010 16:33, Greg Sabino Mullane  wrote:
> >>> How does the driver figure it out?
> >>
> >> DBD::Pg parses pg_config --version, then passes the information
> >> to the C programs for directive fiddling. I certainly hope
> >> other drivers are doing the same, as libpq varies across
> >> major versions a good deal.
>
> > I would imagine that most libpq wrapping drivers use libpq's
> > PQserverVersion(), which returns an integer that looks like 9.
>
> The real problem is that neither of these can be trusted to tell you the
> *library* version.  PQserverVersion() is something else altogether,
> and I wouldn't want to assume that pg_config exactly matches the library
> you're linked to --- if it's even present at all.
>
> We could add a PQlibpqVersion(), maybe, but it would be many years
> before client code could rely on that being present.
>
>regards, tom lane
>
>



-- 
// Dmitriy.


Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Greg Sabino Mullane



> The real problem is that neither of these can be trusted to tell you the
> *library* version.  PQserverVersion() is something else altogether,
> and I wouldn't want to assume that pg_config exactly matches the library
> you're linked to --- if it's even present at all.

Er...yes it will match exact...because we[1] get the location of the 
library files from pg_config as well. :) Handy little utility that.

[1] DBD::Pg


-- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201010051638
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] optimization (can I move pgsql_tmp)?

2010-10-05 Thread bricklen
On Tue, Oct 5, 2010 at 12:02 PM, Ozz Nixon  wrote:
> Good afternoon, everyone,
>
>        We are experiencing some performance issues on a table with 7 fields, 
> 8,800,000 rows. During some exercises, one thing I noticed is I need to 
> change the configuration of the system to house pgsql_tmp on a host partition 
> instead of the drive array... that will get me a little more speed... is this 
> controlled via a .conf file or pgadmin?
>
You can try setting temp_tablespaces in your postgresql.conf file to
use a tablespace defined on another part of your disk subsystem.
E.g. create tablespace some_tablespace location '/some/path';

temp_tablespaces = 'some_tablespace';

Then reload your postgresql.conf file.  Not sure if it's going to help
you or not, though.



Re: [GENERAL] Non-standard escape sequences from Crystal Reports

2010-10-05 Thread bricklen
> Can anyone tell me if there is some setting that will stop these warnings
> from showing up (other than turning off warning messages completely, which I
> am about to suggest that the customer do)?  Also, does anyone know why my
> computer is not showing warnings while our customer's system is?
>
> Thanks very much!
>
> RobR

Set "escape_string_warning = off" in your postgresql.conf file.

http://www.postgresql.org/docs/8.4/interactive/runtime-config-compatible.html



Re: [GENERAL] optimization (can I move pgsql_tmp)?

2010-10-05 Thread Ozz Nixon
-- create index tags1 on allwikitags(tagword) - after still 18 seconds
-- create index tags6 on allwikitags(tagword,instances desc, pagename) - after 
now 32 seconds
-- drop index tags6
-- create index tags2 on allwikitags(instances) - after still 18 seconds
-- create index tags3 on allwikitags(soundex2)
-- create index tags4 on allwikitags(metaphone)
-- create index tags5 on allwikitags(metaphone2) - after 3 now 1 second

select pagename,tagword,instances from allwikitags
where tagword in 
('journey','journeys','journeyes','journeyd','journeyed','journeyly','journeyy','journeyth','journeydom','journeying','journeyize','journeyion','journeyism','journeyized','journeyizes','journeyizing','journeyions','journeyists','journeyfulness','journeyise','journeyish','journeyist','journeyful','journeytion','journeyless','journeyable','journeyfull','journeyious','journeylike','journeyment','journeyness')
or soundex2 in ('J650')
or metaphone in ('jrny')
or metaphone2 in ('JRN')
group by pagename,tagword,instances
order by tagword,instances desc,pagename

Giving it indexes for each of the "or" elements got the 8.8-million-row 
query down to 1 second. So now I'm just awaiting suggestions on how one would 
debug this and know that more indexes were needed on the table.
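The usual way to answer that is EXPLAIN ANALYZE, which shows whether the planner actually uses the indexes (sketch below uses the query from this thread, abbreviated):

```sql
EXPLAIN ANALYZE
SELECT pagename, tagword, instances
FROM allwikitags
WHERE tagword IN ('journey', 'journeys' /* ...the full word list... */)
   OR soundex2 IN ('J650')
   OR metaphone IN ('jrny')
   OR metaphone2 IN ('JRN')
GROUP BY pagename, tagword, instances
ORDER BY tagword, instances DESC, pagename;
-- Look for "Seq Scan" nodes on allwikitags in the output: any OR branch
-- still planned as a sequential scan is a candidate for its own index
-- (which is why the per-column indexes above helped).
```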

Thanks,
Ozz
On Oct 5, 2010, at 3:02 PM, Ozz Nixon wrote:

> Good afternoon, everyone,
> 
>   We are experiencing some performance issues on a table with 7 fields, 
> 8,800,000 rows. During some exercises, one thing I noticed is I need to 
> change the configuration of the system to house pgsql_tmp on a host partition 
> instead of the drive array... that will get me a little more speed... is this 
> controlled via a .conf file or pgadmin?
> 
>   Optimization questions:
> 
>   When is pgsql_tmp used? (Right now as I am creating indexes by hand, I 
> see it grow for about 10 minutes):
> 
> du -h /mnt/data/base/
> 5.1M  /mnt/data/base/1
> 5.1M  /mnt/data/base/11563
> 4.0G  /mnt/data/base/11564
> 8.9M  /mnt/data/base/16395
> 586M  /mnt/data/base/pgsql_tmp
> 
>   During the create index - communications in general to the drive array 
> is "consumed".
> 
> Before I keep experimenting and making things worse, I will ask: what 
> indexes should I have to make this query better, or how does one debug when 
> they find a query is taking too long?
> 
> STRUCTURE:
>   quer.SQL.Add('create table '+DBTags+' (');
>   quer.SQL.Add('   pagename '+SQL_TITLE+'(100) not null,');
>   quer.SQL.Add('   tagword '+SQL_TITLE+'(15) not null,');
>   quer.SQL.Add('   soundex2 '+SQL_TITLE+'(4) not null,');
>   quer.SQL.Add('   metaphone '+SQL_TITLE+'(15) not null,');
>   quer.SQL.Add('   metaphone2 '+SQL_TITLE+'(22) not null,');
>   quer.SQL.Add('   carverphone '+SQL_TITLE+'(22) not null,');
>   quer.SQL.Add('   instances '+SQL_INT32+' not null,');
>   if SQL_NAME_PRIMARY_KEYS then quer.SQL.Add('   constraint '+DBTags+'_PK');
>   quer.SQL.Add('   primary key(pagename, tagword, instances)');
> 
> 
> WAS 18 seconds with just the primary key, so I tried:
> create index tags1 on allwikitags(tagword) -- after still 18 seconds
> 
> Then I tried:
> create index tags6 on allwikitags(tagword,instances desc, pagename) -- after 
> now 32 seconds
> 
> 
> My Query:
> 
> select pagename,tagword,instances from allwikitags
> where tagword in 
> ('journey','journeys','journeyes','journeyd','journeyed','journeyly','journeyy','journeyth','journeydom','journeying','journeyize','journeyion','journeyism','journeyized','journeyizes','journeyizing','journeyions','journeyists','journeyfulness','journeyise','journeyish','journeyist','journeyful','journeytion','journeyless','journeyable','journeyfull','journeyious','journeylike','journeyment','journeyness')
> or soundex2 in ('J650')
> or metaphone in ('jrny')
> or metaphone2 in ('JRN')
> group by pagename,tagword,instances
> order by tagword,instances desc,pagename
> 
> Thanks,
> Ozz




[GENERAL] Postgres, DB2 and Materialized Query Tables (MQTs - DB2s Materialized Views)

2010-10-05 Thread Sandeep Srinivasa
Below is an excerpt from a comment over at Reddit. It seemed quite
interesting to read about the differences between PG's and DB2's query optimizers.


Can anybody shed any more light on this ?


-Sandeep



--


there are a couple really important things that DB2 does with Materialized
Query Tables (MQTs, DB2's name for Oracle/everyone else's Materialized
Views) that other DBs -- especially free DBs -- don't do:

   1. Consider the contents of MQTs in the query optimizer
   2. Update MQTs incrementally

for example, imagine you have an MQT backed by this query:

SELECT YEAR(somedate), COUNT(*) FROM sometable GROUP BY YEAR(somedate)

now imagine that, completely independently, you want to run the following
query:

SELECT COUNT(*) FROM sometable WHERE YEAR(somedate)=2010

if you're in PostgreSQL (or MySQL, or pretty much any DB but Oracle), then
the query optimizer will build a plan that involves a table scan over
sometable. for a large table, that sucks. (a clustered index over somedate
on sometable can make this less painful, but still. bleh.)

if you're in DB2, the DB2 query optimizer is smart enough to realize that
you have that data laying around in your MQT, so it just does a lookup for
2010 in the MQT and returns the corresponding COUNT(*) value. if the MQT has
an index over YEAR(somedate), then this is *blazing* fast. Point DB2. and
because the query optimizer considers MQTs out-of-the-box with no changes to
existing applications querying the database, if a client's aggregation query
is running slow and you can define an MQT that backs that query and a
suitable update schedule for the MQT, the client's query will go 10x
faster and require no application changes at all. Another point DB2.

also, DB2 (and Oracle too, pretty sure) lets you update MQTs incrementally.
in DB2, when you want to add records to an MQT, you can update the MQT's
contents with just those records as opposed to re-running the query that
backs it. (for most aggregation functions, anyway.) so, whereas in
PostgreSQL you'd have to re-run SELECT COUNT(*) FROM sometable WHERE
YEAR(somedate)=2010 to update your MQT if you add records, in DB2 you can
just pass those records through the existing MQT and update aggregate values
where appropriate. (the syntax for that escapes me at the moment,
unfortunately. any DB2-heads lurking about that can help with that?)
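[Editor's note: PostgreSQL itself (through 9.0) has no materialized views at all, but the incremental-update half of this can be approximated by hand with a summary table and a trigger. A rough sketch, with all names invented and real caveats — it only handles INSERTs and ignores concurrent-update races:]

```sql
-- Hand-maintained stand-in for the MQT above
CREATE TABLE year_counts (yr integer PRIMARY KEY, cnt bigint NOT NULL);

CREATE OR REPLACE FUNCTION bump_year_count() RETURNS trigger AS $$
BEGIN
    -- bump the existing year's counter, or create it on first sight
    UPDATE year_counts SET cnt = cnt + 1
     WHERE yr = EXTRACT(year FROM NEW.somedate);
    IF NOT FOUND THEN
        INSERT INTO year_counts VALUES (EXTRACT(year FROM NEW.somedate), 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sometable_year_count AFTER INSERT ON sometable
    FOR EACH ROW EXECUTE PROCEDURE bump_year_count();
```

Unlike DB2, the optimizer will never consult year_counts on its own; queries have to be rewritten to read it directly.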

anyway, DB2 is *excellent* at managing enterprise-level data sets. MQTs are
an awesome feature it has, and they really change the kinds of applications
you can write against the database.


Re: [GENERAL] Postgre 9.0 replication using streaming.

2010-10-05 Thread Karl Denninger
 On 10/5/2010 2:12 PM, Chris Barnes wrote:
> I would like to know if there is a way to configure 9.0 to do this.
>
> I have 4 unique databases running on 4 servers.
> I would like to have them replicate to a remote site for disaster
> recovery.
>
> I would like to consolidate these 4 databases onto one machine and use
> streaming replication from the 4 masters at our
> local colocation site.
>
> Is there a way to configure Postgres 9.0 to have 4 unique streaming
> connections from this one machine with the 4 databases
> to the 4 databases on 4 machines?
>
> Thanks for your reply,
>
> Chris Barnes
> CBIT Inc.
>
No, but you could set up four separate instances of Postgres and run all
four on one machine.
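
[Editor's note: for illustration only, a rough sketch of how such a standby machine might be laid out on 9.0. All paths, ports, and host names here are invented, and each data directory must first be seeded with a base backup of its primary (pg_start_backup() plus rsync on 9.0, which predates pg_basebackup):]

```
# One data directory and one port per standby instance
for i in 1 2 3 4; do
    echo "port = 543$i" >> /data/standby$i/postgresql.conf
    cat > /data/standby$i/recovery.conf <<EOF
standby_mode = 'on'
primary_conninfo = 'host=primary$i.example.com port=5432 user=postgres'
EOF
    pg_ctl -D /data/standby$i -l /data/standby$i/startup.log start
done
```

Each instance then streams independently from its own primary.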

-- Karl


[GENERAL] Postgre 9.0 replication using streaming.

2010-10-05 Thread Chris Barnes

I would like to know if there is a way to configure 9.0 to do this.

I have 4 unique databases running on 4 servers.
I would like to have them replicate to a remote site for disaster recovery.

I would like to consolidate these 4 databases onto one machine and use streaming 
replication from the 4 masters at our
local colocation site.

Is there a way to configure Postgres 9.0 to have 4 unique streaming connections 
from this one machine with the 4 databases
to the 4 databases on 4 machines?

Thanks for your reply,

Chris Barnes
CBIT Inc.


  

Re: [GENERAL] PG website testing

2010-10-05 Thread Alban Hertroys
On 5 Oct 2010, at 20:24, Thom Brown wrote:

> Thanks for the feedback.
> 
> To me your attached images look quite normal.  I do, however, still
> get the feeling the font is a tad too small in Firefox.  It calculates
> a size change of 1.33(recurring), which gets rounded down to 1.3em.
> I'm wondering if this should be rounded up.  It certainly looks more
> consistent in Firefox and Chrome, and should have no effect in Opera.
> Since I've only just done this, I can't test it in IE until I have a
> look at work tomorrow:
> http://pgweb.darkixion.com:8081/docs/8.4/static/functions-datetime.html
> 
> Any better?  Worse?


Well, hard to say... they certainly look a bit rounder, which was probably what 
was off in the previous version, but now they are indeed a little large in 
comparison with the other fonts. In the end I think I liked the smaller version 
a little better, but that's just my opinion of course.

It's probably nearly impossible to get this exactly right as long as people are 
looking at coarse anti-aliased LCD displays, but I suppose that's what the 
current style-sheet is aimed at (you probably  have a different one for 
@media=print, I didn't check).


I've also just discovered that I find the 'text-size: Large' version easier 
on the eyes - I must be getting old...

One thing that's a bit annoying with the Large version is that if you give the 
table (Table 9-27. Date/Time Functions, for example, in the page you linked 
above) enough space that the text in the examples/results doesn't wrap too 
much, the lines of normal text get so long that they're no longer easy to 
read...

I'm not sure what to do about that, but I get the feeling that it's not 
necessary to put all 5 columns (Function, Return Type, Description, Example & 
Result) next to each other on the same row. That definitely looks a bit 
cramped, and it's a lot of information to take in all at once.

Possibly either the description or the example could be moved above or below 
the function definition? Those examples could also do with a different styling 
to set them apart from the description (maybe a slightly less white background 
colour?).

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.







[GENERAL] optimization (can I move pgsql_tmp)?

2010-10-05 Thread Ozz Nixon
Good afternoon everyone,

We are experiencing some performance issues on a table with 7 fields and 
8,800,000 rows. During some exercises, one thing I noticed is that I need to 
change the configuration of the system to house pgsql_tmp on a host partition 
instead of the drive array... that will get me a little more speed... is this 
controlled via a .conf file or pgAdmin?

Optimization questions:

When is pgsql_tmp used? (Right now as I am creating indexes by hand, I 
see it grow for about 10 minutes):

du -h /mnt/data/base/
5.1M    /mnt/data/base/1
5.1M    /mnt/data/base/11563
4.0G    /mnt/data/base/11564
8.9M    /mnt/data/base/16395
586M    /mnt/data/base/pgsql_tmp

During the create index, communication to the drive array in general 
is "consumed".

Before I keep experimenting and making things worse, I will ask - what 
indexes should I have to make this query better - and how does one debug when 
they find a query is taking too long?

STRUCTURE:
   quer.SQL.Add('create table '+DBTags+' (');
   quer.SQL.Add('   pagename '+SQL_TITLE+'(100) not null,');
   quer.SQL.Add('   tagword '+SQL_TITLE+'(15) not null,');
   quer.SQL.Add('   soundex2 '+SQL_TITLE+'(4) not null,');
   quer.SQL.Add('   metaphone '+SQL_TITLE+'(15) not null,');
   quer.SQL.Add('   metaphone2 '+SQL_TITLE+'(22) not null,');
   quer.SQL.Add('   carverphone '+SQL_TITLE+'(22) not null,');
   quer.SQL.Add('   instances '+SQL_INT32+' not null,');
   if SQL_NAME_PRIMARY_KEYS then quer.SQL.Add('   constraint '+DBTags+'_PK');
   quer.SQL.Add('   primary key(pagename, tagword, instances)');


WAS 18 seconds with just the primary key, so I tried:
create index tags1 on allwikitags(tagword) -- after still 18 seconds

Then I tried:
create index tags6 on allwikitags(tagword,instances desc, pagename) -- after 
now 32 seconds


My Query:

select pagename,tagword,instances from allwikitags
where tagword in 
('journey','journeys','journeyes','journeyd','journeyed','journeyly','journeyy','journeyth','journeydom','journeying','journeyize','journeyion','journeyism','journeyized','journeyizes','journeyizing','journeyions','journeyists','journeyfulness','journeyise','journeyish','journeyist','journeyful','journeytion','journeyless','journeyable','journeyfull','journeyious','journeylike','journeyment','journeyness')
or soundex2 in ('J650')
or metaphone in ('jrny')
or metaphone2 in ('JRN')
group by pagename,tagword,instances
order by tagword,instances desc,pagename

Thanks,
Ozz


Re: [GENERAL] Problem installing 9.0.1 on Windows 7 x64

2010-10-05 Thread Michael Gould
It does appear so.. although I'm logged into the network administrator
account.  Weird but at least I can now track it down.

Best Regards

Mike Gould

"Dave Page"  wrote:
> [CC'ing the list to close the loop]
> 
> On Tue, Oct 5, 2010 at 7:43 PM, Michael Gould
>  wrote:
>> Dave,
>>
>> I found the problem.  For some reason icacls.exe can't be found, in fact
>> none of the executable files in system32 can be found from the c:\.
>> I'm not sure what's up because when I look at my path statement it shows
>> c:\windows\system32 when I do either a path or set from the command line
run
>> inside CMD.exe.
>>
>> At least I know that it's not a Postgres issue now.
> 
> Very odd. Permissions perhaps?
> 
> 
> 
> -- 
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
> 
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise Postgres Company
> 





Re: [GENERAL] Problem installing 9.0.1 on Windows 7 x64

2010-10-05 Thread Dave Page
[CC'ing the list to close the loop]

On Tue, Oct 5, 2010 at 7:43 PM, Michael Gould
 wrote:
> Dave,
>
> I found the problem.  For some reason icacls.exe can't be found, in fact
> none of the executable files in system32 can be found from the c:\.
> I'm not sure what's up because when I look at my path statement it shows
> c:\windows\system32 when I do either a path or set from the command line run
> inside CMD.exe.
>
> At least I know that it's not a Postgres issue now.

Very odd. Permissions perhaps?



-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [GENERAL] Non-standard escape sequences from Crystal Reports

2010-10-05 Thread Rajesh Kumar Mallah
Dear Rob,
Please do not go offlist.

It's not a matter of cancelling.
Consider the fragment  --> relname like 'coil\\_status'
The underscore has to be escaped because, if it is not, '_' will match any
*single* character (a rule of LIKE/ILIKE matching); it is escaped so that it
matches a literal '_' in strings. Two backslashes are required so that
ILIKE receives \_ ; that is the rule for quoting literal values in SQL.

If you add the E prefix you can get rid of the warnings.

Regds
Rajesh Kumar Mallah.

On Tue, Oct 5, 2010 at 2:18 PM, Rob Richardson
 wrote:
> Thank you very much!  Your examples worked as you said they would work, and
> they showed that my database is logging those warnings when they happen.
> That indicates that the difference is in the Crystal Reports installation
> someplace.
>
> I don't understand how the backslashes are handled.  I added underscores to
> a couple of records in my coilstatus table, so one had "Fr_ee" and another
> had "Pack_ed".  Then, the query "SELECT status from coilstatus  where status
> like '%\\_%' limit 10;" gave me the warning that you predicted, and it found
> the two records with underscored statuses.  That tells me that the two
> backslashes cancelled each other out, and the query was the same as "SELECT
> status from coilstatus  where status like '%_%' limit 10;".  Why is that?
> Do I need to read about regular expressions?
>
> RobR
>
>



Re: [GENERAL] PG website testing

2010-10-05 Thread Thom Brown
On 5 October 2010 18:45, Alban Hertroys
 wrote:
> On 4 Oct 2010, at 20:40, Thom Brown wrote:
>
>> Hi all,
>>
>> We're currently testing a new javascript change on the PostgreSQL
>> docs.  This is to make sure monospaced fonts still appear at a
>> reasonable size between browsers.  I'd appreciate it if some of you
>> could do some browser testing.  http://magnus.webdev.postgresql.org/
>> Only docs for 8.3 and below are available.  Please also check the main
>> site for font issues, and post any issues you find here.
>>
>> Thanks :)
>
>
> The font displays a bit smaller than on the live site using Safari 5.0.2, but 
> it seems to make some of the documentation easier to read - especially when 
> there's lots of monospaced text in it. With the smaller size the code also 
> doesn't catch the eye as much, of course.
>
> One thing I noticed is that the italic version of the lower-case 'e' looks a 
> little smeared along the horizontal line inside the curl. For example in: 
> http://magnus.webdev.postgresql.org/docs/8.3/static/functions-aggregate.html
>
> In Firefox 4b6 the monospace text looks a bit compressed at the top, like 
> something heavy has been lying on it for too long. Not sure how to describe 
> it, but the top half of the font seems smaller than it should be.
>
> I attached some small screen-shots of those "problems" so that you can judge 
> yourself.
>
> Good work on how the documentation looks as of late btw, they're a pleasure 
> to read. Even more so than before.

Thanks for the feedback.

To me your attached images look quite normal.  I do, however, still
get the feeling the font is a tad too small in Firefox.  It calculates
a size change of 1.33(recurring), which gets rounded down to 1.3em.
I'm wondering if this should be rounded up.  It certainly looks more
consistent in Firefox and Chrome, and should have no effect in Opera.
Since I've only just done this, I can't test it in IE until I have a
look at work tomorrow:
http://pgweb.darkixion.com:8081/docs/8.4/static/functions-datetime.html

Any better?  Worse?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] Problem installing 9.0.1 on Windows 7 x64

2010-10-05 Thread Dave Page
On Tue, Oct 5, 2010 at 6:34 PM, Michael Gould
 wrote:
> I'm trying to install 9.0.1 32 bit on Windows 7 x64.  I cannot use the 64
> bit server because we use the UUID contrib module and it evidently isn't
> compatible with Windows 64 bit libraries.  When I try and install the 32 bit
> server I get the following error message.
>
> "Problem running the post-install step.  Installation may not complete
> correctly.  The database initialisation failed."
>
> I've tried this using my normal account which has administrator rights and
> the system administrator account.

Can you post the installation log from %TEMP% please? See
http://wiki.postgresql.org/wiki/Troubleshooting_Installation#Collect_the_installer_log_file
for details.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [GENERAL] bytea_output default value change in 9.0 breaks Apache::Session::Postgres

2010-10-05 Thread Frank Heikens




On 5 Oct 2010, at 20:12, Rajesh Kumar Mallah wrote the following:



Dear List ,

recent change of bytea_output format from escape to hex in 9.0  
apparently breaks
popular persistent session handling perl modules like  
Apache::Session::Postgres

which stores pickled data structures in bytea columns of db table.
It is difficult to guess the root cause from the exception thrown by
the said module.
The problem is fixed by adding
bytea_output='escape'  in postgresql.conf
and issuing a pg_ctl reload.



You can set this per database, using SQL:

ALTER DATABASE dbname SET bytea_output TO 'escape';



Eg in RT application the error is:
error:  RT couldn't store your session.
This may mean that that the directory /blah/blah/foo/bar isn't
writable or a database table is missing or corrupt

Regds
Rajesh Kumar Mallah.




Regards,
Frank Heikens



Re: [GENERAL] Need the binary of PG9.0 for Windows 32bit

2010-10-05 Thread Vick Khera
On Tue, Oct 5, 2010 at 1:28 PM, Andrea Peri 2007  wrote:
>  Hi,
>
> I need to download the build of Postgres 9.0 compiled for win32bit.
>
> I see there is available the 9.0.1 version.
> But I'm searching specifically the 9.0 for a test.

You really want to use the 9.0.1 version. It includes bug fixes and
security enhancements over the 9.0 initial release.  Feature-wise they
are identical.  Check the release notes.



[GENERAL] bytea_output default value change in 9.0 breaks Apache::Session::Postgres

2010-10-05 Thread Rajesh Kumar Mallah
Dear List ,

recent change of bytea_output format from escape to hex in 9.0 apparently breaks
popular persistent session handling perl modules like Apache::Session::Postgres
which stores pickled data structures in bytea columns of db table.
It is difficult to guess the root cause from the exception thrown by
the said module.
The problem is fixed by adding
bytea_output='escape'  in postgresql.conf
and issuing a pg_ctl reload.
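
[Editor's note: the format change is easy to see interactively; per the 9.0 release notes the default went from escape to hex:]

```sql
SET bytea_output = 'hex';
SELECT 'abc'::bytea;     -- \x616263
SET bytea_output = 'escape';
SELECT 'abc'::bytea;     -- abc
```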


Eg in RT application the error is:
error:  RT couldn't store your session.
This may mean that that the directory /blah/blah/foo/bar isn't
writable or a database table is missing or corrupt

Regds
Rajesh Kumar Mallah.



Re: [GENERAL] Non-standard escape sequences from Crystal Reports

2010-10-05 Thread Rajesh Kumar Mallah
Dear Rob ,

this will give warning:
SELECT co_name from foo  where co_name ilike '%\\_%' limit 10;

and this will not:
SELECT co_name from foo  where co_name ilike E'%\\_%' limit 10;

regds
Rajesh Kumar Mallah.


Re: [GENERAL] Query tune, index is not using

2010-10-05 Thread Alban Hertroys
On 5 Oct 2010, at 10:21, AI Rumman wrote:

> explain analyze
> select cf_1507 
> from cf_1507 
> inner join role2picklist on 
> role2picklist.picklistvalueid=cf_1507.picklist_valueid 
> where roleid!='H1' order by sortid

A few indexes on role2picklist.sortid and cf_1507.picklist_valueid would 
probably help.
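
[Editor's note: concretely, that suggestion amounts to something like the following; the index names are invented:]

```sql
CREATE INDEX role2picklist_sortid_idx ON role2picklist (sortid);
CREATE INDEX cf_1507_picklist_valueid_idx ON cf_1507 (picklist_valueid);
```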

> \d role2picklist
>            Table "public.role2picklist"
>      Column      |          Type          | Modifiers
> -----------------+------------------------+-----------
>  roleid          | character varying(255) | not null
>  picklistvalueid | integer                | not null
>  picklistid      | integer                | not null
>  sortid          | integer                | 
> Indexes:
>   "role2picklist_pkey" PRIMARY KEY, btree (roleid, picklistvalueid, 
> picklistid)
>   "fk_2_role2picklist" btree (picklistid)
>   "role2picklist_picklistvalueid_idx" btree (picklistvalueid)
>   "role2picklist_roleid_picklistid_idx" btree (roleid, picklistid, 
> picklistvalueid)
> 
> \d cf_1507 
>                  Table "public.cf_1507"
>       Column      |          Type          |                     Modifiers
> ------------------+------------------------+---------------------------------------------------
>  cf_1507id        | integer                | not null default nextval('cf_1507_seq'::regclass)
>  cf_1507          | character varying(200) | not null
>  presence         | integer                | not null default 1
>  picklist_valueid | integer                | not null default 0
> Indexes:
>   "cf_1507_pkey" PRIMARY KEY, btree (cf_1507id)

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







[GENERAL] Non-standard escape sequences from Crystal Reports

2010-10-05 Thread Rob Richardson
When our customer runs Crystal Reports reports created in an old version
of CR and edited with CR XI, he gets a large number of warnings in his
Postgres log files that look like this:
 
2010-10-05 11:28:00 EDTWARNING:  nonstandard use of \\ in a string
literal at character 159
2010-10-05 11:28:00 EDTHINT:  Use the escape string syntax for
backslashes, e.g., E'\\'.

I just ran a test in which I connected to a different database server
for which my log settings showed the statements that generated warnings.
The statement was this:
 
2010-10-05 11:28:00 EDTSTATEMENT:  select relname, nspname, relkind from
pg_catalog.pg_class c, pg_catalog.pg_namespace n where relkind in ('r',
'v') and nspname like 'public' and relname like 'coil\\_status' and
nspname not in ('pg_catalog', 'information_schema', 'pg_toast',
'pg_temp_1') and n.oid = relnamespace order by nspname, relname

Coil_status is the name of one of the two relations used in the report.

 
Apparently, at some time in the distant past, it was necessary to do
something special for underscore characters.  The two backslashes are
apparently being ignored, because the report operates correctly.  
 
I tried running the same report on my computer, but I did not get any
warnings.  Our customer's system and my computer are both running Postgres 8.4.
I tried copying the customer's postgres.conf file onto my system and
restarting my Postgres service, and I still got no warnings.  
 
Can anyone tell me if there is some setting that will stop these
warnings from showing up (other than turning off warning messages
completely, which I am about to suggest that the customer do)?  Also,
does anyone know why my computer is not showing warnings while our
customer's system is?
 
Thanks very much!
 
RobR
 


Re: [GENERAL] PG website testing

2010-10-05 Thread Alban Hertroys
On 4 Oct 2010, at 20:40, Thom Brown wrote:

> Hi all,
> 
> We're currently testing a new javascript change on the PostgreSQL
> docs.  This is to make sure monospaced fonts still appear at a
> reasonable size between browsers.  I'd appreciate it if some of you
> could do some browser testing.  http://magnus.webdev.postgresql.org/
> Only docs for 8.3 and below are available.  Please also check the main
> site for font issues, and post any issues you find here.
> 
> Thanks :)


The font displays a bit smaller than on the live site using Safari 5.0.2, but 
it seems to make some of the documentation easier to read - especially when 
there's lots of monospaced text in it. With the smaller size the code also 
doesn't catch the eye as much, of course.

One thing I noticed is that the italic version of the lower-case 'e' looks a 
little smeared along the horizontal line inside the curl. For example in: 
http://magnus.webdev.postgresql.org/docs/8.3/static/functions-aggregate.html

In Firefox 4b6 the monospace text looks a bit compressed at the top, like 
something heavy has been lying on it for too long. Not sure how to describe it, 
but the top half of the font seems smaller than it should be.

I attached some small screen-shots of those "problems" so that you can judge 
yourself.

Good work on how the documentation looks as of late btw, they're a pleasure to 
read. Even more so than before.



Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.






Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Massa, Harald Armin
>
>
> > We could add a PQlibpqVersion(), maybe, but it would be many years
> > before client code could rely on that being present.
>
> I think we should.
>
> And in a small way they can already - if they check for it
> dynamically, they'll know if it was 9.1 or newer at least :-) It'll be
> a long time before it's *easy* to use though. But if we don't add it
> now, it'll be even longer...


And an additional argument: isn't it a wise decision for clients to always
use the newest libpq, independent of the server? Newer libpqs happily
connect to older servers AND are supposed to have fewer bugs.

As libpq is very small, can be statically linked into applications, and there
are no license troubles, I can see no argument for using something that is not
brand new... (compared to other databases with 120-megabyte clients whose
licence restrictions are longer than their documentation...)

Or am I missing something?

Harald


-- 
GHUM GmbH
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607

Amtsgericht Stuttgart, HRB 734971
-
persuadere.
et programmare


[GENERAL] Problem installing 9.0.1 on Windows 7 x64

2010-10-05 Thread Michael Gould
I'm trying to install 9.0.1 32 bit on Windows 7 x64.  I cannot use the 64
bit server because we use the UUID contrib module and it evidently isn't
compatible with Windows 64 bit libraries.  When I try and install the 32 bit
server I get the following error message.


 


"Problem running the post-install step.  Installation may not complete
correctly.  The database initialisation failed.


 


I've tried this using my normal account which has administrator rights and
the system administrator account.  


 


Does anyone have any idea on why I can't install this?


 


Best Regards


Michael Gould


--
Michael Gould, Managing Partner
Intermodal Software Solutions, LLC
904.226.0978
904.592.5250 fax



[GENERAL] Need the binary of PG9.0 for Windows 32bit

2010-10-05 Thread Andrea Peri 2007

 Hi,

I need to download the build of Postgres 9.0 compiled for win32bit.

I see there is available the 9.0.1 version.
But I'm searching specifically the 9.0 for a test.

Is there some link to the old binaries?

Thx,

Andrea Peri.




Re: [GENERAL] psql \q hang

2010-10-05 Thread Tom Lane
Jeff Ross  writes:
> On 10/05/10 10:35, Tom Lane wrote:
>> Huh ... what version of libreadline (or libedit) are you using exactly?
>> How did the large dump get in there?

> According to the README in the source, this is "Gnu Readline library, 
> version 4.3.".

Well, that's not terribly new, but still ...

> I'm not sure how the dump got in there.  I have a table of US Cities, 
> Zip Codes and lat/long data in one of the databases and the contents of 
> that table were in the history file twice.

When you say "contents of the table", what do you mean exactly?  Did it
look like COPY data, psql SELECT output, or what?  I'm still quite
confused about how data that you didn't type got into the history.

If you still have the file on backup media, it might be useful to look
at the last few history lines before each instance of the table
contents.

regards, tom lane



Re: [GENERAL] psql \q hang

2010-10-05 Thread Jeff Ross

On 10/05/10 10:35, Tom Lane wrote:

Jeff Ross  writes:

Thanks for the hint, Greg!  It was indeed the .psql_history file.
Although I'm using the default history size, the file was over 5MB in
size and full of a table dump.  I truncated the file and no more delay
quitting psql.


Huh ... what version of libreadline (or libedit) are you using exactly?
How did the large dump get in there?

regards, tom lane



According to the README in the source, this is "Gnu Readline library, 
version 4.3.".


I'm not sure how the dump got in there.  I have a table of US Cities, 
Zip Codes and lat/long data in one of the databases and the contents of 
that table were in the history file twice.


Jeff



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Magnus Hagander
On Tue, Oct 5, 2010 at 18:41, Tom Lane  wrote:
> Peter Geoghegan  writes:
>> On 5 October 2010 16:33, Greg Sabino Mullane  wrote:
 How does the driver figure it out?
>>>
>>> DBD::Pg parses pg_config --version, then passes the information
>>> to the C programs for directive fiddling. I certainly hope
>>> other drivers are doing the same, as libpq varies across
>>> major versions a good deal.
>
>> I would imagine that most libpq wrapping drivers use libpq's
>> PQserverVersion(), which returns an integer that looks like 90001.
>
> The real problem is that neither of these can be trusted to tell you the
> *library* version.  PQserverVersion() is something else altogether,
> and I wouldn't want to assume that pg_config exactly matches the library
> you're linked to --- if it's even present at all.
>
> We could add a PQlibpqVersion(), maybe, but it would be many years
> before client code could rely on that being present.

I think we should.

And in a small way they can already - if they check for it
dynamically, they'll know if it was 9.1 or newer at least :-) It'll be
a long time before it's *easy* to use though. But if we don't add it
now, it'll be even longer...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread John R Pierce

 On 10/05/10 2:20 AM, Tim Uckun wrote:

The database machine is a Linode host. It's got two gigs of RAM. I
realize that this is not the ideal host for a database, but it's what I
have to work with.


What's the storage like on Linode?  Is that some sort of virtualized 
cloudy thing?  A wild guess says committed random writes are really slow.






Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Tom Lane
Peter Geoghegan  writes:
> On 5 October 2010 16:33, Greg Sabino Mullane  wrote:
>>> How does the driver figure it out?
>> 
>> DBD::Pg parses pg_config --version, then passes the information
>> to the C programs for directive fiddling. I certainly hope
>> other drivers are doing the same, as libpq varies across
>> major versions a good deal.

> I would imagine that most libpq wrapping drivers use libpq's
> PQserverVersion(), which returns an integer that looks like 90001.

The real problem is that neither of these can be trusted to tell you the
*library* version.  PQserverVersion() is something else altogether,
and I wouldn't want to assume that pg_config exactly matches the library
you're linked to --- if it's even present at all.

We could add a PQlibpqVersion(), maybe, but it would be many years
before client code could rely on that being present.
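For context, the integer those version functions return follows the pre-10 numbering scheme, with two decimal digits each for minor and patch. A quick sketch of decoding it (the helper name is invented for this illustration):

```python
def decode_pg_version(v: int) -> str:
    # Pre-PostgreSQL-10 scheme: e.g. 90004 -> "9.0.4", 80405 -> "8.4.5".
    # Helper name is made up for this sketch, not a libpq function.
    return f"{v // 10000}.{v // 100 % 100}.{v % 100}"

assert decode_pg_version(90004) == "9.0.4"
```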

regards, tom lane



Re: [GENERAL] psql \q hang

2010-10-05 Thread Tom Lane
Jeff Ross  writes:
> Thanks for the hint, Greg!  It was indeed the .psql_history file. 
> Although I'm using the default history size, the file was over 5MB in 
> size and full of a table dump.  I truncated the file and no more delay 
> quitting psql.

Huh ... what version of libreadline (or libedit) are you using exactly?
How did the large dump get in there?

regards, tom lane



[GENERAL] Basic SQL question and kudos

2010-10-05 Thread Dennis Gearon
I need to get a join across about 8 tables to only return rows for the MAX date 
in one of the tables for all otherwise identical return results. I am trying to 
get the most recent edit of normalized records that are composed of elements 
from all the tables. There could be 1 to 100+ successive edits to the 
normalized/multi-table record. There will be approximately 65,000 total records 
for now, 100,000,000s later.

If I make one of the values in the select statement be MAX(the date), does that 
automatically do grouping on 'the date' or will I only get ONE row total 
returned?
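For what it's worth, the difference is easy to see with a toy table (sketched here with Python's built-in SQLite, whose aggregate rules match PostgreSQL's on this point; the table and data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE edits (record_id INTEGER, edited_on TEXT);
INSERT INTO edits VALUES (1, '2010-01-01'), (1, '2010-02-01'), (2, '2010-01-15');
""")

# A bare aggregate with no GROUP BY collapses everything into ONE row:
assert con.execute("SELECT max(edited_on) FROM edits").fetchall() == [('2010-02-01',)]

# To get the latest edit per record you must GROUP BY explicitly:
assert con.execute(
    "SELECT record_id, max(edited_on) FROM edits GROUP BY record_id ORDER BY record_id"
).fetchall() == [(1, '2010-02-01'), (2, '2010-01-15')]
```

So no, MAX(the date) alone does not group automatically; without a GROUP BY you get a single row.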

KUDOS, I almost never need to write the postgres group . . . because it just 
works. It's only using postgres that I ever even read the digest that I 
receive. 

Soon I will have to write more to get performance tips. You guys are great on 
this list and seem to always have answers, Thx. 
Dennis Gearon

Signature Warning

EARTH has a Right To Life,
  otherwise we all die.

Read 'Hot, Flat, and Crowded'
Laugh at http://www.yert.com/film.php



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Massa, Harald Armin
> I would imagine that most libpq wrapping drivers use libpq's
> PQserverVersion(), which returns an integer that looks like 9.

Exactly that is exposed as the cn.server_version property of the psycopg2
connection object; but it does not help, as an 8.4 libpq happily connects to a
9.0 server... for some values of happy :)

Harald


-- 
GHUM GmbH
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607

Amtsgericht Stuttgart, HRB 734971
-
persuadere.
et programmare


Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Greg Sabino Mullane



> I would imagine that most libpq wrapping drivers use libpq's
> PQserverVersion(), which returns an integer that looks like 9.

Yeah, I'm familiar with that (I wrote it :) but it's not useful 
here for two reasons:

1) It's not available on older versions

2) It returns the *target* version, not the *compiled* version.
In other words, it requires an existing PGconn object.

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201010051212
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] Group By Question

2010-10-05 Thread Chris Velevitch
That's getting too complicated. It can be done simply as:

SELECT DISTINCT ON (test.people.id) test.people.id, test.people.name, test.likes.ref
FROM test.people LEFT JOIN test.likes ON test.people.id = test.likes.ref
ORDER BY test.people.id, test.likes.date DESC

Assuming people.id is unique, the ORDER BY clause makes the first row
of every group the most recent like, and the DISTINCT ON expression keeps
only that first row for each person.
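The sort-then-keep-first behaviour this relies on can be mimicked in plain Python (illustrative data, not from the thread):

```python
# Simulate PostgreSQL's DISTINCT ON (id) ... ORDER BY id, date DESC:
# sort the rows, then keep the first row seen for each id.
rows = [
    (1, "alice", "2010-10-01"),
    (2, "bob",   "2010-09-30"),
    (1, "alice", "2010-10-05"),
]
rows.sort(key=lambda r: r[2], reverse=True)   # date DESC (stable sort)
rows.sort(key=lambda r: r[0])                 # then id ASC

latest, seen = [], set()
for r in rows:
    if r[0] not in seen:                      # first row per id = most recent like
        seen.add(r[0])
        latest.append(r)

assert latest == [(1, "alice", "2010-10-05"), (2, "bob", "2010-09-30")]
```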



Chris
--
Chris Velevitch
Manager - Adobe Platform Users Group, Sydney
m: 0415 469 095
www.apugs.org.au

Adobe Platform Users Group, Sydney
October 2010: Flash Builder for SalesForce
Date: 25th October, 6pm for 6:30 start
Details and RSVP coming soon



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Peter Geoghegan
On 5 October 2010 16:33, Greg Sabino Mullane  wrote:
>>> if ($dbh->{pg_lib_version} >= 9) {
>>>
>>> I'd raise a bug with psycopg2 if it does not provide that information.
>>
>> How does the driver figure it out?
>
> DBD::Pg parses pg_config --version, then passes the information
> to the C programs for directive fiddling. I certainly hope
> other drivers are doing the same, as libpq varies across
> major versions a good deal.

I would imagine that most libpq wrapping drivers use libpq's
PQserverVersion(), which returns an integer that looks like 9.


-- 
Regards,
Peter Geoghegan



Re: [GENERAL] querying the version of libpq

2010-10-05 Thread Greg Sabino Mullane



>> if ($dbh->{pg_lib_version} >= 9) {
>>
>> I'd raise a bug with psycopg2 if it does not provide that information.
>
> How does the driver figure it out?

DBD::Pg parses pg_config --version, then passes the information 
to the C programs for directive fiddling. I certainly hope 
other drivers are doing the same, as libpq varies across 
major versions a good deal.
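A sketch of that parsing step in Python (the regex and function name are assumptions for illustration, not DBD::Pg's actual code):

```python
import re

def parse_pg_config_version(text: str):
    # Parse output like "PostgreSQL 9.0.1" into an integer tuple.
    # A two-component string such as "PostgreSQL 8.4" yields patch 0.
    m = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", text)
    if m is None:
        raise ValueError(f"unrecognized version string: {text!r}")
    return tuple(int(g) for g in m.groups("0"))

assert parse_pg_config_version("PostgreSQL 9.0.1") == (9, 0, 1)
```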

- -- 
Greg Sabino Mullane g...@turnstep.com
PGP Key: 0x14964AC8 201010051132
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] queriing the version of libpq

2010-10-05 Thread Magnus Hagander
On Tue, Oct 5, 2010 at 16:32, Greg Sabino Mullane  wrote:
>> My question: Which way is available to query the linked libpq version?
>
> That's information your driver should be supplying. For example, in
> Perl (DBD::Pg), you would say
>
> if ($dbh->{pg_lib_version} >= 9) {
>
> I'd raise a bug with psycopg2 if it does not provide that information.

How does the driver figure it out?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] queriing the version of libpq

2010-10-05 Thread Greg Sabino Mullane



> My question: Which way is available to query the linked libpq version?

That's information your driver should be supplying. For example, in 
Perl (DBD::Pg), you would say 

if ($dbh->{pg_lib_version} >= 9) {

I'd raise a bug with psycopg2 if it does not provide that information.

- -- 
Greg Sabino Mullane g...@turnstep.com
PGP Key: 0x14964AC8 201010051030
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8





Re: [GENERAL] psql \q hang

2010-10-05 Thread Jeff Ross

On 10/04/10 15:55, Greg Sabino Mullane wrote:




This is more of an odd anoyance than anything, but for the past month or
so when I \q out of psql I have to wait 20-25 seconds for the return to
the shell prompt.  This is the only copy of psql running and it doesn't


Perhaps there is an issue writing to the ~/.psql_history file? Try running
with --no-readline and see if it still happens. Otherwise, consider
using strace or lsof to see what it is doing during those 20 seconds.

- --
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201010041754
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8


Thanks for the hint, Greg!  It was indeed the .psql_history file. 
Although I'm using the default history size, the file was over 5MB in 
size and full of a table dump.  I truncated the file and no more delay 
quitting psql.


Jeff




Re: [GENERAL] error messages

2010-10-05 Thread Miklosi Attila
Hi!

Here are the details you asked for:

The hardware :
an intel server board S5500BC,
intel xeon quad core 2GHz processor,
8Gb RAM
and RAID.

The OS is debian linux-lenny
Here are the `uname` details:
Linux central-hu 2.6.26-2-686-bigmem #1 SMP Wed Aug 19 06:34:05 UTC 2009
i686 GNU/Linux

The postgres version is 8.3. The database was dumped with `pg_dump` and
was reloaded with `psql -f`.

Best regards
Attila Miklosi

> On Tue, Oct 5, 2010 at 7:19 AM, Miklosi Attila wrote:
> > Dear Postgres!
> >
> > We have just moved our database to a more powerful pc and since then we
> > keep getting this strange error message below:
> > "PGRES_FATAL_ERROR FATAL:  invalid frontend message type 87"
> >
> > What does this message mean? What could cause this error?

> We need exact details on what you did.  Describe the old and new
> computers, the OS you run, etc.

> Also, exactly *how* did you move the database?  Did you just copy the
> postgres files? Did you dump/reload?  What versions of postgres?

> Nobody knows what you did, so guessing what is wrong is just that: a guess.





Re: [GENERAL] error messages

2010-10-05 Thread Thom Brown
On 5 October 2010 14:17, Miklosi Attila  wrote:
> Hi!
>
> Here are the details you asked for:
>
> The hardware :
> an intel server board S5500BC,
> intel xeon quad core 2GHz processor,
> 8Gb RAM
> and RAID.
>
> The OS is debian linux-lenny
> Here are the `uname` details:
> Linux central-hu 2.6.26-2-686-bigmem #1 SMP Wed Aug 19 06:34:05 UTC 2009
> i686 GNU/Linux
>
> The postgres version is 8.3. The database was dumped with `pg_dump` and
> was reloaded with `psql -f`.

There's no context to your email.  Could you reply to whichever email
you originally sent?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935





Re: [ADMIN] [GENERAL] Data Not replicating

2010-10-05 Thread Devrim GÜNDÜZ
On Tue, 2010-10-05 at 11:03 +0530, Vishnu S. wrote:
> 
> Yes. I have written a script file like the following.
> 
> include ;
> init cluster (id=1, comment='hostname=10.2.26.53 port=5432');

It does not give me the $CLUSTER_NAME information.
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] error messages

2010-10-05 Thread Vick Khera
On Tue, Oct 5, 2010 at 7:19 AM, Miklosi Attila  wrote:
> Dear Postgres!
>
> We have just moved our database to a more powerful pc and since then we
> keep getting this strange error message below:
> "PGRES_FATAL_ERROR FATAL:  invalid frontend message type 87"
>
> What does this message mean? What could cause this error?

We need exact details on what you did.  Describe the old and new
computers, the OS you run, etc.

Also, exactly *how* did you move the database?  Did you just copy the
postgres files? Did you dump/reload?  What versions of postgres?

Nobody knows what you did, so guessing what is wrong is just that: a guess.
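As a small debugging aid: the number in "invalid frontend message type 87" is simply the ASCII value of the message-type byte the server read from the client, which can be checked directly:

```python
# "invalid frontend message type 87": the type code reported is the
# ASCII value of the first byte of the protocol message the backend saw.
assert chr(87) == "W"
assert ord("W") == 87
```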



Re: [GENERAL] [Off Topic] Software load balancing question

2010-10-05 Thread Vick Khera
On Mon, Oct 4, 2010 at 7:02 PM, Mike Christensen  wrote:
> Hi guys, in my typical "ignore the rules when I have an audience of
> smart techie people", I thought I'd poll the group about software load
> balancing.  I know next to nothing about it, but I'm deploying my site
>

Some really heavily-hit sites run Varnish as a reverse proxy/balancer.
It is designed from the ground up to handle extremely high volumes.
There's also Apache TrafficServer, which powers Yahoo! infrastructure.



Re: [GENERAL] Record Separator with psql -c when output to a variable not a file!

2010-10-05 Thread Vick Khera
On Mon, Oct 4, 2010 at 3:51 PM, andrew harvey  wrote:
> command=`psql -c "SELECT * FROM pg_stat_database"`
>
> when you retain the query result within the variable $command you need
> to count all the fields in the entire output in order to select the
> particular one that you want. (because the record separator newline
> doesn't apply to within the variable named above) Therefore all the
> sql output is bundled up into one string with very, very many fields.
>

To be clear, the backtick expression in the shell is what is eating
the newlines and converting them to spaces.  That's part of the
definition of how the backticks work.
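The same effect can be reproduced outside the shell (a Python sketch standing in for the backtick behaviour):

```python
import subprocess
import sys

# Capture multi-line command output; the captured string keeps its newlines.
out = subprocess.run(
    [sys.executable, "-c", "print('row1'); print('row2')"],
    capture_output=True, text=True,
).stdout
assert out == "row1\nrow2\n"

# What unquoted backtick expansion does in the shell is, in effect,
# word-splitting: every run of whitespace (newlines included) becomes
# a single separator, bundling everything into one long string.
assert " ".join(out.split()) == "row1 row2"
```

Quoting the expansion ("$command") in the shell preserves the newlines, just as the captured string above does.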



[GENERAL] error messages

2010-10-05 Thread Miklosi Attila
Dear Postgres!

We have just moved our database to a more powerful pc and since then we
keep getting this strange error message below:
"PGRES_FATAL_ERROR FATAL:  invalid frontend message type 87"

What does this message mean? What could cause this error?

Best regards
Attila Miklosi




Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Tim Uckun
>
> I'd still recommend manually running ANALYZE on any involved tables,
> just in case.
>


Just out of curiosity

Say I did run it. Would it make the query run a few orders of magnitude faster?

If you read my original post you'd see that I let the query run for
over an hour before I killed it.  It seems to me that a query joining
two tables on primary keys, updating one field in one table with
another field from another table on fewer than 100K records ought to take
less than three minutes.

Do the database statistics go that far out of whack especially if
autovacuum is on?



Re: [GENERAL] Is there a logical reason for 25P02?

2010-10-05 Thread Craig Ringer

On 10/05/2010 12:39 PM, Konstantin Izmailov wrote:

Howdy,
I've noticed that there is a difference in result of execution of the
following statement:
   INSERT INTO testtable(col1) VALUES(NULL);
depending on whether the command is prepared or not.

If I call PQprepare/PQexecPrepared for the statement, the error
"transaction aborted" is returned with SQL State = "25P02".


Specifically, I suspect the message should be:

ERROR:  current transaction is aborted, commands ignored until end of 
transaction block


If that's what you're getting, the problem was with an earlier command 
that returned an error you didn't notice, not with the command you just 
ran. I'm unsure if this could cause PQexecPrepared to return sqlstate 
25P02 if PQprepare fails, but would want to investigate the possibility.


--
Craig Ringer



Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Thom Brown
On 5 October 2010 12:06, Tim Uckun  wrote:
> On Tue, Oct 5, 2010 at 11:38 PM, Thom Brown  wrote:
>> On 5 October 2010 10:20, Tim Uckun  wrote:
>>> The database machine is a linode host.
>>
>> Good choice ;)
>>
>> Obvious question, but I'll ask it: Have you run ANALYZE or VACUUM
>> ANALYZE on the database?  Also being able to see the query plan would
>> help.
>>
>
> I did run vacuum analyze on the table I created. The other table I
> presume is auto vacuumed since that is on.

I'd still recommend manually running ANALYZE on any involved tables,
just in case.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Tim Uckun
On Tue, Oct 5, 2010 at 11:38 PM, Thom Brown  wrote:
> On 5 October 2010 10:20, Tim Uckun  wrote:
>> The database machine is a linode host.
>
> Good choice ;)
>
> Obvious question, but I'll ask it: Have you run ANALYZE or VACUUM
> ANALYZE on the database?  Also being able to see the query plan would
> help.
>


I did run vacuum analyze on the table I created. The other table I
presume is auto vacuumed since that is on.

I can get you the plan but I don't think it would help because the
result table is now blank.



Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Thom Brown
On 5 October 2010 10:20, Tim Uckun  wrote:
> The database machine is a linode host.

Good choice ;)

Obvious question, but I'll ask it: Have you run ANALYZE or VACUUM
ANALYZE on the database?  Also being able to see the query plan would
help.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] streaming replication question

2010-10-05 Thread Rajesh Kumar Mallah
Dear Riggs,
I am so sorry that I am not able to reproduce it this time. Maybe I did
something really stupid that day.
I remember at one point in time I ran the standby without any recovery.conf
even.
If I ever get that again I will surely try to reproduce it and report back.

Regds
Rajesh Kumar Mallah.

On Tue, Oct 5, 2010 at 5:08 AM, Simon Riggs  wrote:

> On Sat, 2010-10-02 at 12:02 -0400, Rajesh Kumar Mallah wrote:
> > FATAL:  too many KnownAssignedXids
>
> That's a bug.
>
> I've a few guesses about that, but I'll put some better instrumentation
> in to see if we can prove what's causing it.
>
> Can you reproduce that again, or was that failure isolated to that
> particular point in the WAL stream? You can try turning HS off, then
> turning it back on again later.
>
> If you suspect a bug in Hot Standby, please set
>trace_recovery_messages = DEBUG2
> in postgresql.conf and repeat the action
>
> Always useful to know
> * max_connections
> * current number of sessions
> * whether we have two phase commits happening
>
> Thanks
>
> --
>  Simon Riggs   www.2ndQuadrant.com
>  PostgreSQL Development, 24x7 Support, Training and Services
>
>


Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Thomas Guettler
Hi,

just a guess: Counting is slow, since it needs to check all rows. Explained 
here:

http://wiki.postgresql.org/wiki/Slow_Counting

  Thomas Güttler

Tim Uckun wrote:
> I have two tables. Table C has about 300K records in it. Table E has
> about a million records in it.  Today I tried to run this query.
> 
> update C
>  set result_count = X.result_count
> from C
> inner join (select  c_id, count(c_id) as result_count
>   from E
>   where c_id is not null
>   group by c_id) as  X
>   on C.id = X.c_id
> 
> All the fields mentioned are indexed. In the case of Table C it's the
> primary key. In the case table E it's just an index (non unique).
> 
> I let this query run for about three hours before I cancelled it.
> ...

-- 
Thomas Guettler, http://www.thomas-guettler.de/
E-Mail: guettli (*) thomas-guettler + de



Re: [GENERAL] Data Not replicating

2010-10-05 Thread Vishnu S.
Hi,

Yes. I have written a script file like the following.

include ;
init cluster (id=1, comment='hostname=10.2.26.53 port=5432');




Thanks & Regards,
Vishnu S


-Original Message-
From: Devrim GÜNDÜZ [mailto:dev...@gunduz.org] 
Sent: Tuesday, October 05, 2010 10:59 AM
To: Vishnu S.
Cc: pgsql-ad...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Data Not replicating

On Tue, 2010-10-05 at 10:19 +0530, Vishnu S. wrote:

> LOG:  unexpected EOF on client connection
> ERROR:  schema "_testcluster" does not exist at character 30

Did you run slonik_init_cluster?

--
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer PostgreSQL RPM 
Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr 
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz



Re: [GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Tim Uckun
On Tue, Oct 5, 2010 at 10:33 PM, Thomas Guettler  wrote:
> Hi,
>
> just a guess: Counting is slow, since it needs to check all rows. Explained 
> here:
>

Actually counting was pretty fast. As I said, when I created a table
using the subquery it went pretty fast. The update was the slow
part.



[GENERAL] Trying to figure out why these queries are so slow

2010-10-05 Thread Tim Uckun
I have two tables. Table C has about 300K records in it. Table E has
about a million records in it.  Today I tried to run this query.

update C
 set result_count = X.result_count
from C
inner join (select  c_id, count(c_id) as result_count
  from E
  where c_id is not null
  group by c_id) as  X
on C.id = X.c_id

All the fields mentioned are indexed. In the case of Table C it's the
primary key. In the case table E it's just an index (non unique).

I let this query run for about three hours before I cancelled it.

Next, thinking the subquery was the problem, I did a CREATE TABLE from
the subquery to extract the results into a separate table, and in that
table made the c_id field the primary key. That table only had about
80 thousand records. The creation of the table took just a couple of
minutes. Perfectly reasonable.

I tried to run the same thing again but this time joining the newly
created table instead of the subquery. I killed that query after an
hour.

Then I tried to be clever about it so I added a

Where id in (select id from C  where result_count =0 limit 10) into
the where clause. I figured this would run fast because it's only
updating ten records. The result_count field is indexed so that part
should run very quick and the where ID IN should be quick because it's
looking up ten items in the PKEY right?  Well that didn't go very well
either. I killed that query after about 15 minutes.

In the end (once again) I wrote a ruby script to pull up the records
from table E in small chunks and pull the corresponding records
from C in small chunks and update them myself. In other words
I wrote a ruby script to do the join and the update in chunks.

I am simply baffled.  What am I doing wrong?  Is the database not
tuned? Do I not have enough RAM? Is there some parameter I need to
tweak someplace?

There was nothing else going on in the database when this query was
running. I shut off all applications touching the database.

The database machine is a linode host. It's got two gigs of RAM. I
realize that this is not the ideal host for a database but it's what I
have to work with.

kernel.shmmax = 536870912
kernel.shmall = 2097152
shared_buffers = 128MB
effective_cache_size = 256MB
log_min_duration_statement = 1000
max_connections = 100

This was a case of doing an update on about 80,000 records using an
inner join on two tables on using the primary keys of both records.
Why would this query take more than a few seconds to run?
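One thing worth checking in the original statement: the target table C is repeated in the FROM clause, and in PostgreSQL that adds a second, unconstrained copy of C to the join. A form that avoids this, sketched against Python's in-memory SQLite (illustrative tables, not the real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE c (id INTEGER PRIMARY KEY, result_count INTEGER DEFAULT 0);
CREATE TABLE e (id INTEGER PRIMARY KEY, c_id INTEGER);
INSERT INTO c (id) VALUES (1), (2), (3);
INSERT INTO e (c_id) VALUES (1), (1), (2);
""")

# Set result_count from a correlated subquery; the target table is
# referenced once and never repeated in a FROM clause.
con.execute("""
UPDATE c SET result_count =
  (SELECT count(*) FROM e WHERE e.c_id = c.id)
""")

assert con.execute(
    "SELECT result_count FROM c ORDER BY id"
).fetchall() == [(2,), (1,), (0,)]
```

In PostgreSQL the equivalent rewrite would put only the aggregated subquery in FROM and tie it to the target with the WHERE clause.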



Re: [GENERAL] streaming replication question

2010-10-05 Thread Simon Riggs
On Sat, 2010-10-02 at 12:02 -0400, Rajesh Kumar Mallah wrote:
> FATAL:  too many KnownAssignedXids

That's a bug.

I've a few guesses about that, but I'll put some better instrumentation
in to see if we can prove what's causing it.

Can you reproduce that again, or was that failure isolated to that
particular point in the WAL stream? You can try turning HS off, then
turning it back on again later.

If you suspect a bug in Hot Standby, please set 
trace_recovery_messages = DEBUG2
in postgresql.conf and repeat the action

Always useful to know
* max_connections
* current number of sessions
* whether we have two phase commits happening

Thanks

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services




Re: [GENERAL] streaming replication question

2010-10-05 Thread Fujii Masao
On Tue, Oct 5, 2010 at 4:04 PM, Rajesh Kumar Mallah wrote:
>> > I am currently aiming to setup only SR  between 2 servers only.
>>
>> "only SR" means that you don't need Hot Standby (i.e., you don't need to
>> run any query on the standby server)? If so, you can set wal_level to
>> archive instead of hot_standby, and disable hot_standby parameter on the
>> standby.
>
> Thanks for the kind replies,
> May be I am mistaken in understanding of the terms. I do issue read only
> queries to the standby server. So I think its HS. But I also stream the
> wal records.

SR is capability to stream WAL records from the master to the standby and
keep the database on the standby up to date by applying the WAL records.
HS is capability to allow us to run read-only queries on the standby.
You seem to need both SR and HS. So you don't need to change wal_level and
hot_standby parameters.
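For reference, a minimal 9.0-era configuration matching this description might look like the following (the conninfo host and user are placeholders, not values from this thread):

```
# master postgresql.conf
wal_level = hot_standby        # 'archive' would suffice without read-only queries
max_wal_senders = 3

# standby postgresql.conf
hot_standby = on               # off if the standby need not serve queries

# standby recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5432 user=replication'
```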

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[GENERAL] Query tune, index is not using

2010-10-05 Thread AI Rumman
I am using two similar queries, where one query uses an index and the
other does not. I don't know why.
I don't know why.
explain analyze
select ticketstatus
from ticketstatus
inner join role2picklist on
role2picklist.picklistvalueid=ticketstatus.picklist_valueid
where roleid!='H1' order by sortid

                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Sort  (cost=2053.15..2055.23 rows=834 width=422) (actual time=6.929..7.805 rows=952 loops=1)
   Sort Key: role2picklist.sortid
   ->  Nested Loop  (cost=0.00..2012.68 rows=834 width=422) (actual time=0.151..5.109 rows=952 loops=1)
         ->  Seq Scan on ticketstatus  (cost=0.00..1.12 rows=12 width=422) (actual time=0.029..0.046 rows=12 loops=1)
         ->  Index Scan using role2picklist_picklistvalueid_idx on role2picklist  (cost=0.00..166.77 rows=69 width=8) (actual time=0.021..0.251 rows=79 loops=12)
               Index Cond: (role2picklist.picklistvalueid = "outer".picklist_valueid)
               Filter: ((roleid)::text <> 'H1'::text)
 Total runtime: 9.042 ms
(8 rows)


explain analyze
select cf_1507
from cf_1507
inner join role2picklist on
role2picklist.picklistvalueid=cf_1507.picklist_valueid
where roleid!='H1' order by sortid

                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Sort  (cost=5989.07..5998.27 rows=3682 width=422) (actual time=545.203..548.895 rows=4037 loops=1)
   Sort Key: role2picklist.sortid
   ->  Hash Join  (cost=1.66..5350.24 rows=3682 width=422) (actual time=5.817..536.341 rows=4037 loops=1)
         Hash Cond: ("outer".picklistvalueid = "inner".picklist_valueid)
         ->  Seq Scan on role2picklist  (cost=0.00..4208.30 rows=220692 width=8) (actual time=0.011..297.592 rows=220788 loops=1)
               Filter: ((roleid)::text <> 'H1'::text)
         ->  Hash  (cost=1.53..1.53 rows=53 width=422) (actual time=0.161..0.161 rows=53 loops=1)
               ->  Seq Scan on cf_1507  (cost=0.00..1.53 rows=53 width=422) (actual time=0.020..0.087 rows=53 loops=1)
 Total runtime: 553.567 ms
(9 rows)
\d role2picklist
  Table "public.role2picklist"
  Column | Type | Modifiers
-++---
 roleid | character varying(255) | not null
 picklistvalueid | integer | not null
 picklistid | integer | not null
 sortid | integer |
Indexes:
  "role2picklist_pkey" PRIMARY KEY, btree (roleid, picklistvalueid,
picklistid)
  "fk_2_role2picklist" btree (picklistid)
  "role2picklist_picklistvalueid_idx" btree (picklistvalueid)
  "role2picklist_roleid_picklistid_idx" btree (roleid, picklistid,
picklistvalueid)

\d cf_1507
                             Table "public.cf_1507"
      Column      |          Type          |                     Modifiers
------------------+------------------------+---------------------------------------------------
 cf_1507id        | integer                | not null default nextval('cf_1507_seq'::regclass)
 cf_1507          | character varying(200) | not null
 presence         | integer                | not null default 1
 picklist_valueid | integer                | not null default 0
Indexes:
    "cf_1507_pkey" PRIMARY KEY, btree (cf_1507id)


\d ticketstatus
                               Table "public.ticketstatus"
      Column      |          Type          |                        Modifiers
------------------+------------------------+--------------------------------------------------------
 ticketstatus_id  | integer                | not null default nextval('ticketstatus_seq'::regclass)
 ticketstatus     | character varying(200) |
 presence         | integer                | not null default 0
 picklist_valueid | integer                | not null default 0
Indexes:
    "ticketstatus_pkey" PRIMARY KEY, btree (ticketstatus_id)

Any help would be appreciated.
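One way to narrow this down (a sketch, not a definitive fix): refresh the statistics, then force the planner away from the hash join for a single trial run, to see whether a nested-loop plan like the one the ticketstatus query got is actually cheaper for cf_1507 as well:

```sql
ANALYZE role2picklist;
ANALYZE cf_1507;

BEGIN;
-- Experiment only; never leave this disabled globally.
SET LOCAL enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT cf_1507
  FROM cf_1507
  JOIN role2picklist
    ON role2picklist.picklistvalueid = cf_1507.picklist_valueid
 WHERE roleid <> 'H1'
 ORDER BY sortid;
ROLLBACK;
```

If the forced plan is slower, the hash join really is the better choice here and the cost is dominated by scanning and sorting ~4000 matching rows.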


[GENERAL] Windows 2008/IIS cannot connect to Postgres 9.0

2010-10-05 Thread Mike Christensen
Hey I know most of you guys are huge Windows fans, and UNIX support
will eventually be dropped in Postgres cuz really who uses that, but I
figure someone in here would love to help me with a Windows issue that
has me confused.

Postgres is working awesome on my Windows 2003 development
environment, using whatever flavor of IIS that thing comes with (is it
IIS6?)..  However, I'm in the process of moving my site over to
production cloud servers and these appear to be rain clouds, as I'm
having all sorts of issues getting this running..  When I set up the
site with Windows 2008/IIS, the first page that queries Postgres gives
me the following crash:


   3005
   An unhandled exception has occurred.
   10/5/2010 7:18:04 AM
   10/5/2010 7:18:04 AM
   fd49ed9391dd46fe9d2d11bd6392d9b6
   6
   1
   0
   /LM/W3SVC/1/ROOT-1-129307366626956000
   Full
   /
   C:\inetpub\wwwroot\


   2796
   w3wp.exe
   IIS APPPOOL\DefaultAppPool
   InvalidOperationException
   The Connection is not open.
   http://127.0.0.1/logon.html
   /logon.html
   127.0.0.1

   False

   IIS APPPOOL\DefaultAppPool
   7
   IIS APPPOOL\DefaultAppPool
   False
   at Npgsql.NpgsqlCommand.CheckConnectionState() at
Npgsql.NpgsqlCommand.GetReader(CommandBehavior cb) at
Npgsql.NpgsqlCommand.ExecuteReader(CommandBehavior cb) at
Npgsql.NpgsqlCommand.ExecuteDbDataReader(CommandBehavior behavior) at
System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader() at
KPCServer.DB.Query.ExecSproc(String sproc, DbParameter[] parameters)
at KPCServer.KitchenPC.GetFeaturedRecipes(Int32 max) at
KPCServer.Pages.Logon.FeatuerdRecipes(TextWriter output,
DynamicContentArgs args) at Imp.Compiler.CompiledPage.Render(BasePage
page, TextWriter output) at
System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step,
Boolean& completedSynchronously)


I couldn't find anything on the Googletubes about this, so after like
half an hour of cursing, I finally ran across a setting that fixes it.
 If I switch the "Managed Pipeline" setting (WTF does this even mean?)
from "Integrated" to "Classic", everything works.  My question is
what's the deal with this?  Does "Integrated" somehow block the
process from opening a connection with Postgres?  In the Postgres logs
I couldn't even see any evidence that anything was trying to connect
with it..  If anyone has an IIS7 website connecting to Postgres, maybe
you can share your app pool settings with me so I can understand the
best practice, as this will be a production environment now..  Thanks,
and sorry again for the quasi-off-topic email..  Go Postgres!
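For reference, the workaround Mike found can also be applied from the command line; this is a sketch assuming the pool name "DefaultAppPool" shown in the event log above:

```
REM Switch the application pool to the Classic managed pipeline
REM (run from an elevated prompt on the IIS7 box).
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /managedPipelineMode:Classic
```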

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] queriing the version of libpq

2010-10-05 Thread Magnus Hagander
On Tue, Oct 5, 2010 at 09:33, Massa, Harald Armin  wrote:
> With 9.0 there is the new hex encoding for BYTEA, which is activated by
> default.
> libpq BEFORE 9.0 are not able to decode that encoding.
> I am programming with Python, using psycopg2. When psycopg2 is linked with
> libpq 9.0, everything works fine with hex-encoding; if psycopg2 is linked
> with libpq < 9.0, decoding hex-encoded bytea fails (it stays hexencoded).
> This happens because in default configuration psycopg2 calls the
> libpq-decode-encoded-bytea function (which is the way it should be done).
> Now I would love to have an additional check "is the used psycopg2 linked to
> an advanced-enough libpq", to be able to set bytea_output to 'escape' if the
> libpq is not worthy.
> My question: Which way is available to query the linked libpq version?
> My other option is to select 'something_that_gets_casted_to_bytea'::bytea,
> and check the return value. BUT that requires a round-trip to the server

You can try calling PQconninfoParse() on a connection string that has
application_name= in it. That will fail on anything pre-9.0, assuming
there's a way to access that function through psycopg2.

But it does highlight the fact that it wouldn't suck to have a function
in libpq returning its version, so that applications can check this at
runtime; clearly it would also be useful when libpq is linked "through"
something like psycopg2.
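From Python, one way to run that probe without a server round trip is to look for the symbol directly. This is a sketch using ctypes rather than any psycopg2 API, and it leans on the assumption that PQconninfoParse() first appeared in the 9.0 libpq; note it inspects whatever libpq ctypes finds, which may differ from the one psycopg2 was actually linked against:

```python
import ctypes
import ctypes.util

def libpq_is_90_or_newer():
    """Return True/False for the system libpq, or None if no libpq was found."""
    path = ctypes.util.find_library("pq")
    if path is None:
        return None
    lib = ctypes.CDLL(path)
    # PQconninfoParse() was introduced with the 9.0 libpq, so its mere
    # presence distinguishes 9.0+ from older libraries.
    return hasattr(lib, "PQconninfoParse")

print(libpq_is_90_or_newer())
```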

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Having two simultaneous and similar database

2010-10-05 Thread Sairam Krishnamurthy
Scott, thanks for answering the question. That's exactly the reason.

Also, any operation on the tables is going to be huge: tens of
thousands of rows inserted or deleted. This basically locks the table,
and that's why we wanted to update a second database and sync it with
the production database when no one is using the system.



On Mon, Oct 4, 2010 at 9:55 PM, Scott Marlowe  wrote:
> On Mon, Oct 4, 2010 at 8:46 PM, Rajesh Kumar Mallah
>  wrote:
>> Dear Sai,
>>
>> Why do you want to update at nite only. You can setup streaming replication
>> with pgsql9.0 and
>> have it updated almost instantly.
>
> They could have something where they need a stable unchanging version
> for testing, or they're load testing and need to write to the slave,
> etc.
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Having two simultaneous and similar database

2010-10-05 Thread Sairam Krishnamurthy
Ben,

Thanks for the pointer. I did have a look at this before posting. It
seems to be more of a backup mechanism than a way of maintaining two
databases. So I basically have to set up continuous archiving and use
the raw WAL files to update the other database.

I was rather looking for a direct way to maintain two synchronous databases.
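For what it's worth, the continuous-archiving route would look roughly like this on database A; a sketch only, with made-up paths and hostnames:

```
# postgresql.conf on database A (warm-standby style WAL shipping)
archive_mode    = on
archive_command = 'rsync -a %p dbhost-b:/var/lib/pgsql/wal_archive/%f'
archive_timeout = 300   # force a segment switch at least every 5 minutes
```

Database B would then restore those segments continuously (or once per night) with a matching restore_command.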



On Mon, Oct 4, 2010 at 6:31 PM, Ben Madin
 wrote:
> I think you want something like :
> http://www.postgresql.org/docs/8.4/interactive/backup.html
> On 05/10/2010, at 7:02 AM, Sairam Krishnamurthy wrote:
>
> Hi all.
>
> I am trying to create two databases in two different machines connected over
> the lan. Both the databases have similar tables and fields in them. I will
> be updating database A always. I want the update to be backed up to database
> B automatically during the night when no one will be using the databases.
>
> Can someone tell me if this can be done and, if yes, how?
>
> --
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there a logical reason for 25P02?

2010-10-05 Thread Cédric Villemain
2010/10/5 Konstantin Izmailov :
> Let me ask another question related to the INSERT as well. I'm running
> periodic import into Postgres and sometimes the application tries to insert
> a duplicate row which, as expected, results in "integrity violation" error.
> This aborts entire transaction (which is huge). I can use "SAVEPOINT
> ...;INSERT ...;RELEASE SAVEPOINT ..." but I'm concerned with potential
> performance hit. I haven't had time to benchmark the difference in
> performance, so could you please suggest if the difference will be
> noticeable. Is there a better approach? Is it possible to customize INSERT
> behavior to not abort transaction due to an integrity violation? Would it be
> possible to alter COPY command behavior as well (to gracefully continue
> after integrity violation)?

you probably want pgloader : http://pgloader.projects.postgresql.org/
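As for the SAVEPOINT overhead question: the usual server-side alternative is a PL/pgSQL wrapper whose EXCEPTION clause establishes the subtransaction for you, saving the two extra client round trips per row. A sketch (the target table and its columns are hypothetical):

```sql
-- Hypothetical target: imported_rows(id integer primary key, payload text)
CREATE OR REPLACE FUNCTION insert_skip_dup(p_id integer, p_payload text)
RETURNS boolean AS $$
BEGIN
    INSERT INTO imported_rows (id, payload) VALUES (p_id, p_payload);
    RETURN true;                    -- row inserted
EXCEPTION WHEN unique_violation THEN
    RETURN false;                   -- duplicate swallowed; transaction survives
END;
$$ LANGUAGE plpgsql;
```

Note that the EXCEPTION block still costs a subtransaction internally, just like an explicit SAVEPOINT; the saving is in network round trips, not in subtransaction bookkeeping.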


-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] queriing the version of libpq

2010-10-05 Thread Massa, Harald Armin
With 9.0 there is the new hex encoding for BYTEA, which is activated by
default.

libpq versions BEFORE 9.0 are not able to decode that encoding.

I am programming with Python, using psycopg2. When psycopg2 is linked with
libpq 9.0, everything works fine with hex-encoding; if psycopg2 is linked
with libpq < 9.0, decoding hex-encoded bytea fails (it stays hex-encoded).

This happens because in default configuration psycopg2 calls the
libpq-decode-encoded-bytea function (which is the way it should be done).

Now I would love to have an additional check "is the used psycopg2 linked to
an advanced-enough libpq", to be able to set bytea_output to 'escape' if the
libpq is not worthy.
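The escape fallback itself is a single setting on the server side; a sketch of the session-level form:

```sql
-- Only valid against a 9.0+ server; older servers have no bytea_output GUC,
-- and older servers only ever emit escape format anyway.
SET bytea_output = 'escape';
```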

My question: Which way is available to query the linked libpq version?

My other option is to select 'something_that_gets_casted_to_bytea'::bytea,
and check the return value. BUT that requires a round-trip to the server

Harald
-- 
GHUM GmbH
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607

Amtsgericht Stuttgart, HRB 734971
-
persuadere.
et programmare


Re: [GENERAL] Question regarding custom parser

2010-10-05 Thread Arjen Nienhuis
You can create an index on to_tsvector(replace(foo, '-', ' ')) and then
search by applying the same replace(foo, ...) inside the match expression.
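Spelled out, that suggestion might look like the following; the table "items" and column "code" are hypothetical, and the 'simple' configuration is a guess that fits product-code data:

```sql
-- Index the dash-free form of the code column.
CREATE INDEX items_code_fts
    ON items
 USING gin (to_tsvector('simple', replace(code, '-', ' ')));

-- Query with the identical expression so the index is usable.
SELECT *
  FROM items
 WHERE to_tsvector('simple', replace(code, '-', ' '))
       @@ plainto_tsquery('simple', replace('v-74', '-', ' '));
```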

On Mon, Oct 4, 2010 at 11:41 AM, Arthur van der Wal <
arthurvander...@gmail.com> wrote:

> Hi,
>
> I want to change the way PostgreSQL splits text into tokens, for example:
>
> plainto_tsquery("v-74") should split it up as "v" & "74" instead of "v" &
> "-74".
>
> Another example:
>
> select to_tsvector('NL83-V-74-001-001');
>   '-001':5,6 '74':4 'nl83':2 'nl83-v':1 'v':3
>
> Searching for 'v-71' does not find the database entry as the '-' in 'v-71'
> is not indexed. It's hard to determine when PostgreSQL splits things up by
> '-' and when it does not.
>
>
> I tried writing my own parser (based on the test_parser example) which
> does nothing more than split at '-'; however, it seems to me that the logic
> for finding 'base' words and derivatives that Postgres does so nicely
> doesn't work anymore.
>
> Another way would be to disable the (signed) int tokeniser and have the
> unsigned int tokeniser accept preceding 0's.
>
> Can anybody point me in the right direction as in how to tackle this
> problem?
>
> Thanks very much in advance,
>
> Arthur van der Wal
>


Re: [GENERAL] streaming replication question

2010-10-05 Thread Rajesh Kumar Mallah
On Tue, Oct 5, 2010 at 12:09 AM, Fujii Masao  wrote:

> On Tue, Oct 5, 2010 at 2:57 AM, Rajesh Kumar Mallah
>  wrote:
> > I am currently aiming to setup only SR  between 2 servers only.
>
> "only SR" means that you don't need Hot Standby (i.e., you don't need to
> run any query on the standby server)? If so, you can set wal_level to
> archive instead of hot_standby, and disable hot_standby parameter on the
> standby.
>

Thanks for the kind replies,
Maybe I am mistaken in my understanding of the terms. I do issue read-only
queries to the standby server, so I think it is HS. But I also stream the
WAL records.


>
> > My question is in SR setup do i need to keep transferring the archived
> WAL
> > files also ?
>
> No if wal_keep_segments is high enough that WAL files required for the
> standby
> can be saved in pg_xlog directory of the master.
>
> > when the slave requests WAL records from the master does the master
> consult
> > the archived
> > wal files also for sending the records ?
>
> No. In 9.0, the master doesn't read and send the archived WAL files.
> But I agree to make the master send the archived WAL files, and I'm
> now proposing that.
> http://archives.postgresql.org/pgsql-hackers/2010-09/msg02040.php
>
> Regards,
>

I just wanted to let you know how I am now managing the WAL logs.
I have mounted the WAL archive folder (Xp) of the primary on the slave via
NFS at a folder (Ys). The folder Xp on the primary is also periodically
rsynced to a folder Xs on the slave with the --delete option. In
recovery.conf on the slave, archive_cleanup_command has been pointed at Ys
(not Xs); since Ys is a read-write NFS mount, the cleanup removes the
no-longer-needed archived WAL files from the source (Xp) as well, and they
subsequently disappear from Xs too because of rsync's --delete option.
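In 9.0 the standby side of such a setup is typically driven by recovery.conf; a sketch with hypothetical paths (Ys mounted at /mnt/Ys) and a hypothetical replication role:

```
# recovery.conf on the slave
standby_mode             = 'on'
primary_conninfo         = 'host=primary.example port=5432 user=replication'
restore_command          = 'cp /mnt/Ys/%f %p'
archive_cleanup_command  = 'pg_archivecleanup /mnt/Ys %r'
```

pg_archivecleanup ships with 9.0 and removes every archived segment older than the %r restart point, which is the cleanup behaviour described above.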

My original requirement is that I should be able to divert a certain amount
of read-only queries from the master to the slave so as to reduce load on
the primary. Our pilot applications have been modified so that they always
contact the master when they have to modify data and the slave when they
have to read data. The current setup provides for the requirement, but I am
concerned about what happens if the slave is shut down for a prolonged
duration: how will I get to know when a base backup is required? I am trying
to read the docs and also practically observing the effects of the various
steps. We have a switch in the software that lets us not use the slave at
all!

Regds
Rajesh Kumar Mallah.
Tradeindia.com - India's Largest B2B MarketPlace.
(uses PostgreSQL for past 10 years)


>
> --
> Fujii Masao
> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
> NTT Open Source Software Center
>