Re: [GENERAL] Using pg_dump in a cron

2005-06-02 Thread Patrick . FICHE
Thanks Tom,

You are absolutely right, the command "stty istrip" was the first line of my
.profile...

Regards,
Patrick


--- 
Patrick Fiche 
email : [EMAIL PROTECTED] 
tel: 01 69 29 36 18 

--- 




-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Tom Lane
Sent: Thursday, June 2, 2005 6:07 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Using pg_dump in a cron 


[EMAIL PROTECTED] writes:
> The dump execution is OK but my user gets a mail containing the
> following message:
>  
> Your "cron" job on ALIS
> . $HOME/.profile; pg_dump alis -U postgres -f
> $AQSADIR/data/backup/alis_data.sql -a -d >/dev/null 2>&1
>  
> produced the following output:
>  
> stty: : No such device or address

> What's wrong with it ?

My bet is the .profile file contains a call of stty ... and of course
stdin is going to be pointing somewhere else than a terminal ...

regards, tom lane
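A cron-safe variant of the job avoids sourcing .profile at all and keeps the dump's stderr instead of discarding it. A hedged sketch of a crontab entry (the paths, schedule, and log location are illustrative, adapted from the command quoted above):

```
# crontab fragment -- do not source .profile (it may run stty against no tty)
AQSADIR=/home/alis/aqsa
PATH=/usr/local/pgsql/bin:/usr/bin:/bin
# nightly data-only dump at 02:00; keep stderr in a log so failures are visible
0 2 * * * pg_dump alis -U postgres -a -f $AQSADIR/data/backup/alis_data.sql 2>>$AQSADIR/data/backup/alis_dump.log
```

Setting the needed variables directly in the crontab sidesteps the interactive-only commands in .profile entirely.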

---(end of broadcast)---
TIP 8: explain analyze is your friend



Re: [GENERAL] PostgreSQL vs. InnoDB performance

2005-06-02 Thread Tino Wildenhain
On Friday, 2005-06-03 at 00:36 +0200, Peter Eisentraut wrote:
> On a particular system, loading 1 million rows (100 bytes, nothing 
> fancy) into PostgreSQL one transaction at a time takes about 90 
> minutes.  Doing the same in MySQL/InnoDB takes about 3 minutes.  InnoDB 
> is supposed to have a similar level of functionality as far as the 
> storage manager is concerned, so I'm puzzled about how this can be.  
> Does anyone know whether InnoDB is taking some kind of questionable 
> shortcuts it doesn't tell me about?  The client interface is DBI.  This 
> particular test is supposed to simulate a lot of transactions happening 
> in a short time, so turning off autocommit is not relevant.

Maybe PostgreSQL's referential-integrity checks (which actually work) are
biting here? That test is a bit vague - maybe we can see more details? :-)

> As you might imagine, it's hard to argue when the customer sees these 
> kinds of numbers.  So I'd take any FUD I can send back at them. :)
> 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] SQL call to get pid of current connection

2005-06-02 Thread David Parker
That certainly looks like it! Thanks!

- DAP 

>-Original Message-
>From: Michael Fuhr [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, June 02, 2005 8:47 PM
>To: Bruno Wolff III; David Parker; postgres general
>Subject: Re: [GENERAL] SQL call to get pid of current connection
>
>On Thu, Jun 02, 2005 at 05:02:14PM -0500, Bruno Wolff III wrote:
>> On Thu, Jun 02, 2005 at 17:04:22 -0400,
>>   David Parker <[EMAIL PROTECTED]> wrote:
>> > Is there a function call that will return the pid of the postgres 
>> > process associated with the current client connection?
>> 
>> I thought I remembered seeing one, but I looked through the 
>> development docs and didn't see a function or a GUC variable 
>with that information.
>
>Are you looking for pg_backend_pid()?  It's documented in the 
>"Statistics Collector" section of the "Monitoring Database Activity"
>chapter; it's been around since 7.3:
>
>http://www.postgresql.org/docs/7.3/interactive/release-7-3.html
>http://www.postgresql.org/docs/7.3/interactive/monitoring-stats.html
>http://www.postgresql.org/docs/7.4/interactive/monitoring-stats.html
>http://www.postgresql.org/docs/8.0/interactive/monitoring-stats.html
>
>--
>Michael Fuhr
>http://www.fuhr.org/~mfuhr/
>

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] SQL call to get pid of current connection

2005-06-02 Thread Tom Lane
"David Parker" <[EMAIL PROTECTED]> writes:
> Is there a function call that will return the pid of the postgres
> process associated with the current client connection?

libpq makes this available as PQbackendPID().  Dunno about other
client libraries.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] PostgreSQL vs. InnoDB performance

2005-06-02 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Something ain't kosher. I tried the same test with the latest and greatest
DBI, DBD::Pg, and PostgreSQL, tuned everything up, and still got around
10,000 transactions per minute or so. There is no way MySQL is doing an
order of magnitude or more better than that and using properly transactioned
inserts.

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200506022050
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iD8DBQFCn6mavJuQZxSWSsgRAq4JAJ9SszAYi6i+RhhS0AQTLGr/+JqI6ACgk9Dj
3qXjrSk1nnh4vdnGmY/R3e0=
=kJkK
-END PGP SIGNATURE-



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] SQL call to get pid of current connection

2005-06-02 Thread Michael Fuhr
On Thu, Jun 02, 2005 at 05:02:14PM -0500, Bruno Wolff III wrote:
> On Thu, Jun 02, 2005 at 17:04:22 -0400,
>   David Parker <[EMAIL PROTECTED]> wrote:
> > Is there a function call that will return the pid of the postgres
> > process associated with the current client connection?
> 
> I thought I remembered seeing one, but I looked through the development
> docs and didn't see a function or a GUC variable with that information.

Are you looking for pg_backend_pid()?  It's documented in the
"Statistics Collector" section of the "Monitoring Database Activity"
chapter; it's been around since 7.3:

http://www.postgresql.org/docs/7.3/interactive/release-7-3.html
http://www.postgresql.org/docs/7.3/interactive/monitoring-stats.html
http://www.postgresql.org/docs/7.4/interactive/monitoring-stats.html
http://www.postgresql.org/docs/8.0/interactive/monitoring-stats.html

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
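The function Michael points to can be called directly from any SQL session; a minimal example:

```sql
-- PID of the backend process serving the current connection
SELECT pg_backend_pid();
```

The returned number should match one of the "postgres:" entries in ps output on the server.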

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL vs. InnoDB performance

2005-06-02 Thread Tom Lane
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> On a particular system, loading 1 million rows (100 bytes, nothing 
> fancy) into PostgreSQL one transaction at a time takes about 90 
> minutes.  Doing the same in MySQL/InnoDB takes about 3 minutes.

What sort of hardware, exactly?

Simple division says that that's about 11K transactions per minute,
which is more or less what you could expect to get with a 15000RPM
drive if everyone is honest and a commit actually involves bits hitting
a platter.  Now we've talked about schemes for committing more than one
transaction per disk revolution, but there's no way we could get to 30
per revolution given our lack of knowledge about the actual disk layout.

I don't think I believe that InnoDB is really truly committing 330K
transactions per minute.  Suggest that the customer try a pull-the-plug
type of test.  Does the DB come back at all, and if so how close to the
last reported-committed row has it got?

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] PostgreSQL Developer Network

2005-06-02 Thread Gevik babakhani
Dear People,

After a long time of various delays, I managed to get back on track
developing the long-promised PGDN website. As some of you might know, I was
working on a project called the PostgreSQL Developer Network.

The website (which is a la MSDN) is meant to provide a knowledge base
for programming knowledge regarding writing code for PostgreSQL. Hopefully,
with your contributions, it will be a great place for those (like me) who
would like to contribute.

The website is being developed at www.truesoftware.net:8081/pgdn/
using PHP5 and, of course, PostgreSQL 8.0.3.

Please do not hesitate to share your ideas.

Kind regards,

Gevik


Re: [GENERAL] PostgreSQL vs. InnoDB performance

2005-06-02 Thread David Fetter
On Fri, Jun 03, 2005 at 12:36:29AM +0200, Peter Eisentraut wrote:
> On a particular system, loading 1 million rows (100 bytes, nothing
> fancy) into PostgreSQL one transaction at a time takes about 90
> minutes.  Doing the same in MySQL/InnoDB takes about 3 minutes.
> InnoDB is supposed to have a similar level of functionality as far
> as the storage manager is concerned, so I'm puzzled about how this
> can be.  Does anyone know whether InnoDB is taking some kind of
> questionable shortcuts it doesn't tell me about?  The client
> interface is DBI.  This particular test is supposed to simulate a
> lot of transactions happening in a short time, so turning off
> autocommit is not relevant.

This doesn't sound like a very good test.  Have they tried the OSDL
stuff and/or Jan Wieck's PHP-TPCW?

http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/
http://pgfoundry.org/projects/tpc-w-php/

> As you might imagine, it's hard to argue when the customer sees
> these kinds of numbers.  So I'd take any FUD I can send back at
> them. :)

HTH :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL vs. InnoDB performance

2005-06-02 Thread Joshua D. Drake

Peter Eisentraut wrote:
> On a particular system, loading 1 million rows (100 bytes, nothing
> fancy) into PostgreSQL one transaction at a time takes about 90
> minutes.  Doing the same in MySQL/InnoDB takes about 3 minutes.  InnoDB
> is supposed to have a similar level of functionality as far as the
> storage manager is concerned, so I'm puzzled about how this can be.
> Does anyone know whether InnoDB is taking some kind of questionable
> shortcuts it doesn't tell me about?

What about fsync/opensync and WAL segments?

What happens if we turn off fsync entirely?

> The client interface is DBI.  This
> particular test is supposed to simulate a lot of transactions happening
> in a short time, so turning off autocommit is not relevant.
>
> As you might imagine, it's hard to argue when the customer sees these
> kinds of numbers.  So I'd take any FUD I can send back at them. :)

--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
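The fsync experiment Joshua suggests is a quick way to see whether commit-time disk flushes account for the gap. A sketch of the relevant postgresql.conf knobs from the 8.0 era (values are illustrative; never leave fsync off in production, since a crash can then corrupt the database):

```
# postgresql.conf -- commit durability knobs (8.0-era names)
fsync = false             # DANGEROUS: only for benchmarking the flush cost
wal_sync_method = fsync   # or fdatasync/open_sync, platform-dependent
checkpoint_segments = 8   # more WAL segments between checkpoints
commit_delay = 0          # microseconds to wait for group commit
```

If the PostgreSQL numbers jump to InnoDB's range with fsync off, the difference is almost certainly in how honestly each system flushes at commit.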

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] PostgreSQL vs. InnoDB performance

2005-06-02 Thread Peter Eisentraut
On a particular system, loading 1 million rows (100 bytes, nothing 
fancy) into PostgreSQL one transaction at a time takes about 90 
minutes.  Doing the same in MySQL/InnoDB takes about 3 minutes.  InnoDB 
is supposed to have a similar level of functionality as far as the 
storage manager is concerned, so I'm puzzled about how this can be.  
Does anyone know whether InnoDB is taking some kind of questionable 
shortcuts it doesn't tell me about?  The client interface is DBI.  This 
particular test is supposed to simulate a lot of transactions happening 
in a short time, so turning off autocommit is not relevant.

As you might imagine, it's hard to argue when the customer sees these 
kinds of numbers.  So I'd take any FUD I can send back at them. :)

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] adding columns with defaults is not implemented

2005-06-02 Thread Joshua D. Drake

Marcelo wrote:
> Hi,
> Thanks for your reply, but I have some doubts.
>
> Are you suggesting I create the column as an integer then change it to
> serial? In Pgsql 7 you can't change a column type.

Serial is not a real data type. Do this:

create table foo (bar integer not null);
create sequence foo_bar_seq;
alter table foo alter column bar set default nextval('foo_bar_seq');

Sincerely,

Joshua D. Drake

> If I create the column as an int then add a default value, how can I make
> this default value increment with each insert?
>
> Thanks again for your help.
> Marcelo


- Original Message - 
From: "Scott Marlowe" <[EMAIL PROTECTED]>

To: "Marcelo" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, June 02, 2005 4:43 PM
Subject: Re: [GENERAL] adding columns with defaults is not implemented




On Thu, 2005-06-02 at 15:29, Marcelo wrote:


Hello,
Using Postgres 7.4, I am trying to perform an "alter table
temptable add column "myCol" serial"

It gives the following msg
ERROR:  adding columns with defaults is not implemented

You cannot add a column that is serial in a table which already has
data in postgres 7.

Is there a way I can create a serial column on a table which already
has data? Or is the only solution upgrading to postgres 8 ?


You can add a default after you add the column with a separate alter
table statement...

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly



--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] adding columns with defaults is not implemented

2005-06-02 Thread Sven Willenberger
On Thu, 2005-06-02 at 17:47 -0400, Marcelo wrote:
> Hi,
> Thanks for your reply, but I have some doubts.
> 
> Are you suggesting I create the column as an integer then change it to
> serial? In Pgsql 7 you can't change a column type.
> 
> If I create the column as an int then add a default value, how can I make
> this default value increment with each insert?
> 
> Thanks again for your help.
> Marcelo
> 
> 
> - Original Message - 
> From: "Scott Marlowe" <[EMAIL PROTECTED]>
> To: "Marcelo" <[EMAIL PROTECTED]>
> Cc: 
> Sent: Thursday, June 02, 2005 4:43 PM
> Subject: Re: [GENERAL] adding columns with defaults is not implemented
> 
> 
> > On Thu, 2005-06-02 at 15:29, Marcelo wrote:
> > > Hello,
> > > Using Postgres 7.4, I am trying to perform an "alter table
> > > temptable add column "myCol" serial"
> > >
> > > It gives the following msg
> > > ERROR:  adding columns with defaults is not implemented
> > >
> > > You cannot add a column that is serial in a table which already has
> > > data in postgres 7.
> > >
> > > Is there a way I can create a serial column on a table which already
> > > has data? Or is the only solution upgrading to postgres 8 ?
> >
> > You can add a default after you add the column with a separate alter
> > table statement...
> >
> > ---(end of broadcast)---
> > TIP 2: you can get off all lists at once with the unregister command
> > (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
> 
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly

[Bottom posting to the top-posted reply] 

You would have to do this in steps: Assuming that "mytable" exists and
"mycol" is currently of type int and currently has as its max value 100:

create sequence mytable_mycol_seq start with 101;
alter table mytable alter mycol set default
nextval('mytable_mycol_seq'::text);

At this point any new inserts will start autoincrementing the mycol
field starting with value 101.

Sven
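Combining Joshua's and Sven's steps, the complete recipe for an existing 7.x table might look like this (table and column names are illustrative; pick the sequence's start value just past the column's current maximum):

```sql
-- 7.4 can't ADD COLUMN ... serial, but it can add a plain integer column
ALTER TABLE mytable ADD COLUMN mycol integer;
-- backfill existing rows by hand or from a query, then:
CREATE SEQUENCE mytable_mycol_seq START WITH 101;
ALTER TABLE mytable ALTER COLUMN mycol
  SET DEFAULT nextval('mytable_mycol_seq');
-- optional, to approximate what serial would have given you:
ALTER TABLE mytable ALTER COLUMN mycol SET NOT NULL;
ALTER TABLE mytable ADD UNIQUE (mycol);
```

The NOT NULL and UNIQUE constraints only succeed once the backfill is done, which is why they come last.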


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] SQL call to get pid of current connection

2005-06-02 Thread David Parker
Yeah, simple enough. I just wanted to make sure I wasn't duplicating
something that was already there.

Thanks.

- DAP 

>-Original Message-
>From: Bruno Wolff III [mailto:[EMAIL PROTECTED] 
>Sent: Thursday, June 02, 2005 6:02 PM
>To: David Parker
>Cc: postgres general
>Subject: Re: SQL call to get pid of current connection
>
>On Thu, Jun 02, 2005 at 17:04:22 -0400,
>  David Parker <[EMAIL PROTECTED]> wrote:
>> Is there a function call that will return the pid of the postgres 
>> process associated with the current client connection?
>
>I thought I remembered seeing one, but I looked through the 
>development docs and didn't see a function or a GUC variable 
>with that information.
>It wouldn't be too hard to write a C or Perl function to get 
>that information.
>

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] SQL call to get pid of current connection

2005-06-02 Thread Bruno Wolff III
On Thu, Jun 02, 2005 at 17:04:22 -0400,
  David Parker <[EMAIL PROTECTED]> wrote:
> Is there a function call that will return the pid of the postgres
> process associated with the current client connection?

I thought I remembered seeing one, but I looked through the development
docs and didn't see a function or a GUC variable with that information.
It wouldn't be too hard to write a C or Perl function to get that
information.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] adding columns with defaults is not implemented

2005-06-02 Thread Marcelo

Hi,
Thanks for your reply, but I have some doubts.

Are you suggesting I create the column as an integer then change it to
serial? In Pgsql 7 you can't change a column type.

If I create the column as an int then add a default value, how can I make
this default value increment with each insert?

Thanks again for your help.
Marcelo


- Original Message - 
From: "Scott Marlowe" <[EMAIL PROTECTED]>
To: "Marcelo" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, June 02, 2005 4:43 PM
Subject: Re: [GENERAL] adding columns with defaults is not implemented


> On Thu, 2005-06-02 at 15:29, Marcelo wrote:
> > Hello,
> > Using Postgres 7.4, I am trying to perform an "alter table
> > temptable add column "myCol" serial"
> >
> > It gives the following msg
> > ERROR:  adding columns with defaults is not implemented
> >
> > You cannot add a column that is serial in a table which already has
> > data in postgres 7.
> >
> > Is there a way I can create a serial column on a table which already
> > has data? Or is the only solution upgrading to postgres 8 ?
>
> You can add a default after you add the column with a separate alter
> table statement...
>
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] SQL call to get pid of current connection

2005-06-02 Thread David Parker



Is there a function call that will return the pid of the postgres process
associated with the current client connection?

- DAP
--
David Parker
Tazz Networks
(401) 709-5130


Re: [GENERAL] adding columns with defaults is not implemented

2005-06-02 Thread Scott Marlowe
On Thu, 2005-06-02 at 15:29, Marcelo wrote:
> Hello,
> Using Postgres 7.4, I am trying to perform an "alter table
> temptable add column "myCol" serial"
>  
> It gives the following msg
> ERROR:  adding columns with defaults is not implemented
>  
> You cannot add a column that is serial in a table which already has
> data in postgres 7.
>  
> Is there a way I can create a serial column on a table which already
> has data? Or is the only solution upgrading to postgres 8 ?

You can add a default after you add the column with a separate alter
table statement...

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] adding columns with defaults is not implemented

2005-06-02 Thread Marcelo
Hello,
Using Postgres 7.4, I am trying to perform an "alter table temptable add
column "myCol" serial"

It gives the following msg
ERROR:  adding columns with defaults is not implemented

You cannot add a column that is serial in a table which already has data
in postgres 7.

Is there a way I can create a serial column on a table which already has
data? Or is the only solution upgrading to postgres 8?

Thanks


Re: [GENERAL] Limits of SQL

2005-06-02 Thread Sean Davis

A couple of links:

http://www.dbazine.com/ofinterest/oi-articles/celko24
http://www.dbmsmag.com/9603d06.html


On Jun 2, 2005, at 2:33 AM, Joachim Zobel wrote:

> Hi.
>
> I am looking for a way to write a SELECT that finds connectivity
> components of a graph or at least for one that given two nodes
> determines if there is a path between them. It seems that this is not
> possible, no matter what graph representation I choose. Which constructs
> from set theory are missing in SQL? Set of all subsets is one I am
> missing, or can it be done somehow?
>
> Is anybody else thinking about the limits of SQL? As often I am probably
> not the first to ask these questions. Any pointers?
>
> Sincerely,
> Joachim
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
>    http://archives.postgresql.org




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Limits of SQL

2005-06-02 Thread Oleg Bartunov

I'm not sure if it's relevant to your question
http://www-2.cs.cmu.edu/~cache/pg_graph/

pg_graph provides a way of handling graph-based data structures within 
the relational database PostgreSQL. In particular, it provides a convenient 
means of inserting graphs as BLOB-like objects in the RDBMS. 
Primarily, however, it provides a mechanism for indexing the graphs to 
provide efficient means to perform nearest-neighbor queries over 
collections of graphs.


On Thu, 2 Jun 2005, Joachim Zobel wrote:

> Hi.
>
> I am looking for a way to write a SELECT that finds connectivity
> components of a graph or at least for one that given two nodes
> determines if there is a path between them. It seems that this is not
> possible, no matter what graph representation I choose. Which constructs
> from set theory are missing in SQL? Set of all subsets is one I am
> missing, or can it be done somehow?
>
> Is anybody else thinking about the limits of SQL? As often I am probably
> not the first to ask these questions. Any pointers?
>
> Sincerely,
> Joachim
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
>   http://archives.postgresql.org



Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Limits of SQL

2005-06-02 Thread Ben
You mean, you want to be able to say something like:

select isConnected(a,b)

and get back a true/false, or maybe the path?

That seems quite doable in SQL, assuming you either store those results 
and simply use sql to retrieve them, or use a stored proc to compute the 
result each time.
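For readers reaching this thread via the archives: later PostgreSQL releases (8.4 and up) added recursive common table expressions, which express exactly this reachability test in plain SQL. A sketch, assuming a hypothetical edge table edges(src, dst):

```sql
-- Is node 42 reachable from node 1?  UNION (not UNION ALL) deduplicates,
-- so the recursion terminates even on cyclic graphs.
WITH RECURSIVE reachable(node) AS (
    SELECT 1            -- the start node
  UNION
    SELECT e.dst
    FROM edges e
    JOIN reachable r ON e.src = r.node
)
SELECT EXISTS (SELECT 1 FROM reachable WHERE node = 42) AS connected;
```

In 2005-era PostgreSQL the same effect needed a stored procedure looping to a fixed point, as Ben suggests.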

On Thu, 2 Jun 2005, Joachim Zobel wrote:

> Hi.
> 
> I am looking for a way to write a SELECT that finds connectivity
> components of a graph or at least for one that given two nodes
> determines if there is a path between them. It seems that this is not
> possible, no matter what graph representation I choose. Which constructs
> from set theory are missing in SQL? Set of all subsets is one I am
> missing, or can it be done somehow?
> 
> Is anybody else thinking about the limits of SQL? As often I am probably
> not the first to ask these questions. Any pointers?
> 
> Sincerely,
> Joachim
> 
> 
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
>http://archives.postgresql.org
> 



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 27

2005-06-02 Thread Bruno Wolff III
On Thu, Jun 02, 2005 at 18:00:17 +0100,
  Richard Huxton  wrote:
> 
> Certainly, but if the text in the logfile row is the same, then hashing 
> isn't going to make a blind bit of difference. That's the root of my 
> concern, and something only Dinesh knows.

Sure it is. Because the hash can be used in the primary key instead of
the error message, which should reduce the size of the key enough
that he can use a btree index.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Limits of SQL

2005-06-02 Thread Joachim Zobel
Hi.

I am looking for a way to write a SELECT that finds connectivity
components of a graph or at least for one that given two nodes
determines if there is a path between them. It seems that this is not
possible, no matter what graph representation I choose. Which constructs
from set theory are missing in SQL? Set of all subsets is one I am
missing, or can it be done somehow?

Is anybody else thinking about the limits of SQL? As often I am probably
not the first to ask these questions. Any pointers?

Sincerely,
Joachim



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Deleting orphaned records to establish Ref Integrity

2005-06-02 Thread Greg Stark
Tom Lane <[EMAIL PROTECTED]> writes:

> "Roman F" <[EMAIL PROTECTED]> writes:
> 
> > DELETE FROM child_table WHERE parentid NOT IN
> >   (SELECT parentid FROM parent_table)
> 
> Another idea is to try an outer join:
> 
>   SELECT child_table.parentid INTO tmp_table
>   FROM child_table LEFT JOIN parent_table
>ON (child_table.parentid = parent_table.parentid)
>   WHERE parent_table.parentid IS NULL;

There's also 

DELETE
  FROM child_table
 WHERE NOT EXISTS (SELECT 1
                     FROM parent_table
                    WHERE parent_id = child_table.parent_id);


This won't use anything as efficient as a hash join or merge join, but it is
at least capable of using index lookups for something basically equivalent to
a nested loop.



-- 
greg


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] writting a large store procedure

2005-06-02 Thread Russ Brown

Tony Caduto wrote:
> Sean,
> I am tooting my own horn here, but I would recommend PG Lightning Admin.
> It has an incredible function editor (and query editor) that is based on
> SynEdit (synedit.sourceforge.net), and has full code completion that
> includes all the Postgres built-in functions, exception names, types,
> etc., and it has schema and table completion as well.
>
> Just type the name of a schema and then a dot,
> i.e. public. and you will see all objects for that schema including
> functions, tables, domains, etc.
> Type the name of a table, i.e. public.mytable. and you will see all the
> fields for that table.
>
> We have a large internal project that has stored functions that are 1000
> lines long and of course we use PG Lightning Admin.
>
> You can check it out here http://www.amsoftwaredesign.com , and get a
> full 30 day demo.
>
> You will not be disappointed.
>
> The price right now is only 15.99 USD, which is an incredible deal.
> After the pre-release the price is going up to 29.99, so if you grab a
> copy now you can get it for the cost of lunch :-)
>
> Tony Caduto
> http://www.amsoftwaredesign.com
> Home of PG Lightning Admin for Postgresql 8.x



I was about to go and buy this and then saw that it's Windows-only. Oh well!

--

Russ.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Using pg_dump in a cron

2005-06-02 Thread Vivek Khera
On Jun 2, 2005, at 9:43 AM, [EMAIL PROTECTED] wrote:

> Your "cron" job on ALIS
> . $HOME/.profile; pg_dump alis -U postgres -f
> $AQSADIR/data/backup/alis_data.sql -a -d >/dev/null 2>&1
>
> produced the following output:
>
> stty: : No such device or address
>
> What's wrong with it ?

Your .profile executes the stty command.  Don't do that.  Why do you
need stuff from your .profile anyhow?

And why would you drop the output to /dev/null -- you'll never know if
your dump fails!  Why bother making one then?

Vivek Khera, Ph.D.
+1-301-869-4449 x806



Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 27

2005-06-02 Thread Richard Huxton

Bruno Wolff III wrote:
> On Thu, Jun 02, 2005 at 13:40:53 +0100,
>   Richard Huxton  wrote:
>> Actually, Dinesh didn't mention he was using this for the speed of
>> lookup. He'd defined the columns as being the PRIMARY KEY, presumably
>> because he feels they are/should be unique. Given that they are rows
>> from a logfile, I'm not convinced this is the case.
>
> Even for that case you could still use hashes. The odds of a false
> collision using SHA-1 are so small that some sort of disaster is more
> likely. Another possibility, if there are a fixed number of possible
> messages, is that they could be entered in their own table with a
> serial PK and the other table could reference the PK.

Certainly, but if the text in the logfile row is the same, then hashing
isn't going to make a blind bit of difference. That's the root of my
concern, and something only Dinesh knows.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 27

2005-06-02 Thread Bruno Wolff III
On Thu, Jun 02, 2005 at 13:40:53 +0100,
  Richard Huxton  wrote:
> 
> Actually, Dinesh didn't mention he was using this for the speed of 
> lookup. He'd defined the columns as being the PRIMARY KEY, presumably 
> because he feels they are/should be unique. Given that they are rows 
> from a logfile, I'm not convinced this is the case.

Even for that case you could still use hashes. The odds of a false collision
using SHA-1 are so small that some sort of disaster is more likely.
Another possibility, if there are a fixed number of possible messages,
is that they could be entered in their own table with a serial PK and
the other table could reference the PK.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql 8 abort with signal 10

2005-06-02 Thread Alexandre Biancalana
I changed from postgresql to mysql and everything now is great ;)

Same machine, same os, etc...

On 6/2/05, Roman Neuhauser <[EMAIL PROTECTED]> wrote:
> # [EMAIL PROTECTED] / 2005-05-03 17:56:53 -0300:
> > The FreeBSD is the last STABLE version. I can try to change some
> > hardware, I already changed memory, what can I try now ? the processor
> > ? motherboard ??
> 
> > On 5/3/05, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> > > On Tue, 2005-05-03 at 15:04, Alexandre Biancalana wrote:
> > > > Thank you for the detailed explanation Scott, they are very handy !!
> > > >
> > > > I reduced the shared_buffers to 32768, but the problem still occurs.
> > > >
> > > > Any other idea ??
> > >
> > > Yeah, I had a sneaking suspicion that shared_buffers wasn't causing the
> > > issue really.
> > >
> > > Sounds like either a hardware fault, or a BSD bug.  I'd check the BSD
> > > mailing lists for mention of said bug, and see if you can grab a spare
> > > drive and install the last stable version of FreeBSD 4.x and if that
> > > fixes the problem.
> > >
> > > If you decide to try linux, avoid the 2.6 kernel, it's still got
> > > issues...  2.4 is pretty stable.
> > >
> > > I really doubt it's a problem in postgresql itself though.
> 
> For the sake of archives, what was causing the SIGBUSes?
> 
> --
> How many Vietnam vets does it take to screw in a light bulb?
> You don't know, man.  You don't KNOW.
> Cause you weren't THERE. http://bash.org/?255991
>

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] SRFs returning records from a view

2005-06-02 Thread Mark Lubratt

AAARRRGGGHHH...  Now I understand you, Josh.  Sorry.

You are correct.

Thanks!
Mark

On Jun 1, 2005, at 4:39 PM, Joshua D. Drake wrote:


[EMAIL PROTECTED] wrote:

Hello!
I have a view that I'm putting into a report in my application.  I'd
like to get several blank lines returned by the view as well as the
legitimate data (to leave room in the report for manual entries).  I
thought I could make a SRF that would return the data from the view 
and

then spit out a number of blank records.
As a first step in getting my SRF feet wet, I tried:
CREATE OR REPLACE FUNCTION bluecard(int4)
  RETURNS SETOF view_bluecard AS
$BODY$declare
r   view_bluecard%rowtype;
begin
for r in select * from view_bluecard where job_id = jn loop
return next r;
end loop;
return;
end;$BODY$
  LANGUAGE 'plpgsql' VOLATILE;
where view_bluecard is the view that is already defined.  When I try
select bluecard(1130);


select * from bluecard(1130)?

Sincerely,

Joshua D. Drake

---(end of 
broadcast)---

TIP 7: don't forget to increase your free space map settings



--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/




---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Stats not getting updated....

2005-06-02 Thread Himanshu Baweja
Tom Lane <[EMAIL PROTECTED]> wrote:
> backends only ship stats to the collector at transaction commit.
> Or maybe it's at the end of processing a client command.  It's certainly
> not continuous.
 
Yup, that I already know,
but is there any way to make it do the update more frequently? Four updates
in 30 minutes makes the stats useless.

Or is there any other way to identify the usage of tables? What I am trying
to do is check each table's heap_blks_read from time to time so that I can
know how much I/O is being used for each table, and during which time. I am
sampling every 2 minutes.

Now, after I have identified which tables are getting used and when, we can
move them to different partitions for better performance.

Is there any way to know the table usage?

Thanks
Himanshu

Re: [GENERAL] Using pg_dump in a cron

2005-06-02 Thread Tom Lane
[EMAIL PROTECTED] writes:
> The dump execution is OK but my user gets a mail containing the =
> following
> message :
>  
> Your "cron" job on ALIS
> . $HOME/.profile; pg_dump alis -U postgres -f
> $AQSADIR/data/backup/alis_data.sql -a -d >/dev/null 2>&1
>  
> produced the following output:
>  
> stty: : No such device or address

> What's wrong with it ?

My bet is the .profile file contains a call of stty ... and of course
stdin is going to be pointing somewhere else than a terminal ...

regards, tom lane
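A common fix, sketched here: guard terminal-only setup in the profile so it stays safe to source from cron (the exact stty options are whatever the profile originally set; `istrip` is used as a stand-in):

```shell
# In ~/.profile: only run terminal setup when stdin is really a terminal,
# so sourcing the file from a cron job does not trip over stty.
if tty -s; then
    stty istrip
fi
```

With the guard in place, `. $HOME/.profile` from cron runs silently.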

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] [GENERAL] index row size 2728 exceeds btree maximum, 27

2005-06-02 Thread Jaime Casanova
On 6/2/05, Richard Huxton  wrote:
> KÖPFERL Robert wrote:
> > To me it seems that the definer of this table missed the concept index ...
> > or the concept database
> > One usually looks up data using a key, but if the whole row is the key, what
> > data shall be looked up.
> 
> > So short story long: Remove data from your index.  The data column seems
> > like the data to be looked up using the key
> > (scan_id, host_ip, port_num, plugin_id, severity) or even less.
> > Postgres is able to take several indices over distinct columns into account.
> Thus reducing the possible candidates to a handful.
> > So several indices are also an option
> 
> Actually, Dinesh didn't mention he was using this for the speed of
> lookup. He'd defined the columns as being the PRIMARY KEY, presumably
> because he feels they are/should be unique. Given that they are rows
> from a logfile, I'm not convinced this is the case.
> 

If this is a log he will need a timestamp field to be useful; making
that field part of the primary key and leaving the data out of the
primary key makes more sense to me.

-- 
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] how to Count without group by

2005-06-02 Thread Yudie Pg

I do not believe you can do this without a subquery - you are trying
to get 2 separate pieces of information from your data
   * some data about the record having MAX(rank) for each category
and
   * the count of records in each category
 
Hi, I guess I'll try to answer my own question, which ends up with creating a stored procedure.
Unless you have a direct-query idea.

This function cut the query time in half, which confirms my concern that Postgres's count aggregate function is always slower than I expect.
 
SQL:

CREATE TYPE product_type AS (
    sku int4,
    category int4,
    display_name varchar(100),
    rank int4,
    category_count int4
);

CREATE OR REPLACE FUNCTION get_toprank_product_category (text)
RETURNS SETOF product_type AS '
DECLARE
    kwd ALIAS FOR $1;
    mrow RECORD;
    retrow product_type;
    tempcount int4;
    prevcatnum int4;
    i int4;
BEGIN
    tempcount := 0;
    prevcatnum := 0;
    i := 0;
    FOR mrow IN SELECT * FROM product ORDER BY category, rank
    LOOP
        i := i + 1;
        IF prevcatnum != mrow.catnum OR i = 1 THEN
            IF i > 1 THEN
                RETURN NEXT retrow;
            END IF;
            retrow.catnum := mrow.catnum;
            retrow.corenum := mrow.corenum;
            retrow.mernum := mrow.mernum;
            retrow.mersku := mrow.mersku;
            tempcount := 1;
            retrow.catcount := tempcount;
            prevcatnum := mrow.catnum;
        ELSE
            tempcount := tempcount + 1;
            retrow.catcount := tempcount;
        END IF;
    END LOOP;
    RETURN NEXT retrow;
    RETURN;
END'
LANGUAGE 'plpgsql';
 


Re: [GENERAL] postgresql 8 abort with signal 10

2005-06-02 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-05-03 17:56:53 -0300:
> The FreeBSD is the last STABLE version. I can try to change some
> hardware, I already changed memory, what can I try now ? the processor
> ? motherboard ??

> On 5/3/05, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> > On Tue, 2005-05-03 at 15:04, Alexandre Biancalana wrote:
> > > Thank you for the detailed explanation Scott, they are very handy !!
> > >
> > > I reduced the shared_buffers to 32768, but the problem still occurs.
> > >
> > > Any other idea ??
> > 
> > Yeah, I had a sneaking suspicion that shared_buffers wasn't causing the
> > issue really.
> > 
> > Sounds like either a hardware fault, or a BSD bug.  I'd check the BSD
> > mailing lists for mention of said bug, and see if you can grab a spare
> > drive and install the last stable version of FreeBSD 4.x and if that
> > fixes the problem.
> > 
> > If you decide to try linux, avoid the 2.6 kernel, it's still got
> > issues...  2.4 is pretty stable.
> > 
> > I really doubt it's a problem in postgresql itself though.

For the sake of archives, what was causing the SIGBUSes?

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Stats not getting updated....

2005-06-02 Thread Tom Lane
Richard Huxton  writes:
> It is possible that during very busy periods the stats don't get 
> recorded - that's part of the design.

IIRC, backends only ship stats to the collector at transaction commit.
Or maybe it's at the end of processing a client command.  It's certainly
not continuous.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Old problem needs solution

2005-06-02 Thread Tom Lane
"Gerald D. Anderson" <[EMAIL PROTECTED]> writes:
> So, the plot thickens.  Is there somewhere I can go tweak a few bytes to 
> make it think it's 7.3?

No.  That's not what you want anyway; you want a late 7.4 build, just
one without the hierarchical-queries patch.  I dunno enough about Gentoo
to say how you get rid of a patch you don't want, but if it's anything
like RPMs, you can just dike the patch out of the specfile and rebuild.

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Automate Postgres Backup In windows

2005-06-02 Thread Shaun Clements



I am trying to automate a postgres database data and schema
dump through Windows.
Does anyone have any suggestions on the best way to do so?
Thanks

Kind Regards,
Shaun Clements


[GENERAL] Using pg_dump in a cron

2005-06-02 Thread Patrick . FICHE



Hi all,

I'm using PostgreSQL 8.0.2 on Solaris 10.
I would like to manage an automatic backup of my database.

Here is the line in the cron :

37 * * * * . $HOME/.profile; pg_dump alis -U postgres -f $AQSADIR/data/backup/alis_data.sql -a -d >/dev/null 2>&1

The dump execution is OK but my user gets a mail containing the following
message :

Your "cron" job on ALIS
. $HOME/.profile; pg_dump alis -U postgres -f $AQSADIR/data/backup/alis_data.sql -a -d >/dev/null 2>&1

produced the following output:

stty: : No such device or address

What's wrong with it ?

---
Patrick Fiche
email : [EMAIL PROTECTED]
tél : 01 69 29 36 18
---


Re: [GENERAL] writing a file using procedure

2005-06-02 Thread Michael Fuhr
On Thu, Jun 02, 2005 at 11:00:56AM +0100, Richard Huxton wrote:
> Nageshwar Rao wrote:
> >Can we write/read a file in postgresql procedure ?
> 
> Using one of the "untrusted" languages, yes. They can only be used by an 
> administrator but have access to the full capabilities of the underlying 
> procedural language.

To qualify the above: only database superusers can create functions
written in an untrusted language, but any user can call such functions.
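To illustrate the point (a sketch only, assuming the PL/PerlU untrusted language is installed; function and file names are invented):

```sql
-- Must be created by a database superuser because plperlu is untrusted,
-- but once created, any user with EXECUTE privilege may call it.
CREATE FUNCTION append_to_log(text) RETURNS void AS '
    open(my $fh, ">>", "/tmp/app.log") or die "cannot open: $!";
    print $fh $_[0], "\n";
    close($fh);
' LANGUAGE plperlu;
```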

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] [GENERAL] index row size 2728 exceeds btree maximum, 27

2005-06-02 Thread Richard Huxton

KÖPFERL Robert wrote:

To me it seems that the definer of this table missed the concept index ...
or the concept database
One usually looks up data using a key, but if the whole row is the key, what
data shall be looked up.



So short story long: Remove data from your index.  The data column seems
like the data to be looked up using the key 
(scan_id, host_ip, port_num, plugin_id, severity) or even less.

Postgres is able to take several indices over distinct columns into account.
Thus reducing the possible candidates to a handful.
So several indices are also an option


Actually, Dinesh didn't mention he was using this for the speed of 
lookup. He'd defined the columns as being the PRIMARY KEY, presumably 
because he feels they are/should be unique. Given that they are rows 
from a logfile, I'm not convinced this is the case.


--
  Richard Huxton
  Archonet Ltd


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Old problem needs solution

2005-06-02 Thread Gerald D. Anderson
Ok, well it looks like Richard owes Tom a lunch, or that's my best guess 
anyway.  I am running Gentoo; as for that particular patch, I'm 
unsure, but if it's installed by default then yes.  So, I've pulled down 
the earliest ebuild I can get which is 7.3.6.  Get it installed and go 
to start it:


FATAL:  The data directory was initialized by PostgreSQL version 7.4, 
which is not compatible with this version 7.3.6.


So, the plot thickens.  Is there somewhere I can go tweak a few bytes to 
make it think it's 7.3?  Also, is 7.3 going to be early enough?  This 
database has been around for years, and like I said, I'm not sure when 
this happened : /


Truly appreciate the help guys!

g



Alvaro Herrera wrote:


On Wed, Jun 01, 2005 at 10:24:25AM -0500, Gerald D. Anderson wrote:
 


Greetings!

At some point, I'm not even sure when, I apparently had a bad upgrade on 
one of my production databases.  I'm in a situation now where I've got 
my back up against a wall and not sure what to do.  The problem is, I'm 
receiving this message frequently:


did not find '}' at end of input node
   



Are you using Gentoo?  Are you using the hierarchical queries patch?
Were you using any of the above in the previous Postgres version?

If this is the case, you will have to get the old postmaster again, and
run it against your data directory, and use that to make the dump.  Then
reload it in the new directory.

The problem with the hierarchical queries patch (which Gentoo applies to
some of their builds) is that it doesn't include the necessary
catalog-version increment.  So people can use the same data directory
with or without the patch applied, which is a mistake because it causes
the errors you are seeing.

 



---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Determining when a row was inserted

2005-06-02 Thread Terry Lee Tucker
I don't think there is a way to do that. You'll have to create an audit table 
and a rule to update it or you'll have to add a column to the table and a 
trigger to update it.

On Thursday 02 June 2005 01:22 am, Eisenhut, Glenn saith:
> Folks - hi
>
> Is it possible to determine when a row was inserted into a table using the
> system catalogs or such. I have the situation where I need to find out when
> a user was added to a user table - the table was not setup with a date to
> track this.
>
> Thanks
> Glenn
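A sketch of the second option mentioned above, for rows going forward (table and column names are hypothetical; for tracking inserts alone, a column default suffices and no trigger is needed):

```sql
-- Add a creation-time column with a default; every row inserted from now
-- on records its insertion time automatically.
ALTER TABLE users ADD COLUMN created_at timestamp;
ALTER TABLE users ALTER COLUMN created_at SET DEFAULT now();

-- Rows inserted before the column existed stay NULL: their original
-- insertion time cannot be recovered.
```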

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Determining when a row was inserted

2005-06-02 Thread Bruno Wolff III
On Thu, Jun 02, 2005 at 06:22:01 +0100,
  "Eisenhut, Glenn" <[EMAIL PROTECTED]> wrote:
> 
> 
> Folks - hi
> 
> Is it possible to determine when a row was inserted into a table using the 
> system catalogs or such.
> I have the situation where I need to find out when a user was added to a user 
> table - the table was not setup with a date to track this.

No. If you want this information, you have to set it up yourself.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Bruno Wolff III
On Thu, Jun 02, 2005 at 17:48:47 +0530,
  Dinesh Pandey <[EMAIL PROTECTED]> wrote:
> Yes I am storing some "error messages" in data column, and the PK columns
> are party of search criteria.

If you need to be able to search based on the entire stored error message,
than you might try adding an indexed hash column to the table and using that
to speed up searches. You can still compare the full string in case you
have a hash collision, but those should be very rare.
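Bruno's indexed-hash idea could look like this (the table name is guessed from the index name in the thread; md5() is used here because it is built in, while SHA-1 would need the pgcrypto contrib module):

```sql
-- Keep a fixed-width digest of the long text and index that instead,
-- so the btree entry stays far below the size limit.
ALTER TABLE scanned_port_info ADD COLUMN data_md5 char(32);
UPDATE scanned_port_info SET data_md5 = md5(data);
CREATE INDEX idx_scanned_port_info_data_md5
    ON scanned_port_info (data_md5);

-- Search by hash first, then recheck the full text to guard against
-- the (very rare) collision.
SELECT *
  FROM scanned_port_info
 WHERE data_md5 = md5('some error message')
   AND data = 'some error message';
```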

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Determining when a row was inserted

2005-06-02 Thread Eisenhut, Glenn


Folks - hi

Is it possible to determine when a row was inserted into a table using the 
system catalogs or such.
I have the situation where I need to find out when a user was added to a user 
table - the table was not setup with a date to track this.

Thanks
Glenn






***
The information in this email is confidential and may be legally privileged.  
Access to this email by anyone other than the intended addressee is 
unauthorized.  If you are not the intended recipient of this message, any 
review, disclosure, copying, distribution, retention, or any action taken or 
omitted to be taken in reliance on it is prohibited and may be unlawful.  If 
you are not the intended recipient, please reply to or forward a copy of this 
message to the sender and delete the message, any attachments, and any copies 
thereof from your system.
***

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] how to Count without group by

2005-06-02 Thread Edmund Bacon
[EMAIL PROTECTED] (Yudie Pg) writes:

> Hello,
> I have a table, structure like this:
>  create table product(
>  sku int4 not null,
>  category int4 null,
>  display_name varchar(100) null,
>  rank int4 null
> )
>  let say example data:
> sku, category, display_name
> ===
> 10001, 5, postgresql, 132
> 10002, 5, mysql, 243
> 10003, 5, oracle, 323
> 10006, 7, photoshop, 53 
> 10007, 7, flash mx, 88
> 10008, 9, Windows XP, 44
>  10008, 9, Linux, 74
>  Expected query result:
>  sku, category, display_name, category_count
> 
> 10001, 5, postgresql, 3
>  10006, 7, photoshop, 2
>  10008, 9, Windows XP, 2
>  The idea is getting the highest ranking in each product category and COUNT 
> how many products are in the category with a SINGLE query.
>  the first 3 columns can be done with select distinct on (category) ... 
> order by category, rank desc but it still missing the category_count. I wish 
> no subquery needed for having simplest query plan.
>   Thank you. 
>   Yudie G.

I do not believe you can do this without a subquery - you are trying
to get 2 separate pieces of information from your data
   * some data about the record having MAX(rank) for each category
and
   * the count of records in each category

Note, however that you can get MAX(rank) and COUNT(category) in one
sequential pass of the data: e.g
 SELECT category, MAX(rank), COUNT(category) FROM product GROUP BY category;

Joining this with the original table is not too difficult:

SELECT sku, category, display_name, category_count 
  FROM  product
  JOIN (SELECT category, MAX(rank) AS rank, COUNT(category) AS category_count
   FROM product 
   GROUP BY category) subq
USING(category, rank)
 ORDER BY sku;

Depending on what your data looks like, you might improve things by
having an index on category, and perhaps on (category, rank).

Note, however, that there may be a problem with this query: if you have
more than one product with the same rank in the same category, you may get
more than one record for that category.  Apply DISTINCT ON as
necessary.
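The DISTINCT ON adjustment mentioned above, as a sketch:

```sql
-- DISTINCT ON keeps the first row per category according to ORDER BY,
-- so ties on rank collapse to a single (lowest-sku) row.
SELECT DISTINCT ON (category)
       sku, category, display_name, category_count
  FROM product
  JOIN (SELECT category, MAX(rank) AS rank, COUNT(category) AS category_count
          FROM product
         GROUP BY category) subq
 USING (category, rank)
 ORDER BY category, sku;
```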

-- 
Remove -42 for email

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] interval integer comparison

2005-06-02 Thread Bruno Wolff III
On Thu, Jun 02, 2005 at 13:26:32 +0200,
  Havasvölgyi Ottó <[EMAIL PROTECTED]> wrote:
> 
> I tried to simulate this unexpected result, but with no success. Here in
> Hungary we had daylight saving this year on the 27th of March
> (http://webexhibits.org/daylightsaving/b.html). So I tried these:
> 
> select '2005-03-28'::date - '1 day'::interval;
> select '2005-03-28'::timestamp - '1 day'::interval;
> select '2005-03-28'::date - '24 hour'::interval;
> select '2005-03-28'::timestamp - '24 hour'::interval;
> 
> Each of the results were the same: 2005-03-27 00:00:00
> 
> I tried with a larger interval too but I didn't experience any shift in
> hours. By the way, I use PG 8.0.3 on WinXP SP1, I just experiment with
> Postgres.

I double checked and the promotion is to TIMESTAMP WITHOUT TIME ZONE,
so daylight savings won't in fact be a problem.

However, subtracting an integer will avoid the conversion and should
run slightly faster.
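The two forms side by side, illustrating the point above:

```sql
-- Interval arithmetic promotes to timestamp without time zone:
SELECT current_date + '21 days'::interval;  -- yields a timestamp

-- Integer arithmetic stays in the date domain and skips the conversion:
SELECT current_date + 21;                   -- yields a date
```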

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] writing a large stored procedure

2005-06-02 Thread Tony Caduto

Sean,
I am tooting my own horn here, but I would recommend PG Lightning Admin.
It has an incredible function editor (and query editor) that is based on 
SynEdit (synedit.sourceforge.net), and has full code completion that includes
all the Postgres built-in functions, exception names, types, etc., and 
it has schema and table completion as well.

Just type the name of a schema and then a dot,
i.e. public. and you will see all objects for that schema, including 
functions, tables, domains, etc.
Type the name of a table, i.e. public.mytable. and you will see all the 
fields for that table.


We have a large internal project that has stored functions that are 1000 
lines long, and of course we use PG Lightning Admin.


You can check it out here: http://www.amsoftwaredesign.com , and get a 
full 30 day demo.

You will not be disappointed.

The price right now is only 15.99 USD, which is an incredible deal.  
After the pre-release the price is going up to 29.99, so if you grab a 
copy now you can get it for the cost of lunch :-)


Tony Caduto
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql 8.x


Sean Davis wrote:

I have to recommend pgEdit (nearly free and does have a nearly 
fully-functional test version).


Sean

On Jun 2, 2005, at 3:57 AM, Rodríguez Rodríguez, Pere wrote:


Hello,

I'm writing a large stored procedure (more than 700 lines) and I have 
many problems debugging it. How can I debug it easily?


I use pgAdmin; is there another editor (free software) that permits 
writing large stored procedures more easily?


Thanks in advance.

pere




---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org






---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] index row size 2728 exceeds btree maximum, 27

2005-06-02 Thread Bruno Wolff III
On Thu, Jun 02, 2005 at 14:08:54 +0200,
  KÖPFERL Robert <[EMAIL PROTECTED]> wrote:
> To me it seems that the definer of this table missed the concept index ...
> or the concept database
> One usually looks up data using a key, but if the whole row is the key, what
> data shall be looked up.

You sometimes do want to make a whole row a key to avoid duplicate rows.
A common case is when you use a table to connect two other tables with
a many-to-many relation.

It would be rare to want to do that with large text values though.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Dinesh Pandey

Yes I am storing some "error messages" in the data column, and the PK
columns are part of the search criteria.

Thanks
Dinesh Pandey

-Original Message-
From: Ramakrishnan Muralidharan [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 02, 2005 4:44 PM
To: [EMAIL PROTECTED]; pgsql-general@postgresql.org; PostgreSQL
Subject: RE: [SQL] index row size 2728 exceeds btree maximum, 2713

Hi,

It is not advisable to add a variable-length data field to the index key,
since it is very difficult to predict the size of the field, which may vary
from record to record.

Did you include this field for full-text search on the data field?

Regards,
R.Muralidharan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dinesh Pandey
Sent: Thursday, January 01, 2004 3:14 PM
To: Ramakrishnan Muralidharan; pgsql-general@postgresql.org; 'PostgreSQL'
Subject: Re: [SQL] index row size 2728 exceeds btree maximum, 2713

Hi,

One of the columns in the primary key is of type "TEXT". I am able to
insert with small data, but for around 3000 characters it's failing. How to
handle that?

Thanks
Dinesh Pandey

-Original Message-
From: Ramakrishnan Muralidharan [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 02, 2005 3:11 PM
To: [EMAIL PROTECTED]; pgsql-general@postgresql.org; PostgreSQL
Subject: RE: [SQL] index row size 2728 exceeds btree maximum, 2713

Hi,

The issue looks like your index width exceeds the maximum width of the
index key limit. Please review the keys used in the index.

Regards,
R.Muralidharan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dinesh Pandey
Sent: Thursday, June 02, 2005 12:35 PM
To: pgsql-general@postgresql.org; 'PostgreSQL'
Subject: [SQL] index row size 2728 exceeds btree maximum, 2713

TABLE

 Column     | Type
------------+------------------------
 scan_id    | bigint
 host_ip    | character varying(15)
 port_num   | integer
 plugin_id  | integer
 severity   | character varying(50)
 data       | text

Indexes:
    "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
    plugin_id, severity, data)

On inserting a record I am getting this error: "index row size 2728 exceeds
btree maximum, 2713"

How to solve this problem?

Re: [SQL] [GENERAL] index row size 2728 exceeds btree maximum, 27

2005-06-02 Thread KÖPFERL Robert
To me it seems that the definer of this table missed the concept index ...
or the concept database
One usually looks up data using a key, but if the whole row is the key, what
data shall be looked up.

So short story long: Remove data from your index.  The data column seems
like the data to be looked up using the key 
(scan_id, host_ip, port_num, plugin_id, severity) or even less.
Postgres is able to take several indices over distinct columns into account.
Thus reducing the possible candidates to a handful.
So several indices are also an option

|-Original Message-
|From: Dinesh Pandey [mailto:[EMAIL PROTECTED]
|Sent: Donnerstag, 01. Jänner 2004 11:09
|To: 'Richard Huxton'
|Cc: pgsql-general@postgresql.org; 'PostgreSQL'
|Subject: Re: [SQL] [GENERAL] index row size 2728 exceeds btree maximum,
|2713
|
|
|I am inserting some log messages in the column "data". (Basically I am
|inserting records from reading an xml file)
|
|In the PRIMARY KEY, btree (scan_id, host_ip, port_num, 
|plugin_id, severity,
|data) data is of type TEXT and can contain long string values.
|
|The question is how to remove this error "index row size 2728 
|exceeds btree
|maximum, 2713" by increasing the btree size?
|
|The big problem is "I can not add any additional column in this table."
|
|Thanks
|Dinesh Pandey
|
|-Original Message-
|From: [EMAIL PROTECTED]
|[mailto:[EMAIL PROTECTED] On Behalf Of Richard Huxton
|Sent: Thursday, June 02, 2005 3:29 PM
|To: [EMAIL PROTECTED]
|Cc: pgsql-general@postgresql.org; 'PostgreSQL'
|Subject: Re: [GENERAL] index row size 2728 exceeds btree maximum, 2713
|
|Dinesh Pandey wrote:
|> ---+---+---
|>   Column| Type   
|> ---+---+---
|>  scan_id| bigint
|>  host_ip| character varying(15) 
|>  port_num | integer   
|>  plugin_id  | integer   
|>  severity   | character varying(50) 
|>  data   | text  
|> 
|> Indexes:
|> "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, 
|host_ip, port_num,
|> plugin_id, severity, data)
|> 
|> On inserting record I am getting this error "index row size 
|2728 exceeds
|> btree maximum, 2713"
|
|Well - the error message is clear enough. The question is, what to do.
|
|Without knowing what the table "means", it's difficult to say what the 
|primary-key should be, but it seems unlikely to include an 
|unlimited-length text-field called "data".
|
|If the data itself doesn't offer any suitable candidate keys (as can 
|well be the case) then common practice is to generate a unique number 
|and use that as an ID - in PostgreSQL's case by use of the SERIAL 
|pseudo-type.
|
|Does that help?
|-- 
|   Richard Huxton
|   Archonet Ltd
|
|---(end of 
|broadcast)---
|TIP 7: don't forget to increase your free space map settings
|
|
|
|---(end of 
|broadcast)---
|TIP 2: you can get off all lists at once with the unregister command
|(send "unregister YourEmailAddressHere" to 
|[EMAIL PROTECTED])
|

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] One Sequence for all tables or one Sequence for each

2005-06-02 Thread Janning Vygen
Am Donnerstag, 2. Juni 2005 12:03 schrieb Martijn van Oosterhout:
> On Thu, Jun 02, 2005 at 12:58:33PM +0300, Kaloyan Iliev Iliev wrote:
> > Hi,
> >
> > I suppose the paralel work will be a problem if you are using one
> > sequence for all tables. 
>
> I don't know about this. Sequences are designed to be very efficient,
> they don't rollback and can be cached by backends.
>
> In several of the databases I setup, I sometimes arranged for sequences
> to start at different points so when you setup a foreign key there was
> no chance you linked it to the wrong table. This especially in cases
> where there might be confusion about which table links where.
>
> Using one serial for everything does this even better. As for
> performance, I think disk I/O is going to be an issue before getting
> sequence numbers will be...

I guess I will use one sequence for all tables if there are no drawbacks. 
BTW: OIDs use the same concept, don't they? And for me it makes sense 
to use a sequence only for getting a unique identifier and nothing else; even 
better if this identifier is unique among all tables. 

Thanks a lot for your opinions!

regards
janning


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Automate Postgres Backup In windows

2005-06-02 Thread Richard Huxton

Shaun Clements wrote:

Apologies if this post is a repeat.
 
I am trying to automate a postgres database data and schema dump through

windows.
Does anyone have any suggestions on best way to do so.
Thanks


Call pg_dump via a batch-file and run it from the Windows scheduler? Or 
am I missing something?
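A minimal sketch of that approach (the paths, database name, and password handling are assumptions, not from the thread):

```bat
@echo off
rem backup.bat - run daily from the Windows Task Scheduler.
rem PGPASSWORD avoids an interactive prompt; a pgpass file is the
rem more secure alternative.
set PGPASSWORD=secret
"C:\Program Files\PostgreSQL\8.0\bin\pg_dump.exe" -U postgres ^
    -f "C:\backup\mydb.sql" mydb
```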


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] interval integer comparison

2005-06-02 Thread Havasvölgyi Ottó
Hi,

- Original Message - 
From: "Bruno Wolff III" <[EMAIL PROTECTED]>
To: "Havasvölgyi Ottó" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, June 02, 2005 3:53 AM
Subject: Re: [GENERAL] interval integer comparison


> On Thu, Jun 02, 2005 at 01:54:12 +0200,
>   Havasvölgyi Ottó <[EMAIL PROTECTED]> wrote:
> > Thank you Tom.
> >
> > It was a bit confusing because my WHERE clause looked something like
this:
> >
> > ... WHERE date_field - current_date < '21 days'::interval;
> >
> > And then I got records, whose with date_field's year was 2010. :-o
> > Now I am using this formula:
> >
> > ... WHERE date_field < current_date + '21 days'::interval;
>
> If date_field and current_date are realy of type date (and not say
> some timestamp varient), then you should use:
> ... WHERE date_field < current_date + 21
>
> What you used above may have unexpected results near a daylight savings
> time change as the data will be promoted to timestamps to do the
> comparison.
>
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend
>
>

I tried to simulate this unexpected result, but with no success. Here in
Hungary we had daylight saving this year on the 27th of March
(http://webexhibits.org/daylightsaving/b.html). So I tried these:

select '2005-03-28'::date - '1 day'::interval;
select '2005-03-28'::timestamp - '1 day'::interval;
select '2005-03-28'::date - '24 hour'::interval;
select '2005-03-28'::timestamp - '24 hour'::interval;

Each of the results was the same: 2005-03-27 00:00:00

I tried with a larger interval too but I didn't experience any shift in
hours. By the way, I use PG 8.0.3 on WinXP SP1, I just experiment with
Postgres.
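
A sketch of the two formulations side by side (PostgreSQL; the behaviour noted in the comments is also why the test above shows no shift):

```sql
-- Pure date arithmetic: date + integer yields another date; no time zone
-- or DST rules are involved at all.
SELECT current_date + 21;

-- date + interval promotes the date to timestamp WITHOUT time zone.
-- That type also ignores DST, which is why the test below shows no shift;
-- only "timestamp with time zone" arithmetic is affected by a DST change.
SELECT '2005-03-28'::date - '1 day'::interval;   -- 2005-03-27 00:00:00
```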

Best Regards,
Otto



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Ramakrishnan Muralidharan



Hi,

It is not advisable to add a variable-length data field to the index key, since
it is very difficult to predict the size of the field, which may vary from
record to record.

Have you included this field for full-text search on the data field?

Regards,
R.Muralidharan

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dinesh Pandey
  Sent: Thursday, January 01, 2004 3:14 PM
  To: Ramakrishnan Muralidharan; pgsql-general@postgresql.org; 'PostgreSQL'
  Subject: Re: [SQL] index row size 2728 exceeds btree maximum, 2713

  Hi,

  One of the columns in primary key is of type "TEXT". I am able to insert
  with small data, but for around 3000 characters it's failing. How to
  handle that?

  Thanks
  Dinesh Pandey

  From: Ramakrishnan Muralidharan [mailto:[EMAIL PROTECTED]]
  Sent: Thursday, June 02, 2005 3:11 PM
  To: [EMAIL PROTECTED]; pgsql-general@postgresql.org; PostgreSQL
  Subject: RE: [SQL] index row size 2728 exceeds btree maximum, 2713

  Hi,

  The issue looks like your index width exceeds the maximum width of the
  index key limit. Please review the keys used in the index.

  Regards,
  R.Muralidharan

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dinesh Pandey
  Sent: Thursday, June 02, 2005 12:35 PM
  To: pgsql-general@postgresql.org; 'PostgreSQL'
  Subject: [SQL] index row size 2728 exceeds btree maximum, 2713

  TABLE

    Column    | Type
  ------------+-----------------------
   scan_id    | bigint
   host_ip    | character varying(15)
   port_num   | integer
   plugin_id  | integer
   severity   | character varying(50)
   data       | text

  Indexes:
      "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
  plugin_id, severity, data)

  On inserting record I am getting this error "index row size 2728 exceeds
  btree maximum, 2713"

  How to solve this problem?



[GENERAL] Automate Postgres Backup In windows

2005-06-02 Thread Shaun Clements



Apologies if this post is a repeat.

I am trying to automate a postgres database data and schema dump through
windows.
Does anyone have any suggestions on best way to do so.
Thanks

Kind Regards,
Shaun Clements


Re: [GENERAL] Stats not getting updated....

2005-06-02 Thread Richard Huxton

Himanshu Baweja wrote:

i also noted one more thing... the stats are getting updated only
when i do vacuum is that wht is supposed to happen... or
something is wrong


Hmm - there's something strange going on certainly. Try a couple of 
simple queries on a table and see whether the stats get updated.


It is possible that during very busy periods the stats don't get 
recorded - that's part of the design.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Stats not getting updated....

2005-06-02 Thread Himanshu Baweja
i also noted one more thing... the stats are getting updated only when i do vacuum is that wht is supposed to happen... or something is wrong
 
thx
HimanshuHimanshu Baweja <[EMAIL PROTECTED]> wrote:

i am trying to identify which tables should be moved to diff drives i first identified the most used tables... by looking at the data in pg_statio_user_tables.
and then i did sampling of the io-usage of these tables every 2 mins... to identify which are getting used when
 
but the problem is my stats are not getting updated they first get updated around 20% done... and then once more at around 70% done why is the stats not getting updated more frequently.
 
the test application i am using has a 33 min test with lots of read and write
 
thx
Himanshu

Re: [SQL] [GENERAL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Richard Huxton

Dinesh Pandey wrote:

I am inserting some log messages in the column "data". (Basically I am
inserting records from reading an xml file)

In the PRIMARY KEY, btree (scan_id, host_ip, port_num, plugin_id, severity,
data) data is of type TEXT and can contain long string values.


I'm still not convinced that "data" makes a sensible part of the primary 
key. Can you give an example of "data" and explain why the whole value 
determines unique-ness?



The question is how to remove this error "index row size 2728 exceeds btree
maximum, 2713" by increasing the btree size?

The big problem is "I can not add any additional column in this table."


Why not?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Martijn van Oosterhout
On Thu, Jan 01, 2004 at 03:13:48PM +0530, Dinesh Pandey wrote:
> One of the columns in primary key is of type "TEXT". I am able to insert
> with small data, but for around 3000 characters it's failing. How to handle
> that?

Easy: btree indexes can't handle data with more than 2713 bytes.

You need to decide if having a single index on all your columns is
actually what you want. Depending on your queries it may not even be
used.

Hope this helps,

> From: Ramakrishnan Muralidharan
> [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, June 02, 2005 3:11 PM
> To: [EMAIL PROTECTED]; pgsql-general@postgresql.org; PostgreSQL
> Subject: RE: [SQL] index row size 2728 exceeds btree maximum, 2713
> 
>  
> 
>  Hi,
> 
>  
> 
>  The issue looks like your Index width exceeds the maximum width
> of the index key limit, Please review the keys used in the index.
> 
>  
> 
> Regards,
> 
> R.Muralidharan
> 
>  
> 
>  
> 
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Dinesh Pandey
> Sent: Thursday, June 02, 2005 12:35 PM
> To: pgsql-general@postgresql.org; 'PostgreSQL'
> Subject: [SQL] index row size 2728 exceeds btree maximum, 2713
> 
> TABLE 
> 
> ---+---+---
> 
>   Column| Type   
> 
> ---+---+---
> 
>  scan_id| bigint
> 
>  host_ip| character varying(15) 
> 
>  port_num | integer   
> 
>  plugin_id  | integer   
> 
>  severity   | character varying(50) 
> 
>  data   | text  
> 
> Indexes:
> 
> "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
> plugin_id, severity, data)
> 
>  
> 
>  
> 
> On inserting record I am getting this error "index row size 2728 exceeds
> btree maximum, 2713"
> 
>  
> 
> How to solve this problem?
> 
>  
> 
>  
> 
>  
> 

-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Dinesh Pandey
I am inserting some log messages in the column "data". (Basically I am
inserting records from reading an xml file)

In the PRIMARY KEY, btree (scan_id, host_ip, port_num, plugin_id, severity,
data) data is of type TEXT and can contain long string values.

The question is how to remove this error "index row size 2728 exceeds btree
maximum, 2713" by increasing the btree size?

The big problem is "I can not add any additional column in this table."

Thanks
Dinesh Pandey

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Richard Huxton
Sent: Thursday, June 02, 2005 3:29 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org; 'PostgreSQL'
Subject: Re: [GENERAL] index row size 2728 exceeds btree maximum, 2713

Dinesh Pandey wrote:
> ---+---+---
>   Column| Type   
> ---+---+---
>  scan_id| bigint
>  host_ip| character varying(15) 
>  port_num | integer   
>  plugin_id  | integer   
>  severity   | character varying(50) 
>  data   | text  
> 
> Indexes:
> "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
> plugin_id, severity, data)
> 
> On inserting record I am getting this error "index row size 2728 exceeds
> btree maximum, 2713"

Well - the error message is clear enough. The question is, what to do.

Without knowing what the table "means", it's difficult to say what the 
primary-key should be, but it seems unlikely to include an 
unlimited-length text-field called "data".

If the data itself doesn't offer any suitable candidate keys (as can 
well be the case) then common practice is to generate a unique number 
and use that as an ID - in PostgreSQL's case by use of the SERIAL 
pseudo-type.

Does that help?
-- 
   Richard Huxton
   Archonet Ltd

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] One Sequence for all tables or one Sequence for each

2005-06-02 Thread Martijn van Oosterhout
On Thu, Jun 02, 2005 at 12:58:33PM +0300, Kaloyan Iliev Iliev wrote:
> Hi,
> 
> I suppose the paralel work will be a problem if you are using one 
> sequence for all tables. If you insert a large amount of rows in 
> different tables there will be great slowdown because your sequence is 
> the bottle neck of your database. All the inserts must read from it one 
> by one. If you have many sequences (one for each table PK) every insert 
> in a different table will use different sequence and this will improve 
> performance.

I don't know about this. Sequences are designed to be very efficient,
they don't rollback and can be cached by backends.

In several of the databases I setup, I sometimes arranged for sequences
to start at different points so when you setup a foreign key there was
no chance you linked it to the wrong table. This especially in cases
where there might be confusion about which table links where.

Using one serial for everything does this even better. As for
performance, I think disk I/O is going to be an issue before getting
sequence numbers will be...
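
Both points can be shown in plain SQL (a sketch; the sequence names and offsets are illustrative):

```sql
-- One shared sequence with per-backend caching: each backend grabs 50
-- values at a time, so concurrent inserts rarely contend on the sequence.
CREATE SEQUENCE dbseq CACHE 50;

-- Alternatively, per-table sequences started at distant offsets, so a key
-- accidentally used against the wrong table is caught immediately.
CREATE SEQUENCE orders_id_seq START 1;
CREATE SEQUENCE customers_id_seq START 1000000;
```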
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] writing a file using procedure

2005-06-02 Thread Tino Wildenhain
Am Donnerstag, den 02.06.2005, 15:06 +0530 schrieb Nageshwar Rao:
> Can we write/read a file in postgresql procedure ?

Yes.
-- 
Tino Wildenhain <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] writing a file using procedure

2005-06-02 Thread Richard Huxton

Nageshwar Rao wrote:

Can we write/read a file in postgresql procedure ?


Using one of the "untrusted" languages, yes. They can only be used by an 
administrator but have access to the full capabilities of the underlying 
procedural language.


Of course, you will read and write files as the user PostgreSQL runs as, 
which opens up a whole world of possibilities regarding security 
breaches and database destruction.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] writing a file using procedure

2005-06-02 Thread Peter Eisentraut
Am Donnerstag, 2. Juni 2005 11:36 schrieb Nageshwar Rao:
> Can we write/read a file in postgresql procedure ?

That depends on the language you use.  PL/PerlU and PL/sh can do it, for 
example, whereas plain PL/Perl or PL/pgSQL cannot.
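
As a sketch of the PL/PerlU case (assumes the plperlu language is installed and you are a superuser; the function name and log path are made up for illustration):

```sql
CREATE OR REPLACE FUNCTION append_log(msg text) RETURNS void AS $$
    # Runs as the server's OS user, so the file must be writable by it.
    open(my $fh, '>>', '/tmp/app.log') or die "cannot open log file: $!";
    print $fh $_[0], "\n";
    close $fh;
$$ LANGUAGE plperlu;

SELECT append_log('hello from inside the database');
```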

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Richard Huxton

Dinesh Pandey wrote:

---+---+---
  Column| Type   
---+---+---
 scan_id| bigint
 host_ip| character varying(15) 
 port_num | integer   
 plugin_id  | integer   
 severity   | character varying(50) 
 data   | text  


Indexes:
"pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
plugin_id, severity, data)

On inserting record I am getting this error "index row size 2728 exceeds
btree maximum, 2713"


Well - the error message is clear enough. The question is, what to do.

Without knowing what the table "means", it's difficult to say what the 
primary-key should be, but it seems unlikely to include an 
unlimited-length text-field called "data".


If the data itself doesn't offer any suitable candidate keys (as can 
well be the case) then common practice is to generate a unique number 
and use that as an ID - in PostgreSQL's case by use of the SERIAL 
pseudo-type.


Does that help?
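
A sketch of that approach for the table in question. The surrogate id column is the SERIAL suggestion above; indexing an md5() hash of "data", so the old uniqueness guarantee survives without putting the oversized text column into the btree, is a common workaround I am adding here, not something stated in the thread:

```sql
CREATE TABLE scanned_port_info (
    id        serial PRIMARY KEY,   -- generated surrogate key
    scan_id   bigint,
    host_ip   varchar(15),
    port_num  integer,
    plugin_id integer,
    severity  varchar(50),
    data      text
);

-- Enforce the previous uniqueness rule via a fixed-size hash of "data"
-- instead of the full text value, keeping the index entry small.
CREATE UNIQUE INDEX uq_scanned_port_info
    ON scanned_port_info (scan_id, host_ip, port_num, plugin_id,
                          severity, md5(data));
```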
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] writting a large store procedure

2005-06-02 Thread Sean Davis
I have to recommend pgEdit (nearly free and does have a nearly 
fully-functional test version).


Sean

On Jun 2, 2005, at 3:57 AM, Rodríguez Rodríguez, Pere wrote:


Hello,

I'm writing a large stored procedure (more than 700 lines) and I have many
problems debugging it. How can I debug it easily?

I use pgAdmin, is there another editor (free software) that permits writing
large stored procedures more easily?

Thanks in advance.

pere



---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] One Sequence for all tables or one Sequence for each

2005-06-02 Thread Kaloyan Iliev Iliev

Hi,

I suppose the paralel work will be a problem if you are using one 
sequence for all tables. If you insert a large amount of rows in 
different tables there will be great slowdown because your sequence is 
the bottle neck of your database. All the inserts must read from it one 
by one. If you have many sequences (one for each table PK) every insert 
in a different table will use different sequence and this will improve 
performance.


Kaloyan Iliev

Janning Vygen wrote:


Hi,

if you define a SERIAL column postgresql's default is to generate a sequence 
for each SERIAL column (table_column_seq). But you can use one sequence for 
the whole database like this:


CREATE SEQUENCE dbseq;
CREATE TABLE one (
 id int4 NOT NULL DEFAULT nextval('dbseq')
);
CREATE TABLE two (
 id int4 NOT NULL DEFAULT nextval('dbseq')
);

One drawback: You reach the internal end of a sequence faster if you use your 
sequence for all tables. But this can be avoided if you use int8 datatype. 

Are there other drawbacks/benefits using one Sequence for each table or one 
sequence for all tables?


kind regards,
janning

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


 



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Dinesh Pandey

Hi,

One of the columns in primary key is of type "TEXT". I am able to insert with
small data, but for around 3000 characters it's failing. How to handle that?

Thanks
Dinesh Pandey

From: Ramakrishnan Muralidharan [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 02, 2005 3:11 PM
To: [EMAIL PROTECTED]; pgsql-general@postgresql.org; PostgreSQL
Subject: RE: [SQL] index row size 2728 exceeds btree maximum, 2713

Hi,

The issue looks like your index width exceeds the maximum width of the index
key limit. Please review the keys used in the index.

Regards,
R.Muralidharan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dinesh Pandey
Sent: Thursday, June 02, 2005 12:35 PM
To: pgsql-general@postgresql.org; 'PostgreSQL'
Subject: [SQL] index row size 2728 exceeds btree maximum, 2713

TABLE

  Column    | Type
------------+-----------------------
 scan_id    | bigint
 host_ip    | character varying(15)
 port_num   | integer
 plugin_id  | integer
 severity   | character varying(50)
 data       | text

Indexes:
    "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
plugin_id, severity, data)

On inserting record I am getting this error "index row size 2728 exceeds
btree maximum, 2713"

How to solve this problem?

[GENERAL] writing a file using procedure

2005-06-02 Thread Nageshwar Rao

Can we write/read a file in postgresql procedure ?

Re: [GENERAL] [SQL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Ramakrishnan Muralidharan



Hi,

The issue looks like your index width exceeds the maximum width of the index
key limit. Please review the keys used in the index.

Regards,
R.Muralidharan

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dinesh Pandey
  Sent: Thursday, June 02, 2005 12:35 PM
  To: pgsql-general@postgresql.org; 'PostgreSQL'
  Subject: [SQL] index row size 2728 exceeds btree maximum, 2713

  TABLE

    Column    | Type
  ------------+-----------------------
   scan_id    | bigint
   host_ip    | character varying(15)
   port_num   | integer
   plugin_id  | integer
   severity   | character varying(50)
   data       | text

  Indexes:
      "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
  plugin_id, severity, data)

  On inserting record I am getting this error "index row size 2728 exceeds
  btree maximum, 2713"

  How to solve this problem?


[GENERAL] Stats not getting updated....

2005-06-02 Thread Himanshu Baweja
i am trying to identify which tables should be moved to diff drives i first identified the most used tables... by looking at the data in pg_statio_user_tables.
and then i did sampling of the io-usage of these tables every 2 mins... to identify which are getting used when
 
but the problem is my stats are not getting updated they first get updated around 20% done... and then once more at around 70% done why is the stats not getting updated more frequently.
 
the test application i am using has a 33 min test with lots of read and write
 
thx
Himanshu

[GENERAL] One Sequence for all tables or one Sequence for each table?

2005-06-02 Thread Janning Vygen
Hi,

if you define a SERIAL column postgresql's default is to generate a sequence 
for each SERIAL column (table_column_seq). But you can use one sequence for 
the whole database like this:

CREATE SEQUENCE dbseq;
CREATE TABLE one (
  id int4 NOT NULL DEFAULT nextval('dbseq')
);
CREATE TABLE two (
  id int4 NOT NULL DEFAULT nextval('dbseq')
);

One drawback: You reach the internal end of a sequence faster if you use your 
sequence for all tables. But this can be avoided if you use int8 datatype. 

Are there other drawbacks/benefits using one Sequence for each table or one 
sequence for all tables?

kind regards,
janning

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] writting a large store procedure

2005-06-02 Thread ouyang_jw
I use EMS PostgreSQL Manager 2.8.0.3

  - Original Message -
  From: Rodríguez Rodríguez, Pere
  To: pgsql-general@postgresql.org
  Sent: Thursday, June 02, 2005 3:57 PM
  Subject: [GENERAL] writting a large store procedure

  Hello,

  I'm writing a large stored procedure (more than 700 lines) and I have many
  problems debugging it. How can I debug it easily?

  I use pgAdmin, is there another editor (free software) that permits
  writing large stored procedures more easily?

  Thanks in advance.

  pere

[GENERAL] writting a large store procedure

2005-06-02 Thread Rodríguez Rodríguez, Pere
Hello,

I'm writing a large stored procedure (more than 700 lines) and I have many
problems debugging it. How can I debug it easily?

I use pgAdmin, is there another editor (free software) that permits writing
large stored procedures more easily?

Thanks in advance.

pere

[GENERAL] index row size 2728 exceeds btree maximum, 2713

2005-06-02 Thread Dinesh Pandey

TABLE

  Column    | Type
------------+-----------------------
 scan_id    | bigint
 host_ip    | character varying(15)
 port_num   | integer
 plugin_id  | integer
 severity   | character varying(50)
 data       | text

Indexes:
    "pk_scanned_port_info" PRIMARY KEY, btree (scan_id, host_ip, port_num,
plugin_id, severity, data)

On inserting record I am getting this error "index row size 2728 exceeds
btree maximum, 2713"

How to solve this problem?