Re: [GENERAL] Postgresql mentioned on Newsforge MySQL article

2006-08-29 Thread Markus Schiltknecht

Scott Marlowe wrote:

Was this all the same basic task implemented by different teams then?


Yep.


Can we see the code?  hack it?  I'm sure someone here could help out.


Sure.


I don't care about the contest, but it would be nice to be able to put
out a version that could compete with MySQL's.


Sure. The main db-contest site of the magazine is [1], but it's all
German. You'll find an English translation of the original article,
describing the problem to solve, in [2].


The contest entries are downloadable on [3]. Mine is the 'bluegap' one. 
If you want to hack together a high-performance solution based on 
PostgreSQL I'd start with Alvar C.H. Freude's disqualified entry.


Happy hacking

Markus


[1]: http://www.heise.de/ct/dbcontest/
[2]: http://firebird.sourceforge.net/connect/ct-dbContest.html
[3]: http://www.heise.de/ct/dbcontest/teilnehmer.shtml

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Shared Objects (Dynamic loading)

2006-08-29 Thread Tiger Quimpo
[original email got sent to Jasbinder but not to the list;
 fixing that]
On 8/28/06, Gerald Timothy G Quimpo... wrote:
> Any reason why you don't do your functions and triggers in
> pl/pgsql instead?  It's simpler to set up and learn
> than pl/perl, IMO.

On Mon, 2006-08-28 at 03:27 -0400, Jasbinder Bali wrote:
> Its because my trigger has to initiate some unix tools 
> and the code for the same is already written in Perl.
> So my trigger just needs to call the Perl program that 
> would do the needful eventually.

OK. Something to keep in mind:

Whatever your triggers do can be rolled back if the
transaction rolls back for whatever reason.  If
you're calling external programs inside your trigger,
you're performing actions which you can't roll back.
For any rolled-back transaction where you already ran
the external unix tools, there's going to be a
disconnect between the data and what the unix tools
did.

One approach to this is to have a status table where 
you push things that should be executed externally, 
have the triggers insert into that status table, and 
have a separate program that either polls the status 
table or does a LISTEN for notification that it should 
check the status table.

There's some discussion of that in the archives. This
way, inserts into the status table ALSO rollback if the
transaction rolls back, so if a status table entry
ever becomes visible, it's only because the transaction
already completed and it's safe to do the external 
unix tool tasks.
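A minimal sketch of that queue-table pattern, assuming hypothetical table, trigger, and channel names (none of these appear in the thread):

```sql
-- Hypothetical queue table for work that must happen outside the transaction.
CREATE TABLE external_jobs (
    id      serial PRIMARY KEY,
    payload text NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

-- Trigger function: queue the job and signal any listener.  The queued
-- row only becomes visible to the worker if the transaction commits.
CREATE OR REPLACE FUNCTION queue_external_job() RETURNS trigger AS $$
BEGIN
    INSERT INTO external_jobs (payload) VALUES (NEW.id::text);
    NOTIFY external_jobs;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- some_table stands in for whatever table the real trigger watches.
CREATE TRIGGER queue_job AFTER INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE queue_external_job();

-- The worker process runs LISTEN external_jobs; on wakeup it does
--   SELECT id, payload FROM external_jobs WHERE NOT done;
-- runs the unix tool, then marks the row:
--   UPDATE external_jobs SET done = true WHERE id = ...;
```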

tiger




[GENERAL] xpath_string and group by

2006-08-29 Thread Perez
Hi all,

Using 8.1.4 and contrib/xml2.  When I do a 

select xpath_string(note, '//Thing') as note,
       count(aDate)
from theTable
group by lower(xpath_string(note, '//Thing'))
order by 2 desc;

I get an error:
GROUP BY must contain note.  
But I can do that for a plain text/varchar field.  Adding the non-xpath 
note field messes up the grouping.

Can someone explain what interaction of features causes the above?  I'm
finding that to do the counts the way I want (case-insensitive with
trimmed blanks) I end up doing some variation of:

select note, count(aDate) from
  (select lower(xpath_string(note, '//Thing')) as note, aDate
   from theTable) as foo
group by note
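For what it's worth, one alternative that avoids the subquery is to reference the grouped expression by its ordinal position in the select list; a sketch against the same table:

```sql
-- GROUP BY 1 refers to the first select-list expression, so the grouped
-- expression and the output column are guaranteed to match.
select lower(xpath_string(note, '//Thing')) as note,
       count(aDate)
from theTable
group by 1
order by 2 desc;
```

This groups case-insensitively, at the cost of returning the lower-cased text.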

tia,
arturo



[GENERAL] Fwd: Can't populate database using Copy

2006-08-29 Thread Jim
OK, disabled SELinux, so can copy now; thanks for your time... Jim

<[EMAIL PROTECTED]> wrote:
Date: Sat, 26 Aug 2006 11:35:56 -0700 (PDT)
From: Jim <[EMAIL PROTECTED]>
Subject: Can't populate database using Copy
To: pgsql-general@postgresql.org

copy tabledaily from '/home/jb/FinalData.txt' with csv;
ERROR:  could not open file "/home/jb/FinalData.txt" for reading: Permission denied.

Yet everything else works, including populating tables with Insert
queries.  All the permissions were changed for world access (perhaps a
bad idea, but out of frustration; and I am the only person on this
computer).

drwxrwxrwx  3 root root 4096 Aug 21 10:19 home
drwxrwxrwx 25 jb   jb   4096 Aug 26 11:18 jb
-rwxrwxrwx  1 jb   jb    255 Aug 25 18:15 FinalData.txt

I see other people with this same problem on the mailing lists, but no
example solutions are offered.  Can you get the newbies started?
Thanks.

Re: [GENERAL] mising chunk

2006-08-29 Thread Andrzej



Hello, thanks for the answer. I've also searched Google but found
nothing on this. How did you locate the faulty row? Can you send me
some of the queries you used? I would be very grateful.

Thanks very much
Andrzej

- Original Message -
From: Silvela, Jaime (Exchange)
To: Andrzej Folek; pgsql-general@postgresql.org
Sent: Saturday, August 26, 2006 10:03 PM
Subject: RE: [GENERAL] mising chunk

I had this problem a few weeks ago.  What I did was isolate the
problem into an individual row, then go over the fields one by one
till I located the free-text field that was responsible.  Then I
repopulated it with the value from the latest backup I could find.  I
wasn't able to find any better techniques on the web.

For locating the faulty row, you might want to write a procedural
routine that queries rows sequentially.  I did it in a very crude way,
by using LIMIT statements till I hit the mark.

As to why this happens, the suggestion I got here was faulty hardware.

Good luck


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Andrzej Folek
Sent: Saturday, August 26, 2006 3:47 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] mising chunk

I've got a problem with a conversion.  The database is running on pg
8.1.3.  I have 2 tables; in one there is a mediabinary column of bytea
type.  The conversion to the other table only moves data from
timestamp type to timestamptz; mediabinary in both is the same.
During the query that moves the data I receive 'MISSING CHUNK NUMBER 0
FOR TOAST VALUE ..'.  I've re-indexed this table, vacuumed, and
nothing; every time I get this error.  Please give me some advice on
how to fix this, or even how to dump the data to a file.  Any
suggestion...

***
Bear Stearns is not responsible for any recommendation, solicitation,
offer or agreement or any information about any transaction, customer
account or account activity contained in this communication.  Bear
Stearns does not provide tax, legal or accounting advice.  You should
consult your own tax, legal and accounting advisors before engaging in
any transaction.  In order for Bear Stearns to comply with Internal
Revenue Service Circular 230 (if applicable), you are notified that
any discussion of U.S. federal tax issues contained or referred to
herein is not intended or written to be used, and cannot be used, for
the purpose of:  (A) avoiding penalties that may be imposed under the
Internal Revenue Code; nor (B) promoting, marketing or recommending to
another party any transaction or matter addressed herein.
***


Re: [GENERAL] Shared Objects (Dynamic loading)

2006-08-29 Thread Gerald Timothy G Quimpo
On Mon, 2006-08-28 at 01:29 -0400, Jasbinder Bali wrote:
> Also, the triggers that i wrote in C are not all that elaborative.
> They are pretty basic triggers. Also, I'm a rookie in perl but don't
> need to do something hifi with it. 

Any reason why you don't do your functions and triggers in
pl/pgsql instead?  It's simpler to set up and learn
than pl/perl, IMO.

tiger

-- 
Gerald Timothy Quimpo   [EMAIL PROTECTED]
Business Systems Development, KFC/Mr Donut/Ramcar

  Debugging code is twice as hard as writing it, so by definition,
   if you code to the best of your ability, you are not capable of
   debugging it.




Re: [GENERAL] How do i store arbitrary questions and answers in SQL?

2006-08-29 Thread macgillivary
I agree that option 3 is the way to go.  There is a little reading at:
http://www.varlena.com/varlena/GeneralBits/110.php
which may be of interest.

Tim Allen wrote:
> lifeisgood wrote:
> > 3. Different answer tables each with answer types - same problem as 2
> > but even harder.
> 
> This is the other option I mentioned above. It's not hard at all.




Re: [GENERAL] speeding up big query lookup

2006-08-29 Thread macgillivary
I just finished reading 'The Art of SQL' by Stephane Faroult who has a
chapter (ch 6) discussing this very topic.  I strongly recommend any
developer dealing with databases take a few days to read this
narrative.

A solution would seem to depend on whether you have many objects whose
measurements change only occasionally, or very few objects whose
measurements change very frequently.  If you have a chance, go to your
local big-name bookstore that allows you to sit and read the book,
jump to ch. 6 (page 156 or so), and get a good understanding of the
various techniques to take into consideration.

A good suggestion (imo) is the composite key already mentioned.  If I
can give an example from the book, in the case where you have many
objects whose measurements change occasionally:

select whatever
from object_val as outer
where outer.item_id = someValueForObjectX
  and outer.object_val_type_id = someValueForTypeA
  and outer.observation_date = ( select max(inner.observation_date)
                                 from object_val as inner
                                 where inner.item_id = someValueForObjectX
                                   and inner.object_val_type_id = someValueForTypeA
                                   and inner.observation_date <= yourReferenceDate )

Hopefully, I haven't completely misunderstood the author's intentions
and given you some silly query.  Some may argue that the inner.item_id
could be correlated to the outer.item_id, but then the inner query
would need to be run multiple times.  If used as presented, the inner
query fires only once.

I'd be rather interested in knowing whether the above query (or
similar) works in practice, as things can occasionally sound good on
paper but not work in the real world; I'd like to know if in your
situation it provides a good execution time.  I would first take the
step of creating the composite key suggested by Ragnar.
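The composite key mentioned above might look something like this (column names taken from the query; the index name is made up):

```sql
-- Composite index matching the correlated subquery's WHERE clause, with
-- observation_date last so max(observation_date) can be read off the index.
CREATE INDEX object_val_item_type_date_idx
    ON object_val (item_id, object_val_type_id, observation_date);
```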

Good luck,
am.

"Silvela, Jaime (Exchange)" wrote:
> This is a question on speeding up some type of queries.
>
> I have a very big table that catalogs measurements of some objects over
> time. Measurements can be of several (~10) types. It keeps the
> observation date in a field, and indicates the type of measurement in
> another field.
>
> I often need to get the latest measurement of type A for object X.
>




[GENERAL] transaction isolation level

2006-08-29 Thread josh hes.
We recently updated our postgres to 8.1.4, since then
I’ve been getting the following error:

[Aug 28, 2006 5:43:16 AM ERROR]
org.postgresql.util.PSQLException: Cannot change
transaction isolation level in the middle of a
transaction.
 at
org.postgresql.jdbc2.AbstractJdbc2Connection.setTransactionIsolation(AbstractJdbc2Connection.java:729)
 at com.evermind._in.setTransactionIsolation(Unknown
Source)
 at com.evermind._in.setTransactionIsolation(Unknown
Source)
 at com.evermind._in.setTransactionIsolation(Unknown
Source)

Here is an example of the code that does this:

Connection con = null;
try
{
    DataSource ds = (DataSource) ctx.lookup("jdbc/EncounterEJBDS");
    con = ds.getConnection();
    con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    .
    .
    .

Anybody experienced this or have a fix?
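At the SQL level, the driver's restriction mirrors the server rule that the isolation level can only be changed before a transaction has done any work; a sketch of the behavior in psql (not the poster's code):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- OK: first statement in the transaction
SELECT 1;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- fails here: once a query has run, the isolation level can no longer be changed
COMMIT;
```

If the JDBC driver or connection pool now opens a transaction implicitly before setTransactionIsolation() is called, the same error appears from Java.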







Re: [GENERAL] speeding up big query lookup

2006-08-29 Thread macgillivary
Just for fun, another approach since I believe pg supports it:

select whateverFields
from object_val as outer
where (outer.object_id, outer.object_val_type_id, outer.observation_date) IN
      (select inner.object_id, inner.object_val_type, max(inner.observation_date)
       from object_val as inner
       where inner.object_id = somevalueForObjectX
         and inner.object_val_type = someValueForTypeA
         and inner.observation_date <= yourReferenceDate
       group by inner.object_id, inner.object_val_type)

The reason these subqueries should run quickly is that object_id,
object_val_type, and observation_date make up a composite key, so the
subquery should execute extremely fast, thus eliminating the majority
of the data when you want to display or act on other fields from
object_val (as outer).  I suppose if you don't need any further
information from object_val, and you are happy with the speeds, Tom's
method is smooth.

Adding the order by clause will take you out of the 'relational world'
and thus slow you down.  My fear with the triggers and the separate
snapshot is that the delays are spread out and add questionable
complexity, and potentially unnecessary overhead, to the application.
Something to consider (although admittedly it is arguably a weak
consideration in some circumstances) is the extra space, indexes, and
other factors, such as additional time for backup routines (and
restoration), that the extra table creates.

Best of luck,
am


"Silvela, Jaime (Exchange)" wrote:
> No, you can make this work just fine if you JOIN right.
> Your way is a more concise way of expressing it, though.
>
> Tom's trick
>
>   SELECT DISTINCT ON (object_id, object_val_type_id) * from object_val
> ORDER BY object_id DESC, object_val_type_id DESC, observation_date
> DESC
>
> Runs about twice as fast as the GROUP BY ... HAVING, but definitely not
> as fast as keeping a separate table with only the latest observations,
> updated by triggers. I'll be testing out the differences in overall
> performance for my applications.
> 
> Thanks for the suggestions,
> Jaime
>




[GENERAL] Deathly slow performance on SMP red-hat system

2006-08-29 Thread Patrick TJ McPhee
I have a Red Hat ELsmp system running on a dual opteron hp server with
16 gigs of memory. I believe the RH installation is straight out of the
box. I've compiled postgres 8.1.4 from sources.

The problem is that query performance is horrible, and I can't think
why, although it seems clear the problem is outside the query engine.
For instance, "select 1" takes on the order of 100ms. Explain analyze
doesn't shed much light on why this might be:

 explain analyze select 1;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.005..0.006 rows=1 loops=1)
 Total runtime: 0.059 ms
(2 rows)

But if I turn on duration logging, I get timings like
 LOG:  duration: 91.480 ms

I was able to pg_reload 80 gigs of data in a reasonable amount of time, but
individual queries are taking a long time.

I'm wondering if anyone's seen anything like this.
-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]



Re: [GENERAL] xpath_string and group by

2006-08-29 Thread Martijn van Oosterhout
On Sat, Aug 26, 2006 at 03:51:06PM -0400, Perez wrote:
> Hi all,
> 
> Using 8.1.4 and contrib/xml2.  When I do a 
> 
> select xpath_string(note, '//Thing') as note,
>count(aDate) from theTable
> group by lower(xpath_string(note, '//Thing'))
> order by 2 desc;
> 
> I get an error:
> GROUP BY must contain note.  
> But I can do that for a plain text/varchar field.  Adding the non-xpath 
> note field messes up the grouping.

I wonder if it's getting confused about which "note" you're referring
to in the GROUP BY clause.

> select note, count(aDate) from 
>   (select  lower(xpath_string(note, '//Thing')) as note, aDate from 
> theTable) as foo
> group by note

This is about the same thing, so why not use that?

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] xpath_string and group by

2006-08-29 Thread Arturo Perez
In article <[EMAIL PROTECTED]>,
 kleptog@svana.org (Martijn van Oosterhout) wrote:

> On Sat, Aug 26, 2006 at 03:51:06PM -0400, Perez wrote:
> > Hi all,
> > 
> > Using 8.1.4 and contrib/xml2.  When I do a 
> > 
> > select xpath_string(note, '//Thing') as note,
> >count(aDate) from theTable
> > group by lower(xpath_string(note, '//Thing'))
> > order by 2 desc;
> > 
> > I get an error:
> > GROUP BY must contain note.  
> > But I can do that for a plain text/varchar field.  Adding the non-xpath 
> > note field messes up the grouping.
> 
> I wonder if it's getting confused about which "note" you're referring
> to in the GROUP BY clause.
> 
> > select note, count(aDate) from 
> >   (select  lower(xpath_string(note, '//Thing')) as note, aDate from 
> > theTable) as foo
> > group by note
> 
> This is about the same thing, so why not use that?
> 
> Have a nice day,

You're right, it is almost the same thing.  But the second form loses
the case of the original; everything is returned lower case.  If I
must I must, but I'd like to preserve the case if possible.

-arturo



Re: [GENERAL] speeding up big query lookup

2006-08-29 Thread Geoffrey

macgillivary wrote:

I just finished reading 'The Art of SQL' by Stephane Faroult who has a
chapter (ch 6) discussing this very topic.


I'd be curious to know of any other references or books folks would
recommend when it comes to writing efficient SQL, as well as
references on database design.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] xpath_string and group by

2006-08-29 Thread Martijn van Oosterhout
On Tue, Aug 29, 2006 at 07:05:44AM -0400, Arturo Perez wrote:
> > > select note, count(aDate) from 
> > >   (select  lower(xpath_string(note, '//Thing')) as note, aDate from 
> > > theTable) as foo
> > > group by note
> > 
> > This is about the same thing, so why not use that?
> 
> You're right, it is almost the same thing.  But the second form loses
> the case of the original; everything is returned lower case.  If I must
> I must, but I'd like to preserve the case if possible.

Oh I see, you want to group things ignoring case, but want to keep the
case, and presumably you don't care which version you get.

Well, how about:

select max(note), count(aDate) from
  (select  xpath_string(note, '//Thing') as note, aDate from  theTable) as foo
group by lower(note);

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] database files are incompatible with server, after

2006-08-29 Thread Douglas McNaught
CSN <[EMAIL PROTECTED]> writes:

> I don't think so -- I followed the instructions here:
>
> http://www.robbyonrails.com/articles/2006/05/29/install-ruby-rails-and-postgresql-on-osx
>
> But looking around, I see there's a pg_ctl in
> /usr/local/bin, but 'port contents postgresql8' shows
> a pg_ctl in /opt/local/lib/pgsql8/bin.

Which one are you intending to run, and which one is first in the
PATH?

-Doug



Re: [GENERAL] transaction isolation level

2006-08-29 Thread Tom Lane
"josh hes." <[EMAIL PROTECTED]> writes:
> We recently updated our postgres to 8.1.4, since then
> I’ve been getting the following error:

> [Aug 28, 2006 5:43:16 AM ERROR]
> org.postgresql.util.PSQLException: Cannot change
> transaction isolation level in the middle of a
> transaction.
>  at
> org.postgresql.jdbc2.AbstractJdbc2Connection.setTransactionIsolation(AbstractJdbc2Connection.java:729)

Updated to 8.1.4 from what?

You'll probably have better luck asking on the pgsql-jdbc list (and
mentioning which jdbc driver versions are involved).  This looks to me
like it's associated with a change in driver behavior not backend
behavior.

regards, tom lane



Re: [GENERAL] speeding up big query lookup

2006-08-29 Thread Silvela, Jaime \(Exchange\)
Thanks for the tips, am

Actually, your suggestion is equivalent to JOINing the table with a
GROUP BY copy of itself, and EXPLAIN shows both versions using the same
index and aggregates. Just a matter of style. Your previous suggestion
from the book works well too, but I actually prefer the JOIN method,
since that allows me to set the object_id and/or object_val_type values
in only one place.
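That JOIN formulation isn't spelled out anywhere in the thread; a sketch of what it presumably looks like, reusing the column names from the earlier queries:

```sql
-- Join the table against a GROUP BY copy of itself that picks out the
-- latest observation per (object_id, object_val_type_id) pair.
SELECT o.*
FROM object_val AS o
JOIN (SELECT object_id, object_val_type_id,
             max(observation_date) AS observation_date
      FROM object_val
      GROUP BY object_id, object_val_type_id) AS latest
  USING (object_id, object_val_type_id, observation_date);
```

Filters on object_id or object_val_type_id then only need to be added in one place, on the outer query.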

Tom's method is faster, but has against it a bit of obscurity - it's
very fine tuned to a very specific behavior of DISTINCT ON and is less
easy to read than the others.

I fully agree that it is annoying to keep another table with triggers.
And of course, that table needs to be indexed too, or it's worthless.
I'm wondering how much extra time the db spends running all those
indexes and triggers, and I'll probably dismantle that in favor of the
composite index and the queries suggested so far.

I'll definitely check that book, I've been looking for something like
that.

Thanks
Jaime





Re: [GENERAL] Deathly slow performance on SMP red-hat system

2006-08-29 Thread Joshua D. Drake

Patrick TJ McPhee wrote:

I have a Red Hat ELsmp system running on a dual opteron hp server with
16 gigs of memory. I believe the RH installation is straight out of the
box. I've compiled postgres 8.1.4 from sources.

The problem is that query performance is horrible, and I can't think
why, although it seems clear the problem is outside the query engine.
For instance, "select 1" takes on the order of 100ms. Explain analyze
doesn't shed much light on why this might be:

 explain analyze select 1;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.005..0.006 rows=1 loops=1)
 Total runtime: 0.059 ms
(2 rows)

But if I turn on duration logging, I get timings like
 LOG:  duration: 91.480 ms

I was able to pg_reload 80 gigs of data in a reasonable amount of time, but
individual queries are taking a long time.

I'm wondering if anyone's seen anything like this.


Vacuum? Analyze?

Joshua D. Drake

--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [GENERAL] Deathly slow performance on SMP red-hat system

2006-08-29 Thread Steve Poe
Patrick,

Have you modified your postgresql.conf file or is this default
out-of-the-box too? Are the database and database logs (pg_xlog) on
shared disc space?

Steve

On 8/28/06, Patrick TJ McPhee <[EMAIL PROTECTED]> wrote:
> I have a Red Hat ELsmp system running on a dual opteron hp server with
> 16 gigs of memory. I believe the RH installation is straight out of the
> box. I've compiled postgres 8.1.4 from sources.
>
> The problem is that query performance is horrible, and I can't think
> why, although it seems clear the problem is outside the query engine.
> For instance, "select 1" takes on the order of 100ms.
>
>  explain analyze select 1;
>  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.005..0.006 rows=1 loops=1)
>  Total runtime: 0.059 ms
> (2 rows)
>
> But if I turn on duration logging, I get timings like
>  LOG:  duration: 91.480 ms
>
> I was able to pg_reload 80 gigs of data in a reasonable amount of time, but
> individual queries are taking a long time.
>
> I'm wondering if anyone's seen anything like this.


Re: [GENERAL] Deathly slow performance on SMP red-hat system

2006-08-29 Thread Florian G. Pflug

Patrick TJ McPhee wrote:

I have a Red Hat ELsmp system running on a dual opteron hp server with
16 gigs of memory. I believe the RH installation is straight out of the
box. I've compiled postgres 8.1.4 from sources.

The problem is that query performance is horrible, and I can't think
why, although it seems clear the problem is outside the query engine.
For instance, "select 1" takes on the order of 100ms. Explain analyze
doesn't shed much light on why this might be:

I've had this problem on an old version of Red Hat. It turned out that
the driver included with Red Hat for the RAID controller was very
slow; I think it was fsync performance that was really bad with that
driver. The controller was an Adaptec aacraid. In the end, we used
Debian and a recent 2.6 kernel, and things were much faster.

greetings, Florian Pflug



[GENERAL] Confused on a some deadlocks occuring...

2006-08-29 Thread Erik Jones

Ok, consider the following table definition:

CREATE TABLE stats (
id SERIAL PRIMARY KEY,
hits bigint default 0,
clickthrus bigint default 0,
referrals bigint default 0);


Now, this table has a lot of rows that are constantly being updated by 
queries of the following form:


UPDATE stats
SET clickthrus = clickthrus + #
WHERE id = blah;  -- sub various values for # and blah

There can be, and often are,  multiple updates for the same row coming 
in at the same time,  but never for the same field.  My understanding of 
the locking involved is that updates take out row-exclusive locks to 
prevent other transactions from modifying the data and to serialize with 
other updates.  So, when multiple update statements for the same row come 
in, the first to arrive is granted a row-exclusive lock and the others 
wait.  When the first finishes and commits, the second to have arrived 
gets the lock, and so forth.  Here is what I am seeing all through my logs:


2006-08-29 03:17:25 CDT 16074 192.168.1.173(35190):STATEMENT: ABORT
2006-08-29 03:17:25 CDT 8553 192.168.1.168(42707):ERROR: deadlock detected
2006-08-29 03:17:25 CDT 8553 192.168.1.168(42707):DETAIL: Process 8553 
waits for ShareLock on transaction 1548224183; blocked by process 5499.
Process 5499 waits for ShareLock on transaction 1548224182; blocked by 
process 8553.

2006-08-29 03:17:25 CDT 8553 192.168.1.168(42707):STATEMENT: UPDATE stats
SET hits = hits + 3
WHERE id = 271524;

or,

2006-08-29 08:47:31 CDT 12479 192.168.1.168(46125):ERROR: deadlock detected
2006-08-29 08:47:31 CDT 12479 192.168.1.168(46125):DETAIL: Process 12479 
waits for ExclusiveLock on tuple (3024,45) of relation 33942 of database 
33325; blocked by process 12513

.
Process 12513 waits for ShareLock on transaction 1550567046; blocked by 
process 12495.
Process 12495 waits for ShareLock on transaction 1550569729; blocked by 
process 12479.

2006-08-29 08:47:31 CDT 12479 192.168.1.168(46125):STATEMENT: UPDATE stats
SET click_thrus = clickthrus + 1
WHERE id = 275359;

What's with ShareLock s on transactions?  Where do those come from?
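For what it's worth, a backend waiting on another transaction's row lock shows up as waiting for a ShareLock on the holder's transaction ID, and the cycle in the log is consistent with two multi-statement transactions updating the same two rows in opposite order. A sketch of that pattern (ids hypothetical, each session in its own connection):

```sql
-- Session 1:
BEGIN;
UPDATE stats SET hits = hits + 1 WHERE id = 1;

-- Session 2:
BEGIN;
UPDATE stats SET hits = hits + 1 WHERE id = 2;

-- Session 1 (blocks: row 2 is locked by session 2's open transaction):
UPDATE stats SET hits = hits + 1 WHERE id = 2;

-- Session 2 (row 1 is locked by session 1, which is waiting on session 2):
UPDATE stats SET hits = hits + 1 WHERE id = 1;
-- ERROR:  deadlock detected

-- Updating rows in a consistent order (e.g. ascending id) in every
-- transaction, or committing after each single-row update, breaks the cycle.
```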

--
erik jones <[EMAIL PROTECTED]>
software development
emma (r)




Re: [GENERAL] Confused on a some deadlocks occuring...

2006-08-29 Thread Erik Jones

Erik Jones wrote:

Ok, consider the following table definition:

CREATE TABLE stats (
id SERIAL PRIMARY KEY,
hits bigint default 0,
clickthrus bigint default 0,
referrals bigint default 0);


Now, this table has a lot of rows that are constantly being updated by 
queries of the following form:


UPDATE stats
SET clickthrus = clickthrus + #
WHERE id = blah;  -- sub various values for # and blah

There can be, and often are, multiple updates for the same row coming 
in at the same time, but never for the same field. My understanding 
of the locking involved is that updates take out row-exclusive locks 
to prevent other transactions from modifying the data and to serialize 
with other updates. So, when multiple update statements for the same row 
come in, the first to arrive is granted a row-exclusive lock and the 
others wait. When the first is finished and commits, the second to 
have arrived gets the lock, and so forth. Here is what I am seeing all 
through my logs:


2006-08-29 03:17:25 CDT 16074 192.168.1.173(35190):STATEMENT: ABORT
2006-08-29 03:17:25 CDT 8553 192.168.1.168(42707):ERROR: deadlock 
detected
2006-08-29 03:17:25 CDT 8553 192.168.1.168(42707):DETAIL: Process 8553 
waits for ShareLock on transaction 1548224183; blocked by process 5499.
Process 5499 waits for ShareLock on transaction 1548224182; blocked by 
process 8553.

2006-08-29 03:17:25 CDT 8553 192.168.1.168(42707):STATEMENT: UPDATE stats
SET hits = hits + 3
WHERE id = 271524;

or,

2006-08-29 08:47:31 CDT 12479 192.168.1.168(46125):ERROR: deadlock 
detected
2006-08-29 08:47:31 CDT 12479 192.168.1.168(46125):DETAIL: Process 
12479 waits for ExclusiveLock on tuple (3024,45) of relation 33942 of 
database 33325; blocked by process 12513

.
Process 12513 waits for ShareLock on transaction 1550567046; blocked 
by process 12495.
Process 12495 waits for ShareLock on transaction 1550569729; blocked 
by process 12479.
2006-08-29 08:47:31 CDT 12479 192.168.1.168(46125):STATEMENT: UPDATE 
stats

SET click_thrus = clickthrus + 1
WHERE id = 275359;

What's with ShareLocks on transactions?  Where do those come from?



I should also note that each of those updates occurs in its own 
transaction, but that they do not attempt to modify any other rows in 
that table before committing. They do, however, delete rows in another 
common table (where they pulled the stat counts from), but the rows they 
delete are disjoint.


The whole process/algorithm is such:

1.  Get rows matching X from temp table.
2.  Accumulate values from X and update row and field corresponding to X 
in stats table.

3.  Delete rows collected in step one.
4.  Commit.
5.  Repeat from step 1.

With multiple processes using the same algo and tables but for different 
values of X.



--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)




Re: [GENERAL] Atomicity?

2006-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Naz Gassiep wrote:
> I am getting an error that I think I understand, but that I didn't think
> should happen.
> 
> Below is the output from psql that I am getting to trigger this error.
> If the violation of the constraint really is being caused WITHIN the
> query, doesn't that violate the principle of atomicity? I.e., operations
> and entities should be considered a single entire construct rather than
> a collection of smaller, discrete parts. Or do I have my understanding
> all wrong?
> 
> In any case, how do I get around this problem?
> 
> conwatch=# UPDATE replies SET rgt = rgt + 2 WHERE postid = 18 AND rgt >=
> 11;

Another work-around would be a loop.

Pseudo-code:
  BEGIN
  FOR :X IN
  SELECT RGT, REPLYID
  FROM REPLIES
  WHERE POSTID = 18
AND RGT >= 11
  DO
  UPDATE REPLIES
  SET RGT = RGT + 2
  WHERE REPLYID = :X.REPLYID;
  END FOR;
  END;
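In PostgreSQL that pseudo-code could be written as PL/pgSQL, roughly like this (a sketch; the function name is invented, and the replies columns and the hard-coded postid/rgt values are taken from the thread):

```sql
CREATE OR REPLACE FUNCTION shift_rgt() RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
    x RECORD;
BEGIN
    -- Update the rows one at a time instead of in a single statement.
    FOR x IN SELECT replyid FROM replies
             WHERE postid = 18 AND rgt >= 11
    LOOP
        UPDATE replies SET rgt = rgt + 2
        WHERE replyid = x.replyid;
    END LOOP;
END;
$$;
```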

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFE9GSjS9HxQb37XmcRArqiAJ90r+NPOzs312kav/682DiH16YBzgCgriDt
pCy0mK/74NvnHim5uaLeYrU=
=hJ1s
-END PGP SIGNATURE-



[GENERAL] Anonymous stored procedures

2006-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

Does PostgreSQL support them?

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFE9GUES9HxQb37XmcRApFWAKCYIKuQKe4aSYXPpVWi/q2hxxdKLgCeKsXm
q/2TF5v5F5yEt5nHYWILdWI=
=BCmx
-END PGP SIGNATURE-



Re: [GENERAL] Anonymous stored procedures

2006-08-29 Thread David Fetter
On Tue, Aug 29, 2006 at 11:02:12AM -0500, Ron Johnson wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Hi,
> 
> Does PostgreSQL support them?

Not yet.

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!



[GENERAL] strange sum behaviour

2006-08-29 Thread Andrew Baerg

Hi,

I am getting strange results from the sum function as follows:

corp=# select amount from acc_trans where trans_id=19721 and chart_id=10019;
amount
-
   4.88
  117.1
-121.98
(3 rows)

corp=# select sum(amount) from acc_trans where trans_id=19721 and
chart_id=10019;
sum
--
-1.4210854715202e-14
(1 row)


amount is defined as double precision. I noticed that if I cast amount
as numeric, the sum comes out 0 as expected.

I am using postgresql 8.0.1

Thanks in advance for any help or suggestions.

Andrew Baerg



Re: [GENERAL] Shared Objects (Dynamic loading)

2006-08-29 Thread Jasbinder Bali
Tiger,

Thanks a lot for sharing that info. That's exactly how we have
implemented our system, after seeing the mismatch in the transactions
for any rolled-back transactions.

Regards,
~Jas

On 8/29/06, Tiger Quimpo <[EMAIL PROTECTED]> wrote:
[original email got sent to Jasbinder but not to the list
 fixing that]
On 8/28/06, Gerald Timothy G Quimpo... wrote:
> Any reason why you don't do your functions and triggers in
> pl/pgsql instead?  It's simpler to set up and learn
> than pl/perl, IMO.

On Mon, 2006-08-28 at 03:27 -0400, Jasbinder Bali wrote:
> It's because my trigger has to initiate some unix tools
> and the code for the same is already written in Perl.
> So my trigger just needs to call the Perl program that
> would do the needful eventually.

OK. Something to keep in mind:

Whatever your triggers do can be rolled back if the
transaction rolls back for whatever reason.  If
you're calling external programs inside your trigger,
you're performing actions which you can't roll back.
For any rolled-back transactions where you already ran
the external unix tools, there's going to be a
disconnect between the data and what the unix tools
did.

One approach to this is to have a status table where
you push things that should be executed externally,
have the triggers insert into that status table, and
have a separate program that either polls the status
table or does a LISTEN for notification that it should
check the status table.  There's some discussion of
that in the archives.  This way, inserts into the
status table ALSO roll back if the transaction rolls
back, so if a status table entry ever becomes visible,
it's only because the transaction already completed
and it's safe to do the external unix tool tasks.

tiger
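That status-table pattern can be sketched like this (all table, function, trigger, and channel names here are invented for illustration):

```sql
-- Work queue written to by triggers; an external worker consumes it
-- only after the writing transaction commits.
CREATE TABLE task_queue (
    id      serial PRIMARY KEY,
    payload text NOT NULL
);

CREATE OR REPLACE FUNCTION enqueue_task() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- This INSERT rolls back with the surrounding transaction, so the
    -- worker never sees tasks from aborted transactions.
    INSERT INTO task_queue (payload) VALUES (NEW.id::text);
    NOTIFY task_ready;  -- wakes a worker that ran LISTEN task_ready
    RETURN NEW;
END;
$$;

CREATE TRIGGER queue_external_work
    AFTER INSERT ON some_table   -- hypothetical source table
    FOR EACH ROW EXECUTE PROCEDURE enqueue_task();
```

The worker then does the external unix-tool work for each visible task_queue row and deletes or marks it when done.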


Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Karen Hill

"Andrew Baerg" wrote:
> Hi,
>
> I am getting strange results from the sum function as follows:
>
> corp=# select amount from acc_trans where trans_id=19721 and chart_id=10019;
>  amount
> -
> 4.88
>117.1
>  -121.98
> (3 rows)
>
> corp=# select sum(amount) from acc_trans where trans_id=19721 and
> chart_id=10019;
>  sum
> --
>  -1.4210854715202e-14
> (1 row)
>
>
> amount is defined as double precision. I noticed that if I cast amount
> as numeric, the sum comes out 0 as expected.
>


You are using the wrong datatype if you are working with currency.  Use
numeric or decimal instead.  The "money" type is deprecated.

http://www.postgresql.org/docs/8.1/interactive/datatype-money.html




Re: [GENERAL] strange sum behaviour

2006-08-29 Thread codeWarrior
As an alternative -- you could do an inline type cast

SELECT SUM(amount)::numeric(10, 2) FROM acc_trans WHERE trans_id=19721 AND 
chart_id=10019;






"Karen Hill" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
>
> "Andrew Baerg" wrote:
>> Hi,
>>
>> I am getting strange results from the sum function as follows:
>>
>> corp=# select amount from acc_trans where trans_id=19721 and 
>> chart_id=10019;
>>  amount
>> -
>> 4.88
>>117.1
>>  -121.98
>> (3 rows)
>>
>> corp=# select sum(amount) from acc_trans where trans_id=19721 and
>> chart_id=10019;
>>  sum
>> --
>>  -1.4210854715202e-14
>> (1 row)
>>
>>
>> amount is defined as double precision. I noticed that if I cast amount
>> as numeric, the sum comes out 0 as expected.
>>
>
> Double precision according to the documentation is "8 byte
> variable-precision, inexact".  That means rounding occurs when you do
> the sum.  You should use numeric or decimal as the datatype.  The money
> type is deprecated, so don't use it even if what you are summing is currency.
> 





Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Peter Eisentraut
> corp=# select sum(amount) from acc_trans where trans_id=19721 and
> chart_id=10019;
>  sum
> --
>  -1.4210854715202e-14

[expected to be 0]

Floating-point numbers are typically inexact like that, and if you 
rely on equality comparisons, you're doing something wrong.  You should 
use numeric or fix your application to take these errors into account.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Seneca Cunningham

On 29-Aug-2006, at 13:13:48, Andrew Baerg wrote:


Hi,

I am getting strange results from the sum function as follows:

corp=# select amount from acc_trans where trans_id=19721 and  
chart_id=10019;

amount
-
   4.88
  117.1
-121.98
(3 rows)

corp=# select sum(amount) from acc_trans where trans_id=19721 and
chart_id=10019;
sum
--
-1.4210854715202e-14
(1 row)


amount is defined as double precision. I noticed that if I cast amount
as numeric, the sum comes out 0 as expected.



Don't use floats or doubles for financial data, use numeric types.   
Your double sum looks to be a normal error for floats in that situation.


Take a look at  for  
some information about why you should be using numeric for your  
amount column.


--
Seneca Cunningham
[EMAIL PROTECTED]






[GENERAL] counting days

2006-08-29 Thread garry saddington
I need to count the days between two dates that are not saturdays or
sundays. I have read the manual and searched the lists but I am
struggling. I can count the days but am finding difficulty excluding sat
and sun from the count. I need this without reference to any tables.
Does anyone have any pointers please.
Regards
Garry




Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Andrew Baerg

Thanks for so many prompt responses. I have researched the differences
between floating point and arbitrary precision numbers in the pgsql
docs and understand now what is happening.

Thanks again to the many great members of the pgsql community.

Andrew

On 8/29/06, Andrew Baerg <[EMAIL PROTECTED]> wrote:

Hi,

I am getting strange results from the sum function as follows:

corp=# select amount from acc_trans where trans_id=19721 and chart_id=10019;
 amount
-
4.88
   117.1
 -121.98
(3 rows)

corp=# select sum(amount) from acc_trans where trans_id=19721 and
chart_id=10019;
 sum
--
 -1.4210854715202e-14
(1 row)


amount is defined as double precision. I noticed that if I cast amount
as numeric, the sum comes out 0 as expected.

I am using postgresql 8.0.1

Thanks in advance for any help or suggestions.

Andrew Baerg





Re: [GENERAL] strange sum behaviour

2006-08-29 Thread John D. Burger

Andrew Baerg wrote:

corp=# select amount from acc_trans where trans_id=19721 and 
chart_id=10019;

amount
-
   4.88
  117.1
-121.98
(3 rows)

corp=# select sum(amount) from acc_trans where trans_id=19721 and
chart_id=10019;
sum
--
-1.4210854715202e-14
(1 row)



This has nothing to do with SUM():

> select 4.88::float + 117.1::float + -121.98::float;
   ?column?
--
 -1.4210854715202e-14
(1 row)

It's just the inherent inexactness of floating point, and probably not 
even particular to Postgres.


- John D. Burger
  MITRE




Re: [GENERAL] Anonymous stored procedures

2006-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

David Fetter wrote:
> On Tue, Aug 29, 2006 at 11:02:12AM -0500, Ron Johnson wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> Hi,
>>
>> Does PostgreSQL support them?
> 
> Not yet.

Any plans?  Don't see it on the TODO?

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFE9JLeS9HxQb37XmcRAj2kAJ4/JWxQhMw4p5TVk0QTgKP6UuSVbwCgqqKY
8UB3pm8qGQYpLHE06DLSJ/Q=
=TZSh
-END PGP SIGNATURE-



Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Chris Mair

> corp=# select amount from acc_trans where trans_id=19721 and chart_id=10019;
>  amount
> -
> 4.88
>117.1
>  -121.98
> (3 rows)
> 
> corp=# select sum(amount) from acc_trans where trans_id=19721 and
> chart_id=10019;
>  sum
> --
>  -1.4210854715202e-14
> (1 row)
> 
> 
> amount is defined as double precision. I noticed that if I cast amount
> as numeric, the sum comes out 0 as expected.

0.1 cannot be represented exactly using floating point numbers
(the same way as 1/3 cannot be represented exactly using decimal
numbers). You're bound to suffer from round-off errors.

Use numeric for exact, decimal math.

Bye, Chris.


-- 

Chris Mair
http://www.1006.org




[GENERAL] SQL Tuning

2006-08-29 Thread ing_enriquebarrios



Hi, I'm new here. I'd like to know if you can help me figure out what 
changes I can make to this query so that it runs faster, since as it 
stands it consumes a lot of CPU and memory.
Thanks in advance.
Regards

query

SELECT DISTINCT 
t.id,  
emp.descripcion as contratista, 
fecha_etapa_cod(eh.idetapa , '12') as asignado , 
t.id AS calden, 
trabajosistemao.reclamo, 
tt.descripcion, 
oc.calle, 
oc.numero, 
s.descripcion || ' - ' || s.medida  || ' - ' || s.tipo  AS solado, 
oc.cantidad,  
oc.largo, 
oc.ancho, 
oc.profundidad, 
oc.piezas, 
entrecalle_siade_cod(trabajosistemao.reclamo) as entrecalle

FROM ((((obracivil AS oc INNER JOIN etapahead AS eh ON oc.idetapa = eh.idetapa) 
 
INNER JOIN trabajo AS t ON eh.idtrabajo = t.id)
INNER JOIN aprobacion AS ap ON ap.idetapa = eh.idetapa)
INNER JOIN solado AS s ON oc.idsolado = s.idsolado)  
LEFT JOIN trabajosistemao ON t.id = trabajosistemao.id, empresa emp, 
tipotrabajo tt

WHERE fecha_etapa_cod(eh.idetapa , '3') is null AND
(oc.discrim)='D' AND

emp.idcontratista = empresa_etapa_cod(eh.idetapa) AND

tt.idtipotrabajo = idtipotrabajo_etapa_cod(eh.idetapa) AND

empresa_etapa_cod(eh.idetapa) in (500189) AND

fecha_etapa_cod(eh.idetapa , '12') between '2006-05-30' and '2006-06-30'

--select max(fecha) from aprobacion  where idetapa = $1 and idestado = $2 
--INNER JOIN aprobacion AS ap ON ap.idetapa = eh.idetapa)

UNION

SELECT 
eh.idtrabajo as id, 
emp.descripcion as contratista, 
aprobacion.fecha as asignado , 
eh.idtrabajo as calden,
trabajosistemao.reclamo, 
tt.descripcion, 
oc.calle, 
oc.numero, 
'' as solado, 
oc.cantidad,  
oc.largo, 
oc.ancho, 
oc.profundidad, 
oc.piezas, 
entrecalle_siade_cod(trabajosistemao.reclamo) as entrecalle
FROM (((
  trabajosistemao INNER JOIN etapahead AS eh ON trabajosistemao.id = 
eh.idtrabajo) 
  LEFT JOIN obracivil AS oc ON eh.idetapa = oc.idetapa) 
  INNER JOIN aprobacion ON eh.idetapa = aprobacion.idetapa) 
  INNER JOIN trabajo AS t ON trabajosistemao.id = t.id, empresa emp, 
tipotrabajo tt
WHERE (((trabajosistemao.reclamo) Is Not Null) AND ((oc.idetapa) Is Null) AND 
((eh.idetapa) Not In (SELECT DISTINCT 
etapamecanica.idetapa FROM etapamecanica INNER JOIN aprobacion ON 
etapamecanica.idetapa = aprobacion.idetapa WHERE 
(((aprobacion.idestado)='3' AND ((aprobacion.idestado)='12'))
AND emp.idcontratista = empresa_etapa_cod(eh.idetapa) 
AND tt.idtipotrabajo = idtipotrabajo_etapa_cod(eh.idetapa)
AND empresa_etapa_cod(eh.idetapa) in (500189) 
AND aprobacion.fecha between '2006-05-30' and '2006-06-30';



Explain output
*---


Unique  (cost=193218032.67..193218032.75 rows=1 width=278)
  ->  Sort  (cost=193218032.67..193218032.67 rows=2 width=278)
Sort Key: id, contratista, asignado, calden, reclamo, descripcion, 
calle, numero, solado, cantidad, largo, ancho, 
profundidad, piezas, entrecalle
->  Append  (cost=1707.38..193218032.66 rows=2 width=278)
  ->  Subquery Scan "*SELECT* 1"  (cost=1707.38..1707.43 rows=1 
width=278)
->  Unique  (cost=1707.38..1707.43 rows=1 width=278)
  ->  Sort  (cost=1707.38..1707.39 rows=1 width=278)
Sort Key: t.id, emp.descripcion, 
fecha_etapa_cod(eh.idetapa, 12::numeric), t.id, trabajosistemao.reclamo, 
tt.descripcion, oc.calle, oc.numero, s.descripcion || ' - '::character 
varying) || s.medida) || ' - '::character varying) || s.tipo), 
oc.cantidad, oc.largo, oc.ancho, oc.profundidad, oc.piezas, 
entrecalle_siade_cod(trabajosistemao.reclamo)
->  Nested Loop  (cost=1624.89..1707.37 rows=1 
width=278)
  Join Filter: ("inner".idtipotrabajo = 
idtipotrabajo_etapa_cod("outer".idetapa))
  ->  Nested Loop  (cost=1624.89..1705.57 
rows=1 width=246)
Join Filter: ("inner".idcontratista 
= empresa_etapa_cod("outer".idetapa))
->  Nested Loop  
(cost=1624.89..1691.56 rows=10 width=220)
  ->  Hash Join  
(cost=1624.89..1630.10 rows=10 width=192)
Hash Cond: 
("outer".idsolado = "inner".idsolado)
->  Seq Scan on solado 
s  (cost=0.00..3.37 rows=137 width=47)
->  Hash  
(cost=1624.86..1624.86 rows=10 width=145)
  ->  Nested Loop  
(cost=0.00..1624.86 rows=10 width=145)
->  Nested 
Loop  (cost=0.00..1609.53 rows=1 width=133)
  ->  
Nested Loop  (cost=0.00..1603.66 rows=1 width=120)
  

Re: [GENERAL] database files are incompatible with server, after computer restart

2006-08-29 Thread CSN
--- Douglas McNaught <[EMAIL PROTECTED]> wrote:

> CSN <[EMAIL PROTECTED]> writes:
> 
> > I don't think so -- I followed the instructions
> here:
> >
> >
>
http://www.robbyonrails.com/articles/2006/05/29/install-ruby-rails-and-postgresql-on-osx
> >
> > But looking around, I see there's a pg_ctl in
> > /usr/local/bin, but 'port contents postgresql8'
> shows
> > a pg_ctl in /opt/local/lib/pgsql8/bin.
> 
> Which one are you intending to run, and which one is
> first in the
> PATH?
> 
> -Doug
> 

I don't know which to run now! I tried both
'/opt/local/lib/pgsql8/bin/pg_ctl start -D pgdata' and
putting /opt/local/lib/pgsql8/bin before
/usr/local/bin in PATH and still get the same error.

Is there some way to specify HAVE_INT64_TIMESTAMP to
pg_ctl or just export what's in the existing pgdata
and initdb a new dir and import?

Thanks,
csn

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



Re: [GENERAL] counting days

2006-08-29 Thread AgentM
Perhaps you are trying to count business days? Those vary around the  
world and you would certainly need a table to hold the holidays and  
such. If you just want to count the number of non-weekend-days, then  
get the interval as days and then it's a simple matter of math:


2*daysInterval/7+(daysInterval % 7 == 6 ? 1 : 0)

This holds assuming you are using dates and not timestamps- you may  
wish to subtract 1 depending on what the dates represent.


-M

On Aug 29, 2006, at 14:35 , garry saddington wrote:


I need to count the days between two dates that are not saturdays or
sundays. I have read the manual and searched the lists but I am
struggling. I can count the days but am finding difficulty  
excluding sat

and sun from the count. I need this without reference to any tables.
Does anyone have any pointers please.
Regards





Re: [GENERAL] XPath and XML support

2006-08-29 Thread John Gray
On Thu, 24 Aug 2006 13:56:12 -0700, Frank Cohen wrote:

> Thanks Nikolay: Seeing as xml2 hasn't been ported to Windows yet  
> makes me wonder if this is going to be the best way to use XML in  
> PostgreSQL in the long-term? Is there anything else on the boards? - 
> Frank
> 

There's a lot of scope to do better than the current support, certainly.

However, the "not ported to Windows" is slightly misleading - the code
(like a lot of contrib code) compiles fine under Windows. Because of the
dependency on libxml2, it might not often be distributed as a binary, but
people are definitely using it on Win32 (I'm not really a Win32 user but I
have tried it and it did work).

John






Re: [GENERAL] database files are incompatible with server, after

2006-08-29 Thread Douglas McNaught
CSN <[EMAIL PROTECTED]> writes:

> I don't know which to run now! I tried both
> '/opt/local/lib/pgsql8/bin/pg_ctl start -D pgdata' and

You might want an absolute path for 'pgdata' here--have you tried
that? 

> putting /opt/local/lib/pgsql8/bin before
> /usr/local/bin in PATH and still get the same error.

Why don't you run both pg_ctl binaries with the --version option, then
compare against the pg_control file in the data directory (I think
that's what it's called).  That should hopefully tell you which one to
run.

> Is there some way to specify HAVE_INT64_TIMESTAMP to
> pg_ctl or just export what's in the existing pgdata
> and initdb a new dir and import?

You need to have the server running to export.  :)

I'm pretty sure you can get it running; you just haven't tried the
right way yet.

-Doug



Re: [GENERAL] database files are incompatible with server, after computer restart

2006-08-29 Thread CSN
--- Douglas McNaught <[EMAIL PROTECTED]> wrote:

> CSN <[EMAIL PROTECTED]> writes:
> 
> > I don't know which to run now! I tried both
> > '/opt/local/lib/pgsql8/bin/pg_ctl start -D pgdata'
> and
> 
> You might want an absolute path for 'pgdata'
> here--have you tried
> that? 

~ $ /opt/local/lib/pgsql8/bin/pg_ctl start -D
/Users/csn/pgdata/
postmaster starting
~ $ FATAL:  database files are incompatible with
server
DETAIL:  The database cluster was initialized without
HAVE_INT64_TIMESTAMP but the server was compiled with
HAVE_INT64_TIMESTAMP.
HINT:  It looks like you need to recompile or initdb.

> > putting /opt/local/lib/pgsql8/bin before
> > /usr/local/bin in PATH and still get the same
> error.
> 
> Why don't you run both pg_ctl binaries with the
> --version option, then
> compare against the pg_control file in the data
> directory (I think
> that's what it's called).  That should hopefully tell
> you which one to run

~ $ /usr/local/bin/pg_ctl --version
pg_ctl (PostgreSQL) 8.1.3

~ $ /opt/local/lib/pgsql8/bin/pg_ctl --version
pg_ctl (PostgreSQL) 8.1.3

pgdata/global/pg_control seems to be binary.
pgdata/PG_VERSION has 8.1 in it.

> > Is there some way to specify HAVE_INT64_TIMESTAMP
> to
> > pg_ctl or just export what's in the existing
> pgdata
> > and initdb a new dir and import?
> 
> You need to have the server running to export.  :)
> 
> I'm pretty sure you can get it running; you just
> haven't tried the
> right way yet.

I'm stymied. ;)

Thanks,
csn


> 
> -Doug
> 





Re: [GENERAL] counting days

2006-08-29 Thread Ragnar
On þri, 2006-08-29 at 19:35 +0100, garry saddington wrote:
> I need to count the days between two dates that are not saturdays or
> sundays. I have read the manual and searched the lists but I am
> struggling. I can count the days but am finding difficulty excluding sat
> and sun from the count. I need this without reference to any tables.
> Does anyone have any pointers please.


how about something like:

# assuming days is the number of days
# and dow is the day of week of the first day (0-6; sun=0)

wholeweeks=days div 7
partial=days mod 7

# adjust dow to mon=0,tue=1...sun=6
dow=(dow+6) mod 7

# count 5 weekdays for each whole week
wd=5*wholeweeks

# add all days of the partial week
wd=wd+partial

# subtract 1 if a Saturday was included
if dow<=5 and dow+partial>=6 then wd=wd-1

# subtract 1 if a Sunday was included
if dow+partial>=7 then wd=wd-1

# now wd is the result
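This whole-weeks-plus-remainder arithmetic can also be expressed directly in SQL, e.g. (a sketch; days and dow are placeholder values, with dow already shifted so Monday = 0):

```sql
-- 10 days starting on a Monday (dow = 0) contain 8 weekdays.
SELECT 5 * (days / 7) + (days % 7)
       - CASE WHEN dow <= 5 AND dow + days % 7 >= 6 THEN 1 ELSE 0 END  -- Sat
       - CASE WHEN dow + days % 7 >= 7 THEN 1 ELSE 0 END               -- Sun
       AS weekdays
FROM (SELECT 10 AS days, 0 AS dow) AS example;
```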


hope this helps
gnari





Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Seneca Cunningham wrote:
> On 29-Aug-2006, at 13:13:48, Andrew Baerg wrote:
[snip]
> Take a look at  for
> some information about why you should be using numeric for your amount
> column.

So how does PG implement Decimal?

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFE9MRDS9HxQb37XmcRAgfEAJwKv/6K6CgaCQukQsE79OcYM7MMVwCdFjPS
dOR1oyNM954cAWHW493SUv0=
=Qy0I
-END PGP SIGNATURE-



Re: [GENERAL] counting days

2006-08-29 Thread SCassidy
If I understood you correctly, you want the number of weekdays (e.g. an
integer number of days)  between 2 specified dates.

This seems to work (although I am not the greatest plpgsql function person
- not enough practice - there may be better ways):

CREATE or REPLACE FUNCTION count_weekdays (date, date) returns integer
language plpgsql STABLE
  AS '
   DECLARE
start_date alias for $1;
end_date alias for $2;
tmp_date date;
tmp_dow integer;
-- double precision returned from extract
tot_dow integer;
   BEGIN
 tmp_date := start_date;
 tot_dow := 0;
 WHILE (tmp_date <= end_date) LOOP
   select into tmp_dow  cast(extract(dow from tmp_date) as integer);
   IF ((tmp_dow >= 1) and (tmp_dow <= 5)) THEN -- dow is 0 (Sun) .. 6 (Sat); count Mon-Fri
 tot_dow := (tot_dow + 1);
   END IF;
   select into tmp_date (tmp_date + interval ''1 day '');
 END LOOP;
 return tot_dow;

   END;
';

select count_weekdays(date '2006-08-01', date '2006-08-10');
 count_weekdays

  8
(1 row)

Note that I used 2 single-quotes around ''1 day'', not double quotes.  I'm
on version 7.4.6.

Susan Cassidy



   
garry saddington <[EMAIL PROTECTED]> wrote on 08/29/2006 11:35 AM
(To: pgsql-general@postgresql.org, Subject: [GENERAL] counting days):




I need to count the days between two dates that are not saturdays or
sundays. I have read the manual and searched the lists but I am
struggling. I can count the days but am finding difficulty excluding sat
and sun from the count. I need this without reference to any tables.
Does anyone have any pointers please.
Regards
Garry










Re: [GENERAL] database files are incompatible with server, after computer restart

2006-08-29 Thread CSN
--- Douglas McNaught <[EMAIL PROTECTED]> wrote:

> CSN <[EMAIL PROTECTED]> writes:
> 
> > I'm stymied. ;)
> 
> So you've tried running both pg_ctl binaries against the data
> directory and both don't work?  

Yes.

> I wonder how you ever had a database
> working in that case.  :)

Me too! After 'initdb -D pgdata' and 'pg_ctl start -D pgdata' it ran for days 
during which I used
it quite a bit. But once I restarted the computer...

> Are you sure you have the right data directory?
> Maybe the one you
> think was used isn't the one that was actually used.

Yes. But I looked around and couldn't find any other data dirs.

> If you can't get either to work, you should hopefully be able to
> compile a version of the code without HAVE_INT64_TIMESTAMP and get it to
> start up with your data.

Hmm, I installed using DarwinPorts (I think, if that uses 'port') -- I wonder 
if it can be
reinstalled that way with the HAVE_INT64_TIMESTAMP flag set appropriately. Besides my 
current problem, does
HAVE_INT64_TIMESTAMP have much impact on things?

> But I still strongly suspect this'll be an
> easy fix once we figure it out.  :)

Heh, I hope. :) I've never had a problem with PG before, and I have no idea how 
or why this
happened (other than it coincided with rebooting).

Thanks,
csn


> -Doug
> 





Re: [GENERAL] counting days

2006-08-29 Thread David Fetter
On Tue, Aug 29, 2006 at 07:35:27PM +0100, garry saddington wrote:
> I need to count the days between two dates that are not saturdays or
> sundays.  I have read the manual and searched the lists but I am
> struggling.  I can count the days but am finding difficulty
> excluding sat and sun from the count.  I need this without reference
> to any tables.  Does anyone have any pointers please.

You can do this with an SQL function.  The function below includes
both the start date and end date, but you could adjust it so that it
takes either or neither.  You can query it as though it were a table,
e.g.

SELECT * FROM non_weekends_between('2006-08-01'::date, 'today'::date);

or in your case,

SELECT COUNT(*) FROM non_weekends_between('2006-08-01'::date, 'today'::date);

Regards,
David.

CREATE OR REPLACE FUNCTION non_weekends_between(
first_date DATE, /* $1 */
last_date DATE   /* $2 */
)
RETURNS SETOF date
LANGUAGE sql
AS
$$
SELECT
$1 + s.i
FROM generate_series(
0,
$2 - $1
) AS s(i)
WHERE
extract(
DOW
FROM
$1 + s.i
) NOT IN (
0,  /* Sunday   */
6   /* Saturday */
);
$$;
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] counting days

2006-08-29 Thread Ragnar
On Tue, 2006-08-29 at 17:32 -0400, AgentM wrote:
> Perhaps you are trying to count business days? Those vary around the  
> world and you would certainly need a table to hold the holidays and  
> such. If you just want to count the number of non-weekend-days, then  
> get the interval as days and then it's a simple matter of math:
> 
> 2*daysInterval/7+(daysInterval % 7 == 6 ? 1 : 0)

Looks like you are calculating the number of weekend
days here, not the number of weekdays. It also looks like you are
assuming the interval starts on a Monday.

gnari

> This holds assuming you are using dates and not timestamps- you may  
> wish to subtract 1 depending on what the dates represent.
> 
> -M
> 
> On Aug 29, 2006, at 14:35 , garry saddington wrote:
> 
> > I need to count the days between two dates that are not saturdays or
> > sundays. I have read the manual and searched the lists but I am
> > struggling. I can count the days but am finding difficulty  
> > excluding sat
> > and sun from the count. I need this without reference to any tables.
> > Does anyone have any pointers please.
> > Regards
> 
> 
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faq
> 


---(end of broadcast)---
TIP 6: explain analyze is your friend
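The arithmetic being debated in this thread can be checked outside the database. Below is a minimal Python sketch (not from the thread; the function name and the closed-interval convention are my assumptions) that counts non-weekend days the same way the SQL function posted earlier does, but with the full-weeks shortcut AgentM was reaching for:

```python
from datetime import date, timedelta

def non_weekend_days(first: date, last: date) -> int:
    """Count days in the closed interval [first, last] that fall Mon-Fri."""
    total = (last - first).days + 1
    full_weeks, remainder = divmod(total, 7)
    # Every full 7-day run contributes exactly 5 weekdays,
    # regardless of which day of the week it starts on.
    count = full_weeks * 5
    # Walk the 0-6 leftover days and keep those that are not Sat/Sun.
    for i in range(remainder):
        if (first + timedelta(days=full_weeks * 7 + i)).weekday() < 5:
            count += 1
    return count
```

Cross-checking against a brute-force day-by-day count is an easy way to verify the weekend handling for any starting day of the week, which is exactly the trap gnari points out in the ternary-expression version.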


Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Joshua D. Drake

Ron Johnson wrote:
> Seneca Cunningham wrote:
> > On 29-Aug-2006, at 13:13:48, Andrew Baerg wrote:
> [snip]
> > Take a look at  for
> > some information about why you should be using numeric for your amount
> > column.
>
> So how does PG implement Decimal?

As mentioned above, please look at numeric. :)

Sincerely,

Joshua D. Drake





--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 6: explain analyze is your friend
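The advice in this thread (use numeric, not a binary floating-point type, for amount columns) is easy to demonstrate outside PostgreSQL. A small Python sketch, offered only as an analogy since the thread is about PostgreSQL types: Python's float has the same base-2 rounding behaviour as float8, and decimal.Decimal plays the role of numeric:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so repeated
# addition accumulates error -- the same effect as summing a float8
# amount column.  Exact decimal arithmetic (like PostgreSQL's
# numeric type) does not drift.
float_total = sum([0.1] * 100)
exact_total = sum([Decimal("0.1")] * 100)

print(float_total)   # close to, but not exactly, 10.0
print(exact_total)
```

The same "strange sum behaviour" disappears in PostgreSQL once the column is declared numeric instead of a float type.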


Re: [GENERAL] transaction isolation level

2006-08-29 Thread Dave Cramer
My guess would be that you are getting the connection with autocommit(false).


Either way, try:

conn =  getConn...
conn.setAutoCommit(false);
conn.changeTransactionLevel


On 29-Aug-06, at 9:15 AM, Tom Lane wrote:


"josh hes." <[EMAIL PROTECTED]> writes:

We recently updated our postgres to 8.1.4, since then
I’ve been getting the following error:



[Aug 28, 2006 5:43:16 AM ERROR]
org.postgresql.util.PSQLException: Cannot change
transaction isolation level in the middle of a
transaction.
 at org.postgresql.jdbc2.AbstractJdbc2Connection.setTransactionIsolation(AbstractJdbc2Connection.java:729)


Updated to 8.1.4 from what?

You'll probably have better luck asking on the pgsql-jdbc list (and
mentioning which jdbc driver versions are involved).  This looks to me
like it's associated with a change in driver behavior not backend
behavior.

regards, tom lane

---(end of broadcast)---

TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Alvaro Herrera
Ron Johnson wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Seneca Cunningham wrote:
> > On 29-Aug-2006, at 13:13:48, Andrew Baerg wrote:
> [snip]
> > Take a look at  for
> > some information about why you should be using numeric for your amount
> > column.
> 
> So how does PG implement Decimal?

http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/numeric.c?rev=1.94;content-type=text%2Fx-cvsweb-markup

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Alvaro Herrera wrote:
> Ron Johnson wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>>
>> Seneca Cunningham wrote:
>>> On 29-Aug-2006, at 13:13:48, Andrew Baerg wrote:
>> [snip]
>>> Take a look at  for
>>> some information about why you should be using numeric for your amount
>>> column.
>> So how does PG implement Decimal?
> 
> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/utils/adt/numeric.c?rev=1.94;content-type=text%2Fx-cvsweb-markup

Thanks.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFE9N2rS9HxQb37XmcRAplmAKCHYRrv4e4Y4RAweQiJqlEe9PPAMgCggdv2
OX0hHo8jC7l6rR2i/0+vy/I=
=KFxv
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Santa Clara Pg Training Event (resent to -general)

2006-08-29 Thread Michael Loftis

I'm resending this to -general.  I'd originally sent it to -hackers.

I've been waiting for at least one more person to sign up for the training 
so OTG can confirm this class down there; they said that some of the 
internal hackers had requested the course out there, so I wanted to ping 
everyone here. I know, kinda off topic and all, but the event is coming up 
soon and I have to either book airfare, or not, this week.


Right now they only need 1-2 more people to make the course a go.



TIA!

--
"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] reindexdb and "could not open relation" error message

2006-08-29 Thread Aleksander Kmetec

Hi!

For the past two days I've been getting the following errors from my cron 
scripts which connect to the database:

day 1, on a SELECT statement:
ERROR:  could not open relation 1663/35154/221872: No such file or directory

day 2, on a LOCK TABLE statement:
ERROR:  could not open relation 1663/35154/222823: No such file or directory

These errors occur at exactly the time reindexdb is started, and I cannot reproduce 
them by executing the same SQL statements at a different time. Should I be worried about this?


I'm running:
 PostgreSQL 8.1.4 on i386-redhat-linux-gnu, compiled by GCC 
i386-redhat-linux-gcc (GCC) 4.1.0 20060304 (Red Hat 4.1.0-3)

Regards,
Aleksander

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Version 1 Calling convention

2006-08-29 Thread Jasbinder Bali
I have a .pgc file with the following code snippet in it:

PG_FUNCTION_INFO_V1 (test);

Datum
test (PG_FUNCTION_ARGS)
{
  EXEC SQL BEGIN DECLARE SECTION;
  int id = PG_GETARG_INT(0);
  char *raw_rawemail1 = PG_GETARG_CHAR_P(1);
  char *rawemail1 = (char *) palloc (VARSIZE(rawemail1));
  VARATT_SIZEP ( rawemail1 ) = VARSIZE (raw_rawemail1);  /* LINE 140 (that's where I get the syntax error) */
  memcpy( (void *) (VARDATA) (rawemail1), (void *) (VARDATA) (raw_rawemail1), (VARSIZE) (raw_rawemail1) - VARHDRSZ );
  EXEC SQL END DECLARE SECTION;
  EXEC SQL CONNECT TO dbxyz;
}

When I compile this with ecpg (ecpg parser.pgc), I get the following error:

    Syntax error at or near 'rawemail1' (LINE 140, marked above)

Don't know what this error is all about. Don't know if my declaration in the
BEGIN/END DECLARE SECTION is correct in the .pgc file for the version 1
calling convention.

Thanks
~Jas


Re: [GENERAL] Version 1 Calling convention

2006-08-29 Thread Alvaro Herrera
Jasbinder Bali wrote:
> I have a .pgc file and i have the following code snippet in it:
> 
> PG_FUNCTION_INFO_V1 (test);
> 
> Datum
> test (PG_FUNCTION_ARGS)
> {
>  EXEC SQL BEGIN DECLARE SECTION;

You can't use ecpg in server-side functions.  You need to use the SPI
interface.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Version 1 Calling convention

2006-08-29 Thread Jasbinder Bali
Well, I think I can, because so far it's been working fine. Anyway, as
suggested by others too, I'm going to change it to SPI, but right now the
matter of concern is the error I'm getting. Is it because I'm using ECPG
in server-side code, or something else?

On 8/30/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Jasbinder Bali wrote:
> > I have a .pgc file and i have the following code snippet in it:
> >
> > PG_FUNCTION_INFO_V1 (test);
> >
> > Datum
> > test (PG_FUNCTION_ARGS)
> > {
> >  EXEC SQL BEGIN DECLARE SECTION;
>
> You can't use ecpg in server-side functions.  You need to use the SPI
> interface.
>
> --
> Alvaro Herrera                http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: [GENERAL] Version 1 Calling convention

2006-08-29 Thread Jasbinder Bali
OK. I've removed the ECPG code and the error has gone away.

Thanks,
Jas

On 8/30/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Jasbinder Bali wrote:
> > I have a .pgc file and i have the following code snippet in it:
> >
> > PG_FUNCTION_INFO_V1 (test);
> >
> > Datum
> > test (PG_FUNCTION_ARGS)
> > {
> >  EXEC SQL BEGIN DECLARE SECTION;
>
> You can't use ecpg in server-side functions.  You need to use the SPI
> interface.
>
> --
> Alvaro Herrera                http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Version 1 Calling convention

2006-08-29 Thread Jasbinder Bali
Do I need to download spi.h from somewhere, or what? I can't see it anywhere.

On 8/30/06, Jasbinder Bali <[EMAIL PROTECTED]> wrote:
> OK. I've removed the ECPG code and the error has gone away.
>
> Thanks,
> Jas
>
> On 8/30/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > You can't use ecpg in server-side functions.  You need to use the SPI
> > interface.
> >
> > --
> > Alvaro Herrera                http://www.CommandPrompt.com/
> > PostgreSQL Replication, Consulting, Custom Development, 24x7 support





Re: [GENERAL] Version 1 Calling convention

2006-08-29 Thread Martijn van Oosterhout
On Wed, Aug 30, 2006 at 01:01:32AM -0400, Jasbinder Bali wrote:
> Do i need to download spi.h from somewhere or what? I can't see it anywhere

You need the backend header files. In Debian they're in the
postgresql-dev package. You'll need to add
-I/usr/include/postgresql/server to your compile command to see them.

Hope this helps,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
From each according to his ability. To each according to his ability to
litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Version 1 Calling convention

2006-08-29 Thread Jasbinder Bali
Isn't there any rpm to download the SPI?

On 8/30/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
> On Wed, Aug 30, 2006 at 01:01:32AM -0400, Jasbinder Bali wrote:
> > Do i need to download spi.h from somewhere or what? I can't see it anywhere
>
> You need the backend header files. In Debian they're in the
> postgresql-dev package. You'll need to add
> -I/usr/include/postgresql/server to your compile command to see them.
>
> Hope this helps,
> --
> Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/



Re: [GENERAL] Version 1 Calling convention

2006-08-29 Thread Jasbinder Bali
OK, I got it: spi.h was in /usr/include/pgsql/server/executor.

Jas

On 8/30/06, Jasbinder Bali <[EMAIL PROTECTED]> wrote:
> Isn't there any rpm to download the SPI?
>
> On 8/30/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
> > On Wed, Aug 30, 2006 at 01:01:32AM -0400, Jasbinder Bali wrote:
> > > Do i need to download spi.h from somewhere or what? I can't see it anywhere
> >
> > You need the backend header files. In Debian they're in the
> > postgresql-dev package. You'll need to add
> > -I/usr/include/postgresql/server to your compile command to see them.
> >
> > Hope this helps,
> > --
> > Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/






[GENERAL] #include "executor/spi.h" throwing errors

2006-08-29 Thread Harpreet Dhaliwal
Hi,

I'm trying to use SPI for database connectivity on the server side. My
database is initialized at /usr/local/pgsql/hps.

spi.h is at the following location: /usr/include/pgsql/server/executor.

I have a C file that has the SPI code and is located at /usr/local/pgsql/hps.
In this file I do:

    #include "executor/spi.h"

This throws a hell of a lot of errors (hundreds of them), even if I don't
have any SPI code in my C file as of now.

Can anyone please tell me what's going wrong and which header files should
be kept where?

Thanks in advance,
~Harpreet


Re: [GENERAL] #include "executor/spi.h" throwing errors

2006-08-29 Thread Martijn van Oosterhout
On Wed, Aug 30, 2006 at 02:16:36AM -0400, Harpreet Dhaliwal wrote:
> Hi,
> I'm trying to use SPI for database connectivity at the server side.



> This throws hell lot of errors (hundreds of them), even if i don't have any
> SPI code in my C file as of now.
> 
> Can anyone please tell me whats going wrong and what header files should be
> kept where?

Well, you're doing something wrong then:

$ cat a.c
#include "executor/spi.h"
$ gcc -I /usr/include/postgresql/server -c a.c 
$

No errors or warnings. Please show the exact command you're using.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
From each according to his ability. To each according to his ability to
litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] #include "executor/spi.h" throwing errors

2006-08-29 Thread Harpreet Dhaliwal
Now I did something like this:

$ cat a.c
#include "executor/spi.h"
$ gcc -I /usr/include/pgsql/server -c a.c
$

and I'm still getting the same errors. The only difference between your
command and mine is /usr/include/postgresql/server vs. /usr/include/pgsql/server.

On 8/30/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
> On Wed, Aug 30, 2006 at 02:16:36AM -0400, Harpreet Dhaliwal wrote:
> > Hi,
> > I'm trying to use SPI for database connectivity at the server side.
> > This throws hell lot of errors (hundreds of them), even if i don't have any
> > SPI code in my C file as of now.
> >
> > Can anyone please tell me whats going wrong and what header files should be
> > kept where?
>
> Well, you're doing something wrong then:
>
> $ cat a.c
> #include "executor/spi.h"
> $ gcc -I /usr/include/postgresql/server -c a.c
> $
>
> No errors or warnings. Please show the exact command you're using.
>
> Have a nice day,
> --
> Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
-BEGIN PGP SIGNATURE-Version: GnuPG v1.4.1 (GNU/Linux)iD8DBQFE9S/1IB7bNG8LQkwRAolRAJ9Kx0cFrzFiox0Bva5dD/nps+9EKQCdGsyn+46ZT9EMGLoYC55WGuhaRvQ==eDn0-END PGP SIGNATURE-