Re: [GENERAL] More Deadlock Detection on Insert

2004-03-10 Thread Csaba Nagy
It is the foreign key. Checking foreign keys in postgres is implemented
by locking the corresponding row in the parent table. So if you have 2
transactions inserting rows which reference the same keys in the parent
table in reverse order, you get a deadlock.
This lock is admittedly too strong and not appropriate for a foreign key
check, but postgres lacks the proper lock type to do it.

I think there was a patch to disable this locking and accept a certain
risk of data corruption - look in the archives. It might suit your needs
if you can make sure your application can accept that risk (or does not
generate the risky cases in the first place).
Or you can order your inserts, but that won't help if you have multiple
and complex foreign key relations, and is bound to be broken when you
change schema.
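
A minimal sketch of the ordered-insert idea, using the table and column
names from this thread (the key values are made up): if every loading
transaction inserts its rows sorted by the referenced key, the parent-row
locks are always acquired in the same global order, so the waits cannot
form a cycle.

```sql
-- Both loading sessions sort their batch by widget_maker before inserting.
BEGIN;
INSERT INTO widgets (widget_key, widget_group, widget_maker)
VALUES (1, 10, 100);   -- locks the addresses row for maker 100 first
INSERT INTO widgets (widget_key, widget_group, widget_maker)
VALUES (2, 10, 200);   -- then the row for maker 200, never the reverse
COMMIT;
```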

HTH,
Csaba.

On Wed, 2004-03-10 at 16:33, [EMAIL PROTECTED] wrote:
 I've searched the archives and haven't found anything that matches my
 problem, other than it may have something to do with foreign keys.
 
 If I have two processes loading a particular table at the same time, I may
 get:
 
 ERROR:  deadlock detected
 DETAIL:  Process 12154 waits for ShareLock on transaction 74240; blocked by
 process 12142.
 Process 12142 waits for ShareLock on transaction 74241; blocked by
 process 12154.
 
 
 The table in question has the following attributes of interest:
 
   widget_key numeric(12,0)
   widget_group numeric(10,0)
   widget_maker numeric(12,0)
 
 The foreign key references are:
 
 $1 FOREIGN KEY (widget_group) REFERENCES widget_sessions(widget_group)
 ON DELETE CASCADE
 $2 FOREIGN KEY (widget_maker) REFERENCES addresses(widget_maker)
 
 There are some other attributes that are indexed but have no foreign key
 references.
 
 The program is a C program using ecpg.  It loads 10,000 records at a time.
 The load goes something like this:
 
while (widgets) {
 
[snip]
 
 EXEC SQL SELECT
 nextval('widget_key_sequence')
 INTO
 :widget_key;
 
 
 [snip]
 
 INSERT INTO widgets ...
 
 [snip]
 
 }
 
 There are no explicit locks.  The INSERT is just INSERT INTO ... VALUES
 ...
 
 When two processes run simultaneously, they will never be referencing the
 same widget_group record, but they could reference the same widget_maker
 record.  I need the widget_maker reference to ensure referential integrity.
 
 All indexes are btree.
 
 I can find no logical reason for the deadlock.  Why is this occurring and
 how can I get around it?  Unless it is deadlocking on the widget record
 itself, it would have to be either the sequence or the widget_maker foreign
 key reference.  Neither makes any sense.  The times I've seen it, based on
 the log messages it appears to happen as soon as the second process starts
 to load, and after the 'nextval' (i.e. on the INSERT).
 
 Any help would be appreciated.
 
 Wes
 
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faqs/FAQ.html




Re: [GENERAL] Data Corruption in case of abrupt failure

2004-03-10 Thread scott.marlowe
On Wed, 3 Mar 2004, satish satish wrote:

 Hi,
  
 I am trying to do some reliability tests on PostgreSQL. I have a 
 use-case where the power can go off abruptly. I initiated 10,000 insert 
 operations and pulled out the cable in the middle. I had the auto-commit 
 option turned on. I observed that 2 out of 5 times the tables were totally 
 corrupted and I could not read any data, whereas 3 times I was able to read 
 the data which was inserted.
  
 Is there any way that I could avoid that data corruption and ensure 
 that at least the records inserted up to that point are available in the 
 database? Or are there any tools through which I can recover the data in 
 case the database gets corrupted?

There are a few steps to making sure your data can survive a power failure 
when writing.

1:  Make sure fsync is enabled in postgresql.conf / postmaster startup.
2:  Use a journaling file system.  Meta data only is fine.
3:  Use hardware that fsyncs and doesn't lie about it.
4:  Ditto #3 for your OS.  Most of the OSes pgsql runs on are fine.

#3 above is the big sticking point most of the time: write-caching RAID 
controllers without battery backup, or write-caching hard drives that lie 
about their fsync.  IDE drives are known to generally do this; SCSI 
drives generally don't.  No one on the list has done a lot of testing with 
SATA, but if someone wants to send me a drive and a controller card I'd be 
quite happy to pull the power plug on my box to test it.  :-0

Anyway, someone else on the list has reported that the Escalade IDE RAID 
controller passes the power pull test.  I have personally tested the LSI / 
MegaRAID controllers (the U160 one) with battery backed cache and found 
they survive the power off test well.  Someone else has tested the 
adaptec SCSI RAID controllers with battery backed cache and reported that 
they worked as well.  Turning off the cache on IDE drives will drop your 
average performance to about 1/3 that of what you get with caching on.  
But it should make them reliable for power loss recovery.  The command in 
linux is: hdparm -W0 /dev/hda.  Replace hda with the drive you would like 
to disable the write caching for.




Re: [GENERAL] postgres FROM clause problem

2004-03-10 Thread Tom Lane
Paolo Tavalazzi [EMAIL PROTECTED] writes:
 I have two queries that differ only in the order of the tables
 in the FROM clause, but they give back different query plans:

Hm, seems like the planner is making wacko estimates in the second case.
You didn't say what data types are involved in this query --- are any of
the join columns int8 or float8 or timestamp?  If so you might be
getting bitten by the 7.4 pg_statistic alignment bug.  Please follow the
repair procedures suggested in the 7.4.2 release notes:
http://developer.postgresql.org/docs/postgres/release.html#RELEASE-7-4-2
and see if that improves matters.

regards, tom lane



Re: [GENERAL] Moving from 7.3.4 to 7.4.x?

2004-03-10 Thread Bruno Wolff III
On Wed, Mar 10, 2004 at 16:18:06 +0100,
  Bjørn T Johansen [EMAIL PROTECTED] wrote:
 I am running 7.3.4 and I am thinking about upgrading to 7.4, so I was just 
 wondering what pitfalls, caveats,etc I should know of?

Going from 7.3 to 7.4 shouldn't be a big deal. 7.4 pg_dumpall has some
improvements and if possible you should dump your 7.3 database with
the 7.4 version of pg_dumpall.
You can look through the 7.4 release notes to see if any changes are likely
to cause you problems.
7.4.2 was just released. I don't think RPMs are out just yet, but you
probably want to go right to 7.4.2 if that isn't a problem.



Re: [GENERAL] postgres FROM clause problem

2004-03-10 Thread Paolo Tavalazzi
Alle 16:54, mercoledì 10 marzo 2004, hai scritto:
 Paolo Tavalazzi [EMAIL PROTECTED] writes:
  I have two queries that differ only in the order of the tables
  in the FROM clause, but they give back different query plans:

 Hm, seems like the planner is making wacko estimates in the second case.
 You didn't say what data types are involved in this query --- are any of
 the join columns int8 or float8 or timestamp?  If so you might be
 getting bitten by the 7.4 pg_statistic alignment bug.  Please follow the
 repair procedures suggested in the 7.4.2 release notes:
 http://developer.postgresql.org/docs/postgres/release.html#RELEASE-7-4-2
 and see if that improves matters.

   regards, tom lane


I have applied the procedure for fixing pg_statistic as you said, but the 
result is the same!
Only tran.time in the query is a timestamp; no int8 or float8.
The OS is FEDORA 0.96 x86_64 and the flag --enable-integer-datetimes is 
false; could that be a problem?

I don't know what to do; please help!

Another question: in 7.4 the GROUP BY clause does not guarantee the ordering
of results.  Can I disable the new GROUP BY algorithm to maintain coherence
with the programs that I have in production with pg_7.3?

Thank you !!!



Re: [GENERAL] does this look more like a possible bug or more like a possible hardware problem...? (long)

2004-03-10 Thread Frank van Vugt
 I'm sorry, I meant to say save a copy of
 pg_attribute_relid_attnum_index.  The correct filename for it can be
 found via
 select relfilenode from pg_class where relname =
 'pg_attribute_relid_attnum_index';

Got it, made a backup of the entire database as well. Since the db wasn't 
filled yet, both files are fairly small when bzipped (0.5MB and  5MB) and 
both can be made available if you want.

Issuing 'reindex table pg_attribute' did not solve the problem, though

  Oid of table article_property_tree from database megafox_trial:
  -
  VERY scary:  more than one table with that name found!!

 Do you have more than one such table (presumably in different schemas)?
 If so this isn't scary at all.  I don't believe oid2name has been fixed
 to be schema-aware :-(

Nope, all user-tables are in public at the moment.

Also:

megafox_trial=# select * from article_property_tree;
ERROR:  catalog is missing 6 attribute(s) for relid 8349771





-- 
Best,

Frank.




Re: [GENERAL] updates (postgreSQL) very slow

2004-03-10 Thread scott.marlowe

Have you run this update query again and again with vacuuming?

http://www.postgresql.org/docs/7.4/static/maintenance.html#ROUTINE-VACUUMING

If so, you might have millions and millions of dead tuples taking up 
space and slowing things down.  If you're running 7.4, install the 
autovacuum daemon and turn it on.  Nice little program that should mostly 
take care of this issue for you.
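
As a hedged illustration of the routine maintenance above (the table name
is made up):

```sql
-- One-off recovery if dead tuples have already piled up; rewrites the
-- table and takes an exclusive lock, so run it during a quiet period.
VACUUM FULL VERBOSE ANALYZE mytable;

-- Routine maintenance: reclaims dead space for reuse without the heavy
-- lock; cheap enough to schedule often if you don't run the autovacuum daemon.
VACUUM ANALYZE mytable;
```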

Got any foreign keys on that field?  Triggers?






Re: [GENERAL] does this look more like a possible bug or more like a possible hardware problem...? (long)

2004-03-10 Thread Tom Lane
Frank van Vugt [EMAIL PROTECTED] writes:
 At one point, I arrived at the following situation:
 psql:/home/data/megadump.sql:5169: WARNING:  specified item offset is too
 large
 psql:/home/data/megadump.sql:5169: PANIC:  failed to add item to the page
 for pg_attribute_relid_attnum_index

 Trying the same script on a newly created database doesn't show the problem. 
 However, I do still have the database that came up with this message, which 
 is now failing a vacuum full verbose analyse with:

 ERROR:  catalog is missing 6 attribute(s) for relid 8349771

This is consistent with the idea that pg_attribute_relid_attnum_index is
corrupted.  I would suggest saving a copy of that file for postmortem
analysis and then trying to REINDEX pg_attribute.  (Depending on which
PG version you are running, that may require running a standalone
backend.  See the REINDEX man page.)

If REINDEX makes the problem go away, would you send me the corrupted
index file off-list?

regards, tom lane



Re: [GENERAL] does this look more like a possible bug or more like a possible hardware problem...? (long)

2004-03-10 Thread Tom Lane
Frank van Vugt [EMAIL PROTECTED] writes:
 This is consistent with the idea that pg_attribute_relid_attnum_index is
 corrupted.  I would suggest saving a copy of that file for postmortem
 analysis and then trying to REINDEX pg_attribute.

 Uhm.. looking for a file named 8349771 located in the directory base/nr
 where nr corresponds to the proper db, using oid2name.
 We seem to have a small problem : there is no such file.

I'm sorry, I meant to say save a copy of
pg_attribute_relid_attnum_index.  The correct filename for it can be
found via
select relfilenode from pg_class where relname = 'pg_attribute_relid_attnum_index';

 Oid of table article_property_tree from database megafox_trial:
 -
 VERY scary:  more than one table with that name found!!

Do you have more than one such table (presumably in different schemas)?
If so this isn't scary at all.  I don't believe oid2name has been fixed
to be schema-aware :-(

regards, tom lane



Re: [GENERAL] does this look more like a possible bug or more like a possible hardware problem...? (long)

2004-03-10 Thread Frank van Vugt
Some additional info:

# select * from pg_class where relname = 'article_property_tree';
-[ RECORD 1 ]--
+---
relname| article_property_tree
relnamespace   | 2200
reltype| 8349772
relowner   | 1000
relam  | 0
relfilenode| 8349771
relpages   | 4
reltuples  | 299
reltoastrelid  | 8349775
reltoastidxid  | 0
relhasindex| t
relisshared| f
relkind| r
relnatts   | 6
relchecks  | 1
reltriggers| 3
relukeys   | 0
relfkeys   | 0
relrefs| 0
relhasoids | t
relhaspkey | t
relhasrules| f
relhassubclass | f
relacl | {vugtf=a*r*w*d*R*x*t*/vugtf,=arwdRxt/vugtf,postgres=arwdRxt/
vugtf}
-[ RECORD 2 ]--
+---
relname| article_property_tree
relnamespace   | 2200
reltype| 8354495
relowner   | 1000
relam  | 0
relfilenode| 8354494
relpages   | 4
reltuples  | 299
reltoastrelid  | 8354498
reltoastidxid  | 0
relhasindex| t
relisshared| f
relkind| r
relnatts   | 6
relchecks  | 1
reltriggers| 5
relukeys   | 0
relfkeys   | 0
relrefs| 0
relhasoids | t
relhaspkey | t
relhasrules| f
relhassubclass | f
relacl | {vugtf=a*r*w*d*R*x*t*/vugtf,=arwdRxt/vugtf,postgres=arwdRxt/
vugtf}







-- 
Best,

Frank.




Re: [GENERAL] [NEWBIE] need help optimizing this query

2004-03-10 Thread Stephan Szabo

On Wed, 10 Mar 2004, Dexter Tad-y wrote:

 On Wed, 2004-03-10 at 23:08, Stephan Szabo wrote:
  On Wed, 10 Mar 2004, Dexter Tad-y wrote:
 
   Greetings,
   I need help in optimizing this query:
  
   select a.id, b.transaction from test as a left join pg_locks as b on
   a.xmax = b.transaction where b.transaction is null;
  
   im using the query in obtaining records not locked by other
   transactions. any help is appreciated.
 
  It's hard to say without knowing more about the size of a and explain
  analyze output.  On my 7.4 machine, using NOT IN rather than the left join
  gives about a 2x speed increase on a 400k row table.


 2) using NOT IN

 csp= explain select * from test where id not in (select test.id from
 test, pg_locks where pg_locks.transaction=test.xmax);

I think you'd want:
select * from test where xmax not in (select transaction from pg_locks);

Also, use explain analyze which will actually run the query and show you
the real time for the steps.



Re: [GENERAL] More Deadlock Detection on Insert

2004-03-10 Thread wespvp
On 3/10/04 10:03 AM, Csaba Nagy [EMAIL PROTECTED] wrote:

 It is the foreign key. Checking foreign keys in postgres is implemented
 by locking the corresponding row in the parent table. So if you have 2
 transactions inserting rows which reference the same keys in the parent
 table in reverse order, you get a deadlock.
 This lock is admittedly too strong and not appropriate for a foreign key
 check, but postgres lacks the proper lock type to do it.

 I think there was a patch to disable this locking and accept a certain
 risk of data corruption - look in the archives. It might suit your needs
 if you can make sure your application can accept that risk (or does not
 generate the risky cases in the first place).
 Or you can order your inserts, but that won't help if you have multiple
 and complex foreign key relations, and is bound to be broken when you
 change schema.

Ugh.  That's ugly.  All I need to do is verify at insert time that the
referenced record exists (database enforced as opposed to code enforced).

If I understand you right, if I were to insert the records ordered by the
child foreign key (since the parent is unique between runs), this would
eliminate the deadlock.  I'm assuming the lock is retained until the
transaction is complete?

Since all 10,000 records are a single transaction (if one fails, all must
fail), and it is almost certain that two loads will have common child
records, it sounds like even with ordered records I have almost no
concurrency.  Once a collision occurred, process 2 would wait on process 1
to complete.  I might as well just grab an exclusive lock on the table when
loading it?
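
If serializing the loads turns out to be acceptable, a sketch of the
explicit-lock approach (EXCLUSIVE mode still permits plain SELECTs, so
readers are not blocked):

```sql
BEGIN;
-- The second loader blocks here until the first one COMMITs, so the
-- foreign-key row locks can never deadlock between the two batches.
LOCK TABLE widgets IN EXCLUSIVE MODE;
-- ... the 10,000 INSERTs ...
COMMIT;
```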

I'd prefer to avoid one-off patches, as in a new installation that is likely
to be overlooked.

Wes


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] pg_aggregate weird stuff

2004-03-10 Thread Alexander Cohen
I'm trying to get a list of aggregate names. So I do SELECT aggfnoid 
FROM pg_aggregate to get a list of Oids of the functions. But this 
column seems to return names and not Oids of the procs. Is this 
normal? How can I get a list of proc Oids for the aggregates?

thanks!
--
Alexander Cohen
http://www.toomuchspace.com
(819) 348-9237
(819) 432-3443


Re: [GENERAL] [NEWBIE] need help optimizing this query

2004-03-10 Thread Dexter Tad-y

 explain select * from foo where xmax not in
 (select transaction from pg_locks where transaction is not null);


Thanks a lot! This query is what i've been looking for.



Cheers!

Dexter Tad-y





Re: [GENERAL] load testing

2004-03-10 Thread scott.marlowe
On Tue, 9 Mar 2004, Sally Sally wrote:

 I wanted to do some quick load testing on a postgres database. Does anyone 
 have any tips on how to go about doing this?
 Thanks much.

If you just wanna beat the database a bit to test for reliability etc, 
look at contrib/pgbench in the distro.

If you want to test massive workloads, look at the OSDL tests on 
www.osdl.org:

http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/






Re: [GENERAL] load testing

2004-03-10 Thread Steve Wolfe
 I wanted to do some quick load testing on a postgres database. Does
anyone
 have any tips on how to go about doing this?
 Thanks much.

   Sure.  Test after the manner in which the database is normally used,
and with real-world data.

   I've seen far too many people benchmark a database system by opening
a single connection, and issuing a number of queries.  However, it's more
common for a database server to be handling multiple queries at the same
time.

   Also, I've seen far too many people use contrived test data and
contrived queries.  However, the nature of contrived queries may be very
different from the queries you actually run.  Test with what you use!

  For my own benchmarking, I usually log ~10,000 queries from our
production server, and start a Perl script that I whipped up quickly.  It
will split the queries into chunks, and test with 1 through 10
simultaneous connections, and that's been a very good indicator of how the
machine in question will behave once it's put into production.

steve




Re: [GENERAL] updates (postgreSQL) very slow

2004-03-10 Thread Fred Moyer
On Wed, 2004-03-10 at 15:30, Tom Lane wrote:
 Fred Moyer [EMAIL PROTECTED] writes:
  This is just a Seq Scan where a numeric field must be updated to
  NULL but if I run it you can see that this simple query takes
  forever (7628686.23 ms this is over 2 hours for only updating
  747524 records!).
 
  However updating every row to null with 700k rows is going to take a while
 
 A while, sure, but 2 hours seems excessive to me too.  I'm betting that
 there are triggers or foreign keys on the table being updated, and that
 that's where the time is going.  It might be possible to improve that,
 but Bobbie hasn't given us enough information.

If there are no foreign keys or triggers and updating each row is taking
one drive seek ( approximately 9 ms with the 80 gig IDE drive being used
here ) then to do 747524 seeks will take 6727716 ms, about 10% less than
the time of 7628686 ms for the update above.  Is this is an accurate
estimate or are these numbers just coincidence?  It seems like this could
represent the least efficient update scenario.




Re: [GENERAL] postgres FROM clause problem

2004-03-10 Thread Tom Lane
Paolo Tavalazzi [EMAIL PROTECTED] writes:
 I  have applied the procedure for fixing pg_statistic as you had said, but  the 
 result is the same!

Hm.  It could be a planner bug.  Can you reproduce the misbehavior if
you dump and load the tables into a fresh database?  If so, could you
send me the dump so I can look at the problem with a debugger?

(Also, you might try updating to 7.4.2 first and see if that changes
anything.  We did fix quite a number of bugs already in 7.4.2...)

 Another question: in 7.4 the GROUP BY clause does not guarantee the ordering
 of results.  Can I disable the new GROUP BY algorithm to maintain coherence
 with the programs that I have in production with pg_7.3?

As Bruno said, your programs are broken because they are assuming
something not guaranteed by the SQL spec.  But until you get around to
adding the ORDER BY clauses they should have, see enable_hashagg.
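
A sketch of both options (the column name is illustrative; tran is the
table mentioned earlier in the thread):

```sql
-- Stopgap: force the 7.3-style sort-based grouping for this session.
SET enable_hashagg = off;

-- Proper fix: state the ordering explicitly, as the SQL spec requires.
SELECT acct, count(*) FROM tran GROUP BY acct ORDER BY acct;
```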

regards, tom lane



Re: [GENERAL] postgres FROM clause problem (GROUP BY subquestion)

2004-03-10 Thread Bruno Wolff III
On Wed, Mar 10, 2004 at 18:33:41 +0100,
  Paolo Tavalazzi [EMAIL PROTECTED] wrote:
 
 I don't know what to make, help me please!
 
 Another question: in 7.4 the GROUP BY clause does not guarantee the ordering
 of results.  Can I disable the new GROUP BY algorithm to maintain coherence
 with the programs that I have in production with pg_7.3?

GROUP BY never guaranteed an order. That this happened in 7.3 was an
implementation detail. If you want a specific order you need to use an
ORDER BY clause.



[GENERAL] returning row numbers in select

2004-03-10 Thread Randall Skelton
Is there a way to return an integer row number for a query?  Note that 
there may be a large number of rows so I would rather not have joined 
selects...

For a rather simple query:

SELECT timestamp from test
WHERE  timestamp > '2004-02-02 04:15:00.00 +0'
AND    timestamp < '2004-02-02 04:15:10.00 +0';
where 'test' is

  Column   |   Type   | Modifiers
---+--+---
 timestamp | timestamp with time zone |
 value | double precision |
Indexes: table_timestamp
Is there a way to return a sequential row number beginning at 1?

i.e.

row| timestamp
---+
1   2004-02-01 23:15:00.824-05
2   2004-02-01 23:15:01.824-05
3   2004-02-01 23:15:02.824-05
...
My reason for wanting row numbers is so I can use a 'MOD(row_number, 
n)' to get the nth row from the table.

Cheers,
Randall


Re: [GENERAL] Question on Opteron performance

2004-03-10 Thread Steve Wolfe
 The only time I've seen high cpu and memory bandwidth load with
near-zero i/o
 load like you describe was on Oracle and it turned out to be an sql
 optimization problem.

 What caused it was a moderate but not very large table on which a very
 frequent query was doing a full table scan (= sequential scan). The
entire
 table was easily kept in cache, but it was large enough that merely
scanning
 every block of it in the cache consumed a lot of cpu and memory
bandwidth. I
 don't remember how large, but something on the order of a few thousand
records.

  Every so often, I log all queries that are issued, and on a seperate
machine, I EXPLAIN them and store the results in a database, so I can do
analysis on them.  Each time, we look at what's using the greatest amount
of resources, and attack that.  Believe me, the low-hanging fruit like
using indexes instead of sequential scans were eliminated years ago. : )

   Over the past four years, our traffic has increased, on average, about
90% per year.  We've also incorporated far more sources of data into our
model, and come up with far more ways to use the data.  When you're
talking about exponential traffic growth combined with exponential data
complexity, it doesn't take long before you start hitting limits!

   Before I shell out the $15k on the 4-way Opteron, I'm going to spend
some long, hard time looking for ways to make the system more efficient.
However, after all that's already been done, I'm not optimistic that it's
going to preclude needing the new server.  I'm just surprised that nobody
seems to have used PostgreSQL on a quad-Opteron before!

steve




Re: [GENERAL] postgres FROM clause problem

2004-03-10 Thread Martijn van Oosterhout
On Wed, Mar 10, 2004 at 06:33:41PM +0100, Paolo Tavalazzi wrote:
 Another question, in 7.4 GROUP BY clause  not guarantee the ordering of result.
 Can I desable the new group by algorithm to maintain the coherence whit the programs 
 that I have in production with pg_7.3

If you're expecting ordered output, perhaps you should add an ORDER BY
clause?
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 If the Catholic church can survive the printing press, science fiction
 will certainly weather the advent of bookwarez.
http://craphound.com/ebooksneitherenorbooks.txt - Cory Doctorow




Re: [GENERAL] returning row numbers in select

2004-03-10 Thread Chris

 Is there a way to return a sequential row number beginning at 1?
 
 i.e.
 
 row| timestamp
 ---+
 1   2004-02-01 23:15:00.824-05
 2   2004-02-01 23:15:01.824-05
 3   2004-02-01 23:15:02.824-05
 ...
 
 My reason for wanting row numbers is so I can use a 'MOD(row_number, 
 n)' to get the nth row from the table.


Doesn't LIMIT and OFFSET do the job?
http://www.postgresql.org/docs/7.3/interactive/queries-limit.html


Bye, Chris.





[GENERAL] symbolic links for 7.4.2

2004-03-10 Thread Bruno Wolff III
I suggested that one of my coworkers pick up a copy of 7.4.2 and he
had a problem locating it because the latest symbolic link still
points to 7.4.1 and there is also no v7.4.2 link as there is for 7.4.1.
I had him get a copy by going into the source directory, but I wasn't
sure if we were jumping the gun or if this was an oversight.



Re: [GENERAL] Question on Opteron performance

2004-03-10 Thread Christopher Petrilli
On Mar 10, 2004, at 3:14 PM, Steve Wolfe wrote:

   Before I shell out the $15k on the 4-way Opteron, I'm going to spend
some long, hard time looking for ways to make the system more 
efficient.
However, after all that's already been done, I'm not optimistic that 
it's
going to preclude needing the new server.  I'm just surprised that 
nobody
seems to have used PostgreSQL on a quad-Opteron before!
Well, I haven't had a chance to run PostgreSQL on a quad-Opteron box, 
but in discussing this with someone building a cluster out of them, 
their experience is that they are seeing better performance out of a 
quad-Opteron than a 3Ghz Xeon box (quad as well), which they believe 
reflects superior memory architecture.  So, if someone has run on a 
quad-Xeon of similar specs, then I would imagine you should see 
similar, if not better, numbers.

Chris
--
| Christopher Petrilli
| petrilli (at) amber.org


Re: [GENERAL] returning row numbers in select

2004-03-10 Thread Tom Lane
Randall Skelton [EMAIL PROTECTED] writes:
 Correction, I don't want to simply get the nth row, I want all rows 
 that are divisible by n.  Essentially, the timestamp is at a regular 
 interval and I want a way of selecting rows at different sampling 
 intervals.

Couldn't you code this as a WHERE test on the timestamp?

regards, tom lane



Re: [GENERAL] symbolic links for 7.4.2

2004-03-10 Thread Bruce Momjian
Bruno Wolff III wrote:
 I suggested that one of my coworkers pick up a copy of 7.4.2 and he
 had a problem locating it because the latest symbolic link still
 points to 7.4.1 and there is also no v7.4.2 link as there is for 7.4.1.
 I had him get a copy by going into the source directory, but I wasn't
 sure if we were jumping the gun or if this was an oversight.

I am not sure we officially released 7.4.2 yet.  Have we?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] returning row numbers in select

2004-03-10 Thread Bernard Clement

Maybe by using a sequence and a function.

The sequence to generate the row number.

Then a function:
1) to reset the sequence, and 
2) to perform a select with the first column nextval(seq) and the second 
column the timestamp.

However, I am a newbie with PostgreSQL and I am not sure if this will work 
correctly... you might have to play with it (or wait for somebody with more 
experience than me).

Bernard

On Wednesday 10 March 2004 16:23, Randall Skelton wrote:
  Is there a way to return a sequential row number beginning at 1?
 
  i.e.
 
  row| timestamp
  ---+
  1   2004-02-01 23:15:00.824-05
  2   2004-02-01 23:15:01.824-05
  3   2004-02-01 23:15:02.824-05
  ...
 
  My reason for wanting row numbers is so I can use a 'MOD(row_number,
  n)' to get the nth row from the table.

 Correction, I don't want to simply get the nth row, I want all rows
 that are divisible by n.  Essentially, the timestamp is at a regular
 interval and I want a way of selecting rows at different sampling
 intervals.

  Doesn't LIMIT and OFFSET do the job?
  http://www.postgresql.org/docs/7.3/interactive/queries-limit.html

 It would if I only wanted an offset, but I want a query to return the
 first, fifth, and tenth, and so on row.  This would be 'MOD(row_num,
 5)' but given that I don't know the number of rows a priori, it is
 difficult to write a LIMIT.  Moreover, the offset doesn't make it easy
 to get the first row.  Unless, of course, I am missing something
 obvious?

 Cheers,
 Randall






Re: [GENERAL] symbolic links for 7.4.2

2004-03-10 Thread Marc G. Fournier

this will all be changed over this evening

On Wed, 10 Mar 2004, Bruno Wolff III wrote:

 I suggested that one of my coworkers pick up a copy of 7.4.2 and he
 had a problem locating it because the latest symbolic link still
 points to 7.4.1 and there is also no v7.4.2 link as there is for 7.4.1.
 I had him get a copy by going into the source directory, but I wasn't
 sure if we were jumping the gun or if this was an oversight.




Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [GENERAL] returning row numbers in select

2004-03-10 Thread Tom Lane
Randall Skelton [EMAIL PROTECTED] writes:
 Couldn't you code this as a WHERE test on the timestamp?

 That would be ideal as it is theoretically possible for there to be 
 missing rows due to sampling errors; nevertheless, a WHERE test doesn't 
 seem obvious to me.  Can you please post an example?

Something like
WHERE (EXTRACT(EPOCH FROM timestamp)::numeric % 5) = 0;
The EXTRACT function returns double precision, but there's no double
modulo operator for some reason, hence the cast to numeric which does
have one.
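
Put together with the query from earlier in the thread, a sketch that keeps
every row whose timestamp falls on a 5-second epoch boundary (if the samples
are offset from whole seconds, compare against the appropriate remainder
instead of 0):

```sql
SELECT timestamp, value
FROM   test
WHERE  timestamp > '2004-02-02 04:15:00.00 +0'
AND    timestamp < '2004-02-02 04:15:10.00 +0'
AND    (EXTRACT(EPOCH FROM timestamp)::numeric % 5) = 0;
```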

regards, tom lane



Re: [GENERAL] returning row numbers in select

2004-03-10 Thread Bruno Wolff III
On Wed, Mar 10, 2004 at 17:25:49 -0500,
  Randall Skelton [EMAIL PROTECTED] wrote:
 
 That would be ideal as it is theoretically possible for there to be 
 missing rows due to sampling errors; nevertheless, a WHERE test doesn't 
 seem obvious to me.  Can you please post an example?  The time spacing 
 between rows is 1 second but I want my select statement to return rows 
 every 5 seconds (see marked lines below).  I've tried various interval 
 operations but I don't really understand how to relate the timestamp 
 and and the interval.

You could extract seconds from timestamp, cast to integer and apply
the mod function and test against whichever remainder you want.



[GENERAL] Mails not taken by List

2004-03-10 Thread Alexander Hachmann
Hello,
Sorry to send this off-topic mail.
I sent an on-topic mail three times over the whole day and none arrived.
Now I have changed my mail address.
How can that be?
Sorry, the next mail is on topic.
Alexander

