Re: [GENERAL] pg_dump on 7.4

2006-05-28 Thread Jürgen Rose


We have PostgreSQL 7.4 running on our server (SuSE Linux 9), and at the moment there is no chance to upgrade. Because of an unusual combination of constraints (spanning schemas), the dump can no longer be imported without errors.



Try using an 8.0 or 8.1 pg_dump with the 7.4 server.  7.4's pg_dump is
not very bright about dumping things in the right order.  If it still
doesn't work with a modern pg_dump, please show details.

regards, tom lane
  


Right, that is exactly the problem: it works with the pg_dump from PostgreSQL 8.
But I need pg_dump on the server itself, so I need a statically compiled
version (because I can't upgrade to PostgreSQL 8 as a whole).


Does anyone know where to find statically compiled binaries?


regards,
Jürgen





Re: [GENERAL] SLOOOOOOOW

2005-09-07 Thread Jürgen Rose

Scott Marlowe wrote:

On Tue, 2005-09-06 at 16:09, Jürgen Rose wrote:

Sorry, but I'd better use this email address; I just hate to use Outlook
for this stuff.


To Peter Eisentraut

Yes, I've read the chapter in the manual.

To Michael Glaesemann

Locally I run the database on my laptop (Dell D800, 1 GB RAM), but inside
VMware with 512 MB of RAM assigned. The target platform, however, is a
dual-processor machine with 2 GB.


But, and that's the big but here, I don't care. For me a database has to
perform satisfactorily in the first place, without twiddling obscure knobs
or pushing levers just to get acceptable performance when I only have a small
set of data. Heck, I'm talking about maybe 45,000 records in total! I mean,
I used InterBase, MySQL, SQLite, and SQL Server before, and for this project
Postgres was a given, so I had to use it. Which is fine; I wouldn't mind if
I didn't have such troubles.



A couple of points:

1:  You wouldn't buy the QE II (a big luxury liner) and complain that it
doesn't work well for water skiing and is too complex.  It's the QE II.


I don't get that argument.


2:  You've given us absolutely nothing we can go on to help you make
postgresql work better for you.  Nothing.  Just one explain analyze
output.


It is a bit complex, and I have now added two explain outputs from the main
queries to this mail.



3:  If you're running one OS on top of another, and then a database on
top of that, and all you've got is 512 Meg of RAM, don't expect stellar
performance, especially from a database that uses shared memory like
postgresql does.


I don't expect mega performance; this is just a development system. I mean,
do you develop your database on the production system?



4:  Don't compare PostgreSQL to those other databases unless you're
going to give it a chance.  So far, you haven't done so, you've only
complained. 


I have used it for over half a year now, and as I said in one of my previous
mails, it is quite powerful and flexible, but I'm not impressed with the
performance, and it has its quirks.



5:  I have tested a properly tuned PostgreSQL server that ran on about
half the hardware (CPU speed, memory, RAID array) of an MSSQL server and
easily outran it.  But I took my time, read the docs, and tuned the
server OS and PostgreSQL.


As I said before, I don't want to tune my system (my development system!);
in my opinion it has to run with OK performance out of the box.



6:  Databases may appear simple, they are not, and the more complex they
are, the more you'll have to do to make full use of them.

So: have you been running VACUUM and ANALYZE? Do you have the right
indexes? Are you using queries that can use those indexes? Have you
turned up sort_mem and a few other easily tweakable settings?

PostgreSQL's use of shared memory, combined with the fact that many older
operating systems have VERY conservative default settings for it, combined
further with the need for PostgreSQL to run on dang near anything, means
that, out of the box, it's often not as fast as some other servers.
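
For illustration, a minimal sketch of that routine; the table, column, and
index names below are invented, and sort_mem (the pre-8.0 name of the sort
memory setting, in kilobytes) is used because that is the setting discussed
in this thread.

-- Keep the planner statistics fresh and reclaim dead rows.
VACUUM ANALYZE orders;

-- Make sure an index exists for the columns the slow queries filter on.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- Give sorts more memory for this session only (16 MB here).
SET sort_mem = 16384;

-- Check that the plan actually uses the index and where the time goes.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;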


I can set sort_mem to whatever I want, but Postgres doesn't care. It just
consumes my CPU time but no memory.



OTOH, it coexists well with other software. If you've ever tried to
build an MSSQL or Oracle box that did anything else, you know how those
two database engines just consume memory and CPU without really asking.
Here's an explain analyze on one of the production pgsql servers I work on:

explain analyze select count(*) from sometable

                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=6209.99..6209.99 rows=1 width=0) (actual time=339.200..339.201 rows=1 loops=1)
   ->  Seq Scan on sometable  (cost=0.00..5856.19 rows=141519 width=0) (actual time=0.025..202.636 rows=162427 loops=1)
 Total runtime: 339.262 ms
(3 rows)

Admittedly, the rows aren't that big, but that's a seq scan of 160,000
rows.  Not bad, really.  And it's quite fast at our more esoteric
report generation queries as well.

Now, we can sit here and argue about how nice it would be if PostgreSQL
just configured itself for maximum performance on installation, or you
can tell us what runs slow, and let us help you fix it.  The ball is in
your court.



I'm sorry but it is a bit complicated to explain the whole structure.

And my main point is that with each row I insert, it becomes slower.
I will try to reproduce that behaviour, but I don't have much time
for it right now.


The following output is for the two main queries. There are some other
views depending on them; tuning those was left as an exercise for later,
but it seems it is needed now. The basic idea is just to handle
the stored persons and organisations in the same way, therefore using
the same tables and having

[GENERAL] SLOOOOOOOW

2005-09-06 Thread Jürgen Rose
Sorry, but I'd better use this email address; I just hate to use Outlook
for this stuff.


To Peter Eisentraut

Yes, I've read the chapter in the manual.

To Michael Glaesemann

Locally I run the database on my laptop (Dell D800, 1 GB RAM), but inside
VMware with 512 MB of RAM assigned. The target platform, however, is a
dual-processor machine with 2 GB.


But, and that's the big but here, I don't care. For me a database has to
perform satisfactorily in the first place, without twiddling obscure knobs
or pushing levers just to get acceptable performance when I only have a small
set of data. Heck, I'm talking about maybe 45,000 records in total! I mean,
I used InterBase, MySQL, SQLite, and SQL Server before, and for this project
Postgres was a given, so I had to use it. Which is fine; I wouldn't mind if
I didn't have such troubles.


What I'm working on is just a redesign of a database with some
historically grown tables. There is not much in it, but some tables
should be merged together and some others have to be split up. No big
deal. So my basic idea was to use the flexibility of rules to provide a
transparent interface to the frontend, which has the big advantage of
not having to change the frontend in most places at all. We have a bit
of a homegrown framework (PHP) to show and manipulate the data on the
frontend side. Unfortunately it is only easy if you access one table and
don't have to update several tables. So my idea was to use the rule
system as well to put the data into the database and distribute it onto
the underlying tables, i.e. to have views which separate the physical
model from the logical model. This is best practice, isn't it?

Unfortunately there seems to be no way to create triggers on views, which
is what I need. Plain insert rules are not enough, because I'm working
with data that has just been created, so this is not an option. OK, as a
workaround I create a table which is just there to have an insert trigger
on it that distributes the data onto the underlying tables. For selecting,
updating, and deleting, the rules are sufficient.
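
To make that workaround concrete, here is a rough sketch under the assumption
of a simple person/organisation split; every table, column, and function name
below is invented for illustration, and plpgsql is assumed to be installed in
the database.

-- Physical tables after the redesign (names invented for this sketch).
CREATE TABLE person (
    id   integer PRIMARY KEY,
    name text
);

CREATE TABLE organisation (
    person_id integer REFERENCES person (id),
    org_name  text
);

-- The logical view the PHP frontend keeps talking to.
CREATE VIEW contact AS
    SELECT p.id, p.name, o.org_name
      FROM person p
      LEFT JOIN organisation o ON o.person_id = p.id;

-- Helper table whose only purpose is to carry an INSERT trigger,
-- since triggers cannot be attached to views in this release.
CREATE TABLE contact_insert_buf (
    id       integer,
    name     text,
    org_name text
);

-- An insert rule redirects inserts on the view into the helper table ...
CREATE RULE contact_ins AS ON INSERT TO contact
    DO INSTEAD
    INSERT INTO contact_insert_buf (id, name, org_name)
    VALUES (NEW.id, NEW.name, NEW.org_name);

-- ... where a trigger distributes each row onto the physical tables.
CREATE FUNCTION distribute_contact() RETURNS trigger AS '
BEGIN
    INSERT INTO person (id, name) VALUES (NEW.id, NEW.name);

    IF NEW.org_name IS NOT NULL THEN
        INSERT INTO organisation (person_id, org_name)
             VALUES (NEW.id, NEW.org_name);
    END IF;

    -- Returning NULL keeps the helper table itself empty.
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER contact_ins_trg
    BEFORE INSERT ON contact_insert_buf
    FOR EACH ROW EXECUTE PROCEDURE distribute_contact();

Selects, updates, and deletes can then go through plain rules on the view,
as described above.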


So I am actually merging some tables with approximately 8,000 + 14,000 +
30,000 records in them, so we are talking about a small database. The
performance of selecting data from the views is slow; I mean, there are
only around 50,000 records in there in total. It can take up to several
seconds to get the data from the views, which is just not fast enough.
The update is even slower; just updating one record takes ages.


The actual migration process of moving the old data to the new tables
is just agonizingly slow. Moving the data from the small table (8,000
entries) takes somewhere (not deterministically) between a few minutes
and 40 minutes. Essentially it is just a select from one table into the
compatibility view of the new table. It seems to me that each additional
row makes the database slower. I noticed that either table (8,000 or
14,000 entries) migrates faster if it happens to be the first of the two.
Then migrating the 30,000 entries (and it has to be the last one)
takes **hours**!!!


The migration of each table itself consists of two parts: first move the
data from the old table, then update all the linked tables (I had to remove
the joins; afterwards they have to point to the new tables). At this point
I'm using a lot of subselects (which are slow, but there is no other way).
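
As a hypothetical sketch of those two steps (reusing the invented names from
the sketch above, plus an old_person source table, an id_map mapping table,
and an address table that references it):

-- Step 1: push the legacy rows through the compatibility view, so the
-- insert rule and trigger distribute them onto the new tables.
INSERT INTO contact (id, name, org_name)
SELECT o.id, o.name, o.org_name
  FROM old_person o;

-- Step 2: re-point a referencing table at the new keys with a
-- correlated subselect (the kind of update reported as slow here).
UPDATE address
   SET contact_id = (SELECT m.new_id
                       FROM id_map m
                      WHERE m.old_id = address.old_person_id);

On releases of this era, rewriting step 2 as UPDATE address SET contact_id =
m.new_id FROM id_map m WHERE m.old_id = address.old_person_id is often
noticeably faster than the correlated subselect, since it lets the planner
use a join instead of one subquery per row.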


And I actually vacuum and analyze the database after each step; all
useful indexes are in place and are actually used.


I did some serious work with SQL Server and InterBase, and I **never**
had these performance problems.


Enough ranting, but I'm totally frustrated.
With best regards,
Jürgen
