I have done some beta testing with PostgreSQL 7.4beta2.
I have run a simple set of SQL statements 1 million times:

-- START TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO t_data (data) VALUES ('2500');
UPDATE t_data SET data = '2500' WHERE data = '2500';
DELETE FROM t_data WHERE data = '2500';
-- COMMIT;

The interesting thing was that my postmaster needed around 4 MB of RAM when I started running my test script using ...

psql test < script.sql
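
The script is nothing more than the block above repeated one million times; something like the following would generate it (just a sketch, not the script I actually used):

# generate script.sql: the three statements repeated 1 million times
for i in `seq 1 1000000`; do
    echo "INSERT INTO t_data (data) VALUES ('2500');"
    echo "UPDATE t_data SET data = '2500' WHERE data = '2500';"
    echo "DELETE FROM t_data WHERE data = '2500';"
done > script.sql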

After about 2 1/2 hours the backend process already needed 11 MB of RAM. Looking at the output of top, you can see that most of it seems to be in the shared memory area:

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
28899 hs        39  19 11456  11M 10620 R N  89.8  2.9 150:23 postmaster
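
For anyone who wants to reproduce this, a loop like the one below is enough to watch the backend grow over time (a sketch; the PID is the one from the top output above, and ps column output may look different on other platforms):

# sample the backend's memory usage once a minute
while true; do
    ps -o pid,rss,vsz,comm -p 28899   # RSS/VSZ are in kilobytes on Linux
    sleep 60
done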


This is very surprising to me because I have no explanation for why PostgreSQL should consume so much more memory than at the beginning of the test.
There are no triggers or anything like that involved.


The table I am working on consists of two columns (one timestamp, one int4).
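
A minimal sketch of the definition (the name of the timestamp column is just my shorthand here; only the data column appears in the statements above):

CREATE TABLE t_data (
    t    timestamp,   -- assumed column name
    data int4
);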


In addition to that, I have made a test with a different set of SQL statements: 1500 concurrent transactions on my good old AMD Athlon 500 box running RedHat 9. It worked out fine, but memory consumption rose during the first two hours of the test, from about 1.5 GB to 1.7 GB of RAM. Pretty surprising as well.
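
The sessions were simply started in the background; a sketch of how that can be done (not my actual test script):

# launch 1500 concurrent psql sessions, each with its own backend
for i in `seq 1 1500`; do
    psql test < script.sql > /dev/null 2>&1 &
done
wait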


Does anybody have an explanation for this behaviour?

Regards,

Hans







--
Cybertec Geschwinde u Schoenig
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/2952/30706; +43/664/233 90 75
www.cybertec.at, www.postgresql.at, kernel.cybertec.at


