"Thalis A. Kalfigopoulos" wrote on Fri, 22 Jun 2001 16:34:56 EDT
>On Fri, 22 Jun 2001, Alex Pilosov wrote:
>
> [SNIP]
>
>Had the same problem with a disk and boosted its performance with hdparm
>(4 -> 35Mb/s). I don't know if this was the reason I started getting the
>message "-- MARK --" in my syslog.
I'm trying to load a 200GB table. WAL seems to be using many
GB of disk space and slowing down the load. I tried "-F" and
fsync=off in the postmaster.conf (these are supposed to be the
same, right?), but neither inhibits the WAL use.
What is the right approach here?
Thanks!
--Martin
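A hedged aside on the usual bulk-load approach: "-F" is indeed the
command-line equivalent of fsync=off, but both only suppress the
synchronous flushes; the WAL records are still written, so neither can
shrink WAL's disk footprint by itself. Loading through COPY and building
indexes only after the data is in is the customary way to cut the write
volume. A minimal sketch, with made-up names:

    copy bigtable from '/data/bigtable.dat';  -- server-side path, one bulk load
    create index bigtable_decl_idx on bigtable (decl);  -- index after, not before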
=======
Tom Lane wrote on Tue, 12 Jun 2001 19:28:14 EDT
>Martin Weinberg <[EMAIL PROTECTED]> writes:
>> As long as the input table (in this case, may14_goodsrc) is small
>> enough it works fine. For large input tables, postgres exhausts
>> all swap space and crashes.
>
We have a database with two-dimensional spatial data. The following
query makes a table of separations between pairs of points:
create table close_test as
select a.cntr as a_cntr, b.cntr as b_cntr
from may14_goodsrc as a, may14_goodsrc as b
where a.decl between b.decl - .1 and b.decl + .1;
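One way to keep a pair query like this tractable, sketched under the
assumption that the table also carries an ra coordinate (that column
name is my invention): constrain both coordinates and give the planner
an index, so it need not materialize the full cross join.

    create index goodsrc_decl_idx on may14_goodsrc (decl);
    vacuum analyze may14_goodsrc;
    select a.cntr as a_cntr, b.cntr as b_cntr
    from may14_goodsrc as a, may14_goodsrc as b
    where a.decl between b.decl - .1 and b.decl + .1
      and a.ra   between b.ra   - .1 and b.ra   + .1;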
Using the distributed examples as a guide, I wrote
a C++ program to execute a large suite of queries.
Each query opens and closes a backend. I notice
a *big leak* when large numbers of tuples are returned.
I get the same behavior using libpq (with PQclear and PQfinish
called explicitly) or using libpq++. The memory gets eaten on
my Linux box with a RAID array. The full database is >100GB.
--Martin
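A sketch of the standard remedy (not necessarily the fix adopted here):
PQexec buffers an entire result set in client memory, so very large
SELECTs are normally pulled through a cursor in batches; each statement
below would be sent with PQexec. Cursor and table names are made up.

    begin;
    declare big_cur cursor for
        select a_cntr, b_cntr from close_test;
    fetch 1000 from big_cur;  -- repeat until zero rows come back
    close big_cur;
    commit;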
=======
Is there any way to save this? I have nearly
100GB in this database . . . sigh.
--Martin
=======
Does anyone have any suggestions on the best way to
copy a database to tape in multiple volumes under Linux?
The database takes up more than half of my RAID filesystem
(100GB), so I can't make temporary files first.
Thanks,
--Martin
Can anyone point me to a document describing
pgsql's use of shared memory, so I can pick an
optimal value for SHMMAX?
How much shared memory can pgsql effectively
use?
Thanks!
--Martin
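As a rough guide (my summary; check the documentation for the version in
use): the buffer cache dominates PostgreSQL's shared-memory demand, so
SHMMAX mostly has to cover shared_buffers times the 8kB block size, plus
a modest fixed overhead. The current setting can be inspected with:

    show shared_buffers;  -- number of 8kB shared buffers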
===
I have four tables with identical fields and I would like
to automate the same query on each table and pool the
results.
Any suggestions on streamlining this?
Thanks!
--Martin
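The usual tool for pooling identically shaped tables is UNION ALL, which
concatenates the four result sets without duplicate elimination. A
minimal sketch, with made-up table, column, and predicate names:

    select cntr, decl from tbl1 where decl > 30.0
    union all
    select cntr, decl from tbl2 where decl > 30.0
    union all
    select cntr, decl from tbl3 where decl > 30.0
    union all
    select cntr, decl from tbl4 where decl > 30.0;

Wrapping the same UNION ALL in a view would let the query be written
once and reused.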
===
GB database in the next few weeks. I'll
try to post some scaling stats when it's up and running.
--Martin
===
>Please refer to the attached messages from [EMAIL PROTECTED]
>
>CN
>
> [SNIP . . .]
===
Martin Weinberg Phone: (413) 545-3821
Dept. of Physics and Astronomy FAX: (413) 545-2117/0648
530 Graduate Research Tower
University of Massachusetts
Amherst, MA 01003-4525
and k_m, vacuum analyzed, and tried
again, but got identical performance.
it is, so be it, but I have the feeling that something is
not working properly.
Any ideas?
Again, with _heaps_ of thanks,
--Martin
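A standard first diagnostic (the table, column, and predicate below are
my stand-ins, not the original query): refresh the statistics, then ask
the planner what it intends to do and whether the index gets used at all.

    vacuum analyze may14_goodsrc;
    explain
    select cntr from may14_goodsrc
    where k_m between 3.4 and 3.5;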
===
(and vacuum analyzed several times, as well as
dumped and reloaded, and reloaded from scratch). We have
a larger database with 20M rows that shows similar behavior.
There are 7092894 rows in database "lmc". So:
7092894 * (3.5 - 3.4)/(99.999 - 2.731) = 7292.1 != 788k
A clue?
Thanks again,
--Martin
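Spelling out the arithmetic (my reading: the column spans 2.731 to
99.999, the slice of interest is 3.4 to 3.5, and a uniform spread over
that range predicts):

    N_expected = 7092894 * (3.5 - 3.4) / (99.999 - 2.731)
               = 7092894 * 0.1 / 97.268
               ~ 7292 rows

which is nowhere near the 788k reported.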
===