Hi,
I'm using Postgres 7.4 with a web application built in PHP4 on top of it.
I was going through /var/log/messages on my Linux box (SLES 9) and encountered the following message quite a few times.
>postgres[20199]: [4-1] ERROR: could not send data to client: Broken pipe
>postgres[303
"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
> mtrac=# show maintenance_work_mem ;
> maintenance_work_mem
> ----------------------
> 1048576<==
> (1 row)
> mtrac=#
> mtrac=#
> mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
> ERROR: out of memory
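One workaround reported later in this digest is to lower maintenance_work_mem for just that session before building the index. A minimal sketch (the 512MB value is illustrative, not taken from the thread):

  SET maintenance_work_mem = 524288;   -- 512MB expressed in kB, this session only
  CREATE INDEX mort_ht ON mortgage (county_id, mtg_rec_dt);
  RESET maintenance_work_mem;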
No special data types. The table is a pretty large one, over 15 GB. The
index is about 1.5 GB. Here's the table structure:
   Column   |     Type      | Modifiers
------------+---------------+-----------
 county_id  | numeric(5,0)  | not null
 batch_dt
Well just a little update:
1) Looks like I'm definitely RAM-constrained. Just placed an order
for another 4GB.
2) I ended up dropping the primary key too, which helped a lot with disk
thrashing (average disk queue wait was between 500ms and 8500ms
before, and 250-500ms after).
3) Playing with most
---Original Message---
From: Michael Fuhr
Date: 02/14/06 23:05:55
To: Adnan DURSUN
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] SQL
Function Performance
>On Tue, Feb 14, 2006 at 11:33:57AM +0200, Adnan DURSUN wrote:
>>
-> Nested Loop (cost=5.90..267.19 rows=3 width=101) (actual time=76.240..30974.777 rows=63193 loops=1)
"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
> I have run pg_dump and had no errors. I also got this error when
> creating one index but not another. When I lowered my
> maintenance_work_mem, the create index succeeded.
Create index too? Hm. That begins to sound more like a memory leak.
Do you ha
I have run pg_dump and had no errors. I also got this error when
creating one index but not another. When I lowered my
maintenance_work_mem, the create index succeeded.
Regards,
Husam
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 14,
What version of Solaris are you using?
Have you seen the recommendations for using COPY on Solaris?
http://blogs.sun.com/roller/page/jkshah?entry=postgresql_on_solaris_better_use
wal_sync_method = fsync
wal_buffers = 128
checkpoint_segments = 128
bgwriter_percent = 0
bgwriter_maxpages = 0
And
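For reference, a quick way to confirm what the server is actually running with after applying those settings (some of them, e.g. wal_buffers, need a full restart rather than a reload):

  SHOW wal_sync_method;
  SHOW wal_buffers;
  SHOW checkpoint_segments;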
Hi Guys.
We are running v8.1.2 on FreeBSD 5.4-RELEASE and the server is running
with above-average load.
When I run top I see a lot of postmaster processes in the "sbwait" state:
# uptime
4:29PM up 23 days, 20:01, 3 users, load averages: 3.73, 1.97, 1.71
# top
82808 pgsql 1 40 15580K
> Hmm, the large number of columns might have something to do with it ...
> what datatypes are the columns?
All sorts, but mostly float4 and varchar(2 to 10)
Jay
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 14, 2006 1:03 PM
To: Jay Greenfield
Cc: '
On Tue, Feb 14, 2006 at 11:33:57AM +0200, Adnan DURSUN wrote:
> -> Nested Loop (cost=5.90..267.19 rows=3 width=101) (actual
> time=76.240..30974.777 rows=63193 loops=1)
> -> Nested Loop (cost=5.90..123.48 rows=26 width=73) (actual
> time=32.082..4357.786 rows=14296 loops=1)
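The gap between the estimate (3 rows) and reality (63193 rows) is the thing to chase here. To see the generic, parameterized plan a SQL function body would get, the usual approach, referenced later in this digest, is PREPARE plus EXPLAIN ANALYZE EXECUTE. A sketch with a hypothetical stand-in for the function's query:

  PREPARE func_body(int) AS
    SELECT * FROM some_table WHERE some_col = $1;   -- hypothetical table and column
  EXPLAIN ANALYZE EXECUTE func_body(42);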
"Jay Greenfield" <[EMAIL PROTECTED]> writes:
> The table is 1.2 million rows X 246 columns. The only index is the primary
> key. I will try to remove that index to see if that improves performance at
> all.
Hmm, the large number of columns might have something to do with it ...
what datatypes are the columns?
> How do you get 4,000+ lines of explain analyze for one update query in a
> database with only one table? Something a bit fishy there. Perhaps you
> mean explain verbose, though I don't really see how that'd be so long
> either, but it'd be closer. Could you provide some more sane
> informati
* Tom Lane ([EMAIL PROTECTED]) wrote:
> Stephen Frost <[EMAIL PROTECTED]> writes:
> > While it's true that Access almost certainly takes some shortcuts, 24
> > minutes for an update across 1.2 million rows seems an awfully long time
> > for Postgres.
>
> I did some experiments along this line with
Stephen Frost <[EMAIL PROTECTED]> writes:
> While it's true that Access almost certainly takes some shortcuts, 24
> minutes for an update across 1.2 million rows seems an awfully long time
> for Postgres.
I did some experiments along this line with a trivial table (2 integer
columns) of 1.28M rows
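A minimal way to reproduce that kind of measurement from psql (a sketch with a throwaway table, not the one from the thread):

  CREATE TABLE upd_test AS
    SELECT g AS a, g AS b FROM generate_series(1, 1280000) AS g;
  \timing
  UPDATE upd_test SET b = a + 1;
  VACUUM upd_test;   -- reclaim the 1.28M dead row versions before timing another run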
"Tomeh, Husam" <[EMAIL PROTECTED]> writes:
> =# vacuum analyze code;
> ERROR: out of memory
> DETAIL: Failed on request of size 1073741820.
That looks a whole lot like a corrupt-data issue. The apparent
dependency on maintenance_work_mem is probably illusory --- I suspect
some of your trials ar
On Tue, 2006-02-14 at 11:36, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > Yes, I've run the ANALYZE command before sending the request.
> > I should point out that the Postgres version is 7.3.4.
>
> Can't possibly be 7.3.4, that version didn't have HashAggregate.
>
> How many distinct values of "query" actually exist in the table?
[EMAIL PROTECTED] writes:
> Yes, I've run the ANALYZE command before sending the request.
> I should point out that the Postgres version is 7.3.4.
Can't possibly be 7.3.4, that version didn't have HashAggregate.
How many distinct values of "query" actually exist in the table?
regards, tom lane
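A quick way to answer that, using the table name from the query quoted elsewhere in this digest:

  SELECT count(DISTINCT query) FROM daily.queries_detail_statistics;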
* Jay Greenfield ([EMAIL PROTECTED]) wrote:
> Database has one table with 1.2 million rows
> Query:
>
> UPDATE ntdn SET gha=area/1000
>
> I could post the EXPLAIN ANALYZE results but it's 4,000+ lines long
How do you get 4,000+ lines of explain analyze for one update query in a
database with only one table? Something a bit fishy there.
30% faster!!! I will test this new version...
Thanks a lot.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Albert
Cervera Areny
Sent: Tuesday, February 14, 2006 17:07
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] copy and postgresql.conf
So
On Tue, 2006-02-14 at 10:32, [EMAIL PROTECTED] wrote:
> the explain analyze command crashes with the "out of memory" error
>
> I should point out that I've tried a lot of values for the shared_buffers and
> sort_mem parameters
>
> now, in the config file, the values are:
> sort_mem=32768
> and shared_buffer=3
OK, on
This is the second time I'm getting an out of memory error when I start a
database vacuum or try to vacuum any table. Note this machine has been
used for batch data loads.
=# vacuum analyze code;
ERROR: out of memory
DETAIL: Failed on request of size 1073741820.
I'm running Postgres 8.1
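Tom's reply elsewhere in this digest suspects corrupt data rather than a genuine memory-settings problem. The cheap first check, which the poster reports having run, is whether a full sequential read of the table succeeds; a sketch using the database and table names from this thread:

  pg_dump -t code mtrac > /dev/null && echo "full scan of table code completed"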
The explain analyze command crashes with the "out of memory" error.
I should point out that I've tried a lot of values for the shared_buffers and
sort_mem parameters.
Now, in the config file, the values are:
sort_mem=32768
and shared_buffer=3
The server has 4 GB of RAM.
and kernel.shmmax=30720
> On Tue, 2006-02-14 at 10
On Tue, 2006-02-14 at 10:17, Jay Greenfield wrote:
> Is it possible to configure Postgres to behave like Access - a single user
> and use as much of the resources as required?
No. If you want something akin to that, try SQLite. It's not as
featureful as PostgreSQL, but it's closer to it than
On Tue, 2006-02-14 at 10:15, [EMAIL PROTECTED] wrote:
> Yes, I've run the ANALYZE command before sending the request.
> I should point out that the Postgres version is 7.3.4.
So what does explain analyze show for this query, if anything? Can you
increase your sort_mem or shared_buffers (I forget which hash_agg
Is it possible to configure Postgres to behave like Access - a single user
and use as much of the resources as required?
Thanks,
Jay.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Scott Marlowe
Sent: Tuesday, February 14, 2006 8:05 AM
To: Jay Greenfi
Yes, I've run the ANALYZE command before sending the request.
I should point out that the Postgres version is 7.3.4.
> On Tue, 2006-02-14 at 10:03, [EMAIL PROTECTED] wrote:
> > Thanks for your response,
>
> SNIP
>
> > if the HashAgg operation ran out of memory, what can I do?
>
> 1: Don't top post.
>
> 2: Have you
Sorry, COPY improvements came with 8.1
(http://www.postgresql.org/docs/whatsnew)
On Tuesday, 14 February 2006 14:26, FERREIRA, William (VALTECH) wrote:
> thanks,
>
> I'm using PostgreSQL 8.0.3
> there is no primary key and no index on my tables
>
> regards
>
> -----Original Message-----
> From:
On Tue, 2006-02-14 at 10:03, [EMAIL PROTECTED] wrote:
> Thanks for your response,
SNIP
> if the HashAgg operation ran out of memory, what can I do?
1: Don't top post.
2: Have you run analyze? Normally when hash agg runs out of memory, the
planner THOUGHT the hash agg would fit in memory, but it wa
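If the hash aggregate itself is what exhausts memory because of a bad row estimate, one per-session workaround (a suggestion, not something proposed in the thread) is to disable it so the planner falls back to a sort-based GroupAggregate:

  SET enable_hashagg = off;   -- this session only
  SET sort_mem = 65536;       -- 64MB for the fallback sort; this setting is called work_mem in 8.0+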
On Tue, 2006-02-14 at 09:51, Jay Greenfield wrote:
> I am running some simple queries to benchmark Postgres 8.1 against MS
> Access and Postgres is 2 to 3 times slower than Access.
A BUNCH OF STUFF SNIPPED
> Why does Access run so much faster? How can I get Postgres to run as
> fast as Access?
Thanks for your response,
I've made this request:
SELECT query_string, DAY.ocu from search_data.query_string,
(SELECT SUM(occurence) as ocu, query
FROM daily.queries_detail_statistics
WHERE date >= '2006-01-01' AND date <= '2006-01-30'
AND portal IN (1,2)
GROUP BY query
ORDER BY ocu DESC
L
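To see what the planner currently believes about the "query" column, and hence why it expected the hash aggregate to fit, the gathered statistics can be inspected directly (sketch):

  SELECT n_distinct FROM pg_stats
  WHERE schemaname = 'daily'
    AND tablename = 'queries_detail_statistics'
    AND attname = 'query';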
I am running some simple queries to benchmark
Postgres 8.1 against MS Access and Postgres is 2 to 3 times slower than
Access.
Hardware:
Dell Optiplex GX280
P4 3.20 GHz
3GB RAM
Windows XP SP1
Database has one table with 1.2 million rows
Query:
UPDATE ntdn SET gha=area/1000
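One thing that commonly skews this particular comparison (an observation, not a point made in the thread): every full-table UPDATE in Postgres leaves 1.2 million dead row versions behind, so repeated benchmark runs slow down unless the table is vacuumed between them. A sketch:

  VACUUM ANALYZE ntdn;               -- reclaim dead rows left by the previous run
  UPDATE ntdn SET gha = area/1000;   -- the benchmark statement from above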
[EMAIL PROTECTED] writes:
> I've got an "out of memory" error with these traces:
Doing what?
> AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks);
> -1976598976 used
> DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks);
> 496634624 used
I'd guess that a HashAgg op
Hello,
I've got an "out of memory" error with these traces:
TopMemoryContext: 32768 total in 3 blocks; 5152 free (1 chunks); 27616 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 24576
thanks,
I'm using PostgreSQL 8.0.3
there is no primary key and no index on my tables
regards
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Albert
Cervera Areny
Sent: Tuesday, February 14, 2006 12:38
To: pgsql-performance@postgresql.org
Subject: Re: [PERFOR
Hi William,
Which PostgreSQL version are you using? Newer (8.0+) versions have some
important performance improvements for the COPY command.
Also, you'll notice significant improvements by creating primary &
foreign
keys after the copy command. I think config tweaking can improv
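A minimal sketch of that load order (the table, file path, and column names here are hypothetical):

  COPY items FROM '/tmp/items.dat';                    -- bulk load into a bare table first
  ALTER TABLE items ADD PRIMARY KEY (item_id);         -- then build constraints
  CREATE INDEX items_loaded_idx ON items (loaded_at);  -- and secondary indexes last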
Hi,
I load data from files using the COPY method.
The files contain between 2 and 7 million rows, spread over 5 tables.
Loading all the data takes 40 minutes, while the same processing takes 17
minutes with Oracle.
I think this time can be improved by changing the postgresql configuration
file.
But which
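For reference, the configuration knobs most often raised in these COPY-load threads look like the following (a hedged sketch; the values are illustrative rather than recommendations from this thread, and fsync = off is only safe for data that can be reloaded after a crash):

  # postgresql.conf, 8.0.x-era parameter names
  checkpoint_segments = 64        # fewer checkpoint pauses during the load
  maintenance_work_mem = 262144   # 256MB (kB units) for index builds after the load
  #fsync = off                    # risky: only while loading reloadable data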
From: Michael Fuhr
Date: 02/14/06 03:32:28
To: Tom Lane
Cc: Adnan DURSUN; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] SQL
Function Performance
On Mon, Feb 13, 2006 at 07:57:07PM -0500, Tom Lane wrote:
>> "Adnan DURSUN" <[EMAIL PROTECTED]> writes:
>> EXPLAIN ANALYZE EXECUT