Re: [GENERAL] Problems with memory

2008-05-08 Thread Shane Ambler

Richard Huxton wrote:

Pau Marc Munoz Torres wrote:

Hi


To your initial question: all configurable options can be found and 
adjusted in postgresql.conf, which lives at the top of your 
data dir.


You can find more detail on what the options are for at
http://www.postgresql.org/docs/8.3/interactive/runtime-config.html

Although I doubt it will fix your problem, you will most likely want to 
tune them to get the best performance from your db once it goes live.



Maybe I should give you some more explanation of my problem.

The reason I think PostgreSQL runs out of memory is this: I have a 
relation with 6 fields, 29 indexes and 32000 records; the records 
were generated with a PL/pgSQL function to save disk space.


So how much RAM does your machine have? Have you looked at how much 
Postgres is using while it is running the queries?


What CPUs and disks/RAID controller?


When I do a query such as:

select * from precalc where idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101')<-2;

it works and returns 128030 records.

If I do:

select * from precalc where idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101')>-2;

which should return 3071970 records, it doesn't work:

ERROR:  relation "pssms" does not exist
CONTEXT:  SQL statement "select score from PSSMS where AA= $1  and POS=1 and
MOLEC= $2 "
PL/pgSQL function "idr" line 11 at SQL statement


Start with the idr function - this is where the error comes from.
The error doesn't mention out of memory.




If I ask for an explanation of both queries, both work:


EXPLAIN will show what the planner expects to do. EXPLAIN ANALYSE will 
actually run the query, give exact row counts returned etc., and will 
hit the error that you get when running the query.




It is for that reason that I think my machine runs out of memory. By 
the way, this is not the biggest table that I have; others have more 
than 50300 records, so if I try to do a cross select between tables it 
could be worse.


You may hear from people on this list who have DBs storing many 
terabytes of data.




--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems with memory

2008-05-08 Thread Richard Huxton

Pau Marc Munoz Torres wrote:

Hi

Maybe I should give you some more explanation of my problem.

The reason I think PostgreSQL runs out of memory is this: I have a 
relation with 6 fields, 29 indexes and 32000 records; the records 
were generated with a PL/pgSQL function to save disk space, and they 
"work" (see the table schema below these lines).


You have 29 indexes on a table with 6 columns?
But only 32000 rows?


 Column | Type | Modifiers
+--+---
 id | integer  |
 p1 | character(1) |
 p4 | character(1) |
 p6 | character(1) |
 p7 | character(1) |
 p9 | character(1) |
Indexes:
"h2iab" btree (idr(p1, p4, p6, p7, p9, 'H-2*IAb'::character varying))
"h2iad" btree (idr(p1, p4, p6, p7, p9, 'H-2*IAd'::character varying))
"h2iak" btree (idr(p1, p4, p6, p7, p9, 'H-2*IAk'::character varying))
"h2ied" btree (idr(p1, p4, p6, p7, p9, 'H-2*IEd'::character varying))

etc.

OK, so you have 29 different functional indexes which use your columns 
and then a fixed parameter. Looks odd to me, but I suppose you might 
have good reason.


Oh - and it's not necessarily saving you any disk space - the index 
values need to be stored.



When I do a query such as:

select * from precalc where idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101')<-2;

it works and returns 128030 records.

If I do:

select * from precalc where idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101')>-2;

which should return 3071970 records, it doesn't work:

ERROR:  relation "pssms" does not exist
CONTEXT:  SQL statement "select score from PSSMS where AA= $1  and POS=1 and
MOLEC= $2 "
PL/pgSQL function "idr" line 11 at SQL statement


Do you have a table/view called pssms in your search_path? Because 
that's what the error is about. Might it be a case-sensitivity issue - 
do you have a table called "PSSMS" instead?
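
The folding rule is easy to demonstrate (hypothetical statements, for illustration only): unquoted identifiers are folded to lower case, while double-quoted ones keep their case exactly.

```sql
-- Unquoted: both of these look up the relation "pssms":
SELECT score FROM PSSMS;
SELECT score FROM pssms;

-- Quoted at creation time: the relation is stored as "PSSMS",
-- and an unquoted reference will no longer find it:
CREATE TABLE "PSSMS" (aa char(1), pos int, molec varchar, score float8);
SELECT score FROM "PSSMS";   -- works
SELECT score FROM PSSMS;     -- ERROR:  relation "pssms" does not exist

-- Also worth checking which schemas are searched:
SHOW search_path;
```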



If I ask for an explanation of both queries, both work:

mhc2db=> explain select count(*) from precalc where idr(p1, p4, p6, p7, p9,
'HLA-DRB1*0101')<-2;

[snip]


mhc2db=> explain select count(*) from precalc where idr(p1, p4, p6, p7, p9,
'HLA-DRB1*0101')>-2;

[snip]


and the indexes used are the correct ones

It is for that reason that I think my machine runs out of memory. By the
way, this is not the biggest table that I have; others have more than
50300 records, so if I try to do a cross select between tables it
could be worse.


For what reason? I still don't see any out-of-memory errors.

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Problems with memory

2008-05-08 Thread Pau Marc Munoz Torres
Hi

Maybe I should give you some more explanation of my problem.

The reason I think PostgreSQL runs out of memory is this: I have a
relation with 6 fields, 29 indexes and 32000 records; the records
were generated with a PL/pgSQL function to save disk space, and they
"work" (see the table schema below these lines)



 Column | Type | Modifiers
+--+---
 id | integer  |
 p1 | character(1) |
 p4 | character(1) |
 p6 | character(1) |
 p7 | character(1) |
 p9 | character(1) |
Indexes:
"h2iab" btree (idr(p1, p4, p6, p7, p9, 'H-2*IAb'::character varying))
"h2iad" btree (idr(p1, p4, p6, p7, p9, 'H-2*IAd'::character varying))
"h2iak" btree (idr(p1, p4, p6, p7, p9, 'H-2*IAk'::character varying))
"h2ied" btree (idr(p1, p4, p6, p7, p9, 'H-2*IEd'::character varying))
"hladqa10501" btree (idr(p1, p4, p6, p7, p9, 'HLA-DQA1*0501'::character
varying))
"hladqb10201" btree (idr(p1, p4, p6, p7, p9, 'HLA-DQB1*0201'::character
varying))
"hladr" btree (idr(p1, p4, p6, p7, p9, 'HLA-DR'::character varying))
"hladr1" btree (idr(p1, p4, p6, p7, p9, 'HLA-DR1'::character varying))
"hladr13" btree (idr(p1, p4, p6, p7, p9, 'HLA-DR13'::character varying))
"hladr3" btree (idr(p1, p4, p6, p7, p9, 'HLA-DR3'::character varying))
"hladr7" btree (idr(p1, p4, p6, p7, p9, 'HLA-DR7'::character varying))
"hladrb10101" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101'::character
varying))
"hladrb10102" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0102'::character
varying))
"hladrb10301" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0301'::character
varying))
"hladrb10302" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0302'::character
varying))
"hladrb10401" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0401'::character
varying))
"hladrb10402" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0402'::character
varying))
"hladrb10701" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0701'::character
varying))
"hladrb10802" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0802'::character
varying))
"hladrb10901" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0901'::character
varying))
"hladrb11101" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1101'::character
varying))
"hladrb11102" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1102'::character
varying))
"hladrb11103" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1103'::character
varying))
"hladrb11104" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1104'::character
varying))
"hladrb11301" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1301'::character
varying))
"hladrb11302" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1302'::character
varying))
"hladrb11501" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*1501'::character
varying))
"hladrb40101" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB4*0101'::character
varying))
"hladrb50101" btree (idr(p1, p4, p6, p7, p9, 'HLA-DRB5*0101'::character
varying))




When I do a query such as:

select * from precalc where idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101')<-2;

it works and returns 128030 records.

If I do:

select * from precalc where idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101')>-2;

which should return 3071970 records, it doesn't work:

ERROR:  relation "pssms" does not exist
CONTEXT:  SQL statement "select score from PSSMS where AA= $1  and POS=1 and
MOLEC= $2 "
PL/pgSQL function "idr" line 11 at SQL statement

If I ask for an explanation of both queries, both work:

mhc2db=> explain select count(*) from precalc where idr(p1, p4, p6, p7, p9,
'HLA-DRB1*0101')<-2;
                               QUERY PLAN
---------------------------------------------------------------------------
 Aggregate  (cost=66188.88..66188.89 rows=1 width=0)
   ->  Bitmap Heap Scan on precalc  (cost=17615.20..63522.21 rows=107
width=0)
 Recheck Cond: (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101'::character
varying) < -2::double precision)
 ->  Bitmap Index Scan on hladrb10101  (cost=0.00..17348.54
rows=107 width=0)
   Index Cond: (idr(p1, p4, p6, p7, p9,
'HLA-DRB1*0101'::character varying) < -2::double precision)
(5 rows)

mhc2db=> explain select count(*) from precalc where idr(p1, p4, p6, p7, p9,
'HLA-DRB1*0101')>-2;
                               QUERY PLAN
---------------------------------------------------------------------------
 Aggregate  (cost=66188.88..66188.89 rows=1 width=0)
   ->  Bitmap Heap Scan on precalc  (cost=17615.20..63522.21 rows=107
width=0)
 Recheck Cond: (idr(p1, p4, p6, p7, p9, 'HLA-DRB1*0101'::character
varying) > -2::double precision)
 ->  Bitmap Index Scan on hladrb10101  (cost=0.00..17348.54
rows=107 width=0)
   Index Cond: (idr(p1, p4, p6, p7, p9,
'HLA-DRB1*0101'::character varying) > -2::double precision)
(5 rows)

and the indexes used are the correct ones

It is for that reason that I think my machine runs out of memory.

[GENERAL] Problems with memory

2008-05-07 Thread Pau Marc Munoz Torres
Hi

 I'm setting up a big database, and when I say big, I mean BIG. The problem
with this is that sometimes, when I do a query, the database runs out of
memory, so I really need to increase the amount of memory reserved to
PostgreSQL almost 10x. Could anyone tell me how I can do that? I mean, what
configuration file and variables do I have to modify?

Thanks

pau

-- 
Pau Marc Muñoz Torres

Laboratori de Biologia Computacional
Institut de Biotecnologia i Biomedicina Vicent Villar
Universitat Autonoma de Barcelona
E-08193 Bellaterra (Barcelona)

telèfon: 93 5812807
Email : [EMAIL PROTECTED]


Re: [GENERAL] Problems with memory

2008-05-07 Thread Merlin Moncure
On Wed, May 7, 2008 at 11:55 AM, Pau Marc Munoz Torres
<[EMAIL PROTECTED]> wrote:
> Hi
>
>  I'm setting up a big database, and when I say big, I mean BIG. The problem
> with this is that sometimes, when I do a query, the database runs out of
> memory, so I really need to increase the amount of memory reserved to
> PostgreSQL almost 10x. Could anyone tell me how I can do that? I mean, what
> configuration file and variables do I have to modify?

All available memory of the machine is automatically reserved for the
database by the operating system, more or less.   At most you can
reserve memory for certain particular operations of the database at
the expense of others.

merlin



Re: [GENERAL] Problems with memory

2008-05-07 Thread Richard Huxton

Pau Marc Munoz Torres wrote:

Hi

 I'm setting up a big database, and when I say big, I mean BIG, 


I have to say, I've seen more accurate stats given. How many Terabytes 
are we talking about?


> the problem

with this is that sometimes, when I do a query, the database runs out of
memory,


What precisely do you mean by "run out of memory" - is this work-mem or 
shared-mem? How do you know this is what's happening?


> so I really need to increase the amount of memory reserved to

PostgreSQL almost 10x. Could anyone tell me how I can do that? I mean, what
configuration file and variables do I have to modify?


What memory are you trying to reserve for PostgreSQL? Is it shared-mem 
you're talking about? If you let us know where you found the manuals 
confusing, we can try to improve them.


--
  Richard Huxton
  Archonet Ltd





Re: [GENERAL] problems with memory

2000-02-18 Thread selkovjr

> * Dean Browett <[EMAIL PROTECTED]> [000218 15:55] wrote:
> > [snip]
> 
> You really haven't given very much information on the rules and constraints
> in your tables, one problem that I had was that a constraint on a table
> of mine caused extreme slowdown because each row inserted needed to be
> validated through a constraint, as the table grew the amount of data that
> needed to be scanned for each insert grew exponentially.
> 
> hope this helps,
> -Alfred

The same is true of any type of index. Make sure you don't define your
indices until after the insert. Also, do a COPY instead of
INSERT if possible.
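
The suggested load pattern, sketched with hypothetical table and index names (not the poster's actual schema):

```sql
-- Drop indexes so each inserted row doesn't pay index-maintenance cost:
DROP INDEX big_table_idx;

-- COPY loads rows in bulk, far faster than row-by-row INSERTs:
COPY big_table FROM '/path/to/datafile';

-- Rebuild the index once, after all the data is in:
CREATE INDEX big_table_idx ON big_table (col1);
```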

--Gene





Re: [GENERAL] problems with memory

2000-02-18 Thread Alfred Perlstein

* Dean Browett <[EMAIL PROTECTED]> [000218 15:55] wrote:
> Hi,
> 
> We are running postgres-6.5.3 on a dual pentium 300 machine, 0.5Gb RAM under
> Linux Redhat 6.0 (kernel 2.2.14). The machine we are using sits on a 100Mb
> network and the nics are 3com3c590's. We are also using a DPT Raid
> controller in a raid5 configuration set up as 1 logical drive.
> 
> We are trying to insert a large amount of data into the database. What happens
> is that when we first start loading data everything is fine. Over a period
> of time (1.5hrs) there is a marked decrease in performance in terms of both
> memory and cpu usage. At this time cpu usage has crept up to 45-50% and
> memory usage is 100Mb and rising slowly and there is only one connection to
> the database.
> 
> All the statistics are gleaned from using 'top'.
> 
> 
> Consequently, the database gets slower and slower until it loads at the rate
> of <3KBps at which point it becomes more exciting to watch paint dry 8-)).
> 
> Postgres is the only program running (except for normal system programs)
> that uses a significant amount of memory.
> 
> Is Postgres known to leak memory? What causes the high cpu usage?
> 
> Any ideas would be greatly appreciated.

You really haven't given very much information on the rules and constraints
in your tables. One problem that I had was that a constraint on a table
of mine caused extreme slowdown, because each row inserted needed to be
validated through the constraint; as the table grew, the amount of data that
needed to be scanned for each insert grew exponentially.

hope this helps,
-Alfred





[GENERAL] problems with memory

2000-02-18 Thread Dean Browett

Hi,

We are running postgres-6.5.3 on a dual Pentium 300 machine with 0.5GB RAM under
Linux Red Hat 6.0 (kernel 2.2.14). The machine we are using sits on a 100Mb
network and the NICs are 3Com 3C590s. We are also using a DPT RAID
controller in a RAID 5 configuration set up as 1 logical drive.

We are trying to insert a large amount of data into the database. What happens
is that when we first start loading data everything is fine. Over a period
of time (1.5hrs) there is a marked decrease in performance in terms of both
memory and CPU usage. At this point CPU usage has crept up to 45-50% and
memory usage is 100MB and rising slowly, and there is only one connection to
the database.

All the statistics are gleaned from using 'top'.


Consequently, the database gets slower and slower until it loads at the rate
of <3KBps at which point it becomes more exciting to watch paint dry 8-)).

Postgres is the only program running (except for normal system programs)
that uses a significant amount of memory.

Is Postgres known to leak memory? What causes the high cpu usage?

Any ideas would be greatly appreciated.

Dean