Re: [GENERAL] Funky template1 problem?

2005-12-21 Thread Richard Huxton

Jerry LeVan wrote:

Hi,

I have a bunch of small dbs (levan, pictures, and a couple of others).

I have had these db's over many upgrades and I have just noticed that
template1 seems to be a bit munged...

[snip]

I found this out when I tried to create a new user,
testuser, for testing a totally non-privileged user.
When I recreated the testuser db using template1
I found the tables from the pictures db were in
the newly created db.


Yep - that's a feature. It applies to functions etc. too.


Is it safe to simply drop the tables from the template1
db?  ( I have no idea how the tables became part
of template1.)


Deleting the tables is fine. You should be able to drop the template1 
database altogether and recreate it using template0 as its template. 
That's why there are two of them, for exactly this circumstance. Have a 
look in the mailing-list archives for details.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PostgreSQL crashing

2005-12-21 Thread Richard Huxton

[EMAIL PROTECTED] wrote:
PostgreSQL runs for a while but always eventually (30min - 2hrs) crashes. 
I thought it was memory initially but Memtest had no errors after running

for a day.  I don't think it is a hardware issue because I can run other
applications without error.  I have run both versions 8.0.3 and 8.1.1 with
the same error.  I have posted a clipping of the crash from the system
log.  Any help would be greatly appreciated.

Dec 20 17:14:57 server4 kernel: postmaster: page allocation failure.
order:0, mode:0xd0
Dec 20 17:14:57 server4 kernel:  [] __alloc_pages+0x2e1/0x2f7
Dec 20 17:14:57 server4 kernel:  [] __get_free_pages+0x18/0x24
Dec 20 17:14:57 server4 kernel:  [] kmem_getpages+0x1c/0xbb

...

Dec 20 17:15:01 server4 kernel:  [] sync_sbs+0x22/0x2f
Dec 20 17:15:01 server4 kernel:  [] md_update_sb+0x84/0xc6
Dec 20 17:15:01 server4 kernel:  [] md_write_start+0x5e/0x8c


Hmm - not sure what this is showing, but since the log is all kernel 
activity and it goes on for four seconds after the PostgreSQL error, I'm 
not sure it's your crash in progress.


You probably need to check the mailing-list archives for details on how 
to make sure you get a core dump. Also people will want to know what 
version of PG we are looking at, what O.S. (Linux by the look of it, but 
what distro/version?) and how you installed PG (from source/rpm/etc).


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] contrib extensions

2005-12-21 Thread Richard Huxton

S McLurkin wrote:

Is there some place where I can find information on all the contrib
extensions?


The source distribution has a series of README files in the contrib/ 
directory.


If you have a packaged version, documentation will depend on the 
packaging. Is there a specific package you are interested in?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] view or index to optimize performance

2005-12-21 Thread Richard Huxton

Klein Balázs wrote:

Hello everybody,

I have a table that stores responses to questions in different
questionnaires. This table will grow to millions of rows.

[snip]

I can store the fact that it is biodata in the bio field - it is biodata if
the value of that field is not 0 and I can index that field and simply use
that as one of the conditions in queries.

Or should I instead create a view that contains only the biodata and select
from that? But will postgres use the indexes then? Would that be a better
approach?


Create the index on the table and a view should use it.

You might like to read up on partial indexes where you can do something 
like:


CREATE INDEX my_index ON itemresponse (testoccasionid,itemorder)
WHERE bio > 0;

So long as your query/view definition has WHERE bio > 0 in it then this 
index can be used.
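
As a sketch of the suggestion above (table and column names are taken from the schema quoted earlier in the thread; the literal 42 is an arbitrary example value), a query whose WHERE clause repeats the partial index's predicate can be checked with EXPLAIN:

```sql
-- Partial index covering only the biodata rows
CREATE INDEX my_index ON itemresponse (testoccasionid, itemorder)
WHERE bio > 0;

-- A query repeating the predicate, so the planner may use the index;
-- run EXPLAIN to confirm (the plan will vary with data and version)
EXPLAIN
SELECT testoccasionid, itemorder, response
FROM itemresponse
WHERE bio > 0
  AND testoccasionid = 42;
```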


HTH
--
  Richard Huxton
  Archonet Ltd


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Running with fsync=off

2005-12-21 Thread Benjamin Arai
I want to be able to do large updates on an existing backed-up database 
with fsync=off, but at the end of the updates how do I ensure that the 
data gets synced?


Somebody said running "sync; sync; sync" from the console.  This seems 
reasonable, but why not just "sync"?  Or is there another command I should 
run after the update, either in postgres or from the console?
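
One commonly suggested sequence, sketched below (not from this thread; the data directory path is illustrative, and this assumes a Linux-like system). Note that with fsync=off even a CHECKPOINT skips the actual fsync() calls, so a checkpoint alone does not guarantee the data is on disk:

```shell
# Sketch: after the fsync=off bulk load, shut down cleanly, then flush
# the OS buffer cache.  A single "sync" is enough on Linux; the triple
# "sync; sync; sync" incantation is folklore.
pg_ctl stop -D /path/to/data     # clean shutdown writes a final checkpoint
sync                             # flush dirty OS buffers to disk
# then set fsync=on in postgresql.conf and restart:
pg_ctl start -D /path/to/data
```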


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] two shared memory segments?

2005-12-21 Thread Tom Lane
Ed Loehr <[EMAIL PROTECTED]> writes:
> On Wednesday December 21 2005 8:24 pm, Tom Lane wrote:
>> I'd say that you had a backend crash, causing the postmaster
>> to abandon the original shared memory segment and make a new
>> one, but the old segment is still attached to by a couple of
>> processes.

> Does that make sense even if the creating pid is the same for 
> both?

Sure.  The postmaster survives backend crashes --- that's the point
of having a separate postmaster process at all.

>> There was a bug awhile back whereby the stats support
>> processes failed to detach from shared memory and thus would
>> cause a dead shmem segment to hang around like this.  What PG
>> version are you running?

> This is an old 7.3.7 cluster.

[ digs in CVS logs... ]  Hmm.  AFAICT that bug was fixed in 7.3.5:

2003-11-30 16:56  tgl

* src/: backend/port/sysv_shmem.c, backend/postmaster/pgstat.c,
include/storage/pg_shmem.h (REL7_3_STABLE): Back-patch fix to cause
stats processes to detach from shared memory, so that they do not
prevent the postmaster from deleting the shmem segment during crash
recovery.

You sure it's a 7.3.7 postmaster?  Can you dig down to determine exactly
which processes are attached to the older shmem segment?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] SAVEPOINT performance

2005-12-21 Thread Bruce Momjian
Joseph Shraibman wrote:
> Is there any performance impact of releasing savepoints?

Releasing savepoints does release a little memory associated with each
subtransaction, but I doubt you would be able to measure the difference
unless you had thousands of subtransactions.
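
A minimal illustration of the pattern under discussion (the "accounts" table and its columns are invented for the example):

```sql
BEGIN;
INSERT INTO accounts VALUES (1, 100);

SAVEPOINT s1;                            -- starts a subtransaction
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
RELEASE SAVEPOINT s1;                    -- frees the subtransaction's
                                         -- bookkeeping memory early
COMMIT;
```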

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] SAVEPOINT performance

2005-12-21 Thread Joseph Shraibman

Is there any performance impact of releasing savepoints?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] two shared memory segments?

2005-12-21 Thread Tom Lane
"Ed L." <[EMAIL PROTECTED]> writes:
> I have a cluster configured for ~800MB of shared memory cache
> (shared_buffers = 100000), but ipcs shows TWO shared memory 
> segments of ~800MB belonging to that postmaster.  What kind of
> a problem do I have here?

I'd say that you had a backend crash, causing the postmaster to abandon
the original shared memory segment and make a new one, but the old
segment is still attached to by a couple of processes.

There was a bug awhile back whereby the stats support processes failed
to detach from shared memory and thus would cause a dead shmem segment
to hang around like this.  What PG version are you running?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] contrib extensions

2005-12-21 Thread Michael Fuhr
On Tue, Dec 20, 2005 at 10:31:06PM -0800, S McLurkin wrote:
> Is there some place where I can find information on all the contrib
> extensions?

In the contrib directory of the source code.  That directory has a
README with a short summary of each extension, and each extension's
subdirectory has one or more README files.

If you've installed the contrib extensions via a pre-built package,
then hopefully that package installed the README files somewhere.
Check the package's file list to see if it did and where.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] lo function changed in PostgreSQL 8.1.1 (ask again)

2005-12-21 Thread Premsun Choltanwanich


From contrib/lo I found that there are some differences between the old and new versions of PostgreSQL.  And I'm sure that I already ticked the Large Object (lo) option when I installed.
 
How can I manage the differing functions?
 
 
 
:::New Version:::
DOMAIN lo AS pg_catalog.oid;
 
FUNCTION lo_oid(lo) RETURNS pg_catalog.oid AS 'SELECT $1::pg_catalog.oid' LANGUAGE SQL STRICT IMMUTABLE;
 
FUNCTION lo_manage() RETURNS pg_catalog.trigger AS '$libdir/lo' LANGUAGE C;
 
 
 
 

:::Old Version:::
 
FUNCTION lo_in(cstring) RETURNS lo AS '$libdir/lo' LANGUAGE C IMMUTABLE STRICT;
 
FUNCTION lo_out(lo) RETURNS cstring AS '$libdir/lo' LANGUAGE C IMMUTABLE STRICT;
 
TYPE lo (  INTERNALLENGTH = 4, EXTERNALLENGTH = variable, INPUT = lo_in, OUTPUT = lo_out);
 
FUNCTION lo_oid(lo) RETURNS oid AS '$libdir/lo' LANGUAGE C IMMUTABLE STRICT;
 
FUNCTION oid(lo) RETURNS oid AS '$libdir/lo', 'lo_oid' LANGUAGE C IMMUTABLE STRICT;
 
CAST (lo as oid) WITH FUNCTION oid(lo) AS IMPLICIT;
 
FUNCTION lo(oid) RETURNS lo AS '$libdir/lo' LANGUAGE C IMMUTABLE STRICT;
 
CAST (oid as lo) WITH FUNCTION lo(oid) AS IMPLICIT;
 
FUNCTION lo_manage() RETURNS trigger AS '$libdir/lo' LANGUAGE C;


[GENERAL] two shared memory segments?

2005-12-21 Thread Ed L.
I have a cluster configured for ~800MB of shared memory cache
(shared_buffers = 100000), but ipcs shows TWO shared memory 
segments of ~800MB belonging to that postmaster.  What kind of
a problem do I have here?

T      ID      KEY         MODE      OWNER  GROUP  CREATOR  CGROUP  NATTCH  SEGSZ      CPID   LPID   ATIME     DTIME     CTIME
Shared Memory:
m  114695  0x          D-rw---   pg     pg     pg       pg        2      861011968  17065  17065   7:00:07  13:38:22  13:38:22
m   16396  0x0089d911  --rw---   pg     pg     pg       pg       47      861011968  17065  17065  13:38:22  no-entry  13:38:22

The "D" in the MODE for the first one means "the associated 
shared memory segment has been removed.  It will disappear 
when the last process attached to the segment detaches it."
(from 'man ipcs')

However, ipcs says pid 17065 (the live postmaster pid) 
created them both.  The postmaster has been running for
about 130 days, but the ATIME/DTIME/CTIME columns seem to
suggest both segments are still being accessed.

Ed

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL crashing

2005-12-21 Thread Neil Conway

[EMAIL PROTECTED] wrote:
PostgreSQL runs for a while but always eventually (30min - 2hrs) crashes. 



Dec 20 17:14:57 server4 kernel: postmaster: page allocation failure.
order:0, mode:0xd0
Dec 20 17:14:57 server4 kernel:  [] __alloc_pages+0x2e1/0x2f7


This looks like a kernel or hardware issue, not a problem with 
PostgreSQL itself.


-Neil


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Funky template1 problem?

2005-12-21 Thread Qingqing Zhou


On Wed, 21 Dec 2005, Tom Lane wrote:

> > "Jerry LeVan" <[EMAIL PROTECTED]> wrote in message
> >> I have had these db's over many upgrades and I have just noticed that
> >> template1 seems to be a bit munged...
>
> I'm sure they are.

Oh right -- since CREATE DATABASE just copies the whole directory.

Regards,
Qingqing

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Indices for select count(*)?

2005-12-21 Thread Chris Browne
[EMAIL PROTECTED] (Marcus Engene) writes:
> Greg Stark wrote:
>> Alexander Scholz <[EMAIL PROTECTED]> writes:
>>
>>>Hi, thank you for your answer.
>>>
>>>Regarding the performance flaw when trying to find out how many records are
>>>currently being stored in the table, I don't see how an index should help...
>>>Nevertheless we've created a unique index on "ID" but SELECT count("ID")
>>>from "XYZ" still takes 35 seconds*. (ID is the primary key, based on a
>>>sequence; select count(*) isn't faster.)
>>>
>>>So - what kind of indexing would speed this up then?
>> No form of indexing can speed this up. To answer the server has to
>> look at
>> every record and count up how many of them should be included in your result.
>
> Why couldn't it be possible to count # of items in an index?
> The density of the information (items/inode|block|whatever it's called
> in btrees) is likely to be much higher giving less disk i/o.
>
> I'm sorry if this has been discussed recently.

The index does not contain tuple visibility information, and so is
*useless* for the purpose.  It does not contain the useful information
you evidently imagine it does.

This question is asked frequently.
-- 
output = ("cbbrowne" "@" "ntlug.org")
http://www3.sympatico.ca/cbbrowne/
Rules of the Evil Overlord #32. "I will not fly into a rage and kill a
messenger who brings me bad news  just to illustrate how evil I really
am. Good messengers are hard to come by."


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Funky template1 problem?

2005-12-21 Thread Tom Lane
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
> "Jerry LeVan" <[EMAIL PROTECTED]> wrote in message 
>> I have had these db's over many upgrades and I have just noticed that
>> template1 seems to be a bit munged...

> ... And check if they are really in the template1?

I'm sure they are.  Probably Jerry just mistakenly created the tables
while connected to template1 at some point.  I see no evidence here
of anything but pilot error.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Funky template1 problem?

2005-12-21 Thread Qingqing Zhou

"Jerry LeVan" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> Hi,
>
> I have a bunch of small dbs (levan, pictures, and a couple of others).
>
> I have had these db's over many upgrades and I have just noticed that
> template1 seems to be a bit munged...
>
> testuser=> \c template1
> You are now connected to database "template1".
> template1=> \d
>         List of relations
>  Schema |      Name      |   Type   | Owner
> --------+----------------+----------+-------
>  public | lo_shadow      | table    | levan
>  public | my_description | table    | levan
>  public | person         | table    | levan
>  public | person_id_seq  | sequence | levan
>  public | pics           | table    | levan
>  public | pics_ident_seq | sequence | levan
> (6 rows)
>

Can you show the relfilenode (the physical file name of the relation) of 
these tables like this:

select relname, relfilenode from pg_class, pg_namespace where relnamespace = 
pg_namespace.oid and nspname = 'public';

And check if they are really in the template1?

Regards,
Qingqing 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Indices for select count(*)?

2005-12-21 Thread Jaime Casanova
On 12/21/05, Marcus Engene <[EMAIL PROTECTED]> wrote:
> Greg Stark wrote:
> > Alexander Scholz <[EMAIL PROTECTED]> writes:
> >
> >>Hi, thank you for your answer.
> >>
> >>Regarding the performance flaw when trying to find out how many records are
> >>currently being stored in the table, I don't see how an index should help...
> >>Nevertheless we've created a unique index on "ID" but SELECT count("ID")
> >>from "XYZ" still takes 35 seconds*. (ID is the primary key, based on a
> >>sequence; select count(*) isn't faster.)
> >>
> >>So - what kind of indexing would speed this up then?
> >
> >
> > No form of indexing can speed this up. To answer the server has to look at
> > every record and count up how many of them should be included in your 
> > result.
>
> Why couldn't it be possible to count # of items in an index?
> The density of the information (items/inode|block|whatever it's called
> in btrees) is likely to be much higher giving less disk i/o.
>

because in the MVCC model an index contains tuples (records) that are
dead to you (they don't exist, because they were deleted or updated) but
that are live to other transactions... so you still have to visit the table
to see whether each tuple is live to you and whether to count it or not...

> I'm sorry if this has been discussed recently.
>
> Best regards,
> Marcus
>



--
Sincerely,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Indices for select count(*)?

2005-12-21 Thread Marcus Engene

Greg Stark wrote:

Alexander Scholz <[EMAIL PROTECTED]> writes:


Hi, thank you for your answer.

Regarding the performance flaw when trying to find out how many records are
currently being stored in the table, I don't see how an index should help...
Nevertheless we've created a unique index on "ID" but SELECT count("ID") from
"XYZ" still takes 35 seconds*. (ID is the primary key, based on a sequence;
select count(*) isn't faster.)

So - what kind of indexing would speed this up then?



No form of indexing can speed this up. To answer the server has to look at
every record and count up how many of them should be included in your result.


Why couldn't it be possible to count # of items in an index?
The density of the information (items/inode|block|whatever it's called 
in btrees) is likely to be much higher giving less disk i/o.


I'm sorry if this has been discussed recently.

Best regards,
Marcus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Indices for select count(*)?

2005-12-21 Thread Nicolas Barbier
On 12/21/05, Alexander Scholz <[EMAIL PROTECTED]> wrote:

> Regarding the performance flaw when trying to find out how many records
> are currently being stored in the table, I don't see how an index should
> help... Nevertheless we've created a unique index on "ID" but SELECT
> count("ID") from "XYZ" still takes 35 seconds*. (ID is the primary key,
> based on a sequence; select count(*) isn't faster.)

I would like to redirect you to the zillions of mailing list posts
about this subject :-).

> So - what kind of indexing would speed this up then?

None.

greetings,
Nicolas

--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Indices for select count(*)?

2005-12-21 Thread Greg Stark

Alexander Scholz <[EMAIL PROTECTED]> writes:

> Hi, thank you for your answer.
> 
> Regarding the performance flaw when trying to find out how many records are
> currently being stored in the table, I don't see how an index should help...
> Nevertheless we've created a unique index on "ID" but SELECT count("ID") from
> "XYZ" still takes 35 seconds*. (ID is the primary key, based on a sequence;
> select count(*) isn't faster.)
> 
> So - what kind of indexing would speed this up then?

No form of indexing can speed this up. To answer the server has to look at
every record and count up how many of them should be included in your result.

If you only need an approximate value there's one available in the stats
tables (I don't remember exactly how to get it) or you can keep a recent value
in a table and update it periodically and just query that.
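
The approximate value mentioned here can be read from the planner statistics, for example as below. This is only as accurate as the last VACUUM/ANALYZE of the table, but it returns instantly ("XYZ" is the table name from the thread):

```sql
-- Approximate row count maintained by VACUUM/ANALYZE; instant, but
-- stale between analyzes.
SELECT reltuples
FROM pg_class
WHERE relname = 'XYZ';
```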

> *) MSSQL 2005 on the same server takes 4 seconds for this query on the
> analogous table, and no special tuning has been applied there either.

MSSQL presumably has the entire table cached in RAM and postgres doesn't. Even
if MSSQL can scan just the index (which postgres can't do) I would only expect
a factor of 2-4x. Hm. Unless perhaps this table is extremely wide? How large
are these records?

-- 
greg


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] view or index to optimize performance

2005-12-21 Thread Klein Balázs
I thought that if I used a view to retrieve data its content might be cached
so it would make the query faster.

On Wednesday, 21 December 2005 21:27, Klein Balázs wrote:
> Or should I instead create a view that contains only the biodata and select
> from that? But will postgres use the indexes then? Would that be a better
> approach?

Whether the query is executed by a view or typed in, in its full form, by hand
is completely irrelevant to the question whether indexes are used or should 
be created.  Views do not optimize anything.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] view or index to optimize performance

2005-12-21 Thread Peter Eisentraut
On Wednesday, 21 December 2005 21:27, Klein Balázs wrote:
> Or should I instead create a view that contains only the biodata and select
> from that? But will postgres use the indexes then? Would that be a better
> approach?

Whether the query is executed by a view or typed in, in its full form, by hand 
is completely irrelevant to the question whether indexes are used or should 
be created.  Views do not optimize anything.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Indices for select count(*)?

2005-12-21 Thread Peter Eisentraut
On Wednesday, 21 December 2005 12:01, Alexander Scholz wrote:
> So - what kind of indexing would speed this up then?

You can't speed up a full-table count using an index.
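
If an exact, fast count is required, one workaround (a sketch, not from this thread; the table names are invented, and "xyz" stands for the counted table) is to maintain the count with triggers:

```sql
-- Single-row side table holding the current count of "xyz"
CREATE TABLE xyz_count (n bigint NOT NULL);
INSERT INTO xyz_count VALUES (0);

CREATE FUNCTION xyz_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE xyz_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE xyz_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger; return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER xyz_count_trigger
AFTER INSERT OR DELETE ON xyz
FOR EACH ROW EXECUTE PROCEDURE xyz_count_trig();

-- SELECT n FROM xyz_count;  -- fast and exact
```

Note the trade-off: every writer now updates the single counter row, which serializes concurrent inserts and deletes on the table.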

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Memory Question

2005-12-21 Thread Joshua D. Drake

Benjamin Arai wrote:

A machine with 4GB of memory has the ability to allocate 2GB to 
postgres under kernel 2.6.  Is the 2GB maximum the sum total of 
(work_mem, maintenance_work_mem, and shared_memory) or just 
shared_memory?


shared_memory, and if I recall correctly that limit is adjustable, but I 
don't know that for a fact.


Joshua D. Drake

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Memory Question

2005-12-21 Thread Benjamin Arai



A machine with 4GB of memory has the ability to allocate 2GB to postgres 
under kernel 2.6.  Is the 2GB maximum the sum total of (work_mem, 
maintenance_work_mem, and shared_memory) or just shared_memory?


Re: [GENERAL] is this a bug or I am blind?

2005-12-21 Thread Mage

Martijn van Oosterhout wrote:
> On Sat, Dec 17, 2005 at 05:01:15PM -0500, Tom Lane wrote:
>> Martijn van Oosterhout writes:
>>> I think the real solution is to implement COLLATE support.
>>
>> Maybe so, but we still need to figure out what we're doing for the back
>> branches, and that won't be it ...
>
> To be honest, there are really only a handful of locales that suffer
> from this issue, so perhaps we should document it and move on.

I don't agree. Usually I read the whole documentation of the software I 
use, but you cannot presume that every user, even one with good SQL skills, 
will check the documentation for a thing he wouldn't imagine.

Knowing the background, it is an understandable locale problem, but from 
the user's point of view it's a weird and serious bug which shouldn't be 
there. Using hu_HU with latin2 is a normal pairing.

Some users (including me) don't always read the "known issues" chapter, 
even for good quality software.

  Mage



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] PostgreSQL crashing

2005-12-21 Thread barai
PostgreSQL runs for a while but always eventually (30min - 2hrs) crashes. 
I thought it was memory initially but Memtest had no errors after running
for a day.  I don't think it is a hardware issue because I can run other
applications without error.  I have run both versions 8.0.3 and 8.1.1 with
the same error.  I have posted a clipping of the crash from the system
log.  Any help would be greatly appreciated.

Dec 20 17:14:57 server4 kernel: postmaster: page allocation failure.
order:0, mode:0xd0
Dec 20 17:14:57 server4 kernel:  [] __alloc_pages+0x2e1/0x2f7
Dec 20 17:14:57 server4 kernel:  [] __get_free_pages+0x18/0x24
Dec 20 17:14:57 server4 kernel:  [] kmem_getpages+0x1c/0xbb
Dec 20 17:14:57 server4 kernel:  [] cache_grow+0xab/0x138
Dec 20 17:14:57 server4 kernel:  [] cache_alloc_refill+0x165/0x19d
Dec 20 17:14:57 server4 kernel:  [] kmem_cache_alloc+0x51/0x57
Dec 20 17:14:57 server4 kernel:  [] mempool_alloc+0xb2/0x135
Dec 20 17:14:57 server4 kernel:  []
autoremove_wake_function+0x0/0x2d
Dec 20 17:14:57 server4 kernel:  []
autoremove_wake_function+0x0/0x2d
Dec 20 17:14:57 server4 kernel:  [] bio_alloc+0x15/0x168
Dec 20 17:14:57 server4 kernel:  [] sync_page_io+0x25/0xa2
Dec 20 17:14:57 server4 kernel:  [] write_disk_sb+0x5a/0x86
Dec 20 17:15:01 server4 kernel:  [] sync_sbs+0x22/0x2f
Dec 20 17:15:01 server4 kernel:  [] md_update_sb+0x84/0xc6
Dec 20 17:15:01 server4 kernel:  [] md_write_start+0x5e/0x8c
Dec 20 17:15:01 server4 kernel:  [] make_request+0x22a/0x2b3
[raid1]
Dec 20 17:15:01 server4 kernel:  []
generic_make_request+0x18e/0x19e
Dec 20 17:15:01 server4 kernel:  [] submit_bio+0xca/0xd2
Dec 20 17:15:01 server4 kernel:  []
test_set_page_writeback+0xad/0xe1
Dec 20 17:15:01 server4 kernel:  [] swap_writepage+0x9a/0xa3
Dec 20 17:15:01 server4 kernel:  [] pageout+0x8d/0xcc
Dec 20 17:15:01 server4 kernel:  [] shrink_list+0x207/0x3ed
Dec 20 17:15:01 server4 kernel:  [] __pagevec_release+0x15/0x1d
Dec 20 17:15:01 server4 kernel:  [] shrink_cache+0x1dd/0x34d
Dec 20 17:15:01 server4 kernel:  [] shrink_zone+0xa7/0xb6
Dec 20 17:15:01 server4 kernel:  [] shrink_caches+0x4c/0x57
Dec 20 17:15:01 server4 kernel:  [] try_to_free_pages+0xc3/0x1a7
Dec 20 17:15:01 server4 kernel:  [] __alloc_pages+0x1fe/0x2f7
Dec 20 17:15:01 server4 kernel:  [] __get_free_pages+0x18/0x24
Dec 20 17:15:01 server4 kernel:  [] kmem_getpages+0x1c/0xbb
Dec 20 17:15:01 server4 kernel:  [] cache_grow+0xab/0x138
...


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] view or index to optimize performance

2005-12-21 Thread Klein Balázs
Hello everybody,

I have a table that stores responses to questions in different
questionnaires. This table will grow to millions of rows.

My problem is that while most of the data in the table are rarely used in
queries one type of response will be used quite often: biodata - name,
gender, e-mail and this sort of. This data is also collected as responses to
questionnaires.

My question: what is the best strategy if I wanted to quickly retrieve
biodata from this table:

CREATE TABLE "public"."itemresponse" (
  "testoccasionid" INTEGER NOT NULL, 
  "itemorder" SMALLINT NOT NULL, 
  "response" TEXT NOT NULL, 
  "bio" INTEGER DEFAULT 0 NOT NULL, 
  "datatype" SMALLINT NOT NULL, 
  CONSTRAINT "ItemResponseText_pk" PRIMARY KEY("testoccasionid",
"itemorder"), 
  CONSTRAINT "ItemResponseText_TestOccasionID_fkey" FOREIGN KEY
("testoccasionid")
REFERENCES "public"."testoccasion"("testoccasionid")
ON DELETE NO ACTION
ON UPDATE NO ACTION
NOT DEFERRABLE
) WITH OIDS;

I can store the fact that it is biodata in the bio field - it is biodata if
the value of that field is not 0 and I can index that field and simply use
that as one of the conditions in queries.

Or should I instead create a view that contains only the biodata and select
from that? But will postgres use the indexes then? Would that be a better
approach?

Thanks for the help.
SWK





---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Indices for select count(*)?

2005-12-21 Thread Alexander Scholz

Hi, thank you for your answer.

Regarding the performance flaw when trying to find out how many records 
are currently being stored in the table, I don't see how an index should 
help... Nevertheless we've created a unique index on "ID" but SELECT 
count("ID") from "XYZ" still takes 35 seconds*. (ID is the primary key, 
based on a sequence; select count(*) isn't faster.)


So - what kind of indexing would speed this up then?

Thanx in advance!

Alexander.

*) MSSQL 2005 on the same server takes 4 seconds for this query on the 
analogous table, and no special tuning has been applied there either.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] contrib extensions

2005-12-21 Thread S McLurkin
Is there some place where I can find information on all the contrib extensions?


[GENERAL] query for a time interval

2005-12-21 Thread Mark
Hello everybody,

I'm looking for an elegant SQL statement that will work in
PostgreSQL, MySQL and Oracle.
The query will be executed by Java client.

To have this query for Postgresql is priority number one.


In this query I try to get a list of message Ids that expired.

time_to_live is in seconds.

SELECT id
  FROM mq
 WHERE now - start_date > time_to_live;

I have a following table:

CREATE TABLE mq
(
msg_id       INTEGER,
retry_date   TIMESTAMP NOT NULL DEFAULT ('now'::text)::timestamp(1),
start_date   TIMESTAMP NOT NULL DEFAULT ('now'::text)::timestamp(1),
time_to_live INTEGER
);
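
For PostgreSQL specifically, one way to phrase the predicate is sketched below (note the table defines msg_id, though the sample query above selects id; the interval multiplication is Postgres syntax, so MySQL and Oracle would need their own date arithmetic):

```sql
-- PostgreSQL: compare elapsed time against time_to_live (seconds)
SELECT msg_id
FROM mq
WHERE now() - start_date > time_to_live * interval '1 second';

-- An equivalent shape that keeps the comparison on the timestamp column,
-- which lets an index on start_date be considered:
-- SELECT msg_id FROM mq
-- WHERE start_date < now() - time_to_live * interval '1 second';
```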

Thanks!
Mark.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Funky template1 problem?

2005-12-21 Thread Jerry LeVan

Hi,

I have a bunch of small dbs (levan, pictures, and a couple of others).

I have had these db's over many upgrades and I have just noticed that
template1 seems to be a bit munged...

testuser=> \c template1
You are now connected to database "template1".
template1=> \d
        List of relations
 Schema |      Name      |   Type   | Owner
--------+----------------+----------+-------
 public | lo_shadow      | table    | levan
 public | my_description | table    | levan
 public | person         | table    | levan
 public | person_id_seq  | sequence | levan
 public | pics           | table    | levan
 public | pics_ident_seq | sequence | levan
(6 rows)

This is the same as...
template1=> \c pictures
You are now connected to database "pictures".
pictures=> \d
        List of relations
 Schema |      Name      |   Type   | Owner
--------+----------------+----------+-------
 public | lo_shadow      | table    | levan
 public | my_description | table    | levan
 public | person         | table    | levan
 public | person_id_seq  | sequence | levan
 public | pics           | table    | levan
 public | pics_ident_seq | sequence | levan
(6 rows)

I found this out when I tried to create a new user,
testuser, for testing a totally non-privileged user.
When I recreated the testuser db using template1
I found the tables from the pictures db were in
the newly created db.

Is it safe to simply drop the tables from the template1
db?  ( I have no idea how the tables became part
of template1.)

Jerry



Re: [GENERAL] out of memory during query execution

2005-12-21 Thread Seneca Cunningham
DANTE ALEXANDRA wrote:
> Last question: how can I see that my 32-bit AIX program is limited to
> 256MB of heap, given that the user "pg_810" used to launch the postmaster
> gets the following when I execute the "ulimit -a" command:
> $ ulimit -a
> time(seconds)unlimited
> file(blocks) unlimited
> data(kbytes) unlimited
> stack(kbytes)unlimited
> memory(kbytes)   unlimited
> coredump(blocks) unlimited
> nofiles(descriptors) 2000

It's actually less than 256MB of heap.  When I saw your ./configure, I
noticed that no special options were passed to the linker, so you're
using the default 32-bit memory model.  Even an unlimited ulimit does
not allow for more memory than the model sets aside.

You can try setting the environment variable LDR_CNTRL to
MAXDATA=0x40000000 (where the first digit is the number of 256MB
segments to allocate to heap, max 8) before starting the postmaster, at
the cost of reducing the amount of shared memory addressable by postgres.
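A concrete sketch of that (the segment count, and the start command shown as a comment, are illustrative assumptions, not from the post):

```shell
# Give a 32-bit postmaster 4 x 256MB segments of heap before starting it.
# The value 0x40000000 = 4 segments; raise or lower the first digit as needed.
export LDR_CNTRL=MAXDATA=0x40000000
echo "starting with LDR_CNTRL=$LDR_CNTRL"
# pg_ctl -D /var/lib/pgsql/data start   # actual start, on the AIX box
```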

Diagram of default memory model (Figure 3-3):
  

The redbook it's from, "Developing and Porting C and C++ Applications on
AIX":
  

-- 
Seneca Cunningham
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] out of memory during query execution

2005-12-21 Thread Tom Lane
DANTE ALEXANDRA <[EMAIL PROTECTED]> writes:
> In my case, does this mean that each one of the two hashes and the sort will
> take 64MB, so 192MB?
> What do you mean by "so the two hashes and sort would think
> they could use 3/4ths of the available heap"?

Right, exactly.  In this particular case I think the top-level sort is
not going to be using much memory because it won't see very many rows,
but potentially it could try to eat 64MB just like each of the hashes.

Your log entries show that the hashes are actually eating over 100MB
apiece.  The memory space estimation for work_mem is not completely
accurate, and is not intended to be, but I would have liked to think
it would be closer than a factor-of-2 error.  Might be worth looking
into exactly what's happening there.
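The arithmetic behind that factor-of-2 remark, using the figures from the log excerpt quoted elsewhere in this thread (the 256MB heap limit is the 32-bit AIX default discussed below):

```python
# Back-of-the-envelope memory budget for the failing query.
heap_limit_mb = 256       # default 32-bit AIX heap segment
work_mem_mb = 64          # the reporter's work_mem setting
concurrent_ops = 3        # two hashes + one sort in the plan

budget_mb = concurrent_ops * work_mem_mb
print(budget_mb, budget_mb / heap_limit_mb)        # 192 0.75

# What the log actually shows for the two HashBatchContexts:
actual_mb = (134152352 + 108527800) / 2**20
print(round(actual_mb))                            # 231
```

So the two hashes alone already exceed the nominal 192MB budget, and come close to the whole 256MB heap.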

> Last question: how can I see that my 32-bit AIX program is limited to
> 256MB of heap?

For that you need to talk to an AIX expert, which I'm not.

regards, tom lane



Re: [GENERAL] out of memory during query execution

2005-12-21 Thread DANTE ALEXANDRA

Hello Tom,

I've got other questions on the work_mem parameter.
On the http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
web site, I've read that work_mem specifies the amount of memory to
be used by internal sort operations and hash tables before switching to
temporary disk files. Moreover, for a complex query, several sort or
hash operations might be running in parallel; each one will be allowed
to use as much memory as this value specifies before it starts to spill
into temporary files.


In my case, does this mean that each one of the two hashes and the sort will
take 64MB, so 192MB in total?
What do you mean by "so the two hashes and sort would think
they could use 3/4ths of the available heap"?


Last question: how can I see that my 32-bit AIX program is limited to
256MB of heap, given that the user "pg_810" used to launch the postmaster
gets the following when I execute the "ulimit -a" command:

$ ulimit -a
time(seconds)unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes)unlimited
memory(kbytes)   unlimited
coredump(blocks) unlimited
nofiles(descriptors) 2000

Thank you very much for your help.
Regards,
Alexandra DANTE


Tom Lane wrote:

Kevin Murphy <[EMAIL PROTECTED]> writes:
> I'm certainly not an AIX expert, but I remember my 32-bit AIX programs
> being limited to 256MB of heap by default.

Hmm ... if that's the case then it'd probably explain the problem.
Alexandra had work_mem set to 64MB, so the two hashes and sort would
think they could use 3/4ths of the available heap; given that there
are other needs and our management of memory-use limitations is fairly
sloppy, that could easily translate into running out.

So the answer is either to increase the available heap or reduce
work_mem to a smaller fraction of it.

	regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Inheritance Algebra

2005-12-21 Thread Karsten Hilbert
On Wed, Dec 21, 2005 at 01:52:34PM +0100, Martijn van Oosterhout wrote:

> On Sun, Dec 04, 2005 at 10:59:10PM -0700, Trent Shipley wrote:
> > Relational Constraint Inheritance Algebra
> > With regard to class and attribute uniqueness
>
> It's taken a while to digest this and sorry for the delay. While I find
> the ideas intriguing there is a little voice in the back of my head
> asking: practical applications?

I would assume quite a few people would use table
inheritance in a simple way were it available in a more
convenient fashion: to transport fields, primary and foreign
keys to child tables.

In GNUmed (a medical practice application)
 http://salaam.homeunix.com/twiki/bin/view/Gnumed/WebHome
we use inheritance to make tables inherit

a) audit fields
b) common clinical fields such as a pointer to the patient

We overcome the primary/foreign key problem by a) letting
child tables have their own primary key which is quite
useful anyways and b) re-declaring foreign keys on child
tables.

While using inheritance isn't strictly necessary, it is quite
convenient and makes the schema more intuitive.

There's also one major gain: since all clinical child tables
store their unstructured narrative in a field provided by
the clin_root_item parent table, searching across the
entire narrative of the medical record is a simple query
against one table.

http://cvs.savannah.gnu.org/viewcvs/gnumed/gnumed/server/sql/?root=gnumed

(see gmAudit.sql and gmclinical.sql)
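The layout described above can be sketched roughly like this (table and column names here are illustrative, not GNUmed's actual schema):

```sql
-- Parent carrying audit fields, inherited by everything below.
CREATE TABLE audit_fields (
    row_version   integer   NOT NULL DEFAULT 0,
    modified_when timestamp NOT NULL DEFAULT now(),
    modified_by   name      NOT NULL DEFAULT CURRENT_USER
);

-- Common clinical parent: pointer to the patient plus the narrative field.
CREATE TABLE clin_root_item (
    pk         serial PRIMARY KEY,
    fk_patient integer NOT NULL,
    narrative  text
) INHERITS (audit_fields);

-- A child table: its own primary key, re-declared constraints,
-- inherited audit and clinical fields.
CREATE TABLE allergy (
    pk_allergy serial PRIMARY KEY,
    substance  text NOT NULL
) INHERITS (clin_root_item);

-- Searching the whole record's narrative is then a single query:
SELECT narrative FROM clin_root_item WHERE narrative ILIKE '%penicillin%';
```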

> The only situation I've come across inheritance being truly useful would
> be where you have several different "services" which are associated
> with a customer but each require different services.

Yes, this is similar to what we do.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Inheritance Algebra

2005-12-21 Thread Martijn van Oosterhout
On Sun, Dec 04, 2005 at 10:59:10PM -0700, Trent Shipley wrote:
> Relational Constraint Inheritance Algebra
> With regard to class and attribute uniqueness



It's taken a while to digest this and sorry for the delay. While I find
the ideas intriguing there is a little voice in the back of my head
asking: practical applications?

For programming, inheritance provides a way of reusing code in a way
that encapsulates changes. But I have yet to find a lot of data that
really needs this kind of encapsulation. I think one of the reasons
inheritance hasn't had a lot of work done on it in PostgreSQL is that the
use-cases aren't compelling enough to make someone want to put the
effort in.

Indeed, most data is structured such that you have a unique key and
various attributes associated with it. What SQL excels at is joining
tables on those keys. The uniqueness or otherwise of non-key fields is
not generally important.

The only situation in which I've come across inheritance being truly useful
would be where you have several different "services" which are associated
with a customer but each require different services. But even then, the
inheritance would only be useful if the code utilizing it is within the
backend. As soon as the data is transferred to the application, *that*
is where the inheritance hierarchy is and it no longer cares whether the
inheritance is present in the database itself.

That's my 2c anyway...
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [PERFORM] [GENERAL] need help

2005-12-21 Thread Alban Medici (NetCentrex)

Try executing your query (in psql) prefixed with EXPLAIN ANALYZE and
send us the result:
 db=# EXPLAIN ANALYZE UPDATE s_apotik SET stock = 100 WHERE obat_id='A';

regards

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tino Wildenhain
Sent: Tuesday, 6 December 2005 09:55
To: Jenny
Cc: pgsql-general@postgresql.org; pgsql-sql@postgresql.org;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] [GENERAL] need help

Jenny schrieb:
> I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2).
> I've been dealing with PostgreSQL for more than 2 years now, but I've never
> had this case before.
> 
> I have a table that has about 20 rows in it.
> 
>             Table "public.s_apotik"
>     Column    |            Type             | Modifiers
> --------------+-----------------------------+-----------
>  obat_id      | character varying(10)       | not null
>  stock        | numeric                     | not null
>  s_min        | numeric                     | not null
>  s_jual       | numeric                     |
>  s_r_jual     | numeric                     |
>  s_order      | numeric                     |
>  s_r_order    | numeric                     |
>  s_bs         | numeric                     |
>  last_receive | timestamp without time zone |
> Indexes:
>     "s_apotik_pkey" PRIMARY KEY, btree(obat_id)
>
> When I try to UPDATE one of the rows, nothing happens for a very long time.
> First, I run it in PgAdmin III; I can see the milliseconds growing
> as I wait. Then I stop the query, because the time needed for it is
> unbelievably wrong.
> 
> Then I try to run the query from the psql shell. For example, the 
> table has obat_id : A, B, C, D.
> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='A'; ( nothing 
> happens.. I press the Ctrl-C to stop it. This is what comes out
> :)
> Cancel request sent
> ERROR: canceling query due to user request
> 
> (If I try another obat_id)
> db=# UPDATE s_apotik SET stock = 100 WHERE obat_id='B'; (Less than a 
> second, this is what comes out :) UPDATE 1
> 
> I can't do anything to that row. I can't DELETE it. Can't DROP the table. 
> I want this data out of my database.
> What should I do? It's like there's a falsely pointed index here.
> Any help would be very much appreciated.
> 

1) let's hope you do regular backups - and have actually tested restore.
1a) if not, do it right now
2) reindex the table
3) try again to modify it

Q: are there any foreign keys involved? If so, reindex those tables too,
just in case.

did you vacuum regularly?
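Steps 2 and 3 above, spelled out against the table from the report:

```sql
REINDEX TABLE s_apotik;            -- step 2: rebuild possibly-damaged indexes
VACUUM ANALYZE s_apotik;           -- housekeeping, per the vacuum question
UPDATE s_apotik SET stock = 100    -- step 3: retry the stuck update
 WHERE obat_id = 'A';
```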

HTH
Tino

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Questions about server.

2005-12-21 Thread Richard Huxton

max chin wrote:

Dear Mr.,

My name is Chia. I'm a Malaysian student who is still pursuing my
computer course. Now I'm working with an IT company for industry
training. I have some questions about servers.


Hi Chia


1.)  What I know is that when too many users access a database at the
same time, it slows down the database server. My question is how
to make the database server faster even when a lot of users
access information in the database at the same time? What kind of
technology is needed to help the database server run faster?


The first step is to find out *why* it is too slow - not enough memory? 
Not enough disk bandwidth? Not enough CPU? Badly planned queries? Badly 
written application?


So - the first thing to do is to start monitoring your server closely.


2.) In my company, the database server and web server are hosted on one
machine. My question is how to separate the database server and web
server from one machine onto two machines? I mean, how can those 2
servers within one machine be separated so that each server runs on
its own machine? Can you show me
the way or process of implementation?


PostgreSQL allows you to connect to it over an internet connection. You 
will need to check settings in your:

1. System firewall
2. postgresql.conf
3. pg_hba.conf (make sure you password-protect access to your database)
Details on 2,3 are in the manuals.
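As a sketch (the address, database, and user names below are placeholders, not from the post):

```
# postgresql.conf -- accept TCP connections from the web server's network
listen_addresses = '*'         # or the DB host's specific address

# pg_hba.conf -- password-protected access for the web machine only
# TYPE  DATABASE  USER     ADDRESS          METHOD
host    mydb      webuser  192.0.2.10/32    md5
```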


3.) How to back up automatically database information from host
machine to another machine every one hour and everytime update
database informations is done?


You probably want to read up on pg_dump, point-in-time-recovery (PITR) 
and replication (Slony seems popular). See the online manuals and google 
for details.



4.) Sometimes the IIS web server is unavailable and the web pages can't
display for clients. Can you tell me the reasons and the methods to
overcome the problems?


Although there are many people who use IIS in big installations, I'm not 
one of them. For maximum reliability, I'd guess Apache on top of one of 
the BSD's would be a winner, but probably not an option for you.



THANKS YOU. CAN YOU GIVE ME YOUR ANSWER AS FAST AS POSSIBLE BECAUSE I
NEED THESE IMPORTANT ANSWER URGENTLY.


If you're a student on work placement, I'm not sure you should have to 
deal with these sort of issues, certainly not urgently.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] out of memory during query execution

2005-12-21 Thread DANTE ALEXANDRA

Hello,

Just to give you some background, I will try to explain why I have set
"work_mem" to 64MB.


I noticed that some of the queries finished with an "out of memory"
because the file system on which temporary files were created was too
small. Consequently, I increased the size of this file system and
set work_mem to 64MB. The query still spilled into
temporary files but no longer finished with an "out of memory".


The next steps for me will be to:
- decrease work_mem
- try to build PostgreSQL as 32-bit with the option "-bmaxdata:0x40000000"
- try to build PostgreSQL as 64-bit.

I will give you the results as soon as possible.

Thank you for your help.
Regards,
Alexandra DANTE


Tom Lane wrote:

Kevin Murphy <[EMAIL PROTECTED]> writes:
> I'm certainly not an AIX expert, but I remember my 32-bit AIX programs
> being limited to 256MB of heap by default.

Hmm ... if that's the case then it'd probably explain the problem.
Alexandra had work_mem set to 64MB, so the two hashes and sort would
think they could use 3/4ths of the available heap; given that there
are other needs and our management of memory-use limitations is fairly
sloppy, that could easily translate into running out.

So the answer is either to increase the available heap or reduce
work_mem to a smaller fraction of it.

	regards, tom lane



Re: [GENERAL] out of memory during query execution

2005-12-21 Thread Martijn van Oosterhout
On Wed, Dec 21, 2005 at 10:22:05AM +0100, DANTE ALEXANDRA wrote:
> Hello,
> 
> Thank you for all the answers I've got on this problem.
> 
> Tom, I've checked this morning if the logfile contains lines like
> %s: %ld total in %ld blocks; %ld free (%ld chunks); %ld used
> and it is the case.


> ExecutorState: 21586616 total in 25 blocks; 14624 free (33 chunks); 21571992 used
> HashTableContext: 8192 total in 1 blocks; 8128 free (2 chunks); 64 used
> HashBatchContext: 134152352 total in 22 blocks; 14859064 free (55434 chunks); 119293288 used
> HashTableContext: 8192 total in 1 blocks; 8128 free (0 chunks); 64 used
> HashBatchContext: 108527800 total in 16 blocks; 1016512 free (24 chunks); 107511288 used

These are by far the largest and seem to confirm that your total memory
usage is limited to not much more than 256MB, so what that other poster
said may be relevant. Whether that's a reasonable amount for a hash
table to use in your context, I'll leave to someone else...

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] out of memory during query execution

2005-12-21 Thread DANTE ALEXANDRA

Hello,

According to several answers, my problem could be due to the 32-bit build.
I hope to try a 64-bit build in January, in order to give
you more explanations about the possible errors I encounter.


I am keeping your advice close at hand, and I hope to be able to give you an
answer in January.


Thank you.
Regards,
Alexandra DANTE

Seneca Cunningham wrote:

DANTE ALEXANDRA wrote:
> The person in charge of building PostgreSQL 8.1.0 did a 32-bit
> build and used the "cc_r" compiler.
> This person did not succeed in building PostgreSQL 8.1.0 with "gcc" and 64
> bits. Unfortunately, I don't have the errors or the logs of the 64-bit
> build and I can't tell you what error occurred.

Too bad, I may have been able to determine what had happened with the
gcc build.

> The build was done in 32-bit, with the cc_r compiler.
> To build PostgreSQL, an rpm was made and the ".spec" file contained the
> following instructions:
>
> export OBJECT_MODE=32
> ./configure CC=/usr/vac/bin/cc_r CFLAGS="-O2 -qmaxmem=-1 -qsrcmsg
> -qlargepage" --enable-thread-safety
> --without-readline --prefix=%{buildroot}%{prefix}
> gmake -j 4
> unset OBJECT_MODE
>
> Do you think that my problems of "out of memory" are due to the 32-bit
> build?
> Do you think that I must build PostgreSQL with 64 bits to solve this error?

It is quite likely that the out of memory errors are due to your use of
the default 32-bit memory model.  In that model, a single 256MB memory
segment contains your heap, stack, thread stacks, and other per-process,
non-shared-library data.  Switching to 64-bit would stop the errors if
this is true.  It is also possible to adjust the amount of space
available to a 32-bit process' heap with the -bmaxdata linker option,
but the largest heap size that I would consider safe with 32-bit is 2GB
and comes with the cost of reducing the amount of shared memory
available to the process.

Setting OBJECT_MODE to 64 before the ./configure and gmake should result
in a 64-bit build, but I don't have a copy of IBM's compiler to test
with.  I would be interested in seeing the errors output by the 64-bit
gcc build if another build is attempted.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] out of memory during query execution

2005-12-21 Thread DANTE ALEXANDRA

Hello,

Thank you for all the answers I've got on this problem.

Tom, I've checked this morning if the logfile contains lines like
%s: %ld total in %ld blocks; %ld free (%ld chunks); %ld used
and it is the case.

As I launch a series of queries, I hope that the following lines are the 
exact lines for the query which has finished with an "out of memory" :

TopMemoryContext: 40960 total in 4 blocks; 11248 free (9 chunks); 29712 used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
Type information cache: 8192 total in 1 blocks; 1864 free (0 chunks); 6328 used
TopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336 used
MessageContext: 253952 total in 5 blocks; 7784 free (8 chunks); 246168 used
smgr relation table: 8192 total in 1 blocks; 2872 free (0 chunks); 5320 used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 880 free (0 chunks); 144 used
ExecutorState: 21586616 total in 25 blocks; 14624 free (33 chunks); 21571992 used
HashTableContext: 8192 total in 1 blocks; 8128 free (2 chunks); 64 used
HashBatchContext: 134152352 total in 22 blocks; 14859064 free (55434 chunks); 119293288 used
HashTableContext: 8192 total in 1 blocks; 8128 free (0 chunks); 64 used
HashBatchContext: 108527800 total in 16 blocks; 1016512 free (24 chunks); 107511288 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8128 free (0 chunks); 64 used
AggContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
Relcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used
CacheMemoryContext: 516096 total in 6 blocks; 108608 free (0 chunks); 407488 used
i_o_orderkey: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
i_ps_partkey_suppkey: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
i_l_orderkey: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
pg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_conversion_default_index: 1024 tot