[GENERAL] backup of postgres scheduled with cron

2007-11-22 Thread Sorin N. Ciolofan
Hello all!

I have a small bash script, backup.sh, for creating dumps of my Postgres db:

#!/bin/bash
time=`date +%d-%m-%y`
cd /home/swkm/services/test
pg_dump mydb > mydb_dump_$time.out

I've edited crontab and added a line:

00 4 * * * swkm /home/swkm/services/test/backup.sh

to execute the backup.sh as user swkm daily at 4 am.

The user swkm is the user I use to create backups manually. The script
itself runs fine when executed manually, but when run by the cron scheduler
I get an empty mydb_dump_$time.out file (0 kB).

Do you have any idea about what's wrong?

Thanks
Sorin





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] [ADMIN] backup of postgres scheduled with cron

2007-11-22 Thread Sorin N. Ciolofan
Hi Marco!

Thank you for the advice.

I got:

/home/swkm/services/test/backup.sh: line 4: pg_dump: command not found
updating: mydb_dump_22-11-07.out (stored 0%)

which seems strange



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Marco Bizzarri
Sent: Thursday, November 22, 2007 3:28 PM
To: Sorin N. Ciolofan
Cc: [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: [ADMIN] backup of postgres scheduled with cron

On Nov 22, 2007 2:19 PM, Sorin N. Ciolofan [EMAIL PROTECTED] wrote:
 Hello all!

 I've a small bash script backup.sh for creating dumps on my Postgre db:

 #!/bin/bash
 time=`date '+%d'-'%m'-'%y'`
 cd /home/swkm/services/test
   pg_dump mydb > mydb_dump_$time.out

 I've edited crontab and added a line:

 00 4 * * * swkm /home/swkm/services/test/backup.sh

 to execute the backup.sh as user swkm daily at 4 am.

 The user swkm is the user I use to create backups manually. The script
 itself is executed fine if run manually but run on cron scheduler I got an
 mydb_dump_$time.out file empty (of 0 kb)

 Do you have any idea about what's wrong?

 Thanks
 Sorin


Hi Sorin,

why don't you add a MAILTO=youraddress at the start of your
crontab file, so that you can receive a report of the problem?
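For example (a sketch; MAILTO=swkm assumes local mail delivery to the swkm
account, and the user field in the job line indicates the system crontab):

```
MAILTO=swkm
00 4 * * * swkm /home/swkm/services/test/backup.sh
```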

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] backup of postgres scheduled with cron

2007-11-22 Thread Sorin N. Ciolofan
Thank you all,

Yes, I used the absolute path in my script and now it works OK :-)

Thank you again
Sorin

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Frank Wittig
Sent: Thursday, November 22, 2007 4:01 PM
To: Sorin N. Ciolofan
Cc: [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: [GENERAL] backup of postgres scheduled with cron

Hello Sorin!

Sorin N. Ciolofan wrote:

   #!/bin/bash
   time=`date '+%d'-'%m'-'%y'`
   cd /home/swkm/services/test
   pg_dump mydb > mydb_dump_$time.out

You should redirect STDERR to some error logfile or set MAILTO in your
crontab.
I guess you would then have seen an error message saying that pg_dump
was not found, because cron doesn't load the user's environment and
therefore the PATH variable isn't set.
I suggest you call pg_dump in your script by its absolute path.

Greetings,
Frank Wittig
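The failure mode Frank describes can be reproduced without cron: run a bare
command name under a cron-like minimal PATH. A small sketch (the `fakedump`
stub stands in for pg_dump; the `/usr/local/pgsql/bin` path mentioned in the
comment is an assumption, find the real one with `which pg_dump`):

```shell
#!/bin/sh
# Simulate cron's environment: cron jobs run with a minimal PATH, so a bare
# command name that works in a login shell can be "command not found".
tmp=$(mktemp -d)
printf '#!/bin/sh\necho "dump ok"\n' > "$tmp/fakedump"   # stand-in for pg_dump
chmod +x "$tmp/fakedump"

# Login-shell case: the binary's directory is on PATH, the bare name works.
( PATH="$tmp:/usr/bin:/bin"; fakedump )                  # -> dump ok

# Cron-like case: minimal PATH, the bare name is not found.
( PATH=/usr/bin:/bin; fakedump ) 2>/dev/null || echo "fakedump: command not found"

# The fix: call the binary by absolute path, which works regardless of PATH,
# e.g. /usr/local/pgsql/bin/pg_dump in backup.sh (that path is an assumption).
"$tmp/fakedump"                                          # -> dump ok

rm -rf "$tmp"
```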






[GENERAL] granting acces to an external client

2007-07-25 Thread Sorin N. Ciolofan
Hello!

I'd like to ask what line should be added to the pg_hba.conf file in order
to grant access from IP 139.100.99.98 to a database named myDB as user
scott with password mikepwd.

After modifying this file, is it enough to issue
pg_ctl reload
or should I restart Postgres?

Thank you
With best regards,
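For reference, a pg_hba.conf entry of this general shape would match that
client (a sketch; the md5 auth method and the single-host /32 mask are
assumptions, other choices are possible):

```
# TYPE  DATABASE  USER   CIDR-ADDRESS       METHOD
host    myDB      scott  139.100.99.98/32   md5
```

A `pg_ctl reload` is enough for pg_hba.conf changes to take effect; a full
restart is not required.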






Re: [GENERAL] [ADMIN] increasing of the shared memory does not solve theproblem of OUT of shared memory

2007-05-17 Thread Sorin N. Ciolofan
Hi again!

 

It seems that the problem was the max_fsm_relations parameter, which I
increased.

I still do not understand why, in the past, I always received the message:

ERROR: out of shared memory

Is this an appropriate message for the need to increase this parameter?

 

With best regards,

Sorin C.

 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Adam Tauno Williams
Sent: Friday, May 11, 2007 9:42 PM
To: [EMAIL PROTECTED]
Subject: Re: [ADMIN] increasing of the shared memory does not solve
theproblem of OUT of shared memory

 

 

 

 I increased significantly the number of shared buffers, from 3000 to
 100 000 (80 MB).
 Also I increased the max_locks_per_transaction from 64 to 10 000.
 I still receive the same error from Postgres:
 org.postgresql.util.PSQLException: ERROR: out of shared memory
 Is this message appropriate to the real cause of the problem, or is the
 reason for the failure actually something other than what is displayed in
 this message?

 

Maybe you've got an application that is doing a BEGIN WORK, but never
doing a rollback or commit?

-- 
Adam Tauno Williams, Network & Systems Administrator
Consultant - http://www.whitemiceconsulting.com
Developer - http://www.opengroupware.org

 

 




[GENERAL] increasing of the shared memory does not solve the problem of OUT of shared memory

2007-05-11 Thread Sorin N. Ciolofan
 

Hello!

 

I increased significantly the number of shared buffers, from 3000 to
100 000 (80 MB).

Also I increased the max_locks_per_transaction from 64 to 10 000.

I still receive the same error from Postgres:
org.postgresql.util.PSQLException: ERROR: out of shared memory

Is this message appropriate to the real cause of the problem, or is the
reason for the failure actually something other than what is displayed in
this message?

 

With best regards,

Sorin N. Ciolofan

 

 



Re: [ADMIN] [GENERAL] pg_buffercache view

2007-04-26 Thread Sorin N. Ciolofan

Hello!

 Do you know what reasons could cause an application not to release the
shared buffers, even after the application was shut down?
 I noticed that only if a pg_ctl restart command is issued are some of the
buffers set free.

Thank you very much
With best regards,
Sorin



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] [GENERAL] pg_buffercache view

2007-04-26 Thread Sorin N. Ciolofan

I don't know the algorithm by which Postgres uses the shared buffers, but
I'd like to understand the principles behind it. Let's assume the following
scenario:
I've set shared_buffers=3000.
At the start of Postgres there are 115 buffers used by database A.
After the execution of some processing caused by a Java methodA1()
invocation, 2850 buffers are used by A.
What happens next if these 2850 buffers remain used even after methodA1()
has finished executing?
Suppose that now a methodA2() invocation occurs and this method works with
database A, too. Will the 2850 buffers be reused, or will Postgres throw an
out of shared memory exception?
What happens if a methodB() invocation occurs, assuming that this method
tries to work with database B?
How does Postgres decide the allocation of shared_buffers?


Thanks
Sorin

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bill Moran
Sent: Thursday, April 26, 2007 3:32 PM
To: Sorin N. Ciolofan
Cc: pgsql-general@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [ADMIN] [GENERAL] pg_buffercache view

In response to Sorin N. Ciolofan [EMAIL PROTECTED]:

 
 Hello!
 
  Do you know which could be the reasons that could conduce an application
to
 not release the shared buffers, even after the application was shut down?
  I noticed that only if a pg_ctl restart command is issued some of the
 buffers are set free.

The reason would be by design.

If the server flushes its cache every time the application restarts, the
cache isn't going to be very effective.

If PostgreSQL is using more shared buffers than you're comfortable with,
reduce the shared_buffers setting in the config.  That will allow the OS
to decide how to use the memory instead.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] pg_buffercache view

2007-04-24 Thread Sorin N. Ciolofan

 Dear all,

About the pg_buffercache view:
I couldn't find the description for this view in the manual at
http://www.postgresql.org/docs/8.2/interactive/catalogs.html
However I found the README file provided in contrib/pg_buffercache of
the source code for version 8.2.3.
There the following description is written:

    Column         |      references      | Description
   ----------------+----------------------+----------------------------------
    bufferid       |                      | Id, 1..shared_buffers.
    relfilenode    | pg_class.relfilenode | Relfilenode of the relation.
    reltablespace  | pg_tablespace.oid    | Tablespace oid of the relation.
    reldatabase    | pg_database.oid      | Database for the relation.
    relblocknumber |                      | Offset of the page in the relation.
    isdirty        |                      | Is the page dirty?

I have 2 questions:
1)
I was not able to find the field oid in the pg_database view. Could you
please tell me the actual name of the column that reldatabase refers to?
2)
In the README file it is also written:
"Unused buffers are shown with all fields null except bufferid."
Does a used buffer mean it is 100% used, or could it be filled only
partially?
Is there any way to know, at a given moment and with precision, how much
shared memory (expressed in MB) is used?

With best regards,
Sorin




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [ADMIN] [GENERAL] pg_buffercache view

2007-04-24 Thread Sorin N. Ciolofan
 

   Dear Mr. Bill Moran,

 

 Thank you for your answer. 

 

1) To be more clear: I would like to construct a query using the reldatabase
column. In the query you quoted I can't identify the reldatabase column. I
want a query that will help me list how many buffers are used by each
database.

 

Maybe something like:

 

SELECT d.datname, count(*) AS buffers
   FROM pg_database d, pg_buffercache b
   WHERE d.X = b.reldatabase
   GROUP BY b.reldatabase
   ORDER BY 2 DESC LIMIT 10;

 

I would like, if possible, to know the name of this X which corresponds to
the reldatabase column.

2) I don't know exactly how the buffers are used. Is it possible for all
buffers to be used at, let's say, 5% of their capacity? In that case I would
see in pg_buffercache that all the shared memory is used (since all the
buffers are used), but in reality only 5% of it is actually used.

 

With best regards,

Sorin

 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bill Moran
Sent: Tuesday, April 24, 2007 4:03 PM
To: Sorin N. Ciolofan
Cc: [EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: [ADMIN] [GENERAL] pg_buffercache view

 

In response to Sorin N. Ciolofan [EMAIL PROTECTED]:

 

  Dear all,

 

 About the pg_buffercache view:

 I couldn't find the description for this view in the manual at

 http://www.postgresql.org/docs/8.2/interactive/catalogs.html

 However I found the readme file provided in the /contrib./pg_buffercache
of

 the source code for version 8.2.3

 

Since pg_buffercache is contributed software, it's not documented in the

official PostgreSQL docs.

 

 Here it's written the following description:

 

    Column         |      references      | Description
   ----------------+----------------------+----------------------------------
    bufferid       |                      | Id, 1..shared_buffers.
    relfilenode    | pg_class.relfilenode | Relfilenode of the relation.
    reltablespace  | pg_tablespace.oid    | Tablespace oid of the relation.
    reldatabase    | pg_database.oid      | Database for the relation.
    relblocknumber |                      | Offset of the page in the relation.
    isdirty        |                      | Is the page dirty?

 

 I've 2 questions:

 1)

 I was not able to find the field oid from pg_database view. Could you

 please tell me what is the actual name of the column for which reldatabase

 is reffering to?

 

At the end of the README is an example query that I think answers your

question:

SELECT c.relname, count(*) AS buffers
   FROM pg_class c, pg_buffercache b
   WHERE b.relfilenode = c.relfilenode
   GROUP BY c.relname
   ORDER BY 2 DESC LIMIT 10;

 

 

 2)

 In readme file is also written:

 Unused buffers are shown with all fields null except buffered. 

 A used buffer means that is used 100% or could it be filled only

 partially?

 

Yes.  The buffer is either used or not used, but pg_buffercache doesn't
know what percentage of it is used: a used buffer counts as 100% used, an
unused one as 0%.

 

 Is there any way to know at a certain moment with precision how much
shared

 memory expressed in Mb is used?

 

The precision is +/- 1 buffer.  I expect that trying to get more precision
out of the system would result in considerable performance degradation as
the data is collected and/or tracked.

 

-- 

Bill Moran

http://www.potentialtech.com
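Putting the quoted pieces together: the README table above says reldatabase
references pg_database.oid, so the unknown X is d.oid. A sketch of the
per-database buffer count (assumes the pg_buffercache module is installed):

```sql
-- Hedged sketch: join on pg_database.oid, which the README table says
-- reldatabase references.
SELECT d.datname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_database d ON b.reldatabase = d.oid
 GROUP BY d.datname
 ORDER BY buffers DESC
 LIMIT 10;
```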

 




Re: [GENERAL] [ADMIN] Increasing the shared memory

2007-04-18 Thread Sorin N. Ciolofan

Dear all,

Thanks for your advice. I'd like to ask where I can download the
pg_buffercache add-on, and also where I can find some documentation about
how to install it?

Thank you
Sorin
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bill Moran
Sent: Thursday, April 12, 2007 4:14 PM
To: Sorin N. Ciolofan
Cc: 'Shoaib Mir'; pgsql-general@postgresql.org; [EMAIL PROTECTED];
'Dimitris Kotzinos'
Subject: Re: [GENERAL] [ADMIN] Increasing the shared memory

In response to Sorin N. Ciolofan [EMAIL PROTECTED]:

 I've tried first to increase the number of shared buffers,
I
 doubled it, from 1000 to 2000 (16Mb)
 
 Unfortunately this had no effect.

The difference between 8M and 16M of shared buffers is pretty minor.
Try bumping it up to 250M or so and see if that helps.

You could install the pg_buffercache addon and monitor your buffer usage
to see how much is actually being used.

However, if the problem is write performance (which I'm inferring from your
message that it is) then increasing shared_buffers isn't liable to make a
significant improvement, unless the inserts are doing a lot of querying as
well.  With inserts, the speed is going to (most likely) be limited by the
speed of your disks.  I may have missed this information in earlier posts;
did you provide details of your hardware configuration?  Have you done tests
to find out what speed your disks are running?  Have you monitored IO
during your inserts to see if the IO subsystem is maxed out?

Also, the original problem you were trying to solve has been trimmed from
this thread, which makes me wonder if any of my advice is relevant.

 
  Then I increased the number of max_locks_per_transaction
 from 64 to 128 (these should assure about 12 800 lock slots) considering
 max_connections=100 and max_prepared_transaction=5  (Quote from the manual
-
 The shared lock table is created to track locks on
max_locks_per_transaction
 * (max_connections

http://www.postgresql.org/docs/8.2/interactive/runtime-config-connection.ht
 ml#GUC-MAX-CONNECTIONS  + max_prepared_transactions

http://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html
 #GUC-MAX-PREPARED-TRANSACTIONS ) objects (e.g. tables);)
 
  I've also restarted 
 
  This had also no effect. Because I can't see any
difference
 between the maximum input accepted for our application with the old
 configuration and the maximum input accepted now, with the new
 configuration. It looks like nothing happened. 
 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [ADMIN] Increasing the shared memory

2007-04-13 Thread Sorin N. Ciolofan

 I will simplify things in order to describe when the error occurred:
The input of the application is some data which is read from files on disk,
processed, and then inserted into the database in one transaction. This
total quantity of data represents an integer number of data files, n*q,
where q is a file that is always 60 kB and n is a positive integer.
For n=23 and shared_buffers=1000 and max_locks_per_transaction=64 the
Postgres throws the following exception:

org.postgresql.util.PSQLException: ERROR: out of shared memory
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1525)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1309)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:188)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:340)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:286)
    at gr.forth.ics.rdfsuite.rssdb.repr.SSRepresentation.createClassTable(SSRepresentation.java:1936)
    at gr.forth.ics.rdfsuite.rssdb.repr.SSRepresentation.store(SSRepresentation.java:1783)
    at gr.forth.ics.rdfsuite.swkm.model.db.impl.RDFDB_Model.storeSchema(RDFDB_Model.java:814)
    at gr.forth.ics.rdfsuite.swkm.model.db.impl.RDFDB_Model.store(RDFDB_Model.java:525)
    at gr.forth.ics.rdfsuite.services.impl.ImporterImpl.storeImpl(ImporterImpl.java:79)
    ... 50 more

For n=23 I estimated that we create and manipulate about 8000 tables.
One of the suggestions received here was that maybe there are not enough
lock slots per transaction; that's why I increased max_locks_per_transaction
(to 128) in order to be able to manipulate about 12 800 tables.

So, I doubled both shared_buffers and max_locks_per_transaction, and for
n=23 I received the same error. I would expect to see a difference, even a
little one, for example from n=23 to n=24, but the maximum quantity of data
accepted was the same.

Thank you very much,
With best regards
Sorin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 12, 2007 5:01 PM
To: Sorin N. Ciolofan
Cc: 'Shoaib Mir'; [EMAIL PROTECTED]; [EMAIL PROTECTED];
'Dimitris Kotzinos'
Subject: Re: [ADMIN] Increasing the shared memory 

Sorin N. Ciolofan [EMAIL PROTECTED] writes:
  This had also no effect. Because I can't see any
difference
 between the maximum input accepted for our application with the old
 configuration and the maximum input accepted now, with the new
 configuration. It looks like nothing happened. 

This is the first you've mentioned about *why* you wanted to increase the
settings, and what it sounds like to me is that you are increasing the
wrong thing.  What's the actual problem?

regards, tom lane





Re: [GENERAL] [ADMIN] Increasing the shared memory

2007-04-12 Thread Sorin N. Ciolofan
   Hello!

 

I first tried to increase the number of shared buffers: I
doubled it, from 1000 to 2000 (16 MB).

Unfortunately this had no effect.

 Then I increased the number of max_locks_per_transaction
from 64 to 128 (this should assure about 12 800 lock slots), considering
max_connections=100 and max_prepared_transactions=5.  (Quote from the manual:
"The shared lock table is created to track locks on max_locks_per_transaction
* (max_connections + max_prepared_transactions) objects (e.g. tables)"; see
http://www.postgresql.org/docs/8.2/interactive/runtime-config-connection.html#GUC-MAX-CONNECTIONS
and
http://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS )
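As a quick check of that formula, the configured lock-table capacity works
out to slightly more than the 12 800 quoted (a sketch using the values
above):

```shell
# Lock slots = max_locks_per_transaction * (max_connections + max_prepared_transactions)
max_locks_per_transaction=128
max_connections=100
max_prepared_transactions=5
echo $(( max_locks_per_transaction * (max_connections + max_prepared_transactions) ))
# prints 13440
```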

 I've also restarted Postgres.

 This also had no effect: I can't see any difference
between the maximum input accepted by our application with the old
configuration and the maximum input accepted now, with the new
configuration. It looks like nothing happened.

 

Thanks

Sorin

  _  

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shoaib Mir
Sent: Monday, April 02, 2007 6:02 PM
To: Sorin N. Ciolofan
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: [ADMIN] Increasing the shared memory

 

An extract from -- http://www.powerpostgresql.com/PerfList/ might help
you

shared_buffers: 

As a reminder: This figure is NOT the total memory PostgreSQL has to work
with. It is the block of dedicated memory PostgreSQL uses for active
operations, and should be a minority of your total RAM on the machine, since
PostgreSQL uses the OS disk cache as well. Unfortunately, the exact amount
of shared buffers required is a complex calculation of total RAM, database
size, number of connections, and query complexity. Thus it's better to go
with some rules of thumb in allocating, and monitor the server (particularly
pg_statio views) to determine adjustments.
On dedicated servers, useful values seem to be between 8MB and 400MB
(between 1000 and 50,000 for 8K page size). Factors which raise the desired
shared buffers are larger active portions of the database, large complex
queries, large numbers of simultaneous queries, long-running procedures or
transactions, more available RAM, and faster/more CPUs. And, of course,
other applications on the machine. Contrary to some expectations, allocating
much too much shared_buffers can actually lower performance, due to the time
required for scanning. Here are some examples based on anecdotes and TPC
tests on Linux machines:

* Laptop, Celeron processor, 384MB RAM, 25MB database: 12MB/1500
* Athlon server, 1GB RAM, 10GB decision-support database: 120MB/15000
* Quad PIII server, 4GB RAM, 40GB, 150-connection heavy transaction
processing database: 240MB/30000
* Quad Xeon server, 8GB RAM, 200GB, 300-connection heavy transaction
processing database: 400MB/50000

Please note that increasing shared_buffers, and a few other memory
parameters, will require you to modify your operating system's System V
memory parameters. See the main PostgreSQL documentation for instructions
on this.

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)



[GENERAL] Increasing the shared memory

2007-04-02 Thread Sorin N. Ciolofan
 

 Hello!

 

 I'd like to ask you if there is any Postgres configuration parameter (like
the ones defined in the postgresql.conf file) that could be used to increase
the shared memory for Postgres?

 

Thank you very much

With best regards,

Sorin



Re: [GENERAL] ERROR: out of shared memory

2007-04-02 Thread Sorin N. Ciolofan
Dear Mr. Tom Lane,

  From what I've read in the postgresql.conf file, I understand that with
each unit increase of the max_locks_per_transaction parameter, the shared
memory used also increases.
  But the shared memory appears to be already fully consumed, according to
the error message; or is the error message irrelevant and improper in this
situation?

With best regards,
Sorin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 27, 2007 4:59 PM
To: Sorin N. Ciolofan
Cc: pgsql-general@postgresql.org; pgsql-admin@postgresql.org;
pgsql-performance@postgresql.org
Subject: Re: [GENERAL] ERROR: out of shared memory 

Sorin N. Ciolofan [EMAIL PROTECTED] writes:
 It seems that the legacy application creates tables dynamically and the
 number of the created tables depends on the size of the input of the
 application. For the specific input which generated that error I've
 estimated a number of created tables of about 4000. 
 Could be this the problem?

If you have transactions that touch many of them within one transaction,
then yup, you could be out of locktable space.  Try increasing
max_locks_per_transaction.

regards, tom lane





Re: [GENERAL] [ADMIN] Increasing the shared memory

2007-04-02 Thread Sorin N. Ciolofan
 Thanks,

 

 

I have a value of 1000 set for shared_buffers; does this mean
that I use 8 kB x 1000 = 8 MB of shared memory?



The definition from the manual is quite confusing:

 

shared_buffers (integer) 

Sets the amount of memory the database server uses for shared memory
buffers. The default is typically 32 megabytes (32MB), but may be less if
your kernel settings will not support it (as determined during initdb). This
setting must be at least 128 kilobytes and at least 16 kilobytes times
max_connections.
(http://www.postgresql.org/docs/current/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS )

 

What does the integer represent? The number of shared buffers? If so, what
size does each shared buffer have?

"The default is typically 32 megabytes" suggests that this integer could
also represent a number of megabytes?!?

In the postgresql.conf file there is an ambiguous comment that could suggest
that each shared buffer is 8 kB.

So, what is the meaning of this integer?
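For what it's worth, the integer appears to count buffers, each one disk
block (BLCKSZ, 8 kB in a default build), which matches the 8 kB comment in
postgresql.conf. The arithmetic for the settings discussed in this thread:

```shell
# shared_buffers counts buffers of BLCKSZ bytes (8 kB in a default build).
block_kb=8
echo "shared_buffers=1000 -> $(( 1000 * block_kb )) kB (~8 MB)"
echo "shared_buffers=2000 -> $(( 2000 * block_kb )) kB (~16 MB)"
```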

 

Thanks.

S.

 

  _  

From: Shoaib Mir [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 02, 2007 1:01 PM
To: Sorin N. Ciolofan
Cc: pgsql-general@postgresql.org; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Increasing the shared memory

 

I guess shared_buffers (in postgresql.conf file) will help you here if you
have properly setup your kernel.SHMMAX value.

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com )

On 4/2/07, Sorin N. Ciolofan [EMAIL PROTECTED] wrote:

 

 Hello!

 

 I'd like to ask you if there is any Postgres configuration parameter (like
the ones defined in the postgresql.conf file) that could be used to increase
the shared memory for Postgres?

 

Thank you very much

With best regards,

Sorin

 



Re: [GENERAL] ERROR: out of shared memory

2007-03-29 Thread Sorin N. Ciolofan
Dear Mr. Tom Lane,

Thank you very much for your answer.
It seems that the legacy application creates tables dynamically, and the
number of created tables depends on the size of the application's input. For
the specific input which generated that error I've estimated about 4000
created tables.
Could this be the problem?

With best regards,
Sorin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 27, 2007 6:37 AM
To: Sorin N. Ciolofan
Cc: pgsql-general@postgresql.org; pgsql-admin@postgresql.org;
pgsql-performance@postgresql.org
Subject: Re: [GENERAL] ERROR: out of shared memory 

Sorin N. Ciolofan [EMAIL PROTECTED] writes:
I have to manage an application written in java which call another
module
 written in java which uses Postgre DBMS in a Linux environment. I'm new to
 Postgres. The problem is that for large amounts of data the application
 throws an:
  org.postgresql.util.PSQLException: ERROR: out of shared memory

AFAIK the only very likely way to cause that is to touch enough
different tables in one transaction that you run out of lock entries.
While you could postpone the problem by increasing the
max_locks_per_transaction setting, I suspect there may be some basic
application misdesign involved here.  How many tables have you got?

regards, tom lane





[GENERAL] ERROR: out of shared memory

2007-03-26 Thread Sorin N. Ciolofan
 

Hello!

 

   I have to manage an application written in Java which calls another
module written in Java which uses the Postgres DBMS in a Linux environment.
I'm new to Postgres. The problem is that for large amounts of data the
application throws an:

 org.postgresql.util.PSQLException: ERROR: out of shared memory

 

Please, do you have any idea why this error appears, and what can I do to
fix it?

Are there some Postgres-related parameters I should tune (if so, which
parameters), or is it something related to the Linux OS?

 

Thank you very much

With best regards,

Sorin