Hi Cosimo,
I had read that before, so you are right. The amount of memory being used
could run much higher than I wrote.
In my case, I know that not all the connections are busy all the time
(this isn't a web application with thousands of users connecting to a pool)
so not all active connec
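To put a number on it, the worst-case arithmetic (assuming the sort_mem of 4096 and the 100 connections discussed elsewhere in this thread) works out like this:

    sort_mem 4096 KB = 4MB per sort operation, not per connection
    100 connections all sorting at once: 100 x 4MB = 400MB
    a query with several sort steps can use several times 4MB by itself

so the realistic figure depends on how many backends are actually busy at the same moment.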
Hi Amrit,
I'm sorry to hear about the disaster in Thailand. I live in a tsunami-prone
area myself :-(
I think that you have enough information to solve your problem now, but it
will just take some time and testing. When you have eliminated the excessive
swapping and tuned your system as best yo
Hi,
These are the /etc/sysctl.conf settings that I am planning to use.
Coincidentally, these are the settings recommended by Oracle. If anything,
they are on the generous side, I think.
file-max 65536 (for 2.2 and 2.4 kernels)
kernel.shmall 134217728 (=128MB)
kernel.shmmax 268435456
fs.file-max 65536
By
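For what it's worth, laid out as an /etc/sysctl.conf fragment the plan would look like the sketch below. One caveat: on Linux, kernel.shmall is counted in pages rather than bytes, so the "=128MB" reading only holds if you treat the value as bytes; check your kernel's documentation before relying on it.

    # /etc/sysctl.conf -- same values as above
    fs.file-max = 65536
    kernel.shmmax = 268435456    # largest single shared memory segment, 256MB
    kernel.shmall = 134217728    # NB: the kernel counts this in pages

    # apply without a reboot:
    sysctl -p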
Pallav Kalva <[EMAIL PROTECTED]> writes:
> >> I had set up a cronjob a couple of weeks ago to run vacuum analyze every 3
> >> hours on this table and still my stats are totally wrong. This is affecting
> >> the performance of the queries running on this table very badly.
> >> How can I fix this pro
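For anyone reading along, the cronjob being described would look something like this (the table and database names are hypothetical):

    # crontab entry: vacuum analyze the one table every 3 hours
    0 */3 * * *  vacuumdb --analyze --table mytable mydb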
Pallav Kalva wrote:
John A Meinel wrote:
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly, and the stats on this table in pg_class are totally
wrong. I used to run vacuumdb on the whole database once daily, and
when I posted the same pr
John A Meinel wrote:
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly, and the stats on this table in pg_class are totally
wrong. I used to run vacuumdb on the whole database once daily, and
when I posted the same problem of wrong stats
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly, and the stats on this table in pg_class are totally wrong.
I used to run vacuumdb on the whole database once daily, and when I
posted the same problem of wrong stats in the pg_class most
Hi Everybody.
I have a table in my production database which gets updated
regularly, and the stats on this table in pg_class are totally wrong. I
used to run vacuumdb on the whole database once daily, and when I posted
the same problem of wrong stats in the pg_class most of them from this
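For anyone following along, the mismatch is easy to see by comparing the planner's statistics against an actual count (the table name is hypothetical):

    SELECT relname, reltuples, relpages
      FROM pg_class
     WHERE relname = 'mytable';

    SELECT count(*) FROM mytable;   -- compare with reltuples above

If reltuples stays wrong even right after a vacuum analyze, the usual suspects are the cronjob not actually running, or the table changing faster than the vacuum cycle.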
On Mon, 2004-12-27 at 22:31 +0700, Amrit Angsusingh wrote:
> [ [EMAIL PROTECTED] ]
> >
> > These are some settings that I am planning to start with for a 4GB RAM
> > dual
> > opteron system with a maximum of 100 connections:
> >
> >
> > shared_buffers 8192 (=67MB RAM)
> > sort_mem 4096 (=400MB RAM
Hi All,
I have a database running on Postgres 7.3.2. I am dumping the database schema, using pg_dump from Postgres 7.4.6, to restore it on the new Postgres version. The two Postgres versions are running on different machines. I did the dump and tried restoring it. I got an error message saying type "lo" is not d
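In case it helps, the usual shape of that migration looks something like the sketch below. The "lo" type comes from contrib/lo, which has to be installed in the new cluster before the schema will restore cleanly; the host name, database name, and contrib path here are made-up examples that vary by installation.

    # dump the schema with the NEWER pg_dump, pointed at the old 7.3.2 server
    pg_dump -h oldhost -s mydb > schema.sql

    # on the 7.4.6 machine, install the contrib "lo" type first
    psql -d mydb -f /usr/share/postgresql/contrib/lo.sql

    # then restore the schema
    psql -d mydb -f schema.sql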
On Dec 23, 2004, at 4:27 PM, Joshua D. Drake wrote:
IDE disks lie about write completion (this can be disabled on some
drives), whereas SCSI drives wait for the data to actually be written
before they report success. It is quite
easy to corrupt a PG database (or most any db, really) on an IDE drive. Check
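The check being suggested here is presumably of the drive's write cache; on Linux you can inspect and disable it with hdparm (the device name is just an example):

    # see whether the write cache is enabled
    hdparm -I /dev/hda | grep -i 'write cache'

    # turn it off so that fsync means what it says
    hdparm -W0 /dev/hda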
Hi,
These are some settings that I am planning to start with for a 4GB RAM dual
opteron system with a maximum of 100 connections:
shared_buffers 8192 (=67MB RAM)
sort_mem 4096 (=400MB RAM for 100 connections)
effective_cache_size 38 (@8KB = 3.04GB RAM)
vacuum_mem 32768 KB (=32MB RAM)
wal_buffers 64
checkp
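Spelled out as postgresql.conf lines, the same plan might look like this sketch. The effective_cache_size number below is only an illustration: pick however many 8KB pages match the RAM you expect the OS to use as disk cache, roughly 3GB on this box.

    shared_buffers = 8192            # 8192 x 8KB = 64MB
    sort_mem = 4096                  # KB per sort; ~400MB worst case at 100 connections
    vacuum_mem = 32768               # KB, i.e. 32MB
    effective_cache_size = 390000    # 8KB pages, ~3GB -- illustrative value
    wal_buffers = 64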