I have an application which logs interactions on a regular basis. The interaction details (their types, etc.) are held in one table (tblitem) and the 'hits' are held in tbltotal.
I have written a function to get the total 'hits' during a period and need to collect together the information from
On 11/06/2004 12:14 Nick Trainor wrote:
[snip]
However, when I try to ORDER the results, it takes 'forever':
EXPLAIN ANALYSE SELECT t1.value1,t1.value2,
getday_total('1','23',t1.id::integer,'31','59','2','2004','182','153','6','2004','0')
FROM tblitem t1 WHERE t1.type_id=23::int2 and
I'm running PostgreSQL 7.2 on a Linux Red Hat 8.0 box with 2GB of RAM.
When I boot up the system, this is the top situation:
11:59am up 4 min, 1 user, load average: 0.37, 0.26, 0.11
77 processes: 74 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 0.3% user, 0.7% system, 0.0% nice,
Domenico Sgarbossa wrote:
I'm running PostgreSQL 7.2 on a Linux Red Hat 8.0 box with 2GB of RAM.
When I boot up the system, this is the top situation:
11:59am up 4 min, 1 user, load average: 0.37, 0.26, 0.11
77 processes: 74 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 0.3% user, 0.7%
Thanks for your advice!
As I imagine, the system releases the shared memory dynamically. The problem is that when, after a vacuum, users run my application (ERP), the system swaps to disk, so overall performance degrades very quickly until the system is so slow that I need to reboot the
Domenico Sgarbossa [EMAIL PROTECTED] writes:
Thanks for your advice!
It seems that the cached memory isn't released by the system... so the system begins to swap to disk!
If you really think this is true, there should be a process that is
holding on to the memory. Use 'ps' to find that
Hi Tom,
Here is gdb info.
This has happened in our production database
3 times this week. It's totally unacceptable.
I have written a drop/create script to
avoid REINDEX. However, drop/create
seems to me to take more time than REINDEX
for the whole database.
Your help is greatly appreciated!
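The drop/create approach can be sketched as follows (a minimal sketch, not the original script; the index name and column are hypothetical):

```sql
-- Rebuild one index by dropping and recreating it instead of REINDEX.
-- Hypothetical names: idx_item_type on tblitem(type_id).
BEGIN;
DROP INDEX idx_item_type;
CREATE INDEX idx_item_type ON tblitem (type_id);
COMMIT;
```

Whether this beats REINDEX depends on the server version and on which locks each command holds while the index is rebuilt.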
Litao Wu [EMAIL PROTECTED] writes:
(gdb) bt
#0 0x420e8bb2 in semop () from /lib/i686/libc.so.6
#1 0x080ffa28 in PGSemaphoreLock ()
#2 0x08116432 in LWLockAcquire ()
#3 0x0810f572 in LockBuffer ()
#4 0x0807dea3 in _bt_getbuf ()
#5 0x080813ec in _bt_leafbuild ()
#6 0x080816a6 in
Nick Trainor [EMAIL PROTECTED] writes:
What am I missing here?
The ORDER BY query has to evaluate the function at *every* row of the
table before it can sort. The other query was only evaluating the
function at twenty rows.
regards, tom lane
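One common workaround (a sketch, assuming the ORDER BY is on plain columns rather than on the function result, and that the output is ultimately limited to twenty rows) is to sort and LIMIT in a subquery first, so the expensive function runs only on the survivors:

```sql
-- Sketch: restrict and sort on plain columns first, then call the
-- expensive function only on the rows that survive the LIMIT.
SELECT sub.value1, sub.value2,
       getday_total('1','23',sub.id::integer,'31','59','2','2004',
                    '182','153','6','2004','0')
FROM (SELECT id, value1, value2
      FROM tblitem
      WHERE type_id = 23::int2
      ORDER BY value1
      LIMIT 20) AS sub
ORDER BY sub.value1;
```

If the sort key is the function's own result, this rewrite does not apply: every row must then be evaluated before sorting, exactly as described above.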
I am batch inserting insert statements into a database with fsync = on.
My single disk system is on a 10k drive...even though I am inside a
transaction there is at least 1 file sync per row insert.
Is this normal? On my hardware, which is pretty quick, I am limited to
about 50 inserts/sec. If
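If the inserts are not actually wrapped in a single transaction, each INSERT commits (and fsyncs) on its own. A minimal sketch of explicit batching (the column names on tbltotal are hypothetical):

```sql
-- One fsync per COMMIT instead of one per row:
BEGIN;
INSERT INTO tbltotal (item_id, hits) VALUES (1, 10);
INSERT INTO tbltotal (item_id, hits) VALUES (2, 12);
-- ... many more rows ...
COMMIT;
```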
On Fri, 2004-06-11 at 14:40, Merlin Moncure wrote:
I am batch inserting insert statements into a database with fsync = on.
My single disk system is on a 10k drive...even though I am inside a
transaction there is at least 1 file sync per row insert.
Which filesystem?
PostgreSQL isn't issuing
As I understand it, sync() is never called anymore. mdsync() hits all the files one by one with an fsync. My understanding of the commit process is that 30 tps is quite reasonable for my hardware.
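As a back-of-envelope check (an assumption, not a measurement: each commit waits for at least one full rotation of the 10k rpm disk before the WAL fsync can complete):

```sql
SELECT 10000 / 60.0     AS rotations_per_sec,  -- ~166.7
       10000 / 60.0 / 5 AS commits_per_sec;    -- ~33 if a commit costs ~5 rotations
```

On that assumption, a rate in the tens of commits per second per spindle is consistent with the hardware.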
Sorry. I didn't see the version in the subject and assumed 7.4 on a
Linux machine with
We have another production database,
which is similar to this one.
It has never had the REINDEX blocking problem.
One difference between these two databases
is that the one having the REINDEX problem
is using the NTFS file system.
Could this be the root of the problem?
Thanks,
--- Tom Lane [EMAIL PROTECTED]
Merlin Moncure [EMAIL PROTECTED] writes:
I am batch inserting insert statements into a database with fsync = on.
My single disk system is on a 10k drive...even though I am inside a
transaction there is at least 1 file sync per row insert.
Are you certain you're inside a transaction?
Tracing
Litao Wu [EMAIL PROTECTED] writes:
One difference between these two databases
is that the one having the REINDEX problem
is using the NTFS file system.
Oh? That's interesting.
Could this be the root of the problem?
I would not expect it to show this particular symptom --- if the
backtrace is accurate.
Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
I am batch inserting insert statements into a database with fsync = on.
My single disk system is on a 10k drive...even though I am inside a
transaction there is at least 1 file sync per row insert.
Are you certain you're inside a
Merlin Moncure [EMAIL PROTECTED] writes:
Tom Lane wrote:
Does it help if you change FlushBuffer to release buffer lock while
flushing xlog?
Putting your change in resulted in about a 15% increase in insert
performance. There may be some quirky things going on here with NTFS...
I did an