Daniel Haensse wrote:
Dear list,
I'm running postgres on a tomcat server. The vacuum is run every hour
(cronjob) which leads to a performance drop of the tomcat applications.
I played around with the renice command and I think it is possible to reduce
this effect with a renice. The problem is how can I figure out the PID
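A minimal sketch for the PID question, assuming Linux and a server that updates its process title (so the vacuuming backend shows "VACUUM" in ps output); the user name and priority value are illustrative:

```shell
# Find the backend currently running VACUUM and lower its CPU priority.
# Assumes the server runs as user "postgres" and exposes "VACUUM" in
# its process title.
PID=$(ps -u postgres -o pid=,args= 2>/dev/null \
      | awk '/VACUUM/ {print $1; exit}')
[ -n "$PID" ] && renice +10 -p "$PID"
```

Note that renice only lowers CPU priority; if the slowdown is I/O-bound, the vacuum_cost_delay setting (available since 8.0) usually helps more.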
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:
>> Sounds to me like you just need to up the total amount of open files
>> allowed by the operating system.
> It looks more like the opposite, here's the docs for
> max_files_per_process:
I think J
Ralph Mason wrote:
> I have several databases. They are each about 35gb in size and have about
> 10.5K relations (count from pg_stat_all_tables) in them. Pg_class is about
> 26k rows and the data directory contains about 70k files. These are busy
> machines, they run about 50 xactions per secon
Just adding a bit of relevant information:
We have the kernel file-max setting set to 297834 (256 per 4 MB of RAM).
/proc/sys/fs/file-nr tells us that we have roughly 13000 allocated handles,
of which zero are free at any given time.
On 10/05/07, Jeff Davis <[EMAIL PROTECTED]> wrote:
On Wed, 2007-05-09
>To me, that means that his machine is allowing the new FD to be created,
>but then can't really support that many so it gives an error.
files-max is 297834
ulimit is 100
(doesn't make sense but there you go)
What I don't really understand is with max_files_per_process at 800 we don't
get th
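For untangling which limit is actually binding (the shell's per-process limit, PostgreSQL's max_files_per_process, or the kernel-wide table), a quick Linux diagnostic; note that a bare `ulimit` reports maximum file size, so it is worth confirming that the 100 above came from `ulimit -n`:

```shell
# Per-process descriptor limit (what max_files_per_process must fit under)
ulimit -n
# Kernel-wide ceiling on open file handles
cat /proc/sys/fs/file-max
# Currently allocated handles, free handles, and the ceiling again
awk '{print "allocated:", $1, "free:", $2, "max:", $3}' /proc/sys/fs/file-nr
```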
On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:
> > 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,
> > fd.c:471
> >
> > 2007-05-09 03:07:50.091 GMT 0: LOG: 0: duration: 12.362 ms
> >
> > 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,
Hello,
You likely need to increase your file-max parameters using sysctl.conf.
Sincerely,
Joshua D. Drake
--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since
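A sketch of the sysctl change suggested above (the value is illustrative; size it to RAM and workload):

```
# /etc/sysctl.conf
fs.file-max = 400000
```

Run `sysctl -p` to apply it without a reboot.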
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of CAJ CAJ
Sent: 10 May 2007 12:26
To: Ralph Mason
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Woes
I have several databases. They are each about 35gb in size and have about
10.5K relations (count
2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,
fd.c:471
2007-05-09 03:07:50.091 GMT 0: LOG: 0: duration: 12.362 ms
2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,
postgres.c:1090
So we decreased the max_files_per_process to
I have several databases. They are each about 35gb in size and have about
10.5K relations (count from pg_stat_all_tables) in them. Pg_class is
about 26k rows and the data directory contains about 70k files. These are
busy machines, they run about 50 xactions per second (approx insert /
update
Hi,
I have several databases. They are each about 35gb in size and have about
10.5K relations (count from pg_stat_all_tables) in them. Pg_class is about
26k rows and the data directory contains about 70k files. These are busy
machines, they run about 50 xactions per second (approx insert /
How do you specify a log file for vacuum verbose to send info to? I have
verbose turned on but cannot see any log messages.
I have upped maintenance_work_mem setting from 32768 to 98304. This is on a
4 GB, 3.2 GHz Xeon, dual core, dual cpu with HTT turned on. I hope that
helps with vacuum times.
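On the logging question: VACUUM VERBOSE emits INFO-level messages to the client, which by default never reach the server log. Two common captures, with illustrative paths:

```
# crontab: redirect the client-side output of the hourly vacuum
0 * * * *  vacuumdb --all --verbose >> /var/log/pg_vacuum.log 2>&1

# or, in postgresql.conf, let the server log the INFO lines itself:
# log_min_messages = info
```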
[cc'ing to pgsql-hackers since this is looking like a contrib/intarray bug]
"Valentine Gogichashvili" <[EMAIL PROTECTED]> writes:
> here is the DT
That works fine for me in 8.2:
regression=# explain SELECT id, (myintarray_int4)
FROM myintarray_table_nonulls
WHERE ARRAY[8] <@ myintarray_int4;
Hi again,
the version of the server I am on is PostgreSQL 8.2.3 on i686-pc-linux-gnu,
compiled by GCC gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux)
here is the DT
CREATE TABLE "versionA".myintarray_table_nonulls
(
id integer,
myintarray_int4 integer[]
)
WITHOUT OIDS;
CREATE INDEX idx_no
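The CREATE INDEX line is cut off above; for reference, the two contrib/intarray index variants under discussion are typically declared like this (index names illustrative):

```sql
-- GiST intarray index (the one being migrated away from)
CREATE INDEX idx_myintarray_gist ON "versionA".myintarray_table_nonulls
    USING gist (myintarray_int4 gist__int_ops);

-- GIN intarray index (the one the planner is not choosing)
CREATE INDEX idx_myintarray_gin ON "versionA".myintarray_table_nonulls
    USING gin (myintarray_int4 gin__int_ops);
```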
On Wed, 9 May 2007, Jignesh Shah wrote:
But we still pay the penalty on WAL while writing them in the first place, I
guess... Is there an option to disable it? I can test how much the impact is,
I guess a couple of %s, but good to verify :-)
on modern CPUs where the CPU is significantly faster
But we still pay the penalty on WAL while writing them in the first place, I
guess... Is there an option to disable it? I can test how much the impact is,
I guess a couple of %s, but good to verify :-)
Regards,
Jignesh
Alvaro Herrera wrote:
Jignesh Shah wrote:
Now comes the thing that
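On the "can we disable it" question: the CRC on WAL records themselves has no GUC, but the knob usually discussed next to ZFS's own checksumming and atomic writes is full_page_writes, which controls the full page images added to WAL after each checkpoint. A hedged postgresql.conf sketch:

```
# postgresql.conf -- only safe if the filesystem genuinely guarantees
# that page writes are atomic (the ZFS argument here); otherwise a
# crash can leave torn pages that recovery cannot repair.
full_page_writes = off
```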
Peter Eisentraut <[EMAIL PROTECTED]> writes:
>> Are you using any nondefault planner settings?
> random_page_cost = 3
> effective_cache_size = 384MB
>> How big are these tables, anyway?
> email 35 MB
> email_header 421 MB
> mime_part 37 MB
Hmmm ... I see at least part of the proble
On May 7, 2007, at 11:10 PM, Yudhvir Singh Sidhu wrote:
Jim Nasby wrote:
On May 5, 2007, at 5:57 PM, Yudhvir Singh Sidhu wrote:
Problem: Vacuum times jump up from 45 minutes, or 1:30 minutes
to 6+ hours overnight, once every 1 to 3 months.
Solutions tried: db truncate - brings vacuum times d
On May 8, 2007, at 2:59 AM, [EMAIL PROTECTED] wrote:
one issue with journaling filesystems, if you journal the data as
well as the metadata you end up with a very reliable setup, however
it means that all your data needs to be written twice, once to the
journal, and once to the final locati
Jignesh Shah wrote:
> Now comes the thing that I am still exploring
> * Do we do checksum in WAL ? I guess we do .. Which means that we are
> now doing double checksumming on the data. One in ZFS and one in
> postgresql. ZFS does allow checksumming to be turned off (but on new
> blocks alloc
Hello Ian,
I have done some testing with PostgreSQL and ZFS on Solaris 10 11/06.
While I work for Sun, I don't claim to be a ZFS expert (for that matter,
not even a Solaris or PostgreSQL one).
Let's first look at the scenarios of how PostgreSQL can be deployed on
Solaris
First the Solaris Options
1.
On Wednesday, 9 May 2007 at 16:11, Tom Lane wrote:
> Well, there's something funny going on here. You've got for instance
>
>-> Index Scan using email_pkey on email (cost=0.00..3.85
> rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=280990) Index Cond:
> (email.email_id = eh_from
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> OK, upgrading to 8.2.4 fixes this odd plan choice. The query does run
> a bit faster too, but the cost estimate has actually gone up!
Yeah, because the former code was making an unrealistically small
estimate of the number of tuples found by BitmapAn
On Wednesday, 9 May 2007 at 16:29, Alvaro Herrera wrote:
> Peter Eisentraut wrote:
> > There's another odd thing about this plan from yesterday.
>
> Is this still 8.2.1? The logic to choose bitmap indexes was rewritten
> just before 8.2.4,
OK, upgrading to 8.2.4 fixes this odd plan choice. The que
2007/5/9, Gregory Stark <[EMAIL PROTECTED]>:
"Daniel Cristian Cruz" <[EMAIL PROTECTED]> writes:
> -> Nested Loop (cost=0.00..13187.94 rows=93 width=4) (actual
time=2.622..125.739 rows=50 loops=1)
> -> Seq Scan on u (cost=0.00..2838.80 rows=10289 width=4) (actual
time=0.012..9.86
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> I'm wondering what it wants to achieve with these three index scans:
See if you still get that with 8.2.4. choose_bitmap_and was fairly far
out in left field before that :-( ... particularly for cases with
partially redundant indexes available.
"Daniel Cristian Cruz" <[EMAIL PROTECTED]> writes:
> -> Nested Loop (cost=0.00..13187.94 rows=93 width=4) (actual
> time=2.622..125.739 rows=50 loops=1)
> -> Seq Scan on u (cost=0.00..2838.80 rows=10289 width=4) (actual
> time=0.012..9.863 rows=10291 loops=1)
> -> Index S
> AND '' IN ('', upper(b.nazwisko))
> AND '' IN ('', upper(b.imie))
> AND '7804485' IN ('', b.pesel)
> AND '' IN ('', upper(trim(b.dowseria)))
> AND '' IN ('', b.dowosnr)
> AND 0 IN (0, b.typkred)
> AND k.datazwrot IS NULL;
Hum, interesting. Most of the work Postgres does with IN clauses i
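For what it's worth, with the literals shown every term except the pesel test is trivially true ('' IN ('', x) and 0 IN (0, y) always hold), so the query the planner actually needs is much smaller. Building the statement dynamically in the application, emitting only the active predicates, is the usual cure (SELECT and FROM lists elided as in the fragment):

```sql
-- Equivalent query once the always-true terms are dropped; with only
-- these predicates left, an index on b.pesel becomes usable.
SELECT ...
FROM ...
WHERE b.pesel = '7804485'
  AND k.datazwrot IS NULL;
```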
Peter Eisentraut wrote:
> There's another odd thing about this plan from yesterday.
Is this still 8.2.1? The logic to choose bitmap indexes was rewritten
just before 8.2.4,
2007-04-17 16:03 tgl
* src/backend/optimizer/path/indxpath.c:
Rewrite choose_bitmap_and() to make it more robust in
Daniel Griscom wrote:
Thanks again for all the feedback. Running on a dual processor/core
machine is clearly a first step, and I'll look into the other
suggestions as well.
As per one of the last suggestions, do consider as well putting a dual
hard disk
(as in, independent hard disks, to allo
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> On Tuesday, 8 May 2007 at 17:53, Tom Lane wrote:
>> Hmm, I'd have expected it to discount the repeated indexscans a lot more
>> than it seems to be doing for you. As an example in the regression
>> database, note what happens to the inner indexscan co
On Wed, 9 May 2007, Valentine Gogichashvili wrote:
I have experimented quite a lot. The first thing I did when starting the
attempt to move from GiST to GIN was to drop the GiST index and create a
brand new GIN index... after that did not bring the results, I started to
create all these tables with dif
Thanks again for all the feedback. Running on a dual processor/core
machine is clearly a first step, and I'll look into the other
suggestions as well.
Thanks,
Dan
--
Daniel T. Griscom [EMAIL PROTECTED]
Suitable Systems http://www.suitable.com/
1 Centre Street, Suite 2
I'm having something weird too...
Look:
Nested Loop Left Join (cost=93.38..7276.26 rows=93 width=58) (actual
time=99.211..4804.525 rows=2108 loops=1)
-> Hash Join (cost=93.38..3748.18 rows=93 width=4) (actual
time=0.686..20.632 rows=45 loops=1)
Hash Cond: ((u.i)::text = (m.i)::text)
I have experimented quite a lot. The first thing I did when starting the
attempt to move from GiST to GIN was to drop the GiST index and create a
brand new GIN index... after that did not bring the results, I started to
create all these tables with different sets of indexes and so on...
So the answer to
Do you have both indexes (GiST, GIN) on the same table ?
On Wed, 9 May 2007, Valentine Gogichashvili wrote:
Hello all,
I am trying to move from GiST intarray index to GIN intarray index, but my
GIN index is not being used by the planner.
The normal query is like that
select *
from sourcetabl
That's email from my friend.
Any hint?
Original Message
Subject: bug
Date: Wed, 09 May 2007 15:03:00 +0200
From: Michal Postupalski
To: Andrzej Zawadzki
We've just changed our database from 8.1 to 8.2 and we are
grief-stricken about very poor performance with queries using claus
Hello all,
I am trying to move from GiST intarray index to GIN intarray index, but my
GIN index is not being used by the planner.
The normal query is like that
select *
from sourcetablewith_int4
where ARRAY[myint] <@ myint_array
and some_other_filters
(with GiST index everything works fine,
There's another odd thing about this plan from yesterday.
Query:
SELECT
eh_subj.header_body AS subject,
count(distinct eh_from.header_body)
FROM
email JOIN mime_part USING (email_id)
JOIN email_header eh_subj USING (e
On Tuesday, 8 May 2007 at 17:53, Tom Lane wrote:
> Hmm, I'd have expected it to discount the repeated indexscans a lot more
> than it seems to be doing for you. As an example in the regression
> database, note what happens to the inner indexscan cost estimate when
> the number of outer tuples grow
<[EMAIL PROTECTED]> writes:
> with a standard 7200 rpm drive ~150 transactions/sec sounds about right
>
> to really speed things up you want to get a disk controller with a battery
> backed cache so that the writes don't need to hit the disk to be safe.
Note that this is only if you're counting t
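The ~150 tx/s ballpark follows directly from rotational latency: without a battery-backed cache, each synchronous commit waits for the WAL block to come around under the head, at most once per platter rotation. A quick sanity check:

```shell
# 7200 revolutions per minute / 60 = revolutions (and thus commit
# opportunities) per second for a single drive with no write cache
echo $((7200 / 60))   # prints 120
```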
"Alvaro Herrera" <[EMAIL PROTECTED]> writes:
> Steinar H. Gunderson wrote:
>
>> Shoot me if I'm wrong here, but doesn't VACUUM ANALYZE check _all_ tuples,
>> as compared to the random selection employed by ANALYZE?
>
> You are wrong, but it won't be me the one to shoot you.
>
> There have been noi
On Wed, 9 May 2007, Steinar H. Gunderson wrote:
On Wed, May 09, 2007 at 01:57:51AM -0700, [EMAIL PROTECTED] wrote:
given that RAID, snapshots, etc are already in the linux kernel, I suspect
that what will need to happen is for the filesystem to be ported without
those features and then the user
On Wed, May 09, 2007 at 01:57:51AM -0700, [EMAIL PROTECTED] wrote:
> given that RAID, snapshots, etc are already in the linux kernel, I suspect
> that what will need to happen is for the filesystem to be ported without
> those features and then the userspace tools (that manipulate the volumes )
Heikki Linnakangas writes:
> Guillaume Cottenceau wrote:
> > According to documentation[1], VACUUM FULL's only benefit is
> > returning unused disk space to the operating system; am I correct
> > in assuming there's also the benefit of optimizing the
> > performance of scans, because rows are phy
On Tue, 8 May 2007, Greg Smith wrote:
On Tue, 8 May 2007, Luke Lonergan wrote:
From discussions with the developers, the biggest issue is a technical
one: the Linux VFS layer makes the [ZFS] port difficult.
Difficult on two levels. First you'd have to figure out how to make it work
at al