In reading what you are describing, don't you think PG 9 goes a long way to
helping you out?
On Sat, Nov 13, 2010 at 12:53 AM, Craig Ringer wrote:
> On 11/12/2010 02:25 AM, Kyriacos Kyriacou wrote:
>
> The
>> result is huge fragmentation of the table space, unnecessary updates
>> in all affected indexes ...
On 11/12/2010 02:25 AM, Kyriacos Kyriacou wrote:
The
result is huge fragmentation of the table space, unnecessary updates
in all affected indexes, unnecessary costly I/O operations, poor
performance on SELECTs that retrieve big record sets (i.e. reports, etc.)
and slower updates.
Yep. It's
2010/11/12 Tom Lane :
> Robert Haas writes:
>> On Fri, Nov 12, 2010 at 4:15 AM, Cédric Villemain wrote:
I'm wondering if we could do something with a formula like 3 *
amount_of_data_to_read / (3 * amount_of_data_to_read +
effective_cache_size) = percentage NOT cached. That is, if
On Thu, Oct 28, 2010 at 9:23 AM, Tom Lane wrote:
> Jon Nelson writes:
>> On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane wrote:
>>> It thinks it's faster, or there is some reason why it *can't* use the
>>> index, like a datatype mismatch. You could tell which by trying "set
>>> enable_seqscan = off"
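For anyone trying the same test, it is just a pair of session-level
settings; the table and query below are placeholders, not anything from
the original report:

    -- Push the planner away from sequential scans for this session only,
    -- then re-run the problem query. If it still refuses the index,
    -- suspect a datatype mismatch rather than a costing decision.
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_column = 42;
    RESET enable_seqscan;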
Hi,
I have a question about the behavior of autovacuum. When I have a big
table A which is being processed by autovacuum, I also manually use
(full) vacuum to clean another table B. Then I found that I always got
something like “found 0 removable, 14283 nonremovable row”. However,
if I stop the autovacuum ...
This is my first post in this mailing list and I would like to raise an
issue that in my opinion is causing performance issues of PostgreSQL
especially in a transaction processing environment. In my company we are
using PostgreSQL for the last 8 years for our in-house developed billing
system (telecom ...
Thanks for your answer! And I am sorry for treating the question as a bug and
sending it to the 'bugs' mailing list.
But I have doubts about your answer. I think the essence of the problem is when the planner
selects 'Bitmap Index Scan' and how the planner computes the cost of 'Bitmap
Index Scan'.
Tom Lane s
On Sun, Nov 7, 2010 at 10:03 PM, Cédric Villemain wrote:
> 2010/11/8 Mark Rostron :
>>> >
>>> > What is the procedure that postgres uses to decide whether or not a
>>> > table/index block will be left in the shared_buffers cache at the end
>>> > of the operation?
>>> >
>>>
>>> The only special cas
On Fri, Nov 12, 2010 at 11:43 AM, Tom Lane wrote:
> I think his point is that we already have a proven formula
> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.
> The problem is to figure out what numbers to apply the M-L formula to.
I'm not sure that's really measuring th
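For reference, the Mackert-Lohmann estimate as PostgreSQL applies it
(roughly what costsize.c's index_pages_fetched() does, quoted from
memory, so treat it as a sketch): with N index probes, T heap pages and
b = effective_cache_size in pages,

    pages_fetched = min( 2*T*N / (2*T + N), T )       if T <= b
    pages_fetched = 2*T*N / (2*T + N)                 if T > b and N <= lim
    pages_fetched = b + (N - lim) * (T - b) / T       if T > b and N > lim
    where lim = 2*T*b / (2*T - b)

The open question above is what numbers to plug in for N and b, not the
shape of the curve.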
On Nov 12, 2010, at 9:13 AM, Kyriacos Kyriacou wrote:
> My suggestion had just a single difference from what MVCC currently
> does (btw, I never said that MVCC is bad).
>
> NOW ===> on COMMIT previous version record is expired and the
> new version record (created in new dynamically
HOT also usually requires setting FILLFACTOR to something other than the
default for your table, so that there is guaranteed room in the page to modify
data without allocating a new page.
If you have fillfactor=75, then basically this proposal is already done -- each
page has 25% temp space for
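A concrete way to reserve that space, with a made-up table name (the
setting only affects pages written after it is changed, so existing data
needs a rewrite such as VACUUM FULL or CLUSTER to pick it up):

    -- Keep ~25% of each heap page free so HOT can place the new row
    -- version on the same page as the old one.
    ALTER TABLE customer_balance SET (fillfactor = 75);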
On Fri, Nov 12, 2010 at 9:19 AM, Ben Chobot wrote:
> On Nov 12, 2010, at 8:14 AM, Kyriacos Kyriacou wrote:
>
> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
> database size is over 200Gb so upgrade is not an easy decision!
>
> This is why we have slony, so you can slowly up
OK, in general you have to pay for MVCC one way or another. Many
databases make you pay as you go, so to speak, by storing all the MVCC
info in a log to be applied at some future date. With other databases
you pay later, by storing all the MVCC info in the table itself. Both
have similar costs, but on
On Fri, Nov 12, 2010 at 9:22 AM, Thom Brown wrote:
> On 12 November 2010 16:14, Kyriacos Kyriacou wrote:
>>
>>
>>
>> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
>> database size is over 200Gb so upgrade is not an easy decision!
>>
>> I have it in my plans, so in the next few ...
My suggestion had just a single difference from what MVCC currently
does (btw, I never said that MVCC is bad).
NOW ===> on COMMIT previous version record is expired and the
new version record (created in new dynamically allocated
spot, as you said) is set as "active"
MY
Robert Haas writes:
> On Fri, Nov 12, 2010 at 4:15 AM, Cédric Villemain wrote:
>>> I'm wondering if we could do something with a formula like 3 *
>>> amount_of_data_to_read / (3 * amount_of_data_to_read +
>>> effective_cache_size) = percentage NOT cached. That is, if we're
>>> reading an amount
To be honest I just now read about HOT (Heap-Only Tuples) and it
seems that it will help a lot. Thanks for your point.
Kyriacos
-Original Message-
From: Kenneth Marshall [mailto:k...@rice.edu]
Sent: Friday, November 12, 2010 6:22 PM
To: Kyriacos Kyriacou
Cc: Thom Brown; pgsql-performance
On Fri, Nov 12, 2010 at 4:15 AM, Cédric Villemain wrote:
>> I'm wondering if we could do something with a formula like 3 *
>> amount_of_data_to_read / (3 * amount_of_data_to_read +
>> effective_cache_size) = percentage NOT cached. That is, if we're
>> reading an amount of data equal to effective_cache_size ...
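Plugging some made-up numbers into that heuristic (this is only my
reading of the proposal, nothing that exists in the code):

    reading 1GB with effective_cache_size = 3GB:
        3*1 / (3*1 + 3) = 0.50  -> assume 50% not cached
    reading an amount equal to effective_cache_size (D = E):
        3*E / (3*E + E) = 0.75  -> assume 75% not cached
    reading far less than effective_cache_size (D << E):
        3*D / (3*D + E) -> 0    -> assume almost entirely cached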
"Kyriacos Kyriacou" writes:
> We are still using PostgreSQL 8.2.4.
In that case you don't have HOT updates, so it seems to me to be a
little premature to be proposing a 100% rewrite of the system to fix
your problems.
regards, tom lane
On Nov 12, 2010, at 8:14 AM, Kyriacos Kyriacou wrote:
> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
> database size is over 200Gb so upgrade is not an easy decision!
This is why we have slony, so you can slowly upgrade your 200Gb while you're
live and then only suffer
On 12 November 2010 16:14, Kyriacos Kyriacou wrote:
>
>
> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
> database size is over 200Gb so upgrade is not an easy decision!
>
> I have it in my plans, so in the next few months I will set up new
> servers and upgrade to version 9.
>
>
Ah, this is a very old version. If you can take advantage of
a version with HOT support, you should be much, much happier.
Cheers,
Ken
On Fri, Nov 12, 2010 at 06:14:00PM +0200, Kyriacos Kyriacou wrote:
>
>
> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
> database size
On 11/12/2010 7:47 AM, Kyriacos Kyriacou wrote:
SUGGESTION
--
1) When a raw UPDATE is performed, store all "new raw versions" either
in separate temporary table space
or in a reserved space at the end of each table (can be allocated
dynamically) etc
Your use of "raw" is confusi
This was done already as a workaround after identifying this problem.
I just gave it as an example.
-Original Message-
From: Vitalii Tymchyshyn [mailto:tiv...@gmail.com]
Sent: Friday, November 12, 2010 5:54 PM
To: Kyriacos Kyriacou
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] ...
We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
database size is over 200Gb so upgrade is not an easy decision!
I have it in my plans, so in the next few months I will set up new
servers and upgrade to version 9.
>> Which version of PostgreSQL are you basing this on?
>>
>>-
2010/11/12 Jon Nelson :
> On Thu, Nov 11, 2010 at 10:38 PM, Pavel Stehule wrote:
>> 2010/11/12 Jon Nelson :
>>> On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule wrote:
Hello
look at the EXPLAIN ANALYZE command. Probably your statistics are out of
date, and then the planner can be confused ...
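In practice that advice boils down to refreshing the statistics and then
comparing the planner's estimated row counts against the actual ones in
the plan; the object names here are placeholders:

    ANALYZE my_table;
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE some_column = 'some_value';
    -- Large gaps between estimated and actual rows in the output are the
    -- usual sign of stale or insufficient statistics.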
12.11.10 15:47, Kyriacos Kyriacou wrote:
PROBLEM DESCRIPTION
--
As an example, consider updating the "live" balance
of a customer for each phone call where the entire customer record has
to be duplicated again and again upon each call just for modifying a
numeric value!
H
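To make that example concrete with a purely hypothetical schema: each
charge touches a single numeric column, yet (pre-HOT) every such UPDATE
writes a complete new copy of the customer row plus new entries in every
index on the table:

    -- One small change, but the whole row is versioned again.
    UPDATE customers
       SET balance = balance - 0.05   -- charge for one call
     WHERE customer_id = 42;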
On Fri, Nov 12, 2010 at 07:34:36AM -0800, bricklen wrote:
> On Fri, Nov 12, 2010 at 5:52 AM, Kenneth Marshall wrote:
> >
> > I cannot speak to your suggestion, but it sounds like you are not
> > vacuuming enough and a lot of the bloat/randomization would be helped
> > by making use of HOT updates
On Fri, Nov 12, 2010 at 5:52 AM, Kenneth Marshall wrote:
>
> I cannot speak to your suggestion, but it sounds like you are not
> vacuuming enough and a lot of the bloat/randomization would be helped
> by making use of HOT updates in which the updates are all in the same
> page and are reclaimed al
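Once on a HOT-capable release (8.3 or later), whether updates really are
staying on the same page can be read off the statistics views; a minimal
sketch:

    -- n_tup_hot_upd / n_tup_upd is the fraction of updates done as HOT.
    SELECT relname, n_tup_upd, n_tup_hot_upd
      FROM pg_stat_user_tables
     ORDER BY n_tup_upd DESC
     LIMIT 10;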
On Thu, Nov 11, 2010 at 10:38 PM, Pavel Stehule wrote:
> 2010/11/12 Jon Nelson :
>> On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule wrote:
>>> Hello
>>>
>>> look at the EXPLAIN ANALYZE command. Probably your statistics are out of
>>> date, and then the planner can be confused. The EXPLAIN ANALYZE statement shows i
On 12 November 2010 13:47, Kyriacos Kyriacou wrote:
> This is my first post in this mailing list and I would like to raise an
> issue that in my opinion is causing performance issues of PostgreSQL
> especially in a transaction processing environment. In my company we are
> using PostgreSQL for the
On Fri, Nov 12, 2010 at 03:47:30PM +0200, Kyriacos Kyriacou wrote:
> This is my first post in this mailing list and I would like to raise an
> issue that in my opinion is causing performance issues of PostgreSQL
> especially in a transaction processing environment. In my company we are
> using Post
2010/11/12 Vitalii Tymchyshyn :
> 12.11.10 12:56, Cédric Villemain wrote:
>>
>> I suppose it was an answer to my mail, but I'm not sure... please keep
>> people CC'ed; it is easier to follow threads (at least for me)
>>
>
> OK
>>
>> 2010/11/12 Vitalii Tymchyshyn:
>>
>>>
>>> I'd say there are two
12.11.10 12:56, Cédric Villemain wrote:
I suppose it was an answer to my mail, but I'm not sure... please keep
people CC'ed; it is easier to follow threads (at least for me)
OK
2010/11/12 Vitalii Tymchyshyn:
I'd say there are two Qs here:
1) Modify costs based on information on how
I suppose it was an answer to my mail, but I'm not sure... please keep
people CC'ed; it is easier to follow threads (at least for me)
2010/11/12 Vitalii Tymchyshyn :
> I'd say there are two Qs here:
>
> 1) Modify costs based on information on how much of the table is in cache.
> It would be great if
I'd say there are two Qs here:
1) Modify costs based on information on how much of the table is in
cache. It would be great if this could be done, but I'd prefer to have
it as admin knobs (because of plan stability). Maybe both admin and
automatic ways can be followed with some parallel (disab
2010/11/11 Robert Haas :
> On Thu, Nov 11, 2010 at 2:35 PM, Tom Lane wrote:
>> Robert Haas writes:
>>> Yeah. For Kevin's case, it seems like we want the caching percentage
>>> to vary not so much based on which table we're hitting at the moment
>>> but on how much of it we're actually reading.
>
2010/11/11 Tom Lane :
> Robert Haas writes:
>> Yeah. For Kevin's case, it seems like we want the caching percentage
>> to vary not so much based on which table we're hitting at the moment
>> but on how much of it we're actually reading.
>
> Well, we could certainly take the expected number of pag