2010/11/17 Humair Mohammed :
>
> There are no indexes on the tables in either SQL Server or PostgreSQL - I am
> comparing apples to apples here. I ran ANALYZE on the PostgreSQL tables;
> after that, query times are still high: 42 seconds with COALESCE and
> 35 seconds with IS DISTINCT FROM.
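For reference, the two predicate styles being compared look roughly like
this (a sketch only; t1, t2 and col are hypothetical, since the actual
test tables are not shown in this digest):

    -- COALESCE rewrite: substitute a sentinel so NULLs become comparable
    select * from t1 join t2
        on coalesce(t1.col, '') <> coalesce(t2.col, '');

    -- SQL-standard null-safe comparison
    select * from t1 join t2
        on t1.col is distinct from t2.col;

Note the two are not strictly equivalent: the COALESCE form also treats a
real empty string as equal to NULL.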
On Tue, Nov 16, 2010 at 6:25 PM, Josh Berkus wrote:
> On 11/16/10 12:39 PM, Greg Smith wrote:
>> I want to next go through and replicate some of the actual database
>> level tests before giving a full opinion on whether this data proves
>> it's worth changing the wal_sync_method detection. So far
Josh Berkus writes:
>> Well, we're not going to increase the default to gigabytes, but we could
>> very probably increase it by a factor of 10 or so without anyone
>> squawking. It's been awhile since I heard of anyone trying to run PG in
>> 4MB shmmax. How much would a change of that size help?
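The settings in question can be inspected from any session; a quick
sketch (values shown are typical defaults of this era, not guaranteed):

    show shared_buffers;  -- often 32MB out of the box
    show wal_buffers;     -- 64kB by default in this era

    -- Both can only be raised in postgresql.conf followed by a restart,
    -- e.g.:
    --   shared_buffers = 256MB
    --   wal_buffers = 16MB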
I wrote:
> The fsync is associated with the write, which is not done with insert
> lock held. We're not quite that dumb.
But wait --- are you thinking of the call path where a write (and
possible fsync) is forced during AdvanceXLInsertBuffer because there's
no WAL buffer space left? If so, that'
> Well, we're not going to increase the default to gigabytes, but we could
> very probably increase it by a factor of 10 or so without anyone
> squawking. It's been awhile since I heard of anyone trying to run PG in
> 4MB shmmax. How much would a change of that size help?
Last I checked, though
On Wednesday 17 November 2010 02:04:28 Tom Lane wrote:
> Andres Freund writes:
> > On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:
> >> Well, there's a forced fsync after writing the last page of an xlog
> >> file, but I don't believe that proves that more than 16MB of xlog
> >> buffers is useless.
Andres Freund writes:
> On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:
>> Well, there's a forced fsync after writing the last page of an xlog
>> file, but I don't believe that proves that more than 16MB of xlog
>> buffers is useless. Other processes could still be busy filling the
>> buffers.
On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:
> Andres Freund writes:
> > On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:
> >> Well, we're not going to increase the default to gigabytes
> >
> > Especially not as I don't think it will have any effect after
> > wal_segment_size as that will force a write-out anyway.
Andres Freund writes:
> On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:
>> Well, we're not going to increase the default to gigabytes
> Especially not as I don't think it will have any effect after
> wal_segment_size
> as that will force a write-out anyway. Or am I misremembering the
>
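The interaction Andres describes shows up in the relevant settings; a
small sketch (exact output depends on the build):

    show wal_buffers;       -- WAL held in shared memory before write-out
    show wal_segment_size;  -- 16MB unless built with --with-wal-segsize

    -- When the last page of a segment is written, that tail gets
    -- fsync'd, so buffering beyond one segment cannot defer that
    -- particular write, which is the point at issue in this subthread.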
On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:
> Josh Berkus writes:
> > On 11/16/10 12:39 PM, Greg Smith wrote:
> >> I want to next go through and replicate some of the actual database
> >> level tests before giving a full opinion on whether this data proves
> >> it's worth changing the wal_sync_method detection.
Josh Berkus wrote:
On 11/16/10 12:39 PM, Greg Smith wrote:
I want to next go through and replicate some of the actual database
level tests before giving a full opinion on whether this data proves
it's worth changing the wal_sync_method detection. So far I'm torn
between whether that's the right approach, or
On Wed, Nov 17, 2010 at 01:31, Tom Lane wrote:
> Well, we're not going to increase the default to gigabytes, but we could
> very probably increase it by a factor of 10 or so without anyone
> squawking. It's been awhile since I heard of anyone trying to run PG in
> 4MB shmmax. How much would a change of that size help?
Josh Berkus writes:
> On 11/16/10 12:39 PM, Greg Smith wrote:
>> I want to next go through and replicate some of the actual database
>> level tests before giving a full opinion on whether this data proves
>> it's worth changing the wal_sync_method detection. So far I'm torn
>> between whether that's the right approach, or
On 11/16/10 12:39 PM, Greg Smith wrote:
> I want to next go through and replicate some of the actual database
> level tests before giving a full opinion on whether this data proves
> it's worth changing the wal_sync_method detection. So far I'm torn
> between whether that's the right approach, or
On Tue, Nov 16, 2010 at 3:39 PM, Greg Smith wrote:
> I want to next go through and replicate some of the actual database level
> tests before giving a full opinion on whether this data proves it's worth
> changing the wal_sync_method detection. So far I'm torn between whether
> that's the right approach, or
Time for a deeper look at what's going on here...I installed RHEL6 Beta
2 yesterday, on the presumption that since the release version just came
out this week it was likely the same version Marti tested against.
Also, it was the one I already had a DVD to install for. This was on a
laptop with
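For anyone reproducing this, the parameter being probed is visible
directly; a sketch (option availability varies by platform):

    show wal_sync_method;  -- whatever the build's detection selected

    -- On Linux the usual candidates are open_datasync, fdatasync,
    -- fsync, and open_sync; to test an alternative, set it in
    -- postgresql.conf and reload:
    --   wal_sync_method = fdatasync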
vindex+lists-pgsql-performa...@apartia.org (Louis-David Mitterrand)
writes:
> I have to collect lots of prices from web sites and keep track of their
> changes. What is the best option?
>
> 1) one 'price' row per price change:
>
> create table price (
> id_price primary key,
>
In article <4ce2688b.2050...@tweakers.net>,
Arjen van der Meijden writes:
> On 16-11-2010 11:50, Louis-David Mitterrand wrote:
>> I have to collect lots of prices from web sites and keep track of their
>> changes. What is the best option?
>>
>> 1) one 'price' row per price change:
>>
>> create
Excerpts from kuopo's message of vie nov 12 05:01:24 -0300 2010:
> Hi,
>
> I have a question about the behavior of autovacuum. When I have a big
> table A which is being processed by autovacuum, I also manually use
> (full) vacuum to clean another table B. Then I found that I always got
> something
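The excerpt is cut off before the symptom, but the lock behavior of the
two commands is the usual first suspect; a sketch (table names a and b
from the message):

    -- Plain VACUUM takes only SHARE UPDATE EXCLUSIVE, so it can run
    -- alongside autovacuum's work on another table:
    vacuum b;

    -- VACUUM FULL takes ACCESS EXCLUSIVE on b, so it waits for, and
    -- blocks, anything else using that table:
    vacuum full b;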
Hi,
> > If you mostly need the last few prices, I'd definitely go with the
> > first approach, it's much cleaner. Besides, you can store a date/time
> > per price, so you know when it changed.
We too were using such an approach for 'soft deletes'. Soon we realized
that using a one char valid flag t
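The one-char valid-flag pattern being described looks roughly like this
(a hypothetical sketch; column names are invented, and the message is
cut off before the drawback the poster found):

    create table price (
        id_price   serial primary key,
        id_product integer references product,
        price      integer,
        valid      char(1) not null default 'Y'  -- 'Y' current, 'N' superseded
    );

    -- A price change flips the old row and inserts the new one:
    update price set valid = 'N' where id_product = 42 and valid = 'Y';
    insert into price (id_product, price) values (42, 999);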
On Tue, Nov 16, 2010 at 12:18:35PM +0100, Arjen van der Meijden wrote:
> On 16-11-2010 11:50, Louis-David Mitterrand wrote:
> >I have to collect lots of prices from web sites and keep track of their
> >changes. What is the best option?
> >
> >1) one 'price' row per price change:
> >
> > create
On 16-11-2010 11:50, Louis-David Mitterrand wrote:
I have to collect lots of prices from web sites and keep track of their
changes. What is the best option?
1) one 'price' row per price change:
create table price (
id_price primary key,
id_product integer
2010/11/16 Louis-David Mitterrand :
> On Tue, Nov 16, 2010 at 12:03:29PM +0100, Pavel Stehule wrote:
>> Hello
>>
>> my opinion:
>>
>> @1 can be faster for access to last items with index
>> @2 can be more effective about data files length allocation
>
> Hi Pavel,
>
> What is "data files length allocation"?
On Tue, Nov 16, 2010 at 12:03:29PM +0100, Pavel Stehule wrote:
> Hello
>
> my opinion:
>
> @1 can be faster for access to last items with index
> @2 can be more effective about data files length allocation
Hi Pavel,
What is "data files length allocation" ?
Hello
my opinion:
@1 can be faster for access to last items with index
@2 can be more effective about data files length allocation
@1 or @2 - it depends on the number of prices per product. For a small
number (fewer than 100) I am strongly for @2 (if speed is important).
Personally I prefer @2.
Pavel
Hi,
I have to collect lots of prices from web sites and keep track of their
changes. What is the best option?
1) one 'price' row per price change:
create table price (
id_price primary key,
id_product integer references product,
price integer
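Completing option 1 as a sketch (the original message is cut off above;
the type on id_price, the changed_at column, and the index are additions
following Arjen's suggestion of storing a date/time per price, and are
not in the original schema):

    create table price (
        id_price   serial primary key,
        id_product integer references product,
        price      integer,
        changed_at timestamptz not null default now()
    );

    create index price_product_recent on price (id_product, changed_at);

    -- "access to last items with index" (Pavel's @1 case):
    select price from price
    where id_product = 42
    order by changed_at desc
    limit 1;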