On Thu, 1 May 2008, D'Arcy J.M. Cain wrote:
Whenever I see one of those I simply blackhole the server sending them.
Ah, the ever popular vigilante spam method. What if the message is coming
from, say, gmail.com, and it's getting routed so that you're not sure
which account is originating it
D'Arcy J.M. Cain wrote:
On Thu, 01 May 2008 01:16:00 -0300
"Marc G. Fournier" <[EMAIL PROTECTED]> wrote:
Someone on this list has one of those 'confirm your email' filters on their
Argh! Why do people think that it is OK to make their spam problem
everyone else's problem? Whenever I see one of those I simply blackhole the server sending them.
On Thu, 01 May 2008 01:16:00 -0300
"Marc G. Fournier" <[EMAIL PROTECTED]> wrote:
> Someone on this list has one of those 'confirm your email' filters on their
Argh! Why do people think that it is OK to make their spam problem
everyone else's problem? Whenever I see one of those I simply
blackhole the server sending them.
Gregory Stark <[EMAIL PROTECTED]> writes:
> This is something that needs some serious thought though. In the case of
> partitioned tables I've seen someone get badly messed up plans because they
> had a couple hundred partitions each of which estimated to return 1 row. In
> fact of course they all
Someone on this list has one of those 'confirm your email' filters on their
mailbox, which is bouncing back messages ... this is an attempt to try and
narrow down the address that is causing this ...
--
Marc G. Fournier    Hub.Org Hosting So
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Right. As a matter of policy we never estimate less than one matching
> row; and I've seriously considered pushing that up to at least two rows
> except when we see that the query condition matches a unique constraint.
> You can get really bad join plans
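As a rough illustration of the one-row floor Tom describes, the sketch below (table and column names are made up for this example) builds a table, analyzes it, and then asks for a value that matches nothing; EXPLAIN will still show an estimate of at least one row, and joining several such relations is where the bad join plans tend to appear.

  -- Hypothetical table, purely to observe the rows >= 1 floor in estimates.
  CREATE TABLE est_demo (id int PRIMARY KEY, grp int);
  INSERT INTO est_demo SELECT g, g % 50 FROM generate_series(1, 100000) g;
  ANALYZE est_demo;

  -- grp = 999 matches no rows, yet the plan will still estimate at least rows=1.
  EXPLAIN SELECT * FROM est_demo WHERE grp = 999;

  -- Joining several relations that each carry such a floor estimate is
  -- where the misestimation starts to hurt plan choice.
  EXPLAIN SELECT *
    FROM est_demo a
    JOIN est_demo b ON a.id = b.id
   WHERE a.grp = 999 AND b.grp = 998;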
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Wed, 2008-04-30 at 10:43 -0400, Tom Lane wrote:
>> Surely that's not very sane? The MCV list plus histogram generally
>> don't include every value in the table.
> My understanding of Len's question is that, although the MCV list plus
> the histogram don't
On Wed, 2008-04-30 at 10:43 -0400, Tom Lane wrote:
> > Instead I would expect an estimate of "rows=0" for values of const
> > that are not in the MCV list and not in the histogram.
>
> Surely that's not very sane? The MCV list plus histogram generally
> don't include every value in the table.
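For anyone following along, the statistics being discussed can be inspected directly; a minimal sketch, with placeholder table and column names:

  -- What ANALYZE stored for a column: the MCV list and the histogram are
  -- both bounded by the statistics target, so most values of a large
  -- table never appear in either.
  SELECT n_distinct,
         most_common_vals,
         most_common_freqs,
         histogram_bounds
    FROM pg_stats
   WHERE tablename = 'mytable'
     AND attname   = 'mycolumn';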
On Wed, 30 Apr 2008, Gernot Schwed wrote:
Hi all,
We are looking for an HA master/master or master/slave replication solution. Our
setup consists of two databases and we want to use them both for queries.
Aside from pgpool II there seems to be no advisable replication solution. But
the problem seems to be that we will have a single point of failure
We have tried fillfactor for the indices and it seems to work.
We still need to try fillfactor for the table; maybe that is why the bulk update
queries don't get the advantage of HOT
:)
On Wed, Apr 30, 2008 at 9:45 PM, Pavan Deolasee <[EMAIL PROTECTED]>
wrote:
> On Wed, Apr 30, 2008 at 8:16 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
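A quick way to see whether the fillfactor change is actually helping is to watch the HOT counters in the 8.3 statistics views; a sketch against the table from this thread:

  -- Ratio of HOT updates to all updates since the counters were last reset.
  SELECT relname,
         n_tup_upd,
         n_tup_hot_upd,
         round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
    FROM pg_stat_user_tables
   WHERE relname = 'table1';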
On Wed, Apr 30, 2008 at 8:16 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> > That's weird. With that fillfactor, you should have a very high
> > percentage of HOT update ratio. It could be a very special case that
> > we might be looking at.
>
> He's testing
Hi all,
We are looking for an HA master/master or master/slave replication solution. Our
setup consists of two databases and we want to use them both for queries.
Aside from pgpool II there seems to be no advisable replication solution. But
the problem seems to be that we will have a single point of failure
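For reference, a pgpool-II setup along these lines amounts to listing both backends and enabling replication and load balancing in pgpool.conf. This is only a sketch: the host names are placeholders and the parameter names should be checked against the pgpool-II version actually deployed.

  # pgpool.conf (sketch; verify parameter names for your pgpool-II release)
  backend_hostname0 = 'db1.example.com'   # placeholder host
  backend_port0     = 5432
  backend_weight0   = 1

  backend_hostname1 = 'db2.example.com'   # placeholder host
  backend_port1     = 5432
  backend_weight1   = 1

  replication_mode  = true    # send writes to both backends
  load_balance_mode = true    # spread read-only queries across them

Note that pgpool itself then becomes the single point of failure the poster is worried about; that is usually addressed by running a redundant pgpool instance with some failover mechanism in front of it.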
[EMAIL PROTECTED] (Frank Ch. Eigler) writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> Also, you need to make sure you have the FSM parameters set high enough
>> so that all the free space found by a VACUUM run can be remembered.
> Would it be difficult to arrange FSM parameters to be automaticall
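The parameters in question are max_fsm_pages and max_fsm_relations; a sketch of how one would check whether they are set high enough (they were only made self-managing in a later release, which is roughly what Frank is asking for):

  -- Current limits:
  SHOW max_fsm_pages;
  SHOW max_fsm_relations;

  -- A database-wide VACUUM VERBOSE ends with a summary of how much free
  -- space it found versus what the FSM can remember; if the need exceeds
  -- max_fsm_pages, raise it in postgresql.conf and restart.
  VACUUM VERBOSE;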
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> That's weird. With that fillfactor, you should have a very high
> percentage of HOT update ratio. It could be a very special case that
> we might be looking at.
He's testing
>> update table1 set delta1 = 100 where code/100 =999;
so all the rows
"Len Shapiro" <[EMAIL PROTECTED]> writes:
> I asked about n_distinct, whose documentation reads in part "The
> negated form is used when ANALYZE believes that the number of distinct
> values is likely to increase as the table grows", and I asked about
> why ANALYZE believes that the number of dist
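For completeness: the negated form mentioned in the docs stores n_distinct as a fraction of the row count rather than an absolute number, so the estimate scales as the table grows. If ANALYZE is getting it badly wrong, the usual first step is to raise the column's statistics target so it samples more rows; a sketch, with placeholder names:

  ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 1000;
  ANALYZE mytable;

  -- Negative values here mean -(distinct values / row count).
  SELECT attname, n_distinct
    FROM pg_stats
   WHERE tablename = 'mytable' AND attname = 'mycolumn';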
Please keep list in the loop.
On Wed, Apr 30, 2008 at 6:45 PM, Gauri Kanekar
<[EMAIL PROTECTED]> wrote:
> Hi,
> We have recreated the indices with fillfactor set to 80, which has improved
> HOT a little,
Wait. Did you say you recreated the indexes with fillfactor? That's
no help for HOT.
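To make Pavan's point concrete: HOT needs free space in the heap pages, so it is the table-level fillfactor that matters; an index fillfactor only leaves slack inside index pages. A sketch against the table from this thread (the index name is hypothetical):

  ALTER TABLE table1 SET (fillfactor = 80);   -- this is the space HOT can use

  -- By contrast, this only affects index pages and does nothing for HOT:
  -- ALTER INDEX table1_code_idx SET (fillfactor = 80);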
Craig Ringer wrote:
Heikki Linnakangas wrote:
Did you dump and reload the table after setting the fill factor? It
only affects newly inserted data.
VACUUM FULL or CLUSTER should do the job too, right? After all, they
recreate the table so they must take the fillfactor into account.
CLUSTER
Heikki Linnakangas wrote:
Did you dump and reload the table after setting the fill factor? It only
affects newly inserted data.
VACUUM FULL or CLUSTER should do the job too, right? After all, they
recreate the table so they must take the fillfactor into account.
--
Craig Ringer
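Either way, the new fillfactor only applies when pages are filled again, so after changing it the heap has to be rewritten before existing rows see any benefit. A sketch using CLUSTER, which rebuilds the table through the normal insertion path and so honours the setting (the index name is a placeholder; whether old-style VACUUM FULL leaves the fillfactor gap is exactly what is being questioned above):

  ALTER TABLE table1 SET (fillfactor = 80);
  CLUSTER table1 USING table1_pkey;   -- 8.3 syntax; rewrites the table in index order
  ANALYZE table1;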
Gauri Kanekar wrote:
HOT doesn't seem to be working in our case.
This is the "table1" structure:
id       integer                 not null
code     integer                 not null
crid     integer                 not null
status   character varying(1)    default 'A'::character varying
delta1   bigint                  default 0
On Wed, Apr 30, 2008 at 12:16 PM, Gauri Kanekar
<[EMAIL PROTECTED]> wrote:
> fillfactor is set to 80 as you suggested.
> delta* fields are updated and these fields are nowhere related to any of the
> index fields.
>
That's weird. With that fillfactor, you should have a very high
percentage of HOT update ratio. It could be a very special case that
we might be looking at.