Tom Lane wrote:
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to have
fixed every other platform back in January, but not this one.
kudu HEAD: one-time failure 6/1/06 in statement_t
"Andrew Dunstan" <[EMAIL PROTECTED]> writes:
> Tom Lane said:
>> meerkat and snake both have persistent "CVS-Unknown" failures in some
>> but not all branches. I can't see any evidence of an actual failure in
>> their logs though.
> cvs-unknown means there are unknown files in the repo:
Oh. Well
Joshua D. Drake said:
> Tom Lane wrote:
>>
>> A more radical answer is to have the script go ahead and delete the
>> offending files itself, but I can see where that might not have good
>> fail-soft behavior ...
>
> I have manually run a dist-clean on meerkat for 8_0 and 8_1 and am
> rerunning the
Tom Lane said:
> meerkat and snake both have persistent "CVS-Unknown" failures in some
> but not all branches. I can't see any evidence of an actual failure in
> their logs though. What I do see is "?" entries about files that
> shouldn't be there --- for instance, meerkat apparently needs a "make
> distclean".
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> With this model, the disk cost to fetch a single
>> index entry will be estimated as random_page_cost (default 4.0) rather
>> than the current fixed 2.0. This shouldn't hurt things too much for
>> simple indexscans --- especially since
Tom Lane wrote:
Another thing that's bothering me is that the index access cost computation
(in genericcostestimate) is looking sillier and sillier:
/*
 * Estimate the number of index pages that will be retrieved.
 *
 * For all currently-supported index types, the first page of
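A rough sketch of the shape of that computation, for orientation only: this is not the actual genericcostestimate() source, and the function name, arguments, and formula below are simplified assumptions.

#include <math.h>

/* Simplified sketch, not PostgreSQL source: charge one random page
 * fetch per index page visited, pro-rating the index page count by
 * the selectivity the planner has estimated for the scan. */
static double
sketch_index_access_cost(double indexSelectivity,
                         double numIndexPages,
                         double random_page_cost)
{
    double pagesFetched = ceil(indexSelectivity * numIndexPages);

    if (pagesFetched < 1.0)
        pagesFetched = 1.0;     /* always at least one page fetch */

    return pagesFetched * random_page_cost;
}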
On Thu, Jun 01, 2006 at 08:36:16PM -0400, Greg Stark wrote:
>
> Josh Berkus writes:
>
> > Greg, Tom,
> >
> > > a) We already use block based sampling to reduce overhead. If
> > > you're talking about using the entire block and not just
> > > randomly sampled tuples from within those blocks then your
> > > sample will be biased.
Josh Berkus writes:
> Greg, Tom,
>
> > a) We already use block based sampling to reduce overhead. If you're
> > talking about using the entire block and not just randomly sampled
> > tuples from within those blocks then your sample will be biased.
>
> There are actually some really good equations to work with this, estimating
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Speaking of plan instability, something that's badly needed is the
> ability to steer away from query plans that *might* be the most optimal,
> but also will fail horribly should the cost estimates be wrong.
You sure that doesn't leave us with the empty
On Thu, Jun 01, 2006 at 03:15:09PM -0400, Tom Lane wrote:
> These would all be nice things to know, but I'm afraid it's pie in the
> sky. We have no reasonable way to get those numbers. (And if we could
> get them, there would be another set of problems, namely plan instability:
> the planner's c
On Thu, Jun 01, 2006 at 02:25:56PM -0400, Greg Stark wrote:
>
> Josh Berkus writes:
>
> > 1. n-distinct estimation is bad, as previously discussed;
> >
> > 2. our current heuristic sampling methods prevent us from sampling
> > more than 0.5% of any reasonably large table, causing all statistics
> > on those tables to be off for any table with irregular distribution of
Tom Lane wrote:
meerkat and snake both have persistent "CVS-Unknown" failures in some
but not all branches. I can't see any evidence of an actual failure
in their logs though. What I do see is "?" entries about files that
shouldn't be there --- for instance, meerkat apparently needs a "make
distclean". If that'
Greg,
> > 1) You have n^2 possible two-column combinations. That's a lot of
> > processing and storage.
>
> Yes, that's the hard problem to solve. Actually, btw, it's n!, not n^2.
Oops, bad math. Andrew pointed out it's actually n*(n-1)/2, not n!.
Also, we could omit columns unlikely to correlate
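For concreteness, the number of unordered two-column combinations is the binomial coefficient

    \binom{n}{2} = \frac{n(n-1)}{2},
    \qquad n = 100 \;\Rightarrow\; \frac{100 \cdot 99}{2} = 4950,

versus n^2 = 10000 ordered pairs, and n! larger by many orders of magnitude.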
Martijn van Oosterhout writes:
> Well, in that case I'd like to give some concrete suggestions:
> 1. The $libdir in future may be used to find SQL scripts as well as
> shared libraries. They'll have different extensions so no possibility
> of conflict.
No, it needs to be a separate directory, an
Greg, Tom,
> a) We already use block based sampling to reduce overhead. If you're
> talking about using the entire block and not just randomly sampled
> tuples from within those blocks then your sample will be biased.
There are actually some really good equations to work with this, estimating
bo
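The thread never spells those equations out; as one example of the genre (an illustration on my part, not necessarily what Josh has in mind), the Chao (1984) estimator extrapolates the number of distinct values from the counts f_1 and f_2 of values seen exactly once and exactly twice in the sample:

    \hat{d} \;=\; d_{\mathrm{obs}} + \frac{f_1^2}{2 f_2}

where d_obs is the number of distinct values actually observed.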
On Wed, May 31, 2006 at 05:33:44PM -0400, Tom Lane wrote:
> Martijn van Oosterhout writes:
> > While you do have a good point about non-binary modules, our module
> > handling needs some help IMHO. For example, the current hack for CREATE
> > LANGUAGE to fix things caused by old pg_dumps. I think t
Josh Berkus writes:
> > However it will only make sense if people are willing to accept that
> > analyze will need a full table scan -- at least for tables where the DBA
> > knows that good n_distinct estimates are necessary.
>
> What about block-based sampling? Sampling 1 in 20 disk pages, r
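A minimal sketch of that page-level sampling idea: visit one page in every twenty, then read every tuple on each visited page. The helpers tuples_on_page() and process_tuple() are hypothetical, and this is an illustration of the trade-off, not ANALYZE's actual code.

#include <stdlib.h>

int  tuples_on_page(int page);        /* hypothetical helper */
void process_tuple(int page, int i);  /* hypothetical helper */

/* Visit one page in every `step` pages (step = 20 gives the 1-in-20
 * sampling mentioned above), reading every tuple on a visited page. */
void
sample_blocks(int num_pages, int step)
{
    int start = rand() % step;    /* random offset into the first window */

    for (int page = start; page < num_pages; page += step)
    {
        int ntup = tuples_on_page(page);

        /* Using whole pages is cheap on I/O, but the sample is biased
         * whenever values cluster within pages. */
        for (int i = 0; i < ntup; i++)
            process_tuple(page, i);
    }
}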
Josh Berkus writes:
> Yeah. I've refrained from proposing changes because it's a
> pick-up-sticks. If we start modifying the model, we need to fix
> *everything*, not just one item. And then educate our users that they
> need to use the GUC variables in a different way. Here are the issues I
Greg,
> I'm convinced these two are more connected than you believe.
Actually, I think they are inseparable.
> I might be interested in implementing that algorithm that was posted a
> while back that involved generating good unbiased samples of discrete
> values. The algorithm was quite clever a
Looking at http://www.postgresql.org/ftp/stable_snapshot/ surely we have
achieved stability at least once since 2005-11-26... :-) Can we get that
fixed?
--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Josh Berkus writes:
> 1. n-distinct estimation is bad, as previously discussed;
>
> 2. our current heuristic sampling methods prevent us from sampling more than
> 0.5% of any reasonably large table, causing all statistics on those tables to
> be off for any table with irregular distribution of
On Thu, 2006-06-01 at 12:45 -0400, Tom Lane wrote:
> Tzahi Fadida <[EMAIL PROTECTED]> writes:
> > I am not sure about the definition of a context of a single SQL command.
>
> Well, AFAICS selecting a disjunction ought to qualify as a single SQL
> command using a single snapshot. It's not that different from a JOIN
> or UNION operation, no?
Tom,
As you know, this is something I think about a bit too, though not
nearly as deeply as you.
In general it seems to me that for CPU-bound databases, the default values
of the cpu_xxx_cost variables are too low. I am tempted to raise the
default value of cpu_index_tuple_cost to 0.005, whi
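To put that knob in context, with the sequential page fetch as the unit cost (1.0), and taking the then-current default of 0.001 from memory of the 8.1-era docs (treat that figure as an assumption), the proposed change moves the break-even point from

    \frac{1.0}{0.001} = 1000 \quad\longrightarrow\quad \frac{1.0}{0.005} = 200

index tuples processed per page-fetch-equivalent of charged CPU cost.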
Tzahi Fadida <[EMAIL PROTECTED]> writes:
> I am not sure about the definition of a context of a single SQL command.
Well, AFAICS selecting a disjunction ought to qualify as a single SQL
command using a single snapshot. It's not that different from a JOIN
or UNION operation, no?
> Inside C-langua
I am not sure about the definition of a context of a single SQL command.
Example of a run:
A <- SELECT getfdr('Relation1,Relation2,Relation3');
to get the result schema (takes a few milliseconds).
SELECT * FROM FullDisjunctions('Relation1,Relation2,Relation3') AS
RECORD A;
Can take a long time.
Tzahi Fadida <[EMAIL PROTECTED]> writes:
> I am using CTID for the concept of a tuple set.
> For example, the set of t1 from relation1, t1 from relation2, t10 from
> relation3 will be represented in my function as a list
> of (TableID:CTID) pairs.
> For example {(1:1),(2:1),(3:10)}
> I then save these in bytea arrays in a tuplestore.
I am using CTID for the concept of a tuple set.
For example, the set of t1 from relation1, t1 from relation2, t10 from
relation3 will be represented in my function as a list
of (TableID:CTID) pairs.
For example {(1:1),(2:1),(3:10)}
I then save these in bytea arrays in a tuplestore.
This is essentia
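A sketch of how such a (TableID:CTID) pair might look at the C level. ItemPointerData is PostgreSQL's actual CTID type, but the struct name and layout are illustrative assumptions, not Tzahi's code.

#include "postgres.h"
#include "storage/itemptr.h"

/* One member of a tuple set: which relation, and which tuple in it. */
typedef struct TupleRef
{
    Oid             tableid;   /* identifies the relation */
    ItemPointerData ctid;      /* physical tuple id within that relation */
} TupleRef;

/* A set such as {(1:1),(2:1),(3:10)} is then an array of these, which
 * can be flattened into a bytea for storage in a tuplestore. */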
On Thu, Jun 01, 2006 at 03:33:50PM +0300, Tzahi Fadida wrote:
> The question is, can the CTID field change throughout
> the run of my function due to some other processes working
> on the relation? Or because of command boundaries it is
> pretty much secured inside an implicit transaction?
> The pr
Hi,
I am a Google SoC student and in need of some help with PostgreSQL
internals: my C function can run (and is already running) for a very,
very long time on some inputs, and reiterates over relations using SPI.
Basically, I open portals and cursors to relations.
Also note that I always open the relati
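For readers unfamiliar with the pattern being described, here is a minimal sketch of scanning a relation in batches through an SPI cursor. The SPI calls are the real API; the query text, portal name, and batch size are illustrative, and error handling is elided.

#include "postgres.h"
#include "executor/spi.h"

/* Sketch: iterate over a relation in batches via an SPI cursor. */
static void
scan_relation_via_spi(void)
{
    SPI_connect();

    void   *plan = SPI_prepare("SELECT * FROM Relation1", 0, NULL);
    Portal  portal = SPI_cursor_open("fd_scan", plan, NULL, NULL, true);

    for (;;)
    {
        SPI_cursor_fetch(portal, true, 1000);   /* up to 1000 rows per batch */
        if (SPI_processed == 0)
            break;

        /* ... examine SPI_tuptable->vals[0 .. SPI_processed - 1] ... */
    }

    SPI_cursor_close(portal);
    SPI_finish();
}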
> After re-reading what I just wrote to Andreas about how compression of
> COPY data would be better done outside the backend than inside, it
> struck me that we are missing a feature that's fairly common in Unix
> programs. Perhaps COPY ought to have the ability to pipe its output
> to a shell co
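The Unix feature alluded to is the classic popen()-style pipe. A minimal client-side sketch of the concept (an illustration only, not the patch under discussion; the filter command is arbitrary):

#include <stdio.h>

/* Sketch: stream rows through an external filter, the way many Unix
 * tools allow "output | command". */
int
main(void)
{
    FILE *pipe = popen("gzip > copy_output.gz", "w");

    if (pipe == NULL)
        return 1;

    /* In the real feature, COPY's row data would be written here. */
    fprintf(pipe, "1\talpha\n2\tbeta\n");

    return pclose(pipe);
}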
Hannu Krosing said:
> On Thu, 2006-06-01 at 10:10, David Hoksza wrote:
>> It seems MyProcID is what I was searching for...
>>
>
> On a busy server with lots of connections, the procID will repeat quite often.
>
log_line_prefix has a sessionid gadget:
Session ID: A unique identifier for each session
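That identifier is built from the backend start time plus the PID, as I recall the %c escape doing in later releases; treat the exact variables here as illustrative.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Sketch of a session id in the spirit of log_line_prefix's %c:
 * hex start time, a dot, then hex PID. Unlike a bare PID, the pair
 * stays unique even when PIDs are recycled between connections. */
int
main(void)
{
    long start_time = (long) time(NULL);
    int  pid = (int) getpid();

    printf("%lx.%x\n", start_time, pid);
    return 0;
}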
On Thu, 2006-06-01 at 10:10, David Hoksza wrote:
> It seems MyProcID is what I was searching for...
>
On a busy server with lots of connections, the procID will repeat quite often.
--
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn
On Wed, 2006-05-31 at 17:31, Andreas Pflug wrote:
> Tom Lane wrote:
> > Andreas Pflug <[EMAIL PROTECTED]> writes:
> >
> >>The attached patch implements COPY ... WITH [BINARY] COMPRESSION
> >>(compression implies BINARY). The copy data uses bit 17 of the flag
> >>field to ident
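For context, the header of PostgreSQL's binary COPY format carries a 32-bit flags word in which bit 16 is the documented OIDs bit; using bit 17 to mark compression is the patch's proposal. A sketch of the bit manipulation involved (illustrative, not the patch itself):

#include <stdint.h>
#include <stdio.h>

#define COPY_FLAG_OIDS        (1u << 16)   /* documented: rows carry OIDs */
#define COPY_FLAG_COMPRESSED  (1u << 17)   /* bit proposed by the patch */

int
main(void)
{
    uint32_t flags = 0;

    flags |= COPY_FLAG_COMPRESSED;         /* mark the stream compressed */

    if (flags & COPY_FLAG_COMPRESSED)
        printf("compressed copy stream (flags = 0x%08x)\n", flags);
    return 0;
}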
On 5/31/06, Tom Lane <[EMAIL PROTECTED]> wrote:
After re-reading what I just wrote to Andreas about how compression of
COPY data would be better done outside the backend than inside, it
struck me that we are missing a feature that's fairly common in Unix
programs. Perhaps COPY ought to have the
It seems MyProcID is what I was searching for...
David Hoksza
DH> Something like this would maybe be possible, but this SELECT can
DH> return more rows when the user is connected with multiple instances...
DH> David Hoksza
DH>