? PG_STAT_STATEMENTS_COLS : PG_STAT_STATEMENTS_COLS_V1_0));
tuplestore_putvalues(tupstore, tupdesc, values, nulls);
-- end of diff
arcane query structure to do the
same thing:
select datum from objects where key='GUID' and
(xpath(E'foo/bar', XMLPARSE(CONTENT datum)))[1]::text = 'frobozz';
the feature is.
Do you disagree?
cheers
andrew
(CONTENT
datum)) as uuid from table;
Which produces an unusable:
{<uuid>b5212259-a91f-4dca-a547-4fe89cf2f32c</uuid>}
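For what it's worth, the usual workaround is to select the text node rather
than the element, and cast the array element to text. A minimal sketch,
assuming the element is named uuid and the table/column from the earlier
example:

select (xpath('/uuid/text()', XMLPARSE(CONTENT datum)))[1]::text as uuid
from objects;

This yields the bare b5212259-... value instead of the wrapped element.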
() that doesn't work.
The API is less intuitive than the previous incarnation and is, indeed,
more difficult to use.
-Kevin
How difficult would it be, and does anyone think it is possible, to have a
continuous restore_command à la pg_standby running AND have the database
operational in a read-only mode?
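For reference, the warm-standby setup being extended here is just a
restore_command loop in recovery.conf, along these lines (the archive path
is a placeholder):

restore_command = 'pg_standby /mnt/server/wal_archive %f %p %r'

The question is whether the server could also accept read-only queries
while that loop keeps applying WAL.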
in years. The numerics of it are just a description of the
probability of a duplicate sum or crc, meaning a false OK.
Also, regardless of whether or not the block is full, the block is read
and written as a block, and the underlying data is unimportant.
are so fast in RAM and a block is very small. On x86 systems, depending on
page alignment, we are talking about two or three pages that will be in
memory (they were used to read the block from disk or were previously
accessed).
of the check is to generate a pass or fail status, and not
something to be used to find where in the block it is corrupted or attempt
to regenerate the data, then we could certainly optimize the check
algorithm. A simple checksum may be good enough.
the checksum?
bits in a block header,
they could be used for the check value.
programmatic types to a
database.
Anyone think it's interesting?
know it is a little outside the
box thinking, what do you think?
it be something like: where clause first, left to right, followed
by select terms, left to right, and lastly the order by clause?
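One way to actually observe the order is a function with a side effect (a
minimal sketch; f() and the table t here are hypothetical):

create function f(lbl text, x int) returns int language plpgsql as $$
begin
  raise notice 'evaluated in % clause (value %)', lbl, x;
  return x;
end $$;

select f('select', col) from t where f('where', col) > 0;

The notices show that the planner, not the textual order of the clauses,
decides when each expression is evaluated.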
get to keep both pieces.
I was kind of afraid of that. So, how could one implement such a function
set?
, basically, I don't want to recalculate the values for each and every
function call as that would make the system VERY slow.
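One common trick, if the expensive value is needed several times per row, is
to compute it once in a subquery and keep the planner from flattening it (a
sketch; expensive() and t are hypothetical):

select *
from (select id, expensive(payload) as v from t offset 0) s
where s.v > 0.5 and s.v < 0.9;

The dummy OFFSET 0 prevents the subquery from being pulled up, so
expensive() runs once per row instead of once per reference.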
. Will they all be
called for a particular combination of t1.col1 and t2.col2, in some
unpredictable order before the next row(s) combination is evaluated or
will I have to execute the underlying algorithm for each and every call?
...Robert
.
But are all the items targeted in close proximity to each other BEFORE
moving on to the next row? What about the where clause? Would that be
called out of order of the select target list? I'm doing a fairly large
amount of processing and doing it once is important.
Here is the SSL patch we discussed previously for 8.3.1.
sslconfig.patch.8.3.1
Description: Binary data
be used from the same
system by the same user.
Maybe we need to go even further and add it to the PQconnect API
sslkey=filename and sslcrt=filename in addition to sslmode?
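Under that proposal a conninfo string might look something like this (a
sketch; sslkey and sslcrt are the option names as proposed here, not an
existing API):

host=db.example.com dbname=prod sslmode=require
sslkey=/home/app/keys/client.key sslcrt=/home/app/keys/client.crt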
with other SSL-enabled versions of itself.
I think you would agree that a hard-coded, immutable location for the
client interface is problematic.
that the connection be configured in one place? I
agree with Tom, if it should be done, it should be done in PQconnectdb.
/share/keys/client.crt);
Any comments?
, while the client needs the ability to have
multiple keys.
Think of it this way: a specific lock only takes one key, while a person
needs to carry multiple keys on a ring.
ssltrustcrt=certs.pem
sslcrl=crl.pem
BTW: the revocation list probably never worked in the client.
Maybe we make the assumption that all OS will
implement fd as an array index
The POSIX spec requires open() to assign fd's consecutively from zero.
http://www.opengroup.org/onlinepubs/007908799/xsh/open.html
With all due respect, PostgreSQL now runs natively on Win32. Having a
POSIX-only
[EMAIL PROTECTED] writes:
The POSIX spec requires open() to assign fd's consecutively from zero.
http://www.opengroup.org/onlinepubs/007908799/xsh/open.html
With all due respect, PostgreSQL now runs natively on Win32.
... using the POSIX APIs that Microsoft so kindly provides.
fd.c will
[EMAIL PROTECTED] writes:
That is hardly anything that I would feel comfortable with. Let's break
this down into all the areas that are ambiguous:
There isn't anything ambiguous about this, nor is it credible that there
are implementations that don't follow the intent of the spec.
How do you
My copy of APUE says on page 49: The file descriptor returned by open
is the lowest numbered unused descriptor. This is used by some
applications to open a new file on standard input, standard output, or
standard error.
Yes, I'll restate my questions:
What is meant by unused? Is it read to
[EMAIL PROTECTED] wrote:
The point is that this *is* silly, but I am at a loss to understand why
it
isn't a no-brainer to change. Why is there a fight over a trivial change
which will ensure that PostgreSQL aligns to the documented behavior of
open()
(Why characterise this as a fight,
Tom Lane wrote:
[EMAIL PROTECTED] writes:
Please see my posting about using a macro for snprintf.
Wasn't the issue about odd behavior of the Win32 linker choosing the
wrong vsnprintf?
You're right, the point about the macro was to avoid linker weirdness on
Windows. We need to do that
Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
Please see my posting about using a macro for snprintf. If the current
implementation of snprintf is enough for our existing translation users
we probably don't need to add anything more to it because snprintf will
not be
From what I recall from the conversation, I would say rename the vsnprintf
and the snprintf functions in postgres to pq_vsnprintf and pq_snprintf.
Define a couple of macros (in some common header, pqprintf.h?):
#define snprintf pq_snprintf
#define vsnprintf pq_vsnprintf
Then just maintain the postgres
Tom recently said, when talking about allowing the user (in this case me)
to pass a hash table size to create index:
but that doesn't mean I want to make the user deal with it.
I started thinking about this and, maybe I'm old fashioned, but I would
like the ability to deal with it. So much
Pailloncy Jean-Gerard wrote:
You should have a look to this thread
http://archives.postgresql.org/pgsql-hackers/2005-02/msg00263.php
Take a look at this paper about lock-free parallel hash table
http://www.cs.rug.nl/~wim/mechver/hashtable/
Is this relevant? Hash indexes are on-disk data
[EMAIL PROTECTED] writes:
Anyway, IMHO, hash indexes would be dramatically improved if you could
specify your own hashing function
That's called a custom operator class.
Would I also be able to query the bucket size and all that?
and declare initial table size.
It would be interesting
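For reference, a custom hash operator class is declared roughly like this
(a minimal sketch using int4; my_int4_hash, my_int4_ops, and the table t
are hypothetical):

create function my_int4_hash(int4) returns int4
language sql immutable strict as 'select $1 # 12345';

create operator class my_int4_ops
for type int4 using hash as
operator 1 =,
function 1 my_int4_hash(int4);

create index t_k_idx on t using hash (k my_int4_ops);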
Hello hackers,
I'm wondering if it is possible to somehow spread a pretty big DB (approx.
50G) over a few boxes to get more speed?
If anyone has done that, I'd be glad to have some directions on the right way,
I have done different elements of clustering with PostgreSQL on a per-task
basis, but not a fully
I'm wondering,
is there any sense in clustering a table using a two-column index?
We had this discussion a few weeks ago. Look at the archives for my
post One Big Trend
The problem is that while the statistics can reasonably deal with the
primary column, they completely miss the trends
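For concreteness, the clustering in question would be (a sketch; t, a and b
are hypothetical, and CLUSTER indexname ON tablename is the syntax of this
era):

create index t_a_b_idx on t (a, b);
cluster t_a_b_idx on t;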
On Tuesday, 1 March 2005 at 14:54 -0500, [EMAIL PROTECTED] wrote:
Now, it occurs to me that if my document reference table can refer to
something other than an indexed primary key, I can save a lot of index
processing time in PostgreSQL if I can have a safe analogy to
The big question is why our own vsnprintf() is not being called from
snprintf() in our port file.
I have seen this problem before, well, it isn't really a problem I guess.
I'm not sure of the gcc compiler options, but
On the Microsoft compiler if you specify the option /Gy it separates
Yes, strangely the Windows linker is fine because libpqdll.def defines
which symbols are exported. I don't think Unix has that capability.
A non-static public function in a Windows DLL is not available for
dynamic linking unless explicitly declared as dll export. This behavior is
completely
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
First line of thought: we surely must not insert a snprintf into
libpq.so unless it is 100% up to spec *and* has no performance issues
... neither of which can be claimed of the CVS-tip version.
Agreed, and we have to support all
I don't think we really need any more fundamentally nonconcurrent index
types :-(
Tom, I posted a message about a week ago (I forget the name) about a
persistent reference index, sort of like CTID, but basically a table
lookup. The idea is to simulate a structure that ISAM sort of techniques
[EMAIL PROTECTED] writes:
Tom, I posted a message about a week ago (I forget the name) about a
persistent reference index, sort of like CTID, but basically a table
lookup. The idea is to simulate a structure that ISAM sort of techniques
can work in PostgreSQL.
Eliminating the bitmap index
OK, let's step back a bit and see if there is a solution that fits what we
think we need and PostgreSQL.
Let's talk about FTSS; it's something I can discuss easily. It is a two-stage
system with an indexer and a server. Only the data to be indexed is
in the database; all the FTSS data structures are
Nicolai Tufar wrote:
On Tue, 1 Mar 2005 00:55:20 -0500 (EST), Bruce Momjian
My next guess
is that Win32 isn't handling va_arg(..., long long int) properly.
I am trying various combination of number and types
of parameters in my test program and everything prints fine.
When it comes to
I spent all day debugging it. Still have absolutely
no idea what could possibly go wrong. Does
anyone have a slightest clue what can it be and
why it manifests itself only on win32?
It may be that the CLIB has badly broken support for 64-bit integers on
32-bit platforms. Does anyone know of
On Tue, 1 Mar 2005 15:38:58 -0500 (EST), [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Is there a reason why we don't use the snprintf that comes with the
various C compilers?
snprintf() is usually buried in OS libraries. We implement
our own snprintf to make things like this:
Linux and Solaris 10 x86 pass regression tests fine when I force the use
of the new snprintf(). The problem should be win32-specific. I will
investigate it thoroughly tonight. Can someone experienced in win32
suggest what could possibly be the problem?
Do we have any idea about what format string
Magnus Hagander [EMAIL PROTECTED] writes:
My results are:
First, baseline:
* Linux, with fsync (default), write-cache disabled: no data corruption
* Linux, with fsync (default), write-cache enabled: usually no data
corruption, but two runs which had
That makes sense.
* Win32, with fsync,
On Sat, Feb 19, 2005 at 18:04:42 -0500,
Now, let's imagine PostgreSQL is being developed by a large company. QA
announces it has found a bug that will cause all the users' data to
disappear if they don't run a maintenance program correctly. Vacuuming
one or two tables is not enough; you have
On Sun, 20 Feb 2005 [EMAIL PROTECTED] wrote:
On Sat, Feb 19, 2005 at 18:04:42 -0500,
Now, let's imagine PostgreSQL is being developed by a large company.
QA announces it has found a bug that will cause all the users' data to
disappear if they don't run a maintenance program correctly.
Jim C. Nasby wrote:
On Mon, Feb 14, 2005 at 09:55:38AM -0800, Ron Mayer wrote:
I still suspect that the correct way to do it would not be
to use the single correlation, but 2 stats - one for estimating
how sequential/random accesses would be; and one for estimating
the number of pages
[EMAIL PROTECTED] writes:
I think there should be a 100% no data loss fail safe.
OK, maybe I was overly broad in my statement, but I assumed a context that
I guess you missed. Don't you think that in normal operations, i.e. with
no hardware or OS failure, we should see any data loss as
On Fri, 18 Feb 2005 22:35:31 -0500, Tom Lane [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:
I think there should be a 100% no data loss fail safe.
Possibly we need to recalibrate our expectations here. The current
situation is that PostgreSQL will not lose data if:
1. Your
On Sat, Feb 19, 2005 at 13:35:25 -0500,
[EMAIL PROTECTED] wrote:
The catastrophic failure of the database because a maintenance function
is not performed is a problem with the software, not with the people
using it.
There doesn't seem to be disagreement that something should be done
[ Shrugs ] and looks at other database systems ...
CA put Ingres into open source last year.
Very reliable system with a replicator worth looking at.
Just a thought.
The discussion on hackers is how to make PostgreSQL better. There are many
different perspectives, differences are
I want to see if there is a concensus of opinion out there.
We've all known that data loss could happen if vacuum is not run and you
perform more than 2B transactions. These days, with faster and bigger
computers and disks, it is more likely that this problem can be hit in
months -- not years.
To
More suggestions:
(1) At startup, the postmaster checks the XID; if it is close to a problem,
it forces a vacuum.
(2) At SIGTERM shutdown, can the postmaster start a vacuum?
(3) When the XID count goes past the trip wire, can it spontaneously
issue a vacuum?
NOTE:
Suggestions 1 and 2 are for 8.0
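For what it's worth, a DBA can already watch how close each database is
getting; the 8.0 maintenance documentation suggests:

SELECT datname, age(datfrozenxid) FROM pg_database;

An age approaching two billion means a database-wide VACUUM is overdue.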
On Sat, 19 Feb 2005 04:10 am, Tom Lane wrote:
[EMAIL PROTECTED] writes:
In fact, I think it is so bad that I think we need to back-port a fix to
previous versions and issue a notice of some kind.
They already do issue notices --- see VACUUM.
A real fix (eg the forcible stop we were
Gaetano Mendola [EMAIL PROTECTED] writes:
We do ~4000 txn/minute, so in 6 months you are screwed ...
Sure, but if you ran without vacuuming for 6 months, wouldn't you notice
the huge slowdowns from all those dead tuples before that?
I would think that only applies to databases where
The checkpointer is entirely incapable of either detecting the problem
(it doesn't have enough infrastructure to examine pg_database in a
reasonable way) or preventing backends from doing anything if it did
know there was a problem.
Well, I guess I meant 'some regularly running process'...
[EMAIL PROTECTED] writes:
Maybe I'm missing something, but shouldn't the prospect of data loss (even
in the presence of admin ignorance) be unacceptable? Certainly within the
realm of normal PostgreSQL operation.
[ shrug... ] The DBA will always be able to find a way
On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
Once autovacuum gets to the point where it's used by default, this
particular failure mode should be a thing of the past, but in the
meantime I'm not going to panic about it.
I don't know how to say this without sounding like a jerk, (I
On Wed, 16 Feb 2005, Joshua D. Drake wrote:
Do you have a useful suggestion about how to fix it? "Stop working" is
handwaving, and merely saying "one of you people should do something
about this" is not a solution to the problem; it's not even an
approach towards a solution to the
Stephan Szabo [EMAIL PROTECTED] writes:
Right, but since the how to resolve it currently involves executing a
query, simply stopping dead won't allow you to resolve it. Also, if we
stop at the exact wraparound point, can we run into problems actually
trying to do the vacuum if that's still
On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
Once autovacuum gets to the point where it's used by default, this
particular failure mode should be a thing of the past, but in the
meantime I'm not going to panic about it.
I don't
I will be at the BLU booth Tuesday.
Any and all, drop by.
I will be on Boston for Linuxworld from Tuesday through Thursday. I
will read email only occasionally.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
I was at LinuxWorld Tuesday; it was pretty good. I was in the org
pavilion, where the real Linux resides. The corporate people were on the
other side of the room. (There was a divider where the rest rooms and
elevators were.)
I say that this was where the real Linux resides because all the real
Probably off-topic, but I think it's worth seeing what astronomers are
doing with their very big spatial databases. For example, we are working
with a catalog of more than 500,000,000 rows, and we use a special
transformation of coordinates to integer numbers that preserves object
closeness.
It must be possible to create a tool based on the PostgreSQL sources that
can read all the tuples in a database and dump them to a file stream. All
the data remains in the file until overwritten with data after a vacuum.
It *should* be doable.
If the data in the table is worth anything, then it
I think you're pretty well screwed as far as getting it *all* back goes,
but you could use pg_resetxlog to back up the NextXID counter enough to
make your tables and databases reappear (and thereby lose the effects of
however many recent transactions you back up over).
Once you've found a
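For reference, backing up the counter is done with pg_resetxlog's -x option
(a sketch; stop the postmaster first, and the XID value and data directory
are placeholders you would have to determine yourself):

pg_resetxlog -x 2000000000 /usr/local/pgsql/data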
On Thu, 2005-02-10 at 14:37 -0500, Bruce Momjian wrote:
No, we feel that is of limited value. If the optimizer isn't doing
things properly, we will fix it.
I agree that improving the optimizer is the right answer for normal
usage, so I can't get excited about query-level plan hints, but I
Might it be possible to contact IBM directly and ask if they will allow
usage of the patent for PostgreSQL. They've pledged 500 patents to open
source; maybe they'll give a waiver for this as well.
There is an advantage beyond just not having to re-write the code, but it
would also be sort of an
[EMAIL PROTECTED] wrote:
Might it be possible to contact IBM directly and ask if they will allow
usage of the patent for PostgreSQL. They've pledged 500 patents to open
source; maybe they'll give a waiver for this as well.
There is an advantage beyond just not having to re-write the code,
[EMAIL PROTECTED] writes:
I think that is sort of arrogant. Look at Oracle: you can give the
planner hints in the form of comments.
Arrogant or not, that's the general view of the people who work on the
planner.
The real issue is not so much whether the planner will always get things
For about 5 years now, I have been using a text search engine that I wrote
and maintain.
In the beginning, I hacked up function mechanisms to return multiple value
sets and columns. Then PostgreSQL added setof and it was cool. Then it
was able to return a set of rows, which was even better.
that with fairly static
data.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 10, 2005 11:22 AM
To: pgsql-hackers@postgresql.org
Subject: [HACKERS] New form of index persistent reference
For about 5 years now, I have been using a text search
I wrote a message called One Big trend vs multiple smaller trends in table
statistics that, I think, explains what we've been seeing.
[EMAIL PROTECTED] wrote:
In this case, the behavior observed could be changed by altering the
sample size for a table. I submit that an arbitrary fixed sample
Mark,
Hey, I can give you a copy of RT1 which is fine, but it is 1.1G
compressed. I'd have to mail you a DVD.
Sure, cool.
[address info snipped]
I would be willing to send a couple DVDs (on a regular basis) to anyone
who is able to post this on a good mirror that anyone could get at.
I can
On Wed, Feb 09, 2005 at 07:30:16PM -0500, [EMAIL PROTECTED] wrote:
I would love to keep these things current for PG development, but my
company's server is on a plan that gets 1G free, and is billed after
that. Also, I am on a broadband line at my office, and uploading the
data
would take
[EMAIL PROTECTED] wrote:
In this case, the behavior observed could be changed by altering the
sample size for a table. I submit that an arbitrary fixed sample size is
not a good base for the analyzer, but that the sample size should be
based
on the size of the table or some calculation of
[EMAIL PROTECTED] writes:
The basic problem with a fixed sample is that it assumes a normal
distribution.
That's sort of true, but not in the way you think it is.
[snip]
Greg, I think you have an excellent ability to articulate stats, but I
think that the view that this is like election
A couple of us using the US Census TIGER database have noticed something
about the statistics gathering of analyze. If you follow the thread Query
Optimizer 8.0.1 you'll see the progression of the debate.
To summarize what I think we've seen:
The current implementation of analyze is designed
packed on a few pages;
even though there is no total-ordering across the whole table.
Stephan Szabo described this as a clumping effect:
http://archives.postgresql.org/pgsql-performance/2003-01/msg00286.php
Yes.
I think we are describing the exact same issue
I haven't worked with GiST, although I have been curious from time to
time. Just never had the time to sit, read, and try out the GiST system.
On my text search system (FTSS) I use functions that return sets of data.
It may be easier to implement that than a GiST.
Basically, I create a unique
A question to the hackers:
Is there a way, and if I'm being stupid please tell me, to use something
like a row ID to reference a row in a PostgreSQL database? Allowing the
database to find a specific row without using an index?
I mean, an index has to return something like a row ID for the
[EMAIL PROTECTED] writes:
Is there a way, and if I'm being stupid please tell me, to use something
like a row ID to reference a row in a PostgreSQL database? Allowing the
database to find a specific row without using an index?
ctid ... which changes on every update ...
Well, how does an
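For reference, a ctid lookup looks like this (a minimal sketch; the objects
table is hypothetical):

SELECT ctid, key FROM objects LIMIT 1;        -- returns something like (0,1)
SELECT * FROM objects WHERE ctid = '(0,1)';   -- direct heap fetch, no index

As Tom notes, though, the ctid is invalidated by any UPDATE (and by VACUUM
FULL moving tuples), so it cannot serve as a persistent reference on its own.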
[EMAIL PROTECTED] writes:
One of the things that is disturbing to me about the analyze settings is
that it wants to sample the same number of records from a table
regardless of the size of that table.
The papers that I looked at say that this rule has a good solid
statistical foundation,
[EMAIL PROTECTED] writes:
On a very basic level, why bother sampling the whole table at all? Why
not check one block and infer all information from that? Because we know
that isn't enough data. In a table of 4.6 million rows, can you say with
any mathematical certainty that a sample of 100
On Mon, Feb 07, 2005 at 11:27:59 -0500,
[EMAIL PROTECTED] wrote:
It is inarguable that increasing the sample size increases the accuracy
of a study, especially when diversity of the subject is unknown. It is
known that reducing a sample size increases the probability of error in
any poll or
On Mon, Feb 07, 2005 at 13:28:04 -0500,
What you are saying here is that if you want more accurate statistics, you
need to sample more rows. That is true. However, the size of the sample
is essentially only dependent on the accuracy you need and not the size
of the population, for large
Maybe I am missing something - ISTM that you can increase your
statistics target for those larger tables to obtain a larger (i.e.
better) sample.
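For concreteness (a sketch; table and column names are hypothetical):

ALTER TABLE bigtable ALTER COLUMN col SET STATISTICS 1000;
ANALYZE bigtable;

1000 is the maximum per-column statistics target in the releases discussed
here.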
No one is arguing that you can't manually do things, but I am not the
first to notice this. I saw the query planner doing something completely