On Wed, 2007-05-02 at 23:59 +0100, Heikki Linnakangas wrote:
Umm, you naturally have just one entry per relation, but we were talking
about how many entries the table needs to hold. Your patch had a
hard-coded value of 1000, which is quite arbitrary.
We need to think of the interaction with
For info, the buildfarm script failed to leave the broken tree behind
again so I was unable to get a dump from the affected index.
Andrew; My run logs show that the script did think it was leaving the
tree behind (it included the 'leaving error trees' text that you asked
me to add to the
Jeff Davis wrote:
On Wed, 2007-05-02 at 23:59 +0100, Heikki Linnakangas wrote:
Jeff Davis wrote:
On Wed, 2007-05-02 at 20:58 +0100, Heikki Linnakangas wrote:
Jeff Davis wrote:
What should be the maximum size of this hash table?
Good question. And also, how do you remove entries from it?
I
On 5/2/07, Gregory Stark [EMAIL PROTECTED] wrote:
Can we? I mean, sure you can break the patch up into chunks which might
make
it easier to read, but are any of the chunks useful alone?
Well I agree, it would be a tough job. I can try and break the patch into
several self-complete
Simon Riggs wrote:
We need to think of the interaction with partitioning here. People will
ask whether we would recommend that individual partitions of a large
table should be larger/smaller than a particular size, to allow these
optimizations to kick in.
My thinking is that database designers
Joshua D. Drake wrote:
You can now checkout a pgsql converted to svn repo here:
http://projects.commandprompt.com/public/pgsql/repo/
I'd like to use svnsync (see [1]) to create a read-only mirror of the
pgsql svn repository (see [2]). svnsync requires svn version >= 1.4 at the
source repository
Hello,
The svn repository is currently accessible only via https; the address is:
https://projects.commandprompt.com/public/pgsql/repo/
AFAIK Joshua planned to upgrade svn to version 1.4; however, I don't know when
that will happen.
Hannes Eder wrote:
Joshua D. Drake wrote:
You can now checkout
Tom Lane [EMAIL PROTECTED] writes:
Gregory Stark [EMAIL PROTECTED] writes:
You keep saying that but I think it's wrong. There are trivial patches that
were submitted last year that are still sitting in the queue.
Um ... which ones exactly? I don't see *anything* in the current queue
that
Josh Berkus wrote:
Bruce, all,
No, my point is that 100% of the information is already available by looking at
email archives. What we need is a short description of where we are on
each patch --- that is a manual process, not something that can be
automated.
Tom has posted it --- tell me
We have _ample_ evidence that the problem is lack of people able to
review patches, and yet there is this discussion to track patches
better. It reminds me of someone who has lost their keys in an alley,
but is looking for them in the street because the light is better there.
Bruce, I guess
Csaba Nagy wrote:
We have _ample_ evidence that the problem is lack of people able to
review patches, and yet there is this discussion to track patches
better. It reminds me of someone who has lost their keys in an alley,
but is looking for them in the street because the light is better
On Thu, 2007-05-03 at 13:51, Bruce Momjian wrote:
I believe the problem is not that there isn't enough information, but
not enough people able to do the work. Seeking solutions in areas that
aren't helping was the illustration.
Yes Bruce, but you're failing to see that a more structured
Bruce Momjian wrote:
Csaba Nagy wrote:
We have _ample_ evidence that the problem is lack of people able to
review patches, and yet there is this discussion to track patches
better. It reminds me of someone who has lost their keys in an alley,
but is looking for them in the street because the
Dave Page wrote:
For info, the buildfarm script failed to leave the broken tree behind
again so I was unable to get a dump from the affected index.
Andrew; My run logs show that the script did think it was leaving the
tree behind (it included the 'leaving error trees' text that you asked
me
We recently experienced a hard media failure. It turns out our admin guys
have been doing backups by backing up the pgsql data directory, but not
prepping the db first.
After a restore, when I try to start the database I get LOG: logger
shutting down and nothing else. This is Postgres 8.1.3.
On Thu, May 03, 2007 at 08:37:33AM -0400, David Dollar wrote:
We recently experienced a hard media failure. It turns out our admin guys
have been doing backups by backing up the pgsql data directory, but not
prepping the db first.
What platform is this?
After a restore, when I try to start
Bruce Momjian wrote:
Csaba Nagy wrote:
We have _ample_ evidence that the problem is lack of people able to
review patches, and yet there is this discussion to track patches
better. It reminds me of someone who has lost their keys in an alley,
but is looking for them in the street because
I often need to get the code of a function from the command line, so I made a patch
for pg_dump; with this patch, pg_dump is able to dump only one function or
all the functions.
The argument is --object or -B
Example:
./pg_dump -Bfunction:test_it -Bfunction:dblink_open
To dump the functions
Bruce Momjian wrote:
Csaba Nagy wrote:
We have _ample_ evidence that the problem is lack of people able to
review patches, and yet there is this discussion to track patches
better. It reminds me of someone who has lost their keys in an alley,
but is looking for them in the street because the
Bruce,
Get rid of gborg and let's talk.
Touché.
Actually, AFAICT, the only active thing left on GBorg is WWW. If we move
that, we can shut it down. Any objections?
Why am I having to spend hours in Sydney saying the same thing? Why
don't you guys go ahead and change things, and when
Josh Berkus wrote:
Bruce,
Get rid of gborg and let's talk.
Touché.
Actually, AFAICT, the only active thing left on GBorg is WWW. If we move
that, we can shut it down. Any objections?
This should be a different thread *but* to my knowledge there is more
than WWW active on Gborg. Or at
On Thu, 2007-05-03 at 09:25 +0100, Heikki Linnakangas wrote:
The hash table keeps track of ongoing seq scans. That's presumably
related to number of backends; I can't imagine a plan where a backend is
executing more than one seq scan at a time, so that gives us an upper
bound of NBackends
On Thu, 2007-05-03 at 08:01 +0100, Simon Riggs wrote:
On Wed, 2007-05-02 at 23:59 +0100, Heikki Linnakangas wrote:
Umm, you naturally have just one entry per relation, but we were talking
about how many entries the table needs to hold. Your patch had a
hard-coded value of 1000 which is
On Thu, 2007-26-04 at 18:07 -0400, Neil Conway wrote:
(1) I believe the reasoning for Tom's earlier change was not to reduce
the I/O between the backend and the pgstat process [...]
Tom, any comments on this? Your change introduced an undocumented
regression into 8.2. I think you're on the hook
Why not just send a notice out stating that Gborg will be shut down as of June
1st ... give a finite deadline to move things over to pgfoundry ... just
because we 'shut down' the site on June 1st, it doesn't mean we are going to
wipe it all out, we
Jeff Davis wrote:
What I was trying to say before is that, in my design, it keeps track of
relations being scanned, not scans happening. So 5 backends scanning the
same table would result in one entry that consists of the most recently-
read block by any of those backends for that relation.
Pavel, my apologies for not getting back to you sooner.
On Wed, 2007-25-04 at 07:12 +0200, Pavel Stehule wrote:
example: I have table with attr. cust_id, and I want to use parametrized
view (table function) where I want to have attr cust_id on output.
Hmm, I see your point. I'm personally
Neil Conway [EMAIL PROTECTED] writes:
Pavel, my apologies for not getting back to you sooner.
On Wed, 2007-25-04 at 07:12 +0200, Pavel Stehule wrote:
example: I have table with attr. cust_id, and I want to use parametrized
view (table function) where I want to have attr cust_id on output.
On Thu, 2007-05-03 at 19:27 +0100, Heikki Linnakangas wrote:
I understand that the data structure keeps track of relations being
scanned, with one entry per such relation. I think that's very good
design, and I'm not suggesting to change that.
But what's the right size for that? We don't
I was just getting ready to suggest such an approach. We could
email all the project admins for the remaining projects with the
deadline. Back up the information and tell people who to contact in
order to claim whatever information they want. Once the deadline is
past you can simply
I just noticed that my recent change to prevent PG_RE_THROW from dying
if there's noplace to longjmp to has provoked a whole lot of warnings
that were not there before. Apparently this is because gcc understands
that siglongjmp() never returns, but is not equally clueful about
pg_re_throw().
We
Tom Lane wrote:
We can fix this for gcc by putting __attribute__((noreturn)) on the
declaration of pg_re_throw(), but what about other compilers?
Sun studio also complains about it :(.
Zdenek
Hannes Eder [EMAIL PROTECTED] writes:
Tom Lane wrote:
We can fix this for gcc by putting __attribute__((noreturn)) on the
declaration of pg_re_throw(), but what about other compilers?
For MSVC 2005, use __declspec(noreturn) (see [1]). I think this also works for
some older versions of MSVC.
Tom Lane wrote:
We can fix this for gcc by putting __attribute__((noreturn)) on the
declaration of pg_re_throw(), but what about other compilers?
For MSVC 2005, use __declspec(noreturn) (see [1]). I think this also works for
some older versions of MSVC.
Regards,
Hannes Eder
References:
[1]
I have done the following test and I am unable to understand the results. I
have tried debugging the code and I have reached down to the Storage Layer. I
am playing with the optimizer etc. I know very little about the internals of the
Executor.
If you could point out to me what possible
On Thu, 2007-05-03 at 14:33 -0700, jaba the mobzy wrote:
mycorr_100 took 11.4 s to run although it had to fetch 10 rows from
the base table.
mycorr_10 took 24.4 s to run although it had to fetch 10563 rows from
the base table.
This is because the physical distribution of data is different.
Jeff Davis [EMAIL PROTECTED] writes:
On Thu, 2007-05-03 at 14:33 -0700, jaba the mobzy wrote:
mycorr_100 took 11.4 s to run although it had to fetch 10 rows from
the base table.
mycorr_10 took 24.4 s to run although it had to fetch 10563 rows from
the base table.
This is because the