I am working on some beginner level patches to help clean up the
timestamp code in PostgreSQL.
Basically, my first few patches are aimed at removing the dependence
on the HAVE_INT64_TIMESTAMP symbol to choose types for variables. I will
eventually try to remove the use of HAVE_INT64_TIMESTAMP to choose
On Tue, 2008-02-26 at 15:19 +0900, ITAGAKI Takahiro wrote:
Simon Riggs [EMAIL PROTECTED] wrote:
One of the reasons why I hadn't wanted to pursue earlier ideas to use
LOCK was that applying a lock will prevent running in parallel, which
ultimately may prevent further performance gains.
On Tue, Feb 26, 2008 at 12:15:05AM +, Dave Page wrote:
On Mon, Feb 25, 2008 at 11:48 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:
Dave Page wrote:
Yes, because newer builds may be linked against updated runtime
versions. We need to be sure the installer will upgrade the file so it
On Tue, Feb 26, 2008 at 12:22:03AM -0800, Warren Turkal wrote:
As a result, I have a few questions about the timestamp code. In what
instances is the floating point timestamp recommended? I see that
Debian and Ubuntu ship packages that use the int64 timestamps. Is the
backend smart enough to
On Tuesday, February 26, 2008, Tom Lane wrote:
Why should it complain more or less than about the SUBSYS.o
files?
It has a hard-wired rule not to complain about files named *.o.
Well, we could name the output file SUBSYS.o if that is not too confusing. :)
--
Peter Eisentraut
On Tue, 2008-02-26 at 00:39 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
... So it would be good if we could dump objects in 3 groups
1. all commands required to re-create table
2. data
3. all commands required to complete table after data load
[ much subsequent
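The three groups can already be approximated today with pg_restore's list-filtering machinery. A minimal sketch against a fabricated `-l`-style listing (the entry IDs and object names below are invented, not from any real dump):

```shell
# A fabricated pg_restore -l style listing (IDs and names are invented).
cat > full.toc <<'EOF'
10; 1259 16384 TABLE public t1 postgres
20; 0 16384 TABLE DATA public t1 postgres
30; 1259 16390 INDEX public t1_idx postgres
EOF

# Group 2: only the bulk-data entries.
grep 'TABLE DATA' full.toc > data.toc

# Groups 1 and 3: everything else; pg_restore restores only the
# entries present in the list passed with -L, so each file can
# drive one phase of the load.
grep -v 'TABLE DATA' full.toc > schema.toc

cat data.toc
```

With a real archive, each filtered list would then drive one pass, e.g. `pg_restore -L data.toc -d mydb dump.fc` after the pre-data objects are in place.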
I'm looking at ways to reduce the number of network calls and/or the
waiting time while we perform network COPY.
The COPY calls in libpq allow asynchronous actions, yet are coded in a
synchronous manner in pg_dump, Slony and psql \copy.
Does anybody have any experience with running COPY in
On Tue, 2008-02-19 at 14:18 -0800, Jeff Davis wrote:
On Wed, 2008-02-06 at 15:13 +, Simon Riggs wrote:
The -s option creates the table, as well as creating constraints and
indexes. These objects need to be dropped prior to loading, if we are to
follow the performance recommendations in
On Tue, Feb 26, 2008 at 11:00:33AM +, Simon Riggs wrote:
I'm looking at ways to reduce the number of network calls and/or the
waiting time while we perform network COPY.
The COPY calls in libpq allow asynchronous actions, yet are coded in a
synchronous manner in pg_dump, Slony and psql
On Tue, Feb 26, 2008 at 12:39:29AM -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
... So it would be good if we could dump objects in 3 groups
1. all commands required to re-create table
2. data
3. all commands required to complete table after data load
[ much subsequent
On Tuesday, February 26, 2008, Simon Riggs wrote:
So that would mean we would run an unload like this
pg_dump --pre-schema-file=f1 --save-snapshot --snapshot-id=X
pg_dump -t bigtable --data-file=f2.1 --snapshot-id=X
pg_dump -t bigtable2 --data-file=f2.2 --snapshot-id=X
pg_dump -T bigtable -T
On Tue, 2008-02-26 at 12:46 +0100, Dimitri Fontaine wrote:
On Tuesday, February 26, 2008, Simon Riggs wrote:
So that would mean we would run an unload like this
pg_dump --pre-schema-file=f1 --save-snapshot --snapshot-id=X
pg_dump -t bigtable --data-file=f2.1 --snapshot-id=X
pg_dump -t
On Tue, Feb 26, 2008 at 5:35 PM, Simon Riggs [EMAIL PROTECTED] wrote:
On Tue, 2008-02-26 at 12:46 +0100, Dimitri Fontaine wrote:
As a user I'd really prefer all of this to be much more transparent, and
could
well imagine the -Fc format to be some kind of TOC + zip of table data +
post
On Tue, 2008-02-26 at 18:19 +0530, Tom Dunstan wrote:
On Tue, Feb 26, 2008 at 5:35 PM, Simon Riggs [EMAIL PROTECTED] wrote:
On Tue, 2008-02-26 at 12:46 +0100, Dimitri Fontaine wrote:
As a user I'd really prefer all of this to be much more transparent, and
could
well imagine the -Fc
As a user I'd really prefer all of this to be much more transparent,
and could
well imagine the -Fc format to be some kind of TOC + zip of table
data + post
load instructions (organized per table), or something like this.
In fact just what you described, all embedded in a single file.
For the joy of all of you: that's the correct WIP patch.
At the moment it only tries to create runs using two heaps. Hope you can
help me with writing those runs on tapes.
I'd be very pleased to give you more details.
Thanks for your time.
Regards, Manolo.
Simon Riggs wrote:
Separate files seems much simpler...
Yes, we need to stick to the KISS principle.
ISTM that we could simply invent a new archive format of d for directory.
BTW, parallel dumping might be important, but is really much less so
than parallel restoring in my book.
kenneth d'souza wrote:
Referring to this request http://momjian.us/mhonarc/patches_hold/msg00022.html
I have created a patch. The output doesn't exactly match what is stated
here
http://momjian.us/mhonarc/patches_hold/msg00023.html. However, it
does tell the required details in a
On Tuesday, February 26, 2008, Tom Dunstan wrote:
On Tue, Feb 26, 2008 at 5:35 PM, Simon Riggs [EMAIL PROTECTED] wrote:
On Tuesday, February 26, 2008, Dimitri Fontaine wrote:
We could even support some option for the user to tell us which
disk arrays to use for parallel dumping.
pg_dump
In pgAgent.cpp, I would like to add LIMIT as shown below:
LogMessage(_("Checking for jobs to run"), LOG_DEBUG);
DBresult *res=serviceConn->Execute(
wxT("SELECT J.jobid ")
wxT(" FROM pgagent.pga_job J ")
wxT(" WHERE jobenabled ")
wxT(" AND jobagentid IS NULL ")
wxT(" AND jobnextrun <= now() ")
wxT(
Roberts, Jon wrote:
In pgAgent.cpp, I would like to add LIMIT as shown below:
[snip]
What do you guys think?
What has this to do with -hackers?
I don't even know what project this refers to - it certainly doesn't
refer to core postgres, which is what -hackers is about.
On Tue, Feb 26, 2008 at 08:28:11AM -0500, Andrew Dunstan wrote:
Simon Riggs wrote:
Separate files seems much simpler...
Yes, we need to stick to the KISS principle.
ISTM that we could simply invent a new archive format of d for directory.
Yeah, you can always ZIP (or whatever)
Roberts, Jon wrote:
In pgAgent.cpp, I would like to add LIMIT as shown below:
[snip]
What do you guys think?
What has this to do with -hackers?
I don't even know what project this refers to - it certainly doesn't
refer to core postgres, which is what -hackers is about.
On Tue, 2008-02-26 at 08:28 -0500, Andrew Dunstan wrote:
Simon Riggs wrote:
Separate files seems much simpler...
Yes, we need to stick to the KISS principle.
ISTM that we could simply invent a new archive format of d for directory.
...and then dump each table to a separate file?
Roberts, Jon wrote:
Roberts, Jon wrote:
In pgAgent.cpp, I would like to add LIMIT as shown below:
What do you guys think?
What has this to do with -hackers?
I don't even know what project this refers to - it certainly doesn't
refer to core postgres, which is what -hackers is
On Tue, Feb 26, 2008 at 08:10:09AM -0600, Roberts, Jon wrote:
Roberts, Jon wrote:
In pgAgent.cpp, I would like to add LIMIT as shown below:
[snip]
What do you guys think?
What has this to do with -hackers?
I don't even know what project this refers to -
On Tue, 2008-02-26 at 15:12 +0100, Magnus Hagander wrote:
On Tue, Feb 26, 2008 at 08:28:11AM -0500, Andrew Dunstan wrote:
Simon Riggs wrote:
Separate files seems much simpler...
Yes, we need to stick to the KISS principle.
ISTM that we could simply invent a new archive
Magnus Hagander wrote:
On Tue, Feb 26, 2008 at 08:28:11AM -0500, Andrew Dunstan wrote:
Simon Riggs wrote:
Separate files seems much simpler...
Yes, we need to stick to the KISS principle.
ISTM that we could simply invent a new archive format of d for directory.
Yeah, you can always ZIP (or
-Original Message-
From: Magnus Hagander [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 26, 2008 8:17 AM
To: Roberts, Jon
Cc: Andrew Dunstan; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] pgAgent job limit
On Tue, Feb 26, 2008 at 08:10:09AM -0600, Roberts, Jon wrote:
Looking at the profile for COPY and then a profile for CREATE TABLE AS
SELECT (CTAS) there is clearly too much overhead caused by inserting the
rows one at a time.
Flat profile of CTAS: (2 cols of this output removed for clarity)
Each sample counts as 0.01 seconds.
% cumulative self
Magnus Hagander [EMAIL PROTECTED] writes:
On Tue, Feb 26, 2008 at 12:39:29AM -0500, Tom Lane wrote:
BTW, what exactly was the use-case for this?
One use-case would be when you have to make some small change to the schema
while reloading it, that's still compatible with the data format. Then
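A minimal sketch of that workflow with today's tools, using a fabricated schema file (the table and column are invented; with a real dump you would edit the `--schema-only` output, then load the `--data-only` file):

```shell
# A fabricated --schema-only dump fragment (the table is invented).
cat > schema.sql <<'EOF'
CREATE TABLE t1 (
    id integer,
    note varchar(20)
);
EOF

# The kind of small, data-compatible tweak described above:
# widen a column before reloading the data.
sed 's/varchar(20)/varchar(40)/' schema.sql > schema_edited.sql

grep varchar schema_edited.sql
```

With real dumps, the sequence would be `psql -f schema_edited.sql newdb` followed by loading the data file; the COPY data still fits the widened column.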
Peter Eisentraut [EMAIL PROTECTED] writes:
On Tuesday, February 26, 2008, Tom Lane wrote:
Why should it complain more or less than about the SUBSYS.o
files?
It has a hard-wired rule not to complain about files named *.o.
Well, we could name the output file SUBSYS.o if that is not too
On Tue, 2008-02-26 at 10:03 -0500, Tom Lane wrote:
Magnus Hagander [EMAIL PROTECTED] writes:
On Tue, Feb 26, 2008 at 12:39:29AM -0500, Tom Lane wrote:
BTW, what exactly was the use-case for this?
One use-case would be when you have to make some small change to the schema
while reloading
Tom Lane wrote:
Magnus Hagander [EMAIL PROTECTED] writes:
On Tue, Feb 26, 2008 at 12:39:29AM -0500, Tom Lane wrote:
BTW, what exactly was the use-case for this?
One use-case would be when you have to make some small change to the schema
while reloading it, that's still
On Tuesday, February 26, 2008, Tom Lane wrote:
In short, what I think we need here is just some more conveniently
defined extraction filter switches than --schema-only and --data-only.
There's no need for any fundamental change to pg_dump's architecture.
Forgive me if what I will say below is
Simon Riggs [EMAIL PROTECTED] writes:
1.41 7.25 0.13 1029 CacheInvalidateHeapTuple
which together account for more than 50% of CPU.
The second column is cumulative percentage (the third column is self time).
So together this function and all the items above it account for
On Tue, Feb 26, 2008 at 03:15:59PM +, Simon Riggs wrote:
On Tue, 2008-02-26 at 10:03 -0500, Tom Lane wrote:
Magnus Hagander [EMAIL PROTECTED] writes:
On Tue, Feb 26, 2008 at 12:39:29AM -0500, Tom Lane wrote:
BTW, what exactly was the use-case for this?
One use-case would be when
Tom Lane wrote:
Peter Eisentraut [EMAIL PROTECTED] writes:
On Tuesday, February 26, 2008, Tom Lane wrote:
Why should it complain more or less than about the SUBSYS.o
files?
It has a hard-wired rule not to complain about files named *.o.
Well, we could name the
On Tue, 2008-02-26 at 15:25 +, Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
1.41 7.25 0.13 1029 CacheInvalidateHeapTuple
which together account for more than 50% of CPU.
The second column is cumulative percentage (the third column is self time).
So
Robert Lor wrote:
Proposed changes:
* Switch from using DTRACE_PROBEn macros to the dynamically generated
macros (remove pg_trace.h)
* Use dtrace -h to create a header file that contains the
dynamically generated macros to be used in the source code instead of
the DTRACE_PROBEn macros.
*
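The proposal amounts to maintaining a provider description file that `dtrace -h` turns into a generated header. A minimal sketch of what such a file might contain (the probe names and argument types here are illustrative, not the actual proposed set):

```d
provider postgresql {
	probe transaction__start(int localxid);
	probe transaction__commit(int localxid);
	probe transaction__abort(int localxid);
};
```

Running `dtrace -h -s probes.d` then yields macros of the form `POSTGRESQL_TRANSACTION_START(xid)` to call from the source, replacing the generic `DTRACE_PROBEn` invocations.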
On Mon, 25 Feb 2008 22:25:09 -0800
Joshua D. Drake [EMAIL PROTECTED] wrote:
24 connections: 3.5 hours
12 connections: 4.5 hours
--
The PostgreSQL Company since 1997: http://www.commandprompt.com/
PostgreSQL Community Conference: http://www.postgresqlconference.org/
Donate to the PostgreSQL
On Tue, 2008-02-26 at 16:28 +0100, Magnus Hagander wrote:
On Tue, Feb 26, 2008 at 03:15:59PM +, Simon Riggs wrote:
So if I understand:
* we add switches to pg_dump to dump out separate files with --pre,
--post and --data (or other names) [TODO: Simon]
* we add switches to
Simon Riggs [EMAIL PROTECTED] writes:
My thinking is to do either:
* keep it as simple as possible to allow DBA to manually improve
performance
* express dependency information in the pg_dump output to allow some
level of parallelism to use that information to advantage automatically
I'm
Robert Lor [EMAIL PROTECTED] writes:
* This is unrelated to making DTrace work on Leopard, but I'd like to
propose that we split the probes into generic database and Postgres
specific providers, called database and postgresql respectively.
Other databases will be adding DTrace probes as
Simon Riggs [EMAIL PROTECTED] writes:
So if I understand:
* we add switches to pg_dump to dump out separate files with --pre,
--post and --data (or other names) [TODO: Simon]
* we add switches to pg_restore to load/dump from the single archive
file the subsets of --pre, --post, --data
On Thursday 21 February 2008 21:33, Tom Lane wrote:
Josh Berkus [EMAIL PROTECTED] writes:
On Thursday 21 February 2008 11:36, Tom Lane wrote:
Would it satisfy people if plpgsql were in postgres, but neither
template DB, after initdb?
No, the real-world use-case we're trying to satisfy
Robert Treat wrote:
They are then free to muck about that database, installing anything
they want, but they cannot load any procedural languages since they
only have non-superuser accounts.
Except that they can in 8.3.
--
Alvaro Herrera
On Tue, 2008-02-26 at 10:48 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
My thinking is to do either:
* keep it as simple as possible to allow DBA to manually improve
performance
* express dependency information in the pg_dump output to allow some
level of parallelism to
Hi,
I'm working on a GiST opclass to support prefix searching as presented here:
http://pgsql.tapoueh.org/site/html/prefix/index.html
http://prefix.projects.postgresql.org/README.html
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/prefix/prefix/
In order to have a much more efficient index, I
Robert Treat wrote:
There are a lot of people who have a database provider of some sort who
creates a database for them, giving them ownership of that specific database,
with pg_hba.conf specifying connection only to that db. They are then free to
muck about that database, installing
On Tue, 2008-02-26 at 11:59 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
So if I understand:
* we add switches to pg_dump to dump out separate files with --pre,
--post and --data (or other names) [TODO: Simon]
* we add switches to pg_restore to load/dump from the
On Tue, 26 Feb 2008, Simon Riggs wrote:
Splitting up the dump is the enabler for splitting up the load.
While the pg_dump split train seems to be leaving the station, I feel
compelled to point out that focus does nothing to help people who are
bulk-loading data that came from somewhere
Robert Treat [EMAIL PROTECTED] writes:
On Thursday 21 February 2008 21:33, Tom Lane wrote:
So it's still 100% unclear to me who we are catering to.
There are a lot of people who have a database provider of some sort who
creates a database for them, giving them ownership of that specific
On Tuesday, February 26, 2008, Andrew Dunstan wrote:
This appears to have broken Win32 Mingw builds - see
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=dawn_bat&dt=2008-02-26%2007:00:01
There have been a few changes after that. Let's see what it says in a few
hours.
--
Peter Eisentraut
Sooner or later we ought to completely reimplement LISTEN/NOTIFY,
but while thinking about Joel Stevenson's performance issue I had a
sudden idea for a very simple change that would buy some improvement.
Currently, all operations in async.c take ExclusiveLock on pg_listener,
but it strikes me that
Greg Smith [EMAIL PROTECTED] writes:
While the pg_dump split train seems to be leaving the station, I feel
compelled to point out that focus does nothing to help people who are
bulk-loading data that came from somewhere else.
What are you imagining here ... a plain SQL script containing
On Tue, 2008-02-26 at 12:27 -0500, Greg Smith wrote:
On Tue, 26 Feb 2008, Simon Riggs wrote:
Splitting up the dump is the enabler for splitting up the load.
While the pg_dump split train seems to be leaving the station, I feel
compelled to point out that focus does nothing to help people
Simon Riggs [EMAIL PROTECTED] writes:
I've not been advocating improving pg_restore, which is where the -Fc
tricks come in.
...
I see you thought I meant pg_restore. I don't think extending
pg_restore in that way is of sufficiently generic use to make it
worthwhile. Extending psql would
Tom Lane wrote:
Since the performance advantages are still somewhat hypothetical,
I think we should reach for the low-hanging fruit first. If concurrent
pg_restore really does prove to be the best thing since sliced bread,
*then* would be the time to start thinking about whether it's possible
On Tue, 2008-02-26 at 17:16 +, Simon Riggs wrote:
I've not been advocating improving pg_restore, which is where the -Fc
tricks come in.
Oh, then it was a miscommunication on my part, because I was talking
about pg_restore, and I assumed you were as well.
Regards,
Jeff Davis
Tom Lane wrote:
I'm unconvinced that there will be any probes that are common to all
databases. I'd skip this part...
Any reason why we can't consider probes like transaction-start,
transaction-commit, or transaction-abort as common probes that can also
be used in other (maybe not all)
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 10:32:59 -0800
Jeff Davis [EMAIL PROTECTED] wrote:
On Tue, 2008-02-26 at 17:16 +, Simon Riggs wrote:
I've not been advocating improving pg_restore, which is where the
-Fc tricks come in.
Oh, then it was a
On Tue, 2008-02-26 at 00:22 -0800, Warren Turkal wrote:
As a result, I have a few questions about the timestamp code. In what
instances is the floating point timestamp recommended?
One circumstance is when there isn't a native int64 type available. The
floating point datetime code is the
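The trade-off can be illustrated numerically: a float8 timestamp stores seconds from the 2000 epoch in a double, whose spacing between representable values grows with magnitude, so microsecond resolution is lost for dates far from 2000. A quick check using awk's double arithmetic (the offset, roughly 2.5e11 seconds, is an arbitrary far-future value chosen for illustration):

```shell
# At ~2.5e11 seconds from the epoch, a double's step size (~3e-5 s)
# exceeds one microsecond, so adding 1 microsecond is silently lost.
awk 'BEGIN { t = 253402300800; print (t + 0.000001 == t) }'
# prints 1 (true): the microsecond vanished
```

An int64 count of microseconds, by contrast, keeps exact microsecond precision over its whole range, which is why the integer-timestamp build gives uniform behavior.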
Robert Lor [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm unconvinced that there will be any probes that are common to all
databases. I'd skip this part...
Any reason why we can't consider probes like transaction-start,
transaction-commit, or transaction-abort as common probes that can
Joshua D. Drake [EMAIL PROTECTED] writes:
IMO the place to start is COPY which is, per my tests, slow. Multi
worker connection restore is great and I have proven that with some
work it can provide o.k. results but it is certainly not acceptable.
It was already pointed out to you that we can
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 13:58:25 -0500
Tom Lane [EMAIL PROTECTED] wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
IMO the place to start is COPY which is, per my tests, slow. Multi
worker connection restore is great and I have proven that with some
On Tue, 26 Feb 2008, Tom Lane wrote:
What are you imagining here ... a plain SQL script containing
database-independent INSERT commands? That's going to suck compared
to COPY no matter what.
Think 100GB+ of data that's in a CSV or delimited file. Right now the
best import path is with
Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
IMO the place to start is COPY which is, per my tests, slow. Multi
worker connection restore is great and I have proven that with some
work it can provide o.k. results but it is certainly not acceptable.
It was already pointed out
Tom Lane wrote:
I'm unimpressed; it's not at all clear that you'd be measuring quite the
same thing in, say, mysql as in postgres.
I think it depends on the probe, but for transaction rates like start or
commit, don't you think it's pretty black and white as long as the probes
are placed in
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 14:17:24 -0500 (EST)
Greg Smith [EMAIL PROTECTED] wrote:
On Tue, 26 Feb 2008, Tom Lane wrote:
What are you imagining here ... a plain SQL script containing
database-independent INSERT commands? That's going to suck
On Tue, 2008-02-26 at 16:18 -0300, Alvaro Herrera wrote:
Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
IMO the place to start is COPY which is, per my tests, slow. Multi
worker connection restore is great and I have proven that with some
work it can provide o.k. results but
On Tue, 2008-02-26 at 13:18 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I've not been advocating improving pg_restore, which is where the -Fc
tricks come in.
...
I see you thought I meant pg_restore. I don't think extending
pg_restore in that way is of sufficiently
Neil Conway [EMAIL PROTECTED] writes:
On Tue, 2008-02-26 at 00:22 -0800, Warren Turkal wrote:
As a result, I have a few questions about the timestamp code. In what
instances is the floating point timestamp recommended?
One circumstance is when there isn't a native int64 type available.
There
Tom,
That argument *was* valid ... before 8.3. Nowadays non-superuser DB
owners can install trusted PLs in their DBs by themselves. (At least
by default.) So I'm still unconvinced that we need more changes.
I agree.
--
Josh Berkus
PostgreSQL @ Sun
San Francisco
Joshua D. Drake [EMAIL PROTECTED] writes:
HOT works because EDB refused to accept the inherent limitations of
PostgreSQL. COPY is no different in that aspect. Maybe it can't go
exponentially faster but the math says, if done correctly, it can.
You can always make it faster if it doesn't have to
On Tue, 2008-02-26 at 14:55 -0500, Tom Lane wrote:
Anyway I think they both have their place.
I think it is very fragile to have the semantics of a builtin datatype
change depending on a configure option. It makes it difficult to write
applications that depend on the unique properties of
Tom Lane [EMAIL PROTECTED] writes:
Possibly I have a different view of the uses of dtrace than you do, but
most of the events I'd be interested in probing are probably pretty
Postgres-specific.
I think both types of probes are useful to different people.
One of the really neat things about
On Tue, 2008-02-26 at 13:18 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I've not been advocating improving pg_restore, which is where the -Fc
tricks come in.
...
I see you thought I meant pg_restore. I don't think extending
pg_restore in that way is of sufficiently
On Tue, 2008-02-26 at 11:33 -0800, Josh Berkus wrote:
We need two (or more) PostgreSQL hackers to volunteer to regularly check the
Coverity reports and either fix/forward the bugs found, or (more often) mark
them as non-bugs in the Coverity system.
I take a look at this periodically.
On Tuesday 26 February 2008 12:20, Andrew Dunstan wrote:
Robert Treat wrote:
There are a lot of people who have a database provider of some sort who
creates a database for them, giving them ownership of that specific
database, with pg_hba.conf specifying connection only to that db. They
Tom Lane [EMAIL PROTECTED] writes:
Simon Riggs [EMAIL PROTECTED] writes:
I've not been advocating improving pg_restore, which is where the -Fc
tricks come in.
...
I see you thought I meant pg_restore. I don't think extending
pg_restore in that way is of sufficiently generic use to make
There is a build farm failure now because of (apparently) an old make version:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dawn_bat&dt=2008-02-26%2019:00:01
The new amended backend linking code needs GNU make 3.80, released 2002-10-03.
Should we just require that? I think the
Gregory Stark wrote:
I think both types of probes are useful to different people.
I think certain higher level probes can be really useful to DBAs.
Perhaps looking at the standard database SNMP MIB counters would give us a
place to start for upward facing events people want to trace for
Peter Eisentraut [EMAIL PROTECTED] writes:
There is a build farm failure now because of (apparently) an old make version:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dawn_bat&dt=2008-02-26%2019:00:01
The new amended backend linking code needs GNU make 3.80, released
2002-10-03.
Are
Robert Lor [EMAIL PROTECTED] writes:
Tom Lane wrote:
I'm unimpressed; it's not at all clear that you'd be measuring quite the
same thing in, say, mysql as in postgres.
I think it depends on the probe, but for transaction rates like start or
commit, don't you think it's pretty black and white
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
We can easily, and backwards-compatibly, improve pg_restore to do
concurrent restores. Trying to make psql do something like this will
require a complete rewrite, and there is no prospect that it will work
for any
Neil,
I take a look at this periodically. Apparently the last run of the tool
for Postgres happened on October 30th -- do you know if there's a way to
schedule more frequent runs?
If we get volunteers set up, they will start running it daily.
--
--Josh
Josh Berkus
PostgreSQL @ Sun
San
Robert,
Interesting, seems pghackers dropped me from the list, so I missed
several messages in the thread.
Huh. I thought this was just my full mailbox ... I missed all of the
messages this weekend. Who else got nailed?
Marc?
--
--Josh
Josh Berkus
PostgreSQL @ Sun
San Francisco
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 14:45:23 -0800
Josh Berkus [EMAIL PROTECTED] wrote:
Neil,
I take a look at this periodically. Apparently the last run of the
tool for Postgres happened on October 30th -- do you know if
there's a way to schedule more
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 14:47:09 -0800
Josh Berkus [EMAIL PROTECTED] wrote:
Robert,
Interesting, seems pghackers dropped me from the list, so I missed
several messages in the thread.
Huh. I thought this was just my full mailbox ... I missed
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 07:43:23 -0800
Joshua D. Drake [EMAIL PROTECTED] wrote:
On Mon, 25 Feb 2008 22:25:09 -0800
Joshua D. Drake [EMAIL PROTECTED] wrote:
24 connections: 3.5 hours
12 connections: 4.5 hours
24 connections RAW_BUF_SIZE 524288
Joshua D. Drake [EMAIL PROTECTED] writes:
However one observation that I am going to (try) to test is that we are
spending a lot of time waiting for the last thread to finish.
IOW you haven't balanced the work given to each thread very well?
Or is there something else happening?
How exactly
On Tue, 2008-02-26 at 14:57 -0800, Joshua D. Drake wrote:
Would there be a way to script the responses to flag us for things
that are important?
I think you need human verification / analysis, which isn't an easy
thing to script.
-Neil
---(end of
Tom Lane wrote:
Peter Eisentraut [EMAIL PROTECTED] writes:
There is a build farm failure now because (apparently) an old make version:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dawn_bat&dt=2008-02-26%2019:00:01
The new amended backend linking code needs GNU make 3.80,
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 18:39:53 -0500
Tom Lane [EMAIL PROTECTED] wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
However one observation that I am going to (try) to test is that we
are spending a lot of time waiting for the last thread to finish.
Andrew Dunstan [EMAIL PROTECTED] writes:
only objfiles.txt uses the somewhat arcane feature that is apparently
breaking dawn_bat - it is apparently not used anywhere else in our build
system.
Is that really the only way we can do it? Maybe Peter understands it but
I don't, and I'm not
Joshua D. Drake [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
How exactly are you allocating tasks to threads in this prototype,
anyway?
Right, there is no balance here. Let me explain what I did. I performed
a pg_restore -l to get the TOC file. I then broke that file up into
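The TOC split described here can be sketched as a round-robin deal of the `pg_restore -l` listing (the listing below is fabricated; real runs would want size-aware grouping, which is exactly the thread-balance problem being discussed):

```shell
# A fabricated pg_restore -l listing of data entries (names invented).
cat > toc.list <<'EOF'
20; 0 1 TABLE DATA public t1 postgres
21; 0 2 TABLE DATA public t2 postgres
22; 0 3 TABLE DATA public t3 postgres
23; 0 4 TABLE DATA public t4 postgres
EOF

# Deal the entries round-robin into 2 per-worker list files.
awk '{ print > ("chunk" NR % 2 ".toc") }' toc.list

# Each chunk now holds half the entries.
grep -c . chunk0.toc
```

Each `chunk*.toc` would then feed its own backgrounded worker, e.g. `pg_restore -L chunk0.toc -d mydb dump.fc &`; a dumb round-robin deal leaves the largest table dominating one worker, hence the wait-for-the-last-thread effect.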
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Tue, 26 Feb 2008 19:03:57 -0500
Tom Lane [EMAIL PROTECTED] wrote:
So you have four serialization points not just one; at each one the
slowest subtask forces everyone else to wait, even if there's work
that could potentially be done on other
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
only objfiles.txt uses the somewhat arcane feature that is apparently
breaking dawn_bat - it is apparently not used anywhere else in our build
system.
Is that really the only way we can do it? Maybe Peter understands it but
I