On 23 Jul 2015 at 19:27, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Laurent Laborde wrote:
Friendly greetings !
What's the status of parallel clusterdb please ?
I'm having fun (and troubles) applying the vacuumdb patch to clusterdb.
This thread also talks about unifying code
On Fri, Jan 2, 2015 at 3:18 PM, Amit Kapila amit.kapil...@gmail.com wrote:
Okay, I have marked this patch as Ready For Committer
Notes for Committer -
There is one behavioural difference in the handling of --analyze-in-stages
switch, when individual tables (by using -t option) are analyzed
Laurent Laborde wrote:
Friendly greetings !
What's the status of parallel clusterdb please ?
I'm having fun (and troubles) applying the vacuumdb patch to clusterdb.
This thread also talks about unifying code for parallelizing clusterdb and
reindex.
Was anything done about it? Because i
2015-01-29 10:28 GMT+01:00 Fabrízio de Royes Mello fabriziome...@gmail.com:
On Thursday, 29 January 2015, Pavel Stehule
pavel.steh...@gmail.com wrote:
Hi
I am testing this feature on a relatively complex schema (38619 tables in the db)
and I got a deadlock
[pavel@localhost bin]$
On Thursday, 29 January 2015, Pavel Stehule
pavel.steh...@gmail.com wrote:
Hi
I am testing this feature on a relatively complex schema (38619 tables in the db)
and I got a deadlock
[pavel@localhost bin]$ /usr/local/pgsql/bin/vacuumdb test2 -fz -j 4
vacuumdb: vacuuming database test2
Pavel Stehule wrote:
shouldn't pessimistic locking be used instead?
Patches welcome.
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
--
Sent via pgsql-hackers mailing list
Hi
I am testing this feature on a relatively complex schema (38619 tables in the db)
and I got a deadlock
[pavel@localhost bin]$ /usr/local/pgsql/bin/vacuumdb test2 -fz -j 4
vacuumdb: vacuuming database test2
vacuumdb: vacuuming of database test2 failed: ERROR: deadlock detected
DETAIL: Process 24689
On 23 January 2015 21:10, Alvaro Herrera Wrote,
In case you're up for doing some more work later on, there are two
ideas
here: move the backend's TranslateSocketError to src/common, and try to
merge pg_dump's select_loop function with the one in this new code.
But that's for another patch
On 23 January 2015 23:55, Alvaro Herrera wrote,
-j1 is now the same as not specifying anything, and vacuum_one_database
uses more common code in the parallel and not-parallel cases: the not-
parallel case is just the parallel case with a single connection, so
the setup and shutdown is mostly the
Alvaro Herrera wrote:
I'm tweaking your v24 a bit more now, thanks -- main change is to make
vacuum_one_database be called only to run one analyze stage, so it never
iterates for each stage; callers must iterate calling it multiple times
in those cases. (There's only one callsite that needs
Andres Freund wrote:
On 2014-12-31 18:35:38 +0530, Amit Kapila wrote:
+ PQsetnonblocking(connSlot[0].connection, 1);
+
+ for (i = 1; i < concurrentCons; i++)
+ {
+ connSlot[i].connection = connectDatabase(dbname, host, port,
username,
+
On 22 January 2015 23:16, Alvaro Herrera Wrote,
Here's v23.
There are two things that continue to bother me and I would like you,
dear patch author, to change them before committing this patch:
1. I don't like having vacuum_one_database() and a separate
vacuum_one_database_parallel(). I
Dilip kumar wrote:
Changes:
1. In the current patch, vacuum_one_database (for a table list) has the table loop
outside and the analyze_stage loop inside, so it will finish
all three stages for one table first and then pick the next table. But
vacuum_one_database_parallel will do the stage loop
Here's v23.
I reworked a number of things. First, I changed trivial stuff like
grouping all the vacuuming options in a struct, to avoid passing an
excessive number of arguments to functions. full, freeze, analyze_only,
and_analyze and verbose are all in a single struct now. Also, the
On Thu, Jan 22, 2015 at 8:22 AM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
Amit Kapila wrote:
On Wed, Jan 21, 2015 at 8:51 PM, Alvaro Herrera
alvhe...@2ndquadrant.com
wrote:
I didn't understand the coding in GetQueryResult(); why do we check the
result status of the last
Amit Kapila wrote:
On Wed, Jan 21, 2015 at 8:51 PM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
I didn't understand the coding in GetQueryResult(); why do we check the
result status of the last returned result only? It seems simpler to me
to check it inside the loop, but maybe
I didn't understand the coding in GetQueryResult(); why do we check the
result status of the last returned result only? It seems simpler to me
to check it inside the loop, but maybe there's a reason you didn't do it
like that?
Also, what is the reason we were ignoring those errors only in
Michael Paquier wrote:
Andres, this patch needs more effort from the author, right? So
marking it as returned with feedback.
I will give this patch a look in the current commitfest, if you can
please set as 'needs review' instead with me as reviewer, so that I
don't forget, I would appreciate
On Fri, Jan 16, 2015 at 12:53 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Michael Paquier wrote:
Andres, this patch needs more effort from the author, right? So
marking it as returned with feedback.
I will give this patch a look in the current commitfest, if you can
please set as
On Sun, Jan 4, 2015 at 10:57 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-12-31 18:35:38 +0530, Amit Kapila wrote:
+ <term><option>-j <replaceable class="parameter">jobs</replaceable></option></term>
+ <term><option>--jobs=<replaceable class="parameter">njobs</replaceable></option></term>
+
On 2014-12-31 18:35:38 +0530, Amit Kapila wrote:
+ <term><option>-j <replaceable class="parameter">jobs</replaceable></option></term>
+ <term><option>--jobs=<replaceable class="parameter">njobs</replaceable></option></term>
+ <listitem>
+ <para>
+ Number of concurrent connections to perform
On Fri, Jan 2, 2015 at 11:47 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 31 December 2014 18:36, Amit Kapila Wrote,
The patch looks good to me. I have done a couple of
cosmetic changes (spelling mistakes, improved some comments,
etc.); check the same once and if you are okay, we can
Amit Kapila amit.kapil...@gmail.com wrote:
Notes for Committer -
There is one behavioural difference in the handling of --analyze-in-stages
switch, when individual tables (by using -t option) are analyzed by
using this switch, patch will process (in case of concurrent jobs) all the
given
On Fri, Jan 2, 2015 at 8:34 PM, Kevin Grittner kgri...@ymail.com wrote:
Amit Kapila amit.kapil...@gmail.com wrote:
Notes for Committer -
There is one behavioural difference in the handling of
--analyze-in-stages
switch, when individual tables (by using -t option) are analyzed by
using
On Mon, Dec 29, 2014 at 11:10 AM, Dilip kumar dilip.ku...@huawei.com
wrote:
On 29 December 2014 10:22 Amit Kapila Wrote,
I think nothing more to be handled from my side, you can go ahead with
review..
The patch looks good to me. I have done a couple of
cosmetic changes (spelling mistakes,
On Wed, Dec 24, 2014 at 4:00 PM, Dilip kumar dilip.ku...@huawei.com wrote:
Case 1: In the case of a complete DB:
In the base code it will first process all the tables in stage 1, then in
stage 2 and so on, so that at some point all the tables are analyzed at least
up to a certain stage.
But if we process all
On 29 December 2014 10:22 Amit Kapila Wrote,
Case 1: In the case of a complete DB:
In the base code it will first process all the tables in stage 1, then in stage 2
and so on, so that at some point all the tables are analyzed at least up to
a certain stage.
But if we process all the stages for one table
On 19 December 2014 16:41, Amit Kapila Wrote,
One idea is to send all the stages and corresponding Analyze commands
to server in one go which means something like
BEGIN; SET default_statistics_target=1; SET vacuum_cost_delay=0;
Analyze t1; COMMIT;
BEGIN; SET default_statistics_target=10; RESET
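The staged approach being discussed corresponds to what --analyze-in-stages does: three ANALYZE passes with increasing statistics targets. A rough sketch of the per-stage commands for one table (illustrative, not the tool's exact output):

```sql
-- Stage 1: cheapest pass; minimal statistics so plans are usable at all
BEGIN;
SET default_statistics_target = 1;
SET vacuum_cost_delay = 0;
ANALYZE t1;
COMMIT;

-- Stage 2: intermediate target
BEGIN;
SET default_statistics_target = 10;
RESET vacuum_cost_delay;
ANALYZE t1;
COMMIT;

-- Stage 3: full statistics with the configured default target
BEGIN;
RESET default_statistics_target;
ANALYZE t1;
COMMIT;
```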
On Mon, Dec 15, 2014 at 4:18 PM, Dilip kumar dilip.ku...@huawei.com wrote:
On December 2014 17:31 Amit Kapila Wrote,
Hmm, theoretically I think the new behaviour could lead to more I/O in
certain cases as compared to the existing behaviour. The reason for more I/O
is that in the new behaviour,
On December 2014 17:31 Amit Kapila Wrote,
I suggest rather than removing, edit the comment to indicate
the idea behind code at that place.
Done
Okay, I think this part of code is somewhat similar to what
is done in pg_dump/parallel.c with some differences related
to handling of inAbort. One
On Sat, Dec 6, 2014 at 9:01 PM, Amit Kapila amit.kapil...@gmail.com wrote:
If you agree, then we should try to avoid this change in new behaviour.
Still seeing many concerns about this patch, so marking it as returned
with feedback. If possible, switching it to the next CF would be fine
I guess
December 2014 20:01
To: Dilip kumar
Cc: Magnus Hagander; Alvaro Herrera; Jan Lentfer; Tom Lane;
PostgreSQL-development; Sawada Masahiko; Euler Taveira
Subject: Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb [WIP]
On Mon, Dec 1, 2014 at 12:18 PM, Dilip kumar
dilip.ku
On Mon, Dec 8, 2014 at 7:33 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 06 December 2014 20:01 Amit Kapila Wrote
I wanted to understand what exactly the above loop is doing.
a.
first of all the comment on top of it says Some of the slot
are free, ..., if some slot is free, then why
On Mon, Dec 1, 2014 at 12:18 PM, Dilip kumar dilip.ku...@huawei.com wrote:
On 24 November 2014 11:29, Amit Kapila Wrote,
I have verified that all previous comments are addressed and
the new version is much better than the previous version.
here we are setting each target once and doing for all
On 24 November 2014 11:29, Amit Kapila Wrote,
I think still some of the comments given upthread are not handled:
a. About cancel handling
Your Actual comment was --
One other related point is that I think still cancel handling mechanism
is not completely right, code is doing that when there
On 23 November 2014 14:45, Amit Kapila Wrote
Thanks a lot for debugging and fixing the issue..
The stacktrace of crash is as below:
#0 0x0080108cf3a4 in .strlen () from /lib64/libc.so.6
#1 0x0080108925bc in ._IO_vfprintf () from /lib64/libc.so.6
#2 0x0080108bc1e0 in
On Mon, Nov 24, 2014 at 7:34 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 23 November 2014 14:45, Amit Kapila Wrote
Thanks a lot for debugging and fixing the issue..
Latest patch is attached, please have a look.
I think still some of the comments given upthread are not handled:
a.
On Mon, Nov 17, 2014 at 8:55 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 13 November 2014 15:35 Amit Kapila Wrote,
As mentioned by you offlist that you are not able to reproduce this
issue, I have tried again and what I observe is that I am able to
reproduce it only on a *release* build
On 13 November 2014 15:35 Amit Kapila Wrote,
As mentioned by you offlist that you are not able to reproduce this
issue, I have tried again and what I observe is that I am able to
reproduce it only on a *release* build, and some cases work without
this issue as well,
example:
./vacuumdb
On Mon, Oct 27, 2014 at 5:26 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
Going further with verification of this patch, I found below issue:
Run the testcase.sql file at below link:
http://www.postgresql.org/message-id/4205e661176a124faf891e0a6ba9135266347...@szxeml509-mbs.china.huawei.com
On Tue, Oct 28, 2014 at 9:27 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 28 October 2014 09:18, Amit Kapila Wrote,
I am worried about the case if after setting the inAbort flag,
PQCancel() fails (returns error).
If select(maxFd + 1, workerset, NULL, NULL, tv); comes out, we can
know
On Sat, Oct 25, 2014 at 5:52 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
***
*** 358,363 handle_sigint(SIGNAL_ARGS)
--- 358,364
/* Send QueryCancel if we are processing a database query */
if (cancelConn != NULL)
{
+ inAbort = true;
if
On 25 October 2014 17:52, Amit Kapila Wrote,
***
*** 358,363 handle_sigint(SIGNAL_ARGS)
--- 358,364
/* Send QueryCancel if we are processing a database query */
if (cancelConn != NULL)
{
+ inAbort = true;
if (PQcancel(cancelConn, errbuf, sizeof(errbuf)))
On Tue, Oct 28, 2014 at 9:03 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 25 October 2014 17:52, Amit Kapila Wrote,
***
*** 358,363 handle_sigint(SIGNAL_ARGS)
--- 358,364
/* Send QueryCancel if we are processing a database query */
if (cancelConn !=
On 28 October 2014 09:18, Amit Kapila Wrote,
I am worried about the case if after setting the inAbort flag,
PQCancel() fails (returns error).
If select(maxFd + 1, workerset, NULL, NULL, tv); comes out, we can know
whether it came out because of a cancel query and handle it accordingly.
Yeah,
On Tue, Oct 7, 2014 at 11:10 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 26 September 2014 12:24, Amit Kapila Wrote,
I don't think this can handle cancel requests properly because
you are just setting it in GetIdleSlot() what if the cancel
request came during GetQueryResult() after
On Fri, Oct 17, 2014 at 1:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
On 16 October 2014 15:09, Amit Kapila amit.kapil...@gmail.com wrote:
I think doing anything on the server side can have higher complexity
like:
a. Does this function return immediately after sending request to
On 17 October 2014 12:52, Amit Kapila amit.kapil...@gmail.com wrote:
It is quite possible, but still I think to accomplish such a function,
we need to have some mechanism where it can inform auto vacuum
and then some changes in auto vacuum to receive/read that information
and reply back. I
Amit Kapila wrote:
On Fri, Oct 17, 2014 at 1:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
On 16 October 2014 15:09, Amit Kapila amit.kapil...@gmail.com wrote:
c) seems like the only issue that needs any thought. I don't think it's
going to be that hard.
I don't see any problems
On 17 October 2014 14:05, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Of course, this is a task that requires much more thinking, design, and
discussion than just adding multi-process capability to vacuumdb ...
Yes, please proceed with this patch as originally envisaged. No more
comments
On 16 October 2014 06:05, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Oct 16, 2014 at 8:08 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've been trying to review this thread with the thought "what does
this give me?". I am keen to encourage contributions and also keen to
extend our
On Thu, Oct 16, 2014 at 1:56 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 16 October 2014 06:05, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Oct 16, 2014 at 8:08 AM, Simon Riggs si...@2ndquadrant.com
wrote:
This patch seems to allow me to run multiple VACUUMs at once. But I
can
On 16 October 2014 15:09, Amit Kapila amit.kapil...@gmail.com wrote:
Just send a message to autovacuum to request an immediate action. Let
it manage the children and the tasks.
SELECT pg_autovacuum_immediate(nworkers = N, list_of_tables);
Request would allocate an additional N workers
On 27 September 2014 03:55, Jeff Janes jeff.ja...@gmail.com wrote:
On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
Gavin Flower wrote:
Curious: would it be both feasible and useful to have multiple
workers process a 'large' table, without complicating
On Thu, Oct 16, 2014 at 8:08 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've been trying to review this thread with the thought "what does
this give me?". I am keen to encourage contributions and also keen to
extend our feature set, but I do not wish to complicate our code base.
Dilip's
On 26 September 2014 01:24, Jeff Janes Wrote,
I think you have an off-by-one error in the index into the array of file
handles.
Actually the problem is that the socket for the master connection was not
getting initialized, see my one line addition here.
connSlot =
On 26 September 2014 12:24, Amit Kapila Wrote,
I don't think this can handle cancel requests properly because
you are just setting it in GetIdleSlot() what if the cancel
request came during GetQueryResult() after sending sql for
all connections (probably that's the reason why Jeff is not able
to
On Wed, Sep 24, 2014 at 3:18 PM, Dilip kumar dilip.ku...@huawei.com wrote:
On 24 August 2014 11:33, Amit Kapila Wrote
7. I think in new mechanism cancel handler will not work.
In single connection vacuum it was always set/reset
in function executeMaintenanceCommand(). You might need
to
Amit Kapila wrote:
Today while again thinking about the strategy used in the patch to
parallelize the operation (vacuum database), I think we can
improve the same for cases when the number of connections is
less than the number of tables in the database (which I presume
will normally be the case).
On Fri, Sep 26, 2014 at 7:06 PM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
Amit Kapila wrote:
Today while again thinking about the strategy used in the patch to
parallelize the operation (vacuum database), I think we can
improve the same for cases when the number of connections is
less
On 27/09/14 01:36, Alvaro Herrera wrote:
Amit Kapila wrote:
Today while again thinking about the strategy used in the patch to
parallelize the operation (vacuum database), I think we can
improve the same for cases when the number of connections is
less than the number of tables in the database (which I
Gavin Flower wrote:
Curious: would it be both feasible and useful to have multiple
workers process a 'large' table, without complicating things too
much? They could each start at a different position in the file.
Feasible: no. Useful: maybe, we don't really know. (You could just as
well
On 9/26/14, 2:38 PM, Gavin Flower wrote:
Curious: would it be both feasible and useful to have multiple workers
process a 'large' table, without complicating things too much? They
could each start at a different position in the file.
Not really feasible without a major overhaul. It might be
On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
Gavin Flower wrote:
Curious: would it be both feasible and useful to have multiple
workers process a 'large' table, without complicating things too
much? They could each start at a different position in the
On 27/09/14 11:36, Gregory Smith wrote:
On 9/26/14, 2:38 PM, Gavin Flower wrote:
Curious: would it be both feasible and useful to have multiple
workers process a 'large' table, without complicating things too
much? They could each start at a different position in the file.
Not really
On Wed, Sep 24, 2014 at 2:48 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 24 August 2014 11:33, Amit Kapila Wrote
Thanks for your comments, I have worked on both the review comment lists,
sent on 19 August, and 24 August.
Latest patch is attached with the mail..
Hi Dilip,
I think
On Thu, Sep 25, 2014 at 10:00 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Sep 24, 2014 at 2:48 AM, Dilip kumar dilip.ku...@huawei.com
wrote:
On 24 August 2014 11:33, Amit Kapila Wrote
Thanks for your comments, I have worked on both the review comment lists,
sent on 19 August, and
On 24 August 2014 11:33, Amit Kapila Wrote
Thanks for your comments, I have worked on both the review comment lists, sent
on 19 August, and 24 August.
Latest patch is attached with the mail..
on 19 August:
You can compare against SQLSTATE by using below API.
val =
On Tue, Aug 19, 2014 at 4:27 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
Few more comments:
Some more comments:
1. I could see one shortcoming in the way the patch currently
parallelizes the
work for --analyze-in-stages. Basically the patch is performing the work for
each stage
for
On 21 August 2014 08:31, Amit Kapila Wrote,
Not sure. How about *concurrent* or *multiple*?
multiple isn't right, but we could say concurrent.
I also find concurrent more appropriate.
Dilip, could you please change it to concurrent in doc updates,
variables, functions unless you see any
On Tue, Aug 19, 2014 at 7:08 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Aug 15, 2014 at 12:55 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
1.
+Number of parallel connections to perform the
On Thu, Aug 21, 2014 at 12:04 AM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Aug 19, 2014 at 7:08 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Fri, Aug 15, 2014 at 12:55 AM, Robert Haas robertmh...@gmail.com
wrote:
On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila
On Wed, Aug 13, 2014 at 4:01 PM, Dilip kumar dilip.ku...@huawei.com wrote:
On 11 August 2014 10:29, Amit kapila wrote,
5.
res = executeQuery(conn,
select relname, nspname from pg_class c, pg_namespace ns
where (relkind = \'r\' or relkind = \'m\')
and
On Fri, Aug 15, 2014 at 12:55 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
1.
+Number of parallel connections to perform the operation. This
option will enable the vacuum
+operation to run on
On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila amit.kapil...@gmail.com wrote:
1.
+Number of parallel connections to perform the operation. This
option will enable the vacuum
+operation to run on parallel connections, at a time one table will
be operated on one connection.
a.
On 11 August 2014 10:29, Amit kapila wrote,
1. I have fixed all the review comments except a few, and the modified patch is
attached.
2. For not fixed comments, find inline reply in the mail..
1.
+Number of parallel connections to perform the operation. This option
will enable the
On Mon, Aug 4, 2014 at 11:41 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 31 July 2014 10:59, Amit kapila Wrote,
Thanks for the review and valuable comments.
I have fixed all the comments and attached the revised patch.
I have again looked into your revised patch and would like
to
July 2014 10:59
To: Dilip kumar
Cc: Magnus Hagander; Alvaro Herrera; Jan Lentfer; Tom Lane;
PostgreSQL-development; Sawada Masahiko; Euler Taveira
Subject: Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb [WIP]
On Fri, Jul 18, 2014 at 10:22 AM, Dilip kumar
dilip.ku
On Fri, Jul 18, 2014 at 10:22 AM, Dilip kumar dilip.ku...@huawei.com
wrote:
On 16 July 2014 12:13, Magnus Hagander Wrote,
Yeah, those are exactly my points. I think it would be significantly
simpler to do it that way, rather than forking and threading. And also
easier to make portable...
(and
On Wed, Jul 16, 2014 at 5:30 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 16 July 2014 12:13 Magnus Hagander Wrote,
Yeah, those are exactly my points. I think it would be significantly
simpler to do it that way, rather than forking and threading. And also
easier to make portable...
Jeff Janes wrote:
Should we push the refactoring through anyway? I have a hard time
believing that pg_dump is going to be the only client program we ever have
that will need process-level parallelism, even if this feature itself does
not need it. Why make the next person who comes along
On 16 July 2014 12:13, Magnus Hagander Wrote,
Yeah, those are exactly my points. I think it would be significantly simpler
to do it that way, rather than forking and threading. And also easier to make
portable...
(and as an optimization on Alvaro's suggestion, you can of course reuse the
On Jul 16, 2014 7:05 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Tom Lane wrote:
Dilip kumar dilip.ku...@huawei.com writes:
On 15 July 2014 19:01, Magnus Hagander Wrote,
I am late to this game, but the first thing to my mind was - do we
really need the whole forking/threading
:)
Thanks Regards,
Dilip Kumar
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: 16 July 2014 12:13
To: Alvaro Herrera
Cc: Dilip kumar; Jan Lentfer; Tom Lane; PostgreSQL-development; Sawada
Masahiko; Euler Taveira
Subject: Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb
On Tue, Jul 1, 2014 at 6:25 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 01 July 2014 03:48, Alvaro Wrote,
In particular, pgpipe is almost an exact duplicate between them,
except the copy in vac_parallel.c has fallen behind changes made to
parallel.c. (Those changes would have fixed
Dilip kumar dilip.ku...@huawei.com writes:
On 15 July 2014 19:01, Magnus Hagander Wrote,
I am late to this game, but the first thing to my mind was - do we
really need the whole forking/threading thing on the client at all?
Thanks for the review, I understand your point, but I think if we have
Tom Lane wrote:
Dilip kumar dilip.ku...@huawei.com writes:
On 15 July 2014 19:01, Magnus Hagander Wrote,
I am late to this game, but the first thing to my mind was - do we
really need the whole forking/threading thing on the client at all?
Thanks for the review, I understand your point,
On Fri, Jun 27, 2014 at 4:10 AM, Dilip kumar dilip.ku...@huawei.com wrote:
On 27 June 2014 02:57, Jeff Wrote,
Based on that, I find most importantly that it doesn't seem to
correctly vacuum tables which have upper case letters in the name,
because it does not quote the table names when they
On Fri, Jul 4, 2014 at 1:15 AM, Dilip kumar dilip.ku...@huawei.com wrote:
In attached patch, I have moved pgpipe, piperead functions to src/port/pipe.c
If we want to consider proceeding with this approach, you should
probably separate this into a refactoring patch that doesn't do
anything but
On Wed, Jul 2, 2014 at 11:45 PM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
Jeff Janes wrote:
I would only envision using the parallel feature for vacuumdb after a
pg_upgrade or some other major maintenance window (that is the only
time I ever envision using vacuumdb at all). I don't
On Mon, Jun 30, 2014 at 3:17 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Jeff Janes wrote:
In particular, pgpipe is almost an exact duplicate between them,
except the copy in vac_parallel.c has fallen behind changes made to
parallel.c. (Those changes would have fixed the Windows
Jeff Janes wrote:
I would only envision using the parallel feature for vacuumdb after a
pg_upgrade or some other major maintenance window (that is the only
time I ever envision using vacuumdb at all). I don't think autovacuum
can be expected to handle such situations well, as it is designed
On Wed, Jul 2, 2014 at 2:27 PM, Dilip kumar dilip.ku...@huawei.com wrote:
On 01 July 2014 22:17, Sawada Masahiko Wrote,
I have executed the latest patch.
One question: is this use of the --jobs option correct?
$ vacuumdb -d postgres --jobs=30
I got following error.
vacuumdb: unrecognized
On Tue, Jul 1, 2014 at 1:25 PM, Dilip kumar dilip.ku...@huawei.com wrote:
On 01 July 2014 03:48, Alvaro Wrote,
In particular, pgpipe is almost an exact duplicate between them,
except the copy in vac_parallel.c has fallen behind changes made to
parallel.c. (Those changes would have fixed
On Fri, Jun 27, 2014 at 4:10 AM, Dilip kumar dilip.ku...@huawei.com wrote:
...
Updated patch is attached in the mail..
Thanks Dilip.
I get a compiler warning when building on Windows. When I started
looking into that, I see that two files have too much code duplication
between them:
Jeff Janes wrote:
In particular, pgpipe is almost an exact duplicate between them,
except the copy in vac_parallel.c has fallen behind changes made to
parallel.c. (Those changes would have fixed the Windows warnings). I
think that this function (and perhaps other parts as
On 01 July 2014 03:31, Jeff Janes Wrote,
I get a compiler warning when building on Windows. When I started
looking into that, I see that two files have too much code duplication
between them:
Thanks for Reviewing,
src/bin/scripts/vac_parallel.c (new file)
src/bin/pg_dump/parallel.c
On 01 July 2014 03:48, Alvaro Wrote,
In particular, pgpipe is almost an exact duplicate between them,
except the copy in vac_parallel.c has fallen behind changes made to
parallel.c. (Those changes would have fixed the Windows warnings).
I
think that this function (and perhaps other
On Thu, Jun 26, 2014 at 2:35 AM, Dilip kumar dilip.ku...@huawei.com wrote:
Thank you for giving your time. Please review the updated patch attached to
the mail.
1. Rebased the patch
2. Implemented parallel execution for new option --analyze-in-stages
Hi Dilip,
Thanks for
Hi,
I got the following FAILED hunks when I patched v3 to HEAD.
$ patch -d. -p1 ../patch/vacuumdb_parallel_v3.patch
patching file doc/src/sgml/ref/vacuumdb.sgml
Hunk #1 succeeded at 224 (offset 20 lines).
patching file src/bin/scripts/Makefile
Hunk #2 succeeded at 65 with fuzz 2 (offset -1 lines).