On 23 Jul 2015 19:27, "Alvaro Herrera" wrote:
>
> Laurent Laborde wrote:
>
> > Friendly greetings !
> >
> > What's the status of parallel clusterdb, please?
> > I'm having fun (and troubles) applying the vacuumdb patch to clusterdb.
> >
> > This thread also talks about unifying code for parall
Laurent Laborde wrote:
> Friendly greetings !
>
> What's the status of parallel clusterdb, please?
> I'm having fun (and troubles) applying the vacuumdb patch to clusterdb.
>
> This thread also talks about unifying code for parallelizing clusterdb and
> reindex.
> Was anything done about it? Bec
On Fri, Jan 2, 2015 at 3:18 PM, Amit Kapila wrote:
>
>
> Okay, I have marked this patch as "Ready For Committer"
>
> Notes for Committer -
> There is one behavioural difference in the handling of --analyze-in-stages
> switch, when individual tables (by using -t option) are analyzed by
> using thi
Pavel Stehule wrote:
> shouldn't a pessimistic locking approach be used instead?
>
Patches welcome.
--
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postg
2015-01-29 10:28 GMT+01:00 Fabrízio de Royes Mello:
>
>
> On Thursday, January 29, 2015, Pavel Stehule <
> pavel.steh...@gmail.com> wrote:
>
> Hi
>>
>> I am testing this feature on a relatively complex schema (38619 tables in db)
>> and I got a deadlock
>>
>> [pavel@localhost bin]$ /usr/lo
On Thursday, January 29, 2015, Pavel Stehule <
pavel.steh...@gmail.com> wrote:
> Hi
>
> I am testing this feature on a relatively complex schema (38619 tables in db)
> and I got a deadlock
>
> [pavel@localhost bin]$ /usr/local/pgsql/bin/vacuumdb test2 -fz -j 4
> vacuumdb: vacuuming database
Hi
I am testing this feature on a relatively complex schema (38619 tables in db)
and I got a deadlock
[pavel@localhost bin]$ /usr/local/pgsql/bin/vacuumdb test2 -fz -j 4
vacuumdb: vacuuming database "test2"
vacuumdb: vacuuming of database "test2" failed: ERROR: deadlock detected
DETAIL: Process 24689
On 23 January 2015 23:55, Alvaro Herrera,
> -j1 is now the same as not specifying anything, and vacuum_one_database
> uses more common code in the parallel and not-parallel cases: the not-
> parallel case is just the parallel case with a single connection, so
> the setup and shutdown is mostly the
On 23 January 2015 21:10, Alvaro Herrera Wrote,
> In case you're up for doing some more work later on, there are two
> ideas
> here: move the backend's TranslateSocketError to src/common, and try to
> merge pg_dump's select_loop function with the one in this new code.
> But that's for another pat
Alvaro Herrera wrote:
> I'm tweaking your v24 a bit more now, thanks -- main change is to make
> vacuum_one_database be called only to run one analyze stage, so it never
> iterates for each stage; callers must iterate calling it multiple times
> in those cases. (There's only one callsite that nee
Andres Freund wrote:
> On 2014-12-31 18:35:38 +0530, Amit Kapila wrote:
> > + PQsetnonblocking(connSlot[0].connection, 1);
> > +
> > + for (i = 1; i < concurrentCons; i++)
> > + {
> > + connSlot[i].connection = connectDatabase(dbname, host, port,
> > username,
> > +
Dilip kumar wrote:
> Changes:
> 1. In the current patch, vacuum_one_database (for a table list) has the table loop
> outside and the analyze_stage loop inside, so it will finish
> all three stages for one table first and then pick the next table. But
> vacuum_one_database_parallel will do the stage loop o
On 22 January 2015 23:16, Alvaro Herrera Wrote,
> Here's v23.
>
> There are two things that continue to bother me and I would like you,
> dear patch author, to change them before committing this patch:
>
> 1. I don't like having vacuum_one_database() and a separate
> vacuum_one_database_parallel()
Here's v23.
I reworked a number of things. First, I changed trivial stuff like
grouping all the vacuuming options in a struct, to avoid passing an
excessive number of arguments to functions. full, freeze, analyze_only,
and_analyze and verbose are all in a single struct now. Also, the
stage_comm
On Thu, Jan 22, 2015 at 8:22 AM, Alvaro Herrera
wrote:
>
> Amit Kapila wrote:
> > On Wed, Jan 21, 2015 at 8:51 PM, Alvaro Herrera <
alvhe...@2ndquadrant.com>
> > wrote:
> > >
> > > I didn't understand the coding in GetQueryResult(); why do we check
the
> > > result status of the last returned resu
Amit Kapila wrote:
> On Wed, Jan 21, 2015 at 8:51 PM, Alvaro Herrera
> wrote:
> >
> > I didn't understand the coding in GetQueryResult(); why do we check the
> > result status of the last returned result only? It seems simpler to me
> > to check it inside the loop, but maybe there's a reason you
On Wed, Jan 21, 2015 at 8:51 PM, Alvaro Herrera
wrote:
>
> I didn't understand the coding in GetQueryResult(); why do we check the
> result status of the last returned result only? It seems simpler to me
> to check it inside the loop, but maybe there's a reason you didn't do it
> like that?
>
> A
I didn't understand the coding in GetQueryResult(); why do we check the
result status of the last returned result only? It seems simpler to me
to check it inside the loop, but maybe there's a reason you didn't do it
like that?
Also, what is the reason we were ignoring those errors only in
"comple
On 04 January 2015 07:27, Andres Freund Wrote,
> On 2014-12-31 18:35:38 +0530, Amit Kapila wrote:
> > + -j <replaceable class="parameter">jobs</replaceable>
> > + --jobs=<replaceable class="parameter">njobs</replaceable>
> > +
> > +
> > +Number of concurrent connections to perform the operation.
> > +This optio
On Fri, Jan 16, 2015 at 12:53 AM, Alvaro Herrera
wrote:
> Michael Paquier wrote:
>
>> Andres, this patch needs more effort from the author, right? So
>> marking it as returned with feedback.
>
> I will give this patch a look in the current commitfest, if you can
> please set as 'needs review' inst
Michael Paquier wrote:
> Andres, this patch needs more effort from the author, right? So
> marking it as returned with feedback.
I will give this patch a look in the current commitfest, if you can
please set as 'needs review' instead with me as reviewer, so that I
don't forget, I would appreciate
On Sun, Jan 4, 2015 at 10:57 AM, Andres Freund wrote:
> On 2014-12-31 18:35:38 +0530, Amit Kapila wrote:
>> + -j <replaceable class="parameter">jobs</replaceable>
>> + --jobs=<replaceable class="parameter">njobs</replaceable>
>> +
>> +
>> +Number of concurrent connections to perform the operation.
>> +This o
On 2014-12-31 18:35:38 +0530, Amit Kapila wrote:
> + -j <replaceable class="parameter">jobs</replaceable>
> + --jobs=<replaceable class="parameter">njobs</replaceable>
> +
> +
> +Number of concurrent connections to perform the operation.
> +This option will enable the vacuum operation to run on asynchronous
> +
On Fri, Jan 2, 2015 at 8:34 PM, Kevin Grittner wrote:
>
> Amit Kapila wrote:
>
> > Notes for Committer -
> > There is one behavioural difference in the handling of
--analyze-in-stages
> > switch, when individual tables (by using -t option) are analyzed by
> > using this switch, patch will process
Amit Kapila wrote:
> Notes for Committer -
> There is one behavioural difference in the handling of --analyze-in-stages
> switch, when individual tables (by using -t option) are analyzed by
> using this switch, patch will process (in case of concurrent jobs) all the
> given tables for stage-1 and
On Fri, Jan 2, 2015 at 11:47 AM, Dilip kumar wrote:
>
> On 31 December 2014 18:36, Amit Kapila Wrote,
>
> >The patch looks good to me. I have done a couple of
>
> >cosmetic changes (spelling mistakes, improve some comments,
>
> >etc.), check the same once and if you are okay, we can move
>
> >ahead
On 31 December 2014 18:36, Amit Kapila Wrote,
>The patch looks good to me. I have done a couple of
>cosmetic changes (spelling mistakes, improved some comments,
>etc.); check the same once and if you are okay, we can move
>ahead.
Thanks for the review and changes; the changes look fine to me.
Regards,
D
On Mon, Dec 29, 2014 at 11:10 AM, Dilip kumar
wrote:
>
> On 29 December 2014 10:22 Amit Kapila Wrote,
>
>
> I think nothing more to be handled from my side, you can go ahead with
review..
>
The patch looks good to me. I have done a couple of
cosmetic changes (spelling mistakes, improved some commen
On 29 December 2014 10:22 Amit Kapila Wrote,
>> Case1:In Case for CompleteDB:
>>
>> In base code first it will process all the tables in stage 1 then in stage2
>> and so on, so that at some time all the tables are analyzed at least up to
>> certain stage.
>>
>> But If we process all the stages f
On Wed, Dec 24, 2014 at 4:00 PM, Dilip kumar wrote:
>
> Case1:In Case for CompleteDB:
>
> In base code first it will process all the tables in stage 1 then in
stage2 and so on, so that at some time all the tables are analyzed at least
up to certain stage.
>
> But If we process all the stages for o
On 19 December 2014 16:41, Amit Kapila Wrote,
>One idea is to send all the stages and corresponding Analyze commands
>to server in one go which means something like
>"BEGIN; SET default_statistics_target=1; SET vacuum_cost_delay=0;
> Analyze t1; COMMIT;"
>"BEGIN; SET default_statistics_target=10;
On Mon, Dec 15, 2014 at 4:18 PM, Dilip kumar wrote:
>
> On December 2014 17:31 Amit Kapila Wrote,
>
>
> >Hmm, theoretically I think new behaviour could lead to more I/O in
>
> >certain cases as compare to existing behaviour. The reason for more I/O
>
> >is that in the new behaviour, while doing A
On December 2014 17:31 Amit Kapila Wrote,
>I suggest rather than removing, edit the comment to indicate
>the idea behind code at that place.
Done
>Okay, I think this part of code is somewhat similar to what
>is done in pg_dump/parallel.c with some differences related
>to handling of inAbort. On
On Mon, Dec 8, 2014 at 7:33 AM, Dilip kumar wrote:
> On 06 December 2014 20:01 Amit Kapila Wrote
>
> >I wanted to understand what exactly the above loop is doing.
>
>
>
> >a.
>
> >first of all the comment on top of it says "Some of the slot
>
> >are free, ...", if some slot is free, then why do yo
will break the do..while loop.
From: Amit Kapila [mailto:amit.kapil...@gmail.com]
Sent: 06 December 2014 20:01
To: Dilip kumar
Cc: Magnus Hagander; Alvaro Herrera; Jan Lentfer; Tom Lane;
PostgreSQL-development; Sawada Masahiko; Euler Taveira
Subject: Re: [HACKERS] TODO : Allow parallel core
On Sat, Dec 6, 2014 at 9:01 PM, Amit Kapila wrote:
> If you agree, then we should try to avoid this change in new behaviour.
Still seeing many concerns about this patch, so marking it as returned
with feedback. If possible, switching it to the next CF would be fine
I guess as this work is still be
On Mon, Dec 1, 2014 at 12:18 PM, Dilip kumar wrote:
>
> On 24 November 2014 11:29, Amit Kapila Wrote,
>
I have verified that all previous comments are addressed and
the new version is much better than the previous version.
>
> here we are setting each target once and doing for all the tables..
>
Hm
On 24 November 2014 11:29, Amit Kapila Wrote,
>I think still some of the comments given upthread are not handled:
>
>a. About cancel handling
Your Actual comment was -->
>One other related point is that I think still cancel handling mechanism
>is not completely right, code is doing that when th
On Mon, Nov 24, 2014 at 7:34 AM, Dilip kumar wrote:
>
> On 23 November 2014 14:45, Amit Kapila Wrote
>
>
>
> Thanks a lot for debugging and fixing the issue..
>
>
>
> Latest patch is attached, please have a look.
>
I think still some of the comments given upthread are not handled:
a. About canc
On 23 November 2014 14:45, Amit Kapila Wrote
Thanks a lot for debugging and fixing the issue..
>The stacktrace of crash is as below:
>#0 0x0080108cf3a4 in .strlen () from /lib64/libc.so.6
>#1 0x0080108925bc in ._IO_vfprintf () from /lib64/libc.so.6
>#2 0x0080108bc1e0 in .__GL__IO_
On Mon, Nov 17, 2014 at 8:55 AM, Dilip kumar wrote:
>
> On 13 November 2014 15:35 Amit Kapila Wrote,
> >As you mentioned offlist that you are not able to reproduce this
>
> >issue, I have tried again and what I observe is that I am able to
>
> >reproduce it only on *release* build and some cases
On 13 November 2014 15:35 Amit Kapila Wrote,
>As you mentioned offlist that you are not able to reproduce this
>issue, I have tried again and what I observe is that I am able to
>reproduce it only on *release* build and some cases work without
>this issue as well,
>example:
>./vacuumdb --analyze-i
On Mon, Oct 27, 2014 at 5:26 PM, Amit Kapila
wrote:
>
>
> Going further with verification of this patch, I found below issue:
> Run the testcase.sql file at below link:
>
http://www.postgresql.org/message-id/4205e661176a124faf891e0a6ba9135266347...@szxeml509-mbs.china.huawei.com
> ./vacuumdb --ana
On Tue, Oct 28, 2014 at 9:27 AM, Dilip kumar wrote:
> On 28 October 2014 09:18, Amit Kapila Wrote,
>
> >I am worried about the case if after setting the inAbort flag,
>
> >PQCancel() fails (returns error).
>
> >
>
> >> If select(maxFd + 1, workerset, NULL, NULL, &tv) comes out, we can
know whether
On 28 October 2014 09:18, Amit Kapila Wrote,
>I am worried about the case if after setting the inAbort flag,
>PQCancel() fails (returns error).
>
>> If select(maxFd + 1, workerset, NULL, NULL, &tv) comes out, we can know
>> whether it came out because of a cancelled query and handle it accordingly.
>>
On Tue, Oct 28, 2014 at 9:03 AM, Dilip kumar wrote:
>
> On 25 October 2014 17:52, Amit Kapila Wrote,
>
> >***
>
> >*** 358,363 handle_sigint(SIGNAL_ARGS)
>
> >--- 358,364
>
> >
>
> > /* Send QueryCancel if we are processing a database query */
>
> > if (cancelConn != NULL)
On 25 October 2014 17:52, Amit Kapila Wrote,
>***
>*** 358,363 handle_sigint(SIGNAL_ARGS)
>--- 358,364
>
> /* Send QueryCancel if we are processing a database query */
> if (cancelConn != NULL)
> {
>+ inAbort = true;
> if (PQcancel(cancelConn, errbuf, sizeof(errbuf)))
> f
On Sat, Oct 25, 2014 at 5:52 PM, Amit Kapila
wrote:
>
>
> ***
> *** 358,363 handle_sigint(SIGNAL_ARGS)
> --- 358,364
>
> /* Send QueryCancel if we are processing a database query */
> if (cancelConn != NULL)
> {
> + inAbort = true;
> if (PQcancel(cancelConn, errbuf, s
On Tue, Oct 7, 2014 at 11:10 AM, Dilip kumar wrote:
>
> On 26 September 2014 12:24, Amit Kapila Wrote,
>
> >I don't think this can handle cancel requests properly because
>
> >you are just setting it in GetIdleSlot() what if the cancel
>
> >request came during GetQueryResult() after sending sql fo
On 17 October 2014 14:05, Alvaro Herrera wrote:
> Of course, this is a task that requires much more thinking, design, and
> discussion than just adding multi-process capability to vacuumdb ...
Yes, please proceed with this patch as originally envisaged. No more
comments from me.
--
Simon Rigg
Amit Kapila wrote:
> On Fri, Oct 17, 2014 at 1:31 AM, Simon Riggs wrote:
> >
> > On 16 October 2014 15:09, Amit Kapila wrote:
> > c) seems like the only issue that needs any thought. I don't think its
> > going to be that hard.
> >
> > I don't see any problems with the other points. You can make
On 17 October 2014 12:52, Amit Kapila wrote:
> It is quite possible, but still I think to accomplish such a function,
> we need to have some mechanism where it can inform auto vacuum
> and then some changes in auto vacuum to receive/read that information
> and reply back. I don't think any such
On Fri, Oct 17, 2014 at 1:31 AM, Simon Riggs wrote:
>
> On 16 October 2014 15:09, Amit Kapila wrote:
>
> > I think doing anything on the server side can have higher complexity
like:
> > a. Does this function return immediately after sending request to
> > autovacuum, if yes then the behaviour of
On 16 October 2014 15:09, Amit Kapila wrote:
>> Just send a message to autovacuum to request an immediate action. Let
>> it manage the children and the tasks.
>>
>>SELECT pg_autovacuum_immediate(nworkers = N, list_of_tables);
>>
>> Request would allocate an additional N workers and immediatel
On Thu, Oct 16, 2014 at 1:56 PM, Simon Riggs wrote:
> On 16 October 2014 06:05, Amit Kapila wrote:
> > On Thu, Oct 16, 2014 at 8:08 AM, Simon Riggs
wrote:
> >>
> >> This patch seems to allow me to run multiple VACUUMs at once. But I
> >> can already do this, with autovacuum.
> >>
> >> Is there a
On 16 October 2014 06:05, Amit Kapila wrote:
> On Thu, Oct 16, 2014 at 8:08 AM, Simon Riggs wrote:
>>
>>
>> I've been trying to review this thread with the thought "what does
>> this give me?". I am keen to encourage contributions and also keen to
>> extend our feature set, but I do not wish to c
On Thu, Oct 16, 2014 at 8:08 AM, Simon Riggs wrote:
>
>
> I've been trying to review this thread with the thought "what does
> this give me?". I am keen to encourage contributions and also keen to
> extend our feature set, but I do not wish to complicate our code base.
> Dilip's developments do se
On 27 September 2014 03:55, Jeff Janes wrote:
> On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera
> wrote:
>>
>> Gavin Flower wrote:
>>
>> > Curious: would it be both feasible and useful to have multiple
>> > workers process a 'large' table, without complicating things too
>> > much? They could ea
On 26 September 2014 12:24, Amit Kapila Wrote,
>I don't think this can handle cancel requests properly because
>you are just setting it in GetIdleSlot() what if the cancel
>request came during GetQueryResult() after sending sql for
>all connections (probably thats the reason why Jeff is not able
>
On 26 September 2014 01:24, Jeff Janes Wrote,
>I think you have an off-by-one error in the index into the array of file
>handles.
>Actually the problem is that the socket for the master connection was not
>getting initialized, see my one line addition here.
> connSlot = (ParallelSlot*)pg_ma
On 27/09/14 11:36, Gregory Smith wrote:
On 9/26/14, 2:38 PM, Gavin Flower wrote:
Curious: would it be both feasible and useful to have multiple
workers process a 'large' table, without complicating things too
much? They could each start at a different position in the file.
Not really feasible
On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera
wrote:
> Gavin Flower wrote:
>
> > Curious: would it be both feasible and useful to have multiple
> > workers process a 'large' table, without complicating things too
> > much? They could each start at a different position in the file.
>
> Feasible
On 9/26/14, 2:38 PM, Gavin Flower wrote:
Curious: would it be both feasible and useful to have multiple workers
process a 'large' table, without complicating things too much? They
could each start at a different position in the file.
Not really feasible without a major overhaul. It might be m
Gavin Flower wrote:
> Curious: would it be both feasible and useful to have multiple
> workers process a 'large' table, without complicating things too
> much? They could each start at a different position in the file.
Feasible: no. Useful: maybe, we don't really know. (You could just as
well h
On 27/09/14 01:36, Alvaro Herrera wrote:
Amit Kapila wrote:
Today while again thinking about the strategy used in the patch to
parallelize the operation (vacuum database), I think we can
improve the same for cases when number of connections are
lesser than number of tables in database (which I pres
On Fri, Sep 26, 2014 at 7:06 PM, Alvaro Herrera
wrote:
>
> Amit Kapila wrote:
>
> > Today while again thinking about the strategy used in the patch to
> > parallelize the operation (vacuum database), I think we can
> > improve the same for cases when number of connections are
> > lesser than number of
Amit Kapila wrote:
> Today while again thinking about the strategy used in the patch to
> parallelize the operation (vacuum database), I think we can
> improve the same for cases when number of connections are
> lesser than number of tables in database (which I presume
> will normally be the case). C
On Wed, Sep 24, 2014 at 3:18 PM, Dilip kumar wrote:
> On 24 August 2014 11:33, Amit Kapila Wrote
>
> >7. I think in new mechanism cancel handler will not work.
>
> >In single connection vacuum it was always set/reset
>
> >in function executeMaintenanceCommand(). You might need
>
> >to set/reset it
On Thu, Sep 25, 2014 at 10:00 AM, Jeff Janes wrote:
> On Wed, Sep 24, 2014 at 2:48 AM, Dilip kumar
> wrote:
>
>> On 24 August 2014 11:33, Amit Kapila Wrote
>>
>>
>>
>> Thanks for your comments, I have worked on both the review comment lists,
>> sent on 19 August and 24 August.
>>
>>
>>
>> Lates
On Wed, Sep 24, 2014 at 2:48 AM, Dilip kumar wrote:
> On 24 August 2014 11:33, Amit Kapila Wrote
>
>
>
> Thanks for your comments, I have worked on both the review comment lists,
> sent on 19 August and 24 August.
>
>
>
> Latest patch is attached with the mail..
>
Hi Dilip,
I think you have an
On 24 August 2014 11:33, Amit Kapila Wrote
Thanks for your comments, I have worked on both the review comment lists, sent
on 19 August and 24 August.
Latest patch is attached with the mail..
on 19 August:
>You can compare against SQLSTATE by using below API.
>val = PQresultErrorFi
On Tue, Aug 19, 2014 at 4:27 PM, Amit Kapila
wrote:
>
> Few more comments:
>
Some more comments:
1. I could see one shortcoming in the way the patch has currently
parallelized the work for --analyze-in-stages. Basically the patch is
performing the work for each stage for multiple tables in concu
On 21 August 2014 08:31, Amit Kapila Wrote,
> >>
> > >Not sure. How about *concurrent* or *multiple*?
> >
> >multiple isn't right, but we could say concurrent.
>I also find concurrent more appropriate.
>Dilip, could you please change it to concurrent in doc updates,
>variables, functions unless yo
On Thu, Aug 21, 2014 at 12:04 AM, Robert Haas wrote:
>
> On Tue, Aug 19, 2014 at 7:08 AM, Amit Kapila
wrote:
> > On Fri, Aug 15, 2014 at 12:55 AM, Robert Haas
wrote:
> >>
> >> On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila
> >> wrote:
> >> >
> >> > a. How about describing w.r.t asynchronous conn
On Tue, Aug 19, 2014 at 7:08 AM, Amit Kapila wrote:
> On Fri, Aug 15, 2014 at 12:55 AM, Robert Haas wrote:
>>
>> On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila
>> wrote:
>> > 1.
>> > +Number of parallel connections to perform the operation. This
>> > option will enable the vacuum
>> > +
On Fri, Aug 15, 2014 at 12:55 AM, Robert Haas wrote:
>
> On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila
wrote:
> > 1.
> > +Number of parallel connections to perform the operation. This
> > option will enable the vacuum
> > +operation to run on parallel connections, at a time one ta
On Wed, Aug 13, 2014 at 4:01 PM, Dilip kumar wrote:
> On 11 August 2014 10:29, Amit kapila wrote,
> >5.
>
> >res = executeQuery(conn,
>
> >"select relname, nspname from pg_class c, pg_namespace ns"
>
> >" where (relkind = \'r\' or relkind = \'m\')"
>
> >" and c.
On Mon, Aug 11, 2014 at 12:59 AM, Amit Kapila wrote:
> 1.
> +Number of parallel connections to perform the operation. This
> option will enable the vacuum
> +operation to run on parallel connections, at a time one table will
> be operated on one connection.
>
> a. How about describ
On 11 August 2014 10:29, Amit kapila wrote,
1. I have fixed all the review comments except a few; the modified patch is
attached.
2. For the comments not yet fixed, find my inline replies in the mail.
>1.
>+Number of parallel connections to perform the operation. This option
>will enable th
On Mon, Aug 4, 2014 at 11:41 AM, Dilip kumar wrote:
>
> On 31 July 2014 10:59, Amit kapila Wrote,
>
>
>
> Thanks for the review and valuable comments.
> I have fixed all the comments and attached the revised patch.
I have again looked into your revised patch and would like
to share my findings wi
July 2014 10:59
To: Dilip kumar
Cc: Magnus Hagander; Alvaro Herrera; Jan Lentfer; Tom Lane;
PostgreSQL-development; Sawada Masahiko; Euler Taveira
Subject: Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb [ WIP
]
On Fri, Jul 18, 2014 at 10:22 AM, Dilip kumar
mailto:dilip.ku
On Fri, Jul 18, 2014 at 10:22 AM, Dilip kumar
wrote:
> On 16 July 2014 12:13, Magnus Hagander Wrote,
> >Yeah, those are exactly my points. I think it would be significantly
simpler to do it that way, rather than forking and threading. And also
easier to make portable...
>
> >(and as a optimizatio
Jeff Janes wrote:
> Should we push the refactoring through anyway? I have a hard time
> believing that pg_dump is going to be the only client program we ever have
> that will need process-level parallelism, even if this feature itself does
> not need it. Why make the next person who comes along
On Wed, Jul 16, 2014 at 5:30 AM, Dilip kumar wrote:
> On 16 July 2014 12:13 Magnus Hagander Wrote,
>
> >>Yeah, those are exactly my points. I think it would be significantly
> simpler to do it that way, rather than forking and threading. And also
> easier to make portable...
>
> >>(and as a opt
On 16 July 2014 12:13, Magnus Hagander Wrote,
>Yeah, those are exactly my points. I think it would be significantly simpler
>to do it that way, rather than forking and threading. And also easier to make
>portable...
>(and as a optimization on Alvaros suggestion, you can of course reuse the
>
remove the vac_parallel.h file, and no refactoring is needed
either :)
Thanks & Regards,
Dilip Kumar
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: 16 July 2014 12:13
To: Alvaro Herrera
Cc: Dilip kumar; Jan Lentfer; Tom Lane; PostgreSQL-development; Sawada
Masahiko; Euler Taveira
Subject: R
On Jul 16, 2014 7:05 AM, "Alvaro Herrera" wrote:
>
> Tom Lane wrote:
> > Dilip kumar writes:
> > > On 15 July 2014 19:01, Magnus Hagander Wrote,
> > >> I am late to this game, but the first thing to my mind was - do we
> > >> really need the whole forking/threading thing on the client at all?
> >
Tom Lane wrote:
> Dilip kumar writes:
> > On 15 July 2014 19:01, Magnus Hagander Wrote,
> >> I am late to this game, but the first thing to my mind was - do we
> >> really need the whole forking/threading thing on the client at all?
>
> > Thanks for the review, I understand you point, but I think
Dilip kumar writes:
> On 15 July 2014 19:01, Magnus Hagander Wrote,
>> I am late to this game, but the first thing to my mind was - do we
>> really need the whole forking/threading thing on the client at all?
> Thanks for the review, I understand you point, but I think if we have do this
> direc
On Tue, Jul 1, 2014 at 6:25 AM, Dilip kumar wrote:
> On 01 July 2014 03:48, Alvaro Wrote,
>
>> > In particular, pgpipe is almost an exact duplicate between them,
>> > except the copy in vac_parallel.c has fallen behind changes made to
>> > parallel.c. (Those changes would have fixed the Windows w
On Fri, Jun 27, 2014 at 4:10 AM, Dilip kumar wrote:
> On 27 June 2014 02:57, Jeff Wrote,
>
>> Based on that, I find most importantly that it doesn't seem to
>> correctly vacuum tables which have upper case letters in the name,
>> because it does not quote the table names when they need quotes.
>
>
On Fri, Jul 4, 2014 at 1:15 AM, Dilip kumar wrote:
> In attached patch, I have moved pgpipe, piperead functions to src/port/pipe.c
If we want to consider proceeding with this approach, you should
probably separate this into a refactoring patch that doesn't do
anything but move code around and a f
On Wed, Jul 2, 2014 at 11:45 PM, Alvaro Herrera
wrote:
>
> Jeff Janes wrote:
>
> > I would only envision using the parallel feature for vacuumdb after a
> > pg_upgrade or some other major maintenance window (that is the only
> > time I ever envision using vacuumdb at all). I don't think autovacuu
On Wed, Jul 2, 2014 at 2:27 PM, Dilip kumar wrote:
> On 01 July 2014 22:17, Sawada Masahiko Wrote,
>
>
>> I have executed latest patch.
>> One question is that how to use --jobs option is correct?
>> $ vacuumdb -d postgres --jobs=30
>>
>> I got following error.
>> vacuumdb: unrecognized option '
Jeff Janes wrote:
> I would only envision using the parallel feature for vacuumdb after a
> pg_upgrade or some other major maintenance window (that is the only
> time I ever envision using vacuumdb at all). I don't think autovacuum
> can be expected to handle such situations well, as it is design
On Mon, Jun 30, 2014 at 3:17 PM, Alvaro Herrera
wrote:
> Jeff Janes wrote:
>
>> In particular, pgpipe is almost an exact duplicate between them,
>> except the copy in vac_parallel.c has fallen behind changes made to
>> parallel.c. (Those changes would have fixed the Windows warnings). I
>> think
On Tue, Jul 1, 2014 at 1:25 PM, Dilip kumar wrote:
> On 01 July 2014 03:48, Alvaro Wrote,
>
>> > In particular, pgpipe is almost an exact duplicate between them,
>> > except the copy in vac_parallel.c has fallen behind changes made to
>> > parallel.c. (Those changes would have fixed the Windows w
On 01 July 2014 03:48, Alvaro Wrote,
> > In particular, pgpipe is almost an exact duplicate between them,
> > except the copy in vac_parallel.c has fallen behind changes made to
> > parallel.c. (Those changes would have fixed the Windows warnings).
> I
> > think that this function (and perhaps ot
On 01 July 2014 03:31, Jeff Janes Wrote,
>
> I get a compiler warning when building on Windows. When I started
> looking into that, I see that two files have too much code duplication
> between them:
Thanks for Reviewing,
>
> src/bin/scripts/vac_parallel.c (new file)
> src/bin/pg_dump/paral
Jeff Janes wrote:
> In particular, pgpipe is almost an exact duplicate between them,
> except the copy in vac_parallel.c has fallen behind changes made to
> parallel.c. (Those changes would have fixed the Windows warnings). I
> think that this function (and perhaps other parts as
> well--"exit_h