Stefan Beller <sbel...@google.com> writes:

> I'd like to counter your argument with quoting code from update_clone
> method:
> 
>      run_processes_parallel(1, get_next_task, start_failure,
>                             task_finished, &pp);
>
>      if (pp.print_unmatched) {
>          printf("#unmatched\n");
>          return 1;
>      }
>
>      for_each_string_list_item(item, &pp.projectlines) {
>          utf8_fprintf(stdout, "%s", item->string);
>      }
>
> So we already do all the cloning first, and only once all of that is
> done do we print the accumulated lines of text. (It was harder to come
> up with a suitable file name than to just store the lines in memory,
> and I don't think memory is an issue here: it is only a hundred bytes
> or so per submodule, so even 1000 submodules would consume maybe 100kB.)

That does not sound like a counter-argument; two bad design choices
compensating for each other's shortcomings, perhaps ;-)

> Having a file, though, would allow us to continue after human
> intervention has fixed a problem.

Yes.  That does sound like a better design.
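
Just to make the idea concrete, here is a rough sketch of what such a
state file could look like; the file path and helper names below are
made up for illustration, and it uses plain stdio rather than the
strbuf and lockfile machinery the real code would want:

    /*
     * Sketch only: remember which submodules have been handled so
     * an interrupted "submodule update" can be resumed later.
     * Path and helpers are hypothetical, not existing git code.
     */
    #include <stdio.h>
    #include <string.h>

    static const char *state_file = ".git/submodule-update-state";

    /* Append one successfully handled submodule to the state file. */
    static int record_done(const char *name)
    {
            FILE *fp = fopen(state_file, "a");
            if (!fp)
                    return -1;
            fprintf(fp, "%s\n", name);
            return fclose(fp);
    }

    /* Did an earlier, interrupted run already handle this one? */
    static int already_done(const char *name)
    {
            char line[1024];
            FILE *fp = fopen(state_file, "r");
            int found = 0;

            if (!fp)
                    return 0;
            while (!found && fgets(line, sizeof(line), fp)) {
                    line[strcspn(line, "\n")] = '\0';
                    if (!strcmp(line, name))
                            found = 1;
            }
            fclose(fp);
            return found;
    }

A rerun would ask already_done() before queuing each clone, call
record_done() from the task-finished callback, and remove the file
once everything has succeeded.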

This obviously depends on the impact on the other part of what
cmd_update() does, but your earlier idea to investigate the
feasibility and usefulness of updating "clone --recurse-submodules"
does sound like a good thing to do, too.  That's an excellent point.