On Sat, Oct 17, 2015 at 03:49:17PM -0700, John Johansen wrote:
> This adds a basic support for parallel compiles. It uses a fork()/wait
> @@ -286,8 +324,13 @@ static int process_arg(int c, char *optarg)
>               option = OPTION_ADD;
>               break;
>       case 'd':
> -             debug++;
> -             skip_read_cache = 1;
> +             if (!optarg) {
> +                     debug++;
> +                     skip_read_cache = 1;
> +             } else if (strcmp(optarg, "jobs") == 0 ||
> +                        strcmp(optarg, "j") == 0) {
> +                     debug_jobs = true;
> +             }
>               break;

This could use an 'else' branch to report the unknown argument in optarg,
so that e.g. "-d job" or "-d foo" will exit with an error message rather
than continue on silently.

> +#define work_spawn(WORK, RESULT)                                     \
> +do {                                                                 \
> +     if (jobs_max == 1) {                                            \
> +             /* no parallel work so avoid fork() overhead */         \
> +             RESULT(WORK);                                           \
> +             break;                                                  \
> +     }                                                               \

I'm not sure I like this optimization; even though it looks correct, I
fear it would complicate future maintenance efforts. (If we use
work_spawn to work on a single item and then call work_sync_one(), the
whole thing will exit because of an error return from wait(): no child
was ever forked.) If it really saves a ton of time, maybe it's worth the
extra maintenance effort, but I think even the tiniest of environments
will still benefit from two jobs.

> +/* sadly C forces us to do this with exit, long_jump or returning error
> + * form work_spawn and work_sync. We could throw a C++ exception, is it
> + * worth doing it to avoid the exit here.

I think we ought to avoid C++ exceptions entirely.

> + *
> + * atm not all resources maybe cleanedup at exit
> + */
> +int last_error = 0;
> +void handle_work_result(int retval)
> +{
> +     if (retval) {
> +             last_error = retval;
> +             if (abort_on_error)
> +                     exit(last_error);
> +     }
> +}
> +
> +static void setup_parallel_compile(void)
> +{
> +     /* jobs_count and paralell_max set by default, config or args */
> +     long n = sysconf(_SC_NPROCESSORS_ONLN);
> +     if (jobs_count == JOBS_AUTO)
> +             jobs_count = n;
> +     if (jobs_max == JOBS_AUTO)
> +             jobs_max = n;
> +     /* jobs_max will be used internally */
> +     jobs_max = min(jobs_count, jobs_max);

I think we should also have jobs_max = max(1, jobs_max); to ensure that
negative numbers don't sneak through somewhere.

> +     njobs = 0;
> +     if (debug_jobs)
> +             fprintf(stderr, "MAX jobs: %ld\n", jobs_max);
> +}
> +

Thanks


-- 
AppArmor mailing list
AppArmor@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/apparmor
