Helmut Hartl wrote:

After years of multithreaded development for our communication appliance we found that Apple came to a solution similar to our own, and took the time to write it down :-)

http://developer.apple.com/mac/library/documentation/General/Conceptual/ConcurrencyProgrammingGuide/ThreadMigration/ThreadMigration.html#//apple_ref/doc/uid/TP40008091-CH105-SW1

Nice :-)

But another idea just came to mind, something like the PERT/Gantt diagrams used to optimize production plants. What if we instrument the current code with time measurements for all tasks that could be parallelized in the future? Then somebody could write an analysis program over those tasks and their measured times, so that we can learn more about the real opportunities and possible gains (a rough sketch of such an analysis follows the example below)...

From
  A uses B,C;
  B uses D,E;
  C uses D,F;
we get the sequential flow, writing e.g. A1/A2 for the processing of A's interface (1) and implementation (2):
  A1 B,                         C;                   A2
        B1 D,       E;       B2    C1 D, F;       C2
              D1 D2    E1 E2          <-    F1 F2
or flat
  A1 B, B1 D, D1 D2 E; E1 E2 C; C1 D, F; F1 F2 C2 A2
that could be parallelized into
  A1 B, B1 D, D1 E; E1 C; C1 D, F; F1
                +D2   +E2+B2         +F2+C2+A2
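
To make that analysis idea a bit more concrete, here is a minimal Free Pascal sketch. The task names, the dependency model and especially the millisecond costs are invented placeholders, not measurements - the real numbers would come from the proposed instrumentation. It assumes an interface only waits for the interfaces of the units it uses and an implementation only waits for its own interface, and from that computes the earliest possible finish time of every task with unlimited parallelism; the largest value is the critical path, i.e. the best makespan a parallel build could reach on this graph.

  program pertsketch;
  {$mode objfpc}{$H+}
  { Sketch of the proposed analysis: given the interface/implementation tasks,
    hypothetical compile times and the prerequisites implied by the uses
    clauses, compute the earliest finish time of every task with unlimited
    parallelism.  The largest value is the critical path of the build. }

  const
    NTasks = 12;
    { task order: A1 A2 B1 B2 C1 C2 D1 D2 E1 E2 F1 F2 }
    Names: array[0..NTasks-1] of string =
      ('A1','A2','B1','B2','C1','C2','D1','D2','E1','E2','F1','F2');
    { placeholder compile times in ms - to be replaced by instrumented values }
    Cost: array[0..NTasks-1] of Integer =
      (30, 50, 20, 40, 20, 40, 10, 30, 10, 30, 10, 30);
    { prerequisites: an interface waits for the interfaces of the used units,
      an implementation waits for its own interface; -1 = unused slot }
    Deps: array[0..NTasks-1, 0..1] of Integer =
      ((2, 4),    { A1 <- B1, C1 }
       (0, -1),   { A2 <- A1 }
       (6, 8),    { B1 <- D1, E1 }
       (2, -1),   { B2 <- B1 }
       (6, 10),   { C1 <- D1, F1 }
       (4, -1),   { C2 <- C1 }
       (-1, -1),  { D1 }
       (6, -1),   { D2 <- D1 }
       (-1, -1),  { E1 }
       (8, -1),   { E2 <- E1 }
       (-1, -1),  { F1 }
       (10, -1)); { F2 <- F1 }

  var
    Finish: array[0..NTasks-1] of Integer; { memoised earliest finish times }

  function EarliestFinish(i: Integer): Integer;
  var
    j, F, Start: Integer;
  begin
    if Finish[i] > 0 then Exit(Finish[i]);
    Start := 0;
    for j := 0 to 1 do
      if Deps[i, j] >= 0 then
      begin
        F := EarliestFinish(Deps[i, j]);
        if F > Start then Start := F;
      end;
    Finish[i] := Start + Cost[i];
    Result := Finish[i];
  end;

  var
    i, Total, Critical: Integer;
  begin
    Total := 0;
    Critical := 0;
    for i := 0 to NTasks - 1 do
    begin
      Total := Total + Cost[i];
      if EarliestFinish(i) > Critical then Critical := EarliestFinish(i);
      WriteLn(Names[i], ' finishes earliest at ', Finish[i], ' ms');
    end;
    WriteLn('sequential build:     ', Total, ' ms');
    WriteLn('critical path (best): ', Critical, ' ms');
  end.

With these placeholder times it prints a sequential build of 320 ms against a critical path of 110 ms; real instrumented times from an actual project would of course give quite different numbers.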

From the parallel schedule we can see that massive parallelism only occurs towards the end of the compilation, so that the determination of the shortest paths (to the first compilable unit) may be the key to the best performance, whereas restricting the number of parallel threads may result in performance degradation. But we will only know more after an analysis of real-life projects, with real times for the parallel parts.
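
As for the effect of capping the number of parallel threads, the same toy graph can be pushed through a trivial greedy scheduler that only ever runs P tasks at once, and its makespan compared against the critical path. Again this is just a sketch with the same placeholder numbers as above, not a proposal for real compiler code.

  program poolsim;
  {$mode objfpc}{$H+}
  { Greedy simulation of the same hypothetical task graph with a limited
    number of worker threads, to compare the resulting makespan against the
    critical path.  All numbers are placeholders, not measurements. }

  const
    NTasks = 12;
    { task order: A1 A2 B1 B2 C1 C2 D1 D2 E1 E2 F1 F2, costs in ms }
    Cost: array[0..NTasks-1] of Integer =
      (30, 50, 20, 40, 20, 40, 10, 30, 10, 30, 10, 30);
    { same prerequisites as in the previous sketch; -1 = unused slot }
    Deps: array[0..NTasks-1, 0..1] of Integer =
      ((2, 4), (0, -1), (6, 8), (2, -1), (6, 10), (4, -1),
       (-1, -1), (6, -1), (-1, -1), (8, -1), (-1, -1), (10, -1));

  { simulate P workers: start every ready task while a worker is free, then
    jump to the next completion time; returns the total build time }
  function Makespan(P: Integer): Integer;
  var
    FinishAt: array[0..NTasks-1] of Integer; { -1 = not started yet }
    Clock, Next, Running, i, j: Integer;
    Ready: Boolean;
  begin
    for i := 0 to NTasks - 1 do FinishAt[i] := -1;
    Clock := 0;
    repeat
      Running := 0;
      for i := 0 to NTasks - 1 do
        if FinishAt[i] > Clock then Inc(Running);
      for i := 0 to NTasks - 1 do
        if (FinishAt[i] < 0) and (Running < P) then
        begin
          Ready := True;
          for j := 0 to 1 do
            if (Deps[i, j] >= 0) and
               ((FinishAt[Deps[i, j]] < 0) or (FinishAt[Deps[i, j]] > Clock)) then
              Ready := False;
          if Ready then
          begin
            FinishAt[i] := Clock + Cost[i];
            Inc(Running);
          end;
        end;
      { advance the clock to the earliest still pending completion }
      Next := High(Integer);
      for i := 0 to NTasks - 1 do
        if (FinishAt[i] > Clock) and (FinishAt[i] < Next) then Next := FinishAt[i];
      if Next < High(Integer) then Clock := Next;
    until Next = High(Integer);
    Result := Clock;
  end;

  var
    P: Integer;
  begin
    for P := 1 to 4 do
      WriteLn(P, ' worker(s): makespan ', Makespan(P), ' ms');
  end.

With one worker this degenerates to the full sequential build time, and the makespan only reaches the critical path once the pool is wide enough for the few points where the graph actually fans out - which matches the observation that most of the parallelism in this example shows up late in the compilation.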

DoDi
