See: https://blog.golang.org/pipelines (in which a Go pipeline computes
MD5 sums of the files under a directory)

A machine K has:

- M total memory in bytes
- C number of (logical) CPUs

and a workload W consisting of MD5'ing some files under directory d/:

- N total number of files
- m bytes per file (all files are of the same length)

Disk input is not a concern.

Goal: minimise the time from start of W to end of W in these 2 cases:

A) W is the only workload on K
B) W is one of Z different and opaque workloads on K

There are enough files in d/ that K could not keep all of them in memory at 
the same time.

How many goroutines should K spawn for W?

If not one per file (cardinality == N), how should I rate-{limit,optimise} 
fan-out?
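
For concreteness, here is a minimal sketch of the bounded fan-out I
have in mind, assuming the CPU-bound case A: worker cardinality is
capped at C (runtime.NumCPU()) rather than N, and each file is
streamed through the hash so memory stays at roughly one buffer per
worker (names like hashFile and the directory "d" are illustrative):

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
	"runtime"
	"sync"
)

// hashFile streams one file through MD5, so memory per worker is a
// small copy buffer rather than m bytes.
func hashFile(path string) ([md5.Size]byte, error) {
	var sum [md5.Size]byte
	f, err := os.Open(path)
	if err != nil {
		return sum, err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return sum, err
	}
	copy(sum[:], h.Sum(nil))
	return sum, nil
}

func main() {
	paths := make(chan string)
	go func() {
		defer close(paths)
		filepath.WalkDir("d", func(p string, d fs.DirEntry, err error) error {
			if err == nil && d.Type().IsRegular() {
				paths <- p
			}
			return err
		})
	}()

	var wg sync.WaitGroup
	for i := 0; i < runtime.NumCPU(); i++ { // cardinality == C, not N
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range paths {
				sum, err := hashFile(p)
				if err != nil {
					fmt.Fprintln(os.Stderr, p, err)
					continue
				}
				fmt.Printf("%x  %s\n", sum, p)
			}
		}()
	}
	wg.Wait()
}

That caps concurrency at C for case A; for case B the right cap
presumably drops below C, which is part of what I am asking.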

Is there a way to pass parameters to some optimisation function that,
given a scarce resource R (say: memory) and a lambda applied to each
item to calculate its cost k with respect to R, would schedule work to
keep concurrent use of R below a threshold T? (e.g. one large file
completes with k=large, making room for two smaller ones with k=small
each, keeping total R usage < T while achieving the best "bin-packing"
of R below T possible at that time given the pending work?)
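
The closest thing I have found is golang.org/x/sync/semaphore, whose
Weighted type at least enforces the "keep concurrent use of R below T"
half. A sketch, assuming R is memory, the cost lambda is just file
size, and T is an illustrative 1 GiB:

package main

import (
	"context"
	"fmt"
	"os"

	"golang.org/x/sync/errgroup"
	"golang.org/x/sync/semaphore"
)

// T is the threshold on R (here: bytes held at once). Illustrative
// value only.
const T = 1 << 30

// cost is the per-item lambda: item -> k with respect to R. Here k is
// simply the file's size.
func cost(path string) (int64, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return 0, err
	}
	return fi.Size(), nil
}

func process(ctx context.Context, paths []string) error {
	sem := semaphore.NewWeighted(T)
	g, ctx := errgroup.WithContext(ctx)
	for _, p := range paths {
		p := p
		k, err := cost(p)
		if err != nil {
			return err
		}
		if k > T {
			k = T // items costing more than T must be clamped or rejected
		}
		// Blocks until k units of R are free, so total in-flight
		// usage never exceeds T.
		if err := sem.Acquire(ctx, k); err != nil {
			return err
		}
		g.Go(func() error {
			defer sem.Release(k)
			fmt.Println("hashing", p) // stand-in for the MD5 work
			return nil
		})
	}
	return g.Wait()
}

func main() {
	if err := process(context.Background(), os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

But as far as I can tell Weighted admits waiters in FIFO order, so
while it guarantees usage stays <= T, it will not reorder pending
items for the best fit: a small item queued behind a large waiter
waits too. The bin-packing part seems to need a custom scheduler over
the pending set.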
