I just swapped a machine to death by starting 1 job per CPU on a 48-core machine. The problem was that each job took more than 1/48th of the memory.
That got me thinking: Would it make sense to have a setting in GNU Parallel that automatically runs 'ulimit' with the relevant amount of memory, so that if you ask for X jobs to be run on a given server, each job is only allowed 1/X'th of the machine's memory? If a job takes more than that, it gets an out-of-memory error when it tries to allocate more.

I am pretty sure it makes sense to have that. But would it also make sense to make it the default (with an option to override it)? Or would that be an unpleasant surprise? In most situations it will not make a difference, so I am interested in which you anticipate more: the surprise of having your big job killed, or the happiness of not swapping the machine to death?

/Ole
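P.S. For concreteness, here is a rough sketch of what such a limit could look like done by hand today. The choice of 'ulimit -v', the even 48-way split, and 'myjob' are just for illustration:

  # Give each of 48 jobs at most 1/48th of the machine's RAM.
  # /proc/meminfo and ulimit -v both use KiB, so no unit conversion needed.
  total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  per_job_kib=$(( total_kib / 48 ))
  parallel -j48 "bash -c 'ulimit -v $per_job_kib; myjob {}'" ::: input/*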
