Hi from Beautiful Brittany. I was brought up in the era of IBM 360/370 mainframe computers, where the operating systems used dynamic CPU allocation based upon each program's CPU requirements. To the novice, in that era, it seemed strange to allocate CPU time in an "apparent" reverse order of logic, i.e. input/output-bound programs had a higher priority than CPU-bound programs. This worked well for several decades, based upon the simple precept that the highest-priority program would rapidly go into a wait state while the I/O device did its transfer relatively slowly, thus allowing the lower-priority programs to sponge up the maximum CPU time for their nitty-gritty CPU-bound activities. It worked well!
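For anyone who never met that style of dispatcher, here is a tiny Python sketch of the precept (purely illustrative, a toy simulation rather than any real OS code): the I/O-bound job is dispatched first whenever it is ready, but because it blocks almost immediately on its (made-up) I/O wait, the low-priority CPU-bound job still ends up with nearly all of the CPU.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int          # higher number = dispatched first
    cpu_burst: int         # ticks of CPU used before blocking on I/O
    io_wait: int           # ticks spent waiting for the I/O device (0 = never blocks)
    blocked_until: int = 0
    cpu_used: int = 0

def run(jobs, total_ticks=1000):
    """Each tick, dispatch the highest-priority job that is not waiting on I/O."""
    for tick in range(total_ticks):
        ready = [j for j in jobs if j.blocked_until <= tick]
        if not ready:
            continue                      # CPU idles while every job waits on I/O
        job = max(ready, key=lambda j: j.priority)
        job.cpu_used += 1
        # After its short burst, the I/O-bound job goes into a wait state.
        if job.io_wait and job.cpu_used % job.cpu_burst == 0:
            job.blocked_until = tick + job.io_wait
    for j in jobs:
        print(f"{j.name}: {j.cpu_used} ticks "
              f"({100 * j.cpu_used / total_ticks:.0f}% of CPU)")

run([
    Job("I/O-bound (high priority)", priority=10, cpu_burst=1, io_wait=20),
    Job("CPU-bound (low priority)",  priority=1,  cpu_burst=0, io_wait=0),
])
```

With those (invented) numbers the high-priority I/O-bound job gets only about 5% of the CPU and the low-priority CPU-bound job soaks up the rest, which is exactly the effect described above.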
Now we come to today's problem. On my latest Mac computers, I launch long-duration data transfers (of many GB; no longer I/O-bound because of the fantastic data-transfer speeds) and find that my request to do a simple Google lookup now takes eons to execute, because of the extremely high data-transfer rates (and low wait time) of the transfer functions. I've lost touch with operating systems over the years, but I wonder if anybody out there has any knowledge of CPU allocation techniques on the latest micro-computers with their gigabit data-transfer rates. It may be round-robin techniques, as with the old IBM MFT systems, but it is giving me pains (as it is now, I never risk launching a long-running disk copy if I want to get back into my computer quickly). Just a tad perplexed…

-Francis

"Nothing should ever be done for the first time!"