I must admit the hardest part of abandoning DOS, for me, is losing a whole lot of programs I love that will never be ported to Linux, and which I will never try to emulate by writing my own source code. I am no stranger to Unix, and am not afraid of it at all.
That said, I found some of the actual technical limits interesting and wondered how true they were:

> The 386 offers extreme power, and DOS is not able to use it.
> All later processors offered only speed improvements, but the 386 laid the
> basis for modern OSes.
> (486 integrated FPU and cache, and was slightly pipelined.
> 586 fully pipelined, and superscalar (can execute up to 2 commands per
> clock).
> 686 integrated 2nd level cache on the die, speculative execution of up to 3
> commands per clock.)

I get from this that inherent DOS limits prevent it from running fast hardware devices, things like USB ports and FireWire and so on, at their proper speed. I did not know that. It makes me wonder why some versions of DOS are so popular for use with embedded devices. Lineo, or whatever their name is now, sells version 7.3 of DR-DOS at quite a price as ``the perfect solution for your embedded devices''. The general story is how fast DOS can be at handling real-time software process control. There seems to be a conflict of claims here.

> There are some tries to evade DOS deficiencies.
> Eg XMS to address more memory.
> This works so, that you load a driver in DOS, which switches to protected
> mode (where you can address the RAM) and copies small parts to under 1 MB
> if needed.
> Naturally this is painfully slow (every mode switch is expensive, and the
> copying of the data is also slow)

``Painfully slow''? Really -- a few more fractions of a second for an application to do something? Does this really compare to the speed difference, on any kind of comparable hardware, between the elegant 1K DOS program written entirely in assembler and the similar winblows program -- with 7 directories, 16 DLL libraries, and 300 files -- that does the same thing? As a user (Unix has that quaint phrase ``user time'' versus CPU time, etc.), I have almost always found that a DOS program does a practical task faster and more stably than the winblows equivalent, certainly. I cannot speak to the Unix equivalent.
Obviously, one can simply port a lot of C code over to Unix.

> Eg 32 bit addressing.
> DJGPP creates 32 bit protected mode executables - GREAT ....
> _BUT_ DOS can't operate in protected mode, so these programs have to switch
> back to real mode for every IO operation, and for every DOS service they
> need. (read the DJGPP FAQ: Why are IO bound processes so slow)
> [PS: this is the same solution as win9X ... switch to PM, do your thing
> there, and call DOS functions in real mode if needed]

I never knew or thought about this. When DOS has to do some I/O-related thing for you -- back you go to real mode and 16-bit processing? Do some of the newer DOSes, such as the DPMI mode of DR-DOS, FreeDOS, etc., overcome this limitation?

Howard Schwartz
----------------------------------------
howardbschwartz "at" california.com