With all the talk about bugs and slowness on a 386/486/586 -- does anyone
think those platforms will have multi-T disks hooked up to them?

Now, bugs in the compiler are a problem, but at some point in the future,
one would hope we could move to a compiler that handles 64-bit division
without problems.

This is a forward-thinking problem.  As I wrote to someone else:

If you changed all the block number definitions to use 'block_nr_t'
instead of int *and* if block_nr_t is typedef'ed to be an int,
then there would be no performance impact, and the code would be
marginally clearer, with marginally better type checking.
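
Just to make the idea concrete, here's a minimal sketch (block_nr_t is
the only name from the above; the helper is made up for illustration):

    /* Today block_nr_t is plain int, so code generation is identical;
     * widening it later is a one-line change. */
    typedef int block_nr_t;    /* someday: typedef long long block_nr_t; */

    /* hypothetical helper using the typedef instead of bare int */
    static inline block_nr_t next_block(block_nr_t b)
    {
            return b + 1;
    }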

I'm just thinking about when T's become like today's G's, and P's (Peta)
become the next "really big size". 

There'd also have to be a #define for casting to an arithmetic type (int,
long int, or long long int) so that the compiler doesn't complain about
doing mathematical operations on a non-int.
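
Something like the following is what I mean -- the macro name is invented
here.  While block_nr_t is an int the cast is a no-op, and if the type
ever becomes wider (or even a struct), only this one #define changes:

    typedef int block_nr_t;

    /* hypothetical: the single point where block numbers
     * become something you can do arithmetic on */
    #define BLK_NR(b)       ((long long)(b))

    /* e.g. computing a byte offset without the compiler griping */
    long long block_to_offset(block_nr_t blk, int blocksize)
    {
            return BLK_NR(blk) * blocksize;
    }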

Once the above is in place, does anyone see a problem with moving the
defines into the arch-specific includes, making the type arch-specific?
Does gcc have bugs on platforms that have native 64-bit registers?
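
For the arch-specific part, I'm picturing something along these lines
(the file names and widths are purely illustrative):

    /* include/asm-i386/blk_types.h -- hypothetical file */
    typedef unsigned int block_nr_t;        /* 32-bit block numbers */

    /* include/asm-alpha/blk_types.h -- hypothetical file */
    typedef unsigned long block_nr_t;       /* long is 64 bits here */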

Second question -- within the ia32 domain, can it be selectable per CPU
type?  Say, only for CPU=686 or above?

Third, could it eventually be a compile-time option if desired? 
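
For the second and third questions together, the shape I imagine is a
config-driven typedef.  CONFIG_M686 is the usual ia32 processor-type
option; CONFIG_LARGE_BLOCK_NR is invented for this sketch:

    /* CONFIG_LARGE_BLOCK_NR is made up for illustration */
    #if defined(CONFIG_LARGE_BLOCK_NR) && defined(CONFIG_M686)
    typedef unsigned long long block_nr_t;  /* 64-bit blocks on 686+ */
    #else
    typedef unsigned int block_nr_t;        /* status quo */
    #endif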

Thanks...
-linda

--
Linda A Walsh                    | Trust Technology, Core Linux, SGI
[EMAIL PROTECTED]                      | Voice: (650) 933-5338                        
 
