> Ugh... yes, but not with an 80386, i486, Pentium, Pentium-MMX,
> 5x86, Crusoe, WinChip, K6, K6-2, or 6x86. Also not with XT disks
> or anything off the EISA, VLB, and MCA busses.
Lots of people are building terabyte sized arrays on K6 type boxes. A PII
or Athlon is just overkill for the job.
Alan
Rik van Riel writes:
> On Mon, 4 Sep 2000, Stephen C. Tweedie wrote:
>> On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
>>> With all the talk about bugs and slowness on a 386/486/586
>>> -- does anyone think those platforms will have multi-T disks
>>> hooked up to them?
Note: no "68
On Mon, 4 Sep 2000, Stephen C. Tweedie wrote:
> On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
>
> > With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> > think those platforms will have multi-T disks hooked up to them?
>
> Yes.  They are already doing it, and the number of people trying is
> growing rapidly. …
Hi,
On Thu, Aug 31, 2000 at 05:59:09PM -0400, Richard B. Johnson wrote:
> Long long things, even if they work well, are not very nice on 32 bit
> machines. For the time being, I'd advise increasing cluster size rather
> than using 64 bit values.
Doesn't help, because we're talking about numbers
Hi,
On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
> With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> think those platforms will have multi-T disks hooked up to them?
Yes. They are already doing it, and the number of people trying is
growing rapidly. I
Hi,
On Fri, Sep 01, 2000 at 01:30:26PM +0300, Matti Aarnio wrote:
>
> Stephen, could you have a moment to look at the struct buffer_head {}
> alignment matters ? And possible configure time change to make the
> block number possibly a 'long long' variable ?
> Changing field order might be doable now, while I definitely think that
> changing blocknumber vari…
Hi,
On Fri, Sep 01, 2000 at 12:09:23AM -0700, Linda Walsh wrote:
> Perhaps an "easy" way to go would be to convert block numbers to
> type "block_nr_t", then one could measure the difference that 10's of
> nanoseconds make against seeks and reads of disk data.
You might not find it just taki
> >
> > Tsk. Showing my age and ignorance, I guess. I was using the gcc -v trick back
> > at Auspex in '93. ...Guess the compiler driver has gotten smarter since.
> > You know how it goes- you do a trick once- you don't change it for years...
>
> According to the ChangeLog of the 2.7.2.3 compiler …
On Fri, Sep 01, 2000 at 12:01:39PM -0700, Matthew Jacob wrote:
> Or use --print-libgcc-file-name:
>
> `gcc --print-libgcc-file-name`
>
> where are the options normally used to compile code (ie, for example
> on machines that optionally do not have a floating point unit, adding
> -msoft-float would select the libgcc.a that was compiled with -msoft-float).
On Fri, Sep 01, 2000 at 10:34:19AM -0700, Matthew Jacob wrote:
> > On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > > So what do you propose to use when a long long division is needed (after
> > > much thought and considering all alternatives etc.etc.) ?
> >
> > Link against libgcc, what else?
On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> The previous analysis was not quite right though (%cl is actually loaded,
> just %eax gets bogus input from the long long shift)
Perhaps, but it's sure not obvious:
bh->b_blocknr = (long)mp->pbm_bn +
(mp->pbm_
> On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > So what do you propose to use when a long long division is needed (after
> > much thought and considering all alternatives etc.etc.) ?
>
> Link against libgcc, what else?
As also does anyone who does Solaris drivers (x86 or sparc).
> 41-bit filesize should be enough for the 32-bit machines.
>
> By the time people start using >41-bit files, don't you think
> they'll have an AMD-64, PPC or Merced CPU to handle the bigger
> file sizes?
Nope
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> So what do you propose to use when a long long division is needed (after
> much thought and considering all alternatives etc.etc.) ?
Link against libgcc, what else?
We should have been doing that since the beginning instead of
making
On Fri, Sep 01, 2000 at 12:09:23AM -0700, Linda Walsh wrote:
> Perhaps an "easy" way to go would be to convert block numbers to
> type "block_nr_t", then one could measure the difference that 10's of
> nanoseconds make against seeks and reads of disk data.
That's not the issue. The issue is t
On Fri, 1 Sep 2000, Alan Cox wrote:
> > With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> > think those platforms will have multi-T disks hooked up to them?
>
> Yes. The poor handling of 64bit numbers hasn't gone away on
> PentiumII or Athlon as far as I can tell.
41-bit filesize should be enough for the 32-bit machines. …
On Fri, Sep 01, 2000 at 09:16:23AM -0700, Linda Walsh wrote:
> With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> think those platforms will have multi-T disks hooked up to them?
Probably not.
...
> If you changed all the block number definitions to use 'block_nr_t' …
> With all the talk about bugs and slowness on a 386/486/586 -- does anyone
> think those platforms will have multi-T disks hooked up to them?
Yes. The poor handling of 64bit numbers hasn't gone away on PentiumII or Athlon
as far as I can tell.
With all the talk about bugs and slowness on a 386/486/586 -- does anyone
think those platforms will have multi-T disks hooked up to them?
Now bugs in the compiler are a problem, but at some point in the future, one
would hope we could move to a compiler that can handle division w/no
problems.
On Fri, 1 Sep 2000, Matti Aarnio wrote:
> On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > > ( Not 'unsigned long long' )
> >
> > The shift on pbm_offset operates on long long.
>
> Uh, somehow I thought the reference was about bh->b_blocknr;
> Ok, never mind.
On Fri, Sep 01, 2000 at 03:15:27PM +0200, Andi Kleen wrote:
> > ( Not 'unsigned long long' )
>
> The shift on pbm_offset operates on long long.
Uh, somehow I thought the reference was about bh->b_blocknr;
Ok, never mind.
> The previous analysis was not quite right though (%cl is actually loaded,
> just %eax gets bogus input from the long long shift)
On Fri, Sep 01, 2000 at 04:01:43PM +0300, Matti Aarnio wrote:
> On Fri, Sep 01, 2000 at 02:44:04PM +0200, Andi Kleen wrote:
> > > To my knowledge it's only been speed related issues, not
> > > correctness issues, that have been the cause for the
> > > fear and loathing of long long.
> >
> > There are several parts of XFS which do not compile correctly with gcc
> > 2.95.2, …
On Fri, Sep 01, 2000 at 02:44:04PM +0200, Andi Kleen wrote:
> > To my knowledge it's only been speed related issues, not
> > correctness issues, that have been the cause for the
> > fear and loathing of long long.
>
> There are several parts of XFS which do not compile correctly with gcc
> 2.95.2,
On Thu, Aug 31, 2000 at 09:50:35PM -0700, Richard Henderson wrote:
> On Fri, Sep 01, 2000 at 12:16:38AM +0300, Matti Aarnio wrote:
> > Also (I recall) because GCC's 'long long' related operations
> > and optimizations have been buggy in past, and there is no
> > sufficient experience to convince him that they work now better
> > with the recommended …
On Fri, 1 Sep 2000, Linda Walsh wrote:
> Perhaps an "easy" way to go would be to convert block numbers to
> type "block_nr_t", then one could measure the difference that 10's of
> nanoseconds make against seeks and reads of disk data.
True for DOS.
On Linux, most file operations are done in RAM …
Stephen, could you have a moment to look at the struct buffer_head {}
alignment matters ? And possible configure time change to make the
block number possibly a 'long long' variable ?
Changing field order might be doable now, while I definitely think that
changing blocknumber vari…
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Richard
> Henderson
> Sent: Thursday, August 31, 2000 9:51 PM
> To: Matti Aarnio
>
On Fri, Sep 01, 2000 at 12:16:38AM +0300, Matti Aarnio wrote:
> Also (I recall) because GCC's 'long long' related operations
> and optimizations have been buggy in past, and there is no
> sufficient experience to convince him that they work now better
> with the recommended
> And the below is what percentage of time doing disk i/o?
but most file operations don't do physical IO.
> > it again! It doesn't scale well. The long long code is nearly 10 times
> > slower! You can do `gcc -S -o xxx name.c` and see why.
It's silly to talk about unoptimized code. And to spur…
And the below is what percentage of time doing disk i/o?
> Just put this in a loop and time it. Change SIZE to long long, and do
> it again! It doesn't scale well. The long long code is nearly 10 times
> slower! You can do `gcc -S -o xxx name.c` and see why.
>
>
> #define SIZE long
>
> SIZE
On Thu, 31 Aug 2000, Linda Walsh wrote:
> > It is propably from reasoning of:
> >
> > "there is really no point in it, as at 32bit systems
> > int and long are same size, thus same limit comes
> > with both types."
> >
> > At 64-bit machines ther
On Thu, Aug 31, 2000 at 01:46:36PM -0700, Linda Walsh wrote:
> > It is propably from reasoning of:
> >
> > "there is really no point in it, as at 32bit systems
> > int and long are same size, thus same limit comes
> > with both types."
> >
> > At
> It is propably from reasoning of:
>
> "there is really no point in it, as at 32bit systems
>int and long are same size, thus same limit comes
>with both types."
>
> At 64-bit machines there is, of course, definite difference.
---
> Some underlying block device subsystems can address that
> currently, some others have inherent 512 byte "page_size"
> with signed indexes... I think SCSI is in the first camp,
> while IDE is in second. (And Ingo has assured us that RAID
> code should handle this …
On Thu, Aug 31, 2000 at 01:13:09PM -0700, Linda Walsh wrote:
> Ooops... the time frame is closer to today on part of this.
> While it may be a while before we hit the 1T limit on 1 single device,
> things like readpage do so based off the inode -- which on a metadisk
> could have a filesize much larger than current physical device limits. …
Ooops... the time frame is closer to today on part of this. While it may
be a while before we hit the 1T limit on 1 single device, things like
readpage do so based off the inode -- which on a metadisk could have a
filesize much larger than current physical device limits. So it seems
that at least …