Hi,
On Tue, May 15, 2001 at 04:37:01PM +1200, Chris Wedgwood wrote:
> On Sun, May 13, 2001 at 08:39:23PM -0600, Richard Gooch wrote:
>
> Yeah, we need a decent unfragmenter. We can do that now with
> bmap().
>
> SCT wrote a defragger for ext2 but it only handles 1k blocks :(
Actually,
Hi,
On Fri, May 18, 2001 at 09:55:14AM +0200, Rogier Wolff wrote:
> The "boot quickly" was an example. "Load netscape quickly" on some
> systems is done by dd-ing the binary to /dev/null.
This is one of the reasons why some filesystems use extent maps
instead of inode indirection trees. The
Hi,
On Sat, May 19, 2001 at 12:47:15PM -0700, Linus Torvalds wrote:
>
> On Sat, 19 May 2001, Pavel Machek wrote:
> >
> > > Don't get _too_ hung up about the power-management kind of "invisible
> > > suspend/resume" sequence where you resume the whole kernel state.
> >
> > Ugh. Now I'm
> I'm confused. I've always wondered that before you suspend the state
> of a machine to disk, why we just don't throw away unnecessary data
> like anything not actively referenced.
swsusp does exactly that.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of
On Sat, 19 May 2001, Pavel Machek wrote:
>
> > Don't get _too_ hung up about the power-management kind of "invisible
> > suspend/resume" sequence where you resume the whole kernel state.
>
> Ugh. Now I'm confused. How do you do a useful resume from disk when you
> don't restore the complete state?
On Tue, 15 May 2001, Pavel Machek wrote:
>
> resume from disk is actually pretty hard to do in a way that it is read linearly.
>
> While playing with swsusp patches (== suspend to disk) I found out that
> it was slow. It needs to do an atomic snapshot, and the only reasonable
> way to do that is to free
Hi!
resume from disk is actually pretty hard to do in a way that it is read linearly.
While playing with swsusp patches (== suspend to disk) I found out that
it was slow. It needs to do an atomic snapshot, and the only reasonable
way to do that is to free half of RAM, cli() and copy.
Note that
Linus Torvalds wrote:
> I'm really serious about doing "resume from disk". If you want a fast
> boot, I will bet you a dollar that you cannot do it faster than by loading
> a contiguous image of several megabytes contiguously into memory. There is
> NO overhead, you're pretty much guaranteed
Hi!
> Besides, just how often do you reboot the box? If that's the hotspot for
> you - when the hell does the poor beast find time to do something useful?
Ten times a day?
But booting is special case: You can read your mail while compiling kernel,
but try to read your mail while your machine
Hi!
> And because your suspend/resume idea isn't really going to help me
> much. That's because my boot scripts have the notion of
> "personalities" (change the boot configuration by asking the user
> early on in the boot process). If I suspend after I've got XDM
> running, it's too late.
Why
Hi!
> I'm really serious about doing "resume from disk". If you want a fast
> boot, I will bet you a dollar that you cannot do it faster than by loading
> a contiguous image of several megabytes contiguously into memory. There is
> NO overhead, you're pretty much guaranteed platter speeds, and
Anton Altaparmakov wrote:
>
> True, but I was under the impression that Linus' master plan was that the
> two would be in entirely separate name spaces using separate cached copies
> of the device blocks.
>
Nothing was said about the superblock at all.
-hpa
--
<[EMAIL PROTECTED]> at
At 02:30 16/05/2001, H. Peter Anvin wrote:
>Anton Altaparmakov wrote:
> > And how are you thinking of this working "without introducing new
> > interfaces" if the caches are indeed incoherent? Please correct me if I
> > understand wrong, but when two caches are incoherent, I thought it means
> >
Anton Altaparmakov wrote:
>
> And how are you thinking of this working "without introducing new
> interfaces" if the caches are indeed incoherent? Please correct me if I
> understand wrong, but when two caches are incoherent, I thought it means
> that the above _would_ screw up unless protected
At 23:35 15/05/2001, H. Peter Anvin wrote:
>"Albert D. Cahalan" wrote:
> > H. Peter Anvin writes:
> > > This would leave no way (without introducing new interfaces) to write,
> > > for example, the boot block on an ext2 filesystem. Note that the
> > > bootblock (defined as the first 1024 bytes)
"Albert D. Cahalan" wrote:
>
> H. Peter Anvin writes:
>
> > This would leave no way (without introducing new interfaces) to write,
> > for example, the boot block on an ext2 filesystem. Note that the
> > bootblock (defined as the first 1024 bytes) is not actually used by
> > the filesystem,
H. Peter Anvin writes:
> This would leave no way (without introducing new interfaces) to write,
> for example, the boot block on an ext2 filesystem. Note that the
> bootblock (defined as the first 1024 bytes) is not actually used by
> the filesystem, although depending on the block size it may
On Tue, May 15, 2001 at 02:02:29PM -0700, Linus Torvalds wrote:
> In article <[EMAIL PROTECTED]>,
> Alexander Viro <[EMAIL PROTECTED]> wrote:
> >On Tue, 15 May 2001, H. Peter Anvin wrote:
> >
> >> Alexander Viro wrote:
> >> > >
> >> > > None whatsoever. The one thing that matters is that noone
Alexander Viro wrote:
>
> void *.
>
> Look, methods of your address_space certainly know what the hell they
> are dealing with. Just as autofs_root_readdir() knows what inode->u.generic_ip
> really points to.
>
> Anybody else has no business to care about the contents of ->host.
>
Why do we
In article <[EMAIL PROTECTED]>,
Alexander Viro <[EMAIL PROTECTED]> wrote:
>>
>> How would you know what datatype it is? A union? Making "struct
>> block_device *" a "struct inode *" in a nonmounted filesystem? In a
>> devfs? (Seriously. Being able to do these kinds of data-structural
>>
On Tue, 15 May 2001, Alexander Viro wrote:
> On 15 May 2001, Kai Henningsen wrote:
>
> > [EMAIL PROTECTED] (Alexander Viro) wrote on 15.05.01 in
><[EMAIL PROTECTED]>:
> >
> > > ... and Multics had all access to files through equivalent of mmap()
> > > in 60s. "Segments" in ls(1) got that
In article <[EMAIL PROTECTED]>,
Alexander Viro <[EMAIL PROTECTED]> wrote:
>On Tue, 15 May 2001, H. Peter Anvin wrote:
>
>> Alexander Viro wrote:
>> > >
>> > > None whatsoever. The one thing that matters is that noone starts making
>> > > the assumption that mapping->host->i_mapping == mapping.
On 15 May 2001, Kai Henningsen wrote:
> [EMAIL PROTECTED] (Alexander Viro) wrote on 15.05.01 in
><[EMAIL PROTECTED]>:
>
> > ... and Multics had all access to files through equivalent of mmap()
> > in 60s. "Segments" in ls(1) got that name for a good reason.
>
> Where's something called
[EMAIL PROTECTED] (Alexander Viro) wrote on 15.05.01 in
<[EMAIL PROTECTED]>:
> ... and Multics had all access to files through equivalent of mmap()
> in 60s. "Segments" in ls(1) got that name for a good reason.
Where's something called "segments" connected with ls(1)? I can't seem to
find
On Tue, 15 May 2001, H. Peter Anvin wrote:
> Alexander Viro wrote:
> > >
> > > What else could it be, since it's a "struct inode *"? NULL?
> >
> > struct block_device *, for one thing. We'll have to do that as soon
> > as we do block devices in pagecache.
> >
>
> How would you know what
On Tue, 15 May 2001, H. Peter Anvin wrote:
> Alexander Viro wrote:
> > >
> > > None whatsoever. The one thing that matters is that noone starts making
> > > the assumption that mapping->host->i_mapping == mapping.
> >
> > One actually shouldn't assume that mapping->host is an inode.
> >
>
Alexander Viro wrote:
> >
> > What else could it be, since it's a "struct inode *"? NULL?
>
> struct block_device *, for one thing. We'll have to do that as soon
> as we do block devices in pagecache.
>
How would you know what datatype it is? A union? Making "struct
block_device *" a
On Tue, 15 May 2001, H. Peter Anvin wrote:
> Alexander Viro wrote:
> >
> > On 15 May 2001, H. Peter Anvin wrote:
> >
> > > isofs wouldn't be too bad as long as struct mapping:struct inode is a
> > > many-to-one mapping.
> >
> > Erm... What's wrong with
Alexander Viro wrote:
> >
> > None whatsoever. The one thing that matters is that noone starts making
> > the assumption that mapping->host->i_mapping == mapping.
>
> One actually shouldn't assume that mapping->host is an inode.
>
What else could it be, since it's a "struct inode *"? NULL?
Alexander Viro wrote:
>
> On 15 May 2001, H. Peter Anvin wrote:
>
> > isofs wouldn't be too bad as long as struct mapping:struct inode is a
> > many-to-one mapping.
>
> Erm... What's wrong with inode->u.isofs_i.my_very_own_address_space ?
>
None whatsoever. The one thing that matters is
On 15 May 2001, H. Peter Anvin wrote:
> isofs wouldn't be too bad as long as struct mapping:struct inode is a
> many-to-one mapping.
Erm... What's wrong with inode->u.isofs_i.my_very_own_address_space ?
Followup to: <[EMAIL PROTECTED]>
By author: Anton Altaparmakov <[EMAIL PROTECTED]>
In newsgroup: linux.dev.kernel
>
> They shouldn't, but maybe some stupid utility or a typo will do it creating
> two incoherent copies of the same block on the device. -> Bad Things can
> happen.
>
> Can't
Followup to: <[EMAIL PROTECTED]>
By author: Alexander Viro <[EMAIL PROTECTED]>
In newsgroup: linux.dev.kernel
>
> UNIX-like ones (and that includes QNX) are easy. HFS is hopeless - it won't
> be fixed unless authors will do it. Tigran will probably fix BFS just as a
> learning experience ;-)
On Tuesday, May 15, 2001 04:33:57 AM -0400 Alexander Viro
<[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 15 May 2001, Linus Torvalds wrote:
>
>> Looks like there are 19 filesystems that use the buffer cache right now:
>>
>> grep -l bread fs/*/*.c | cut -d/ -f2 | sort -u | wc
>>
>> So quite
On Tuesday 15 May 2001 12:44, Alexander Viro wrote:
> On Tue, 15 May 2001, Daniel Phillips wrote:
> > That's because you left out his invalidate:
> >
> > * create an instance in pagecache
> > * start reading into buffer cache (doesn't invalidate, right?)
> > * start writing using
On Tue, 15 May 2001, Daniel Phillips wrote:
> That's because you left out his invalidate:
>
> * create an instance in pagecache
> * start reading into buffer cache (doesn't invalidate, right?)
> * start writing using pagecache (invalidate buffer copy)
Bzzert. You have a
On Tuesday 15 May 2001 08:57, Alexander Viro wrote:
> On Tue, 15 May 2001, Richard Gooch wrote:
> > > What happens if you create a buffer cache entry? Does that
> > > invalidate the page cache one? Or do you just allow invalidates
> > > one way, and not the other? And why=
> >
> > I just figured
[EMAIL PROTECTED] said:
> JFFS - dunno.
Bah. JFFS doesn't use any of those horrible block device thingies.
--
dwmw2
At 08:13 15/05/01, Linus Torvalds wrote:
>On Tue, 15 May 2001, Richard Gooch wrote:
> > So what happens if I dd from the block device and also from a file on
> > the mounted FS, where that file overlaps the bnums I dd'ed? Do we get
> > two copies in the page cache? One for the block device
Alan Cox <[EMAIL PROTECTED]> writes:
> > Larry, go read up on TOPS-20. :-) SunOS did give unix mmap(), but it
> > did not come up the idea.
> Seems to be TOPS-10
> http://www.opost.com/dlm/tenex/fjcc72/
TENEX is not TOPS-10. TOPS-10 didn't get virtual memory until around
1974. By then,
On Tue, 15 May 2001, Linus Torvalds wrote:
> Looks like there are 19 filesystems that use the buffer cache right now:
>
> grep -l bread fs/*/*.c | cut -d/ -f2 | sort -u | wc
>
> So quite a bit of work involved.
UNIX-like ones (and that includes QNX) are easy. HFS is hopeless - it
On Tue, 15 May 2001, Chris Wedgwood wrote:
>
> On Tue, May 15, 2001 at 12:13:13AM -0700, Linus Torvalds wrote:
>
> We should not create crap code just because we _can_.
>
> How about removing code?
Absolutely. It's not all that often that we can do it, but when we can,
it's the best thing
On Tue, 15 May 2001, Richard Gooch wrote:
> >
> > What happens if you create a buffer cache entry? Does that
> > invalidate the page cache one? Or do you just allow invalidates one
> > way, and not the other? And why=
>
> I just figured on one way invalidates, because that seems cheap and
>
Linus Torvalds writes:
>
> On Tue, 15 May 2001, Richard Gooch wrote:
> >
> > However, what about simply invalidating an entry in the buffer cache
> > when you do a write from the page cache?
>
> And how do you do the invalidate the other way, pray tell?
>
> What happens if you create a buffer
On Tue, 15 May 2001, Richard Gooch wrote:
>
> However, what about simply invalidating an entry in the buffer cache
> when you do a write from the page cache?
And how do you do the invalidate the other way, pray tell?
What happens if you create a buffer cache entry? Does that invalidate the
Linus Torvalds writes:
> You could choose to do "partial coherency", ie be coherent only one
> way, for example. That would make the coherency overhead much less,
> but would also make the caches basically act very unpredictably -
> you might have somebody write through the page cache yet on a
Linus Torvalds writes:
>
> On Mon, 14 May 2001, Richard Gooch wrote:
> >
> > Is there some fundamental reason why a buffer cache can't ever be
> > fast?
>
> Yes.
>
> Or rather, there is a fundamental reason why we must NEVER EVER look at
> the buffer cache: it is not coherent with the page
On Mon, 14 May 2001, David S. Miller wrote:
>
> Larry McVoy writes:
> > Hell, that's the OS that gave us mmap, remember that?
>
> Larry, go read up on TOPS-20. :-) SunOS did give unix mmap(), but it
> did not come up the idea.
s/TOPS-20/Multics/
On Mon, 14 May 2001, Linus Torvalds wrote:
> The current page cache is completely non-coherent (with _anything_: it's
> not coherent with other files using a page cache because they have a
> different index, and it's not coherent with the buffer cache because that
> one isn't even in the same
Larry McVoy writes:
> Hell, that's the OS that gave us mmap, remember that?
Larry, go read up on TOPS-20. :-) SunOS did give unix mmap(), but it
did not come up the idea.
Later,
David S. Miller
[EMAIL PROTECTED]
On Mon, 14 May 2001, Larry McVoy wrote:
> Hell, that's the OS that gave us mmap, remember that?
"I got it from Agnes..."
Don't get me wrong, SunOS 4 was probably the nicest thing Sun had ever
released and I love it, but mmap(2) was _not_ the best of ideas. Files
as streams of bytes and