On Wednesday 06 March 2019 20:44:54 Steve Magnani wrote:
> (Please remove at least LKML when responding. Mailing lists are a
> scattershot attempt to reach others who might be interested in this topic
> since I'm not aware of any linux-udf mailing list. )
IIRC there is no linux-udf mailing list, b
(Please remove at least LKML when responding. Mailing lists are a
scattershot attempt to reach others who might be interested in this
topic since I'm not aware of any linux-udf mailing list. )
A few months ago I stumbled across an interesting bit of abandonware in
the Sourceforge CVS repo that
While traversing dirents in f2fs_fill_dentries(), if the bitmap bit is valid,
the filename length should not be zero; otherwise the directory structure
may be corrupted. In that case, print the related info and set
SBI_NEED_FSCK to trigger fsck for repair.
Signed-off-by: Chao Yu
---
fs
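A minimal userspace sketch of the check described in the commit message above,
not the actual f2fs patch (which modifies f2fs_fill_dentries() in fs/f2fs/dir.c
and calls set_sbi_flag(sbi, SBI_NEED_FSCK)); the structures and names below are
simplified stand-ins:

/* Simplified stand-in for the f2fs dentry-block walk: a slot whose bitmap
 * bit is set must carry a non-zero name length, otherwise the directory is
 * inconsistent and an offline fsck should be requested. */
#include <stdbool.h>
#include <stdio.h>

#define NR_DENTRY_SLOTS 8

struct fake_dirent {
	unsigned char name_len;          /* name length stored in this slot */
};

static bool need_fsck;                   /* stand-in for SBI_NEED_FSCK */

static void fill_dentries(const unsigned char *bitmap,
			  const struct fake_dirent *de)
{
	for (int bit = 0; bit < NR_DENTRY_SLOTS; bit++) {
		if (!(bitmap[bit / 8] & (1 << (bit % 8))))
			continue;        /* slot not in use */

		if (de[bit].name_len == 0) {
			/* Valid bitmap bit but empty name: report and ask for fsck. */
			fprintf(stderr,
				"invalid namelen 0 in slot %d, run fsck to repair\n",
				bit);
			need_fsck = true;
			continue;
		}
		printf("slot %d: name_len=%u\n", bit, (unsigned)de[bit].name_len);
	}
}

int main(void)
{
	unsigned char bitmap[1] = { 0x07 };          /* slots 0, 1, 2 valid */
	struct fake_dirent de[NR_DENTRY_SLOTS] = {
		{ .name_len = 3 }, { .name_len = 0 }, { .name_len = 5 },
	};

	fill_dentries(bitmap, de);
	if (need_fsck)
		fprintf(stderr, "SBI_NEED_FSCK would be set here\n");
	return 0;
}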
On 2018/12/13 11:56, Jaegeuk Kim wrote:
> On 12/07, Chao Yu wrote:
>> On 2018/12/1 4:36, Jaegeuk Kim wrote:
>>> On 11/30, Chao Yu wrote:
>>>> On 2018/11/29 9:52, Jaegeuk Kim wrote:
>>>>> This adds an option in ioctl(F2FS_IOC_SHUTDOWN) in order to tri
On 12/07, Chao Yu wrote:
> On 2018/12/1 4:36, Jaegeuk Kim wrote:
> > On 11/30, Chao Yu wrote:
> >> On 2018/11/29 9:52, Jaegeuk Kim wrote:
> >>> This adds an option in ioctl(F2FS_IOC_SHUTDOWN) in order to trigger fsck
> >>> by
> >>> setting a
On 2018/12/1 4:36, Jaegeuk Kim wrote:
> On 11/30, Chao Yu wrote:
>> On 2018/11/29 9:52, Jaegeuk Kim wrote:
>>> This adds an option in ioctl(F2FS_IOC_SHUTDOWN) in order to trigger fsck by
>>> setting a NEED_FSCK flag.
>>>
>>> Signed-off-by: Jaegeuk K
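For context, invoking the new shutdown mode from userspace would look roughly
like the sketch below; the ioctl definition and the F2FS_GOING_DOWN_NEED_FSCK
value are assumptions copied from the f2fs headers of that era and should be
checked against the kernel actually in use:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <linux/types.h>

/* Assumed values, mirroring fs/f2fs/f2fs.h; verify before relying on them. */
#define F2FS_IOCTL_MAGIC		0xf5
#define F2FS_IOC_SHUTDOWN		_IOR(F2FS_IOCTL_MAGIC, 13, __u32)
#define F2FS_GOING_DOWN_NEED_FSCK	0x4	/* the new NEED_FSCK mode */

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <f2fs mountpoint>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	__u32 mode = F2FS_GOING_DOWN_NEED_FSCK;
	/* Shut the filesystem down and tag it so fsck.f2fs repairs it later. */
	if (ioctl(fd, F2FS_IOC_SHUTDOWN, &mode) < 0) {
		perror("F2FS_IOC_SHUTDOWN");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}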
, Jaegeuk Kim wrote:
>>>>>> On 11/28, Chao Yu wrote:
>>>>>>> Hi Jaegeuk,
>>>>>>>
>>>>>>> On 2018/11/28 15:31, Jaegeuk Kim wrote:
>>>>>>>> If we want to re-enable nat_bits, we rely on fsck which
wrote:
> >>>>> Hi Jaegeuk,
> >>>>>
> >>>>> On 2018/11/28 15:31, Jaegeuk Kim wrote:
> >>>>>> If we want to re-enable nat_bits, we rely on fsck which requires full
> >>>>>> scan
> >>>
line or mount without
it without losing data), as well as replication aware disk space accounting
and rereplication/scrubbing. But it's coming.
Most of the activity lately has actually been happening in the userspace
tooling, though:
We now have a userspace fsck: we've actually had m
On 08.02.2016, David Niklas wrote:
> Alas, my beautiful fs has become damaged and fsck does nothing, I think
> it's a nop.
> What is wrong, something in the btree, the original message was in
> syslog but it seems to have rotated, I could tell you but I'd have to
> caus
Hi David,
On Mon, 2016-02-08 at 13:53 -0500, David Niklas wrote:
> Alas, my beautiful fs has become damaged and fsck does nothing, I think
> it's a nop.
> What is wrong, something in the btree, the original message was in
> syslog but it seems to have rotated, I could tell yo
Alas, my beautiful fs has become damaged and fsck does nothing; I think
it's a nop.
Something is wrong, possibly in the btree. The original message was in
syslog but it seems to have rotated; I could tell you, but I'd have to
cause my kernel to remount my home dir RO, which is not acceptab
Hello Kernel list!
On two machines, for one very specific repository, the SHA1 implementation
used by git-fsck and git-show fails in 9 out of 10 cases for a specific 39MB blob.
This only occurs on vanilla Linux kernels 3.7.10, 3.8.0 (Ubuntu),
3.9.11, and 3.10.5, _but not on_ 3.6.11 and 3.5.7.
For details please refer to
On Tue, May 28, 2013 at 12:12:30PM -0400, Dave Jones wrote:
> box crashed, and needed rebooting. On next bootup, when it found the dirty
> partition,
> xfs chose to spew and then hang instead of replaying the journal and mounting
> :(
>
> Dave
Dave kindly provided me with a metadump of th
On Tue, May 28, 2013 at 05:41:37PM -0400, Dave Jones wrote:
> On Wed, May 29, 2013 at 07:32:48AM +1000, Dave Chinner wrote:
> > On Tue, May 28, 2013 at 05:15:44PM -0400, Dave Jones wrote:
> > > On Wed, May 29, 2013 at 07:10:12AM +1000, Dave Chinner wrote:
> > > > On Tue, May 28, 2013 at 12:12:3
On Wed, May 29, 2013 at 07:32:48AM +1000, Dave Chinner wrote:
> On Tue, May 28, 2013 at 05:15:44PM -0400, Dave Jones wrote:
> > On Wed, May 29, 2013 at 07:10:12AM +1000, Dave Chinner wrote:
> > > On Tue, May 28, 2013 at 12:12:30PM -0400, Dave Jones wrote:
> > > > box crashed, and needed reboo
On Tue, May 28, 2013 at 05:15:44PM -0400, Dave Jones wrote:
> On Wed, May 29, 2013 at 07:10:12AM +1000, Dave Chinner wrote:
> > On Tue, May 28, 2013 at 12:12:30PM -0400, Dave Jones wrote:
> > > box crashed, and needed rebooting. On next bootup, when it found the
> dirty partition,
> > > xfs cho
On Wed, May 29, 2013 at 07:10:12AM +1000, Dave Chinner wrote:
> On Tue, May 28, 2013 at 12:12:30PM -0400, Dave Jones wrote:
> > box crashed, and needed rebooting. On next bootup, when it found the dirty
> > partition,
> > xfs chose to spew and then hang instead of replaying the journal and
>
On Tue, May 28, 2013 at 12:12:30PM -0400, Dave Jones wrote:
> box crashed, and needed rebooting. On next bootup, when it found the dirty
> partition,
> xfs chose to spew and then hang instead of replaying the journal and mounting
> :(
>
> Dave
>
> [ 14.694731] SGI XFS with ACLs, securit
box crashed, and needed rebooting. On next bootup, when it found the dirty
partition,
xfs chose to spew and then hang instead of replaying the journal and mounting :(
Dave
[ 14.694731] SGI XFS with ACLs, security attributes, realtime, large
block/inode numbers, debug enabled
[ 14.72
Hi Dave,
Thank you for your insightful post. The answer to the riddle is that
the PHTree scheme as described in the link you cited has already
become "last gen" and that, after roughly ten years of searching, I am
cautiously optimistic that I have discovered a satisfactory next gen
indexing scheme
On Wed, Mar 20, 2013 at 06:49:49PM -0700, Daniel Phillips wrote:
> On Tue, Mar 19, 2013 at 11:54 PM, Rob Landley wrote:
> > I'm confused, http://tux3.org/ lists a bunch of dates from 5 years ago, then
> > nothing. Is this project dead or not?
>
> Not. We haven't done much about updating tux3.org
On Tue, Mar 19, 2013 at 11:54 PM, Rob Landley wrote:
> I'm confused, http://tux3.org/ lists a bunch of dates from 5 years ago, then
> nothing. Is this project dead or not?
Not. We haven't done much about updating tux3.org lately, however you
will find plenty of activity here:
https://github
On 03/19/2013 06:00:32 PM, Martin Steigerwald wrote:
On Tuesday, 29 January 2013, Daniel Phillips wrote:
> On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o
wrote:
> > On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:
> >> On Mon, Jan 28, 2013 at 03:27:38PM -0800, David Lang wro
On Wednesday, 20 March 2013, David Lang wrote:
> On Wed, 20 Mar 2013, Martin Steigerwald wrote:
> > On Tuesday, 29 January 2013, Daniel Phillips wrote:
> >> On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o wrote:
> >>> On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:
> On Mon,
On Wed, 20 Mar 2013, Martin Steigerwald wrote:
On Tuesday, 29 January 2013, Daniel Phillips wrote:
On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o wrote:
On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:
On Mon, Jan 28, 2013 at 03:27:38PM -0800, David Lang wrote:
The situati
On Tue, Mar 19, 2013 at 9:04 PM, David Lang wrote:
> On Wed, 20 Mar 2013, Martin Steigerwald wrote:
>
>> On Tuesday, 29 January 2013, Daniel Phillips wrote:
>>>
>>> On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o wrote:
On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:
>
On Tuesday, 29 January 2013, Daniel Phillips wrote:
> On Mon, Jan 28, 2013 at 5:40 PM, Theodore Ts'o wrote:
> > On Mon, Jan 28, 2013 at 04:20:11PM -0800, Darrick J. Wong wrote:
> >> On Mon, Jan 28, 2013 at 03:27:38PM -0800, David Lang wrote:
> >> > The situation I'm thinking of is when dealing w
t unique hashes,
but actually knowing which blocks are in files and which are not. Before
(a hypothetical) Tux3 fsck repair would be so bold as to reattach some lost
metadata to the place it thinks it belongs, all of the following would need
to be satisfied:
* The lost metadata subtree is completely
The thing that jumps out at me with this is the question of how you will
> > >>>avoid the 'filesystem image in a file' disaster that reiserfs had (where
> > >>>its fsck could mix up metadata chunks from the main filesystem with
> > >>>met
>avoid the 'filesystem image in a file' disaster that reiserfs had (where
> >>>its fsck could mix up metadata chunks from the main filesystem with
> >>>metadata chunks from any filesystem images that it happened to stumble
> >>>across when
On Mon, 28 Jan 2013, Theodore Ts'o wrote:
On Sun, Jan 27, 2013 at 10:13:37PM -0800, Daniel Phillips wrote:
The thing that jumps out at me with this is the question of how you will
avoid the 'filesystem image in a file' disaster that reiserfs had (where
its fsck could mi
On Sun, Jan 27, 2013 at 10:13:37PM -0800, Daniel Phillips wrote:
> > The thing that jumps out at me with this is the question of how you will
> > avoid the 'filesystem image in a file' disaster that reiserfs had (where
> > its fsck could mix up metadata chu
On Sun, 27 Jan 2013, Daniel Phillips wrote:
Compared to Ext2/3/4, Tux3 has a big disadvantage in terms of fsck: it does
not confine inode table blocks to fixed regions of the volume. Tux3 may store
any metadata block anywhere, and tends to stir things around to new locations
during normal
On Sun, Jan 27, 2013 at 10:02 PM, David Lang wrote:
> On Sun, 27 Jan 2013, Daniel Phillips wrote:
> The thing that jumps out at me with this is the question of how you will
> avoid the 'filesystem image in a file' disaster that reiserfs had (where
> its fsck could mix
On Sunday 20 January 2008 18:51, Theodore Tso wrote:
> On Sat, Jan 19, 2008 at 08:10:20PM -0800, Daniel Phillips wrote:
> > I can see value in preemptively loading indirect blocks into the
> > buffer cache, but is building a second-order extent tree really
> > worth the effort? Probing the buffer
No, it didn't. I measured read from a 10GB sequentially laid out file
with standard benchmarking practices (cold cache, multiple runs, low
std. deviation in results, etc.) and here are the results:
File created by vanilla Ext3 being read by vanilla Ext3:
Total: 3m16.1s
User: 0.0.5s
Sys: 13.9
File
> On Wed, 23 Jan 2008 04:12:16 -0500 Abhishek Rai <[EMAIL PROTECTED]> wrote:
> > I'm wondering about the interaction between this code and the
> > buffer_boundary() logic. I guess we should disable the buffer_boundary()
> > handling when this code is in effect. Have you reviewed and tested that
>
On Sat, Jan 19, 2008 at 08:10:20PM -0800, Daniel Phillips wrote:
>
> I can see value in preemptively loading indirect blocks into the buffer
> cache, but is building a second-order extent tree really worth the
> effort? Probing the buffer cache is very fast.
It's not that much effort, and for
On Thursday 17 January 2008 04:47, Abhishek Rai wrote:
> > if Abhishek wants to pursue it, would be to pull in all of the
> > indirect blocks when the file is opened, and create an in-memory
> > extent tree that would speed up access to the file. It's rarely
> > worth doing this without metacluste
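The in-memory extent tree idea being weighed here can be illustrated with a
small, self-contained sketch; nothing below is ext3 or kernel code, just the
coalescing step such an index would perform over the logical-to-physical block
map read from the indirect blocks:

#include <stdio.h>

struct extent {
	unsigned long logical;		/* first logical block of the run */
	unsigned long physical;		/* first on-disk block of the run */
	unsigned long len;		/* number of contiguous blocks */
};

/* blocks[i] is the physical block backing logical block i (0 = hole). */
static size_t build_extents(const unsigned long *blocks, size_t nblocks,
			    struct extent *out)
{
	size_t n = 0;

	for (size_t i = 0; i < nblocks; i++) {
		if (blocks[i] == 0)
			continue;				/* skip holes */
		if (n && out[n - 1].logical + out[n - 1].len == i &&
		    out[n - 1].physical + out[n - 1].len == blocks[i])
			out[n - 1].len++;			/* extend previous run */
		else
			out[n++] = (struct extent){ i, blocks[i], 1 };
	}
	return n;
}

int main(void)
{
	/* Toy map: two contiguous runs separated by a hole. */
	unsigned long map[] = { 100, 101, 102, 0, 500, 501 };
	struct extent ext[6];
	size_t n = build_extents(map, 6, ext);

	for (size_t i = 0; i < n; i++)
		printf("logical %lu..%lu -> physical %lu\n",
		       ext[i].logical, ext[i].logical + ext[i].len - 1,
		       ext[i].physical);
	return 0;
}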
On Tuesday 15 January 2008 03:04, Andrew Morton wrote:
> I'm wondering about the real value of this change, really.
>
> In any decent environment, people will fsck their ext3 filesystems
> during planned downtime, and the benefit of reducing that downtime
> from 6 hours/machin
Hi!
> > I guess I should try to measure it. (Linux already does writeback
> > caching, with 2GB of memory. I wonder how important disk's 2MB of
> > cache can be).
>
> It serves essentially the same purpose as the 'async' option in /etc/exports
> (i.e. we declare it "done" when the other end of t
I just had a talk with a colleague, John Palmer, who worked on disk drive
design for about 5 years in the '90s and he gave me a very confident,
credible explanation of some of the things we've been wondering about disk
drive power loss in this thread, complete with demonstrations of various
gen
ten contents of that sector
at the host.
Correct.
Having our tools (fsck especially) be resilient in the face of media
errors is really critical. Although I don't think the scenario of a
partially written sector is common, media errors in general are common
and can develop over time.
A
"H. Peter Anvin" <[EMAIL PROTECTED]> wrote on 01/18/2008 07:08:30 AM:
> Bryan Henderson wrote:
> >
> > We weren't actually talking about writing out the cache. While that was
> > part of an earlier thread which ultimately conceded that disk drives most
> > probably do not use the spinning di
or
at the host.
Having our tools (fsck especially) be resilient in the face of media
errors is really critical. Although I don't think the scenario of a
partially written sector is common, media errors in general are common
and can develop over time.
Ted brought up the separate issue of the
Bryan Henderson wrote:
We weren't actually talking about writing out the cache. While that was
part of an earlier thread which ultimately conceded that disk drives most
probably do not use the spinning disk energy to write out the cache, the
claim was then made that the drive at least surviv
On Thu, Jan 17, 2008 at 04:31:48PM -0800, Bryan Henderson wrote:
> But I heard some years ago from a disk drive engineer that that is a myth
> just like the rotational energy thing. I added that to the discussion,
> but admitted that I haven't actually seen a disk drive write a partial
> sector
Ric Wheeler <[EMAIL PROTECTED]> wrote on 01/17/2008 03:18:05 PM:
> Theodore Tso wrote:
> > On Wed, Jan 16, 2008 at 09:02:50PM -0500, Daniel Phillips wrote:
> >> Have you observed that in the wild? A former engineer of a disk drive
> >> company suggests to me that the capacitors on the board prov
Theodore Tso wrote:
On Wed, Jan 16, 2008 at 09:02:50PM -0500, Daniel Phillips wrote:
Have you observed that in the wild? A former engineer of a disk drive
company suggests to me that the capacitors on the board provide enough
power to complete the last sector, even to park the head.
Even if
> interrupt which caused the Irix to run around frantically shutting
> down DMA's for a controlled shutdown. Of course, PC-class hardware
> has none of this. My source for this was Jim Mostek, one of the
PC class hardware has a power good signal which drops just before the
rest.
r this is better. If online incremental fsck is
something people want, then please speak up here and that will very
definitely help make it happen.
On the walk-before-run principle, it would initially just be
filesystem checking, not repair. But even this would help, by setting
per-group chec
On Wed, Jan 16, 2008 at 09:02:50PM -0500, Daniel Phillips wrote:
>
> Have you observed that in the wild? A former engineer of a disk drive
> company suggests to me that the capacitors on the board provide enough
> power to complete the last sector, even to park the head.
>
The problem isn't wit
"Daniel Phillips" <[EMAIL PROTECTED]> wrote on 01/16/2008 06:02:50 PM:
> On Jan 16, 2008 2:06 PM, Bryan Henderson <[EMAIL PROTECTED]> wrote:
> > >The "disk motor as a generator" tale may not be purely folklore. When
> > >an IDE drive is not in writeback mode, something special needs to
done
> > >
On Tue 2008-01-15 20:36:16, Chris Mason wrote:
> On Tue, 15 Jan 2008 20:24:27 -0500
> "Daniel Phillips" <[EMAIL PROTECTED]> wrote:
>
> > On Jan 15, 2008 7:15 PM, Alan Cox <[EMAIL PROTECTED]> wrote:
> > > > Writeback cache on disk in itself is not bad, it only gets bad
> > > > if the disk is not e
On Tue, 15 Jan 2008, Daniel Phillips wrote:
> Along with this effort, could you let me know if the world actually
> cares about online fsck? Now we know how to do it I think, but is it
> worth the effort.
Most users seem to care deeply about "things just work". Here is why
On Jan 15, 2008 10:28 AM, Theodore Tso <[EMAIL PROTECTED]> wrote:
> Also, it's not just reducing fsck times, although that's the main one.
> The last time this was suggested, the rationale was to speed up the
> "rm dvd.iso" case. Also, something which *cou
t different.
>
> (Having said that, I'll admit the one 52T filesystem is an SGI Itanium box
> running Suse and using XFS rather than ext3).
>
> Has anybody done a back-of-envelope of what this would do for fsck times for
> a "max realistically achievable ext3 filesyste
On Jan 15, 2008 22:05 -0500, Rik van Riel wrote:
> With a filesystem that is compartmentalized and checksums metadata,
> I believe that an online fsck is absolutely worth having.
>
> Instead of the filesystem resorting to mounting the whole volume
> read-only on certain erro
On Jan 16, 2008 2:06 PM, Bryan Henderson <[EMAIL PROTECTED]> wrote:
> >The "disk motor as a generator" tale may not be purely folklore. When
> >an IDE drive is not in writeback mode, something special needs to be done
> >to ensure the last write to media is not a scribble.
>
> No it doesn't. The las
Alan Cox wrote:
>> Writeback cache on disk in itself is not bad, it only gets bad if the
>> disk is not engineered to save all its dirty cache on power loss,
>> using the disk motor as a generator or alternatively a small battery.
>> It would be awfully nice to know which brands fail here, if any,
On Jan 16, 2008 3:49 AM, Pavel Machek <[EMAIL PROTECTED]> wrote:
>
> ext3's "let's fsck on every 20 mounts" is a good idea, but it can be
> annoying when developing. Having the option to fsck while the filesystem is
> online takes that annoyance away.
I'm sure everyone
> And I think there's a problem with drives that, upon sensing the
> unreadable sector, assign an alternate even though the sector is fine, and
> you eventually run out of spares.
You are assuming drives can't tell the difference between stray data loss
and sectors that can't be recovered by rew
>The "disk motor as a generator" tale may not be purely folklore. When
>an IDE drive is not in writeback mode, something special needs to be done
>to ensure the last write to media is not a scribble.
No it doesn't. The last write _is_ a scribble. Systems that make atomic
updates to disk drives us
On Wed, Jan 16, 2008 at 08:43:25AM +1100, David Chinner wrote:
> ext3 is not the only filesystem that will have trouble due to
> volatile write caches. We see problems often enough with XFS
> due to volatile write caches that it's in our FAQ:
In fact it will hit every filesystem. A write-back cac
On Wed, 16 Jan 2008 12:51:44 +0100, Pavel Machek said:
> I guess I should try to measure it. (Linux already does writeback
> caching, with 2GB of memory. I wonder how important disk's 2MB of
> cache can be).
It serves essentially the same purpose as the 'async' option in /etc/exports
(i.e. we de
On Tue 2008-01-15 18:44:26, Daniel Phillips wrote:
> On Jan 15, 2008 6:07 PM, Pavel Machek <[EMAIL PROTECTED]> wrote:
> > I had write cache enabled on my main computer. Oops. I guess that
> > means we do need better documentation.
>
> Writeback cache on disk in itself is not bad, it only gets bad
Hi!
> Along with this effort, could you let me know if the world actually
> cares about online fsck?
I'm not the world's spokesperson (yet ;-).
> Now we know how to do it I think, but is it
> worth the effort.
ext3's "let's fsck on every 20 mounts" is g
On Tue, 15 Jan 2008 10:09:16 EST, Ric Wheeler said:
> I actually think that the value of this kind of reduction is huge. We
> have seen fsck run for days (not just hours) which makes the "restore
> from backup" versus "fsck" decision favor the tapes...
Funny th
On Tue, 15 Jan 2008 03:04:41 PST, Andrew Morton said:
> In any decent environment, people will fsck their ext3 filesystems during
> planned downtime, and the benefit of reducing that downtime from 6
> hours/machine to 2 hours/machine is probably fairly small, given that there
> i
On Tue, 15 Jan 2008 20:44:38 -0500
"Daniel Phillips" <[EMAIL PROTECTED]> wrote:
> Along with this effort, could you let me know if the world actually
> cares about online fsck? Now we know how to do it I think, but is it
> worth the effort.
With a filesystem that
Hi Pavel,
Along with this effort, could you let me know if the world actually
cares about online fsck? Now we know how to do it I think, but is it
worth the effort.
Regards,
Daniel
On Tue, 15 Jan 2008 20:24:27 -0500
"Daniel Phillips" <[EMAIL PROTECTED]> wrote:
> On Jan 15, 2008 7:15 PM, Alan Cox <[EMAIL PROTECTED]> wrote:
> > > Writeback cache on disk in itself is not bad, it only gets bad
> > > if the disk is not engineered to save all its dirty cache on
> > > power loss,
On Jan 15, 2008 7:15 PM, Alan Cox <[EMAIL PROTECTED]> wrote:
> > Writeback cache on disk in itself is not bad, it only gets bad if the
> > disk is not engineered to save all its dirty cache on power loss,
> > using the disk motor as a generator or alternatively a small battery.
> > It would be awf
> Writeback cache on disk in itself is not bad, it only gets bad if the
> disk is not engineered to save all its dirty cache on power loss,
> using the disk motor as a generator or alternatively a small battery.
> It would be awfully nice to know which brands fail here, if any,
> because writeback
Abhishek Rai wrote:
Hi,
(Sorry for the inconvenience caused due to previous misformatted patch, I've
most definitely figured out the problem this time. Thanks Michael, Daniel, and
Ingo for your help.)
This patch speeds up e2fsck on Ext3 significantly using a technique called
Metaclustering.
On Jan 15, 2008 6:07 PM, Pavel Machek <[EMAIL PROTECTED]> wrote:
> I had write cache enabled on my main computer. Oops. I guess that
> means we do need better documentation.
Writeback cache on disk in itself is not bad, it only gets bad if the
disk is not engineered to save all its dirty cache on
e few disks that did this rapidly got firmware updates
> > > because there are other OS's that can't cope.
> > >
> > > > If disk does not lie, but powerfail during write may cause random
> > > > numbers to be returned on read -- can fsck handl
properly?
> >
> > Nope. However the few disks that did this rapidly got firmware updates
> > because there are other OS's that can't cope.
> >
> > > If disk does not lie, but powerfail during write may cause random
> > > numbers to be returned on
ates
> because there are other OS's that can't cope.
>
> > If disk does not lie, but powerfail during write may cause random
> > numbers to be returned on read -- can fsck handle that?
>
> most of the time. and fsck knows about writing sectors to remove read
>
On Tue, Jan 15, 2008 at 01:15:33PM +, Christoph Hellwig wrote:
> They won't fsck in planned downtimes. They will have to use fsck when
> the shit hits the fan and they need to. Not sure about ext3, but big
> XFS users with a close tie to the US government were concerned about t
Andrew Morton wrote:
I'm wondering about the real value of this change, really.
In any decent environment, people will fsck their ext3 filesystems during
planned downtime, and the benefit of reducing that downtime from 6
hours/machine to 2 hours/machine is probably fairly small, given
On Tue, Jan 15, 2008 at 01:15:33PM +, Christoph Hellwig wrote:
> They won't fsck in planned downtimes. They will have to use fsck when
> the shit hits the fan and they need to. Not sure about ext3, but big
> XFS users with a close tie to the US government were concerned about t
On Tue, Jan 15, 2008 at 03:04:41AM -0800, Andrew Morton wrote:
> I'm wondering about the real value of this change, really.
>
> In any decent environment, people will fsck their ext3 filesystems during
> planned downtime, and the benefit of reducing that downtime from 6
> hour
I'm wondering about the real value of this change, really.
In any decent environment, people will fsck their ext3 filesystems during
planned downtime, and the benefit of reducing that downtime from 6
hours/machine to 2 hours/machine is probably fairly small, given that there
is no se
What are ext3 expectations of disk (is there doc somewhere)? For
example... if disk does not lie, but powerfail during write damages
the sector -- is ext3 still going to work properly?
If disk does not lie, but powerfail during write may cause random
numbers to be returned on read -- can fsck handle that?
On Mon, 14 Jan 2008 08:39:01 -0500
Abhishek Rai <[EMAIL PROTECTED]> wrote:
> This is the patch for 2.6.24-rc6 -mm tree, please let me know if anyone would
> like a patch against another recent kernel. Ingo Molnar has already posted a
> patch for 2.6.24-rc7.
>
Please always retain and maintain
On Sunday 13 January 2008 01:14, Michael Gerdau wrote:
> This patch does not apply to a clean 2.6.23.13 tree using either of
> patch -p1 or patch -p1 -l
ping.
In this case I am listening to Chicken Little carefully and really do
believe the sky will fall if we fail to come up with an incremental
online fsck some time in the next few years. I realize the challenge
verges on insane, but I have been slowly chewing away at this question
for some t
OS's that can't cope.
> If disk does not lie, but powerfail during write may cause random
> numbers to be returned on read -- can fsck handle that?
most of the time. and fsck knows about writing sectors to remove read
errors in metadata blocks.
> What about disk that kills 5
ppy here. Let's say... internal hdd in thinkpad
x60?
What are ext3 expectations of disk (is there doc somewhere)? For
example... if disk does not lie, but powerfail during write damages
the sector -- is ext3 still going to work properly?
If disk does not lie, but powerfail during write may cause ra
After an unclean shutdown, assuming you have decent hardware that
> doesn't lie about when blocks hit iron oxide, you shouldn't have any
> corruption at all. If you have crappy hardware, then all bets are off
Maybe with barriers...
> > So let's take advantage of this fac
This patch does not apply to a clean 2.6.23.13 tree using either of
patch -p1
On Wednesday 09 January 2008 01:16, Andreas Dilger wrote:
> While an _incremental_ fsck isn't so easy for existing filesystem
> types, what is pretty easy to automate is making a read-only snapshot
> of a filesystem via LVM/DM and then running e2fsck against that. The
> kernel an
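Andreas's snapshot-then-check sequence is easy to script; a rough sketch
follows, written in C (shelling out with system()) only to keep a single
language across the examples in this digest — the volume group vg0, the LV
name home, and the 1G snapshot size are assumptions, and in practice this
would just be a few shell commands:

#include <stdio.h>
#include <stdlib.h>

static int run(const char *cmd)
{
	fprintf(stderr, "+ %s\n", cmd);
	return system(cmd);
}

int main(void)
{
	/* 1. Take a small copy-on-write snapshot of the mounted volume. */
	if (run("lvcreate --snapshot --size 1G --name home_fsck /dev/vg0/home"))
		return 1;

	/* 2. Check the snapshot read-only: -n answers "no" to every repair
	 *    question, -f forces a full check even if it looks clean. */
	int rc = run("e2fsck -f -n /dev/vg0/home_fsck");

	/* 3. Drop the snapshot whether or not problems were found. */
	run("lvremove -f /dev/vg0/home_fsck");

	return rc ? 1 : 0;
}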
ve decent hardware that
doesn't lie about when blocks hit iron oxide, you shouldn't have any
corruption at all. If you have crappy hardware, then all bets are off
> So let's take advantage of this fact and do an optimistic fsck, to
> assure integrity per-dir, and assume no
Bodo Eggert wrote:
> Al Boldi <[EMAIL PROTECTED]> wrote:
> > Even after a black-out shutdown, the corruption is pretty minimal, using
> > ext3fs at least. So let's take advantage of this fact and do an
> > optimistic fsck, to assure integrity per-dir, and assume n
Al Boldi <[EMAIL PROTECTED]> wrote:
> Even after a black-out shutdown, the corruption is pretty minimal, using
> ext3fs at least. So let's take advantage of this fact and do an optimistic
> fsck, to assure integrity per-dir, and assume no external corruption. Then
> we r
optimistically, we would conclude
> that no fsck will be needed,
And that's the reality, because people are mostly optimistic and feel
extremely tempted to just force-mount a dirty ext3fs, instead of waiting
hours-on-end for a complete fsck, which mostly comes back with some benign
"
On Wed, 9 Jan 2008 14:52:14 +0300
Al Boldi <[EMAIL PROTECTED]> wrote:
> Ok, but let's look at this a bit more opportunistic / optimistic.
You can't play fast and loose with data integrity.
Besides, if we looked at things optimistically, we would conclude
that no fsck w
Valerie Henson wrote:
> On Jan 8, 2008 8:40 PM, Al Boldi <[EMAIL PROTECTED]> wrote:
> > Rik van Riel wrote:
> > > Al Boldi <[EMAIL PROTECTED]> wrote:
> > > > Has there been some thought about an incremental fsck?
> > > >
> >