> Eventually the problem was diagnosed to be caused by the data on the two
> mirrored disks not being identical.
I guess you didn't disable the write cache of your hard drives?
With the write cache enabled (and no UPS) it's somewhat unfair to
blame 'md'..
> It seems that the kernel does not check the
> > This is why ZFS offers block checksums... it can then try all the
> > permutations of raid regens to find a solution which gives the right
> > checksum.
>
> Isn't there a way to do this at the block layer? Something in
> device-mapper?
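The checksum trick quoted above can be sketched in a few lines of shell. This is a toy model, not ZFS or md code: the two files stand in for mirror halves, and the checksum lives outside the data, so a silently corrupted copy can be detected and skipped on read:

```shell
# Toy model of checksum-arbitrated mirror reads (file names and data
# are invented; nothing here is real ZFS or device-mapper code).
printf 'important data' > copy-a
printf 'important data' > copy-b
good=$(sha256sum copy-a | cut -d' ' -f1)   # checksum kept in "metadata"
printf 'important dataX' > copy-b          # one mirror half silently corrupted
for copy in copy-a copy-b; do
    if [ "$(sha256sum "$copy" | cut -d' ' -f1)" = "$good" ]; then
        echo "reading from $copy"          # picks the copy that still matches
        break
    fi
done
```

A block-layer version (the device-mapper question above) would need a checksum stored per block rather than per file; dm-integrity eventually grew into roughly that role.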
Remember: Sun's new Filesystem + Sun's new Volume Manager
> You do it turns out. Its becoming an issue more and more that the sheer
> amount of storage means that the undetected error rate from disks,
> hosts, memory, cables and everything else is rising.
IMHO the chance of hitting such a random, so-far-undetected corruption
is very low with one of the b
> > While filesystem speed is nice, it also would be great if reiser4.x would
> > be
> > very robust against any kind of hardware failures.
>
> Can't have both.
..and some people simply don't care about this:
If you are running a 'big' storage system with a battery-protected
write cache, mirrori
> suspect, particularly with 7200/min (s)ATA crap.
Quoting myself (again):
>> A quick'n'dirty ZFS-vs-UFS-vs-Reiser3-vs-Reiser4-vs-Ext3 'benchmark'
Yeah, the test ran on a single SATA hard disk (quick'n'dirty).
I'm sorry, but I don't have access to a $$$ RAID system at home.
Anyway: the test s
> So ZFS isn't "state-of-the-art"?
Of course it's state-of-the-art (on Solaris ;-) )
> WAFL is for high-turnover filesystems on RAID-5 (and assumes flash memory
> staging areas).
s/RAID-5/RAID-4/
> Not your run-of-the-mill desktop...
The WAFL-Thing was just a joke ;-)
Regards,
Adrian
> > Great to see that Sun ships a state-of-the-art Filesystem with
> > Solaris... I think linux should do the same...
>
> This would be worthwhile, if only to be able to futz around in Solaris-made
> filesystems.
s/I think linux should do the same/I think linux should include Reiser4/
;-)
> F
> All the more important to think about FS requirements *before*
> newfs-ing if a quick "one day for rsync/star/dump+restore" isn't
> available. If you're hitting, for instance, the hash collision problem
> in reiser3, you're as dead as with a FS without inodes.
Quoting myself:
>> Let's face it:
> Well - easy to fix, newfs again with proper inode density (perhaps 1 per
> 2 kB) and redo the migration.
Ehr: such a migration (on a very busy system) takes *some* time (weeks).
Redoing the whole thing (migrating users back, recreating the FS,
starting again) isn't really an option..
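The "proper inode density" suggested in the quote above maps to mke2fs's `-i` (bytes-per-inode) option. A quick way to try it without a spare partition is a small file-backed image; the sizes and file name here are made up for illustration:

```shell
# 8 MiB scratch image; -i 2048 asks for one inode per 2 kB of space,
# so we expect roughly 8 MiB / 2 kB = 4096 inodes.
dd if=/dev/zero of=fs.img bs=1024 count=8192 2>/dev/null
mke2fs -q -F -i 2048 fs.img
dumpe2fs -h fs.img 2>/dev/null | grep -i '^inode count'
```

The complaint stands, though: for ext2/3 this number is fixed at mkfs time.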
> Of course
Hello Matthias,
> This looks rather like an education issue rather than a technical limit.
We aren't talking about the same issue: I was asking about doing it
on the fly. Unmounting the filesystem and running e2fsck and resize2fs
is something different ;-)
> Which is untrue at least for Solaris, which all
> > And EXT3 imposes practical limits that ReiserFS doesn't as well. The big
> > one being a fixed number of inodes that can't be adjusted on the fly,
>
> Right. Plan ahead.
Ok: assume that I've read the mke2fs manpage and added more inodes to
my filesystem.
So: what happens if I need to grow my
> Would the system crash if you dd to another directory of the filesystem?
Writing/dd'ing to other directories (= *not* subdirectories of the
homedir) worked fine.
> Would you please recompile with reiserfs debug on and crash again and
> send us all kernel output related to the crash?
Hmm.. Th
Hi Vladimir,
> Can you reproduce this easily?
> If no, please tell more about what did filesystem do when panic occured.
I've seen the same problem (vs-6030) today on one of our hosts:
It happened as soon as somebody tried to write data to
/home/$affected_user/ (..or a subdirectory)
e.g.:
dd i
> Ok, so I saw a few messages show up today -- what happened to all the
> ones in the meantime?
Maybe they bounced? Namesys had/HAS some ugly DNS problems:
http://www.dnsreport.com/tools/dnsreport.ch?domain=namesys.com
:-(
Hi,
If anyone is interested:
I ran a small filesystem benchmark on my x86 PC.
It includes:
On Linux:
* Reiser4
* ReiserFS
* Ext3
On Solaris (Using 'gnusolaris'[.org] -> Alpha 2)
* UFS
* ZFS
NetApp's 'Postmark' was used to perform the tests.
(Postmark simulates something like M
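For reference, a Postmark run is driven by a short command file. The parameter values below are invented placeholders, not the ones used in this benchmark:

```shell
# Write a minimal Postmark command file (values are example guesses).
cat > pm.cfg <<'EOF'
set location /mnt/testfs
set number 1000
set transactions 5000
run
quit
EOF
# postmark pm.cfg   # run it where NetApp's postmark binary is installed
```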
> If you want high performance acls,
> sponsor me to supervise the work for reiser4, and they will be very high
> performance. Making them high performance in reiser4 is straightforward
> and easy.
How much would it cost?
1'000$ ? 10'000$?
-- Adrian
--
"Wow, I'm Dazzled!
These graphs are
> mount --bind /meta/vfs/some/chroot /some/chroot/meta
This may be workable if you've got 1-2 chrooted applications,
but it will be a nightmare if you've got 20-30 of them.
--
We're working on it, slowly but surely...or not-so-surely in the spots
we're not so sure... -- Larry Wall
> so all we have left is the issue of whether using /meta costs us
> performance, or whether breaking POSIX to add a symlink (such as
> foo/...) really gives us that much more usability.
IMHO '/meta' isn't such a good idea, because a chrooted application
won't be able to use it.
> > Not everyone will want
> > to reformat at once, but as the reiser4 code matures and proves itself
> > (even more than it already has),
>
> I for one have seen mainly people with wild claims that it will make their
> machines much faster, and com
> mount -o remount,rw /dev/md1 <--- (no '/somewhere' !!!)
Hmm... this seems to make fsck happy :)
But once I booted using /dev/md1 as my rootfs, fsck.reiser4
complains again after the next reboot (into my rescue fs)
-- Adrian
--
We're working on it, slowly but surely...or not-so-surely in the spots
we're not so sure... -- Larry Wall
> I am about the particular fsck message that appeares when you
> use -5 reiser4 patch:
Ok, but I think it's still strange:
this message only re-appears if I do a:
mount -o ro /dev/md1 /somewhere
mount -o remount,rw /dev/md1 /somewhere <--- !!!
umount /dev/md1
fsck.reiser4 /dev/md1 <--
Hi Vitaly,
> there was a format change to work with encryption plugin in
> -5 reiser4 patch. progs do not have its support yet. grub works
> through the progs code so its the same problem, mkisofs is not
> relevant here.
I don't think that's the problem;
it looks like a remount bug:
See <[EMA
> Now the same thing happens again :-/
Ok, I know why it only got corrupted after using
the partition as rootfs:
my Reiser4 partition doesn't like being remounted rw:
Running
1) mount /dev/md1 /somewhere
2) umount /dev/md1
3) mount -o ro /dev/md1 /somewhere
4) umount /dev/md1
works wit
I upgraded to Linux 2.6.11.11 using the -5 reiser4 patch.
It fixed it.. somewhat.. it's still funky:
* mkisofs doesn't crash with the new kernel, yeah!
* after running mkisofs, grub can't read the filesystem anymore..
The filesystem got corrupted. (It was ok before I booted
into 2.6.11.
> what reiser4 patch do you use for this kernel?
That should be
ftp://ftp.namesys.com/pub/reiser4-for-2.6/2.6.11/reiser4-for-2.6.11-4.patch.gz
I'll give -5 a try this evening
> Do you have more than one mounted reiser4 partition?
No, only /dev/md1 (My Rootfs)
Hi,
Well, I managed to crash reiser4 ;-)
I created an ISO image on my reiser4 filesystem (it's my rootfs)
using mkisofs. mkisofs aborted because the filesystem was full.
After freeing up some space, I ran mkisofs again and: *bam*
fsck.reiser4 told me to run '--rebuild-sb', but it looks like
it didn'
> ReiserFS: sda3: warning: sh-2021: reiserfs_fill_super: can not find
> reiserfs on sda3
Ehrm, this sounds like Reiser3:
does your kernel support Reiser4? Maybe you need to modprobe it?
--
We're working on it, slowly but surely...or not-so-surely in the spots
we're not so sure...
Hi,
> IIRC a fund drive was talked about earlier and for some reason
> discarded. Can't even remember who the guy behind it was, I'm afraid.
It was me..
And I'd still buy them a Mac.. No problem..
-- Adrian
> > Use ftp://80.133.138.104:12121
>
> sorry, but I get 'no route to host' every time.
80.133.138.104 is owned by 'Deutsche Telekom AG'
(an ISP in Germany).
Looks like a dynamic IP, currently not in use -> no route
-- Adrian
> A flaw in the filesystem, in my opinion, is equivalent to the space
> ship crashing and all crew members die.
No, it isn't..
A dying filesystem is a bad thing.. But it's just a filesystem..
Hello Hans,
> There was a read slowdown, in latest release of reiser4, see patch I
> cc'd this list on a few emails ago.
I saw the patch five seconds after I posted my message ;)
I'll re-run my test using the patch, with and without the extents
option..
> Did you time the sync command or?
I ra
I also noticed some odd slowness of reiser4
(Running 2.6.10 using the latest 2.6.10-rcsomething reiser4 patch)
What I did:
I created a small script which creates MANY (= 195075) directories
like this: 1/[1-3]/[1-255]/[1-255]
After this, I ran 'sync && find . > /dev/null && rm -rf *'
Well,
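The script itself isn't in the mail, but a 1/[1-3]/[1-255]/[1-255] tree is easy to reconstruct. This is a guess at its shape, scaled down to 3*5*5 leaves so it finishes instantly; the original sizes give 3*255*255 = 195075:

```shell
# Build a directory tree shaped like 1/[1-A]/[1-B]/[1-C].
A=3 B=5 C=5                      # the original post used A=3 B=255 C=255
for i in $(seq 1 "$A"); do
  for j in $(seq 1 "$B"); do
    for k in $(seq 1 "$C"); do
      mkdir -p "1/$i/$j/$k"
    done
  done
done
# The timed part from the post would then be:
# sync && find . > /dev/null && rm -rf *
```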
> Also, can this be done when the partition is mounted read-only?
I did this some months ago and my kernel gave me a nice Oops ;-)
Nikita told me on #reiser4 that it isn't possible to fsck a (ro-)mounted
filesystem..
Maybe this changed and it's possible now..
bye
Hi,
Nikita provided a small patch for this problem on #reiser4:
--- dir.c.org 2004-08-29 11:32:40.0 +0200
+++ dir.c 2004-08-29 12:03:40.0 +0200
@@ -126,6 +126,9 @@
data.mode = object->i_mode;
data.id = inode_file_plugin(object)->h.id;
+ if (!inode_f
Hello,
> - 2.6.9-rc1-mm1 has a bug that affects all filesystems (pointed out
> earlier today). so don't use if you love your data ;)
Can you tell us more about this bug?
(I'm using 2.6.9-rc1-mm1 and would like to know what will happen ;))
> Probably a good idea. Wish I had money for it.
How much would namesys need?