Re: ZFS - no thanks!
On Sat, Aug 11, 2007 at 07:27:40PM +0200, Wojciech Puchar wrote:
> > I doubt it. More likely you are having problems from trying to use
> > your i386 system on an amd64 kernel, which will be looking in a
> > different place for the i386 libraries.
>
> i was using qemu to emulate amd64. tried i386 live CD 5 minutes ago.
>
> SAME EFFECT! system hangs while doing import!

OK, but this was not the problem we were discussing in this paragraph. In the part of your email that you snipped, you discussed problems with the amd64 system failing to recognize libraries on your i386 disk image. This is expected behaviour.

> no crash so i can't do a crashdump :(

You'll need to enable DDB and obtain some debugging information. See the developers' handbook for full instructions.

> you said that i did quick testing. yes - because before it i already knew
> what i wanted to test.

The problem is that you didn't bother to read the documentation when you encountered "problems" (or ask for help), but instead guessed about how you thought the system should work. Unfortunately, your guesses were in disagreement with documented reality in most of the cases you mentioned.

Kris
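The DDB setup Kris refers to boils down to a debug-enabled kernel plus a dump device. A minimal sketch of what the handbook describes (the device name `ad0s1b` is only an example, not taken from this thread):

```shell
# Kernel configuration additions (rebuild and install the kernel afterwards):
#   options KDB        # kernel debugger framework
#   options DDB        # interactive kernel debugger
#   options KDB_TRACE  # print a stack trace on panic

# Point crash dumps at a swap device (example device):
dumpon /dev/ad0s1b

# Make it persistent so savecore(8) can recover the dump into /var/crash
# on the next boot:
echo 'dumpdev="/dev/ad0s1b"' >> /etc/rc.conf
savecore /var/crash /dev/ad0s1b

# For a hang rather than a panic, break into DDB from the console
# (Ctrl+Alt+Esc) and gather information with commands such as:
#   trace
#   ps
#   show locks
```

For a hang like the one described, the DDB `trace` output is usually the piece of debugging information a useful bug report needs.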
Re: ZFS - no thanks!
> I doubt it. More likely you are having problems from trying to use
> your i386 system on an amd64 kernel, which will be looking in a
> different place for the i386 libraries.

i was using qemu to emulate amd64. tried i386 live CD 5 minutes ago.

SAME EFFECT! system hangs while doing import!

no crash so i can't do a crashdump :(

you said that i did quick testing. yes - because before it i already knew what i wanted to test.

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: ZFS - no thanks!
On Sat, Aug 11, 2007 at 01:25:44PM +0200, Wojciech Puchar wrote:
> just ended testing.

Wow, that was quick. It looks like you made some very hasty judgements, and a lot of the problems you encountered were quite frankly your own fault.

> after having all my data (test system fortunately) on ZFS including root,
> i lost /boot partition, which was on a pendrive to make testing easier.
>
> well - no problem - i've started my normal 6.2 system, got bootonly CD,
> removed mfsroot, added (as on ZFS system)
> vfs.root.mountfrom="zfs:tank/root", put to pendrive, bsdlabel -B etc..
>
> started..
>
> it CANNOT mount zfs.

Yes, clearly documented.

> well - i don't have live CD

Easy to download one.

> the only thing left was 7.0 livecd but amd64 only.
>
> well - i have qemu.
>
> started qemu with this image, added -hda with my ZFS disk, started.
>
> got to fixit, kldload zfs.ko
>
> zpool import - well all's fine
>
> zpool import -f tank
>
> after some time done
>
> tried any command - looks like no libraries. well - probably it
> overmounted / from zfs...

I doubt it. More likely you are having problems from trying to use your i386 system on an amd64 kernel, which will be looking in a different place for the i386 libraries.

> next reboot
>
> now
>
> zpool import -R /mnt -f tank
>
> and it's waiting forever. doing nothing, both CTRL-C and CTRL-Z don't
> work.
>
> it's doing nothing as i see that qemu uses almost no CPU.

You may have found a bug; unfortunately, you needed to obtain more debugging information for this to be a useful report.

> it uses HUGE amount of RAM.

Documented.

> its set copies=n is a joke.
> you have no guarantee where the copies are (often on the same disk).

Not according to the documentation.

> zpool scrub DOES NOT move copies to other disk from the same when other is
> made available!!

It should, I think.

> raidz can't be expanded

Dunno what you mean by expanded.

> cache flushing CAN NOT be disabled for a selected device, only for
> everything.
>
> my USB-IDE converter make doesn't allow it, but my 2.5" does!
> i use a USB-IDE converter with disk as a backup. with ZFS it's impossible
> unless i turn off flushing for everything - losing an important
> advantage.

Not sure what you mean here either.

> disk-based pool (no mirror/raidz) won't start AT ALL with one element
> unattached!!! EVEN if everything has copies>1 !!!

That's not what copies is supposed to be used for. If you want degraded mode, use a mirror/raidz configuration.

> summary: excellent idea turned into pile of s...t

summary: user had many incorrect expectations about the software and is not willing to correctly report possible bugs so they can be fixed, and should therefore avoid running pre-release versions of FreeBSD.

Kris
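To make the copies-versus-redundancy point above concrete, a sketch of the two approaches (pool, dataset and disk names are examples):

```shell
# copies=n only duplicates blocks within the pool; it gives no guarantee
# that the duplicates land on different disks, so it protects against bad
# blocks, not against a missing device:
zfs set copies=2 tank/home

# Surviving a whole-disk failure (degraded mode) requires redundant vdevs:
zpool create tank mirror da0 da1        # two-way mirror
# or:
zpool create tank raidz da0 da1 da2     # single-parity raidz

# With a redundant pool, a failed member shows up as DEGRADED rather than
# leaving the pool unimportable:
zpool status tank
```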
Re: ZFS - no thanks!
Hi Wojciech,

let me start by pointing out that ZFS is still an experimental feature. Secondly, this is the wrong list, because ZFS is a feature of FreeBSD-CURRENT. Adding freebsd-current to CC; maybe one of the ZFS developers can give their $0.02.

On 11/08/07, Wojciech Puchar <[EMAIL PROTECTED]> wrote:
> just ended testing.
>
> after having all my data (test system fortunately) on ZFS including root,
> i lost /boot partition, which was on a pendrive to make testing easier.
>
> well - no problem - i've started my normal 6.2 system, got bootonly CD,
> removed mfsroot, added (as on ZFS system)
> vfs.root.mountfrom="zfs:tank/root", put to pendrive, bsdlabel -B etc..

If I understand you correctly, you tried to use ZFS with FreeBSD 6.2. Won't work.

> on the other hand it's faster than UFS on small files but not that much
> faster. it uses HUGE amount of RAM.

ZFS probably isn't for the average user. No offense meant by this: ZFS has a nice set of features, but not many of them are needed for everyday work, at least if you're on a workstation.

> its set copies=n is a joke.
> you have no guarantee where the copies are (often on the same disk).

Yes, because copies=n isn't there to protect against an entire disk failing. It protects data against block failures. So if an individual block on the disk goes bad, as is most often the case when a disk starts to die, you have some more copies with the correct checksum.

> zpool scrub DOES NOT move copies to other disk from the same when other is
> made available!!

No, because it's not designed to do this. AFAIK ZFS does this automagically every time you attach a disk and add it to a certain zpool. Scrubbing is for cases where disk faults have been found. It means that ZFS compares the data and its checksum, thus locating any trouble and fixing it.

> raidz can't be expanded
>
> cache flushing CAN NOT be disabled for a selected device, only for
> everything.
>
> my USB-IDE converter make doesn't allow it, but my 2.5" does!
> i use a USB-IDE converter with disk as a backup. with ZFS it's impossible
> unless i turn off flushing for everything - losing an important
> advantage.
>
> disk-based pool (no mirror/raidz) won't start AT ALL with one element
> unattached!!! EVEN if everything has copies>1 !!!

I don't know your setup, but for me it works fine. I'm currently at the Chaos Communication Camp; next to me is an AthlonXP-powered box with FreeBSD-CURRENT installed: 3x400GB HDD using ZFS, giving 732GB net capacity, and 2GB RAM. We had a few issues with the hardware. One of the disks is unstable, eventually leading to crashes. I started the system without that disk, changed the disks' positions, moved them from a PCI controller to the onboard controller and such. ZFS came up fine. I'm really impressed with ZFS and its features. The system is pretty busy: we have 15 users max with 1MBit/sec allowed each. The firewall reports a throughput of 90MBit/sec upstream, saturating the 100MBit/sec NIC.

With ZFS you're making use of all your hardware: CPU, RAM, PCI bus etc. So if something's wrong with your hardware you'll notice, but that doesn't necessarily mean it's related to ZFS. For example, I encountered poor system performance with lots of interrupts. I tried to tweak ZFS a bit; it didn't make a difference. Then I took my "primary slave disk" off the PCI controller, attached it as onboard primary master - and things went fine.

Just as a side note: if your dmesg reports something like this:

atapci0: port 0xdc00-0xdc07,0xd800-0xd803,0xd400-0xd407,0xd000-0xd003,0xcc00-0xcc0f irq 10 at device 6.0 on pci0

...remove it. ;-)

Christian
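The scrub-versus-resilver distinction Christian draws can be sketched as follows (pool and disk names are examples):

```shell
# A scrub walks every allocated block and verifies it against its checksum,
# repairing from redundant copies where possible:
zpool scrub tank
zpool status -v tank    # shows scrub progress and any errors found/repaired

# A scrub does not rebalance copies across devices.  Redundancy across
# disks comes from attaching a device, which triggers a resilver:
zpool attach tank da0 da1   # turn single disk da0 into a mirror with da1
```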
ZFS - no thanks! - few more words
for those who have had much better experience with ZFS, and for everyone else using anything:

please do remember - no RAID hardware or software, no ZFS, nothing is a replacement for REGULAR BACKUPS done to removable media or to a different machine. in the second case, at least some backups should go to removable media too.

this way i have never lost any data, without ever using any mirrors or RAID-5.
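A sketch of what "regular backups to removable media or a different machine" can look like on FreeBSD (hostnames, devices and paths are examples, not from this thread):

```shell
# Traditional route: dump(8) a UFS filesystem, either to mounted removable
# media or piped over ssh to a different machine.
dump -0 -L -a -f /backup/root.dump /
dump -0 -L -a -f - / | ssh backuphost 'cat > /backups/root.dump'

# ZFS datasets can be serialized the same way from a snapshot:
zfs snapshot tank/root@backup
zfs send tank/root@backup | ssh backuphost 'cat > /backups/root.zfs'
```

Either stream can later be restored with restore(8) or `zfs receive`, independently of the disks the data originally lived on.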
ZFS - no thanks!
just ended testing.

after having all my data (test system fortunately) on ZFS including root, i lost /boot partition, which was on a pendrive to make testing easier.

well - no problem - i've started my normal 6.2 system, got bootonly CD, removed mfsroot, added (as on ZFS system) vfs.root.mountfrom="zfs:tank/root", put to pendrive, bsdlabel -B etc..

started..

it CANNOT mount zfs.

well - i don't have live CD

the only thing left was 7.0 livecd but amd64 only.

well - i have qemu.

started qemu with this image, added -hda with my ZFS disk, started.

got to fixit, kldload zfs.ko

zpool import - well all's fine

zpool import -f tank

after some time done

tried any command - looks like no libraries. well - probably it overmounted / from zfs...

next reboot

now

zpool import -R /mnt -f tank

and it's waiting forever. doing nothing, both CTRL-C and CTRL-Z don't work.

it's doing nothing as i see that qemu uses almost no CPU.

"EXCELLENT" filesystem with "excellent" protection of my data

on the other hand it's faster than UFS on small files but not that much faster.

it uses HUGE amount of RAM.

its set copies=n is a joke. you have no guarantee where the copies are (often on the same disk).

zpool scrub DOES NOT move copies to other disk from the same when other is made available!!

raidz can't be expanded

cache flushing CAN NOT be disabled for a selected device, only for everything.

my USB-IDE converter make doesn't allow it, but my 2.5" does!

i use a USB-IDE converter with disk as a backup. with ZFS it's impossible unless i turn off flushing for everything - losing an important advantage.

disk-based pool (no mirror/raidz) won't start AT ALL with one element unattached!!! EVEN if everything has copies>1 !!!

summary: excellent idea turned into pile of s...t

no thanks
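For reference, the rescue-environment import attempted above, with the altroot option that avoids the "overmounted /" problem described (the pool name "tank" is from the post; the mountpoint is an example):

```shell
kldload zfs.ko                  # load the ZFS kernel module in the fixit shell
zpool import                    # list pools visible on attached disks
zpool import -f -R /mnt tank    # force-import under the /mnt altroot so the
                                # pool's filesystems are mounted below /mnt
                                # instead of overmounting the rescue system's
                                # own / and /usr
```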