Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Nomen Nescio
  Hi. I have a spare off the shelf consumer PC and was thinking about loading
  Solaris on it for a development box since I use Studio @work and like it
  better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
  has only one drive. If ZFS detects something bad it might kernel panic and
  lose the whole system, right? I realize UFS /might/ be ignorant of any
  corruption but it might be more usable and go happily on its way without
  noticing? Except then I have to size all the partitions and lose out on
  compression etc. Any suggestions thankfully received.
 
 Suppose you start getting checksum errors.  Then you *do* want to notice.

I'm not convinced. I understand the theoretical value of ZFS but it
introduces a whole new layer of problems other filesystems don't have. Even
if it's right in theory it doesn't always make things better in reality. I
like the features it provides and not having to size filesystems like in
the old days is great, but ZFS can and does have bugs and like anything else
is not perfect. Aside from Microsoft, which used to be guaranteed to corrupt
filesystems, I haven't ever had corruption that caused me any problems.
Certainly there must have been corruptions because of software bugs and
crappy hardware, but they had no visible effect and that is good enough for
me in the situation I asked about. I feel this issue is a little overblown
given most of the world runs on other enterprise filesystems and the world
hasn't come to an end yet. ZFS is an important step in the right direction
but it doesn't mean you can't live without its error detection. We lived
without it until now. What I find hard to live without is the management
features it gives you, which is why I have a dilemma.

In this specific use case I would rather have a system that's still bootable
and runs as best it can than an unbootable system that has detected an
integrity problem, especially at this point in ZFS's life. If ZFS would not
panic the kernel and instead gave the option to fail or mark file(s) bad, I
would like it more.

But having the ability to manage the disk with one pool and the other nice
features like compression, plus the fact it works nicely on good hardware,
make it hard to go back once you've made the jump. Choices, choices.

  Even if your system does crash, at least you now have an opportunity to
  recognize there is a problem, and think about your backups, rather than
  allowing the corruption to proliferate. 

This isn't a production box; as I said, it's an unused PC with a single drive,
and I don't have anybody's bank accounts on it. I can rsync whatever I work
on that day to a backup server. It won't be a disaster if UFS suddenly
becomes unreliable and I lose a file or two, or if a drive fails, but it
would be very annoying if ZFS barfed on a technicality and I had to
reinstall the whole OS because of a kernel panic and an unbootable system.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Nomen Nescio

  would be very annoying if ZFS barfed on a technicality and I had to
  reinstall the whole OS because of a kernel panic and an unbootable system.
 
 It shouldn't do that.

I agree but it seems like other people had it happen.

 Plus, if you look around a bit, you'll find some tutorials to back up
 the entire OS using zfs send-receive. So even if for some reason the
 OS becomes unbootable (e.g. blocks of some critical file are corrupted,
 which would cause panic/crash no matter what filesystem you use), the
 reinstall process is basically just a zfs send-receive plus
 installing the bootloader, so it can be VERY fast.
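For what it's worth, the send/receive backup described above can be sketched
roughly as below. This is a dry-run sketch that only prints the commands it
would run; the pool name (rpool), the backup host, and the target dataset
are placeholder names, not anything confirmed in this thread:

```shell
# Dry-run sketch of a full-pool backup via zfs send/receive.
# "rpool", "backuphost", and "backup/rpool" are placeholder names.
backup_cmds() {
  pool=$1
  snap="$pool@backup-$(date +%Y%m%d)"
  # snapshot -r covers every dataset in the pool; send -R replicates
  # the whole hierarchy, including properties, in one stream.
  printf 'zfs snapshot -r %s\n' "$snap"
  printf 'zfs send -R %s | ssh backuphost zfs receive -Fdu backup/%s\n' \
    "$snap" "$pool"
}

backup_cmds rpool
```

The receive side uses -u so nothing gets mounted over the backup server's
own filesystems, and -F so a later incremental send can roll the target back
to the matching snapshot first.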

Now that is interesting. But how do you do a receive before you reinstall?
Live CD?

Thanks


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Nomen Nescio
Thanks, sounds awesome! That pretty much takes away my concern about using ZFS!

Stu

 
 Now that is interesting. But how do you do a receive before you reinstall?
 Live CD?
 
 
 Just boot off of the CD (or jumpstart server) to single user mode.  Format
 your new disk, create a zpool, zfs recv, installboot (or installgrub),
 reboot and done. 
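A rough transcript of those restore steps might look like the following.
Again a dry-run sketch that just prints the commands; the disk (c0t0d0s0),
pool name, and backup-stream location are assumptions for illustration, and
installgrub is the x86 form (SPARC would use installboot instead):

```shell
# Dry-run sketch of the restore sequence quoted above, as it might be
# run from a single-user boot off install media. Device, pool, and
# backup-stream names are placeholders.
restore_cmds() {
  disk=$1
  printf 'zpool create -f rpool %s\n' "$disk"
  # -F force rollback if needed, -d derive dataset names from the
  # stream, -u leave everything unmounted until the reboot
  printf 'zfs receive -Fdu rpool < /mnt/backup/rpool.zfs\n'
  printf 'installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/%s\n' "$disk"
  printf 'init 6\n'
}

restore_cmds c0t0d0s0
```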





[zfs-discuss] Anybody running S10 or 11 on AMD bulldozer 8 core?

2012-05-25 Thread Nomen Nescio
I have a chance to pick up a system at a reasonable price built with an AMD
FX8120 8-core 3.1 GHz on a Gigabyte motherboard. Is anybody running with this
combo? Looking for info from an actual user. Thanks.



Re: [zfs-discuss] Server upgrade

2012-02-16 Thread Nomen Nescio
 I would recommend solaris 11 express based on personal experience.  It
 gets bugfixes and new features sooner than commercial solaris.

I thought they stopped making 11 Express available when 11 came out?


Re: [zfs-discuss] about btrfs and zfs

2011-11-13 Thread Nomen Nescio
 LOL.  Well, for what it's worth, there are three common pronunciations for
 btrfs.  Butterfs, Betterfs, and B-Tree FS (because it's based on b-trees.)
 Check wikipedia.  (This isn't really true, but I like to joke, after
 saying something like that, I wrote the wikipedia page just now.)   ;-)

You forgot Broken Tree File System, Badly Trashed File System, etc. Follow
the newsgroup and you'll get plenty more ideas for names ;-)


[zfs-discuss] Saving data across install

2011-08-03 Thread Nomen Nescio
I installed a Solaris 10 development box on a 500G root mirror and later I
received some smaller drives. I learned from this list it's better to have
the root mirror on the smaller drives and then create another mirror
on the original 500G drives, so I copied everything that was on the small
drives onto the 500G mirror to free up the smaller drives for a new install.

After my install completes on the smaller mirror, how do I access the 500G
mirror where I saved my data? If I simply create a tank mirror using those
drives will it recognize there's data there and make it accessible? Or will
it destroy my data? Thanks.


Re: [zfs-discuss] Saving data across install

2011-08-03 Thread Nomen Nescio
Please ignore this post. Bad things happened and now there is another thread
for it. Thank you.


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-30 Thread Nomen Nescio
 Actually, you do want /usr and much of /var on the root pool, they
 are integral parts of the svc:/filesystem/local needed to bring up
 your system to a useable state (regardless of whether the other
 pools are working or not).

Ok. I have my feelings on that topic but they may not be as relevant for
ZFS. It may be because I tried to avoid single points of failure on other
systems with techniques that don't map to ZFS or Solaris. I believe I can
bring up several OSes without /usr or /var; although they complain, they
will work. But I'll take your point here.

 Depending on the OS versions, you can do manual data migrations
 to separate datasets of the root pool, in order to keep some data
 common between OE's or to enforce different quotas or compression
 rules. For example, on SXCE and Solaris 10 (but not on oi_148a)
 we successfully splice out many filesystems in such a layout
 (the example below also illustrates multiple OEs):

Thanks, I have done similar things but I didn't know if they were
approved.

 And you cannot boot from any pool other than a mirror or a
 single drive. Rationale: a single BIOS device must be sufficient
 to boot the system and contain all the data needed to boot.

Definitely important fact here.

Thanks for all the info!


Re: [zfs-discuss] Encryption accelerator card recommendations.

2011-06-28 Thread Nomen Nescio
 All (Ultra)SPARC T2, T2+, and T3 CPUs should have these capabilities; if
 you have some other CPU the capabilities are probably not present. Run
 'prtdiag | head -20' to see the CPUs on your system/s; run cryptoadm(1M)
 with the  list option (Solaris 10+) to see the software and hardware
 providers available.
 
 For further assistance your best bet would be  crypto-discuss (this has
 gotten OT for zfs-discuss):
 
http://mail.opensolaris.org/pipermail/crypto-discuss/

Thanks, I'll ask over there. I understood there was a Broadcom add-on card
for servers, but from your answer it seems the CPU supports crypto
operations. I don't understand the purpose of having both support it
if they want to sell crypto cards.


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Nomen Nescio
Hello Bob! Thanks for the reply. I was thinking about going with a 3 way
mirror and a hot spare. But I don't think I can upgrade to larger drives
unless I do it all at once, is that correct?


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-21 Thread Nomen Nescio
Hello Marty! 

 With four drives you could also make a RAIDZ3 set, allowing you to have
 the lowest usable space, poorest performance and worst resilver times
 possible.

That's not funny. I was actually considering this :p

But you have to admit, it would probably be somewhat reliable!


[zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-16 Thread Nomen Nescio
Has there been any change to the server hardware with respect to number of
drives since ZFS has come out? Many of the servers around still have an even
number of drives (2, 4) etc. and it seems far from optimal from a ZFS
standpoint. All you can do is make one or two mirrors, or a 3 way mirror and
a spare, right? Wouldn't it make sense to ship with an odd number of drives
so you could at least use RAIDZ? Or stop making provision for anything except
one or two drives, or no drives at all, and require CD or netbooting and just
expect everybody to be using NAS boxes? I am just a home server user, what
do you guys who work on commercial accounts think? How are people using
these servers?


[zfs-discuss] Any use for extra drives?

2011-03-23 Thread Nomen Nescio
Hi ladies and gents, I've got a new Solaris 10 development box with ZFS
mirror root using 500G drives. I've got several extra 320G drives and I'm
wondering if there's any way I can use these to good advantage in this
box. I've got enough storage for my needs with the 500G pool. At this point
I would be looking for a way to speed things up if possible or add
redundancy if necessary but I understand I can't use these smaller drives to
stripe the root pool, so what would you suggest? Thanks.
