Re: [zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike Gerdts
Boot from the other root drive, mount up the "bad" one at /mnt. Then: # mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco <[EMAIL PROTECTED]> wrote: > My root drive is ufs. I have corrupted my zpool which is on a differen
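A minimal sketch of that workaround, assuming the damaged UFS root sits on a hypothetical slice c0t0d0s0 and the box has been booted from the alternate root drive. Moving zpool.cache aside stops ZFS from trying to open the corrupt pool at boot; the pool can be examined later, by hand, with zpool import:

# mount /dev/dsk/c0t0d0s0 /mnt
# mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad
# umount /mnt
# reboot
(then, from the cleanly booted system)
# zpool import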

[zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike DeMarco
My root drive is UFS. I have corrupted my zpool, which is on a different drive than the root drive. My system panicked and now it core dumps when it boots up and hits ZFS start. I have an alternate root drive that I can boot the system with, but how can I disable ZFS from starting when I boot from that other drive?

Re: [zfs-discuss] Performance bake off vxfs/ufs/zfs need some help

2008-11-23 Thread Mike Gerdts
or clustered storage as well. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] zfs w/ SATA port multipliers?

2008-11-20 Thread mike
I think you'll need to get device support first. Last I checked there was still no device support for PMPs, sadly. On Thu, Nov 20, 2008 at 4:52 PM, Krenz von Leiberman <[EMAIL PROTECTED]> wrote: > Does ZFS support pooled, mirrored, and raidz storage with > SATA-port-multipliers (http://www.serial

[zfs-discuss] ZFS performance

2008-11-16 Thread Mike Futerko
at is wrong etc? Thanks in advance for any advice, Mike

Re: [zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hi > [Default] On Sat, 15 Nov 2008 11:37:50 +0200, Mike Futerko > <[EMAIL PROTECTED]> wrote: > >> Hello >> >> Is there any way to list all snapshots of particular file system >> without listing the snapshots of its children file systems? > > fs
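One way to get such a listing (a sketch; tank/home stands in for the actual dataset, and the -d depth option only exists on builds that have it):

# zfs list -r -t snapshot tank/home | grep '^tank/home@'
# zfs list -d 1 -t snapshot tank/home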

[zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hello. Is there any way to list all snapshots of a particular file system without listing the snapshots of its child file systems? Thanks, Mike

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-14 Thread mike
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper <[EMAIL PROTECTED]> wrote: >> No clue. My friend also upgraded to b101. Said it was working awesome >> - improved network performance, etc. Then he said after a few days, >> he's decided to downgrade too - too many other weird side effects. > > Any more d

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-14 Thread mike
erent Solaris versions: http://blogs.sun.com/weber/entry/solaris_opensolaris_nevada_indiana_sxde On Fri, Nov 14, 2008 at 2:15 AM, Vincent Boisard <[EMAIL PROTECTED]> wrote: > Do you have an idea if your problem is due to live upgrade or b101 itself ? > > Vincent > > On Thu, Nov

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-13 Thread mike
Depends on your hardware. I've been stable for the most part on b98. Live upgrading to b101 messed up my networking, slowing it to nearly a standstill, and the problem stuck around even after I nuked the upgrade. I had to reinstall b98. On Nov 13, 2008, at 10:01 AM, "Vincent Boisard" <[EMAIL PROTECTED]> wrote: Thanks for

Re: [zfs-discuss] 10u6 any patches yet?

2008-11-12 Thread Mike Watkins
There will probably be a 10_Recommended u6 patch bundle sometime in December... For now, to get to u6 (and ZFS) you must do a Live Upgrade (i.e., u5 to u6). Just FYI. On Wed, Nov 12, 2008 at 12:48 PM, Johan Hartzenberg <[EMAIL PROTECTED]> wrote: > > > On Wed, Nov 12, 2008 at 8:15 PM, Vincent Fox <[EMAIL PROTECTED]> w
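A rough sketch of that Live Upgrade path on a UFS root, assuming a spare slice c0t0d0s4 for the new boot environment and a u6 image mounted at /mnt (both placeholders; the lucreate options depend on the actual disk layout):

# lucreate -n s10u6 -m /:/dev/dsk/c0t0d0s4:ufs
# luupgrade -u -n s10u6 -s /mnt
# luactivate s10u6
# init 6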

Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-10 Thread Mike Futerko
svcadm disable cde-login I'd also recommend disabling some other unnecessary services, e.g.: svcs | egrep 'webco|wbem|avahi|print|font|cde|sendm|name-service-cache|opengl' | awk '{print $3}' | xargs -n1 svcadm disable Thi

[zfs-discuss] Some Samba questions

2008-11-02 Thread mike
s chmod 0755 $foo fixes it - the ACL inheriting doesn't seem to be remembered or I'm not understanding it properly... The user 'mike' should have -all- the privileges, period, no matter what the client machine is etc. I am mounting it -as- mike from both clients...
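Two ZFS properties worth checking for this kind of inheritance surprise are aclinherit and aclmode, which control how inherited ACEs are created and what a chmod from a client does to them (a sketch; tank/share is a placeholder, and the thread itself does not confirm this is the fix):

# zfs get aclinherit,aclmode tank/share
# zfs set aclinherit=passthrough tank/share
# zfs set aclmode=passthrough tank/share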

Re: [zfs-discuss] zfs zpool recommendation

2008-10-29 Thread Mike
By "better" I meant the best practice for a server running the NetBackup application. I am not seeing how using raidz would be a performance hit; usually stripes perform faster than mirrors.

[zfs-discuss] zfs zpool recommendation

2008-10-29 Thread Mike
Hi all, I have been asked to build a new server and would like to get some opinions on how to set up a ZFS pool for the application running on the server. The server will be exclusively for running the NetBackup application. Now, which would be better: setting up a raidz pool with 6x 146 GB drives or

Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-29 Thread Mike Futerko
and try moving a pool between them to see what > happens... It would be interesting to know how it works if you move the whole zpool rather than just syncing with send/recv. But I think all will be fine there, as it seems the problem is in the send/recv path on the file system itself across different architectures. Th

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-26 Thread mike
On Sun, Oct 26, 2008 at 12:47 AM, Peter Bridge <[EMAIL PROTECTED]> wrote: > Well for a home NAS I'm looking at noise as a big factor. Also for a 24x7 > box, power consumption, that's why the northbridge is putting me off slightly. That's why I built a full-sized tower using a Lian-Li case with

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-23 Thread mike
I'm running ZFS on nevada (b94 and b98) on two machines at home, both with 4 gig ram. one has a quad core intel core2 w/ ECC ram, the other has normal RAM and an athlon 64 dual-core low power. both seem to be working great. On Thu, Oct 23, 2008 at 2:04 PM, Peter Bridge <[EMAIL PROTECTED]> wrote: >

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-18 Thread Mike La Spina
ember it now somewhere in its definitions. You need to remove the second datastore from VMware and delete the target definition and the ZFS backing store. Once you recreate the backing store and target you should have a new GUID and IQN, which should cure the issue. Regards, Mike
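If the target is backed by a zvol shared via the shareiscsi property, the recreate step might look like this (a sketch; tank/vscsi and the 200G size are placeholders, and a target defined directly with iscsitadm would instead be deleted and recreated with that tool):

# zfs set shareiscsi=off tank/vscsi
# zfs destroy tank/vscsi
# zfs create -V 200G tank/vscsi
# zfs set shareiscsi=on tank/vscsi
# iscsitadm list target -v
(verify the new GUID and IQN before re-adding the datastore in VMware)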

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-17 Thread Mike La Spina
Hello Tano, The issue here is not the target or VMware but a missing GUID on the target. Observe the target SMF properties using iscsitadm list target -v. You have iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi Connections: 0 ACL list: TPGT list: TPG

Re: [zfs-discuss] 200805 Grub problems

2008-10-16 Thread Mike Aldred
Ok, I managed to get my GRUB menu (and splashimage) back by following: http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB Initially, I just did it for the boot environment I wanted to use, but it didn't seem to work, so I also did it for the previous boot environment. I'm not sure w
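On x86 the core of this kind of repair is reinstalling the GRUB stages onto the slice that holds the root pool (a sketch; c0d0s0 is a placeholder device, and the menu.lst and bootfs details are covered by the wiki page above):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0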

Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-16 Thread Mike Futerko
Hi Just checked with snv_99 on x86 (VMware install) - same result :( Regards Mike [EMAIL PROTECTED] wrote: >> Hello >> >> >> Today I've suddenly noticed that symlinks (at least) are corrupted when >> sync ZFS from SPARC to x86 (zfs send | ssh | zfs re

[zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-16 Thread Mike Futerko
ce yet to test on latest OpenSolaris. Any suggestions? Thanks Mike

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-16 Thread mike
On Wed, Oct 15, 2008 at 9:13 PM, Al Hopper <[EMAIL PROTECTED]> wrote: > The exception to the "rule" of multiple 12v output sections is PC > Power & Cooling - who claim that there is no technical advantage to > having multiple 12v outputs (and this "feature" is only a marketing > gimmick). But now

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-15 Thread mike
Yeah, for this plan I needed a board with 8 onboard SATA ports or another 8-port SATA controller, so I opted just to get two of the PCI-X ones. The Supermicro 5-in-3's don't have a fan alarm, so you could remove the fan or find a quieter one. I think most of them have quite noisy fans (the main goal for this besides l

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-15 Thread mike
I'm >> going to pick up a couple of Supermicro's 5-in-3 enclosures for mine: >> >> http://www.newegg.com/Product/Product.aspx?Item=N82E16817121405 >> >> >> Scott >> >> On Wed, Oct 15, 2008 at 12:26 AM, mike <[EMAIL PROTECTED]> wrote: >

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-15 Thread mike
l of it thanks to Newegg. I will need to pick up some 4-in-3 enclosures and a better CPU heatsink/fan - this is supposed to be quiet but it has an annoying hum. Weird. Anyway, so far so good. Hopefully the power supply can handle all 16 disks too... On Thu, Oct 9, 2008 at 12:46 PM, mike &

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-13 Thread Mike Gerdts
On Thu, Oct 9, 2008 at 10:33 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote: > On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote: >> On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote: >>> Nevada isn't production co

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-10 Thread Mike Gerdts
ld be used to deal with cases that prevent your normal (>4 GB) boot environment from booting. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
On Thu, Oct 9, 2008 at 10:18 AM, Mike Gerdts <[EMAIL PROTECTED]> wrote: > On Thu, Oct 9, 2008 at 10:10 AM, Greg Shaw <[EMAIL PROTECTED]> wrote: >> Nevada isn't production code. For real ZFS testing, you must use a >> production release, currently Solaris 10 (updat

Re: [zfs-discuss] 200805 Grub problems

2008-10-09 Thread Mike Aldred
I seem to be having the same problem. Has anyone found out what the cause is, and how to fix it?

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-09 Thread mike
; supports ECC ram. Coincidentally, it's also the chipset used in the > Sun Ultra 24 workstation > (http://www.sun.com/desktop/workstation/ultra24/index.xml). > > > On Mon, Oct 6, 2008 at 1:41 PM, mike <[EMAIL PROTECTED]> wrote: >> I posted a thread here... >

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
I pushed for and got a fix. However, that pool was still lost. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
- core developers of dtrace were quite interested in the kernel crash dump. http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September/051109.html Panic during ON build. Pool was lost, no response from list. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Mike Gerdts
ast year I've lost more ZFS file systems than I have any other type of file system in the past 5 years. With other file systems I can almost always get some data back. With ZFS I can't get any back. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-07 Thread mike
patible, and have to return it online... On Tue, Oct 7, 2008 at 1:33 AM, gm_sjo <[EMAIL PROTECTED]> wrote: > 2008/10/6 mike <[EMAIL PROTECTED]>: >> I am trying to finish building a system and I kind of need to pick >> working NIC and onboard SATA chipsets (video is not a

[zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-06 Thread mike
I posted a thread here... http://forums.opensolaris.com/thread.jspa?threadID=596 I am trying to finish building a system and I kind of need to pick a working NIC and onboard SATA chipset (video is not a big deal - I can get a silent PCIe card for that; I already know one which works great). I need

Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)

2008-09-29 Thread Mike Gerdts
9 0 0 0 0 0 0 0 0 0 543 972 518 0 0 100 From a free memory standpoint, the current state of the system is very different than the typical state since boot. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] web interface not showing up

2008-09-24 Thread mike
On Wed, Sep 24, 2008 at 9:37 PM, James Andrewartha <[EMAIL PROTECTED]> wrote: > Can you post the java error to the list? Do you have gzip compressed or > aclinherit properties on your filesystems, hitting bug 6715550? > http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048457.html > http

Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Mike Gerdts
200807/ See "Flash Storage Memory" by Adam Leventhal, page 47. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] web interface not showing up

2008-09-22 Thread mike
On Sun, Sep 21, 2008 at 11:49 PM, Volker A. Brandt <[EMAIL PROTECTED]> wrote: > Hmmm... I run Solaris 10/sparc U4. My /usr/java points to > jdk/jdk1.5.0_16. I am using Firefox 2.0.0.16. Works For Me(TM) ;-) > Sorry, can't help you any further. Maybe a question for desktop-discuss? it's a jav

Re: [zfs-discuss] web interface not showing up

2008-09-21 Thread mike
On Sun, Sep 21, 2008 at 1:31 PM, Volker A. Brandt <[EMAIL PROTECTED]> wrote: > Yes, you need to set the corresponding SMF property. Check > for the value of "options/tcp_listen": > > # svcprop -p options/tcp_listen webconsole > true > > If it says "false", you need to set it to "true". Here's
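For completeness, the property is normally flipped with svccfg and the service then refreshed and restarted (a sketch, using the property and service names shown above):

# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm refresh svc:/system/webconsole
# svcadm restart svc:/system/webconsole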

[zfs-discuss] Panic + corrupted pool in snv_98

2008-09-21 Thread Mike Gerdts
ive on another system, but can be imported using the '-f' flag. see: http://www.sun.com/msg/ZFS-8000-5E config: export FAULTED corrupted data c6t0d0 UNAVAIL corrupted data -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] web interface not showing up

2008-09-19 Thread mike
On Fri, Sep 19, 2008 at 10:16 AM, Volker A. Brandt <[EMAIL PROTECTED]> wrote: > You need to check if the SMF service is running: > # svcadm -v enable webconsole > svc:/system/webconsole:console enabled. > # svcs webconsole > STATE STIMEFMRI > online 19:07:24 svc:/system/

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-16 Thread mike
On Tue, Sep 16, 2008 at 2:28 PM, Peter Tribble <[EMAIL PROTECTED]> wrote: > For what it's worth, we put all the disks on our thumpers into a single pool - > mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS and > would happily go much bigger. so you have 9 drive raidz1 (8 d

Re: [zfs-discuss] Snapshots during a scrub

2008-09-05 Thread mike
Okay, well I am running snv_94 already. So I guess I'm good :) On Fri, Sep 5, 2008 at 10:23 AM, Mark Shellenbaum <[EMAIL PROTECTED]> wrote: > mike wrote: >> >> I have a weekly scrub setup, and I've seen at least once now where it >> says "don't

[zfs-discuss] Snapshots during a scrub

2008-09-05 Thread mike
I have a weekly scrub setup, and I've seen at least once now where it says "don't snapshot while scrubbing" Is this a data integrity issue, or will it make one or both of the processes take longer? Thanks

Re: [zfs-discuss] raidz2 group size

2008-09-03 Thread mike
Yeah, I'm looking at using 10 disks or 16 disks (depending on which chassis I get) - and I would like reasonable redundancy (not HA-crazy redundancy where I can suffer tons of failures, I can power this down and replace disks, it's a home server) and maximize the amount of usable space. Putting up

Re: [zfs-discuss] ZFS, Kernel Panic on import

2008-08-29 Thread Mike Aldred
Ok, I've managed to get around the kernel panic. [EMAIL PROTECTED]:~/Download$ pfexec mdb -kw Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp scsi_vhci zfs sd ip hook neti sctp arp usba uhci s1394 fctl md lofs random sppp ipc ptm fcip fcp cpc crypto logindmux ii nsctl sdb
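The preview cuts off before the actual mdb commands. A commonly cited workaround from that era (a sketch only, not necessarily what was done here; it loosens ZFS's assertion and error handling, so it is strictly a last resort on a pool being rescued) sets the aok and zfs_recover kernel variables before retrying the import:

# mdb -kw
> aok/W 1
> zfs_recover/W 1
> $q
# zpool import -f tank
(tank is a placeholder pool name)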

[zfs-discuss] ZFS, Kernel Panic on import

2008-08-29 Thread Mike Aldred
G'day, I've got an OpenSolaris server on n95 that I use for media serving. It uses a DQ35JOE motherboard, dual core, and I have my rpool mirrored on two IDE 40GB drives, and my media mirrored on 2 x 500GB SATA drives. I've got a few CIFS shares on the media drive, and I'm using MediaTomb to s

Re: [zfs-discuss] ZFS boot and LU

2008-08-26 Thread mike
On 8/26/08, Cyril Plisko <[EMAIL PROTECTED]> wrote: > that's very interesting ! Can you share more info on what these > bugs/issues are ? Since it is LU related I guess we'll never see these > via opensolaris.org, right ? So I would appreciate if community will > be updated when these fixes will

Re: [zfs-discuss] ZFS deduplication

2008-08-26 Thread Mike Gerdts
ermail/zfs-code/2007-March/000448.html -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
yeah i am on gigabit, but the clients are things like an xbox which is only 10/100, etc. right now the setup works fine. i'm thinking the new CIFS implementation should make it run even cleaner too. On 8/22/08, Ross Smith <[EMAIL PROTECTED]> wrote: > Yup, you got it, and an 8 disk raid-z2 array sh

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
On 8/22/08, Ross <[EMAIL PROTECTED]> wrote: > Yes, that looks pretty good mike. There are a few limitations to that as you > add the 2nd raidz2 set, but nothing major. When you add the extra disks, > your original data will still be stored on the first set of disks, if you

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote: > You only need 1 disk to use ZFS root. You won't have any redundancy, but as > Darren said in another email, you can convert single device vDevs to > Mirror'd vDevs later without any hassle. I'd just get some 80 gig disks and mirror them. Migh
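Converting a single-disk vdev into a mirror later really is a one-liner with zpool attach (a sketch; rpool and the device names are placeholders), and zpool status then shows the resilver. For a boot disk the new half of the mirror also needs boot blocks installed (installgrub on x86, installboot on SPARC):

# zpool attach rpool c1t0d0s0 c1t1d0s0
# zpool status rpool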

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
On 8/22/08, Rich Teer <[EMAIL PROTECTED]> wrote: > ZFS boot works fine; it only recently integrated into Nevada, but it > has been in use for quite some time now. Yeah I got the install option when I installed snv_94 but wound up not having enough disks to use it. > Even better: just use ZFS roo

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
It looks like this will be the way I do it: initially: zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7 when I need more space and buy 8 more disks: zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15 Correct? > Enable compression, and set up

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
On 8/22/08, Kyle McDonald <[EMAIL PROTECTED]> wrote: > Antoher note, as someone said earlier, if you can go to 16 drives, you > should consider 2 8disk RAIDZ2 vDevs, over 2 7disk RAIDZ vDevs with a spare, > or (I would think) even a 14disk RAIDZ2 vDev with a spare. > > If you can (now or later) ge

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
On 8/22/08, Darren J Moffat <[EMAIL PROTECTED]> wrote: > I could if I wanted to add another vdev to this pool but it doesn't > have to be raidz it could be raidz2 or mirror. > If they did they are wrong, hope the above clarifies. I get it now. If you add more disks they have to be in their own m

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
> No that isn't correct. > One or move vdevs create a pool. Each vdev in a pool can be a > different type, eg a mix or mirror, raidz, raidz2. > There is no such thing as zdev. Sorry :) Okay, so you can create a zpool from multiple vdevs. But you cannot add more vdevs to a zpool once the zpool

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
Oh sorry - for boot I don't care if it's redundant or anything. Worst case the drive fails, I replace it and reinstall, and just re-mount the ZFS stuff. If I have the space in the case and the ports I could get a pair of 80 gig drives or something and mirror them using SVM (which was recommende

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
I hear everyone's concerns about multiple parity disks. Are there any benchmarks or numbers showing the performance difference using a 15 disk raidz2 zpool? I am fine sacrificing some performance but obviously don't want to make the machine crawl. It sounds like I could go with 15 disks evenly

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
likewise i could also do something like zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 disk6 disk7 \ raidz1 disk8 disk9 disk10 disk11 disk12 disk13 disk14 disk15 and i'd have a 7 disk raidz1 and an 8 disk raidz1... and i'd have 15 disks still broken up into not-too-horrible pool sizes an

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread mike
see, originally when i read about zfs it said it could expand to petabytes or something. but really, that's not as a single "filesystem" ? that could only be accomplished through combinations of pools? i don't really want to have to even think about managing two separate "partitions" - i'd like

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-21 Thread mike
i could probably do 16 disks and maybe do a raidz on both for 14 disks usable combined... that's probably as redundant as i'd need, i think. can you combine two zpools together? or will i have two separate "partitions" (i.e. i'll have "tank" for example and "tank2" instead of making one single lar

[zfs-discuss] Best layout for 15 disks?

2008-08-21 Thread mike
Question #1: I've seen that 5-6 disk zpools are the most recommended setup. In traditional RAID terms, I would like to do RAID5 + hot spare (13 disks usable) out of the 15 disks (like raidz2 I suppose). What would make the most sense to set up 15 disks with ~13 disks of usable space? This is for a h

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Mike Gerdts
53-02 this week. In a separate thread last week (?) Enda said that it should be out within a couple weeks. Mike -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread mike
i must pose the question then: is ECC required? i am running non-ECC RAM right now on my machine (it's AMD and it would support ECC, i'd just have to buy it online and wait for it) but will it have any negative effects on ZFS integrity/checksumming if ECC RAM is not used? obviously it's nice t

[zfs-discuss] Kernel panic on ZFS snapshot destroy

2008-07-31 Thread Mike Futerko
I've attached a screenshot if it may be useful. Any help would be appreciated... Thanks, Mike

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-30 Thread mike
Yeah but 2.5" aren't that big yet. What, they max out ~ 320 gig right? I want 1tb+ disks :)

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
exactly. that's why i'm trying to get an account on that site (looks like open registration for the forums is disabled) so i can shoot the breeze and talk about all this stuff too. zfs would be perfect for this as most these guys are trying to find hardware raid cards that will fit, etc... wit

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
that mashie link might be exactly what i wanted... that mini-itx board w/ 6 SATA. use CF maybe for boot (might need IDE to CF converter) - 5 drive holder (hotswap as a bonus) - you get 4 gig ram, core2-based chip (64-bit), onboard graphics, 5 SATA2 drives... that is cool. however. would need to

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
I'd say some good places to look are silentpcreview.com and mini-itx.com. I found this tasty morsel on an ad at mini-itx... http://www.american-portwell.com/product.php?productid=16133 6x onboard SATA. 4 gig support. core2duo support. which means 64 bit = yes, 4 gig = yes, 6x sata is nice. now

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-29 Thread mike
I didn't use any. That would be my -ideal- setup :) I waited and waited, and still no eSATA/Port Multiplier support out there, or at least nothing stable enough. So I scrapped it.

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread mike
Holy crap! That sounds cool. Firmware-based-VPN connectivity! At Intel we're getting better too I suppose. Anyway... I don't know where you're at in the company but you should rattle some cages about my idea :)

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread mike
I would love to go back to using shuttles. Actually, my ideal setup would be: Shuttle XPC w/ 2x PCI-e x8 or x16 lanes 2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports) then I could chain up to 8 enclosures off a single small, nearly silent host machine. 8 enclosures x 5 drives = 40

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-28 Thread mike
I have built mine the last few days, and it seems to be running fine right now. Originally I wanted Solaris 10, but switched to using SXCE (nevada build 94, the latest right now) because I wanted the new CIFS support and some additional ZFS features. Here's my setup. These were my goals: - Quie

Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-26 Thread Ellis, Mike
Bob Says: "But a better solution is to assign a processor set to run only the application -- a good idea any time you need a predictable response." Bob's suggestion above along with "no interrupts on that pset", and a fixed scheduling class for the application/processes in question could

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread mike
yeah, i have not been pleased with the quality of the HCL. there's plenty of hardware discussed on the forums and if you search the bugs db that has been confirmed and/or fixed to work on various builds of osol and solaris 10. i wound up buying an AMD based machine (i wanted Intel) with 6 onboa

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread mike
Don't take my opinion. I am a newbie to everything solaris. From what it looks like in the HCL, some of the VIA stuff is supported. Like I said I tried some nexenta CD... They don't make 64-bit, first off, and I am not sure if any of their mini-itx boards support more than 2 gig ram. ZFS love

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-25 Thread mike
i have that chassis too. did solaris install for you? what version/build? i think i tried a nexenta build and it crapped out on install. i also only have 2 gigs of ram in it and a CF card to boot off of... 4 drives is too small for what i want, 5 drives would be my minimum. i was hoping this wo

Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-07-24 Thread Mike Gerdts
dynamic data that needs to survive a reboot, it would seem to make a lot of sense to enable write cache on such disks. This assumes that ZFS does the flush no matter whether it thinks the write cache is enabled or not. Am I wrong about this somehow? -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-24 Thread mike
Did you have success? What version of Solaris? OpenSolaris? etc? I'd want to use this card with the latest Solaris 10 (update 5?) The connector on the adapter itself is "IPASS" and the Supermicro part number for cables from the adapter to standard SATA drives is CBL-0118L-02 "IPASS to 4 SATA C

Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Mike Gerdts
Prior to build , bug 6668666 causes the following platform-dependent steps to also be needed: On sparc systems: # installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0 On x86 systems: # ... -- Mike Gerdts http://mgerdts.blogspot.com/ _

Re: [zfs-discuss] [zfs-code] Peak every 4-5 second

2008-07-23 Thread Ellis, Mike
Would adding a dedicated ZIL/SLOG (what is the difference between those 2 exactly? Is there one?) help meet your requirement? The idea would be to use some sort of relatively large SSD to absorb the initial write hit. After hours when things quiet down (or perhaps during

Re: [zfs-discuss] ZFS deduplication

2008-07-23 Thread Mike Gerdts
nd" to be a stable format and get integration with enterprise backup software that can perform restores in a way that maintains space efficiency. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Mike Gerdts
plication under a wide variety of > circumstances. The key thing here is that distributed applications will not play nicely. In my best use case, Solaris zones and LDoms are the "application". I don't expect or want Solaris to form some sort of P2P storage system across my data

[zfs-discuss] Remove log device?

2008-07-13 Thread Mike Gerdts
It seems as though there is no way to remove a log device once it is added. Is this correct? Assuming this is correct, is there any reason that adding the ability to remove the log device would be particularly tricky? -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] Largest (in number of files) ZFS instance tested

2008-07-11 Thread Mike Gerdts
her than operations per second. This was with several (<100) processes contending for reading directory contents, file creations, and file deletions. This is where I found the script that thought that "touch $dir/test.$$" (followed by rm) was the right way to check to see if a

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Mike Gerdts
omplexity that will turn into a long-term management problem as sysadmins split or merge pools, change pool naming schemes, reorganize dataset hierarchies, etc. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Mike Gerdts
On Thu, Jul 10, 2008 at 11:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote: > Mike Gerdts wrote: >> >> On Thu, Jul 10, 2008 at 5:42 AM, Darren J Moffat <[EMAIL PROTECTED]> >> wrote: >>> >>> Thoughts ? Is this useful for anyone else ? My above e

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Mike Gerdts
s like the following should work unambiguously: # zfs snapshot ./[EMAIL PROTECTED] # zfs snapshot `pwd`/[EMAIL PROTECTED] -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] X4540

2008-07-09 Thread Mike Gerdts
-Server.html 2. http://www.sun.com/servers/x64/x4540/specs.xml -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] X4540

2008-07-09 Thread Mike Gerdts
r that I connect to 10 gigabit Ethernet or the SAN (FC tape drives). -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Mike Gerdts
as a result all of the deduped copies would be sequential as well. What's more - it is quite likely to be in the ARC or L2ARC. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Mike Gerdts
tion of more storage, because efficiencies of the storage devices make it the same cost as less storage, then perhaps allocating more per student is feasible. Or maybe tuition could drop by a few bucks. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Mike Gerdts
r my operations. Yes, teaching the user the > "right thing" is useful, but that user isn't there to know how to "manage > data" for my benefit. They're there to learn how to be filmmakers, > journalists, speech pathologists, etc. Well said. -- Mike Gerdts h

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Mike Gerdts
On Mon, Jul 7, 2008 at 9:24 PM, Bob Friesenhahn <[EMAIL PROTECTED]> wrote: > On Mon, 7 Jul 2008, Mike Gerdts wrote: >> There tend to be organizational walls between those that manage >> storage and those that consume it. As storage is distributed across >> a netw

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Mike Gerdts
the patches remains per-server used space. Additionally the other space used by the installed patches remains used. Deduplication can reclaim the majority of the space. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Changing GUID

2008-07-07 Thread Mike Gerdts
rage - each server is a dataless FRU. If Vendor X supports deduplication of live data (hint) I only need about 25% of space that I would need if I weren't using clones + deduplication. -- Mike Gerdts http://mgerdts.blogspot.com/
