Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-16 Thread mike
On Wed, Oct 15, 2008 at 9:13 PM, Al Hopper <[EMAIL PROTECTED]> wrote: > The exception to the "rule" of multiple 12v output sections is PC > Power & Cooling - who claim that there is no technical advantage to > having multiple 12v outputs (and this "feature" is only a marketing > gimmick). But now

Re: [zfs-discuss] Looking for some hardware answers, maybe someone on this list could help

2008-10-16 Thread gm_sjo
> On Wed, Oct 15, 2008 at 9:13 PM, Al Hopper <[EMAIL PROTECTED]> wrote: >> The exception to the "rule" of multiple 12v output sections is PC >> Power & Cooling - who claim that there is no technical advantage to >> having multiple 12v outputs (and this "feature" is only a marketing >> gimmick). Bu

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Ross
Well obviously recovery scenarios need testing, but I still don't see it being that bad. My thinking on this is: 1. Loss of a server is very much the worst case scenario. Disk errors are much more likely, and with raid-z2 pools on the individual servers this should not pose a problem. I als

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Gray Carper
Howdy! Very valuable advice here (and from Bob, who made similar comments - thanks, Bob!). I think, then, we'll generally stick to 128K recordsizes. In the case of databases, we'll stray as appropriate, and we may also stray with the HPC compute cluster if we can demonstrate that it is worth i
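
For reference, the per-dataset recordsize tuning discussed here might look like the sketch below; the pool and dataset names are hypothetical, and 8K is only an example of matching a database block size:

    # keep the default 128K recordsize for general-purpose file serving
    zfs set recordsize=128K tank/export/home
    # for a database dataset, match the recordsize to the DB block size
    zfs set recordsize=8K tank/db/data
    zfs get recordsize tank/db/data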

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Ross
Miles makes a good point here, you really need to look at how this copes with various failure modes. Based on my experience, iSCSI is something that may cause you problems. When I tested this kind of setup last year I found that the entire pool hung for 3 minutes any time an iSCSI volume went

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Gray Carper
Oops - one thing I meant to mention: We only plan to cross-site replicate data for those folks who require it. The HPC data crunching would have no use for it, so that filesystem wouldn't be replicated. In reality, we only expect a select few users, with relatively small filesystems, to actually ne

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-16 Thread Tomas Ögren
On 15 October, 2008 - Richard Elling sent me these 4,3K bytes: > Tomas Ögren wrote: > > Hello. > > > > Executive summary: I want arc_data_limit (like arc_meta_limit, but for > > data) and set it to 0.5G or so. Is there any way to "simulate" it? > > > > We describe how to limit the size of the
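
There is no arc_data_limit tunable; the closest approximations raised in this thread are a per-filesystem cache policy plus an overall ARC cap. A rough sketch, with hypothetical names and an arbitrary 1 GB cap:

    # cache only metadata (not file data) in the ARC for this filesystem,
    # on builds where the primarycache property is available
    zfs set primarycache=metadata tank/export
    # to cap the whole ARC, add this line to /etc/system and reboot:
    #   set zfs:zfs_arc_max=0x40000000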

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-16 Thread Darren J Moffat
Tomas Ögren wrote: > On 15 October, 2008 - Richard Elling sent me these 4,3K bytes: > >> Tomas Ögren wrote: >>> Hello. >>> >>> Executive summary: I want arc_data_limit (like arc_meta_limit, but for >>> data) and set it to 0.5G or so. Is there any way to "simulate" it? >>> >> We describe how to

[zfs-discuss] Lost Disk Space

2008-10-16 Thread Ben Rockwood
I've been struggling to fully understand why disk space seems to vanish. I've dug through bits of code and reviewed all the mails on the subject that I can find, but I still don't have a proper understanding of what's going on. I did a test with a local zpool on snv_97... zfs list, zpool list,
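
When comparing those outputs, note that zpool list reports space at the pool level (for raidz pools this includes parity), while zfs list reports usable space after redundancy, reservations and snapshots are accounted for. A quick way to line the two views up, with a hypothetical pool name:

    zpool list tank                  # pool-level capacity, allocated and free
    zfs list -r tank                 # per-dataset usage after redundancy/reservations
    zfs list -r -t snapshot tank     # space pinned by snapshots
    zfs get used,referenced,available tank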

Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-16 Thread Casper . Dik
>Hello > > >Today I've suddenly noticed that symlinks (at least) are corrupted when >syncing ZFS from SPARC to x86 (zfs send | ssh | zfs recv). > >Example is: > >[EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services >lrwxrwxrwx 1 root root 15 Oct 13 14:35 >/data/zones/testfs/ro

Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-16 Thread Mike Futerko
Hi Just checked with snv_99 on x86 (VMware install) - same result :( Regards Mike [EMAIL PROTECTED] wrote: >> Hello >> >> >> Today I've suddenly noticed that symlinks (at least) are corrupted when >> syncing ZFS from SPARC to x86 (zfs send | ssh | zfs recv). >> >> Example is: >> >> [EMAIL PROTE

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-16 Thread Tomas Ögren
On 16 October, 2008 - Ross sent me these 1,1K bytes: > I might be misunderstanding here, but I don't see how you're going to > improve on "zfs set primarycache=metadata". > > You complain that ZFS throws away 96kb of data if you're only reading > 32kb at a time, but then also complain that you ar

[zfs-discuss] Improving zfs send performance

2008-10-16 Thread Scott Williamson
On Wed, Oct 15, 2008 at 9:37 PM, Brent Jones <[EMAIL PROTECTED]> wrote: > Scott, > > Can you tell us the configuration that you're using that is working for > you? > Were you using RaidZ, or RaidZ2? I'm wondering what the "sweetspot" is > to get a good compromise in vdevs and usable space/perform

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Ross
Ok, I'm not entirely sure this is the same problem, but it does sound fairly similar. Apologies for hijacking the thread if this does turn out to be something else. After following the advice here to get mbuffer working with zfs send / receive, I found I was only getting around 10MB/s throughp
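
For anyone reproducing this, the mbuffer-based transfer under test is typically wired up something like the sketch below; hostnames, the port and the buffer sizes are illustrative only:

    # on the receiving host: listen on a TCP port and feed zfs receive
    mbuffer -s 128k -m 512M -I 9090 | zfs receive -F tank/backup
    # on the sending host: stream the snapshot into mbuffer over the network
    zfs send tank/fs@snap | mbuffer -s 128k -m 512M -O recvhost:9090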

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Ross Smith
> Try to separate the two things: > (1) Try /dev/zero -> mbuffer --- network ---> mbuffer > /dev/null > That should give you wirespeed. I tried that already. It still gets just 10-11MB/s from this server. I can get zfs send / receive and mbuffer working at 30MB/s though from a couple of test
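
The raw-network test suggested above, with ZFS out of the path entirely, can be run roughly as follows (port, sizes and hostname are arbitrary):

    # receiver: accept the stream and throw it away
    mbuffer -s 128k -m 512M -I 9090 > /dev/null
    # sender: push ~1 GB of zeros across the wire to measure raw TCP throughput
    dd if=/dev/zero bs=128k count=8000 | mbuffer -s 128k -m 512M -O recvhost:9090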

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Ross Smith
Oh dear god. Sorry folks, it looks like the new hotmail really doesn't play well with the list. Trying again in plain text: > Try to separate the two things: > > (1) Try /dev/zero -> mbuffer --- network ---> mbuffer > /dev/null > That should give you wirespeed. I tried that already. It s

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-16 Thread Tomas Ögren
On 16 October, 2008 - Darren J Moffat sent me these 1,7K bytes: > Tomas Ögren wrote: > > On 15 October, 2008 - Richard Elling sent me these 4,3K bytes: > > > >> Tomas Ögren wrote: > >>> Hello. > >>> > >>> Executive summary: I want arc_data_limit (like arc_meta_limit, but for > >>> data) and set i

Re: [zfs-discuss] zfs cp hangs when the mirrors are removed ..

2008-10-16 Thread Richard Elling
Karthik Krishnamoorthy wrote: > We did try with this > > zpool set failmode=continue option > > and the wait option before running the cp command and pulling > out the mirrors, and in both cases there was a hang and I have a core > dump of the hang as well. > You have to wait for the
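
For context, the failmode property mentioned here is set per pool; a minimal sketch with a hypothetical pool name:

    # continue returns EIO on new writes instead of blocking;
    # the default, wait, blocks I/O until the devices return
    zpool set failmode=continue tank
    zpool get failmode tank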

[zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-16 Thread Mike Futerko
Hello Today I've suddenly noticed that symlinks (at least) are corrupted when syncing ZFS from SPARC to x86 (zfs send | ssh | zfs recv). Example is: [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services lrwxrwxrwx 1 root root 15 Oct 13 14:35 /data/zones/testfs/root/etc/servi
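
For anyone trying to reproduce this, the pipeline and a before/after check might look like the sketch below; host, pool and snapshot names are made up, and the path is the one from the report:

    # on the SPARC sender
    zfs snapshot data/zones/testfs@xfer
    zfs send data/zones/testfs@xfer | ssh x86host zfs receive -F data/zones/testfs
    # then compare the symlink on both machines
    ls -la /data/zones/testfs/root/etc/services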

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-16 Thread Ross
I might be misunderstanding here, but I don't see how you're going to improve on "zfs set primarycache=metadata". You complain that ZFS throws away 96kb of data if you're only reading 32kb at a time, but then also complain that you are IO/s bound and that this is restricting your maximum transf

[zfs-discuss] ZFS pool not imported on boot on Solaris Xen PV DomU

2008-10-16 Thread Francois Goudal
Hi, I am trying a setup with a Linux Xen Dom0 on which runs an OpenSolaris 2008.05 DomU. I have 8 hard disk partitions that I exported to the DomU (they are visible as c4d[1-8]p0) I have created a raidz2 pool on these virtual disks. Now, if I shutdown the system and I start it again, the pool is
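
One thing worth checking in this situation is whether the pool can be re-imported by hand once the DomU is up; a hedged sketch, assuming the pool is called tank:

    # list pools the system can find but has not imported
    zpool import
    # import it, pointing explicitly at the device directory if needed
    zpool import -d /dev/dsk tank
    zpool status tank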

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-16 Thread Richard Elling
Tomas Ögren wrote: > On 16 October, 2008 - Darren J Moffat sent me these 1,7K bytes: > > >> Tomas Ögren wrote: >> >>> On 15 October, 2008 - Richard Elling sent me these 4,3K bytes: >>> >>> Tomas Ögren wrote: > Hello. > > Executive summary: I want arc_da

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Carsten Aulbert
Hi Scott, Scott Williamson wrote: > You seem to be using dd for write testing. In my testing I noted that > there was a large difference in write speed between using dd to write > from /dev/zero and using other files. Writing from /dev/zero always > seemed to be fast, reaching the maximum of ~200M

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Carsten Aulbert
Hi Ross Ross wrote: > Now though I don't think it's network at all. The end result from that > thread is that we can't see any errors in the network setup, and using > nicstat and NFS I can show that the server is capable of 50-60MB/s over the > gigabit link. Nicstat also shows clearly that b

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Scott Williamson
Hi Carsten, You seem to be using dd for write testing. In my testing I noted that there was a large difference in write speed between using dd to write from /dev/zero and using other files. Writing from /dev/zero always seemed to be fast, reaching the maximum of ~200MB/s and using cp which would p
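
The difference is easy to reproduce: reading /dev/zero costs nothing (and the zeros compress away entirely if compression is on), so a fairer test writes real, pre-existing data. A rough sketch with made-up paths:

    # writes sourced from /dev/zero tend to look unrealistically fast
    ptime dd if=/dev/zero of=/tank/test/zeros.bin bs=128k count=8000
    # copying real files gives a more representative number
    ptime cp -r /export/realdata /tank/test/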

Re: [zfs-discuss] ZFS pool not imported on boot on Solaris Xen PV DomU

2008-10-16 Thread Richard Elling
Francois Goudal wrote: > Hi, > I am trying a setup with a Linux Xen Dom0 on which runs an OpenSolaris > 2008.05 DomU. > I have 8 hard disk partitions that I exported to the DomU (they are visible > as c4d[1-8]p0) > I have created a raidz2 pool on these virtual disks. > Now, if I shutdown the syst

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-16 Thread Ryan Arneson
Tano wrote: > I'm not sure if this is a problem with the iscsitarget or zfs. I'd greatly > appreciate it if it gets moved to the proper list. > > Well I'm just about out of ideas on what might be wrong.. > > Quick history: > > I installed OS 2008.05 when it was SNV_86 to try out ZFS with VMWare. F
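
For readers following along, a ZFS volume typically ends up behind the iSCSI target daemon on these builds in one of two ways; names and sizes below are illustrative only:

    # create a sparse zvol to back the LUN
    zfs create -s -V 100G tank/vmware-lun0
    # either let ZFS manage the target...
    zfs set shareiscsi=on tank/vmware-lun0
    # ...or create it explicitly with iscsitadm (use one approach, not both)
    iscsitadm create target -b /dev/zvol/rdsk/tank/vmware-lun0 vmware-lun0
    iscsitadm list target -v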

Re: [zfs-discuss] am I "screwed"?

2008-10-16 Thread Johan Hartzenberg
On Mon, Oct 13, 2008 at 10:25 PM, dick hoogendijk <[EMAIL PROTECTED]> wrote: > > > > We have to dig deeper with kmdb. But before we do that, tell me please > what is an easy way to transfer the messages from the failsafe login on > the problematic machine to i.e. this S10u5 server. All former scre

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Miles Nordin
> "r" == Ross <[EMAIL PROTECTED]> writes: r> 1. Loss of a server is very much the worst case scenario. r> Disk errors are much more likely, and with raid-z2 pools on r> the individual servers yeah, it kind of sucks that the slow resilvering speed enforces this two-tier scheme

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Marion Hakanson
[EMAIL PROTECTED] said: > It's interesting how the speed and optimisation of these maintenance > activities limit pool size. It's not just full scrubs. If the filesystem is > subject to corruption, you need a backup. If the filesystem takes two months > to back up / restore, then you need really

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Erast Benson
pNFS is NFS-centric of course, and it is not yet stable, is it? btw, what is the ETA for pNFS putback? On Thu, 2008-10-16 at 12:20 -0700, Marion Hakanson wrote: > [EMAIL PROTECTED] said: > > It's interesting how the speed and optimisation of these maintenance > > activities limit pool size. It'

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Nicolas Williams
On Thu, Oct 16, 2008 at 12:20:36PM -0700, Marion Hakanson wrote: > I'll chime in here with feeling uncomfortable with such a huge ZFS pool, > and also with my discomfort of the ZFS-over-ISCSI-on-ZFS approach. There > just seem to be too many moving parts depending on each other, any one of > which

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Marion Hakanson
[EMAIL PROTECTED] said: > In general, such tasks would be better served by T5220 (or the new T5440 :-) > and J4500s. This would change the data paths from: > client -- T5220 -- X4500 -- disks > to: > client -- T5440 -- disks > > With the J4500 you get the same storage density as th

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Miles Nordin
> "nw" == Nicolas Williams <[EMAIL PROTECTED]> writes: nw> But does it work well enough? It may be faster than NFS if You're talking about different things. Gray is using NFS period between the storage cluster and the compute cluster, no iSCSI. Gray's (``does it work well enough''): i

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Scott Williamson
So I am zfs sending ~450 datasets between thumpers running SOL10U5 via ssh, most are empty except maybe 10 that have a few GB of files. I see the following output on one that contained ~1GB of files in my send report: Output from zfs receive -v "received 1.07Gb stream in 30 seconds (36.4Mb/sec)"

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-16 Thread Tano
Thank you Ryan for your response, I have included all the information you requested in line to this document: I will be testing SNV_86 again to see whether the problem persists, maybe it's my hardware. I will confirm that soon enough. On Thu, October 16, 2008 10:31 am, Ryan Arneson wrote: > Tan

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-16 Thread Tano
Also I had read your blog post previously. I will be taking advantage of the cloning/snapshot section of your blog once I am successful writing to the Targets. Thanks again!

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Nicolas Williams
On Thu, Oct 16, 2008 at 04:30:28PM -0400, Miles Nordin wrote: > > "nw" == Nicolas Williams <[EMAIL PROTECTED]> writes: > > nw> But does it work well enough? It may be faster than NFS if > > You're talking about different things. Gray is using NFS period > between the storage cluster and

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Miles Nordin
> "nw" == Nicolas Williams <[EMAIL PROTECTED]> writes: > "mh" == Marion Hakanson <[EMAIL PROTECTED]> writes: nw> I was replying to Marion's [...] nw> ZFS-over-iSCSI could certainly perform better than NFS, better than what, ZFS-over-'mkfile'-files-on-NFS? No one was suggesting th

Re: [zfs-discuss] Enable compression on ZFS root

2008-10-16 Thread Vincent Fox
> No, the last arguments are not options. > Unfortunately, > the syntax doesn't provide a way to specify > compression > at the creation time. > It should, though. > Or perhaps > compression should be the default. Should I submit an RFE somewhere then?
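
Until creation-time support exists, compression can still be turned on right after install; it only affects blocks written from that point forward. A sketch, with the boot-environment name being a guess:

    zfs set compression=on rpool/ROOT/opensolaris
    # confirm, and watch the ratio change as new blocks land
    zfs get compression,compressratio rpool/ROOT/opensolaris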

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread David Magda
On Oct 16, 2008, at 15:20, Marion Hakanson wrote: > For the stated usage of the original poster, I think I would aim toward > turning each of the Thumpers into an NFS server, configure the head-node > as a pNFS/NFSv4.1 It's a shame that Lustre isn't available on Solaris yet either.

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-16 Thread Nigel Smith
I googled on some sub-strings from your ESX logs and found these threads on the VmWare forum which lists similar error messages, & suggests some actions to try on the ESX server: http://communities.vmware.com/message/828207 Also, see this thread: http://communities.vmware.com/thread/131923 Are

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Marion Hakanson
[EMAIL PROTECTED] said: > but Marion's is not really possible at all, and won't be for a while with > other groups' choice of storage-consumer platform, so it'd have to be > GlusterFS or some other goofy fringe FUSEy thing or not-very-general crude > in-house hack. Well, of course the magnitude o

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-16 Thread Ryan Arneson
Nigel Smith wrote: > I googled on some sub-strings from your ESX logs > and found these threads on the VmWare forum > which lists similar error messages, > & suggests some actions to try on the ESX server: > > http://communities.vmware.com/message/828207 > > Also, see this thread: > > http://commu

Re: [zfs-discuss] 200805 Grub problems

2008-10-16 Thread Mike Aldred
Ok, I managed to get my grub menu (and splashimage) back by following: http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB Initially, I just did it for the boot environment I wanted to use, but it didn't seem to work, so I also did it for the previous boot environment. I'm not sure w
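
For reference, the usual fix in this area is to reinstall the boot blocks for the root pool's device and check which boot environment the pool points at; a sketch only (not a restatement of the wiki page), with made-up disk and BE names:

    # reinstall GRUB stage1/stage2 onto the root pool's boot slice (x86)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
    # make sure the pool boots the intended boot environment
    zpool set bootfs=rpool/ROOT/opensolaris-1 rpool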

Re: [zfs-discuss] Enable compression on ZFS root

2008-10-16 Thread dick hoogendijk
Vincent Fox wrote: >> Or perhaps compression should be the default. No way please! Things taking even more memory should never be the default. An installation switch would be nice though. Freedom of choice ;-) -- Dick Hoogendijk -- PGP/GnuPG key: F86289CE ++ http://nagual.nl/ | SunOS 10u5 05/08

[zfs-discuss] How can i make my zpool as faulted.

2008-10-16 Thread yuvraj
Hi Friends, I have created my own zpool on Solaris 10 and also created a ZFS filesystem on it. Now I want to make that zpool faulted. If you are aware of the relevant commands, please reply. You may reply to me at [EMAIL PROTECTED]. Thanks in advance.

Re: [zfs-discuss] How can i make my zpool as faulted.

2008-10-16 Thread Sanjeev
Yuvraj, Can you please post the details of the zpool ? 'zpool status' should give you that. You could pull out one of the disks. Thanks and regards, Sanjeev. On Thu, Oct 16, 2008 at 11:22:43PM -0700, yuvraj wrote: > Hi Friends, > I have create my own Zpool on Solaris 10 & also