Re: [zfs-discuss] ACLs and Windows

2011-08-14 Thread Sigbjorn Lie

Hi,

By setting both aclmode and aclinherit to passthrough, you can use ACLs 
for NFSv4 and CIFS sharing on the same ZFS dataset without the ACLs 
being destroyed by NFSv4 clients.
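
For example, something along these lines should do it ("tank/share" is a 
placeholder dataset name here; adjust it to your own):

   # zfs set aclmode=passthrough tank/share
   # zfs set aclinherit=passthrough tank/share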


If you would like to map a CIFS (Windows) account to a Unix account, 
see the idmap command. "# idmap add winuser:<windowsuser> 
unixuser:<unixuser>" should do the trick for you.


If you add an ID mapping from a Windows user to root (# idmap add 
winuser:administrator unixuser:root), you'll have root access when 
configuring ACLs from Windows while connected to the CIFS share with the 
administrator account. You can use this to configure ACLs for specific 
users. Just make sure you have completed all the ID mapping before 
setting ACLs, and keep the administrator account in the ACL.
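
A minimal sequence might look like this (the account names are 
placeholders; verify the result with idmap list):

   # idmap add winuser:someuser unixuser:someuser
   # idmap add winuser:administrator unixuser:root
   # idmap list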




Regards,
Siggi



On 08/12/2011 10:52 AM, Lanky Doodle wrote:

Hiya,

My S11E server needs to serve Windows clients. I read a while ago (last 
year!) about 'fudging' it so that Everyone has read/write access.

Is it possible for me to lock this down to users? I only have a single user on 
my Windows clients, and in some cases (HTPC) this user is logged on automatically.

So could I map a Windows user to a Solaris user (with matching credentials) and 
give only that user (owner) access to my ZFS filesystems?

Thanks


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS acl inherit problem

2011-07-18 Thread Sigbjorn Lie

Hi,

OK, I've done this with success on NexentaStor 3.0.5, using zpool 
version 26. I know the aclmode property was removed at some point after 
this, but then put back in later. (Search the list for details.)


I've got the ACLs below set on the top-level directory. I put my users 
requiring access in the group_with_write_access group. I found that the 
NFS anonymous account requires the "read attributes" access for Linux 
clients to be able to mount the folder. This folder is also shared with 
Kerberos (sec=krb5).



A:fdg:group_with_write_acc...@my.nfs4.id:rwadxtTnNcy
A::nfsanonym...@my.nfs4.id:ty
A:fd:r...@my.nfs4.id:rwaDdxtTnNcCoy
A:fdni:r...@my.nfs4.id:rwaDdxTNCoy
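
For reference, from a Linux NFSv4 client an equivalent ACE can be added 
with nfs4_setfacl; a rough sketch only (the group name and mount point 
are placeholders):

   $ nfs4_setfacl -a "A:fdg:group_with_write_access@my.nfs4.id:rwadxtTnNcy" /mnt/share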


Rgds,
Siggi




On 07/17/2011 03:37 PM, anikin anton wrote:

Hi!
But in version 28 of ZFS there is no aclmode option at all (I use oi_148).
I also tried setting these options to passthrough in oi_151, which has 
aclmode, but that is not working for me.
From Windows (CIFS) - no problem, all ACLs are inherited correctly.
But from Linux (NFS) - ACL user names are inherited correctly, but 
permissions are not inherited as I wish.
Maybe I need to set other properties, or permissions?
Like this:
$ /bin/ls -lV /rpool/test
total 6
drwxrwsrwx+  2 2147483650 staff  3 Jul 17 17:33 cifs_folder
 user:2147483650:rwxpdDaARWcCos:fdI:allow
  group@:rwxpdDaARWcCos:fdI:allow
  owner@:rwxpdDaARWcCos:fdI:allow
   everyone@:rwxpdDaARWcCos:fdI:allow
drwxrwxr-x+  2 500  staff  3 Jul 17 17:36 nfs_folder
 user:2147483650:rwxpdDaARWcCos:fdI:allow
  owner@:rwxp--aARWcCos:---:allow
  group@:rwxp--a-R-c--s:---:allow
   everyone@:r-x---a-R-c--s:---:allow

Thanks!



Hi,

Set the ZFS properties aclmode *and* aclinherit to passthrough for the
dataset you're writing to.

This works for me with both Windows clients using CIFS and Linux
clients using NFS.



Regards,
Siggi



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS acl inherit problem

2011-07-16 Thread Sigbjorn Lie

Hi,

Set the ZFS properties aclmode *and* aclinherit to passthrough for the 
dataset you're writing to.


This works for me with both Windows clients using CIFS and Linux 
clients using NFS.
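
To double-check what a dataset currently has, something like this works 
(the dataset name is only an example, and the output is approximate):

   # zfs get aclmode,aclinherit tank/share
   NAME        PROPERTY    VALUE        SOURCE
   tank/share  aclmode     passthrough  local
   tank/share  aclinherit  passthrough  local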




Regards,
Siggi





On 06/01/2011 08:51 AM, lance wilson wrote:

The problem is that NFS clients that connect to my Solaris 11 Express server 
are not inheriting the ACLs that are set for the share. They create files that 
don't have any ACL assigned to them, just the normal Unix file permissions. Can 
someone please provide some additional things to test so that I can get this 
sorted out.

This is the output of a normal ls -al

drwxrwxrwx+ 5 root root 11 2011-05-31 11:14 acltest

Looking at the ACLs that are assigned to the share with ls -vd

drwxrwxrwx+ 5 root root 11 May 31 11:14 /smallstore/acltest
0:user:root:list_directory/read_data/add_file/write_data
/add_subdirectory/append_data/read_xattr/write_xattr/execute
/delete_child/read_attributes/write_attributes/delete/read_acl
/write_acl/write_owner/synchronize:file_inherit/dir_inherit:allow
1:everyone@:list_directory/read_data/add_file/write_data
/add_subdirectory/append_data/read_xattr/write_xattr/execute
/delete_child/read_attributes/write_attributes/delete/read_acl
/synchronize:file_inherit/dir_inherit:allow

The compact version is ls -Vd

drwxrwxrwx+ 5 root root 11 May 31 11:14 /smallstore/acltest
user:root:rwxpdDaARWcCos:fd-:allow
everyone@:rwxpdDaARWc--s:fd-:allow

The parent share has the following permissions
drwxr-xr-x+ 5 root root 5 May 30 22:26 /smallstore/
user:root:rwxpdDaARWcCos:fd-:allow
everyone@:r-x---a-R-c---:fd-:allow
owner@:rwxpdDaARWcCos:fd-:allow

This is the ACL for the files created by an Ubuntu client. There is no ACL 
inheritance occurring.

-rw-r--r-- 1 1000 1000 0 May 31 22:20 /smallstore/acltest/ubuntu_file
owner@:rw-p--aARWcCos:---:allow
group@:r-----a-R-c--s:---:allow
everyone@:r-----a-R-c--s:---:allow

This is the ACL for files created by a user from a Windows client. There is 
full ACL inheritance.
-rwxrwxrwx+ 1 ljw staff 0 May 31 22:22 /smallstore/acltest/windows_file
user:root:rwxpdDaARWcCos:--I:allow
everyone@:rwxpdDaARWc--s:--I:allow

ACL inheritance is on at both the share and directory levels, so it should 
be passed on to files that are created.

smallstore aclinherit restricted default
smallstore/acltest aclinherit passthrough local

Again any help would be most appreciated.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-03 Thread Sigbjorn Lie
Hi,

This turned out to be a scheduler issue. The system was still running the 
default TS scheduler. By
switching to the FSS scheduler the performance was back to what it was before 
the system was
reinstalled.

When using the TS scheduler the writes would not spread evenly across the 
drives. We have in total 60 drives in the pool, and a few of the drives would 
peak in write throughput while the other disks were almost idle. After 
changing to the FSS scheduler all writes were evenly distributed across the 
drives.

This would just affect the user processes generating the load, as the zpool 
processes are still
using the SDC scheduler introduced in S10 U9.
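
For anyone wanting to do the same, the switch is roughly this (dispadmin 
makes FSS the default scheduling class from the next boot, and priocntl 
moves the already-running TS processes over immediately):

   # dispadmin -d FSS
   # priocntl -s -c FSS -i class TS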

I will do some testing with loadbalance on/off. We have nearline SAS disks, 
which do have dual paths from the disk, however they are still just 7200 rpm 
drives.
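
If I understand it correctly, the mpxio load-balancing policy is set in 
/kernel/drv/scsi_vhci.conf, roughly like this (treat it as a sketch; a 
reboot is needed for the change to take effect):

   load-balance="none";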

Are you using SATA, SAS, or nearline SAS in your array? Do you have multiple 
SAS connections to your arrays, or do you use a single connection per array 
only?


Rgds,
Siggi


On Wed, March 2, 2011 18:19, Marion Hakanson wrote:
> sigbj...@nixtra.com said:
>> I've played around with turning on and off mpxio on the mpt_sas driver,
>> disabling increased the performance from 30MB / sec, but it's still far from 
>> the original
>> performance. I've attached some dumps of zpool iostat before and after 
>> reinstallation.
>
> I find "zpool iostat" is less useful in telling what the drives are
> doing than "iostat -xn 1".  In particular, the latter will give you an idea 
> of how many operations
> are queued per drive, and how long it's taking the drives to handle those 
> operations, etc.
>
> On our Solaris-10 systems (U8 and U9), if mpxio is enabled, you really
> want to set loadbalance=none.  The default (round-robin) makes some of our 
> JBODs (Dell MD1200) go really slow for writes.  I see you have tried with 
> mpxio disabled, so your issue may be different.
>
>
> You don't say what you're doing to generate your test workload, but there
> are some workloads which will speed up a lot if the ZIL is disabled.  Maybe 
> that or some other
> /etc/system tweaks were in place on the original system.
> Also use "format -e" and its "write_cache" commands to see if the drives'
> write caches are enabled or not.
>
> Regards,
>
>
> Marion
>
>
>
>


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] [Fwd: Sun T3-2 and ZFS on JBODS]

2011-03-02 Thread Sigbjorn Lie
I forgot to mention, the server was jumpstarted with Solaris 10 U9, and the 
latest patch cluster
was downloaded and applied.




 Original Message 

Subject: [zfs-discuss] Sun T3-2 and ZFS on JBODS
From:"Sigbjorn Lie" 
Date:Wed, March 2, 2011 12:35
To:  zfs-discuss@opensolaris.org
--

Hi,

We have purchased a new Sun (Oracle) T3-2 machine and 5 shelves of 12 x 2TB 
SAS2 JBOD disks for our new backup server. Each shelf is connected via a 
single SAS cable to a separate SAS controller.

When the system arrived it had Solaris 10 U9 preinstalled. We tested ZFS 
performance and got roughly 1.3 GB/sec when we configured a RAIDZ2 per 12 
disks, and roughly 1.7 GB/sec when we configured a RAIDZ1 per 6 disks. Good 
performance.

Then I noticed that some packages were missing from the installation, and I 
decided to jumpstart the server to get it installed the same way as every 
other Solaris server we have. After it was reinstalled I get 100-300 MB/sec, 
and very sporadic writes.

I've played around with turning mpxio on and off in the mpt_sas driver; 
disabling it increased the performance from 30 MB/sec, but it's still far 
from the original performance. I've attached some dumps of zpool iostat from 
before and after the reinstallation.

Any suggestions as to what settings might cause this? Anything I could try to 
increase the performance?


Regards,
Siggi



Before:
pool0   51.5G   109T     93  10.4K   275K  1.27G
pool0   59.8G   109T     92  10.8K   274K  1.32G
pool0   68.1G   109T     92  10.6K   274K  1.30G
pool0   76.5G   109T     92  11.4K   274K  1.39G
pool0   85.0G   109T     92  10.2K   274K  1.25G
pool0   93.6G   109T     92  11.2K   274K  1.37G
pool0   93.6G   109T      0  9.77K      0  1.20G
pool0    102G   109T      1  10.6K  5.99K  1.30G
pool0    111G   109T      0  11.5K      0  1.41G
pool0    119G   109T      1  10.7K  5.99K  1.31G
pool0    127G   109T     89  11.1K   268K  1.36G
pool0    136G   109T      1  11.8K  5.99K  1.44G



After:
pool0   30.8G   109T      0    297      0  36.2M
pool0   30.8G   109T      0  2.85K      0   358M
pool0   33.7G   109T      0    760      0  91.5M
pool0   33.7G   109T      0      0      0      0
pool0   33.7G   109T      0      0      0      0
pool0   33.7G   109T      0      0      0      0
pool0   33.7G   109T      0    954      0   117M
pool0   33.7G   109T      0  2.03K      0   255M
pool0   36.2G   109T      0    358      0  42.2M
pool0   36.2G   109T      0      0      0      0
pool0   36.2G   109T      0    858      0   105M
pool0   36.2G   109T      0  1.28K      0   160M
pool0   38.4G   109T      0    890      0   107M
pool0   38.4G   109T      0      0      0      0
pool0   38.4G   109T      0      0      0      0
pool0   38.4G   109T      0  1.57K      0   197M
pool0   40.3G   109T      0    850      0   103M
pool0   40.3G   109T      0      0      0      0
pool0   40.3G   109T      0      0      0      0
pool0   40.3G   109T      0      0      0      0
pool0   40.3G   109T      0  2.58K      0   320M
pool0   42.2G   109T      0     12      0   102K


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-02 Thread Sigbjorn Lie
Hi,

We have purchased a new Sun (Oracle) T3-2 machine and 5 shelves of 12 x 2TB 
SAS2 JBOD disks for our new backup server. Each shelf is connected via a 
single SAS cable to a separate SAS controller.

When the system arrived it had Solaris 10 U9 preinstalled. We tested ZFS 
performance and got roughly 1.3 GB/sec when we configured a RAIDZ2 per 12 
disks, and roughly 1.7 GB/sec when we configured a RAIDZ1 per 6 disks. Good 
performance.

Then I noticed that some packages were missing from the installation, and I 
decided to jumpstart the server to get it installed the same way as every 
other Solaris server we have. After it was reinstalled I get 100-300 MB/sec, 
and very sporadic writes.

I've played around with turning mpxio on and off in the mpt_sas driver; 
disabling it increased the performance from 30 MB/sec, but it's still far 
from the original performance. I've attached some dumps of zpool iostat from 
before and after the reinstallation.

Any suggestions as to what settings might cause this? Anything I could try to 
increase the performance?


Regards,
Siggi



Before:
pool0   51.5G   109T     93  10.4K   275K  1.27G
pool0   59.8G   109T     92  10.8K   274K  1.32G
pool0   68.1G   109T     92  10.6K   274K  1.30G
pool0   76.5G   109T     92  11.4K   274K  1.39G
pool0   85.0G   109T     92  10.2K   274K  1.25G
pool0   93.6G   109T     92  11.2K   274K  1.37G
pool0   93.6G   109T      0  9.77K      0  1.20G
pool0    102G   109T      1  10.6K  5.99K  1.30G
pool0    111G   109T      0  11.5K      0  1.41G
pool0    119G   109T      1  10.7K  5.99K  1.31G
pool0    127G   109T     89  11.1K   268K  1.36G
pool0    136G   109T      1  11.8K  5.99K  1.44G



After:
pool0   30.8G   109T      0    297      0  36.2M
pool0   30.8G   109T      0  2.85K      0   358M
pool0   33.7G   109T      0    760      0  91.5M
pool0   33.7G   109T      0      0      0      0
pool0   33.7G   109T      0      0      0      0
pool0   33.7G   109T      0      0      0      0
pool0   33.7G   109T      0    954      0   117M
pool0   33.7G   109T      0  2.03K      0   255M
pool0   36.2G   109T      0    358      0  42.2M
pool0   36.2G   109T      0      0      0      0
pool0   36.2G   109T      0    858      0   105M
pool0   36.2G   109T      0  1.28K      0   160M
pool0   38.4G   109T      0    890      0   107M
pool0   38.4G   109T      0      0      0      0
pool0   38.4G   109T      0      0      0      0
pool0   38.4G   109T      0  1.57K      0   197M
pool0   40.3G   109T      0    850      0   103M
pool0   40.3G   109T      0      0      0      0
pool0   40.3G   109T      0      0      0      0
pool0   40.3G   109T      0      0      0      0
pool0   40.3G   109T      0  2.58K      0   320M
pool0   42.2G   109T      0     12      0   102K


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Sigbjorn Lie
Do you need registered ECC, or will non-registered ECC do to get around the 
issue you described?



On Mon, 2010-11-15 at 16:48 +0700, VO wrote:
> Hello List,
> 
> I recently got bitten by a "panic on `zpool import`" problem (same CR
> 6915314), while testing a ZFS file server. Seems the pool is pretty much
> gone, did try
> - zfs:zfs_recover=1 and aok=1 in /etc/system
> - `zpool import -fF -o ro`
> to no avail. I don't think I will be taking the time to try to fix it unless
> someone has good ideas. I suspect bad data was written to the pool and it
> seems there is no way to recover; fmdump shows a problem with the same block
> on all disks IIRC.
> 
> The server hardware is pretty ghetto with whitebox components such as
> non-ECC RAM (cause of the pool loss). I know the hardware sucks but
> sometimes non-technical people don't understand the value of data before it
> is lost.. I was lucky the system had not been sent out yet and the project
> was "simply" delayed.
> 
> In light of this experience, I would say raidz is not useful in certain
> hardware failure scenarios. Bad bit in the RAM at the wrong time and the
> whole pool is lost.
> 
> Does the list have any ideas on how to make this kind of ghetto system more
> resilient (short of buy ECC RAM and mobo for it)?
> 
> I was thinking something like this:
> - pool1: raidz pool for the bulk data
> - pool2: mirror pool for backing up the raidz pool, only imported when
> copying pool1 to pool2
> 
> What would be the most reliable way to copy the data from pool1 to pool2
> keeping in mind "bad bit in RAM and everything is lost"? I worry most about
> corrupting the pool2 also if pool1 has gone bad or there is a similar
> hardware failure again. Or is this whole idea just added complexity with no
> real benefit?
> 
> 
> Regards,
> 
> Ville
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP ProLiant N36L

2010-11-11 Thread Sigbjorn Lie
Did you try NexentaCore? You gain full control over the command line, like in 
OpenSolaris. However, it seems faster and has more bugs fixed than OpenSolaris 
b134.

I have already had OpenSolaris b134 crash one of my disk systems; I would 
never install it again... Besides, I could never get full 1-gigabit speed over 
NFS with OpenSolaris, I barely managed 100-200 Mbit/sec, whereas with 
NexentaCore on the same hardware I'm maxing out my gigabit network.




On Thu, November 11, 2010 14:13, Eugen Leitl wrote:
>

> Big thanks! I think I'll also buy one before long. The
> power savings alone should be worth it over lifetime.
>
> On Wed, Nov 10, 2010 at 11:03:21PM -0800, Krist van Besien wrote:
>
>> I just bought one. :-)
>>
>>
>> My impressions:
>>
>>
>> - Installed NexentaStor community edition on it. All hardware was recognized
>>   and works. No problem there. I am however rather underwhelmed by the
>>   NexentaStor system and will probably just install OpenSolaris (b134) on it
>>   this evening. I want to use the box as a NAS, serving CIFS to clients (a
>>   mixture of Mac and Linux machines), but as I don't have that much
>>   administration to do on it I'll just do it on the command line and forgo
>>   fancy broken GUIs...
>> - The system is well built. Quality is good. I could get the whole
>>   motherboard tray out without needing to use tools. It comes with 1GB of
>>   RAM that I plan to upgrade.
>> - The system does come with four HD trays and all the screws you need. I
>>   plunked in 4 x 2TB disks, and a small SSD for the OS.
>> - The motherboard has a mini-SAS connector, which is connected to the
>>   backplane, and a separate SATA connector that is intended for an optical
>>   drive. I used that to connect an SSD which lives in the optical drive bay.
>>   There is also an internal USB connector you could just put a USB stick in.
>> - Performance under NexentaStor appears OK. I have to do some real tests
>>   though.
>> - It is very quiet. I can certainly live with it in my office (but will
>>   move it into the basement anyway). A nice touch is the eSATA connector on
>>   the back. It does have a VGA connector, but no keyboard/mouse. This is
>>   completely legacy free...
>>
>> All in all this is an excellent platform to build a NAS on.
>> --
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
> --
> Eugen* Leitl http://leitl.org
> __
> ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
> 8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
> ___
> zfs-discuss mailing list zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-25 Thread Sigbjorn Lie
Wow, not bad!

What is your CPU penalty for enabling compression?


Sigbjorn



On Wed, August 18, 2010 14:11, Paul Kraus wrote:
> On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble  
> wrote:
>
>
>> I tried this with NetBackup, and decided against it pretty rapidly.
>> Basically, we
>> got hardly any dedup at all. (Something like 3%; compression gave us much 
>> better results.) Tiny
>> changes in block alignment completely ruin the possibility of significant 
>> benefit.
>
> We are using Netbackup with ZFS Disk Stage under Solaris 10U8,
> no dedupe but are getting 1.9x compression ratio :-)
>
>> Using ZFS dedup is logically the wrong place to do this; you want a decent
>> backup system that doesn't generate significant amounts of duplicate data in 
>> the first place.
>
> The latest release of NBU (7.0) supports both client side and
> server side dedupe (at additional cost ;-). We are using it in test for 
> backing up remote servers
> across slow WAN links with very good results.
>
> --
> {1-2-3-4-5-6-7-}
> Paul Kraus
> -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
> -> Sound Coordinator, Schenectady Light Opera Company (
> http://www.sloctheater.org/ )
> -> Technical Advisor, RPI Players
>
>


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-25 Thread Sigbjorn Lie
Hi,

What sort of compression ratio do you get?


Sigbjorn


On Wed, August 18, 2010 12:59, Hans Foertsch wrote:
> Hello,
>
>
> we use ZFS on Solaris 10u8 as a backup to disk solution with EMC Networker.
>
> We use the standard recordsize 128k and zfs compression.
>
>
> Dedup we can't use, because of Solaris 10.
>
>
> But we are working on using more features and looking for more improvements...
>
>
> But we are happy with this solution.
>
>
> Hans
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Networker & Dedup @ ZFS

2010-08-18 Thread Sigbjorn Lie
Hi,

We are considering using ZFS-based storage as a staging disk for Networker. 
We're aiming at providing enough storage to be able to keep 3 months' worth 
of backups on disk before they are moved to tape.

To provide storage for 3 months of backups, we want to utilize the dedup 
functionality in ZFS.

I've searched around for these topics and found no success stories; however, 
those who have tried did not mention whether they had attempted to change the 
recordsize to anything smaller than the default of 128k.
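
For the record, the properties I have in mind are simply these (the 
dataset name is a placeholder, this assumes a build that has dedup, and 
a smaller recordsize only affects newly written data):

   # zfs set dedup=on tank/networker-staging
   # zfs set recordsize=32k tank/networker-staging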

Does anyone have any experience with this kind of setup?


Regards,
Sigbjorn


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Sigbjorn Lie

On Fri, July 23, 2010 11:21, Thomas Burgess wrote:
> On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie  wrote:
>
>
>> I see I have already received several replies, thanks to all!
>>
>>
>> I would not like to risk losing any data, so I believe a ZIL device would
>> be the way for me. I see these exist at different prices. Any reason why I
>> would not buy a cheap one? Like the Intel X25-V SSD 40GB 2.5"?
>>
>>
>> What size of ZIL device would be recommended for my pool consisting of
>> 4 x 1.5TB drives? Any brands I should stay away from?
>>
>>
>>
>> Regards,
>> Sigbjorn
>>
> Like I said, I bought a 50 GB OCZ Vertex Limited Edition... it's like 200
> dollars, up to 15,000 random IOPS (IOPS is what you want for a fast ZIL).
>
>
> I've gotten excellent performance out of it.
>
>

The X25-V does up to 25k random read IOPS and up to 2.5k random write IOPS, 
so that would seem okay for approx $80. :)
would seem okay for approx $80. :)

What about mirroring? Do I need mirrored ZIL devices in case of a power outage?
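
If I go ahead with this, I gather adding the log device looks roughly 
like the following (device names are placeholders):

   # zpool add tank log c2t0d0
or, mirrored:
   # zpool add tank log mirror c2t0d0 c2t1d0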


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Sigbjorn Lie
On Fri, July 23, 2010 10:42, tomwaters wrote:
> I agree, I get appalling NFS speeds compared to CIFS/Samba, i.e. CIFS/Samba 
> of 95-105 MB/sec and NFS of 5-20 MB/sec.
>
>
> Not to hijack the thread, but I assume an SSD ZIL will similarly improve an 
> iSCSI target... as I am getting 2-5 MB/sec on that too. --
> This message posted from opensolaris.org


These are exactly the numbers I'm getting as well.

What's the reason for such a low rate when using iSCSI?




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Sigbjorn Lie
I see I have already received several replies, thanks to all!

I would not like to risk losing any data, so I believe a ZIL device would be 
the way for me. I see these exist at different prices. Any reason why I would 
not buy a cheap one? Like the Intel X25-V SSD 40GB 2.5"?

What size of ZIL device would be recommended for my pool consisting of 4 x 
1.5TB drives? Any brands I should stay away from?



Regards,
Sigbjorn





On Fri, July 23, 2010 09:48, Phil Harman wrote:
> That's because NFS adds synchronous writes to the mix (e.g. the client needs 
> to know certain
> transactions made it to nonvolatile storage in case the server restarts etc). 
> The simplest safe
> solution, although not cheap, is to add an SSD log device to the pool.
>
> On 23 Jul 2010, at 08:11, "Sigbjorn Lie"  wrote:
>
>
>> Hi,
>>
>>
>> I've been searching around on the Internet to find some help with this, but 
>> have been unsuccessful so far.
>>
>> I have some performance issues with my file server. I have an OpenSolaris 
>> server with a Pentium
>> D
>> 3GHz CPU, 4GB of memory, and a RAIDZ1 over 4 x Seagate (ST31500341AS) 1,5TB 
>> SATA drives.
>>
>>
>> If I compile or even just unpack a tar.gz archive with source code (or any 
>> archive with lots of small files) on my Linux client onto an NFS-mounted 
>> disk on the OpenSolaris server, it's extremely slow compared to unpacking 
>> the archive locally on the server. A 22MB .tar.gz file containing 7360 
>> files takes 9 minutes and 12 seconds to unpack over NFS.
>>
>> Unpacking the same file locally on the server is just under 2 seconds. 
>> Between the server and
>> client I have a gigabit network, which at the time of testing had no other 
>> significant load. My
>> NFS mount options are: "rw,hard,intr,nfsvers=3,tcp,sec=sys".
>>
>>
>> Any suggestions as to why this is?
>>



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] NFS performance?

2010-07-23 Thread Sigbjorn Lie
Hi,

I've been searching around on the Internet to find some help with this, but 
have been unsuccessful so far.

I have some performance issues with my file server. I have an OpenSolaris 
server with a Pentium D 3GHz CPU, 4GB of memory, and a RAIDZ1 over 4 x 
Seagate (ST31500341AS) 1.5TB SATA drives.

If I compile or even just unpack a tar.gz archive with source code (or any 
archive with lots of small files) on my Linux client onto an NFS-mounted disk 
on the OpenSolaris server, it's extremely slow compared to unpacking the 
archive locally on the server. A 22MB .tar.gz file containing 7360 files 
takes 9 minutes and 12 seconds to unpack over NFS.

Unpacking the same file locally on the server is just under 2 seconds. Between 
the server and
client I have a gigabit network, which at the time of testing had no other 
significant load. My
NFS mount options are: "rw,hard,intr,nfsvers=3,tcp,sec=sys".

Any suggestions as to why this is?


Regards,
Sigbjorn


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-15 Thread Sigbjorn Lie
On Thu, July 15, 2010 09:38, Frank Cusack wrote:
> On 7/15/10 9:49 AM +0900 BM wrote:
>
>> On Thu, Jul 15, 2010 at 5:57 AM, Paul B. Henson  wrote:
>>
>>> ZFS is great. It's pretty much the only reason we're running Solaris.
>>>
>>
>> Well, if this is the the only reason, then run FreeBSD instead. I run
>> Solaris because of the kernel architecture and other things that Linux
>> or any BSD simply can not do. For example, running something on a port below 
>> 1000, but as a true
>> non-root (i.e. no privileges dropping, but straight-forward run by a 
>> non-root).
>
> Um, there's plenty of things Solaris can do that Linux and FreeBSD can't
> do, but non-root privileged ports is not one of them.

Using the least-privilege "net_privaddr" privilege allows a process to bind 
to a privileged port (below 1024) without granting full root access to the 
process owner.
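
As an illustration (the username is a placeholder), the privilege can be 
granted persistently with usermod and then verified from a new login 
shell with ppriv:

   # usermod -K defaultpriv=basic,net_privaddr daemonuser
   $ ppriv $$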


Regards,
Siggi



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS recovery tools

2010-06-01 Thread Sigbjorn Lie
Hi,

I have just recovered from a ZFS crash. During the agonizing time this took, 
I was surprised to learn how undocumented the tools and options for ZFS 
recovery are. I managed to recover thanks to some great forum posts from 
Victor Latushkin; however, without his posts I would still be crying at 
night...

I think the worst example is the zdb man page, which does little more than 
ask you to "contact a Sun Engineer", as the command is for experts only. What 
the hell? I don't have a support contract for my home machines... I don't 
feel like this is the right way to go for an open source project...

A penny for anyone else's thoughts or facts about why it's like this... :)
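
For anyone hitting the archives later, the kind of incantations that were 
involved looked roughly like this (the pool name is a placeholder, and the 
exact options depend on your build; treat it as a sketch, not gospel):

   # zpool import -nfF tank     (dry run: report what a rewind recovery would do)
   # zpool import -fF tank      (attempt the rewind recovery for real)
   # zdb -e -d tank             (inspect datasets of a pool that is not imported)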



regards,
Sigbjorn Lie

's/windows/unix/g'
- "Ubuntu" - an African word, meaning "Slackware is too hard for me"




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss