[zfs-discuss] ZFS ACL hell..Cannot figure it out.

2009-12-02 Thread Alxen
Ok, I need to set the following permissions:

domain admins - full control
domain users - can add files and folders, but cannot delete, modify or rename them.

No matter what I try, domain users are still able to modify files.
What am I doing wrong?

This is my setup:

chmod A=group:MYDOMAIN+domain\ admins:full_set:fd:allow,\
group:MYDOMAIN+domain\ users:list_directory/read_data/add_file/add_subdirectory/read_xattr/execute/read_attributes/read_acl:fd:allow,\
group:MYDOMAIN+domain\ users:append_data/write_data/delete/delete_child/write_xattr/write_attributes/write_acl/write_owner/synchronize:fd:deny \
test

-bash-4.0# ls -vd test/
d-+  6 root root   8 Dec  2 23:15 test/
 0:group:11014:list_directory/read_data/add_file/write_data
 /add_subdirectory/append_data/read_xattr/write_xattr/execute
 /delete_child/read_attributes/write_attributes/delete/read_acl
 /write_acl/write_owner/synchronize:file_inherit/dir_inherit:allow
 2:group:CADDALTA+domain use:list_directory/read_data/add_file/write_data
 /add_subdirectory/append_data/read_xattr/execute/read_attributes
 /read_acl:file_inherit/dir_inherit:allow
 3:group:CADDALTA+domain use:add_file/write_data/add_subdirectory
 /append_data/write_xattr/delete_child/write_attributes/delete
 /write_acl/write_owner/synchronize:file_inherit/dir_inherit:deny
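
(Not part of the original post, just a suggestion for narrowing this down: create a
file through the share as a member of "domain users", then check on the server which
ACL entries it actually inherited -- the file name below is only an example:

-bash-4.0# ls -v /tank/test/newfile.txt

'wbinfo -n "CADDALTA+domain users"' will also confirm that winbind resolves the
group at all.)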

smb.conf:

[global]
log level = 2
syslog only = no
max log size = 50
log file = /var/samba/log/%m.log

realm = caddalta.local
workgroup = CADDALTA
security = ADS
encrypt passwords = true
unix extensions = no
password server = caddcentral.caddalta.local
server string = prstorage
wins server = caddcentral.caddalta.local
domain master = no
socket options = TCP_NODELAY SO_KEEPALIVE
client schannel = no
client use spnego = yes

kernel oplocks = yes
oplocks = yes

winbind separator = +
idmap uid = 11000-19000
idmap gid = 11000-19000
winbind enum users = yes
winbind enum groups = yes
winbind nested groups = yes
allow trusted domains = yes

printcap name = /dev/null
load printers = no

[test]
   path = /tank/test
#  acl check permissions = True
  hide dot files = yes
  browseable = yes
  vfs objects = zfsacl
  nfs4: mode = special
  zfsacl: acesort = dontcare
#  create mask = 0770
#  directory mask = 0770
  public = yes
  writable = yes
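
(Also not part of the original post: Samba's smbcacls tool shows the ACL exactly as
it is presented over SMB, which can be compared against the ls -v output above --
file and user names below are placeholders:

# smbcacls //prstorage/test newfile.txt -U someuser )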

Please help.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-02 Thread James Risner
I have a 9 drive system (four mirrors of two disks and one hot spare) with a 
10th SSD drive for ZIL.

The ZIL is corrupt.

I've been unable to recover it using FreeBSD 8, OpenSolaris x86, or logfix 
(http://github.com/pjjw/logfix)

In FreeBSD 8.0RC3 and below (uses v13 ZFS):
1) Boot Single User (both i386 and amd64)
2) /etc/rc.d/hostid start
3) "zpool import" results in system lockup (infinite time or at least 3 days)

In FreeBSD 8.0 Release:
1) Do #1 & #2 above, then "zpool import -f" results in being told there are 
missing elements (namely the log disk ad4p2)

In OpenSolaris x86:
1) "zpool import -f" reports log disk is missing.

Using logfix under OpenSolaris:
1) make a new pool, junkpool
2) run logfix using a disk from the pool, the new log disk, and the guid of the 
old corrupt ZIL log from the FreeBSD box.
3) "zpool import -f" is different, it now shows the new log but reports a disk 
pair (mirror of da4p5 & da5p5 using the FreeBSD names since I don't understand 
OpenSolaris names) missing.  They show up before the log disk is changed, but 
now do not.
4) If I remove the log disk, they reappear.
5) Of note, 8 of the disks (the four mirrors) are on one SAS HBA.  The spare 
is on another SATA controller with the SSD disk.
6) Could it be that the disks span controllers?  Like: c8t[1-8]d0s4 are the 8 
disks, c7d0 is the spare, and c8d1 is the SSD.
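
For reference, the guid that logfix wants can usually be read straight off the 
vdev labels with zdb -- the device path below is only a guess at the SSD's 
Solaris name:

# zdb -l /dev/dsk/c8d1s0

This prints each label on the device, including its 'guid' and the pool_guid.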

I've spent 2 weeks trying to recover this pool and been unable to do so in 
FreeBSD or OpenSolaris.  Is there anyone who could help, or suggest things I 
have not tried?  I'd be fine with just copying the data off if I could mount the 
thing read-only.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-12-02 Thread Dushyanth
Hey,

Sorry for revisiting this thread so late. What exactly are sync writes? Do 
you mean synchronous writes, or an app calling fsync() after every write?

TIA
Dushyanth
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Any way to remove a vdev

2009-12-02 Thread Mike Freeman
I'm sure it's been asked a thousand times, but is there any prospect of being
able to remove a vdev from a pool anytime soon?

Thanks!

-- 
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Heads-Up: Changes to the zpool(1m) command

2009-12-02 Thread George Wilson

Some new features have recently been integrated into ZFS which change
the output of the zpool(1m) command. Here's a quick recap:

1) 6574286 removing a slog doesn't work

This change added the concept of named top-level devices for the purpose
of device removal. The named top-levels are constructed by taking the
logical name (mirror, raidz2, etc.) and appending a unique numeric
identifier. After this change, 'zpool status' and 'zpool import' will
print the configuration using this new naming convention:

jaggs# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0

errors: No known data errors
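
As an aside (not part of the original announcement): the new names are also what
you pass to the device-removal support this CR adds. For instance, a mirrored log
vdev that shows up as, say, mirror-1 could be removed with something like:

jaggs# zpool remove tank mirror-1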

2) 6677093 zfs should have dedup capability

This project modified the default 'zpool list' output to show the dedup
ratio for each pool. In addition, a new property, "dedupratio", is
available when using 'zpool get':

jaggs# zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
export   928G  47.5G   881G     5%  1.77x  ONLINE  -
rpool    928G  25.7G   902G     2%  1.40x  ONLINE  -

jaggs# zpool get dedup rpool
NAME   PROPERTY    VALUE  SOURCE
rpool  dedupratio  1.40x  -

3) 6897693 deduplication can only go so far

The integration of dedup changed the way we report report "used" and
"available" space in a pool. In particular, 'zpool list' reports
"allocated" and "free" physical blocks opposed to 'zfs list' shows
"used" and "available" space to the filesystem. This change replaced the
the "used" property with "allocated" and "available" with "free". This
should help clarify the accounting difference reported by the two
utilities. This does, however, impact any scripts which utilized the old
"used" and "available" properties of the zpool command. Those scripts
should be updated to use the new naming convention:

jaggs# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  464G  64.6G   399G    13%  1.00x  ONLINE  -
tank  2.27T   207K  2.27T     0%  1.00x  ONLINE  -

jaggs# zpool get allocated,free rpool
NAME   PROPERTY   VALUE  SOURCE
rpool  allocated  64.6G  -
rpool  free   399G   -
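
As an aside for script authors (the output below is approximate; -H prints
tab-separated values), 'zpool list -H -o ...' gives a parse-friendly view of
the renamed properties:

jaggs# zpool list -H -o name,allocated,free
rpool  64.6G  399G
tank   207K   2.27T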


We realize that these changes may impact some user scripts and we 
apologize for any inconvenience this may cause.


Thanks,
George
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any recommendation: what FS in DomU?

2009-12-02 Thread Seth Heeren
Andre Boegelsack wrote:
> Hi to all,
>
> I have a short question regarding which filesystem I should use in Dom0/DomU. 
> I've built my Dom0 on basis of ZFS.
>
> For my first DomU I've created a ZFS pool and installed the DomU (with OSOL 
> inside). During the installation process you are being asked if you wanna use 
> UFS or ZFS - I've chosen ZFS. The installation process was incredibly slow. 
>
> Hence, in the next DomU I used UFS instead of ZFS. And the installation 
> process was pretty fast.
>
> This leads me to the conclusion: ZFS on top of ZFS = don't; UFS on top of 
> ZFS = ok
>
> Can anybody verify that performance issue?
>
> Regards
> André
>   

No experience here, but common sense tells me this might happen due to
conflicting raid configurations:

Be sure to have single vdevs in your DomU. Any old filesystem would
normally fit this description. ZFS is special: it does its own
striping/mirroring, and makes the usual assumptions about your leaf
vdevs in order to optimize performance.
My recommendation follows from my 'common sense' feeling [2] that ZFS
dynamic striping (the default operation, but also key in
raidz/raidz2/raidz3) makes assumptions about your underlying
devices that just won't hold for virtual disks, especially on LVM/RAID
sets or ZFS (ZFS being similar to LVM/RAID in this department).
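
A minimal sketch of the 'single vdev in the DomU' idea, with made-up dataset
and device names: in Dom0, hand the guest one zvol-backed virtual disk, and
inside the DomU build the pool on just that one device:

dom0# zfs create -V 20G rpool/domu1-disk0   (backing store for the DomU's virtual disk)
domu# zpool create data c0d1                (single top-level vdev, no striping in the guest)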

A very simple example: your Dom0 stripes across two physical disks
(a _simple_ example...) and your DomU stripes across two virtual disks.
The two virtual disks will actually _both_ be striped across the two
physical disks by Dom0 ZFS.

So when DomU ZFS thinks it is optimizing writes (by striping across its
virtual disks, assuming that these are physical disk units), it will
actually create (possibly distant) writes on the _same_ physical device,
hampering performance and disk lifetime. The increased seek rates and
seek times will harm performance.

How far the disk accesses would be spread depends on the way you
allocated the virtual disks. But if the disks are preallocated, they would
probably end up wide apart (assuming that the virtual disks get allocated
'sequentially' [1]).


[1] sequentially as in contiguous per stripe volume
[2] correct me if I'm wrong

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any recommendation: what FS in DomU?

2009-12-02 Thread Cindy Swearingen

Apparently, I don't know a DomU from a LDOM...

I should have pointed you to the Xen discussion list, here:

http://opensolaris.org/jive/forum.jspa?forumID=53

Cindy

On 12/02/09 08:58, Cindy Swearingen wrote:

I'm not sure we have any LDOMs experts on this list.

You might try reposting this query on the LDOMs discuss list,
which I think is this one:

http://forums.sun.com/forum.jspa?forumID=894

Thanks,

Cindy

On 12/02/09 08:17, Andre Boegelsack wrote:

Hi to all,

I have a short question regarding which filesystem I should use in 
Dom0/DomU. I've built my Dom0 on basis of ZFS.


For my first DomU I've created a ZFS pool and installed the DomU (with 
OSOL inside). During the installation process you are being asked if 
you wanna use UFS or ZFS - I've chosen ZFS. The installation process 
was incredibly slow.
Hence, in the next DomU I used UFS instead of ZFS. And the 
installation process was pretty fast.


This leads me to the conclusion: ZFS on top of ZFS = don't; UFS on 
top of ZFS = ok


Can anybody verify that performance issue?

Regards
André



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilver/scrub times?

2009-12-02 Thread Stephan Eisenbeiss

Colin Raven wrote:


Asus P5Q-EM mainboard
Core2 Quad 2.83 GHZ
8GB DDR2/80

OS:
2 x SSD's in RAID 0 (brand/size not decided on yet, but they will 
definitely be some flavor of SSD)


Data:
4 x 1TB Samsung Spin Point 7200 RPM 32MB cache SATA HD's (RAIDZ)

As I have a quite comparable setup, here's my HW:
ASUS P5Q-EM
Core 2 Duo E8400 (2x3GHz)
4GB DDR/800

OS:
1x 250GB ATA (old & dusty, thinking about replacing this with an SSD)

Data:
4x 1TB Seagate Pipeline HD (2x533CS, 2x322CS)

NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool  3.64T   943G  2.72T    25%  ONLINE  -

Status:
  pool: pool
 state: ONLINE
 scrub: scrub completed after 1h25m with 0 errors on Wed Dec  2 
16:43:35 2009

config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors

Resilvering a drive took about 1:30h to 2:15h (I upgraded the pool from 
500GB drives to the 1TB drives).
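
For reference, such a capacity upgrade is normally done one drive at a time with
'zpool replace', letting each resilver finish before swapping the next disk --
the device name below is taken from the status output above, and the exact
invocation depends on whether the new disk goes into the same slot:

# zpool replace pool c1t3d0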


HTH,
Stephan

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Eric D. Mudama

On Wed, Dec  2 at 10:59, Rob Logan wrote:



2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
I happen to have 2 x 250GB Western Digital RE3 7200rpm
be better than having the ZIL 'inside' the zpool.


listing two log devices (stripe) would have more spindles
than your single raidz2 vdev..  but for low cost fun one
might make a tiny slice on all the disks of the raidz2
and list six log devices (6 way stripe) and not bother
adding the other two disks.


But if you did that, a synchronous write (FUA or with a cache flush)
would have a significant latency penalty, especially if NCQ was being
used.

The ZIL is usually tiny, so striping it doesn't make any
sense to me.

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup issue

2009-12-02 Thread C. Bergström

Colin Raven wrote:

Hey Cindy!

Any idea of when we might see 129? (an approximation only). I ask the 
question because I'm pulling budget funds to build a filer, but it may 
not be in service until mid-January. Would it be reasonable to say 
that we might see 129 by then, or are we looking at summer...or even 
beyond?


I don't see that there's a "wrong answer" here necessarily, :) :) :) 
I'll go with what's out, but dedup is a big one and a feature that 
made me commit to this project.
The unstable and experimental Sun builds typically lag about 2 weeks 
behind the cut of the hg tag.  (Holidays and respins can derail that, of 
course.)  About the stable releases I have no clue.  Depending on your 
level of adventure, osunix in our next release may be interesting to you.


Feel free to email me off list or say hi on irc #osunix irc.freenode.net


Thanks

./C
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup issue

2009-12-02 Thread Colin Raven
Hey Cindy!

Any idea of when we might see 129? (an approximation only). I ask the
question because I'm pulling budget funds to build a filer, but it may not
be in service until mid-January. Would it be reasonable to say that we might
see 129 by then, or are we looking at summer...or even beyond?

I don't see that there's a "wrong answer" here necessarily, :) :) :) I'll go
with what's out, but dedup is a big one and a feature that made me commit to
this project.

-Colin

On Wed, Dec 2, 2009 at 17:06, Cindy Swearingen wrote:

> Hi Jim,
>
> Nevada build 128 had some problems so will not be released.
>
> The dedup space fixes should be available in build 129.
>
> Thanks,
>
> Cindy
>
>
> On 12/02/09 02:37, Jim Klimov wrote:
>
>> Hello all
>>
>> Sorry for bumping an old thread, but now that snv_128 is due to appear as
>> a public DVD download, I wonder: has this fix for zfs-accounting and other
>> issues with zfs dedup been integrated into build 128?
>>
>> We have a fileserver which is likely to have much redundant data and we'd
>> like to clean up its space with zfs-deduping (even if that takes copying
>> files over to a temp dir and back - so their common blocks are noticed by
>> the code). Will build 128 be ready for the task - and increase our server's
>> available space after deduping - or should we better wait for another one?
>>
>> In general, were there any stability issues with snv_128 during
>> internal/BFU testing?
>>
>> TIA,
>> //Jim
>>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup issue

2009-12-02 Thread Cindy Swearingen

Hi Jim,

Nevada build 128 had some problems, so it will not be released.

The dedup space fixes should be available in build 129.

Thanks,

Cindy

On 12/02/09 02:37, Jim Klimov wrote:

Hello all

Sorry for bumping an old thread, but now that snv_128 is due to appear as a 
public DVD download, I wonder: has this fix for zfs-accounting and other issues 
with zfs dedup been integrated into build 128?

We have a fileserver which is likely to have much redundant data and we'd like 
to clean up its space with zfs-deduping (even if that takes copying files over 
to a temp dir and back - so their common blocks are noticed by the code). Will 
build 128 be ready for the task - and increase our server's available space 
after deduping - or should we better wait for another one?

In general, were there any stability issues with snv_128 during internal/BFU 
testing?

TIA,
//Jim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Rob Logan


> 2 x 500GB mirrored root pool
> 6 x 1TB raidz2 data pool
> I happen to have 2 x 250GB Western Digital RE3 7200rpm
> be better than having the ZIL 'inside' the zpool.

Listing two log devices (a stripe) would give you more spindles
than your single raidz2 vdev..  but for low-cost fun one
might make a tiny slice on all the disks of the raidz2
and list six log devices (a 6-way stripe) and not bother
adding the other two disks.
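
Purely as a sketch (pool name and slice names are made up), that six-way log
stripe would look something like:

# zpool add tank log c0t0d0s7 c1t0d0s7 c2t0d0s7 c3t0d0s7 c4t0d0s7 c5t0d0s7

where s7 is the tiny slice carved out on each raidz2 disk.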

Rob


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any recommendation: what FS in DomU?

2009-12-02 Thread Cindy Swearingen

I'm not sure we have any LDOMs experts on this list.

You might try reposting this query on the LDOMs discuss list,
which I think is this one:

http://forums.sun.com/forum.jspa?forumID=894

Thanks,

Cindy

On 12/02/09 08:17, Andre Boegelsack wrote:

Hi to all,

I have a short question regarding which filesystem I should use in Dom0/DomU. 
I've built my Dom0 on basis of ZFS.

For my first DomU I've created a ZFS pool and installed the DomU (with OSOL inside). During the installation process you are being asked if you wanna use UFS or ZFS - I've chosen ZFS. The installation process was incredibly slow. 


Hence, in the next DomU I used UFS instead of ZFS. And the installation process 
was pretty fast.

This leads me to the conclusion: ZFS on top of ZFS = don't; UFS on top of ZFS 
= ok

Can anybody verify that performance issue?

Regards
André

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Edward Ned Harvey
> I previously had a linux NFS server that I had mounted 'ASYNC' and, as
> one would expect, NFS performance was pretty good getting close to
> 900gb/s. Now that I have moved to opensolaris,  NFS performance is not
> very good, I'm guessing mainly due to the 'SYNC' nature of NFS.  I've
> seen various threads and most point at 2 options;
> 
> 1. Disable the ZIL
> 2. Add independent log device/s

Really your question isn't about a ZIL on HDD (as the subject says) but about NFS
performance.

I'll tell you a couple of things.  I have a Solaris ZFS and NFS server at
work, which noticeably outperforms the previous NFS server.  Here are the
differences in our setup:

Yes, I have an SSD for the ZIL.  Just one SSD, 32G.  But if this were the problem,
then you would see the same poor performance on the local machine that you
see over NFS.  So I'm curious whether you have the same poor performance
locally.  The ZIL does not need to be reliable; if it fails, ZFS will begin
writing the ZIL to the main storage, and performance will suffer until a new
SSD is put into production.

Another thing - you have 6 disks in raidz2.  That's 6 disks with the
capacity of 4.  You should get noticeably better performance from
three 2-disk mirrors: 6 disks with the capacity of 3.  But if your bottleneck is
Ethernet, this difference might be irrelevant.
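
For illustration only (disk names are placeholders), the 3x2-disk mirror layout
would be created like this, instead of a single raidz2 vdev:

# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0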

I have nothing special in my dfstab.
cat /etc/dfs/dfstab
share -F nfs -o ro=host1,rw=host2:host3,root=host2,host3,anon=4294967294
/path-to-export

But when I mount it from linux, I took great care to create this config:
cat /etc/auto.master
/-  /etc/auto.direct --timeout=1200

cat /etc/auto.direct
/mountpoint  -fstype=nfs,noacl,rw,hard,intr,posix
solarisserver:/path-to-export


I'm interested to hear if this sheds any light for you.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Any recommendation: what FS in DomU?

2009-12-02 Thread Andre Boegelsack
Hi to all,

I have a short question regarding which filesystem I should use in Dom0/DomU. 
I've built my Dom0 on basis of ZFS.

For my first DomU I've created a ZFS pool and installed the DomU (with OSOL 
inside). During the installation process you are being asked if you wanna use 
UFS or ZFS - I've chosen ZFS. The installation process was incredibly slow. 

Hence, in the next DomU I used UFS instead of ZFS. And the installation process 
was pretty fast.

This leads me to the conclusion: ZFS on top of ZFS = don't; UFS on top of ZFS 
= ok

Can anybody verify that performance issue?

Regards
André
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Ross Walker
On Dec 2, 2009, at 6:57 AM, Brian McKerr   
wrote:



Hi all,

I have a home server based on SNV_127 with 8 disks;

2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool

This server performs a few functions;

NFS : for several 'lab' ESX virtual machines
NFS : mythtv storage (videos, music, recordings etc)
Samba : for home directories for all networked PCs

I backup the important data to external USB hdd each day.


I previously had a linux NFS server that I had mounted 'ASYNC' and,  
as one would expect, NFS performance was pretty good getting close  
to 900gb/s. Now that I have moved to opensolaris,  NFS performance  
is not very good, I'm guessing mainly due to the 'SYNC' nature of  
NFS.  I've seen various threads and most point at 2 options;


1. Disable the ZIL
2. Add independent log device/s

I happen to have 2 x 250GB Western Digital RE3 7200rpm (Raid  
edition, rated for 24x7 usage etc) hard drives sitting doing nothing  
and was wondering whether it might speed up NFS, and possibly  
general filesystem usage, by adding these devices as log devices to  
the data pool.  I understand that an SSD is considered ideal for log  
devices but I'm thinking that these 2 drives should at least be  
better than having the ZIL 'inside' the zpool.


If adding these devices, should I add them as mirrored or individual  
to get some sort of load balancing (according to zpool manpage) and  
perhaps a little bit more performance ?


I'm running ZFS version 19 which 'zpool upgrade -v' shows me as  
having 'log device removal' support. Can I easily remove these  
devices if I find that they have resulted in little/no performance  
improvements ?


Any help/tips greatly appreciated.


It wouldn't hurt to try, but I'd be surprised if it helped much, if at
all. The idea of a separate ZIL is to locate it on a device with lower
latency than the pool, which helps performance by keeping the log writes
off the pool disks.


What speed are you trying to achieve for writes? Wirespeed? Well, it's
achievable, but only with an app that uses larger block sizes and allows
more than one transaction in flight at a time.


I wouldn't disable the ZIL, but rather look at tuning the client side, or you
could invest in a controller with a large battery-backed write cache
and a good JBOD mode, or a small fast SSD drive.
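
As an example of that sort of client-side tuning (options and values here are
illustrative, not a recommendation), on a Linux NFS client:

# mount -o rw,hard,intr,vers=3,rsize=32768,wsize=32768 server:/export /mnt

Larger rsize/wsize lets each NFS RPC carry more data per round trip.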


-Ross
 
___

zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Brian McKerr
Hi all,

I have a home server based on SNV_127 with 8 disks;

2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool

This server performs a few functions;

NFS : for several 'lab' ESX virtual machines
NFS : mythtv storage (videos, music, recordings etc)
Samba : for home directories for all networked PCs

I backup the important data to external USB hdd each day.


I previously had a linux NFS server that I had mounted 'ASYNC' and, as one 
would expect, NFS performance was pretty good, getting close to 900gb/s. Now 
that I have moved to opensolaris, NFS performance is not very good, I'm 
guessing mainly due to the 'SYNC' nature of NFS.  I've seen various threads and 
most point at 2 options:

1. Disable the ZIL
2. Add independent log device/s

I happen to have 2 x 250GB Western Digital RE3 7200rpm (Raid edition, rated for 
24x7 usage etc) hard drives sitting doing nothing and was wondering whether it 
might speed up NFS, and possibly general filesystem usage, by adding these 
devices as log devices to the data pool.  I understand that an SSD is 
considered ideal for log devices but I'm thinking that these 2 drives should at 
least be better than having the ZIL 'inside' the zpool.

If adding these devices, should I add them as mirrored or individual to get 
some sort of load balancing (according to the zpool manpage) and perhaps a 
little bit more performance?

I'm running ZFS version 19, which 'zpool upgrade -v' shows as having 'log 
device removal' support. Can I easily remove these devices if I find that they 
have resulted in little/no performance improvement?
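
For reference (pool and device names below are placeholders), the two forms,
and removal, would look like:

# zpool add tank log mirror c2t0d0 c2t1d0    (one mirrored log device)
# zpool add tank log c2t0d0 c2t1d0           (two independent, load-balanced log devices)
# zpool remove tank c2t0d0                   (remove an individual log device again)

A mirrored log shows up as a named vdev such as mirror-1 and is removed by that name.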

Any help/tips greatly appreciated.

Cheers.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup issue

2009-12-02 Thread Jim Klimov
Hello all

Sorry for bumping an old thread, but now that snv_128 is due to appear as a 
public DVD download, I wonder: has this fix for zfs-accounting and other issues 
with zfs dedup been integrated into build 128?

We have a fileserver which is likely to have much redundant data and we'd like 
to clean up its space with zfs-deduping (even if that takes copying files over 
to a temp dir and back - so their common blocks are noticed by the code). Will 
build 128 be ready for the task - and increase our server's available space 
after deduping - or should we better wait for another one?

In general, were there any stability issues with snv_128 during internal/BFU 
testing?

TIA,
//Jim
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss