Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292Self Review]

2010-12-23 Thread Dmitry Sorokin
Yesterday I was able to import a zpool with a missing log device using
the "zpool import -f -m myzpool" command.
I had to boot from the Oracle Solaris Express Live CD. Then I just did
"zpool remove myzpool logdevice".
That's it. Now I've got my pool back with all the data and with ONLINE
status.
I had my zpool (with 8 x 500 GB disks) sitting unavailable for almost
6 months.
This was my Christmas present!
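
For anyone wanting to repeat this, the whole sequence was roughly the
following ("logdevice" stands for whatever name or GUID zpool status shows
for the missing slog):

# zpool import -f -m myzpool
# zpool status myzpool
# zpool remove myzpool logdevice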


Best regards,
Dmitry

Office phone:   905.625.6471 ext. 104
Cell phone:   416.529.1627

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Doyle
Sent: Sunday, August 01, 2010 1:40 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Fwd: zpool import despite missing log
[PSARC/2010/292Self Review]

A solution to this problem would be my early Christmas present! 

Here is how I lost access to an otherwise healthy mirrored pool two
months ago:

A box running snv_130 with two disks in a mirror and an iRAM
battery-backed ZIL device was shut down in an orderly fashion and powered
down normally.  While I was away on travel, the PSU in the PC died while in
its lowest-power standby state - this caused the Li battery in the iRAM
to discharge, and all of the SLOG contents in the DRAM went poof.

Powered the box back up... zpool import -f tank failed to bring the pool
back online.
After much research, I found the 'logfix' tool, got it to compile on
another snv_122 box, and followed the directions to synthesize a "forged"
log device header using the guid of the original device extracted from the
vdev list.  This failed to work, despite the binary tool running and some
inspection of the guids using zdb -l spoofed_new_logdev.
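
For reference, that inspection amounts to something like the following,
comparing the labels on the spoofed device against the guid taken from
the vdev list:

# zdb -l spoofed_new_logdev | grep -i guid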

What's intriguing is that zpool is not even properly reporting the
'missing device'.  See the output below from zpool, then zdb - notice
that zdb shows the remnants of a vdev for a log device, but with guid = 0.



# zpool import
  pool: tank
id: 6218740473633775200
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:
        tank         UNAVAIL  missing device
          mirror-0   ONLINE
            c0t1d0   ONLINE
            c0t2d0   ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.




 # zdb -e tank

Configuration for import:
vdev_children: 2
version: 22
pool_guid: 6218740473633775200
name: 'tank'
state: 0
hostid: 9271202
hostname: 'eon'
vdev_tree:
type: 'root'
id: 0
guid: 6218740473633775200
children[0]:
type: 'mirror'
id: 0
guid: 5245507142600321917
metaslab_array: 23
metaslab_shift: 33
ashift: 9
asize: 1000188936192
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 15634594394239615149
phys_path: '/p...@0,0/pci1458,b...@11/d...@2,0:a'
whole_disk: 1
DTL: 55
path: '/dev/dsk/c0t1d0s0'
devid:
'id1,s...@sata_st31000333as9te1jx8c/a'
children[1]:
type: 'disk'
id: 1
guid: 3144903288495510072
phys_path: '/p...@0,0/pci1458,b...@11/d...@1,0:a'
whole_disk: 1
DTL: 54
path: '/dev/dsk/c0t2d0s0'
devid:
'id1,s...@sata_st31000528as9vp2kwam/a'
children[1]:
type: 'missing'
id: 1
guid: 0




Re: [zfs-discuss] problem with zpool import - zil and cache drive are not displayed?

2010-08-03 Thread Dmitry Sorokin

I'm in the same situation as Darren - my log SSD device died completely.
Victor, could you please explain how you "mocked up the log device in a
file" so that zpool status started to show the device with UNAVAIL status?
I lost the latest zpool.cache file, but I was able to recover the GUID of
the log device from a backup copy of zpool.cache.

Thanks,
Dmitry


-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Victor
Latushkin
Sent: Tuesday, August 03, 2010 7:09 PM
To: Darren Taylor
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] problem with zpool import - zil and cache
drive are not displayed?


On Aug 4, 2010, at 12:23 AM, Darren Taylor wrote:

> Hi George,
> 
> I think you are right. The log device looks to have suffered a
complete loss; there is no data on the disk at all. The log device was
an "acard" RAM drive (with battery backup), but somehow it faulted,
clearing all data.
> 
> --victor gave me this advice, and queried about the zpool.cache-- 
> Looks like there's a hardware problem with c7d0 as it appears to
contain garbage. Do you have zpool.cache with this pool configuration
available?

Besides containing garbage, the former log device now appears to have a
different geometry and cannot be read in the higher LBA ranges. So I'd
say it is broken.

> c7d0 was the log device. I'm unsure what the next step is, but I'm
assuming there is a way to grab the drive's original config from the
zpool.cache file and apply it back to the drive?

I mocked up the log device in a file, and that made zpool import happier:

bash-4.0# zpool import
  pool: tank
id: 15136317365944618902
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.
        The fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank        DEGRADED
          raidz1-0  ONLINE
            c6t4d0  ONLINE
            c6t5d0  ONLINE
            c6t6d0  ONLINE
            c6t7d0  ONLINE
          raidz1-1  ONLINE
            c6t0d0  ONLINE
            c6t1d0  ONLINE
            c6t2d0  ONLINE
            c6t3d0  ONLINE
        cache
          c8d1
        logs
          c13d1s0   UNAVAIL  cannot open



bash-4.0# zpool import -fR / tank
cannot import 'tank': one or more devices is currently unavailable
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of July 21, 2010 03:49:50 AM NZST
        should correct the problem.  Approximately 91 seconds of data
        must be discarded, irreversibly.  After rewind, several
        persistent user-data errors will remain.  Recovery can be attempted
        by executing 'zpool import -F tank'.  A scrub of the pool
        is strongly recommended after recovery.
bash-4.0#

So if you are happy with the results, you can perform the actual import with:

zpool import -fF -R / tank

You should then be able to remove the log device completely.
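
Putting that together, the finish would look roughly like this (log device
name as shown in the import preview above; the scrub is the one the import
message recommends):

bash-4.0# zpool import -fF -R / tank
bash-4.0# zpool remove tank c13d1s0
bash-4.0# zpool scrub tank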

regards
victor







Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292Self Review]

2010-07-30 Thread Dmitry Sorokin
Thanks for the update, Robert.

 

Currently I have a failed zpool with the slog missing, which I was unable to
recover, although I was able to find out what the GUID was for the slog
device (below is the output of the zpool import command).

I couldn't compile the logfix binary either, so I have run out of ideas for
how to recover this zpool.

So for now it just sits there untouched.

This proposed improvement to zfs definitely gives me hope.

When do you think it'll be implemented (roughly - this year, early next
year), and would I be able to import this pool at its current
version 22 (snv_129)?

 

 

[r...@storage ~]# zpool import
  pool: tank
    id: 1346464136813319526
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank         UNAVAIL  missing device
          raidz2-0   ONLINE
            c4t0d0   ONLINE
            c4t1d0   ONLINE
            c4t2d0   ONLINE
            c4t3d0   ONLINE
            c4t4d0   ONLINE
            c4t5d0   ONLINE
            c4t6d0   ONLINE
            c4t7d0   ONLINE

[r...@storage ~]#

 

Best regards,

Dmitry

 

 

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Robert
Milkowski
Sent: Wednesday, July 28, 2010 7:12 PM
To: ZFS Discussions
Subject: [zfs-discuss] Fwd: zpool import despite missing log
[PSARC/2010/292Self Review]

 


fyi

-- 
Robert Milkowski
http://milek.blogspot.com


 Original Message 
Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley <mailto:tim.ha...@oracle.com>
To: psarc-...@sun.com
CC: zfs-t...@sun.com

 

I am sponsoring the following case for George Wilson.  Requested binding
is micro/patch.  Since this is a straight-forward addition of a command
line option, I think it qualifies for self review.  If an ARC member
disagrees, let me know and I'll convert to a fast-track.
 
Template Version: @(#)sac_nextcase 1.70 03/30/10 SMI
This information is Copyright (c) 2010, Oracle and/or its affiliates. 
All rights reserved.
1. Introduction
1.1. Project/Component Working Name:
 zpool import despite missing log
1.2. Name of Document Author/Supplier:
 Author:  George Wilson
1.3  Date of This Document:
26 July, 2010
 
4. Technical Description
 
OVERVIEW:
 
 ZFS maintains a GUID (global unique identifier) on each device, and
 the sum of all GUIDs of a pool is stored in the ZFS uberblock.
 This sum is used to determine the availability of all vdevs
 within a pool when a pool is imported or opened.  Pools which
 contain a separate intent log device (e.g. a slog) will fail to
 import when that device is removed or is otherwise unavailable.
 This proposal aims to address this particular issue.
 
PROPOSED SOLUTION:
 
 This fast-track introduces a new command line flag to the
 'zpool import' sub-command.  This new option, '-m', allows
 pools to import even when a log device is missing.  The contents
 of that log device are obviously discarded and the pool will
 operate as if the log device were offlined.
 
MANPAGE DIFFS:
 
   zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
-  [-D] [-f] [-R root] [-n] [-F] -a
+  [-D] [-f] [-m] [-R root] [-n] [-F] -a
 
 
   zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
-  [-D] [-f] [-R root] [-n] [-F] pool |id [newpool]
+  [-D] [-f] [-m] [-R root] [-n] [-F] pool |id [newpool]
 
   zpool import [-o mntopts] [ -o property=value] ... [-d dir |
- -c cachefile] [-D] [-f] [-n] [-F] [-R root] -a
+ -c cachefile] [-D] [-f] [-m] [-n] [-F] [-R root] -a
 
   Imports all  pools  found  in  the  search  directories.
   Identical to the previous command, except that all pools
 
+ -m
+
+Allows a pool to import when there is a missing log device
 
EXAMPLES:
 
1). Configuration with a single intent log device:
 
# zpool status tank
   pool: tank
state: ONLINE
 scan: none requested
 config:
 
 NAMESTATE READ WRITE CKSUM
 tankONLINE   0 0 0
   c7t0d0ONLINE   0 0 0
 logs
   c5t0d0ONLINE   0 0 0
 
errors: No known data errors
 
# zpool import tank
The devices below are missing, use '-m' to import the pool anyway:
 c5t0d0 [log]
 
cannot import 'tank': one or more devices is currently unavailable
 
# zpool import -m tank
# zpool status tank
   pool: tank
  state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
exist for
   

Re: [zfs-discuss] ZIL SSD failed

2010-07-13 Thread Dmitry Sorokin
Hi Richard,

What happened is this SSD gave some IO errors and the pool became degraded, so
after the machine got rebooted, I found that the pool had become unavailable.
The SSD drive itself is toast, as the BIOS now reports it as 8 GB in size and
the name as "Inuem SS E Cootmoader!", so all the partitions are gone and it is
completely unusable at the moment.
I was able to find the GUID of the slog from a zpool.cache file that I backed
up in January this year. However, I was unable to compile the logfix binary,
and the one provided by the author (compiled on snv_111) dumps core on snv_129
and snv_134.
Does anyone have a logfix binary compiled for snv_129?
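
For what it's worth, the GUID can usually be pulled straight out of a saved
cache file with zdb instead of logfix; a rough sketch, assuming the backup
copy is at /backup/zpool.cache (path illustrative):

# zdb -U /backup/zpool.cache -C
(prints the cached pool configuration; the slog is the child vdev with
is_log: 1, and its 'guid' field is the value needed)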

Thanks,
Dmitry


-Original Message-
From: Richard Elling [mailto:richard.ell...@gmail.com] 
Sent: Tuesday, July 13, 2010 5:12 PM
To: Dmitry Sorokin
Subject: Re: [zfs-discuss] ZIL SSD failed

Is the drive still there? If so, then try removing it or temporarily zeroing 
out the s0 label. If that allows zpool import -F to work, then please let me 
know and I'll add your post to a new bug report. I have seen something like 
this recently and have been trying to reproduce. 
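
In case it helps, zeroing the front labels amounts to something like this
(the first two of the four vdev labels occupy the first 512 KB of the slice;
the device name here is only a placeholder, and this must only ever be
pointed at the dead log device):

# dd if=/dev/zero of=/dev/rdsk/c5t0d0s0 bs=256k count=2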

 -- richard

On Jul 12, 2010, at 6:22 PM, "Dmitry Sorokin"  wrote:

> I have/had Intel M25-E 32GB SSD drive as ZIL/cache device (2 GB ZIL 
> slice0 and the rest is cache slice1)
> 
> The SSD drive has failed and the zpool is now unavailable.
> 
> Is there any way to import the pool/recover data, even with some of the
> latest transactions lost?
> 
> I've tried zdb -e -bcsvL  but it didn't work.
> 
>  
> 
> Below are the details:
> 
>  
> 
> [r...@storage ~]# uname -a
> 
> SunOS storage 5.11 snv_129 i86pc i386 i86pc
> 
>  
> 
> [r...@storage ~]# zpool import
> 
>   pool: neosys
> 
> id: 1346464136813319526
> 
> state: UNAVAIL
> 
> status: The pool was last accessed by another system.
> 
> action: The pool cannot be imported due to damaged devices or data.
> 
>see: http://www.sun.com/msg/ZFS-8000-EY
> 
> config:
> 
>  
> 
> neosys   UNAVAIL  missing device
> 
>   raidz2-0   ONLINE
> 
> c4t0d0   ONLINE
> 
> c4t1d0   ONLINE
> 
> c4t2d0   ONLINE
> 
> c4t3d0   ONLINE
> 
> c4t4d0   ONLINE
> 
> c4t5d0   ONLINE
> 
> c4t6d0   ONLINE
> 
> c4t7d0   ONLINE
> 
>  
> 
> [r...@storage ~]# zdb -e neosys
> 
>  
> 
> Configuration for import:
> 
> vdev_children: 2
> 
> version: 22
> 
> pool_guid: 1346464136813319526
> 
> name: 'neosys'
> 
> state: 0
> 
> hostid: 577477
> 
> hostname: 'storage'
> 
> vdev_tree:
> 
> type: 'root'
> 
> id: 0
> 
> guid: 1346464136813319526
> 
> children[0]:
> 
> type: 'raidz'
> 
> id: 0
> 
> guid: 12671265726510370964
> 
> nparity: 2
> 
> metaslab_array: 25
> 
> metaslab_shift: 35
> 
> ashift: 9
> 
> asize: 4000755744768
> 
> is_log: 0
> 
> children[0]:
> 
> type: 'disk'
> 
> id: 0
> 
> guid: 10831801542309994254
> 
> phys_path: 
> '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@0,0:a'
> 
> whole_disk: 1
> 
> DTL: 3489
> 
> path: '/dev/dsk/c4t0d0s0'
> 
> devid: 'id1,s...@n5000cca32cc21642/a'
> 
> children[1]:
> 
> type: 'disk'
> 
> id: 1
> 
> guid: 39402223705908332
> 
> phys_path: 
> '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@1,0:a'
> 
> whole_disk: 1
> 
> DTL: 3488
> 
> path: '/dev/dsk/c4t1d0s0'
> 
> devid: 'id1,s...@n5000cca32cc1f061/a'
> 
> children[2]:
> 
> type: 'disk'
> 
> id: 2
> 
> guid: 5642566785254158202
> 
> phys_path: 
> '/p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@2,0:a'
> 
> whole_disk: 1
> 
> DTL: 3487
> 
> path: '/dev/dsk/c4t2d0s0'
> 
>   

[zfs-discuss] ZIL SSD failed

2010-07-12 Thread Dmitry Sorokin

whole_disk: 1

DTL: 3469

path: '/dev/dsk/c4t7d0s0'

    devid: 'id1,s...@n5000cca357ec9b07/a'

children[1]:

type: 'missing'

id: 1

guid: 0

zdb: can't open 'neosys': No such device or address

 

[r...@storage ~]# zdb -e -bcsvL neosys

zdb: can't open 'neosys': No such device or address

 

Any help would be greatly appreciated.

 

Thanks,

Dmitry

 





Re: [zfs-discuss] ZFS dedup, snapshots and NFS

2010-05-26 Thread Dmitry Sorokin
Thanks a lot for the heads up, Garrett. I'll be watching for an update from
Nexenta then.

 

Dmitry

 

 

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Garrett D'Amore
Sent: Wednesday, May 26, 2010 3:57 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS dedup, snapshots and NFS

 

On 5/26/2010 11:47 AM, Dmitry Sorokin wrote: 

Hi All,

 

I was just wondering if the issue that affects NFS availability when
deleting large snapshots on ZFS datasets with dedup enabled has been fixed.


There is a fix for this in b141 of the OpenSolaris source product.  We are
looking at including this fix in a forthcoming patch update/micro update to
the Nexenta product.  So stay tuned for more on that.

- Garrett




 

More on the issue here:

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37288.html

and here:

http://opensolaris.org/jive/thread.jspa?messageID=474765&tstart=0

 

I have a similar problem - when destroying a snapshot (a few gigabytes in
size) on a ZFS dataset with dedup enabled (it was enabled before and at the
time the snapshot was taken, but disabled later), NFS hangs and becomes
unavailable to remote hosts (on the same gigabit LAN).

I have decent hardware with a quad-core Intel Nehalem CPU, 12 GB RAM and
Intel SSD drives configured as read and write cache for this zfs pool. I'm
running Nexenta Community edition 3.0.1.

I've posted to the Nexenta community support forum, but they referred me to
the OpenSolaris community mailing list, as they explained it's not a Nexenta
issue but rather an OpenSolaris one.

 

Thanks,

Dmitry

 

 

 
 
  

 





[zfs-discuss] ZFS dedup, snapshots and NFS

2010-05-26 Thread Dmitry Sorokin
Hi All,

 

I was just wondering if the issue that affects NFS availability when
deleting large snapshots on ZFS datasets with dedup enabled has been fixed.

 

More on the issue here:

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37288.html

and here:

http://opensolaris.org/jive/thread.jspa?messageID=474765&tstart=0

 

I have a similar problem - when destroying a snapshot (a few gigabytes in
size) on a ZFS dataset with dedup enabled (it was enabled before and at the
time the snapshot was taken, but disabled later), NFS hangs and becomes
unavailable to remote hosts (on the same gigabit LAN).
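
A quick way to check where the dataset and the pool stand with dedup before
destroying a snapshot (dataset name is illustrative):

# zfs get dedup tank/data
# zpool get dedupratio tank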

I have decent hardware with a quad-core Intel Nehalem CPU, 12 GB RAM and
Intel SSD drives configured as read and write cache for this zfs pool. I'm
running Nexenta Community edition 3.0.1.

I've posted to the Nexenta community support forum, but they referred me to
the OpenSolaris community mailing list, as they explained it's not a Nexenta
issue but rather an OpenSolaris one.

 

Thanks,

Dmitry

 

 





Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-15 Thread Dmitry
"free nexentastor community edition = commercial edition without support,"
You have opened my eyes :)
I'll start downloading; tomorrow I'll have a look.


Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-15 Thread Dmitry
Thanks for the tips.
I tried EON, but it is too minimalistic; I plan to use this server for other
purposes as well (a monitoring server, etc.).
Nexenta is a strange hybrid, and whether to use the non-commercial version
without its full capabilities, I don't know...

I'll try napp-it for sure.


Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Dmitry
Yesterday I received a victim.

"SuperServer 5026T-3RF 19" 2U, Intel X58, 1xCPU LGA1366 8xSAS/SATA hot-swap 
drive bays, 8 ports SAS LSI 1068E, 6 ports SATA-II Intel ICH10R, 2xGigabit 
Ethernet"

and I have two options: Openfiler vs. OpenSolaris :)


[zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-14 Thread Dmitry
Which build is the most stable, mainly for NAS?
I plan a NAS with ZFS + CIFS and iSCSI.

Thanks


[zfs-discuss] Recover after zpool add -f ? Is it possible?

2009-09-06 Thread Dmitry M. Reznitsky
Hello.

Recently I upgraded one of my machines to OpenSolaris 2009.06; the box has a
few HDDs: 1) the main one with the OS, etc. and 2) archive and ...
After installing, I created users, environment, etc. on HDD (1), and then
wanted to mount the existing ZFS filesystems for the rest (from (2), etc.).
And by mistake, for (2) I did zpool add -f space c9d0.

That's it. "space" pool is empty. 

zpool import -D shows nothing.
zpool list shows the following for this drive:
space 696G 76K 696G 0% ONLINE -

So, the question is simple: is it possible to recover it (I mean what was
on c9d0 before I did zpool add)?
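
One diagnostic that may at least show what ZFS has already written onto the
disk is a label dump of the re-added device (device/slice name illustrative):

# zdb -l /dev/dsk/c9d0s0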

Thanks.


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2009-01-10 Thread Dmitry Razguliaev
At the time of writing that post, no, I hadn't run zpool iostat -v 1. However,
I ran it after that. The operations counts in the iostat output changed from
1 for every device in the raidz to somewhere between 20 and 400 for the raidz
volume, and from 3 to somewhere between 200 and 450 for the single-device zfs
volume, but the final result remained the same: the single-disk zfs volume is
only about half the speed of the 9-disk raidz zfs volume, which seems very
strange. My expectation was a difference in performance in the range of 6-7
times.

Best Regards, Dmitry


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-12-20 Thread Dmitry Razguliaev
Hi, I have run into a similar problem to Ross's, but still have not found a
solution. I have a raidz of 9 SATA disks connected to the internal and 2
external SATA controllers. Bonnie++ gives me the following results:
nexenta,8G,104393,43,159637,30,57855,13,77677,38,56296,7,281.8,1,16,26450,99,+,+++,29909,93,24232,99,+,+++,13912,99
while running on a single disk it gives me the following:
nexenta,8G,54382,23,49141,8,25955,5,58696,27,60815,5,270.8,1,16,19793,76,+,+++,32637,99,22958,99,+,+++,10490,99
The performance difference between those two seems to be too small. I checked
zpool iostat -v during bonnie++'s intelligent writing phase and, every time,
it looks more or less like this:

               capacity     operations    bandwidth
pool          used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
iTank       7.20G  2.60T     12     13  1.52M  1.58M
  raidz1    7.20G  2.60T     12     13  1.52M  1.58M
    c8d0        -      -      1      1   172K   203K
    c7d1        -      -      1      1   170K   203K
    c6t0d0      -      -      1      1   172K   203K
    c8d1        -      -      1      1   173K   203K
    c9d0        -      -      1      1   174K   203K
    c10d0       -      -      1      1   174K   203K
    c6t1d0      -      -      1      1   175K   203K
    c5t0d0s0    -      -      1      1   176K   203K
    c5t1d0s0    -      -      1      1   176K   203K

As far as I understand it, each vdev executes only 1 I/O at a time. However,
on a single device, zpool iostat -v gives me the following:


               capacity     operations    bandwidth
pool          used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       5.47G   181G      3      3   441K   434K
  c7d0s0    5.47G   181G      3      3   441K   434K
----------  -----  -----  -----  -----  -----  -----

In this case the device performs 3 I/Os at a time, which gives it much higher
bandwidth per device.

Is there any way to increase the I/O counts for my iTank zpool?
I'm running OS-11.2008 on MSI P45 Diamond with 4G of memory
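
For reference, the same counters can be watched continuously while bonnie++
runs by giving zpool iostat an interval in seconds:

# zpool iostat -v iTank 1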

Best Regards, Dmitry


[zfs-discuss] Lack of physical memory evidences

2007-10-17 Thread Dmitry Degrave
In the pre-ZFS era, we had observable parameters like the scan rate and
anonymous page-in/-out counters to discover situations where a system was
experiencing a lack of physical memory. With ZFS, it's difficult to use the
mentioned parameters to figure out situations like that. Does anyone have an
idea of what we can use for the same purpose now?
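
One counter that is still directly observable is the ZFS ARC size and its
target, e.g. via the arcstats kstat; a sketch (statistic names as exposed by
the zfs kernel module, exact set may vary by build):

# kstat -p zfs:0:arcstats:size
# kstat -p zfs:0:arcstats:c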

Thanks in advance,
Dmeetry
 
 


[zfs-discuss] ZFS quota

2007-08-07 Thread Dmitry
Hello

Is there a way to limit the size of a filesystem not including snapshots?
Or, even better, the size of the data on the filesystem regardless of
compression.
If not, is it planned?
It is hard to explain to a user that it is normal that, after deleting his
files, he did not get any more space. It is even harder to ask him to use no
more than half of his given quota.
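
For what it's worth, later ZFS builds added a refquota property that limits
only the space the filesystem itself references, excluding snapshots; a
sketch (dataset name illustrative):

# zfs set refquota=10G tank/home/user
# zfs get quota,refquota tank/home/user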
 
 


[zfs-discuss] Re: Samba and ZFS ACL Question

2007-05-16 Thread Dmitry Tourkin
Maybe this link could help you:

http://www.nabble.com/VFS-module-handling-ACL-on-ZFS-t3730348.html
 
 