Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-10 Thread Jim Klimov
Likewise. Just plain doesn't work.

Not required though, since the command-line is okay and way powerful ;)

And there are some more interesting challenges to work on, so I haven't pushed 
this problem any further yet.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs promote and ENOSPC

2008-06-10 Thread Robin Guo
Hi, Mike,

  This looks like 6452872; 'zfs promote' needs enough free space to succeed.
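
  A rough pre-check (hypothetical dataset names) is to compare how much the
  clone's origin snapshot references against the pool's free space:

    # find the clone's origin snapshot
    zfs get -H -o value origin tank/clone
    # space the promote has to account for vs. space available in the pool
    zfs list -o name,refer tank/fs@snap
    zfs list -o name,avail tank

  Going by Mike's numbers below, the promote only succeeded once the free
  space exceeded the origin snapshot's REFER.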

  - Regards,

Mike Gerdts wrote:
> I needed to free up some space to be able to create and populate a new
> upgrade.  I was caught off guard by the amount of free space required
> by "zfs promote".
>
> bash-3.2# uname -a
> SunOS indy2 5.11 snv_86 i86pc i386 i86pc
>
> bash-3.2# zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> rpool 5.49G  1.83G55K  /rpool
> [EMAIL PROTECTED] 46.5K  -  49.5K  -
> rpool/ROOT5.39G  1.83G18K  none
> rpool/ROOT/2008.052.68G  1.83G  3.38G  legacy
> rpool/ROOT/2008.05/opt 814M  1.83G  22.3M  legacy
> rpool/ROOT/2008.05/[EMAIL PROTECTED]43K  -  22.3M  -
> rpool/ROOT/2008.05/opt/SUNWspro739M  1.83G   739M  legacy
> rpool/ROOT/2008.05/opt/netbeans   52.9M  1.83G  52.9M  legacy
> rpool/ROOT/preview2   2.71G  1.83G  2.71G  /mnt
> rpool/ROOT/[EMAIL PROTECTED] 6.13M  -  2.71G  -
> rpool/ROOT/preview2/opt 27K  1.83G  22.3M  legacy
> rpool/export  89.8M  1.83G19K  /export
> rpool/export/home 89.8M  1.83G  89.8M  /export/home
>
> bash-3.2# zfs promote rpool/ROOT/2008.05
> cannot promote 'rpool/ROOT/2008.05': out of space
>
> Notice that I have 1.83 GB of free space and the snapshot from which
> the clone was created (rpool/ROOT/[EMAIL PROTECTED]) is 2.71 GB.  It
> was not until I had more than 2.71 GB of free space that I could
> promote rpool/ROOT/2008.05.
>
> This behavior does not seem to be documented.  Is it a bug in the
> documentation or zfs?
>
>   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-10 Thread Jean-Paul Rivet
> snv_89 is the same. The ZFS Administration console
> worked fine to create my first 2 pools. I've been
> unable to use it since then. I have the same stack
> trace errors.
> 
> Did you find a workaround for this issue?
> 
> -Rick

Nothing yet... dropping to the command line for the moment. Looking forward to 
when it's working, that's for sure!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [blog post] Trying to corrupt data in a ZFS mirror

2008-06-10 Thread Dave Bechtel
Way cool stuff man, nice post! :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-10 Thread Rick
snv_89 is the same. The ZFS Administration console worked fine to create my 
first 2 pools. I've been unable to use it since then. I have the same stack 
trace errors.

Did you find a workaround for this issue?

-Rick
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-10 Thread James C. McPherson
Brandon High wrote:
> On Thu, Jun 5, 2008 at 9:12 PM, Joe Little <[EMAIL PROTECTED]> wrote:
>> winner is going to be the newer SAS/SATA mixed HBAs from LSI based on
>> the 1068 chipset, which Sun has been supporting well in newer
>> hardware.
>>
>> http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html
> 
> Joe --
> 
> What about the LSISAS3081E-R? Does it use the same drivers as the
> other LSI controllers?
> 
> http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html

That card is very similar to ones sold by Sun. It should
work fine out of the box with the mpt(7D) driver.
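
A quick sanity check once the card is installed (a rough sketch, nothing
card-specific assumed) is to confirm it bound to mpt and that its disks show up:

  # show driver bindings; the LSI HBA should appear bound to mpt
  prtconf -D | grep -i mpt
  # list the attachment points / disks the controller exposes
  cfgadm -al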


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-10 Thread Brandon High
On Thu, Jun 5, 2008 at 9:12 PM, Joe Little <[EMAIL PROTECTED]> wrote:
> winner is going to be the newer SAS/SATA mixed HBAs from LSI based on
> the 1068 chipset, which Sun has been supporting well in newer
> hardware.
>
> http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html

Joe --

What about the LSISAS3081E-R? Does it use the same drivers as the
other LSI controllers?

http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-10 Thread A Darren Dunham
On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
> However, some apps will probably be very unhappy if i/o takes 60 seconds 
> to complete.

It's certainly not uncommon for that to occur in an NFS environment.
All of our applications seem to hang on just fine for minor planned and
unplanned outages.

Would the apps behave differently in this case?  (I'm certainly not
thinking of a production database for such a configuration).

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-10 Thread Torrey McMahon
Richard Elling wrote:
> Tobias Exner wrote:
>   
>> Hi John,
>>
>> I've done some tests with a SUN X4500 with zfs and "MAID" using the 
>> powerd of Solaris 10 to power down the disks which weren't accessed for 
>> a configured time. It's working fine...
>> 
>> The only thing I ran into was that it took about a 
>> minute to power on 4 disks in a zfs pool. The problem seems to be that 
>> the powerd starts the disks sequentially.
>> 
>
> Did you power down disks or spin down disks?  It is relatively
> easy to spin down (or up) disks with luxadm stop (start).  If a
> disk is accessed, then it will spin itself up.  By default, the timeout
> for disk response is 60 seconds, and most disks can spin up in
> less than 60 seconds.

However, some apps will probably be very unhappy if i/o takes 60 seconds 
to complete.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't rm file when "No space left on device"...

2008-06-10 Thread Brad Diggs
Great point.  I hadn't thought of it in that way.
I haven't tried truncating a file prior to trying
to remove it.  Either way though, I think it is a
bug if, once the filesystem fills up, you can't
remove a file.
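
For the archives, the truncate-then-remove workaround under discussion is
roughly the sketch below (file name hypothetical, and whether the truncate
itself succeeds on a 100% full pool is exactly the open question here):

  # release the file's blocks first, then remove the now-empty file
  cp /dev/null /tank/victim.dat
  rm /tank/victim.dat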

Brad

On Thu, 2008-06-05 at 21:13 -0600, Keith Bierman wrote:
> On Jun 5, 2008, at 8:58 PM, Brad Diggs wrote:
> 
> > Hi Keith,
> >
> > Sure you can truncate some files but that effectively corrupts
> > the files in our case and would cause more harm than good. The
> > only files in our volume are data files.
> >
> 
> 
> 
> So an rm is ok, but a truncation is not?
> 
> Seems odd to me, but if that's your constraint so be it.
> 
-- 
-
  _/_/_/  _/_/  _/ _/   Brad Diggs
 _/  _/_/  _/_/   _/Communications Area Market
_/_/_/  _/_/  _/  _/ _/ Senior Directory Architect
   _/  _/_/  _/   _/_/
  _/_/_/   _/_/_/   _/ _/   Office:  972-992-0002
E-Mail:  [EMAIL PROTECTED]
 M  I  C  R  O  S  Y  S  T  E  M  S

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space map causing slow performance

2008-06-10 Thread Victor Latushkin
Scott wrote:
> Hello,
> 
> I have several ~12TB storage servers using Solaris with ZFS.  Two of
> them have recently developed performance issues where the majority of
> time in an spa_sync() will be spent in the space_map_*() functions.
> During this time, "zpool iostat" will show 0 writes to disk, while it
> does hundreds or thousands of small (~3KB) reads each second,
> presumably reading space map data from disk to find places to put the
> new blocks.  The result is that it can take several minutes for an
> spa_sync() to complete, even if I'm only writing a single 128KB
> block.
> 
> Using DTrace, I can see that space_map_alloc() frequently returns -1
> for 128KB blocks.  From my understanding of the ZFS code, that means
> that one or more metaslabs has no 128KB blocks available.  Because of
> that, it seems to be spending a lot of time going through different
> space maps which aren't able to all be cached in RAM at the same
> time, thus causing bad performance as it has to read from the disks.
> The on-disk space map size seems to be about 500MB.

This indeed sounds like ZFS is trying to find bigger chunks of properly 
aligned free space and failing to find them.

> I assume the simple solution is to leave enough free space available
> so that the space map functions don't have to hunt around so much.
> This problem starts happening when there's about 1TB free out of the
> 12TB.  It seems like such a shame to waste that much space, so if
> anyone has any suggestions, I'd be glad to hear them.

Although the fix for "6596237 Stop looking and start ganging", as suggested
by Sanjeev, will provide some relief here, you are running your pool at 
92% capacity, so it may be time to consider expanding it.

> 1) Is there anything I can do to temporarily fix the servers that are
> having this problem? They are production servers, and I have
> customers complaining, so a temporary fix is needed.

Setting the ZFS recordsize to a value smaller than the default 128K may help,
but only temporarily.

> 2) Is there any sort of tuning I can do with future servers to
> prevent this from becoming a problem?  Perhaps a way to make sure all
> the space maps are always in RAM?

The fix for 6596237 will help improve performance in such cases, so you
should make sure it is installed once it becomes available.

The ability to defragment a pool could be useful as well.

> 3) I set recordsize=32K and turned off compression, thinking that
> should fix the performance problem for now.  However, using a DTrace
> script to watch calls to space_map_alloc(), I see that it's still
> looking for 128KB blocks (!!!) for reasons that are unclear to me,
> thus it hasn't helped the problem.

Changing recordsize affects the block sizes ZFS uses for data blocks; it may
still require bigger blocks for metadata.

DTrace may help you better understand what is causing ZFS to try to
allocate bigger blocks. For example, larger blocks may still be used for the ZIL.
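
As a rough sketch (assuming the fbt entry/return probes on space_map_alloc()
behave this way on your build), something like this shows which allocation
sizes are failing:

  dtrace -n '
  fbt::space_map_alloc:entry  { self->sz = arg1; }
  fbt::space_map_alloc:return /self->sz/
  {
      /* aggregate by requested size and whether the allocation failed (-1) */
      @[self->sz, (int64_t)arg1 == -1 ? "failed" : "ok"] = count();
      self->sz = 0;
  }'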

Wbr,
victor

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Growing root pool ?

2008-06-10 Thread A Darren Dunham
On Tue, Jun 10, 2008 at 11:33:36AM -0700, Wyllys Ingersoll wrote:
> I'm running build 91 with ZFS boot.  It seems that ZFS will not allow
> me to add an additional partition to the current root/boot pool
> because it is a bootable dataset.  Is this a known issue that will be
> fixed or a permanent limitation?

The current limitation is that a bootable pool must consist of a single disk
or a two-way mirror.  When your data is striped across multiple disks,
booting becomes harder.
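
In practice that means you can add redundancy to the root pool but not
capacity. A minimal sketch with hypothetical device names:

  # allowed: turn the single-disk root pool into a two-way mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # rejected for a bootable pool: adding another top-level vdev (a stripe)
  zpool add rpool c0t2d0s0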

From a post to zfs-discuss about two months ago:

   ... we do have plans to support booting from RAID-Z.  The design is
   still being worked out, but it's likely that it will involve a new
   kind of dataset which is replicated on each disk of the RAID-Z pool,
   and which contains the boot archive and other crucial files that the
   booter needs to read.  I don't have a projected date for when it will
   be available.  It's a lower priority project than getting the install
   support for zfs boot done.

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Growing root pool ?

2008-06-10 Thread Wyllys Ingersoll
I'm running build 91 with ZFS boot.  It seems that ZFS will not allow me to add 
an additional partition to the current root/boot pool because it is a bootable 
dataset.  Is this a known issue that will be fixed or a permanent limitation?  

-Wyllys
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Root Install with Nevada build 90

2008-06-10 Thread Otmar Meier
Have a look at the screenshots in my article (in German): 
http://otmanix.de/2008/06/07/heureka-zfs-boot-auf-sparc-betriebsbereit/

Best regards, Otmanix
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot delete errored file

2008-06-10 Thread Brandon High
On Tue, Jun 10, 2008 at 9:12 AM, Ben Middleton <[EMAIL PROTECTED]> wrote:
> I'll still try a long memtest run, followed by a rebuild of the errored pool. 
> I'll have a read around to see if there's any way of making the memory more 
> stable on this mobo.

Run it at 800MHz. I have an MSI P35 Platinum for my Windows gaming
system, and after trying to get my 1066 memory to run stably at speed,
I gave up and now run it at 800. You should try reducing the memory speed
and relaxing the timings to 5-5-5-15 to see if that helps.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-10 Thread Richard Elling
Tobias Exner wrote:
> Hi John,
>
> I've done some tests with a SUN X4500 with zfs and "MAID" using the 
> powerd of Solaris 10 to power down the disks which weren't accessed for 
> a configured time. It's working fine...
>
> The only thing I ran into was that it took about a 
> minute to power on 4 disks in a zfs pool. The problem seems to be that 
> the powerd starts the disks sequentially.

Did you power down disks or spin down disks?  It is relatively
easy to spin down (or up) disks with luxadm stop (start).  If a
disk is accessed, then it will spin itself up.  By default, the timeout
for disk response is 60 seconds, and most disks can spin up in
less than 60 seconds.

> I tried to open a RFE... but until now without success.
>

Perhaps because disks will spin up when an access is requested,
so to solve your "problem" you'd have to make sure that all of
a set of disks are accessed when any in the set are accessed --
butt-ugly.

NB. back when I had a largish pile of smallish disks hanging
off my workstation for testing, a simple cron job running
luxadm stop helped my energy bill :-)
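
Something along these lines, as a sketch (device paths hypothetical; take
them from format or luxadm probe on the real box):

  # crontab entry: spin the test disks down every evening at 19:00
  0 19 * * * /usr/sbin/luxadm stop /dev/rdsk/c2t0d0s2 /dev/rdsk/c2t1d0s2
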
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-10 Thread Tobias Exner
Hi John,

I've done some tests with a SUN X4500 with zfs and "MAID" using the
powerd of Solaris 10 to power down the disks which weren't accessed for a
configured time. It's working fine...

The only thing I ran into was that it took about a
minute to power on 4 disks in a zfs pool. The problem seems to be that
the powerd starts the disks sequentially.
I tried to open a RFE... but until now without success.
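
For reference, the kind of /etc/power.conf entries involved look roughly like
this (device path hypothetical, threshold whatever you configure); run
pmconfig afterwards so powerd picks the changes up:

  # spin a disk down after 30 minutes of idle time
  autopm              enable
  device-thresholds   /pci@0,0/pci1022,7458@2/pci11ab,11ab@1/disk@0,0   30m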


kind regards,

Tobias Exner

eo ipso Systeme GmbH



Mertol Ozyoney schrieb:

  Hi;

If you want to use ZFS's special ability to pool all the storage together to
provide thin-provisioning-like functionality, this will work against MAID.
However, there is always the option to set up ZFS just like any other FS
(i.e. one disk - one FS).

By the way, if I am not mistaken, MAID-like functionality is built into
Solaris.
I think Solaris gurus should answer this part, but I think there is a command
to enable MAID-like functionality on SATA drives.

Mertol 


Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of John Kunze
Sent: Friday, June 06, 2008 7:29 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS conflict with MAID?

My organization is considering an RFP for MAID storage and we're
wondering about potential conflicts between MAID and ZFS.

We want MAID's power management benefits but are concerned
that what we understand to be ZFS's use of dynamic striping across
devices with filesystem metadata replication and cache syncing will
tend to keep disks spinning that the MAID is trying to spin down.
Of course, we like ZFS's large namespace and dynamic memory
pool resizing ability.

Is it possible to configure ZFS to maximize the benefits of MAID?

-John

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
John A. Kunze  [EMAIL PROTECTED]
California Digital LibraryWork: +1-510-987-9231
415 20th St, #406 http://dot.ucop.edu/home/jak/
Oakland, CA  94612 USA University of California
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot delete errored file

2008-06-10 Thread Ben Middleton
Hi,

It's an ASUS P5K-WS board with 2GB of Corsair TwinX DDR2 8500 1066MHz non-ECC 
memory. The board uses the Intel P35 chipset - it also will not support ECC 
RAM. TBH, this is probably the last time I'll get an ASUS, as this is the 
second board I've gone through - the first one died for no particular reason. 
I've been recommended a Supermicro C2SBX mobo - this will take my existing 
Supermicro PCI-X card as well as my current Intel CPU. The only problem is that 
it's DDR3 - and ECC DDR3 RAM is pretty hard to come by right now.

I'll still try a long memtest run, followed by a rebuild of the errored pool. 
I'll have a read around to see if there's any way of making the memory more 
stable on this mobo.

Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot delete errored file

2008-06-10 Thread Brandon High
On Tue, Jun 10, 2008 at 8:01 AM, Ben Middleton <[EMAIL PROTECTED]> wrote:
> Today's findings are that the cksum errors appear on the new disk on the 
> other controller too - so I've ruled out controllers & cables. It's probably 
> as Jeff says - just got to figure out now how to prove the memory is duff.

How much memory do you have and what chipset / controller are you using?

There are some controllers that claim to do DMA to 64-bit addresses
but don't actually support it, which causes errors on machines with >
4GB of memory. The SB600 is one example.

If it is bad memory that has somehow passed memtest, swapping the
memory for known good (preferably ECC) memory is one option to
diagnose it.
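
One way to test the >4GB DMA theory without pulling DIMMs (a rough sketch,
temporary test only) is to cap usable memory below 4GB in /etc/system and see
whether the cksum errors stop; physmem is in pages, so with 4KB pages about
3.5GB works out to:

  * /etc/system: temporary test only -- cap physical memory below 4GB
  set physmem=0xe0000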

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot delete errored file

2008-06-10 Thread Ben Middleton
Sent response by private message.

Today's findings are that the cksum errors appear on the new disk on the other 
controller too - so I've ruled out controllers & cables. It's probably as Jeff 
says - just got to figure out now how to prove the memory is duff.

Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-10 Thread Ivan Wang
> Richard Elling wrote:
> > For ZFS, there are some features which conflict
> with the
> > notion of user quotas: compression, copies, and
> snapshots come
> > immediately to mind.  UFS (and perhaps VxFS?) do
> not have
> > these features, so accounting space to users is
> much simpler.
> > Indeed, if was was easy to add to ZFS, then CR
> 6557894
> > would have been closed long ago.  Surely we can
> describe the
> > business problems previously solved by user-quotas
> and then
> > proceed to solve them?  Mail is already solved.
> 
> I just find it ironic that before ZFS I kept hearing
> of people wanting 
> group quotas rather than user quotas.  Now that we
> have ZFS group quotas 
> are easy - quota the filesystem and ensure only that
> group can write to 
> it - but now the focus is back on user quotas again
> ;-)
> 

I hate to say it, but most of us didn't expect ZFS to take the possibility of user 
quotas away. I am not sure "trading" one capability for the other is desirable. 

Coming back to ZFS's "filesystems are cheap" paradigm, there would be fewer 
complaints if other facilities provided by the OS (for example, the automounter) 
scaled equally well with ZFS. 

Making other FS-related facilities fit into the new paradigm would help a lot, 
at least I think.
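
For completeness, the group-quota-via-filesystem approach Darren describes is
roughly this sketch (pool and group names hypothetical):

  # one filesystem per group, quota on the filesystem, group-only write access
  zfs create tank/groups/eng
  zfs set quota=50G tank/groups/eng
  chgrp eng /tank/groups/eng
  chmod 2770 /tank/groups/eng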

Ivan.


> -- 
> Darren J Moffat
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS space map causing slow performance

2008-06-10 Thread Scott
> Scott,
> 
> This looks more like " bug#*6596237 Stop looking and
> start ganging 
> ".
> *
> What version of Solaris are the production servers
> running (S10 or 
> Opensolaris) ?
> 
> Thanks and regards,
> Sanjeev.

Hi Sanjeev,

Thanks for the reply.  These servers are running SXCE 86.  The same problem 
happens with b87, but I downgraded to take the new write throttling code out of 
the equation.  Interestingly enough, I never saw this problem when these 
servers were running b70.  It may just be a coincidence, but I noticed the 
problem starting within days of upgrading from b70, and it was already too late 
to downgrade due to ZFS versioning.

-Scott
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] creating ZFS mirror over iSCSI between to DELL MD3000i arrays

2008-06-10 Thread Thomas Rojsel
Hi Tomas,

I will try it myself, but if I google the subject I only find old entries 
describing things like kernel panics and system freezes. I'm just wondering 
whether this problem is fixed in the newer releases, or if there is another 
recommended way to keep data stored on different sites in sync in real time. 
We're talking about 8+ TB of data.
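
For what it's worth, once one LUN from each MD3000i is visible through the
local iSCSI initiator, the ZFS side is just an ordinary mirror; a minimal
sketch with hypothetical device names:

  # mirror one LUN from each array; ZFS checksums and resilvers across sites
  zpool create tank mirror c4t600A0B800012AB34d0 c5t600A0B800056CD78d0
  zpool status tank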

Thanks,

/T
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-10 Thread Joerg Schilling
Bob Friesenhahn <[EMAIL PROTECTED]> wrote:

> /dev/zero does not have infinite performance.  Dd will perform at 
> least one extra data copy in memory.  Since zfs computes checksums it 
> needs to inspect all of the bytes in what is written.  As a result, 
> zfs will easily know if the block is all zeros and even if the data is 
> all zeros, time will be consumed.
>
> On my system, Solaris dd does not seem to create a sparse file.  I 
> don't have GNU dd installed to test with.

I did not read the older messages in this thread, but:

dd skip=n skips n records on input
dd seek=n seeks n records on output

Whenever you use "dd ... of=name seek=something"
you have a chance of getting a sparse file (depending on the parameters
of the underlying filesystem).
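
A quick illustration (file name arbitrary) of getting a sparse file via seek=
and checking allocated vs. apparent size:

  # write one 1MB block at a ~1GB offset; the hole in between is not allocated
  dd if=/dev/zero of=/tank/sparse.img bs=1024k seek=1024 count=1
  ls -ls /tank/sparse.img   # block count stays tiny, length column shows ~1GB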

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-10 Thread udippel
On 6/10/08, Volker A. Brandt <[EMAIL PROTECTED]> wrote:
>> It might just be me, and the 'feel' of it, but it still feels to me that
>> the system needs to be under more memory pressure before ZFS gives pages
>> back. This could also be because I'm typically using systems with either
>>  > 128GB, or <= 4GB of RAM, and in the smaller case, not having some
>> headroom costs me...
>
> I can confirm this "feeling".  I have several older systems which used
> to have UFS and now run using ZFS, and the effect is noticeable.  I have
> never gotten around to doing any benchmarks, but as a rule of thumb
> any box under 2GB RAM is not really good for ZFS.

Here I made the opposite observation: I just installed nv90 on a dated
Dell D400 notebook, unmodified except for an 80GB 2.5" hard disk and -
of course! - an extra 1GB stick of RAM, making it 1.2GB altogether.
First I installed UFS; then I wiped everything to install the full
ZFS beauty. And I can't say that there was a noticeable difference
between the two with respect to subjective speed.

Uwe

>
>
> Regards -- Volker
> --
> 
> Volker A. Brandt  Consulting and Support for Sun Solaris
> Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
> Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED]
> Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
> Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot delete errored file

2008-06-10 Thread Jeff Bonwick
That's odd -- the only way the 'rm' should fail is if it can't
read the znode for that file.  The znode is metadata, and is
therefore stored in two distinct places using ditto blocks.
So even if you had one unlucky copy that was damaged on two
of your disks, you should still have another copy elsewhere.

Assuming you weren't so shockingly unlucky, the only way to
get a corrupted znode that I know of is flaky memory, such that
the znode is checksummed, then the DRAM flips a bit, then you
write the znode to disk.  The fact that you've seen so many
checksum errors makes me suspect hardware all the more.

Can you send me the output of fmdump -ev and fmdump -eV ?
There should be some useful crumbs in there...

Jeff

On Tue, Jun 03, 2008 at 04:27:21AM -0700, Ben Middleton wrote:
> Hi,
> 
> I can't seem to delete a file in my zpool that has permanent errors:
> 
> zpool status -vx
>   pool: rpool
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
> entire pool from backup.
>see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: scrub completed after 2h10m with 1 errors on Tue Jun  3 11:36:49 2008
> config:
> 
> NAMESTATE READ WRITE CKSUM
> rpool   ONLINE   0 0 0
>   raidz1ONLINE   0 0 0
> c0t0d0  ONLINE   0 0 0
> c0t1d0  ONLINE   0 0 0
> c0t2d0  ONLINE   0 0 0
> 
> errors: Permanent errors have been detected in the following files:
> 
> /export/duke/test/Acoustic/3466/88832/09 - Check.mp3
> 
> 
> rm "/export/duke/test/Acoustic/3466/88832/09 - Check.mp3"
> 
> rm: cannot remove `/export/duke/test/Acoustic/3466/88832/09 - Check.mp3': I/O 
> error
> 
> Each time I try to do anything to the file, the checksum error count goes up 
> on the pool.
> 
> I also tried a mv and a cp over the top - but same I/O error.
> 
> I performed a "zpool scrub rpool" followed by a "zpool clear rpool" - but 
> still get the same error. Any ideas?
> 
> PS - I'm running snv_86, and use the sata driver on an intel x86 architecture.
> 
> B
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-10 Thread Volker A. Brandt
> It might just be me, and the 'feel' of it, but it still feels to me that
> the system needs to be under more memory pressure before ZFS gives pages
> back. This could also be because I'm typically using systems with either
>  > 128GB, or <= 4GB of RAM, and in the smaller case, not having some
> headroom costs me...

I can confirm this "feeling".  I have several older systems which used
to have UFS and now run using ZFS, and the effect is noticeable.  I have
never gotten around to doing any benchmarks, but as a rule of thumb
any box under 2GB RAM is not really good for ZFS.
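
On such boxes, capping the ARC is the usual mitigation; a sketch of the
/etc/system tunable (value arbitrary, takes effect after a reboot):

  * /etc/system: limit the ZFS ARC to 512MB on a small-memory box
  set zfs:zfs_arc_max=0x20000000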


Regards -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: [EMAIL PROTECTED]
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot delete errored file

2008-06-10 Thread Ben Middleton
Hi Marc,

Thanks for all of your suggestions.

I'll restart memtest when I'm next in the office and leave it running overnight.

I can recreate the pool - but I guess the question is whether I am safe to do this on 
the existing setup, or whether I am going to hit the same issue again sometime? 
Assuming I don't find any obvious hardware issues - wouldn't this be regarded 
as a flaw in ZFS (i.e. no way of clearing such an error without a rebuild)?

Would I be safer rebuilding to a pair of mirrors rather than a 3-disk raidz + 
hot spare?

Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss