Re: [zfs-discuss] future ZFS Boot and ZFS "copies"

2007-10-09 Thread Matthew Ahrens
Jesus Cea wrote:
> Would ZFS boot be able to boot from a "copies" boot dataset when one of
> the disks is failing? Given, of course, that the ditto blocks are spread
> across both disks.

You cannot boot from a pool with multiple top-level vdevs (e.g., the "copies" 
pool you describe).  We hope to enhance ZFS boot to provide this 
functionality at a later date.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] future ZFS Boot and ZFS "copies"

2007-10-09 Thread Matthew Ahrens
Jesus Cea wrote:
> Read performance [when using "zfs set copies=2" vs a mirror] would double, 
> and this is very nice

I don't see how that could be the case.  Either way, the reads should be able 
to fan out over the two disks.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Anton B. Rang
Do you have compression turned on? If so, dd'ing from /dev/zero isn't very 
useful as a benchmark. (I don't recall if all-zero blocks are always detected 
if checksumming is turned on, but I seem to recall that they are, even if 
compression is off.)
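A rough way to check this on your own pool (the dataset name below is just a placeholder) is to compare the compression ratio after such a run:

  zfs get compression tank/test                            # confirm whether compression is enabled
  dd if=/dev/zero of=/tank/test/zeros bs=128k count=8192   # all-zero "benchmark" write
  zfs get compressratio tank/test                          # a large ratio means little data actually hit the disks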
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-09 Thread dudekula mastan
Hi Everybody,
   
  Over the last week, many mails have been exchanged on this topic.
   
  I have a similar issue, and I would appreciate any help with it.
   
  I have an IO test tool which writes data, reads it back, and then 
compares the read data with the written data. If they match, there is no 
CORRUPTION; otherwise there is a CORRUPTION.
   
  File data may get corrupted for any number of reasons, and one possible reason is the 
file system cache. If the file system cache has issues, it will return wrong data to user 
applications (wrong data meaning that the actual data on disk and the data the read call 
returns to the application do not match).
   
  When there is a CORRUPTION, to rule out file system cache issues, my application 
bypasses the file system cache and re-reads the data from the same file, then 
compares the re-read data with the written data.
   
  Is there a way to skip the ZFS file system cache, or is there a way to do 
direct IO on a ZFS file system?
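   
  For now my plan is something along these lines -- a rough sketch only; the paths and 
dataset name are placeholders, and it assumes a remount evicts the cached file data:
   
  digest -a md5 /tank/test/datafile > /tmp/sum.before   # checksum recorded right after the write
  zfs umount tank/test && zfs mount tank/test           # remount, hoping to force the next read from disk
  digest -a md5 /tank/test/datafile > /tmp/sum.after    # checksum of the re-read data
  cmp /tmp/sum.before /tmp/sum.after && echo "re-read matches the original write"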
   
  Regards
  Masthan D
   

   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What would be the exact difference between import/mount and export/unmount ?

2007-10-09 Thread Vidya Sakar N
Mastan,

Import/Export are pool level commands whereas mount/unmount are
file system level commands, both serving different purposes.
You would typically 'export' a pool when you want to connect the
storage to a different machine and 'import' the pool there for
subsequent use in that machine, which could even be of different
endianness. mount on the other hand is for the purpose of attaching
a file system to the file system hierarchy at the mount point, just
like the way any other file system is accessed.
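To illustrate the two levels side by side (the pool and file system names here
are just examples):

zpool export tank        # pool-level: release the whole pool so another host can import it
zpool import tank        # pool-level: read the device labels and bring the pool back
zfs umount tank/home     # file-system-level: detach one file system from the hierarchy
zfs mount tank/home      # file-system-level: reattach it at its mountpoint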

> If you run the zpool import command, it lists all the importable
> zpools and the devices which are part of those zpools. How exactly
> does the import command work?

The import command looks at the device labels to extract this information.
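If you are curious, you can see what those labels contain with zdb; for example
(the device path here is just a placeholder):

zdb -l /dev/dsk/c1t0d0s0    # dumps the vdev labels: pool name, pool GUID, vdev tree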

> On my machine I have 10 to 15 zpools, and I am pumping IO to those
> zpools. The IO pump tool works fine for 2 to 3 hours; after that
> my machine somehow goes down.

Is it hung or is there a panic? Is the shell responsive? Are your
file system / pool access commands hanging?

> Can any one explain why it is happening ?

If you have a responsive shell, maybe you can check the thread list,
which may give some clues on where the commands are hung. You can use
the following command to gather the thread list:
echo "::threadlist -v" | mdb -k

> I am not very familiar with debug tools. Can anyone explain how to debug
> the coredump?

A good starting point would be the Solaris Modular Debugger Guide at
http://docs.sun.com/app/docs/doc/817-2543?l=en&q=mdb

Cheers,
Vidya Sakar


dudekula mastan wrote:
> Hi All,
>  
> Can any one explain this ?
>  
> -Mashtan D
> 
> 
> dudekula mastan <[EMAIL PROTECTED]> wrote:
> 
>  
> Hi All,
>  
> What exactly do the import and export commands do?
>  
> Are they similar to mount and unmount ?
>  
> How does import differ from mount, and how does export differ from umount?
>  
> If you run the zpool import command, it lists all the importable
> zpools and the devices which are part of those zpools. How exactly
> does the import command work?
>  
> On my machine I have 10 to 15 zpools, and I am pumping IO to those
> zpools. The IO pump tool works fine for 2 to 3 hours; after that
> my machine somehow goes down.
> Can any one explain why it is happening ?
>  
> I am not very familiar with debug tools. Can anyone explain how to debug
> the coredump?
>  
> Your help is appreciated.
>  
> Thanks & Regards
> Masthan D
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> 
> 
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does bug 6602947 concern ZFS more than Gnome?

2007-10-09 Thread Matthew Ahrens
MC wrote:
> Re: http://bugs.opensolaris.org/view_bug.do?bug_id=6602947
> 
> Specifically this part:
> 
> Create zpool /testpool/.  Create zfs file system /testpool/testfs.
> Right click on /testpool/testfs (filesystem) in nautilus and rename to 
> testfs2.
> Do zfs list.  Note that only /testpool/testfs (filesystem) is present.
> Do zfs rename /testpool/testfs /testpool/testfsrename.  This will fail saying 
> the dataset testfs does not exist.
> 
> Right now the bug is assigned to gnome:file-manager so it might not be seen 
> by the right people?

Actually, it is likely even more general than ZFS or Gnome.  You should not 
rename (e.g., via mv(1) or rename(2)) mountpoints in Solaris, otherwise things 
get confused.  In this case, the "zfs rename" is trying to unmount the 
renamed mountpoint but is not able to find it.  I thought I filed a bug on 
this a while back, but can't find it now :-(

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Setting up a file server (NAS)

2007-10-09 Thread MC
> 3) Forget PCI-Express -- if you have a free PCI-X (or
> PCI)-slot. Supermicro AOC-SAT2-MV8 (PCI-X cards are
> (usually) plain-PCI-compatible; and this one is). It
> has 8 ports, is natively plug-and-play-supported and
> does not cost more than twice a si3132, and costs
> only a fraction of other >2-port-cards where you
> pay for raid-chip-sets you don't need or even can't
> use..
> si3132 may be an option but I can't recommend it.
> SAS-controllers are in another league I think; 3-5
> times the price of AOC-SAT2-MV8.

That is a really neat suggestion; I had no idea that card was compatible with 
regular old PCI slots.  You can get generic 4-port Silicon Image 3114 PCI cards 
for around $30, but that Supermicro for $100 is also a great idea.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What would be the exact difference between import/mount and export/unmount ?

2007-10-09 Thread dudekula mastan
Hi All,
   
  Can any one explain this ?
   
  -Mashtan D
  

dudekula mastan <[EMAIL PROTECTED]> wrote:
 
  Hi All,
   
  What exactly do the import and export commands do?
   
  Are they similar to mount and unmount ?
   
  How does import differ from mount, and how does export differ from umount?
   
  If you run the zpool import command, it lists all the importable zpools and 
the devices which are part of those zpools. How exactly does the import command work?
   
  On my machine I have 10 to 15 zpools, and I am pumping IO to those zpools. The IO 
pump tool works fine for 2 to 3 hours; after that my machine somehow goes down.
  Can any one explain why it is happening ?
   
  I am not very familiar with debug tools. Can anyone explain how to debug the 
coredump?
   
  Your help is appreciated.
   
  Thanks & Regards
  Masthan D

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Space Map optimalization

2007-10-09 Thread Matthew Ahrens
Łukasz wrote:
> I have a huge problem with space maps on a Thumper. Space maps take over 3GB,
> and write operations generate massive read operations.
> Before every spa sync phase, ZFS reads the space maps from disk.
> 
> I decided to turn on compression for the pool (only for the pool, not the
> filesystems), and it helps.

That is extremely hard to believe (given that all you actually did was turn 
on compression for a 19k filesystem).

> Now space maps, intent log, spa history are compressed.

All normal metadata (including space maps and spa history) is always 
compressed.  The intent log is never compressed.

> Now I'm thinking about disabling checksums. All metadata is written in 2 copies,
> so when I have compression=on, do I need checksums?

Yes, you need checksums; otherwise silent hardware errors will become silent data 
corruption.  You cannot turn off checksums on metadata.  Turning off 
checksums may have some tiny impact because it will cause the level-1 
indirect blocks to compress better.
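
As a reminder, the checksum property only ever applies to user data; for example
(the dataset name here is just a placeholder):

zfs get checksum,compression pool/fs    # current settings for user data
zfs set checksum=off pool/fs            # possible, but strongly discouraged for the reason above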

> Is there another way to check the space map compression ratio?
> Now I'm using "#zdb -bb pool" but it takes hours.

You can probably do it with "zdb -vvv pool | less" and look for each of the 
space map files in the MOS.  This is printed pretty early on, after which you 
can kill off the zdb.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about uberblock blkptr

2007-10-09 Thread Matthew Ahrens
Max,

Glad you figured out where your problem was.  Compression does complicate 
things.  Also, make sure you have the most recent (highest txg) uberblock.

Just for the record, using MDB to print out ZFS data structures is totally 
sweet!  We have actually been wanting to do that for about 5 years now, but 
other things keep coming up :-)  So we'd love for you to contribute your code 
to OpenSolaris once you get it more fully working.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs / zpool list odd results in U4

2007-10-09 Thread Matthew Ahrens
Solaris wrote:
> Greetings.
> 
> I applied the Recommended Patch Cluster including 120012-14 to a U3
> system today.  I upgraded my zpool and it seems like we have some very
> strange information coming from zpool list and zfs list...
> 
> [EMAIL PROTECTED]:/]# zfs list
> NAME           USED   AVAIL  REFER  MOUNTPOINT
> zpool02        8.56G  11.1G  32.6K  /zpool02
> zpool02/data   5.78G  11.1G  5.78G  /mnt/data
> zpool02/opt    2.73G  11.1G  2.73G  /opt
> zpool02/samba  60.3M  11.1G  60.3M  /usr/local/samba
> [EMAIL PROTECTED]:/]# zpool list
> NAME      SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
> zpool02  29.9G  12.9G  17.1G    42%  ONLINE  -

Is this on RAID-Z?  If so, you may be seeing 6308817.  However, upgrading 
your bits shouldn't have changed anything.
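
(A rough way to see whether it is just the RAID-Z accounting difference: zpool list 
reports raw capacity including parity, while zfs list reports usable space after 
parity, so the two views will not agree.)

zpool list zpool02
zfs list -o name,used,available,referenced zpool02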

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot timestamp info

2007-10-09 Thread Matthew Ahrens
Tim Spriggs wrote:
> I think they are listed in order with "zfs list".

That's correct, they are listed in the order taken, from oldest to newest.

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-09 Thread Matthew Ahrens
If you haven't resolved this bug with the storage folks, you can file a bug 
at http://bugs.opensolaris.org/

--matt

eric kustarz wrote:
> This actually looks like a sd bug... forwarding it to the storage  
> alias to see if anyone has seen this...
> 
> eric
> 
> On Sep 14, 2007, at 12:42 PM, J Duff wrote:
> 
>> I’d like to report the ZFS related crash/bug described below. How  
>> do I go about reporting the crash and what additional information  
>> is needed?
>>
>> I’m using my own very simple test app that creates numerous  
>> directories and files of randomly generated data. I have run the  
>> test app on two machines, both 64 bit.
>>
>> OpenSolaris crashes a few minutes after starting my test app. The  
>> crash has occurred on both machines. On Machine 1, the fault occurs  
>> in the SCSI driver when invoked from ZFS. On Machine 2, the fault  
>> occurs in the ATA driver when invoked from ZFS. The relevant parts  
>> of the message logs appear at the end of this post.
>>
>> The crash is repeatable when using the ZFS file system. The crash  
>> does not occur when running the test app against a Solaris/UFS file  
>> system.
>>
>> Machine 1:
>> OpenSolaris Community Edition,
>>  snv_72, no BFU (not DEBUG)
>> SCSI Drives, Fibre Channel
>> ZFS Pool is six drive stripe set
>>
>> Machine 2:
>> OpenSolaris Community Edition
>> snv_68 with BFU (kernel has DEBUG enabled)
>> SATA Drives
>> ZFS Pool is four RAIDZ sets, two disks in each RAIDZ set
>>
>> (Please forgive me if I have posted in the wrong place. I am new to  
>> ZFS and this forum. However, this forum appears to be the best  
>> place to get good quality ZFS information. Thanks.)
>>
>> Duff
>>
>> --
>>
>> Machine 1 Message Log:
>> . . .
>> Sep 13 14:13:22 cypress unix: [ID 836849 kern.notice]
>> Sep 13 14:13:22 cypress ^Mpanic[cpu5]/thread=ff000840dc80:
>> Sep 13 14:13:22 cypress genunix: [ID 683410 kern.notice] BAD TRAP:  
>> type=e (#pf Page fault) rp=ff000840ce90 addr=ff01f2b0
>> Sep 13 14:13:22 cypress unix: [ID 10 kern.notice]
>> Sep 13 14:13:22 cypress unix: [ID 839527 kern.notice] sched:
>> Sep 13 14:13:22 cypress unix: [ID 753105 kern.notice] #pf Page fault
>> Sep 13 14:13:22 cypress unix: [ID 532287 kern.notice] Bad kernel  
>> fault at addr=0xff01f2b0
>> . . .
>> Sep 13 14:13:22 cypress unix: [ID 10 kern.notice]
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840cd70 unix:die+ea ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840ce80 unix:trap+1351 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840ce90 unix:_cmntrap+e9 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840cfc0 scsi:scsi_transport+1f ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d040 sd:sd_start_cmds+2f4 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d090 sd:sd_core_iostart+17b ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d0f0 sd:sd_mapblockaddr_iostart+185 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d140 sd:sd_xbuf_strategy+50 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d180 sd:xbuf_iostart+103 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d1b0 sd:ddi_xbuf_qstrategy+60 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d1f0 sd:sdstrategy+ec ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d220 genunix:bdev_strategy+77 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d250 genunix:ldi_strategy+54 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d2a0 zfs:vdev_disk_io_start+219 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d2c0 zfs:vdev_io_start+1d ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d300 zfs:zio_vdev_io_start+123 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d320 zfs:zio_next_stage_async+bb ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d340 zfs:zio_nowait+11 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d380 zfs:vdev_mirror_io_start+18f ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d3c0 zfs:zio_vdev_io_start+131 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d3e0 zfs:zio_next_stage+b3 ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d410 zfs:zio_ready+10e ()
>> Sep 13 14:13:22 cypress genunix: [ID 655072 kern.notice]  
>> ff000840d430 zfs:zio_next_stage+b3 ()
>> Sep 13 14:13:22 cypress genunix: [ID

Re: [zfs-discuss] Setting up a file server (NAS)

2007-10-09 Thread Ima
Thanks a lot for your help everyone :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-10-09 Thread Matthew Ahrens
MC wrote:
> With the arrival of ZFS, the "format" command is well on its way to 
> deprecation station.  But how else do you list the devices that zpool can 
> create pools out of?
> 
> Would it be reasonable to enhance zpool to list the vdevs that are available 
> to it?  Perhaps as part of the help output to "zpool create"?

Sounds a lot like 4868036 "zpool create should have better support for total 
world domination".

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation

2007-10-09 Thread Kugutsumen
Updated to latest firmware 1.43-70417 ... 

same problem..

WARNING: arcmsr0: dma map got 'no resources' 

WARNING: arcmsr0: dma allocate fail 

WARNING: arcmsr0: dma allocate fail free scsi hba pkt 

WARNING: arcmsr0: dma map got 'no resources' 

WARNING: arcmsr0: dma allocate fail 

WARNING: a

The only positive thing is that every time I try to copy my UFS root (about 5 
GB) to a ZFS filesystem, this bug happens.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bug 6354872 - integrated in snv_36, how about Solaris ?

2007-10-09 Thread James C. McPherson
Sergiy Kolodka wrote:
> Hello,
> 
> We've been hitting this bug for a few days; however, SunSolve isn't really
> helpful about how to fix it. It says "Integrated in Build: snv_36"; I think
> that's Nevada, but how do I find out when it was fixed in Solaris?
> Should I assume that it was fixed in the -36 kernel patch as well? The box is
> running the 6/06 release, and I'm wondering whether I should upgrade it to
> 11/06 or just find a patch somewhere; the latter is preferable.

Hi Sergiy,
the multi-release record for that bug doesn't appear
to be visible on bugs.opensolaris.org, so here's the
relevant information for you:

For the multi-release record targeted at S10u2:

"Integrated as part of PSARC 2002/240 ZFS. See CR# 6338653"


http://bugs.opensolaris.org/view_bug.do?bug_id=6338653
indicates that it was integrated into Solaris 10 Update
2 build 6, so your S10 06/06 system should have that
fix included.

Are you really sure that you're suffering from bug 6354872?
I suggest that you log a call with Sun Support if you
haven't already done so, so that you can get the assistance
which you need to resolve the issue in a timely fashion.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Bug 6354872 - integrated in snv_36, how about Solaris ?

2007-10-09 Thread Sergiy Kolodka
Hello,

We've been hitting this bug for a few days; however, SunSolve isn't really helpful 
about how to fix it. It says "Integrated in Build: snv_36"; I think that's Nevada, 
but how do I find out when it was fixed in Solaris?

Should I assume that it was fixed in the -36 kernel patch as well? The box is running 
the 6/06 release, and I'm wondering whether I should upgrade it to 11/06 or just find a 
patch somewhere; the latter is preferable.

Thanks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.

2007-10-09 Thread Kugutsumen
Just as I create a ZFS pool and copy the root partition to it, the 
performance seems to be really good; then suddenly the system hangs all my 
sessions and displays on the console:

Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources' 
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail 
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail free scsi 
hba

The AR1100 was running in JBOD mode with write-back caching under Solaris Express 
Xen Drop b66 07/07. I never had a problem with this setup under Linux under heavy load.

ARECA SATA-SAS RAID Host Adapter Driver(i386) 1.20.00.13,REV=2006.08.14

I have already filed a support ticket with Areca.

Can anyone explain to me what the error message is about?

Would turning off write-back help? I am going to try that.

Here is the log of what I did to set up the controller using the Areca CLI:

# Select current controller:

set curctrl=1

set password=
GuiErrMsg<0x00>: Success.

rsf delete raid=1
GuiErrMsg<0x00>: Success.

CLI> disk info
 #   ModelName     Serial#  FirmRev  Capacity  State
=====================================================
 1   ST3500630AS   X        3.AAE    500.1GB   Free
 2   ST3500630AS   X        3.AAE    500.1GB   Free
=====================================================
GuiErrMsg<0x00>: Success.

CLI> sys changepwd p=XXX
GuiErrMsg<0x00>: Success.

CLI> sys mode p=1
ErrMsg: All RaidSet Must Be Deleted In Order To Be Configured As JBOD

CLI> disk delete drv=1
GuiErrMsg<0x00>: Success.

CLI> disk delete drv=2
GuiErrMsg<0x00>: Success.


We want to create two pass-thru disks with write-back enabled.

CLI> disk create drv=1 cache=Y
GuiErrMsg<0x08>: Password Required.

CLI> disk create drv=2 cache=Y
GuiErrMsg<0x00>: Success.


CLI> sys info
The System Information
===
Main Processor : 500MHz
CPU ICache Size: 32KB
CPU DCache Size: 32KB
System Memory  : 256MB/333MHz
Firmware Version   : V1.41 2006-5-24 
BOOT ROM Version   : V1.41 2006-5-24 
Serial Number  : X
Controller Name: ARC-1110
===
GuiErrMsg<0x00>: Success.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Frank Leers
On Tue, 2007-10-09 at 23:36 +0100, Adam Lindsay wrote:
> Gary Gendel wrote:
> > Norco usually uses Silicon Image based SATA controllers. 
> 
> Ah, yes, I remember hearing SI SATA multiplexer horror stories when I 
> was researching storage possibilities.
> 
> However, I just heard back from Norco:
> 
> > Thank you for your interest in Norco products.
> > Most of the parts used by the DS-520 use chipsets found on common boards.
> > For example, we use the Marvell 88SX6081 as the SATA controller.
> > The system should function fine with OpenSolaris.
> > Please feel free to contact us with any further questions.
> 
> That's the Thumper's controller chipset, right? Sounds like very good 
> news to me.
> 

Yes, it is.

0b:01.0 SCSI storage controller: Marvell Technology Group Ltd.
MV88SX6081 8-port SATA II PCI-X Controller (rev 09)



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
Gary Gendel wrote:
> Norco usually uses Silicon Image based SATA controllers. 

Ah, yes, I remember hearing SI SATA multiplexer horror stories when I 
was researching storage possibilities.

However, I just heard back from Norco:

> Thank you for your interest in Norco products.
> Most of the parts used by the DS-520 use chipsets found on common boards.
> For example, we use the Marvell 88SX6081 as the SATA controller.
> The system should function fine with OpenSolaris.
> Please feel free to contact us with any further questions.

That's the Thumper's controller chipset, right? Sounds like very good 
news to me.

adam

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-09 Thread Marc Bevand
Michael  bigfoot.com> writes:
> 
> Excellent. 
> 
> Oct  9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING:
> /pci  2,0/pci1022,7458  8/pci11ab,11ab  1/disk  2,0 (sd13):
> Oct  9 13:36:01 zeta1   Error for Command: read    Error Level: Retryable
> 
> Scrubbing now.

This is only a part of the complete error message. Look a few lines above this
one. If you see something like:

  sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci8086,[EMAIL 
PROTECTED]/pci11ab,[EMAIL PROTECTED]:
   port 1: device reset
  sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci8086,[EMAIL 
PROTECTED]/pci11ab,[EMAIL PROTECTED]:
   port 1: link lost
  sata: [ID 801593 kern.notice] NOTICE: /[EMAIL PROTECTED],0/pci8086,[EMAIL 
PROTECTED]/pci11ab,[EMAIL PROTECTED]:
   port 1: link established
  marvell88sx: [ID 812950 kern.warning] WARNING: marvell88sx0: error on port 1:
  marvell88sx: [ID 517869 kern.info]   device disconnected
  marvell88sx: [ID 517869 kern.info]   device connected

Then it means you are probably affected by bug
http://bugs.opensolaris.org/view_bug.do?bug_id=6587133

This bug is fixed in Solaris Express build 73 and above, and will likely be
fixed in Solaris 10 Update 5. The workaround is to disable SATA NCQ and queuing
by adding "set sata:sata_func_enable = 0x4" to /etc/system.
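That is, something along these lines, followed by a reboot:

  echo "set sata:sata_func_enable = 0x4" >> /etc/system   # disables SATA NCQ/queuing, per the workaround above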

-marc

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Moving default snapshot location

2007-10-09 Thread Walter Faleiro
Hi,
We have implemented a ZFS file system for home directories and have enabled
it with quotas and snapshots. However, the snapshots are causing an issue with
the user quotas. The default snapshots go under ~username/.zfs/snapshot, which
is part of the user's file system. So if the quota is 10G and the snapshots
total 2G, that 2G adds to the disk space used by the user. Is there any
workaround for this? One option is to increase the quota for the user, which
we don't want to implement. Can the default snapshots be kept in some other
location, outside the user's home directory?
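
(For reference, this is how the accounting shows up on our side; the dataset
name below is just an example:)

zfs get quota,used pool/home/username    # 'used' includes space held only by snapshots of this file system
zfs list -t snapshot | grep username     # the snapshots whose space is charged to that dataset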

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Some test results: ZFS + SAMBA + Sun Fire X4500 (Thumper)

2007-10-09 Thread Tim Thomas
Will et al

I added a few extra graphs to the original posting today showing the
work that an individual disk was doing

http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire

and ran the RAID-Z config with fewer disks just to see what happened.

http://blogs.sun.com/timthomas/entry/another_samba_test_on_sun

What I find nice about Thumper/X4500s is that they behave very
predictably... in my experience, anyway.

Rgds

Tim
-- 
Tim Thomas
Storage Systems Product Group, Sun Microsystems, Inc.
Internal Extension: x(70)18097
Office Direct Dial: +44-161-905-8097
Mobile: +44-7802-212-209
Email: [EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding my own compression to zfs

2007-10-09 Thread roland
for those who are interested in lzo with zfs, i have made a special version of 
the patch taken from the zfs-fuse mailinglist:

http://82.141.46.148/tmp/zfs-fuse-lzo.tgz

this file contains the patch in unified diff format and also a broken out 
version (i.e. split into single files).

maybe this makes integrating into an onnv-tree easier and also is better for 
review.

i took a quick look and compared to the onnv sources, and it looks like it's not 
too hard to integrate - most lines are new files, and the onnv files seem to be 
changed only a little. 

unfortunately i have no solaris build environment around for now, so i cannot 
give it a try and i also have no clue if this will compile at all. maybe the 
code needs a lot of rework to be able to run in kernelspace, maybe not - but some 
solaris kernel hacker will know better
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status backwards scrub progress on when using iostat

2007-10-09 Thread Wade . Stuart


[EMAIL PROTECTED] wrote on 10/09/2007 01:11:16 PM:

> I am using an x4500 with a single "4*(raidz2 9+2) + 2 spare" pool.
> I have some bad blocks on one of the disks:
> Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL 
> PROTECTED],
> 0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
> (sd13):
> Oct 9 13:36:01 zeta1 Error for Command: read Error Level: Retryable
>
> I am running zpool scrub (20 hours so far) (UK time)
> 2007-10-08.21:51:54 zpool scrub zeta
>
> The progress seems to go backwards when I run zpool iostat.
>  scrub: scrub in progress, 2.28% done, 5h19m to go
>  scrub: scrub in progress, 2.45% done, 5h18m to go
>  scrub: scrub in progress, 2.70% done, 5h15m to go

Have you created any snapshots while the scrub was running?  There is a bug
that resets the scrub/resilver every time you make a new snapshot.  The
workaround: Stop making snapshots while scrubbing and resilvering.  It
really sucks if the sole purpose of the machine is for snaps.

More Info: bug id 6343667

Also, Matthew Ahrens recently said that this should be fixed sometime around
the new year; the bug ID really does not show any useful information about
its status.
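
A rough way to confirm that this is what you are hitting (using the pool name
from your mail):

zpool history zeta | tail         # shows the 'zpool scrub' entry and any 'zfs snapshot' entries logged after it
zpool status zeta | grep scrub    # a percentage that keeps dropping back matches the reset described above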

-Wade


>
> #
> # zpool iostat 5
>                 capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> zeta        10.4T  9.64T    692    177  74.9M  15.3M
> zeta        10.4T  9.64T  3.28K     54   395M   238K
> zeta        10.4T  9.64T  1.69K      0  8.96M      0
> zeta        10.4T  9.64T    981     42  6.82M   356K
> zeta        10.4T  9.64T    693    177  74.9M  15.3M
> zeta        10.4T  9.64T  4.75K      0   594M      0
> zeta        10.4T  9.64T  4.51K      0   564M      0
> zeta        10.4T  9.64T  4.62K     75   578M   402K
>
>  scrub: scrub in progress, 0.54% done, 4h49m to go
>
> And the time to go is not progressing.
>
> Here is the full status
> # zpool status -v
>   pool: zeta
>  state: ONLINE
>  scrub: scrub in progress, 0.32% done, 5h14m to go
> config:
>
> NAME        STATE     READ WRITE CKSUM
> gsazeta     ONLINE       0     0     0
>   raidz2    ONLINE       0     0     0
> c4t0d0  ONLINE   0 0 0
> c4t4d0  ONLINE   0 0 0
> c7t0d0  ONLINE   0 0 0
> c7t4d0  ONLINE   0 0 0
> c6t0d0  ONLINE   0 0 0
> c6t4d0  ONLINE   0 0 0
> c1t0d0  ONLINE   0 0 0
> c1t4d0  ONLINE   0 0 0
> c0t0d0  ONLINE   0 0 0
> c0t4d0  ONLINE   0 0 0
> c5t1d0  ONLINE   0 0 0
>   raidz2    ONLINE       0     0     0
> c5t5d0  ONLINE   0 0 0
> c4t1d0  ONLINE   0 0 0
> c4t5d0  ONLINE   0 0 0
> c7t1d0  ONLINE   0 0 0
> c7t5d0  ONLINE   0 0 0
> c6t1d0  ONLINE   0 0 0
> c6t5d0  ONLINE   0 0 0
> c1t1d0  ONLINE   0 0 0
> c1t5d0  ONLINE   0 0 0
> c0t1d0  ONLINE   0 0 0
> c0t5d0  ONLINE   0 0 0
>   raidz2    ONLINE       0     0     0
> c0t2d0  ONLINE   0 0 0
> c0t6d0  ONLINE   0 0 0
> c1t2d0  ONLINE   0 0 0
> c1t6d0  ONLINE   0 0 0
> c4t2d0  ONLINE   0 0 0
> c4t6d0  ONLINE   0 0 0
> c6t2d0  ONLINE   0 0 0
> c6t6d0  ONLINE   0 0 0
> c7t2d0  ONLINE   0 0 0
> c7t6d0  ONLINE   0 0 0
> c5t6d0  ONLINE   0 0 0
>   raidz2    ONLINE       0     0     0
> c0t3d0  ONLINE   0 0 0
> c0t7d0  ONLINE   0 0 0
> c1t3d0  ONLINE   0 0 0
> c1t7d0  ONLINE   0 0 0
> c4t3d0  ONLINE   0 0 0
> c4t7d0  ONLINE   0 0 0
> c6t3d0  ONLINE   0 0 0
> c6t7d0  ONLINE   0 0 0
> c7t3d0  ONLINE   0 0 0
> c7t7d0  ONLINE   0 0 0
> c5t7d0  ONLINE   0 0 0
> spares
>   c5t2d0AVAIL
>   c5t3d0AVAIL
>
> errors: No known data errors
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] zpool status backwards scrub progress on when using iostat

2007-10-09 Thread Michael
I am using an x4500 with a single "4*(raidz2 9+2) + 2 spare" pool. I have some bad 
blocks on one of the disks:
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL 
PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 (sd13):
Oct 9 13:36:01 zeta1 Error for Command: read Error Level: Retryable

I am running zpool scrub (20 hours so far) (UK time)
2007-10-08.21:51:54 zpool scrub zeta

The progress seems to go backwards when I run zpool iostat.
 scrub: scrub in progress, 2.28% done, 5h19m to go
 scrub: scrub in progress, 2.45% done, 5h18m to go
 scrub: scrub in progress, 2.70% done, 5h15m to go

#
# zpool iostat 5
                capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zeta        10.4T  9.64T    692    177  74.9M  15.3M
zeta        10.4T  9.64T  3.28K     54   395M   238K
zeta        10.4T  9.64T  1.69K      0  8.96M      0
zeta        10.4T  9.64T    981     42  6.82M   356K
zeta        10.4T  9.64T    693    177  74.9M  15.3M
zeta        10.4T  9.64T  4.75K      0   594M      0
zeta        10.4T  9.64T  4.51K      0   564M      0
zeta        10.4T  9.64T  4.62K     75   578M   402K

 scrub: scrub in progress, 0.54% done, 4h49m to go

And the time to go is not progressing.

Here is the full status
# zpool status -v
  pool: zeta
 state: ONLINE
 scrub: scrub in progress, 0.32% done, 5h14m to go
config:

NAME        STATE     READ WRITE CKSUM
gsazeta     ONLINE       0     0     0
  raidz2    ONLINE       0     0     0
c4t0d0  ONLINE   0 0 0
c4t4d0  ONLINE   0 0 0
c7t0d0  ONLINE   0 0 0
c7t4d0  ONLINE   0 0 0
c6t0d0  ONLINE   0 0 0
c6t4d0  ONLINE   0 0 0
c1t0d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c5t1d0  ONLINE   0 0 0
  raidz2    ONLINE       0     0     0
c5t5d0  ONLINE   0 0 0
c4t1d0  ONLINE   0 0 0
c4t5d0  ONLINE   0 0 0
c7t1d0  ONLINE   0 0 0
c7t5d0  ONLINE   0 0 0
c6t1d0  ONLINE   0 0 0
c6t5d0  ONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
  raidz2    ONLINE       0     0     0
c0t2d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c4t2d0  ONLINE   0 0 0
c4t6d0  ONLINE   0 0 0
c6t2d0  ONLINE   0 0 0
c6t6d0  ONLINE   0 0 0
c7t2d0  ONLINE   0 0 0
c7t6d0  ONLINE   0 0 0
c5t6d0  ONLINE   0 0 0
  raidz2    ONLINE       0     0     0
c0t3d0  ONLINE   0 0 0
c0t7d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t7d0  ONLINE   0 0 0
c4t3d0  ONLINE   0 0 0
c4t7d0  ONLINE   0 0 0
c6t3d0  ONLINE   0 0 0
c6t7d0  ONLINE   0 0 0
c7t3d0  ONLINE   0 0 0
c7t7d0  ONLINE   0 0 0
c5t7d0  ONLINE   0 0 0
spares
  c5t2d0AVAIL
  c5t3d0AVAIL

errors: No known data errors
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
i wanted to test some simultaneous sequential writes and wrote this little 
snippet:

#!/bin/bash
# start 20 parallel sequential writers, each writing a 4 GB file in 128 KB blocks
for ((i=1; i<=20; i++))
do
  dd if=/dev/zero of=lala$i bs=128k count=32768 &
done

While the script was running i watched zpool iostat and measured the time 
between the start and the end of the writes (usually i saw bandwidth figures 
around 500...).
The result was 409 MB/s in writes. Not too bad at all :)

Now the same with sequential reads:

#!/bin/bash
# read the 20 files back in parallel, discarding the data
# (of=/dev/null is the more usual sink for a pure read test)
for ((i=1; i<=20; i++))
do
  dd if=lala$i of=/dev/zero bs=128k &
done

again i checked with zpool iostat, seeing even higher numbers around 850, and the 
result was 910 MB/s...

wow 
that all looks quite promising :)

Tom
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS file system is crashing my system

2007-10-09 Thread Prabahar Jeyaram
Hi Masthan,

There was a race in the block allocation code which allocates a  
single disk block to two consumers. The system will trip when both  
the consumers try to free the block.

--
Prabahar.

On Oct 9, 2007, at 4:20 AM, dudekula mastan wrote:

> Hi Jeyaram,
>
> Thanks for your reply. Can you explain more about this bug ?
>
> Regards
> Masthan D
>
> Prabahar Jeyaram <[EMAIL PROTECTED]> wrote:
> Your system seem to have hit a variant of BUG :
>
> 6458218 - http://bugs.opensolaris.org/view_bug.do?bug_id=6458218
>
> This is fixed in Opensolaris Build 60 or S10U4.
>
> --
> Prabahar.
>
>
> On Oct 8, 2007, at 10:04 PM, dudekula mastan wrote:
>
> > Hi All,
> >
> > Any one has any chance to look into this issue ?
> >
> > -Masthan D
> >
> > dudekula mastan wrote:
> >
> > Hi All,
> >
> > While pumping IO on a zfs file system my system is crashing/
> > panicking. Please find the crash dump below.
> >
> > panic[cpu0]/thread=2a100adfcc0: assertion failed: ss != NULL,
> > file: ../../common/fs/zfs/space_map.c, line: 125
> > 02a100adec40 genunix:assfail+74 (7b652448, 7b652458, 7d,
> > 183d800, 11ed400, 0)
> > %l0-3:   011e7508
> > 03000744ea30
> > %l4-7: 011ed400  0186fc00
> > 
> > 02a100adecf0 zfs:space_map_remove+b8 (3000683e7b8, 2b20,
> > 2, 7b652000, 7b652400, 7b652400)
> > %l0-3:  2b22 2b0ec600
> > 03000744ebc0
> > %l4-7: 03000744eaf8 2b0ec000 7b652000
> > 2b0ec600
> > 02a100adedd0 zfs:space_map_load+218 (3000683e7b8, 30006f5f160,
> > 1000, 3000683e488, 2b00, 1)
> > %l0-3: 0160 030006f5f000 
> > 7b620ad0
> > %l4-7: 7b62086c 7fff 7fff
> > 030006f5f128
> > 02a100adeea0 zfs:metaslab_activate+3c (3000683e480,
> > 8000, c000, 24a998, 3000683e480, c000)
> > %l0-3:  0008 
> > 029ebf9d
> > %l4-7: 704e2000 03000391e940 030005572540
> > 0300060bacd0
> > 02a100adef50 zfs:metaslab_group_alloc+1bc (3fff,
> > 2, 8000, 7e68000, 30006766080, )
> > %l0-3:  0300060bacd8 0001
> > 03000683e480
> > %l4-7: 8000  03f34000
> > 4000
> > 02a100adf030 zfs:metaslab_alloc_dva+114 (0, 7e68000,
> > 30006766080, 2, 30005572540, 1e910)
> > %l0-3: 0001  0003
> > 03000380b6e0
> > %l4-7:  0300060bacd0 
> > 0300060bacd0
> > 02a100adf100 zfs:metaslab_alloc+2c (3000391e940, 2,
> > 30006766080, 1, 1e910, 0)
> > %l0-3: 009980001605 0016 1b4d
> > 0214
> > %l4-7:   03000391e940
> > 0001
> > 02a100adf1b0 zfs:zio_dva_allocate+4c (30005dd8a40, 7b6335a8,
> > 30006766080, 704e2508, 704e2400, 20001)
> > %l0-3: 030005dd8a40 060200ff00ff 060200ff00ff
> > 
> > %l4-7:  018a6400 0001
> > 0006
> > 02a100adf260 zfs:zio_write_compress+1ec (30005dd8a40, 23e20b,
> > 23e000, ff00ff, 2, 30006766080)
> > %l0-3:  00ff 0100
> > 0002
> > %l4-7:  00ff fc00
> > 00ff
> > 02a100adf330 zfs:arc_write+e4 (30005dd8a40, 3000391e940, 6, 2,
> > 1, 1e910)
> > %l0-3:  7b6063c8 030006af2570
> > 0300060c5cf0
> > %l4-7: 02a100adf538 0004 0004
> > 0300060c7a88
> > 02a100adf440 zfs:dbuf_sync+6c0 (30006af2570, 30005dd9440,
> > 2b3ca, 2, 6, 1e910)
> > %l0-3: 030005dd96c0  030006ae7750
> > 030006af2678
> > %l4-7: 030006766080 0013 0001
> > 
> > 02a100adf560 zfs:dnode_sync+35c (0, 0, 30005dd9440,
> > 30005ac8cc0, 2, 2)
> > %l0-3: 030006af2570 030006ae77a8 030006ae7808
> > 030006ae7808
> > %l4-7:  030006ae77a8 0001
> > 03000640ace0
> > 02a100adf620 zfs:dmu_objset_sync_dnodes+6c (30005dd96c0,
> > 30005dd97a0, 30005ac8cc0, 30006ae7750, 30006bd3ca0, 0)
> > %l0-3: 704e84c0 704e8000 704e8000
> > 0001
> > %l4-7:  704e4000 
> > 030005dd9440
> > 02a100adf6d0 zfs:dmu_objset_sync+54 (30005dd96c0, 30005ac8cc0,
> > 0, 0, 300060c5318, 1e910)
> > %l0-3:  000f 
> > 478d
> > %l4-7: 030005dd97a0  030005dd97a0
> > 030005dd9820
> > 02a100adf7e0 zfs:dsl_dataset_sync+c (30006f36780, 30005ac8cc0,
> > 30006f36810, 300040c7db8, 300040c7db8, 30006f36780)
> > %l0-3: 0001 000

Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread eric kustarz

On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote:

> Hi,
>
> i checked with $nthreads=20 which will roughly represent the  
> expected load and these are the results:

Note, here is the description of the 'fileserver.f' workload:
"
define process name=filereader,instances=1
{
   thread name=filereaderthread,memsize=10m,instances=$nthreads
   {
 flowop openfile name=openfile1,filesetname=bigfileset,fd=1
 flowop appendfilerand name=appendfilerand1,iosize=$meaniosize,fd=1
 flowop closefile name=closefile1,fd=1
 flowop openfile name=openfile2,filesetname=bigfileset,fd=1
 flowop readwholefile name=readfile1,fd=1
 flowop closefile name=closefile2,fd=1
 flowop deletefile name=deletefile1,filesetname=bigfileset
 flowop statfile name=statfile1,filesetname=bigfileset
   }
}
"

Each thread in 'nthreads' is executing the above:
- open
- append
- close
- open
- read
- close
- delete
- stat

You have 20 parallel threads doing the above.

Before looking at the results, decide if that really *is* your  
expected workload.

>
> IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us  
> cpu/op, 0.2ms latency
>
> BTW, smpatch is still running and further tests will get done when  
> the system is rebooted.
>
> The figures published at...
> http://blogs.sun.com/timthomas/feed/entries/atom?cat=%2FSun+Fire+X4500
> ...made me expect to see higher rates with my setup.
>
> I have seen the new filebench at sourceforge, but did not manage to  
> install it. It's a source distribution now and the wiki and readmes  
> are not updated yet. A simple "make" didn't do the trick though ;)

Are you talking about the documentation at:
http://sourceforge.net/projects/filebench
or:
http://www.opensolaris.org/os/community/performance/filebench/
and:
http://www.solarisinternals.com/wiki/index.php/FileBench
?

I'll figure out why "make" isn't working.

eric
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs import => cannot mount 'fs' : directory is not empty

2007-10-09 Thread Alain Raimbault - SUN Microsystems France -
hello,

I am having an issue with zpool import.
Only some file systems are mounted; some stay unmounted.
See below:

[EMAIL PROTECTED] # zpool import co-e34-dev
cannot mount '/co-e34-dev/oracle/E34': directory is not empty
cannot mount '/co-e34-dev/usr/sap': directory is not empty
cannot mount '/co-e34-dev/oracle': directory is not empty
cannot mount '/co-e34-dev/sapmnt/E34': directory is not empty
[EMAIL PROTECTED] #
[EMAIL PROTECTED] #
[EMAIL PROTECTED] # zfs list -o mounted,mountpoint
MOUNTED  MOUNTPOINT
    yes  /co-e34-dev
    yes  /co-e34-dev/admin
    yes  /co-e34-dev/usr/sap/E34/archiving
    yes  /co-e34-dev/oracle/client
    yes  /co-e34-dev/home/exploit
    yes  /co-e34-dev/sapmnt/E34/global/CACHE
    yes  /co-e34-dev/home/tmp
    yes  /co-e34-dev/oracle/E34/mirrlogB
    yes  /co-e34-dev/oracle/E34/mirrlogA
    yes  /co-e34-dev/home/e34adm
    yes  /co-e34-dev/oracle/E34/origlogA
     no  /co-e34-dev/oracle
     no  /co-e34-dev/oracle/E34
    yes  /co-e34-dev/oracle/E34/origlogB
    yes  /co-e34-dev/oracle/E34/oraarch
    yes  /co-e34-dev/home/patrol
     no  /co-e34-dev/usr/sap
    yes  /co-e34-dev/usr/sap/E34
    yes  /co-e34-dev/oracle/E34/sapdatas
     no  /co-e34-dev/sapmnt/E34
    yes  /co-e34-dev/oracle/E34/sapreorg
    yes  /co-e34-dev/oracle/stage
    yes  /co-e34-dev/usr/sap/trans
    yes  /co-e34-dev/home/unicenter
    yes  /co-e34-dev/home/unispool

I am running S10u3, and BUG 6377673 [ Synopsis: 'zfs mount -a' should
discover the proper mount ] has been fixed in S10u2.
Do we have to respect some rules at zpool creation?

Alain








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-09 Thread Michael
Excellent. 

Oct  9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL 
PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 (sd13):
Oct  9 13:36:01 zeta1   Error for Command: read    Error Level: Retryable

Scrubbing now.

Big thanks ggendel.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs boot issue, changing device id

2007-10-09 Thread Mark J Musante
On Mon, 8 Oct 2007, Kugutsumen wrote:

> I just tried..
> mount -o  rw,remount /
> zpool import -f tank
> mount -F zfs tank/rootfs /a
> zpool status
> ls -l /dev/dsk/c1t0d0s0
> # /[EMAIL PROTECTED],0/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a
> csh
> setenv TERM vt100
> vi /a/boot/solaris/bootenv.rc
> # the bootpath was actually set to the proper device.
>
> cp /etc/path_to_inst /a/etc/path_to_inst
> touch /a/reconfigure
> rm /a/etc/devices/*
> bootadm update-archive -R /a
>
> zpool export tank
> reboot

(Shouldn't need to export)  Check to see that your zpool.cache file in 
/a/etc is the same as in /etc.  Also, the filelist.ramdisk (under 
/a/boot/solaris) should include etc/zfs/zpool.cache.  If it doesn't, add 
it and re-do bootadm.
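
Roughly something like this (paths taken from the steps you listed; adjust as needed):

cmp /etc/zfs/zpool.cache /a/etc/zfs/zpool.cache        # should report no difference
grep zpool.cache /a/boot/solaris/filelist.ramdisk || \
    echo "etc/zfs/zpool.cache" >> /a/boot/solaris/filelist.ramdisk
bootadm update-archive -R /a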


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Gary Gendel
Norco usually uses Silicon Image-based SATA controllers. The OpenSolaris driver 
for these has caused me enough headaches that I replaced it with a Marvell-based 
board. I would also imagine that they use a 5-to-1 SATA multiplexer, 
which is not supported by any OpenSolaris driver that I've tested.

Gary
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-09 Thread Gary Gendel
Are there any clues in the logs?  I have had a similar problem when a bad disk 
block was uncovered by ZFS.  I've also seen this when using the Silicon Image 
driver without the recommended patch.

The former became evident when I ran a scrub. I saw the SCSI timeout errors pop 
up in the "kern" syslogs.  I solved this by replacing the disk.

Gary
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] creating zfs-home-partitions

2007-10-09 Thread Darren J Moffat
Claus Guttesen wrote:
> Hi.
> 
> I read the zfs getting started guide at
> http://www.opensolaris.org/os/community/zfs/intro/;jsessionid=A64DABB3DF86B8FDBF8A3E281C30B8B2.
> 
> I created zpool disk1 and created disk1/home and assigned /export/home
> to disk1/home as mountpoint. Then I create a user with 'zfs create
> disk1/home/username' and the partition is mounted beneath disk1/home
> as it should.
> 
> When I add the user via smc I get the message:
> 
> The attempt to modify the Home Directory /export/home/username for
> User username failed because ...: Directory path already exists'.
> 
> So in order to remedy this I create the user first, manually move the
> created directory to /export/home.ufs and then create the
> zfs partition. But it seems that this is not the optimal way when you
> have to deal with 100 or more users. Fortunately I have fewer than
> 10 accounts to handle, but this method is somewhat cumbersome.

I wouldn't say that smc is suitable for creating more than a handful 
of accounts anyway.

Consider using useradd(1M) instead.
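
For example, a rough per-user sketch (the username and shell here are just placeholders):

zfs create disk1/home/jdoe                       # dataset is created and mounted under /export/home
useradd -d /export/home/jdoe -s /bin/ksh jdoe    # no -m, since the directory already exists
chown jdoe /export/home/jdoe                     # give the new user ownership of the mountpoint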

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] creating zfs-home-partitions

2007-10-09 Thread Claus Guttesen
Hi.

I read the zfs getting started guide at
http://www.opensolaris.org/os/community/zfs/intro/;jsessionid=A64DABB3DF86B8FDBF8A3E281C30B8B2.

I created zpool disk1 and created disk1/home and assigned /export/home
to disk1/home as mountpoint. Then I create a user with 'zfs create
disk1/home/username' and the partition is mounted beneath disk1/home
as it should.

When I add the user via smc I get the message:

The attempt to modify the Home Directory /export/home/username for
User username failed because ...: Directory path already exists'.

So in order to remedy this I create the user first, manually move the
created directory to /export/home.ufs and then create the
zfs partition. But it seems that this is not the optimal way when you
have to deal with 100 or more users. Fortunately I have fewer than
10 accounts to handle, but this method is somewhat cumbersome.

Are other tools better suited for creating users that reside in
zfs partitions?

-- 
regards
Claus

When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.

Shakespeare
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi,

i checked with $nthreads=20 which will roughly represent the expected load and 
these are the results:

IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms 
latency

BTW, smpatch is still running and further tests will get done when the system 
is rebooted.

The figures published at...
http://blogs.sun.com/timthomas/feed/entries/atom?cat=%2FSun+Fire+X4500
...made me expect to see higher rates with my setup.

I have seen the new filebench at sourceforge, but did not manage to install it. 
It's a source distribution now and the wiki and readmes are not updated yet. A 
simple "make" didn't do the trick though ;)

Thanks again,
Tom
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS file system is crashing my system

2007-10-09 Thread dudekula mastan
Hi Jeyaram,
   
  Thanks for your reply. Can you explain more about this bug ?
   
  Regards
  Masthan D

Prabahar Jeyaram <[EMAIL PROTECTED]> wrote:
  Your system seem to have hit a variant of BUG :

6458218 - http://bugs.opensolaris.org/view_bug.do?bug_id=6458218

This is fixed in Opensolaris Build 60 or S10U4.

--
Prabahar.


On Oct 8, 2007, at 10:04 PM, dudekula mastan wrote:

> Hi All,
>
> Any one has any chance to look into this issue ?
>
> -Masthan D
>
> dudekula mastan wrote:
>
> Hi All,
>
> While pumping IO on a zfs file system my system is crashing/ 
> panicking. Please find the crash dump below.
>
> panic[cpu0]/thread=2a100adfcc0: assertion failed: ss != NULL, 
> file: ../../common/fs/zfs/space_map.c, line: 125
> 02a100adec40 genunix:assfail+74 (7b652448, 7b652458, 7d, 
> 183d800, 11ed400, 0)
> %l0-3:   011e7508 
> 03000744ea30
> %l4-7: 011ed400  0186fc00 
> 
> 02a100adecf0 zfs:space_map_remove+b8 (3000683e7b8, 2b20, 
> 2, 7b652000, 7b652400, 7b652400)
> %l0-3:  2b22 2b0ec600 
> 03000744ebc0
> %l4-7: 03000744eaf8 2b0ec000 7b652000 
> 2b0ec600
> 02a100adedd0 zfs:space_map_load+218 (3000683e7b8, 30006f5f160, 
> 1000, 3000683e488, 2b00, 1)
> %l0-3: 0160 030006f5f000  
> 7b620ad0
> %l4-7: 7b62086c 7fff 7fff 
> 030006f5f128
> 02a100adeea0 zfs:metaslab_activate+3c (3000683e480, 
> 8000, c000, 24a998, 3000683e480, c000)
> %l0-3:  0008  
> 029ebf9d
> %l4-7: 704e2000 03000391e940 030005572540 
> 0300060bacd0
> 02a100adef50 zfs:metaslab_group_alloc+1bc (3fff, 
> 2, 8000, 7e68000, 30006766080, )
> %l0-3:  0300060bacd8 0001 
> 03000683e480
> %l4-7: 8000  03f34000 
> 4000
> 02a100adf030 zfs:metaslab_alloc_dva+114 (0, 7e68000, 
> 30006766080, 2, 30005572540, 1e910)
> %l0-3: 0001  0003 
> 03000380b6e0
> %l4-7:  0300060bacd0  
> 0300060bacd0
> 02a100adf100 zfs:metaslab_alloc+2c (3000391e940, 2, 
> 30006766080, 1, 1e910, 0)
> %l0-3: 009980001605 0016 1b4d 
> 0214
> %l4-7:   03000391e940 
> 0001
> 02a100adf1b0 zfs:zio_dva_allocate+4c (30005dd8a40, 7b6335a8, 
> 30006766080, 704e2508, 704e2400, 20001)
> %l0-3: 030005dd8a40 060200ff00ff 060200ff00ff 
> 
> %l4-7:  018a6400 0001 
> 0006
> 02a100adf260 zfs:zio_write_compress+1ec (30005dd8a40, 23e20b, 
> 23e000, ff00ff, 2, 30006766080)
> %l0-3:  00ff 0100 
> 0002
> %l4-7:  00ff fc00 
> 00ff
> 02a100adf330 zfs:arc_write+e4 (30005dd8a40, 3000391e940, 6, 2, 
> 1, 1e910)
> %l0-3:  7b6063c8 030006af2570 
> 0300060c5cf0
> %l4-7: 02a100adf538 0004 0004 
> 0300060c7a88
> 02a100adf440 zfs:dbuf_sync+6c0 (30006af2570, 30005dd9440, 
> 2b3ca, 2, 6, 1e910)
> %l0-3: 030005dd96c0  030006ae7750 
> 030006af2678
> %l4-7: 030006766080 0013 0001 
> 
> 02a100adf560 zfs:dnode_sync+35c (0, 0, 30005dd9440, 
> 30005ac8cc0, 2, 2)
> %l0-3: 030006af2570 030006ae77a8 030006ae7808 
> 030006ae7808
> %l4-7:  030006ae77a8 0001 
> 03000640ace0
> 02a100adf620 zfs:dmu_objset_sync_dnodes+6c (30005dd96c0, 
> 30005dd97a0, 30005ac8cc0, 30006ae7750, 30006bd3ca0, 0)
> %l0-3: 704e84c0 704e8000 704e8000 
> 0001
> %l4-7:  704e4000  
> 030005dd9440
> 02a100adf6d0 zfs:dmu_objset_sync+54 (30005dd96c0, 30005ac8cc0, 
> 0, 0, 300060c5318, 1e910)
> %l0-3:  000f  
> 478d
> %l4-7: 030005dd97a0  030005dd97a0 
> 030005dd9820
> 02a100adf7e0 zfs:dsl_dataset_sync+c (30006f36780, 30005ac8cc0, 
> 30006f36810, 300040c7db8, 300040c7db8, 30006f36780)
> %l0-3: 0001 0007 0300040c7e38 
> 
> %l4-7: 030006f36808   
> 
> 02a100adf890 zfs:dsl_pool_sync+64 (300040c7d00, 1e910, 
> 30006f36780, 30005ac9640, 30005581a80, 30005581aa8)
> %l0-3:  03000391ed00 030005ac8cc0 
> 0300040c7e98
> %l4-7: 0300040c7e68 0300040c7e38 0300040c7da8 
> 030005dd9440
> 02a100adf940 zfs:spa_sync+1

Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Casper . Dik

>[EMAIL PROTECTED] wrote:
>>> If you don't have a 64bit cpu, add more ram(tm).
>> 
>> 
>> Actually, no; if you have a 32 bit CPU, you must not add too much
>> RAM or the kernel will run out of space to put things.
>
>Hrm. Do you have a working definition of "too much"?


I think it would be something like "more than can fit in the kernel's
address segment"; that is adjustable, but it is generally set at 1GB.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
[EMAIL PROTECTED] wrote:
>> If you don't have a 64bit cpu, add more ram(tm).
> 
> 
> Actually, no; if you have a 32 bit CPU, you must not add too much
> RAM or the kernel will run out of space to put things.

Hrm. Do you have a working definition of "too much"?

adam
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Casper . Dik

>If you don't have a 64bit cpu, add more ram(tm).


Actually, no; if you have a 32 bit CPU, you must not add too much
RAM or the kernel will run out of space to put things.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread James C. McPherson
Adam Lindsay wrote:
> Hello, Robert,
> 
> Robert Milkowski wrote:
> 
>> Because it offers up to 1GB of memory, 32-bit shouldn't be an issue.
> 
> Sorry, could someone expand on this?
> The only received opinion I've seen on 32-bit is from the ZFS best 
> practice wiki, which simply says "Run ZFS on a system that runs a 64-bit 
> kernel." I have little idea where this comes from, and had no idea that 
> it would rely on memory concerns.

Put simply, ZFS eats address space for breakfast :)

So if you have a 64bit cpu, with its larger address
space, that's a better option than 32bit.

If you don't have a 64bit cpu, add more ram(tm).
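
If you're stuck on a small 32-bit box, one common workaround (a sketch
only; the value below is just an example and zfs_arc_max is not a
committed interface) is to cap the ARC in /etc/system and reboot:

  * limit the ZFS ARC to 256MB on a memory-constrained system
  set zfs:zfs_arc_max=0x10000000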


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
Hello, Robert,

Robert Milkowski wrote:

> Because it offers up to 1GB of memory, 32-bit shouldn't be an issue.

Sorry, could someone expand on this?
The only received opinion I've seen on 32-bit is from the ZFS best 
practice wiki, which simply says "Run ZFS on a system that runs a 64-bit 
kernel." I have little idea where this comes from, and had no idea that 
it would rely on memory concerns.

thanks,
adam
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS 60 second pause times to read 1K

2007-10-09 Thread Michael Kucharski
Every day we see pause times of sometimes 60 seconds to read 1K of a file, for 
local reads as well as over NFS, in a test setup.

We have an x4500 set up as a single pool of 4*(raidz2 9+2) plus 2 spares, with the 
file systems mounted over NFS with Kerberos v5 (krb5) as well as accessed directly. 
The pool is a 20TB pool and is using . There are three filesystems: backup, test and 
home. Test has about 20 million files and uses 4TB; these files range from 100B to 
200MB. Test has a cron job that takes snapshots every 15 minutes, starting at 1 minute 
past the hour. Every 15 minutes, at 2 minutes past the hour, a cron batch job runs 
zfs send/recv to the backup filesystem. Home has only 100GB.
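
(The schedule is roughly equivalent to the following; pool and dataset
names here are illustrative rather than the real ones:)

  # at 1,16,31,46 minutes past the hour: snapshot the test filesystem
  zfs snapshot pool/test@`date +%Y%m%d%H%M`
  # a minute later: incrementally replicate the newest snapshot to backup
  zfs send -i pool/test@previous pool/test@latest | zfs recv -F pool/backup/test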

The test dir has 3 directories: one has 130,000 files, the other two have 10,000,000 
each. We have 4 processes, 2 over NFS and 2 local; 2 read the dir with 130,000 files 
and the other 2 read the dirs with 10,000,000. Every 35 seconds each process reads 1K 
at the 64th KB of 10 files and records the latency, then reads for 1 second and 
records the throughput. At times of no other activity (outside the snapshot and 
send/recv times) we see read latencies of up to 60 seconds, maybe once a day at 
random times.

We are using an unpatched Solaris 10 08/07 build.

Pause times this long can lead to timeouts and cause jobs to fail, which is 
problematic for us. Is this expected behaviour? Can anything be done to mitigate or 
diagnose the issue?
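
One way to catch the slow reads as they happen (a rough sketch; the
1-second threshold is arbitrary) would be a DTrace one-liner run on the
box doing the reads:

  # report read(2) calls that took longer than 1 second
  dtrace -n '
    syscall::read:entry  { self->ts = timestamp; }
    syscall::read:return /self->ts && timestamp - self->ts > 1000000000/
      { printf("%s pid %d waited %d ms", execname, pid,
               (timestamp - self->ts) / 1000000); }
    syscall::read:return { self->ts = 0; }'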
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Robert Milkowski
Hello Adam,

Tuesday, October 9, 2007, 10:15:13 AM, you wrote:

AL> Hey all,

AL> Has anyone else noticed Norco's recently-announced DS-520 and thought 
AL> ZFS-ish thoughts? It's a five-SATA, Celeron-based desktop NAS that ships
AL>   without an OS.
AL>   http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-520

AL> What practical impact is a 32-bit processor going to have on a ZFS 
AL> system? (I know this relies on speculation, but) Might anyone know 
AL> anything about Norco's usual chipsets to guess about OpenSolaris 
AL> compatibility?

Because it offers up to 1GB of memory, 32-bit shouldn't be an issue.
-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] About bug 6486493 (ZFS boot incompatible with

2007-10-09 Thread Robert Milkowski
Hello Pawel,

Monday, October 8, 2007, 9:45:01 AM, you wrote:

PJD> On Fri, Oct 05, 2007 at 08:52:17AM +0100, Robert Milkowski wrote:
>> Hello Eric,
>> 
>> Thursday, October 4, 2007, 5:54:06 PM, you wrote:
>> 
>> ES> On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote:
>> >> > This bug was rendered moot via 6528732 in build
>> >> > snv_68 (and s10_u5).  We
>> >> > now store physical devices paths with the vnodes, so
>> >> > even though the
>> >> > SATA framework doesn't correctly support open by
>> >> > devid in early boot, we
>> >> 
>> >> But if I read it right, there is still a problem in SATA framework 
>> >> (failing ldi_open_by_devid,) right?
>> >> If this problem is framework-wide, it might just bite back some time in 
>> >> the future.
>> >> 
>> 
>> ES> Yes, there is still a bug in the SATA framework, in that
>> ES> ldi_open_by_devid() doesn't work early in boot.  Opening by device path
>> ES> works so long as you don't recable your boot devices.  If we had open by
>> ES> devid working in early boot, then this wouldn't be a problem.
>> 
>> Even if someone re-cables SATA disks, couldn't we fall back to "read the zfs
>> label from all available disks and find our pool and import it"?

PJD> FreeBSD's GEOM storage framework implements a method called 'taste'.
PJD> When new disks arrives (or is closed after last write), GEOM calls taste
PJD> methods of all storage subsystems and subsystems can try to read their
PJD> metadata. This is basically how autoconfiguration happens in FreeBSD for
PJD> things like software RAID1/RAID3/stripe/and others.
PJD> It's much easier than what ZFS does:
PJD> 1. read /etc/zfs/zpool.cache
PJD> 2. open components by name
PJD> 3. if there is no such disk goto 5
PJD> 4. verify diskid (not all disks have an ID)
PJD> 5. if diskid doesn't match, try to lookup by ID

PJD> If there are few hundreds of disks, it may slows booting down, but it
PJD> was never a real problem in FreeBSD.


I haven't done any benchmarks, but I would say zpool.cache could
possibly greatly reduce boot times, especially in a SAN environment.

Then using devids is a good idea - again, you don't have to scan all
disks... it's just a last-chance mechanism - read all the disks and
try to construct the pool from them.
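
That last-chance scan is essentially what the CLI already does for a
manual import (the pool name below is just an example):

  # scan attached devices for importable pools (ignores zpool.cache)
  zpool import
  # import a specific pool, searching a particular device directory
  zpool import -d /dev/dsk tank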

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-10-09 Thread Robert Milkowski
Hello Richard,

Friday, October 5, 2007, 6:41:10 PM, you wrote:

RE> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Friday, September 28, 2007, 7:45:47 PM, you wrote:
>>
>> RE> Kris Kasner wrote:
>>   
> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too, 
> because I 
> don't like it with 2 SATA disks either. There isn't enough drives to put 
> the 
> State Database Replicas so that if either drive failed, the system would 
> reboot unattended. Unless there is a trick?
> 
 There is a trick for this, not sure how long it's been around.
 Add to /etc/system:
 *Allow the system to boot if one of two rootdisks is missing
 set md:mirrored_root_flag=1
   
>>
>> RE> Before you do this, please read the fine manual:
>> RE> 
>> http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag
>>
>> The description on docs.sun.com is somewhat misleading.
>>
>> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/md/md_mddb.c#5659
>>5659 if (mirrored_root_flag == 1 && setno == 0 &&
>>5660 svm_bootpath[0] != 0) {
>>5661 md_clr_setstatus(setno, MD_SET_STALE);
>>
>> Looks like it has to be diskset=0 bootpath has to reside on svm device
>> and mirrored_root_flag has to be set to 1.
>>
>> So if you got other disks (>2) in a system just put them in a separate
>> disk group.
>>
>>
>>
>>   
RE> If we have more than 2 disks, then we have space for a 3rd metadb copy.
RE>  -- richard

Well, it depends - if it's an external JBOD I prefer to put all the disks from
that JBOD into a separate diskset - that way it's easier to move the
JBOD or re-install the host.
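
For the simple 3+ disk case Richard mentions (no separate diskset),
spreading the replicas is just something like this (slice names made up):

  # put two state database replicas on each of three disks
  metadb -a -f -c 2 c0t0d0s7 c0t1d0s7 c0t2d0s7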

-- 
Best regards,
 Robert    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Adam Lindsay
Hey all,

Has anyone else noticed Norco's recently-announced DS-520 and thought 
ZFS-ish thoughts? It's a five-SATA, Celeron-based desktop NAS that ships 
  without an OS.
  http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-520

What practical impact is a 32-bit processor going to have on a ZFS 
system? (I know this relies on speculation, but) Might anyone know 
anything about Norco's usual chipsets to guess about OpenSolaris 
compatibility?

adam
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Dick Davies
Hi Thomas

the point I was making was that you'll see low performance figures
with 100 concurrent threads. If you set nthreads to something closer
to your expected load, you'll get a more accurate figure.

Also, there's a new filebench out now, see

 http://blogs.sun.com/erickustarz/entry/filebench

It will be integrated into Nevada in b76, according to Eric.
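
For example, in the workload file you could drop the thread count to
something closer to your expected client load before running it (the
value below is only an example):

  # in e.g. fileserver.f -- match nthreads to the expected concurrency
  set $nthreads=16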

On 09/10/2007, Thomas Liesner <[EMAIL PROTECTED]> wrote:
> Hi again,
>
> I did not want to compare the filebench test with the single mkfile command.
> Still, I was hoping to see similar numbers in the filebench stats.
> Any hints on what I could do to further improve the performance?
> Would a raid1 over two stripes be faster?
>
> TIA,
> Tom
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi again,

I did not want to compare the filebench test with the single mkfile command.
Still, I was hoping to see similar numbers in the filebench stats.
Any hints on what I could do to further improve the performance?
Would a raid1 over two stripes be faster?
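
(The closest ZFS layout to that would be a set of mirrors that ZFS
stripes across; device names are just placeholders:)

  # two mirror vdevs -- ZFS stripes across them automatically
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0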

TIA,
Tom
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss