Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-02 Thread Al Hopper
On Fri, 2 Nov 2007, Peter Schuller wrote:

[snip]
> Does anyone have suggestions on what to choose, that will actually work the
> way you want it for JBOD use with ZFS? Or avenues of investigation? Is there
> any chance of a lowly consumer getting any information out of LSI? Is there
Your best bet is to call Tech Support and not Sales.  I've found LSI 
tech support to be very responsive to individual customers.

> some other manufacturers that provide low-budget stuff that you can get some
> technical information about? Does anyone have some specific knowledge of a
> suitable product?

I recommend the SuperMicro card - but that is PCI-X, and I think you're 
looking for PCI Express?  I've used the older LSI 4-port (internal) 
PCI Express SAS3041E card, which is still available for around $165 and 
works well with ZFS (SATA or SAS drives).  The newer cards are less 
expensive - but it's not clear from the LSI website whether they support 
JBOD operation, or whether you can form a "mirror" or "stripe" using only 
one drive and present it to ZFS as a single drive.

Please let us know what you find out...

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Force SATA1 on AOC-SAT2-MV8

2007-11-02 Thread Al Hopper
On Fri, 2 Nov 2007, Eric Haycraft wrote:

[reformatted]
> I have a supermicro AOC-SAT2-MV8 and am having some issues getting 
> drives to work. From what I can tell, my cables are too long to use 
> with SATA2. I got some drives to work by jumpering them down to 
> SATA1, but other drives I can't jumper without opening the case and 
> voiding the drive warranty. Does anyone know if there is a system 
> setting to drop it back to SATA1? I use ZFS on a raidz2, if it makes a 
> difference. This is on release 74 of OpenSolaris.

What is the make/model# for the disk drives?

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-02 Thread Andy Lubel
Marvell controllers work great with Solaris.

Supermicro AOC-SAT2-MV8 is what I currently use.  I bought it on a
recommendation from this list, actually. I think I paid $110 for mine.

-Andy


On 11/2/07 4:10 PM, "Peter Schuller" <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> Short version: Can anyone recommend a *many port* (8 or more) SATA/SAS
> controller (RAID or otherwise) that will allow *SAFE* use of ZFS, including
> honoring cache flush commands in the sense of submitting them to the
> underlying device, that is also *low budget* (suitable for personal use; say
> in the <= $250 range)?
> 
> Long version:
> 
> I am having difficulty getting reliable information on SATA controllers for
> use with ZFS (that also work with FreeBSD in this case; though I imagine the
> same problem applies with Solaris).
> 
> The problem is that most non-RAID controllers do not have enough ports. In
> fact the only one I have found that is decent is the Supermicro Marvell card;
> but that is PCI-X rather than PCI (or PCI Express). Works in one machine,
> doesn't in another (presumably because of PCI-X; the other only has PCI slots). And
> even if it does work, you are rather limited in bandwidth. Not that I really
> care about the latter for low budget use.
> 
> You might say: just get a RAID controller and configure it for JBOD. Well,
> I was assuming that would be a safe bet, but apparently you cannot trust them to
> behave correctly with respect to write caching and cache flushing.
> 
> I recently found out that the Dell-supplied, LSI MegaRAID-derived Perc 5/i RAID
> controllers will not honor cache flush requests (according to Dell technical
> support, after quite some time trying to explain to them what I wanted to
> know). So assuming this information is correct (I never saw the actual
> response from the "behind the lines" tech support that my tech support
> contact in turn asked), it means that running without battery backup, you
> actually negate the safety offered by ZFS with respect to write caching,
> making the pool less reliable than it would be with a cheap non-RAID card.
> 
> Right now I have noticed that LSI has recently begun offering some
> lower-budget stuff; specifically I am looking at the MegaRAID SAS
> 8208ELP/XLP, which are very reasonably priced.
> 
> The problem again is that, while they are cheap RAID cards without cache, my
> understanding is that they are still primarily intended for RAID rather than
> plain SATA "pass-through". As a result I am worried about the same problem as
> with the Perc 5/i.
> 
> Of course, LSI being a large corporation, it is seemingly impossible to obtain
> contact information for them, or find any technical specifications that would
> contain information as specific as what it will do in response to cache flush
> requests, so I am at a loss. Unless you're buying 10,000 cards it's difficult
> to get answers.
>  
> Does anyone have suggestions on what to choose, that will actually work the
> way you want it for JBOD use with ZFS? Or avenues of investigation? Is there
> any chance of a lowly consumer getting any information out of LSI? Is there
> some other manufacturers that provide low-budget stuff that you can get some
> technical information about? Does anyone have some specific knowledge of a
> suitable product?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] the number of mount points

2007-11-02 Thread Christine Tran
Hidehiko Jono wrote:
> Hi,
> 
> IHAC who wants to use ZFS for users' home directories.
> He is worried about the number of mount points.
> Does ZFS have any limitation on the number of mount points on a server?
> 

No limit as far as I know, but you may want to check out CR 6425094; 
that one has to do with memory requirements.  There's another CR which I 
can't find right now, but it has to do with ZFS trying to mount 
everything at boot time, resulting in a long-ish boot.  Not a big deal 
if the number of FSes is small, but if it's in the tens of thousands, 
probably something to think about.
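
As an illustration of the per-user layout under discussion, a minimal 
sketch (pool and user names are hypothetical):

# zfs create tank/home
# zfs set mountpoint=/export/home tank/home
# zfs create tank/home/alice
# zfs list -r tank/home

The child filesystem inherits the mountpoint, so tank/home/alice mounts 
at /export/home/alice, and every "zfs create" adds another mount point - 
which is where the boot-time mounting cost mentioned above comes from.  -CT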
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-02 Thread Peter Schuller
Hello,

Short version: Can anyone recommend a *many port* (8 or more) SATA/SAS 
controller (RAID or otherwise) that will allow *SAFE* use of ZFS, including 
honoring cache flush commands in the sense of submitting them to the 
underlying device, that is also *low budget* (suitable for personal use; say 
in the <= $250 range)?

Long version:

I am having difficulty getting reliable information on SATA controllers for 
use with ZFS (that also work with FreeBSD in this case; though I imagine the 
same problem applies with Solaris).

The problem is that most non-RAID controllers do not have enough ports. In 
fact the only one I have found that is decent is the Supermicro Marvell card; 
but that is PCI-X rather than PCI (or PCI Express). Works in one machine, 
doesn't in another (presumably because of PCI-X; the other only has PCI slots). And 
even if it does work, you are rather limited in bandwidth. Not that I really 
care about the latter for low budget use.

You might say: just get a RAID controller and configure it for JBOD. Well, 
I was assuming that would be a safe bet, but apparently you cannot trust them to 
behave correctly with respect to write caching and cache flushing.

I recently found out that the Dell-supplied, LSI MegaRAID-derived Perc 5/i RAID 
controllers will not honor cache flush requests (according to Dell technical 
support, after quite some time trying to explain to them what I wanted to 
know). So assuming this information is correct (I never saw the actual 
response from the "behind the lines" tech support that my tech support 
contact in turn asked), it means that running without battery backup, you 
actually negate the safety offered by ZFS with respect to write caching, 
making the pool less reliable than it would be with a cheap non-RAID card. 

Right now I have noticed that LSI has recently begun offering some 
lower-budget stuff; specifically I am looking at the MegaRAID SAS 
8208ELP/XLP, which are very reasonably priced.

The problem again is that, while they are cheap RAID cards without cache, my 
understanding is that they are still primarily intended for RAID rather than 
plain SATA "pass-through". As a result I am worried about the same problem as 
with the Perc 5/i.

Of course, LSI being a large corporation, it is seemingly impossible to obtain 
contact information for them, or find any technical specifications that would 
contain information as specific as what it will do in response to cache flush 
requests, so I am at a loss. Unless you're buying 10,000 cards it's difficult 
to get answers.
 
Does anyone have suggestions on what to choose, that will actually work the 
way you want it for JBOD use with ZFS? Or avenues of investigation? Is there 
any chance of a lowly consumer getting any information out of LSI? Is there 
some other manufacturers that provide low-budget stuff that you can get some 
technical information about? Does anyone have some specific knowledge of a 
suitable product?

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org



signature.asc
Description: This is a digitally signed message part.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Force SATA1 on AOC-SAT2-MV8

2007-11-02 Thread Andy Lubel

Jumpering drives by removing the cover?  Do you mean opening the chassis
because they aren't removable from the outside?

Your cable is longer than 1 meter inside of a chassis??

I think SATA1 allows 2 meters and SATA2 1 meter.

As for a system setting demoting these to SATA1, I don't know, but I
don't think it's possible.  Don't hold me to that, however - I only say that
because the way I demote them to SATA1 is by removing a jumper, actually :)

HTH,

Andy

On 11/2/07 12:29 PM, "Eric Haycraft" <[EMAIL PROTECTED]> wrote:

> I have a supermicro AOC-SAT2-MV8 and am having some issues getting drives to
> work. From what I can tell, my cables are too long to use with SATA2. I got
> some drives to work by jumpering them down to SATA1, but other drives I can't
> jumper without opening the case and voiding the drive warranty. Does anyone
> know if there is a system setting to drop it back to SATA1? I use ZFS on a
> raidz2, if it makes a difference. This is on release 74 of OpenSolaris.
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a, [and NexentaStor]

2007-11-02 Thread Joe Little
On 11/2/07, Rob Logan <[EMAIL PROTECTED]> wrote:
>
> I'm confused by this and NexentaStor... wouldn't it be better
> to use b77? with:
>
> Heads Up: File system framework changes (supplement to CIFS' "head's up")
> Heads Up: Flag Day (Addendum) (CIFS Service)
> Heads Up: Flag Day (CIFS Service)
> caller_context_t in all VOPs - PSARC/2007/218
> VFS Feature Registration and ACL on Create - PSARC/2007/227
> ZFS Case-insensitive support - PSARC/2007/244
> Extensible Attribute Interfaces - PSARC/2007/315
> ls(1) new command line options '-/' and '-%': CIFS system attributes support 
> - PSARC/2007/394
> Modified Access Checks for CIFS - PSARC/2007/403
> Add system attribute support to chmod(1) - PSARC/2007/410
> CIFS system attributes support for cp(1), pack(1), unpack(1), compress(1) and 
> uncompress(1) - PSARC/2007/432
> Rescind SETTABLE Attribute - PSARC/2007/444
> CIFS system attributes support for cpio(1), pax(1), tar(1) - PSARC/2007/459
> Update utilities to match CIFS system attributes changes. - PSARC/2007/546
> ZFS sharesmb property - PSARC/2007/560
> VFS Feature Registration and ACL on Create - PSARC/2007/227
> Extensible Attribute Interfaces - PSARC/2007/315
> Extensible Attribute Interfaces - PSARC/2007/315
> Extensible Attribute Interfaces - PSARC/2007/315
> Extensible Attribute Interfaces - PSARC/2007/315
> CIFS Service - PSARC/2006/715

It doesn't yet have anything to do with NexentaStor per se. I know
that CIFS service support in the BETA is preliminary, and the timing
of its availability makes a CIFS service tied to ZFS and its share
commands much more attractive. Depending on its maturity, I hope the
Nexenta folks will include it in their final release, if not
somewhere on their roadmap.



>
>
> http://www.opensolaris.org/os/community/on/flag-days/all/
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What is the correct way to replace a good disk?

2007-11-02 Thread Bill Sommerfeld
On Fri, 2007-11-02 at 11:20 -0700, Chris Williams wrote:
> I have a 9-bay JBOD configured as a raidz2.  One of the disks, which
> is on-line and fine, needs to be swapped out and replaced.  I have
> been looking through the zfs admin guide and am confused on how I
> should go about swapping out.  I thought I could put the disk off-line,
> remove it, put a new disk in, and put it on-line.  Does this sound
> right?  

That sounds right.  You'll have improved availability if you have a
spare disk slot and can do "zpool replace $pool $old $new", but offline
followed by a reconstruct-in-place via "zpool replace $pool $disk" also
works.
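
Spelled out, the two paths look roughly like this (pool and device 
names are made up).  With a spare slot:

# zpool replace tank c0t3d0 c0t9d0

Reconstruct-in-place, in the same slot:

# zpool offline tank c0t3d0
  (physically swap the drive)
# zpool replace tank c0t3d0

Either way, "zpool status tank" shows the resilver progress.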

- Bill


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Jumpstart integration and the amazing invisible zpool.cache

2007-11-02 Thread Dave Pratt
 I've been wrestling with implementing some ZFS mounts for /var and 
/usr in a jumpstart setup. I know that jumpstart doesn't "know" anything 
about ZFS, in that you can't define ZFS volumes or pools in the profile. 
I've gone ahead and let JS do a base install into a single UFS slice 
and then attempted to create the zpool and ZFS volumes in the finish 
script and ufsdump|ufsrestore the data from the /usr and /var partitions 
into the new ZFS volumes. The problem is there doesn't seem to be a way to 
ensure that the zpool is imported into the freshly built system on the 
first reboot.
 I see in the archives here from a few weeks ago someone was asking 
a similar question, and it was suggested that as part of the finish 
script "/etc/zfs/zpool.cache" could be copied to 
"/a/etc/zfs/zpool.cache", but it has been my experience through some 
serious testing that when creating and managing ZFS pools and volumes in 
the jumpstart scripts, no zpool.cache file is created. Even 
including "find / -name zpool.cache" in the finish script returns no 
hits on that file name. Now, I'm aware that the zpool.cache file isn't 
really intended to be used for administrative tasks, as its format and 
existence aren't even well documented or solidified as part of the 
management framework for ZFS going forward; I would however REALLY like 
to know why in every other situation when managing ZFS pools/vols 
this file is created, but in this one situation it isn't. I would be 
equally curious to know whether it is possible to force the creation of 
this file or, as a last option, to at least make zpool statically linked 
in the default Solaris distribution so that I can put the method and 
toolchain necessary for importing pools into the early part of the SMF 
boot sequence.
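
For reference, the finish-script fragment looks roughly like this 
(device and dataset names simplified and hypothetical; /a is the 
jumpstart install root):

#!/bin/sh
# create the pool against the install root and carve out the filesystems
zpool create -f -R /a tank c0t1d0s7
zfs create tank/usr
zfs create tank/var
# copy the freshly installed /usr and /var into the new filesystems
ufsdump 0f - /a/usr | (cd /a/tank/usr && ufsrestore rf -)
ufsdump 0f - /a/var | (cd /a/tank/var && ufsrestore rf -)
# ...and at this point no zpool.cache appears anywhere, on the
# miniroot or under /a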

Thanks in Advance for any insight as to how to work this out.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What is the correct way to replace a good disk?

2007-11-02 Thread Cindy . Swearingen
Chris,

You need to use the zpool replace command.

I recently enhanced this section of the admin guide with more explicit
instructions on page 68, here:

http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

If these are hot-swappable disks, for example, c0t1d0, then use this syntax:

# zpool replace pool-name c0t1d0

ZFS recognizes that this is a replacement disk in the same location.

You don't need to offline the disk to be replaced unless it is failing
and making the pool unhappy.
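
For example, with a pool named tank (name hypothetical), the whole 
swap is:

# zpool replace tank c0t1d0
# zpool status tank

zpool status reports the resilver in progress and shows when the 
replacement disk is fully rebuilt.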





Chris Williams wrote:
> I have a 9-bay JBOD configured as a raidz2.  One of the disks, which is 
> on-line and fine, needs to be swapped out and replaced.  I have been looking 
> through the zfs admin guide and am confused on how I should go about swapping 
> out.  I thought I could put the disk off-line, remove it, put a new disk in, 
> and put it on-line.  Does this sound right?  
> 
> Any help would be great
> Thanks
> Chris
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] first public offering of NexentaStor

2007-11-02 Thread Tim Spriggs
Joe Little wrote:
> On 11/2/07, MC <[EMAIL PROTECTED]> wrote:
>   
>>> I consider myself an early adopter of ZFS and pushed
>>> it hard on this
>>> list and in real life with regards to iSCSI
>>> integration, zfs
>>> performance issues with latency thereof, and how
>>> best to use it with
>>> NFS. Well, I finally get to talk more about the
>>> ZFS-based product I've
>>> been beta testing for quite some time. I thought this
>>> was the most
>>> appropriate place to make it known that NexentaStor
>>> is now out, and
>>> you can read more of my take at my personal post,
>>> http://jmlittle.blogspot.com/2007/11/coming-out-party-
>>> for-commodity-storage.html
>>>
>>> I thought it would be in the normal opensolaris blog
>>> listing, but
>>> since it's not showing up there, this single list
>>> seems most
>>> appropriate to get interested parties and feedback.
>>> ___
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discu
>>> ss
>>>   
>> Hmm so is that where all the Nexenta guys have been all this time!?!? :)
>>
>> I look forward to trying out what has been produced.  This type of solution 
>> is a pleasing one for the consumer.
>>
>> Is there a list of the contributors and what they do?  The landscape of 
>> Nexenta has changed and I wonder about the details.  PS: the website looks 
>> kind of busy to the eyes :)
>>
>> PPS: I think the new Nexenta team is the perfect candidate for submitting to 
>> the community how they think the OpenSolaris branding and compatibility 
>> should work.  Would you like a "Built with OpenSolaris" logo to use?  How 
>> far would you (or should you) go to maintain compatibility and be certified 
>> as "OpenSolaris Compatible"?
>>
>> 
>
> I can only speak to my particular usage and understanding. It's
> OpenSolaris-based in the sense that it is based on the ON/NWS
> consolidations (aka NexentaOS, or the NCP releases). It's still very
> much Debian/Ubuntu-like in that it has that packaging, that installer,
> etc. Time will tell how compatible that is deemed to be.
>   

That's about right. There is a little bit of a compatibility layer for 
the .pkg format. For example, pkgadd is wrapped to convert a .pkg to a 
.deb and install the .deb. Sometimes things don't work (like the Sun 
compiler packages) but sometimes they do. I would expect this type of 
thing to get better over time. Supporting .pkg seems like a plus for 
being OpenSolaris Compatible.

There is also a bit of your own choosing in how compatible you want to 
be. An example is that Nexenta ships the Sun ssh build but also allows 
installation of the Debian/Ubuntu build of the openssh package. The Sun 
ssh is exactly what you expect. One thing that is difficult and not 
entirely dealt with is upgrading zones to stay in sync with the global 
zone's core libraries. Of course, it seems that is a little bit of a 
problem for more than just Nexenta ;)

ZFS/iSCSI works great out of the box and has actually allowed me to 
import pools that older Solaris hosts couldn't (because of pool 
problems). We are running Nexenta in production on a Thumper, two 
x4100's, and a generic AMD x86_64 machine. I can't wait to load up the 
upcoming 1.0!

-Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] What is the correct way to replace a good disk?

2007-11-02 Thread Chris Williams
I have a 9-bay JBOD configured as a raidz2.  One of the disks, which is on-line 
and fine, needs to be swapped out and replaced.  I have been looking through the 
zfs admin guide and am confused on how I should go about swapping out.  I 
thought I could put the disk off-line, remove it, put a new disk in, and put it 
on-line.  Does this sound right?  

Any help would be great
Thanks
Chris
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS very slow under xVM

2007-11-02 Thread Gary Pennington
Hmm, I just repeated this test on my system:

bash-3.2# uname -a
SunOS soe-x4200m2-6 5.11 onnv-gate:2007-11-02 i86pc i386 i86xpv

bash-3.2# prtconf | more
System Configuration:  Sun Microsystems  i86pc
Memory size: 7945 Megabytes

bash-3.2# prtdiag | more
System Configuration: Sun Microsystems Sun Fire X4200 M2
BIOS Configuration: American Megatrends Inc. 080012   02/02/2007
BMC Configuration: IPMI 1.5 (KCS: Keyboard Controller Style)

bash-3.2# ptime dd if=/dev/zero of=/xen/myfile bs=16k count=15
15+0 records in
15+0 records out

real   31.927
user0.689
sys15.750

bash-3.2# zpool iostat 1

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
xen         15.3G   121G      0    261      0  32.7M
xen         15.3G   121G      0    350      0  43.8M
xen         15.3G   121G      0    392      0  48.9M
xen         15.3G   121G      0    631      0  79.0M
xen         15.5G   121G      0    532      0  60.1M
xen         15.6G   120G      0    570      0  65.1M
xen         15.6G   120G      0    645      0  80.7M
xen         15.6G   120G      0    516      0  63.6M
xen         15.7G   120G      0    403      0  39.9M
xen         15.7G   120G      0    585      0  73.1M
xen         15.7G   120G      0    573      0  71.7M
xen         15.7G   120G      0    579      0  72.4M
xen         15.7G   120G      0    583      0  72.9M
xen         15.7G   120G      0    568      0  71.1M
xen         16.1G   120G      0    400      0  39.0M
xen         16.1G   120G      0    584      0  73.0M
xen         16.1G   120G      0    568      0  71.0M
xen         16.1G   120G      0    585      0  73.1M
xen         16.1G   120G      0    583      0  72.8M
xen         16.1G   120G      0    665      0  83.2M
xen         16.1G   120G      0    643      0  80.4M
xen         16.1G   120G      0    603      0  75.0M
xen         16.1G   120G      5    526   320K  64.9M
xen         16.7G   119G      0    582      0  68.0M
xen         16.7G   119G      0    639      0  78.5M
xen         16.7G   119G      0    641      0  80.2M
xen         16.7G   119G      0    664      0  83.0M
xen         16.7G   119G      0    629      0  78.5M
xen         16.7G   119G      0    654      0  81.7M
xen         17.2G   119G      0    563  63.4K  63.5M
xen         17.3G   119G      0    525      0  59.2M
xen         17.3G   119G      0    619      0  71.4M
xen         17.4G   119G      0      7      0   448K
xen         17.4G   119G      0      0      0      0
xen         17.4G   119G      0    408      0  51.1M
xen         17.4G   119G      0    618      0  76.5M
xen         17.6G   118G      0    264      0  27.4M
xen         17.6G   118G      0      0      0      0
xen         17.6G   118G      0      0      0      0
xen         17.6G   118G      0      0      0      0
...

I don't seem to be experiencing the same result as you.

The behaviour of ZFS might vary between invocations, but I don't think that
is related to xVM. Can you get the results to vary when just booting under
"bare metal"?

Gary

On Fri, Nov 02, 2007 at 10:46:56AM -0700, Martin wrote:
> I've removed half the memory, leaving 4Gb, and rebooted into "Solaris xVM", 
> and re-tried under Dom0.  Sadly, I still get a similar problem.  With "dd 
> if=/dev/zero of=myfile bs=16k count=15" I get the command returning in 15 
> seconds, and "zpool iostat 1 1000" shows 22 records with an IO rate of around 
> 80M, then 209 records of 2.5M (pretty consistent), then the final 11 records 
> climbing to 2.82, 3.29, 3.05, 3.32, 3.17, 3.20, 3.33, 4.41, 5.44, 8.11
> 
> regards
> 
> Martin
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Gary Pennington
Solaris Core OS
Sun Microsystems
[EMAIL PROTECTED]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS very slow under xVM

2007-11-02 Thread Martin
I've removed half the memory, leaving 4Gb, and rebooted into "Solaris xVM", and 
re-tried under Dom0.  Sadly, I still get a similar problem.  With "dd 
if=/dev/zero of=myfile bs=16k count=15" I get the command returning in 15 
seconds, and "zpool iostat 1 1000" shows 22 records with an IO rate of around 
80M, then 209 records of 2.5M (pretty consistent), then the final 11 records 
climbing to 2.82, 3.29, 3.05, 3.32, 3.17, 3.20, 3.33, 4.41, 5.44, 8.11

regards

Martin
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Force SATA1 on AOC-SAT2-MV8

2007-11-02 Thread Eric Haycraft
I have a supermicro AOC-SAT2-MV8 and am having some issues getting drives to 
work. From what I can tell, my cables are too long to use with SATA2. I got some 
drives to work by jumpering them down to SATA1, but other drives I can't jumper 
without opening the case and voiding the drive warranty. Does anyone know if 
there is a system setting to drop it back to SATA1? I use ZFS on a raidz2, if it 
makes a difference. This is on release 74 of OpenSolaris.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] first public offering of NexentaStor

2007-11-02 Thread Joe Little
On 11/2/07, MC <[EMAIL PROTECTED]> wrote:
> > I consider myself an early adopter of ZFS and pushed
> > it hard on this
> > list and in real life with regards to iSCSI
> > integration, zfs
> > performance issues with latency thereof, and how
> > best to use it with
> > NFS. Well, I finally get to talk more about the
> > ZFS-based product I've
> > been beta testing for quite some time. I thought this
> > was the most
> > appropriate place to make it known that NexentaStor
> > is now out, and
> > you can read more of my take at my personal post,
> > http://jmlittle.blogspot.com/2007/11/coming-out-party-
> > for-commodity-storage.html
> >
> > I thought it would be in the normal opensolaris blog
> > listing, but
> > since it's not showing up there, this single list
> > seems most
> > appropriate to get interested parties and feedback.
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discu
> > ss
>
> Hmm so is that where all the Nexenta guys have been all this time!?!? :)
>
> I look forward to trying out what has been produced.  This type of solution 
> is a pleasing one for the consumer.
>
> Is there a list of the contributors and what they do?  The landscape of 
> Nexenta has changed and I wonder about the details.  PS: the website looks 
> kind of busy to the eyes :)
>
> PPS: I think the new Nexenta team is the perfect candidate for submitting to 
> the community how they think the OpenSolaris branding and compatibility 
> should work.  Would you like a "Built with OpenSolaris" logo to use?  How far 
> would you (or should you) go to maintain compatibility and be certified as 
> "OpenSolaris Compatible"?
>

I can only speak to my particular usage and understanding. It's
OpenSolaris-based in the sense that it is based on the ON/NWS
consolidations (aka NexentaOS, or the NCP releases). It's still very
much Debian/Ubuntu-like in that it has that packaging, that installer,
etc. Time will tell how compatible that is deemed to be.

> People doing real work on real projects should chime in on those issues 
> because there is far too much yapping from people like me who do nothing :)
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a, [and NexentaStor]

2007-11-02 Thread Rob Logan

I'm confused by this and NexentaStor... wouldn't it be better
to use b77? with:

Heads Up: File system framework changes (supplement to CIFS' "head's up")
Heads Up: Flag Day (Addendum) (CIFS Service)
Heads Up: Flag Day (CIFS Service)
caller_context_t in all VOPs - PSARC/2007/218
VFS Feature Registration and ACL on Create - PSARC/2007/227
ZFS Case-insensitive support - PSARC/2007/244
Extensible Attribute Interfaces - PSARC/2007/315
ls(1) new command line options '-/' and '-%': CIFS system attributes support - 
PSARC/2007/394
Modified Access Checks for CIFS - PSARC/2007/403
Add system attribute support to chmod(1) - PSARC/2007/410
CIFS system attributes support for cp(1), pack(1), unpack(1), compress(1) and 
uncompress(1) - PSARC/2007/432
Rescind SETTABLE Attribute - PSARC/2007/444
CIFS system attributes support for cpio(1), pax(1), tar(1) - PSARC/2007/459
Update utilities to match CIFS system attributes changes. - PSARC/2007/546
ZFS sharesmb property - PSARC/2007/560
VFS Feature Registration and ACL on Create - PSARC/2007/227
Extensible Attribute Interfaces - PSARC/2007/315
Extensible Attribute Interfaces - PSARC/2007/315
Extensible Attribute Interfaces - PSARC/2007/315
Extensible Attribute Interfaces - PSARC/2007/315
CIFS Service - PSARC/2006/715


http://www.opensolaris.org/os/community/on/flag-days/all/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a

2007-11-02 Thread Carson Gaspar
As 3.2.0 isn't released yet, and I didn't want to wait, I've backported 
vfs_zfsacl.c from SAMBA_3_2.

You can grab the patch from 
http://taltos.dreamhosters.com/samba-3.0.26a.zfs.patch if interested.
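
Applying it is the usual drill - something like the following, assuming 
the paths in the patch are relative to the top of the samba tree (check 
the patch headers for the right -p level):

$ cd samba-3.0.26a
$ patch -p1 < /path/to/samba-3.0.26a.zfs.patch

and then rebuild samba as you normally would.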

It all appears to work, although Vista complains about ACL ordering.

-- 
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS very slow under xVM

2007-11-02 Thread Jürgen Keil
> I've got Solaris Express Community Edition build 75
> (75a) installed on an Asus P5K-E/WiFI-AP (ip35/ICH9R
> based) board.  CPU=Q6700, RAM=8Gb, disk=Samsung
> HD501LJ and (older) Maxtor 6H500F0.
> 
> When the O/S is running on bare metal, ie no xVM/Xen
> hypervisor, then everything is fine.
> 
> When it's booted up running xVM and the hypervisor,
> then unlike plain disk I/O, and unlike svm volumes,
> zfs is around 20 times slower.

Just a wild guess, but since we're just seeing a similar,
strange performance problem on an Intel quad-core system
with 8GB of memory:


Can you try to remove some part of the RAM, so that the
system runs on 4GB instead of 8GB?  Or use xen / 
solaris boot options to restrict physical memory usage to
the low 4GB range?


It seems that on certain mainboards [*] the BIOS is unable to
install MTRR cacheable ranges for all of the 8GB system RAM,
and when some important stuff ends up in uncacheable RAM,
performance gets *really* bad.

[*] http://lkml.org/lkml/2007/6/1/231
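
If pulling DIMMs is inconvenient, something like the following should 
cap memory instead (syntax from memory - double-check it against your 
build before relying on it).  On bare metal, limit Solaris to ~4GB via 
/etc/system (the value is in 4k pages):

set physmem=0x100000

Under xVM, cap dom0 memory on the xen.gz line in GRUB's menu.lst:

kernel$ /boot/$ISADIR/xen.gz dom0_mem=4096M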
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS very slow under xVM

2007-11-02 Thread Paul Kraus
On 11/1/07, Nathan Kroenert <[EMAIL PROTECTED]> wrote:

> Tell me - If you watch with an iostat -x 1, do you see bursts of I/O
> then periods of nothing, or just a slow stream of data?
>
> I was seeing intermittent stoppages in I/O, with bursts of data on
> occasion...

I have seen this with  ZFS under 10U3, both SPARC and x86,
although the cycle rate differed. Basically, no i/o reported via zpool
iostat 1 or iostat -xn 1 (to the raw devices) for a period of time
followed by a second of ramp up, one or more seconds of excellent
throughput (given the underlying disk systems), a second of slow down,
then more samples with no i/o. The period between peaks was 10 seconds
in one case and 7 in the other. I forget which was SPARC and which was
x86.

I assumed this had to do with ZFS caching i/o until it had a
large enough block to be worth writing. In some cases the data was
coming in via the network (NFS in one case, SMB in the other), but in
neither case was the network interface saturated (in fact, I saw
similar periods of no activity on the network), and there did not seem to
be a CPU limitation (load was low and idle time high). I have also
seen this with local disk-to-disk copies (from UFS to ZFS or ZFS to
ZFS).
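
(For the record, the observation method was nothing fancier than 
running the two monitors side by side in separate terminals:

# zpool iostat 1
# iostat -xn 1

and watching the pool-level and device-level numbers go quiet at the 
same time.)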

-- 
Paul Kraus
Albacon 2008 Facilities
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] first public offering of NexentaStor

2007-11-02 Thread MC
> I consider myself an early adopter of ZFS and pushed
> it hard on this
> list and in real life with regards to iSCSI
> integration, zfs
> performance issues with latency thereof, and how
> best to use it with
> NFS. Well, I finally get to talk more about the
> ZFS-based product I've
> been beta testing for quite some time. I thought this
> was the most
> appropriate place to make it known that NexentaStor
> is now out, and
> you can read more of my take at my personal post,
> http://jmlittle.blogspot.com/2007/11/coming-out-party-
> for-commodity-storage.html
> 
> I thought it would be in the normal opensolaris blog
> listing, but
> since it's not showing up there, this single list
> seems most
> appropriate to get interested parties and feedback.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discu
> ss

Hmm so is that where all the Nexenta guys have been all this time!?!? :)

I look forward to trying out what has been produced.  This type of solution is 
a pleasing one for the consumer.

Is there a list of the contributors and what they do?  The landscape of Nexenta 
has changed and I wonder about the details.  PS: the website looks kind of busy 
to the eyes :)

PPS: I think the new Nexenta team is the perfect candidate for submitting to 
the community how they think the OpenSolaris branding and compatibility should 
work.  Would you like a "Built with OpenSolaris" logo to use?  How far would 
you (or should you) go to maintain compatibility and be certified as 
"OpenSolaris Compatible"?

People doing real work on real projects should chime in on those issues because 
there is far too much yapping from people like me who do nothing :)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss