Re: [zfs-discuss] OpenStorage GUI

2008-11-11 Thread Ed Saipetch
Boyd,

That's exactly what I was getting at.  This list probably isn't the
place to discuss it, but this is the first real instance, aside from maybe
xVM Ops Center, where it was put out in the open that you
can expect to pay to get the goods.

Fishworks seems to be much more than just a nice wrapper put around
Solaris, ZFS, NFS, FMA, AVS, etc.  A lot of my ability to evangelize
the benefits of Solaris in the storage world to my customers hinges on
my being able to say "Try it... you'll like it...".  I know try-and-buy
exists, but in the grand scheme of things, adoption of Solaris
hinges on easy accessibility.

I apologize for the tangent, and the VM instance is a good start, but
the stance on open-sourcing, right or wrong, seems to have changed.

On Nov 11, 2008, at 8:30 PM, Boyd Adamson wrote:

> Bryan Cantrill <[EMAIL PROTECTED]> writes:
>
>> On Tue, Nov 11, 2008 at 02:21:11PM -0500, Ed Saipetch wrote:
>>> Can someone clarify Sun's approach to opensourcing projects and
>>> software?  I was under the impression the strategy was to charge for
>>> hardware, maintenance and PS.  If not, some clarification would be  
>>> nice.
>>
>> There is no single answer -- we use open source as a business  
>> strategy,
>> not as a checkbox or edict.  For this product, open source is an  
>> option
>> going down the road, but not a priority.  Will our software be open
>> sourced in the fullness of time?  My Magic 8-Ball tells me "signs
>> point to yes" (or is that "ask again later"?) -- but it's certainly
>> not something that we have concrete plans for at the moment...
>
> I think that's fair enough. What Sun choose to do is, of course, up to
> Sun.
>
> One can, however, understand that people might have expected otherwise
> given statements like this:
>
>> "With our announced intent to open source the entirety of our  
>> software
>> offerings, every single developer across the world now has access to
>> the most sophisticated platform available for web 1.0, 2.0 and  
>> beyond"
>
> - Jonathan Schwartz
> http://www.sun.com/smi/Press/sunflash/2005-11/sunflash.20051130.1.xml
>
> -- 
> Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenStorage GUI

2008-11-11 Thread Ed Saipetch
Can someone clarify Sun's approach to opensourcing projects and  
software?  I was under the impression the strategy was to charge for  
hardware, maintenance and PS.  If not, some clarification would be nice.

On Nov 11, 2008, at 12:38 PM, Bryan Cantrill wrote:
>
>  4.  If we do make something available, it won't be free.
>
> If you are willing/prepared(/eager?) to abide by these constraints,  
> please
> let us ([EMAIL PROTECTED]) know -- that will help us build the  
> business
> case for doing this...
>
>   - Bryan
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with Traditional SAN

2008-08-21 Thread Ed Saipetch
>
>
> That's the one that's been an issue for me and my customers - they  
> get billed back for GB allocated to their servers by the back end  
> arrays.
> To be more explicit about the 'self-healing properties' -
> To deal with any fs corruption situation that would traditionally
> require an fsck on UFS (SAN switch crash, multipathing issues,
> cables going flaky or getting pulled, server crash that corrupts
> fs's), ZFS needs some disk redundancy in place (raidz, a zfs mirror,
> etc.) so it has parity or a second copy and can recover.
> Which means that to use ZFS, a customer has to pay more to get the
> back-end storage redundancy they need to recover from anything that
> would cause an fsck on UFS.  I'm not saying it's a bad implementation
> or that the gains aren't worth it, just that cost-wise, ZFS is more
> expensive in this particular bill-back model.
>
> cheers,
> Brian
>>


Why would the customer need to use raidz or zfs mirroring if the array  
is doing it for them?  As someone else posted, metadata is already  
redundant by default and doesn't consume a ton of space.  Some people
may disagree, but the first thing I like about ZFS is the ease of pool
management, and the second is the checksumming.
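
If you do want user data (not just metadata) protected against corruption
on a single array-backed LUN without burning a second LUN on raidz or a
mirror, the copies property is one option.  A quick sketch (the pool and
filesystem names here are made up; copies=2 roughly doubles the space used
by that filesystem, and only new writes get the extra copy):

# zfs set copies=2 tank/data      (store two copies of each new data block)
# zfs get copies tank/data        (verify the setting)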

When a customer had issues with Solaris 10 x86, VxFS, and EMC
PowerPath, I took them down the road of using PowerPath and ZFS.  We made
a few tweaks so ZFS wouldn't tell the array to flush to rust, and they're
happy as clams.
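
The tweak in question is the usual Evil Tuning Guide one: stop ZFS from
sending cache-flush commands to an array whose write cache is already
battery-backed.  Roughly, from memory (this assumes a build recent enough
to have the zfs_nocacheflush tunable; double-check the guide before
applying it anywhere):

* in /etc/system, reboot required; only safe with a non-volatile array cache
set zfs:zfs_nocacheflush = 1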
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Ed Saipetch
This array has not been formally announced yet, and as far as I know
there is no general-availability information.  I saw the docs last week,
and the product was supposed to be launched a couple of weeks ago.

Unofficially, this is Sun's continued push to develop cheaper storage
options that can be combined with Solaris and the Open Storage
initiative to give customers options they don't have today.
I'd expect the price point to be quite a bit lower than the LC 24XX
series of arrays.

On Jul 2, 2008, at 7:49 AM, Ben B. wrote:

> Hi,
>
> According to the Sun Handbook, there is a new array :
> SAS interface
> 12 disks SAS or SATA
>
> ZFS could be used nicely with this box.
>
> There is an another version called
> J4400 with 24 disks.
>
> Doc is here :
> http://docs.sun.com/app/docs/coll/j4200
>
> Does someone know price and availability for these products ?
>
> Best Regards,
> Ben
>
>
> This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problem with oracle

2008-04-01 Thread Ed Saipetch

Wiwat,

Make sure you have read the ZFS Best Practices Guide and the Evil
Tuning Guide for helpful information on optimizing ZFS for Oracle.
There are several tweaks that can improve performance, such as using a
separate filesystem for the redo logs and separating the ZFS intent
log (ZIL) from the main pool; a rough sketch follows the links below.


They can be found here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
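
As a very rough illustration of the kind of layout those guides describe
(the pool, filesystem, and device names here are hypothetical, and a
separate log device needs a reasonably recent build):

# zfs create -o recordsize=8k tank/oradata    (match Oracle's 8K db_block_size)
# zfs create tank/oralogs                     (redo logs on their own filesystem)
# zpool add tank log c2t0d0                   (put the ZIL on a separate, fast device)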

Also, what kind of disk subsystem do you have (number of disks, is it
an array?, etc.), and how are your ZFS pools configured (RAID type,
separate ZIL, etc.)?


Hope this gives you a start.

-Ed

Wiwat Kiatdechawit wrote:


I implemented ZFS with Oracle, but it is much slower than UFS.  Do you
have any solution?

Can I fix this problem with ZFS direct I/O?  If so, how do I set it?

Wiwat



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Ed Saipetch
Tried that... completely different cases with different power supplies.

On Oct 30, 2007, at 10:28 AM, Al Hopper wrote:

> On Mon, 29 Oct 2007, MC wrote:
>
>>> Here's what I've done so far:
>>
>> The obvious thing to test is the drive controller, so maybe you  
>> should do that :)
>>
>
> Also - while you're doing swapTronics - don't forget the Power Supply
> (PSU).  Ensure that your PSU has sufficient capacity on its 12Volt
> rails (older PSUs didn't even tell you how much current they can push
> out on the 12V outputs).
>
> See also: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta
>
> Regards,
>
> Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
>Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
> OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
> http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
> Graduate from "sugar-coating school"?  Sorry - I never attended! :)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Ed Saipetch
To answer a number of questions:

Regarding different controllers, I've tried two Syba SiI 3114 controllers
purchased about four months apart.  I've tried 5.4.3 firmware with one and
5.4.13 with the other.  Maybe Syba makes crappy SiI 3114 cards, but it's the
same card that someone on blogs.sun.com used with success.  I had weird
problems flashing the first card I got, hence ordering another one.  I'm not
sure how I could get two different controllers four months apart, use them in
two completely different computers, and have both controllers be bad.

Regarding cables, they aren't densely packed.  I've got just one drive
attached in this new instance.  In the old setup, I had four unbundled cables
(not bound together) attached between the card and the drives.

Here's an error on startup from /var/adm/messages; note, however, that this
error didn't come up on the old motherboard/CPU combo with the older 3114
HBA.  These errors happen only during boot and don't happen during file
transfers:

Sep 14 23:51:49 eknas genunix: [ID 936769 kern.info] sd0 is /[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Sep 14 23:52:11 eknas scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED] (ata0):
Sep 14 23:52:11 eknas   timeout: abort request, target=1 lun=0

Here's the scanpci output:
pci bus 0x cardnum 0x08 function 0x00: vendor 0x1095 device 0x3114
 Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller

and prtconf -pv:
subsystem-vendor-id:  1095
subsystem-id:  3114
unit-address:  '8'
class-code:  00018000
revision-id:  0002
vendor-id:  1095
device-id:  3114

and prtconf -D:
pci-ide, instance #0 (driver name: pci-ide)
ide, instance #0 (driver name: ata)

and pertinent modinfo:
 40 fbbf1250   1050 224   1  pci-ide (pciide nexus driver for 'PCI-ID)
 41 f783c000  10230 112   1  ata (ATA AT-bus attachment disk cont)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Ed Saipetch
Hello,

I'm experiencing major checksum errors when using a Syba Silicon Image 3114
based PCI SATA controller with non-RAID firmware.  I've tested by copying data
via sftp and SMB.  With everything I've swapped out, I can't fathom this being
a hardware problem.  There have been quite a few blog posts from people with a
similar config who aren't having any problems.

Here's what I've done so far:
1. Changed Solaris releases from S10 U3 to NV 75a
2. Switched out motherboards and CPUs, from an AMD Sempron to a Celeron D
3. Switched out memory to use completely different DIMMs
4. Switched out SATA drives (2-3 250GB Hitachis and Seagates in RAIDZ,
3x400GB Seagates in RAIDZ, and 1x250GB Hitachi with no RAID)

Here's the output of a scrub and the status (ignore the date and time; I
haven't reset the clock on this new motherboard).  Please point me in the
right direction if I'm barking up the wrong tree.

# zpool scrub tank
# zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0   293
  c0d1  ONLINE   0 0   293

errors: 140 data errors, use '-v' for a list
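
For completeness, the per-file list of damage comes from the verbose form
(output omitted here, it's just a long list of paths):

# zpool status -v tank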
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss