Re: [zfs-discuss] Public ZFS API ?

2009-03-18 Thread Erast Benson
On Tue, 2009-03-17 at 14:53 -0400, Cherry Shu wrote:
 Are there any plans for an API that would allow ZFS commands, including
 snapshot/rollback, to be integrated with a customer's application?

Sounds like you are looking for an abstraction layer on top of an
integrated solution such as NexentaStor. Take a look at the API it
provides here:

http://www.nexenta.com/nexentastor-api

SA-API has bindings for C, C++, Perl, Python and Ruby. The
documentation contains examples and samples demonstrating SA-API
applications in each of these languages. You can develop and run
SA-API applications on both Windows and Linux platforms.
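
In the absence of a formal public API in the stock bits, applications can
also simply drive the zfs(1M) CLI for snapshot and rollback. A rough
sketch (the pool and dataset names are just examples):

  # zfs snapshot tank/appdata@pre-change      (checkpoint before the application change)
  # zfs list -t snapshot -r tank/appdata      (list the snapshots that exist)
  # zfs rollback -r tank/appdata@pre-change   (revert if the change goes wrong; -r also
                                               discards any snapshots newer than it)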

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] AVS and ZFS demos - link broken?

2009-03-17 Thread Erast Benson
James,

there is also this demo:

http://www.nexenta.com/demos/auto-cdp.html

showing how AVS and ZFS are integrated in NexentaStor.

On Tue, 2009-03-17 at 10:25 -0600, James D. Rogers wrote:
 The links to the Part 1 and Part 2 demos on this page
 (http://www.opensolaris.org/os/project/avs/Demos/) appear to be
 broken.
 
  
 
 http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/ 
 
 http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/ 
 
  
 
 James D. Rogers
 
 NRA, GOA, DAD -- and I VOTE!
 
 2207 Meadowgreen Circle
 
 Franktown, CO 80116
 
  
 
 coyote_hunt...@msn.com
 
 303-688-0480
 
 303-885-7410 Cell (Working hours and when coyote huntin'!)
 
  
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Erast Benson
Hi Stephen,

NexentaStor v1.1.5+ could be an alternative, I think. It includes the
new COMSTAR integration, i.e. the ZFS shareiscsi property is actually
backed by the COMSTAR iSCSI target - functionality that is not available
in SXCE. http://www.nexenta.com/nexentastor-relnotes
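
For illustration, on plain ZFS the shareiscsi workflow looks roughly like
this (pool and volume names are examples; which target stack backs the
property depends on the build):

  # zfs create -V 100g tank/vol01         (create a 100 GB zvol to export)
  # zfs set shareiscsi=on tank/vol01      (export the zvol as an iSCSI target)
  # iscsitadm list target                 (list targets under the legacy iscsitgt)
  # stmfadm list-lu                       (or, under COMSTAR, list the logical units)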

On Wed, 2009-03-04 at 07:07 +, Stephen Nelson-Smith wrote:
 Hi,
 
 I recommended a ZFS-based archive solution to a client needing to have
 a network-based archive of 15TB of data in a remote datacentre.  I
 based this on an X2200 + J4400, Solaris 10 + rsync.
 
 This was enthusiastically received, to the extent that the client is
 now requesting that their live system (15TB data on cheap SAN and
 Linux LVM) be replaced with a ZFS-based system.
 
 The catch is that they're not ready to move their production systems
 off Linux - so web, db and app layer will all still be on RHEL 5.
 
 As I see it, if they want to benefit from ZFS at the storage layer,
 the obvious solution would be a NAS system, such as a 7210, or
 something built from a JBOD and a head node that does something
 similar.  The 7210 is out of budget - and I'm not quite sure how it
 presents its storage - is it NFS/CIFS?  If so, presumably it would be
 relatively easy to build something equivalent, but without the
 (awesome) interface.
 
 The interesting alternative is to set up Comstar on SXCE, create
 zpools and volumes, and make these available either over a fibre
 infrastructure, or iSCSI.  I'm quite excited by this as a solution,
 but I'm not sure if it's really production ready.
 
 What other options are there, and what advice/experience can you share?
 
 Thanks,
 
 S.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Erast Benson
pNFS is NFS-centric, of course, and it is not yet stable, is it? BTW,
what is the ETA for the pNFS putback?

On Thu, 2008-10-16 at 12:20 -0700, Marion Hakanson wrote:
 [EMAIL PROTECTED] said:
  It's interesting how the speed and optimisation of these maintenance
  activities limit pool size.  It's not just full scrubs.  If the filesystem 
  is
  subject to corruption, you need a backup.  If the filesystem takes two 
  months
  to back up / restore, then you need really solid incremental backup/restore
  features, and the backup needs to be a cold spare, not just a
  backup---restoring means switching the roles of the primary and backup
  system, not actually moving data.   
 
 I'll chime in here with feeling uncomfortable with such a huge ZFS pool,
 and also with my discomfort of the ZFS-over-ISCSI-on-ZFS approach.  There
 just seem to be too many moving parts depending on each other, any one of
 which can make the entire pool unavailable.
 
 For the stated usage of the original poster, I think I would aim toward
 turning each of the Thumpers into an NFS server, configure the head-node
 as a pNFS/NFSv4.1 metadata server, and let all the clients speak parallel-NFS
 to the cluster of file servers.  You'll end up with a huge logical pool,
 but a Thumper outage should result only in loss of access to the data on
 that particular system.  The work of scrub/resilver/replication can be
 divided among the servers rather than all living on a single head node.
 
 Regards,
 
 Marion
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-14 Thread Erast Benson
James, all serious ZFS bug fixes have been back-ported to b85, as well
as the Marvell and other SATA drivers. Not everything is possible to
back-port, of course, but I would say all the critical things are there.
This includes the ZFS ARC optimization patches, for example.

On Tue, 2008-10-14 at 22:33 +1000, James C. McPherson wrote:
 Gray Carper wrote:
  Hey there, James!
  
  We're actually running NexentaStor v1.0.8, which is based on b85. We 
  haven't done any tuning ourselves, but I suppose it is possible that 
  Nexenta did. If there's something specific you'd like me to look for, 
  I'd be happy to.
 
 Hi Gray,
 So build 85... that's getting a bit long in the tooth now.
 
 I know there have been *lots* of ZFS, Marvell SATA and iSCSI
 fixes and enhancements since then which went into OpenSolaris.
 I know they're in Solaris Express and the updated binary distro
 form of os2008.05 - I just don't know whether Erast and the
 Nexenta clan have included them in what they are releasing as 1.0.8.
 
 Erast - could you chime in here please? Unfortunately I've got no
 idea about Nexenta.
 
 
 James C. McPherson
 --
 Senior Kernel Software Engineer, Solaris
 Sun Microsystems
 http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
Well, obviously it's a Linux vs. OpenSolaris question. The most serious
advantage of OpenSolaris is ZFS and its enterprise-level storage stack.
Linux is just not there yet.

On Wed, 2008-09-10 at 14:51 +0200, Axel Schmalowsky wrote:
 Hello list,
 
 I hope someone can help me on this topic.
 
 I'd like to know where the *real* advantages of Nexenta/ZFS (i.e. 
 ZFS/StorageTek) over DRBD/Heartbeat are.
 I'm pretty new to this topic and hence do not have enough experience to judge 
 their respective advantages/disadvantages reasonably.
 
 Any suggestion would be appreciated.
 
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
 A disadvantage, however, is that Sun StorageTek Availability Suite 
 (AVS), the DRBD equivalent in OpenSolaris, is much less flexible than 
 DRBD. For example, AVS is intended to replicate in one direction, 
 from a primary to a secondary, whereas DRBD can switch on the fly. 
 See 
 http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30 
 for details on this.

I would be curious to see production environments switching direction
on the fly at that low level... Usually some top-level brain does that
in the context of HA fail-over and so on.

Well, AVS actually does reverse synchronization, and does it very well.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
 On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
   A disadvantage, however, is that Sun StorageTek Availability Suite
   (AVS), the DRBD equivalent in OpenSolaris, is much less flexible than
   DRBD. For example, AVS is intended to replicate in one direction,
   from a primary to a secondary, whereas DRBD can switch on the fly.
   See
   http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
   for details on this.
 
 I would be curious to see production environments switching direction
 on the fly at that low level... Usually some top-level brain does that
 in context of HA fail-over and so on.
 
 By switching on the fly, I mean if the primary services are taken 
 down and then brought up on the secondary, the direction of 
 synchronization gets reversed. That's not possible with AVS because...
 
 well, AVS actually does reverse synchronization and does it very good.
 
 It's a one-time operation that re-reverses once it completes.

When the primary is repaired, you want to bring it back on-line and
retain the changes made on the secondary. The secondary has done its job
and switches back to its secondary role. This HA fail-back cycle can be
repeated as many times as you need using the reverse sync command.
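
Roughly, the fail-back looks like this with the SNDR CLI (flags are from
memory - check sndradm(1M) and substitute your actual set or group name):

  # sndradm -n -l -g mailset        (drop the set into logging mode once the old
                                     primary is reachable again)
  # sndradm -n -u -r -g mailset     (reverse update sync: copy the changes made on
                                     the secondary back to the repaired primary)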

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 18:37 -0400, Maurice Volaski wrote:
 On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
   On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
 A disadvantage, however, is that Sun StorageTek Availability Suite
 (AVS), the DRBD equivalent in OpenSolaris, is much less flexible than
 DRBD. For example, AVS is intended to replicate in one direction,
 from a primary to a secondary, whereas DRBD can switch on the fly.
 See
  http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
 for details on this.
   
   I would be curious to see production environments switching direction
   on the fly at that low level... Usually some top-level brain does that
   in context of HA fail-over and so on.
 
   By switching on the fly, I mean if the primary services are taken
   down and then brought up on the secondary, the direction of
   synchronization gets reversed. That's not possible with AVS because...
 
   well, AVS actually does reverse synchronization and does it very good.
 
   It's a one-time operation that re-reverses once it completes.
 
 When primary is repaired you want to have it on-line and retain the
 changes made on the secondary.
 
 Not necessarily. Even when the primary is ready to go back into 
 service, I may not want to revert to it for one reason or another. 
 That means I am without a live mirror because AVS' realtime mirroring 
 is only one direction, primary to secondary.

This is why I tried to state that this is not a realistic environment
for non-shared-storage HA deployments. DRBD is trying to emulate
shared-storage behavior at the wrong level, where in fact FC- or
iSCSI-connected shared storage should be considered instead.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 19:10 -0400, Maurice Volaski wrote:
 On Wed, 2008-09-10 at 18:37 -0400, Maurice Volaski wrote:
   On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
 On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
   A disadvantage, however, is that Sun StorageTek Availability Suite
   (AVS), the DRBD equivalent in OpenSolaris, is much less 
 flexible than
   DRBD. For example, AVS is intended to replicate in one direction,
   from a primary to a secondary, whereas DRBD can switch on the fly.
   See
   
   http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
   for details on this.
 
 I would be curious to see production environments switching 
  direction
 on the fly at that low level... Usually some top-level brain does 
  that
 in context of HA fail-over and so on.
   
 By switching on the fly, I mean if the primary services are taken
 down and then brought up on the secondary, the direction of
 synchronization gets reversed. That's not possible with AVS because...
   
 well, AVS actually does reverse synchronization and does it very 
  good.
   
 It's a one-time operation that re-reverses once it completes.
   
   When primary is repaired you want to have it on-line and retain the
   changes made on the secondary.
 
   Not necessarily. Even when the primary is ready to go back into
   service, I may not want to revert to it for one reason or another.
   That means I am without a live mirror because AVS' realtime mirroring
   is only one direction, primary to secondary.
 
 This why I tried to state that this is not realistic environment for
 non-shared storage HA deployments.
 
 What's not realistic? DRBD's highly flexible ability to switch roles 
 on the fly is a huge advantage over AVS. But this is not to say AVS 
 is not realistic. It's just a limitation.
 
 DRBD trying to emulate shared-storage
 behavior at a wrong level where in fact usage of FC/iSCSI-connected
 storage needs to be considered.
 
 This makes no sense to me. We're talking about mirroring the storage 
 of two physical and independent systems. How did the concept of 
 shared storage get in here?

This is really outside the ZFS discussion now... but your point is
taken. If you want mirror-like behavior for your 2-node cluster, you'll
get some benefit from DRBD, but my point is that such a solution tries
to solve two problems at the same time, replication and availability,
which in my opinion is plain wrong.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 19:42 -0400, Maurice Volaski wrote:
 On Wed, 2008-09-10 at 19:10 -0400, Maurice Volaski wrote:
   On Wed, 2008-09-10 at 18:37 -0400, Maurice Volaski wrote:
 On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
   On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
 A disadvantage, however, is that Sun StorageTek 
 Availability Suite
 (AVS), the DRBD equivalent in OpenSolaris, is much less
   flexible than
 DRBD. For example, AVS is intended to replicate in one 
 direction,
 from a primary to a secondary, whereas DRBD can switch 
 on the fly.
 See

  http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
 for details on this.
   
   I would be curious to see production environments 
 switching direction
   on the fly at that low level... Usually some top-level 
 brain does that
   in context of HA fail-over and so on.
 
   By switching on the fly, I mean if the primary services are taken
   down and then brought up on the secondary, the direction of
   synchronization gets reversed. That's not possible with 
 AVS because...
 
   well, AVS actually does reverse synchronization and does 
 it very good.
 
   It's a one-time operation that re-reverses once it completes.
 
 When primary is repaired you want to have it on-line and retain the
 changes made on the secondary.
   
 Not necessarily. Even when the primary is ready to go back into
 service, I may not want to revert to it for one reason or another.
 That means I am without a live mirror because AVS' realtime mirroring
 is only one direction, primary to secondary.
   
   This why I tried to state that this is not realistic environment for
   non-shared storage HA deployments.
 
   What's not realistic? DRBD's highly flexible ability to switch roles
   on the fly is a huge advantage over AVS. But this is not to say AVS
   is not realistic. It's just a limitation.
 
   DRBD trying to emulate shared-storage
   behavior at a wrong level where in fact usage of FC/iSCSI-connected
   storage needs to be considered.
 
   This makes no sense to me. We're talking about mirroring the storage
   of two physical and independent systems. How did the concept of
   shared storage get in here?
 
 This is really outside of ZFS discussion now... But your point taken. If
 you want mirror-like behavior of your 2-node cluster, you'll get some
 benefits of DRBD but my point is that such solution trying to solve two
 problems at the same time: replication and availability, which is in my
 opinion plain wrong.
 
 Uh, no, DRBD addresses only replication. Linux-HA (aka Heartbeat)
 addresses availability. They can be an integrated solution and are to
 some degree intended that way, so I have no idea where your opinion
 is coming from.

Because, in my opinion, DRBD takes over some responsibility of the
management layer, if you will. The classic, predominant replication
scheme in HA clusters is primary-backup (or master-slave), and the
backup is by definition not necessarily a system identical to the
primary. Having said that, it is noble for DRBD to implement role
switching, and not a bad idea for many small deployments.

 For replication, OpenSolaris is largely limited to using AVS, whose 
 functionality is limited, at least relative to DRBD. But there seem 
 to be a few options to implement availability, which should include 
 Linux-HA itself, as it should run on OpenSolaris!

Everything is implementable, and I believe the AVS designers thought
about dynamic switching of roles, but they ended up with what we have
today; they likely discarded the idea.

AVS does not switch roles and forces IT admins to use it as a
primary-backup data protection service only.

 But relevant to the poster's initial question, ZFS is so far and away 
 more advanced than any Linux filesystem can even dream about that it 
 handily nullifies any disadvantage in having to run AVS.

Right.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ?: any effort for snapshot management

2008-09-05 Thread Erast Benson
Steffen,

The most complete and serious ZFS snapshot management, with integrated
ZFS send/recv and rsync replication, a CLI, integrated AVS, a GUI, and a
management server which provides a rich API for C/C++/Perl/Python/Ruby
integrators, is available here:

http://www.nexenta.com/nexentastor-overview

It's ZFS plus a lot of reliability fixes: an enterprise-quality,
production-ready solution.

Demos of advanced CLI usage are here:

http://www.nexenta.com/demos/automated-snapshots.html
http://www.nexenta.com/demos/auto-tier-basic.html

As a side note, I think that the disconnected general-purpose scripts
available on the Internet simply cannot provide production quality and
ease of use.
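
For comparison, the raw building blocks in the stock CLI are simple; the
policy, retention and space handling around them is the part that needs
a framework (dataset and snapshot names below are just examples):

  # zfs snapshot tank/data@auto-20080905              (take a dated snapshot)
  # zfs list -t snapshot -o name,used -s creation -r tank/data
                                                      (see which snapshots hold space)
  # zfs destroy tank/data@auto-20080822               (expire an old snapshot)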

On Fri, 2008-09-05 at 13:14 -0400, Steffen Weiberle wrote:
 I have seen Tim Foster's auto-snapshot and it looks interesting.
 
 Is there a bug id or an effort to deliver a snapshot policy and space 
 management framework? Not looking for a GUI, although a CLI-based UI 
 might be helpful. The customer needs something that allows the use of 
 snapshots on 100s of systems, and minimizes the administration needed 
 to handle disks filling up.
 
 I imagine a component is a time- or condition-based auto-delete of older 
 snapshot(s).
 
 Thanks
 Steffen
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] NexentaStor API Windows SDK published

2008-08-04 Thread Erast Benson
Hey folks,

I just saw another piece of cool news this morning - Nexenta Systems has
released documentation for the remote API and a Windows SDK with demos
for accessing NexentaStor. The news itself:

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=154&Itemid=56

ZFS and the rest of the appliance functionality are abstracted via the
Nexenta Management Server (NMS) and available remotely via an API with
the following language bindings: C, C++, Perl, Python and Ruby:

http://www.nexenta.com/nexentastor-api

Another cool feature worth mentioning is the plugin architecture. There
is no API for plugins available yet, but there are a number of
CDDL-licensed plugins available as examples here:

http://www.nexenta.com/nexentastor-plugins

Nice!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Pogo Linux ships NexentaStor pre-installed boxes

2008-08-02 Thread Erast Benson
Hi folks,

I wanted to share some exciting news with you. Pogo Linux is shipping
boxes with NexentaStor pre-installed, like this 16TB - 24TB one:

http://www.pogolinux.com/quotes/editsys?sys_id=3989

And here is the announcement:

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=129&Itemid=56

Pogo says: "Managed Storage – NetApp features without the price..."

Go OpenSolaris, Go!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

2008-02-19 Thread Erast Benson
http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect

On Tue, 2008-02-19 at 14:44 -0600, Bob Friesenhahn wrote:
 It would be useful if people here who have used iSCSI on top of ZFS 
 could share their performance experiences.  It is very easy to waste a 
 lot of time trying to realize unrealistic expectations.  Hopefully 
 iSCSI on top of ZFS normally manages to transfer much more than 
 5MB/second!
 
 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS replication strategies

2008-02-01 Thread Erast Benson
Take a look at NexentaStor - it's a complete 2nd-tier solution:

http://www.nexenta.com/products

AVS is nicely integrated via a management RPC interface which connects
multiple NexentaStor nodes together and greatly simplifies AVS usage
with ZFS... See the demo here:

http://www.nexenta.com/demos/auto-cdp.html
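
For the plain send/receive approach mentioned below, the basic pattern
is (host, pool and snapshot names are examples):

  # zfs snapshot mailpool/cyrus@2008-02-01                 (snapshot on the source)
  # zfs send mailpool/cyrus@2008-02-01 | \
      ssh backuphost zfs receive -F backup/cyrus           (full initial copy)
  # zfs send -i mailpool/cyrus@2008-02-01 mailpool/cyrus@2008-02-02 | \
      ssh backuphost zfs receive backup/cyrus              (incremental delta afterwards)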

On Fri, 2008-02-01 at 10:15 -0800, Vincent Fox wrote:
 Does anyone have any particularly creative ZFS replication strategies they 
 could share?
 
 I have 5 high-performance Cyrus mail-servers, with about a terabyte of 
 storage each, of which only 200-300 gigs is used, even including 14 days 
 of snapshot space.
 
 I am thinking about setting up a single 3511 with 4 terabytes of storage at a 
 remote site as a backup device for the content.  Struggling with how to 
 organize the idea of wedging 5 servers into the one array though.
 
 The simplest way that occurs to me is one big RAID-5 storage pool with all 
 disks.  Then slice out 5 LUNs, each as its own ZFS pool.  Then use zfs send & 
 receive to replicate the pools.
 
 Ideally I'd love it if ZFS directly supported the idea of rolling snapshots 
 out into slower secondary storage disks on the SAN, but in the meanwhile 
 looks like we have to roll our own solutions.
  
 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Issue fixing ZFS corruption

2008-01-23 Thread Erast Benson
Well, we had some problems with the si3124 driver, but with the driver
binary posted in this forum the problem seems to have been fixed. Later
we saw the same fix go into b72.

On Thu, 2008-01-24 at 05:11 +0300, Jonathan Stewart wrote:
 Jeff Bonwick wrote:
  The Silicon Image 3114 controller is known to corrupt data.
  Google for silicon image 3114 corruption to get a flavor.
  I'd suggest getting your data onto different h/w, quickly.
 
 I'll second this, the 3114 is a piece of junk if you value your data.  I 
 bought a 4 port LSI SAS card (yes a bit pricy) and have had 0 problems 
 since and hot swap actually works.  I never tried it with the 3114 I had 
 just never seen it actually working before so I was quite pleasantly 
 surprised.
 
 Jonathan
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Integrated transactional upgrades with ZFS

2008-01-17 Thread Erast Benson
Hi guys,

A new article is available explaining in detail how enterprise-like
upgrades are integrated with Nexenta Core Platform, starting from RC2,
using ZFS capabilities and Debian APT:

http://www.nexenta.org/os/TransactionalZFSUpgrades
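
The underlying primitives are the standard ZFS ones; apt-clone automates
the cloning and boot-menu handling around the package transaction. A
bare sketch of the idea (dataset name is an example, and rolling back a
live root normally means booting the snapshot/clone rather than an
in-place rollback):

  # zfs snapshot syspool/rootfs@pre-upgrade     (checkpoint the root dataset)
  # apt-get dist-upgrade                        (apply the upgrade)
  # zfs rollback syspool/rootfs@pre-upgrade     (back out if the upgrade misbehaves)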

What is NexentaCP?

NexentaCP is a minimal (core) foundation that can be used to quickly
build servers, desktops, and custom distributions tailored for
specialized applications such as NexentaStor. Like the NexentaOS desktop
distribution, NexentaCP combines a reliable, state-of-the-art kernel
with the GNU userland and the ability to integrate open source
components in no time. However, unlike the NexentaOS desktop
distribution, NexentaCP does not aim to provide a complete desktop. The
overriding objective for NexentaCP is a stable foundation.

Enjoy!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Nexenta/Debian APT integrated with ZFS now...

2007-12-19 Thread Erast Benson
Hi All,

This is the road to NCP 1.0...

Our motto:

Ubuntu makes the best Debian desktop platform - Nexenta makes the best
Debian server/storage platform.

Some latest Nexenta related news:

1) The official Nexenta Core Platform (NCP) repository is now
http://apt.nexenta.org

2) Unstable APT is now integrated with ON build 79 - give it a try!

3) apt-get is now fully integrated with ZFS cloning. A new management
tool is provided: apt-clone. Never lose your upgrades again!

4) I'm seeking developers who love Debian and will help us join the
Debian community. We've got general agreement with the Debian leaders,
but some work needs to be done; let's coordinate on the official Nexenta
IRC channel: #nexenta


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexenta/Debian APT integrated with ZFS now...

2007-12-19 Thread Erast Benson
Thank you!

We are working on it. A new website is coming, as well as the next
release of NCP. Meanwhile, the old RC1 can be downloaded from:

http://archive.nexenta.org/releases

On Wed, 2007-12-19 at 18:01 -0800, MC wrote:
  2) Unstable APT integrated with ON build 79, give it a try!
 
 Excellent progress!!  But your website is out of date and I cannot find a 
 NexentaCP link on the download page.  Only the old NexentaOS link.  Also you 
 should update the news page so it looks like the project is active :)
  
 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NexentaCP Beta1-test2 (ZFS/Boot - manual partitioning support)

2007-06-28 Thread Erast Benson
Just use the pkgadd -d wrapper. It will auto-magically convert an SVR4
package to .deb(s) and install them on the fly. You can also use pkgrm
to remove them. A pkginfo wrapper is also available.
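
Usage is the same as with the native SVR4 tools (the package name here
is just an example):

  $ sudo pkgadd -d SUNWfoo.pkg       (converted to a .deb and installed on the fly)
  $ pkginfo | grep SUNWfoo           (query it as if it were an SVR4 package)
  $ sudo pkgrm SUNWfoo               (remove it again)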

On Thu, 2007-06-28 at 16:38 +0200, Selim Daoud wrote:
 Superb job... the Synaptic package manager is really impressive.
 Is there a way to transform a Sun package into a Synaptic package?
 
 selim
 
 On 6/22/07, Al Hopper [EMAIL PROTECTED] wrote:
  On Fri, 22 Jun 2007, Erast Benson wrote:
 
   New unstable ISO of NexentaCP (Core Platform) available.
  
   http://www.gnusolaris.org/unstable-iso/ncp_beta1-test2-b67_i386.iso
 
  Also available at:
 
  http://www.genunix.org/distributions/gnusolaris/index.html
 
   Changes:
  
   * ON B67 based
   * ZFS/Boot manual partitioning support implemented (in addition to
   auto-partitioning). Both, Wizard and FDisk types fully supported.
   * gcc/g++ now officially included on installation media
   * APT repository fixed
   * first official meta-package: nexenta-gnome
  
   After installation, those who needs GNOME environment, just type:
  
   $ sudo apt-get install nexenta-gnome
  
   Known bugs:
  
   * after fresh install APT caches needs to be re-created:
  
   $ sudo rm /var/lib/apt/*
   $ sudo apt-get update
   --
   Erast
 
  Regards,
 
  Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
  Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
  OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
  http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Announcing NexentaCP(b65) with ZFS/Boot integrated installer

2007-06-07 Thread Erast Benson
Announcing a new direction of open source NexentaOS development:
NexentaCP (Nexenta Core Platform).

NexentaCP is a Dapper/LTS-based core operating system platform
distributed as a single-CD ISO. It integrates Installer/ON/NWS/Debian
and provides the basis for network-type installations via the main or
third-party APT repositories (NEW).

The first unstable b65-based ISO with a ZFS/Boot-capable installer is
available as usual at:
http://www.gnusolaris.org/unstable-iso/ncp_beta1-test1-b65_i386.iso

Please give it a try and start building your own APT repositories and
communities today!

Note: this version of the installer supports ZFS/Boot installations on a
single disk or a 2+ disk mirror configuration. For now, only Auto
partitioning mode can be used for ZFS root partition creation.

More details on NexentaCP will be available soon...

-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [osol-discuss] Re: Announcing NexentaCP(b65) with ZFS/Boot integrated installer

2007-06-07 Thread Erast Benson
On Thu, 2007-06-07 at 16:26 -0400, Francois Saint-Jacques wrote:
 On Wed, Jun 06, 2007 at 11:51:08PM -0700, Erast Benson wrote:
  More details on NexentaCP will be available soon...
 
 Is it based on Alpha7?

Alpha7 is the desktop-oriented ISO; however, they share the same main
APT repository, i.e. Dapper/LTS.

So far the core team has agreed on the following major decisions:

1) NexentaCP will follow Ubuntu LTS releases only;
2) the main set of NexentaCP packages shipped on the ISO will be greatly
reduced and will contain only a highly tested base minimum;
3) NexentaCP will offer network-type installations using the main
(LTS-based) or third-party repositories via the installer or
after-install wizards.

FYI, Martin mentioned some of the main goals of this move during the
LinuxTag conference: http://martinman.net/

-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Erast Benson
Did you measure CPU utilization during the tests, by any chance?
It's a T2000, and the CPU cores are quite slow on this box, hence they
might be a bottleneck.

Just a guess.
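
A quick way to check is to run the standard observability tools
alongside the benchmark, e.g.:

  $ mpstat 5         (per-CPU utilization - look for cores pegged in %usr or %sys)
  $ iostat -xn 5     (per-device service times - shows whether the disks or the
                      host are the limiting factor)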

On Mon, 2007-04-16 at 13:10 -0400, Tony Galway wrote:
 I had previously undertaken a benchmark that pits “out of box”
 performance of UFS via SVM, VxFS and ZFS but was waylaid due to some
 outstanding availability issues in ZFS. These have been taken care of,
 and I am once again undertaking this challenge on behalf of my
 customer. The idea behind this benchmark is to show
 
  
 
 a.  How ZFS might displace the current commercial volume and file
 system management applications being used.
 
 b. The learning curve of moving from current volume management
 products to ZFS.
 
 c.  Performance differences across the different volume management
 products.
 
  
 
 VDBench is the test bed of choice as this has been accepted by the
 customer as a telling and accurate indicator of performance. The last
 time I attempted this test it had been suggested that VDBench is not
 appropriate for testing ZFS. I cannot see that being a problem: VDBench
 is a tool – if it highlights performance problems, then I would think
 it is a very effective tool, so that we might better be able to fix
 those deficiencies.
 
  
 
 Now, to the heart of my problem!
 
  
 
 The test hardware is a T2000 connected to a 12 disk SE3510 (presenting
 as JBOD)  through a brocade switch, and I am using Solaris 10 11/06.
 For Veritas, I am using Storage Foundation Suite 5.0. The systems were
 jumpstarted to the same configuration before testing a different
 volume management software to ensure there were no artifacts remaining
 from any previous test.
 
  
 
 I present my vdbench definition below for your information:
 
  
 
 sd=FS,lun=/pool/TESTFILE,size=10g,threads=8
 
 wd=DWR,sd=FS,rdpct=100,seekpct=80
 
 wd=ETL,sd=FS,rdpct=0,  seekpct=80
 
 wd=OLT,sd=FS,rdpct=70, seekpct=80
 
 rd=R1-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R1-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R1-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
  
 
 As you can see, it is fairly straight forward and I take the average
 of the three runs in each of ETL, OLT and DWR workloads. As an aside,
 I am also performing this test for various file system block sizes as
 applicable as well.
 
  
 
 I then ran this workload against a Raid-5 LUN created and mounted in
 each of the different file system types. Please note that one of the
 test criteria is that the associated volume management software create
 the Raid-5 LUN, not the disk subsystem.
 
  
 
 1.  UFS via SVM
 
 # metainit d20 –r d1 … d8 
 
 # newfs /dev/md/dsk/d20
 
 # mount /dev/md/dsk/d20 /pool
 
  
 
 2.  ZFS
 
 # zpool create pool raidz d1 … d8
 
  
 
 3.  VxFS – Veritas SF5.0
 
 # vxdisk init SUN35100_0 ….  SUN35100_7
 
 # vxdg init testdg SUN35100_0  … 
 
 # vxassist –g testdg make pool 418283m layout=raid5
 
  
 
  
 
 Now to my problem – Performance!  Given the test as defined above,
 VxFS absolutely blows the doors off of both UFS and ZFS during write
 operations. For example, during a single test on an 8k file system
 block, I have the following average IO Rates:
 
  
 
  
 
 
              ETL        OLTP         DWR
 UFS        390.00     1298.44    23173.60
 VxFS     15323.10    27329.04    22889.91
 ZFS       2122.23     7299.36    22940.63
 
 
 
  
 
  
 
 If you look at these numbers percentage-wise, with VxFS set to 100%: for
 ETL, UFS runs at 2.5% of its speed and ZFS at 13.8%; for OLTP, UFS is at
 4.8% and ZFS at 26.7%; however, for DWR, which is 100% reads and no
 writes, performance is similar, with UFS at 101.2% and ZFS at 100.2% of
 the speed of VxFS.
 
  
 
   [embedded chart of the above IO rates omitted]
 
  
 
  
 
 Given these performance problems, VxFS quite rightly deserves to be the
 file system of choice, even with a cost premium. If anyone has any
 insight into why I am seeing, consistently, these types of very
 disappointing numbers, I would very much appreciate your comments. The
 numbers are very disturbing, as they indicate that write 

Re: [zfs-discuss] Data Management API

2007-03-20 Thread Erast Benson
On Tue, 2007-03-20 at 16:22 +, Darren J Moffat wrote:
 Robert Milkowski wrote:
  Hello devid,
  
  Tuesday, March 20, 2007, 3:58:27 PM, you wrote:
  
  d Does ZFS have a Data Management API to monitor events on files and
  d to store arbitrary attribute information with a file? Any answer on
  d this would be really appreciated.
  
   IIRC, a file event mechanism is being developed - something more
   general which should work with other file systems too. I have no idea
   of its status or whether someone has even started coding it.
  
  Your second question - no, you can't.
 
 Yes you can, and it has been there since even before ZFS existed - see
 fsattr(5). It isn't ZFS-specific but a generic attribute extension to
 the filesystems, currently supported by ufs, nfs, zfs and tmpfs.

Apparently fsattr is not part of OpenSolaris, or at least I can't find
it...

-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Data Management API

2007-03-20 Thread Erast Benson
On Tue, 2007-03-20 at 09:29 -0700, Erast Benson wrote:
 On Tue, 2007-03-20 at 16:22 +, Darren J Moffat wrote:
  Robert Milkowski wrote:
   Hello devid,
   
   Tuesday, March 20, 2007, 3:58:27 PM, you wrote:
   
   d Does ZFS have a Data Management API to monitor events on files and
   d to store arbitrary attribute information with a file? Any answer on
   d this would be really appreciated.
   
   IIRC correctly there's being developed file event mechanism - more
   general which should work with other file systems too. I have no idea
   of its status or if someone even started coding it.
   
   Your second question - no, you can't.
  
  Yes you can and it has been there even before ZFS existed see fsattr(5) 
  it isn't ZFS specific but a generic attribute extension to the 
  filesystems, currently supported by ufs, nfs, zfs, tmpfs.
 
 apparently fsattr is not part of OpenSolaris or at least I can't find
 it..

Oh, this is an API...
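
For anyone following along, the facility that fsattr(5) documents is
driven from the shell with runat(1); a small example (file and attribute
names are made up):

  $ runat report.txt cp /tmp/meta.xml appmeta   (store /tmp/meta.xml as attribute "appmeta")
  $ runat report.txt ls -l                      (list the file's extended attributes)
  $ ls -@ report.txt                            (the @ flag marks files carrying attributes)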

-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Solaris as a VMWare guest

2007-03-12 Thread Erast Benson
On Mon, 2007-03-12 at 20:53 -0600, James Dickens wrote:
 
 
 On 3/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: 
 What issues, if any, are likely to surface with using Solaris
 inside vmware as a guest os, if I choose to use ZFS?
 
 works great in vmware server, IO rates suck. 
 
 
 I'm assuming that ZFS's ability to maintain data integrity 
 will prevail and protect me from any problems that the
 addition of vmware might introduce.
 
 No problems so far; I created two virtual disks and a concat. It's just
 a toy/test bed for Nexenta; the only problem I have with Nexenta is that
 64-bit mode crashes on boot. b55 may have fixed it, who knows.

It's the ae driver. Murayama fixed it recently in the unstable branch.
If you don't want to upgrade to the latest, you could change your VMware
settings to use the e1000g driver instead. Or just upgrade myamanet-ae
from unstable like this:

$ sudo apt-get install myamanet-ae

 
 Are there likely to be any issues with disk drive IO
 performance?
 
 I'm getting 11MB/s on bonnie++; the disks are backed by SATA drives
 on an Ultra 20 2.6GHz, and the guest has 512MB allocated.
 
 
 Not exactly a speed demon - it would get about 130MB/s on the raw
 hardware.
 
 
 James Dickens
 uadmin.blogspot.com
 
 
  
 
 The concern here is with comments on how ZFS likes to
 own spindles so that it can properly schedule I/O and
 maximise performance.
 
 Any other gotchas, such as the extra vmware layer doing
 buffering that ZFS isn't aware of, etc? 
 
 If there are problems, are they likely to be any
 better/different
 when using ZFS and Solaris as a Xen domU?
 
 Darren
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: ZFS for Linux 2.6

2006-11-07 Thread Erast Benson
On Tue, 2006-11-07 at 10:30 -0800, Akhilesh Mritunjai wrote:
   Yuen L. Lee wrote:
  opensolaris could be a nice NAS filer. I posted
  my question, "How to build a NAS box", asking for
  instructions on how to build a Solaris NAS box.
  It looks like everyone is busy. I haven't got any
  response yet. By any chance, do you have any
 
 Hi Yuen
 
 May I suggest that a better question would have been "How to build a
 minimal Nevada distribution?". I'm sure it would have gotten more
 responses, as it is both a more general and a more relevant question.
 
 Apart from that unasked advice, If my memory serves right the Belenix folks 
 (Moinak and gang) were discussing a similar thing in a thread sometime 
 back... chasing them might be a good idea ;-)
 
 I found some articles on the net on how to build a minimal image of
 Solaris with networking. Packages relating to storage (ZFS, iSCSI etc.)
 can be added to it later. The minimal system with the required
 components is admittedly heavy - about 200MB... but that shouldn't be an
 issue for a *NAS* box. I googled "minimal Solaris configuration" and
 found several articles.

An alternative way would be to simply use the NexentaOS InstallCD and
select the Minimal profile during installation.

-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss