Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-13 Thread Erast



On 08/13/2010 01:39 PM, Tim Cook wrote:

http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/

I'm a bit surprised at this development... Oracle really just doesn't
get it.  The part that's most disturbing to me is the fact they won't be
releasing nightly snapshots.  It appears they've stopped Illumos in its
tracks before it really even got started (perhaps that explains the
timing of this press release).


Wrong. Be patient: at the current pace of Illumos development, it will
soon have all the closed binaries liberated and be ready to sync up with
the promised ON code drops, as dictated by the GPL and CDDL licenses.



Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-05 Thread Erast

In 3.0.3+, a new option lists the appliance changelog going forward:

nmc$ show version -c

On 07/04/2010 05:58 PM, Bohdan Tashchuk wrote:

Where can I find a list of these?


This leads to the more generic question of: where are *any* release notes?

I saw on Genunix that Community Edition 3.0.3 was replaced by 3.0.3-1.
What changed? I went to nexenta.org and looked around. But it wasn't
immediately obvious where to find release notes. Also, as Tim Cook
noted, the Nexenta forums aren't exactly "lively".

For a simple, easily understood and easily navigated web site, you can't beat 
www.openbsd.org. Both Sun/Oracle and Nexenta could learn a lot from it. And I can also 
follow very clean, simple instructions for running the "stable" OpenBSD branch 
(which is mostly security fixes).



Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-20 Thread Erast

Hi Kyle,

It is very likely that you hit a driver bug in isp. After the reboot,
take a look at the /var/adm/messages file - anything related might shed
some light.

I wouldn't suspect the Intel GigE card - it's a fairly good one, and the
driver is very stable.

Also, some upgrades have been posted; make sure the kernel displays 134e
after the reboot into the new upgrade checkpoint. The upgrade command:

nmc$ setup appliance upgrade

On 05/20/2010 08:05 AM, Kyle McDonald wrote:

Hi all,

I recently installed Nexenta Community 3.0.2 on one of my servers:

IBM eSeries X346
2.8GHz Xeon
12GB DDR2 RAM
1 built-in BGE interface for management
4-port Intel GigE card, aggregated, for data
IBM ServeRAID 7k with 256MB battery-backed cache (isp driver)
  6 RAID0 single-drive LUNs (so I can use the cache)
    1 18GB LUN for the rpool
    5 300GB LUNs for the data pool
1 RAIDZ1 pool from the 5 300GB drives
  4 test filesystems
    1 No DeDup, No Compression
    1 DeDup, No Compression
    1 No DeDup, Compression
    1 DeDup, Compression

This is pretty old hardware, so I wasn't expecting miracles, but I
thought I'd give it a shot.
My workload is NFS service to software build servers (CVS checkouts,
untarring files, compiling, etc.). I'm hoping the many CVS checkout
trees will lend themselves well to DeDup, and I know source code should
compress easily.

I set up one client with a single GigE connection, mounted the four
filesystems (plus one from the NetApp we have here), and proceeded to
write a loop to time both un-tarring the gcc-4.3.3 sources to those 5
filesystems and to 1 local directory, and to rm -rf the sources too.
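A minimal sketch of that kind of timing loop (the mount points here are hypothetical):

#!/bin/sh
# Time an untar and an rm -rf of the gcc sources on each test filesystem.
for dir in /mnt/nodedup-nocomp /mnt/dedup-nocomp /mnt/nodedup-comp /mnt/dedup-comp /var/tmp/local; do
    echo "== $dir =="
    ( cd "$dir" && time tar xf /var/tmp/gcc-4.3.3.tar )
    time rm -rf "$dir/gcc-4.3.3"
done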

The tar took 28 seconds, and 10 seconds to remove, in the local dir.
Then, on the first ZFS/NFS filesystem mount, it took basically forever
and hung the Nexenta server. I was watching it go on the web admin page
and it all looked fine for a while, then the client started reporting
'NFS Server not responding, still trying...' For a while there were also
'NFS Server OK' messages, and the Web GUI remained responsive.
Eventually the OK messages stopped, and the Web GUI froze.

I went and rebooted the NFS client, thinking that if the requests
stopped the server might catch up, but it never started responding again.

I was only untarring a file... How did this bring the machine down?
I hadn't even gotten to the FS's that had DeDup or Compression turned
on, so those shouldn't have affected things - yet.

Any ideas?

   -Kyle





[zfs-discuss] Co-creator of ZFS, Bill Moore joins Nexenta advisory board

2010-04-20 Thread Erast

Good news for Nexenta and the OpenSolaris community in general:

http://www.nexenta.com/corp/blog/2010/04/06/bill-moore-joins-nexenta-advisory-board/

Nexenta is inviting talent and hiring OpenSolaris kernel/API engineers.
If you are in the SF Bay Area and you think you are qualified, send your
resume by following the instructions below:


http://www.nexenta.com/corp/nexenta-careers


Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-28 Thread Erast



Eric D. Mudama wrote:

On Wed, Oct 28 at 13:40, "C. Bergström" wrote:

Tim Cook wrote:



   PS: Not having enough engineers to support a growing and paying
   customer base is a *good* problem to have.  The opposite is much, much worse.



So use Nexenta?

Got data you care about?

Verify extensively before you jump to that ship.. :)


I am not aware of any data issues; it's simply that when I investigated
Nexenta, they lagged far enough behind OpenSolaris that I was concerned
they didn't have enough critical mass to keep up.  High-quality
distros are a ton of work.

That, and the supported NexentaStor pricing exceeded our $2k ceiling.


As far as I know, the Developer Edition is free of charge for up to 4TB:

http://www.nexentastor.org


Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-28 Thread Erast



C. Bergström wrote:

Eugen Leitl wrote:

On Wed, Oct 28, 2009 at 01:40:12PM +0800, "C. Bergström" wrote:

 

So use Nexenta?
  

Got data you care about?

Verify extensively before you jump to that ship.. :)



So you're saying Nexenta have been known to drop bits on
the floor, unprovoked? Inquiring minds...
  
I would say this same thing if it were my company or my product..
regardless of whether it's Sun, Nexenta, or any company.. verify the
product so you can know the risks.. It's an open source project.. talk
with the developers and those in the community who are using it for
similar purposes as you would..


I agree 100%. That is the reason why FishWorks, in collaboration with
their HW team, and NexentaStor, in collaboration with its HW partners,
exist - it's all about testing, verification, and then testing again.
Especially when we are talking about storage software.

I think the idea of storage appliance software is just great! It nails
OpenSolaris down to very specific storage purposes. This also simplifies
testing, because a storage appliance doesn't need to care about things
like sound drivers, a GUI, etc.


I think the Open Storage message is extremely powerful.


Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-27 Thread Erast
As far as I know, it's quite an effort! Not just for the X4275
specifically, but in general with any other x86 hardware and
storage-oriented software. A lot of work is required to support a final
solution as well. What Nexenta does with its version of NexentaStor is
enable third-party partners to integrate the software into HW/SW
solutions ready for production use. There is even a social network for
Nexenta partners, where partners talk to each other as well as to
Nexenta experts and polish their final NexentaStor solutions. It's a
process, and it works!


List of Partners: http://www.nexenta.com/partners

Bruno Sousa wrote:

I'm just curious to see how much effort it would take to get the
Fishworks software running on a Sun X4275...
Anyway... let's wait and see.

Bruno

On Tue, 27 Oct 2009 13:29:24 -0500 (CDT), Bob Friesenhahn wrote:

On Tue, 27 Oct 2009, Bruno Sousa wrote:

I can agree that the software is the part that really has the added
value, but in my opinion allowing a stack like Fishworks to run
outside the Sun Unified Storage line would lead to a lower price per
unit (Fishworks license) but maybe increase revenue. Why an increase
in revenues? Well, I assume that a lot of customers would buy
Fishworks to put into their XYZ high-end server.
"Fishworks" products (products that the Fishworks team developed) are 
designed, tweaked, and tuned for particular hardware configurations. 
It is not like general purpose OpenSolaris where the end user gets to 
experiment with hardware configurations and tunings to get the best 
performance (but might not achieve it).


Fishworks engineers are even known to "holler" at the drives as part 
of the rigorous product testing.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/





Re: [zfs-discuss] fishworks on x4275?

2009-10-19 Thread Erast

Frank Cusack wrote:
On October 19, 2009 9:53:14 AM +1300 Trevor Pretty wrote:

Frank

I've been looking into:-
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection&id=4&Itemid=128


Thanks!  I *thought* there was a Nexenta solution but a google search
didn't turn anything up for me.  I'll definitely be looking into this.
The high-level documentation is pretty weak; I guess I have to dig in.
But while I have the attention of this list, does NexentaStor "natively"
support AFP and "bonjour", or can I just add that myself?


You can add this yourself via an NMS plugin. The developers portal
explains the API and provides examples of how this can be done:

http://www.nexentastor.org/

The plugin API documentation is collected here:

http://www.nexentastor.org/projects/site/wiki/PluginAPI

I think the closest example to follow would be the Amanda client:

http://www.nexentastor.org/projects/amanda-client/repository

Or UPS integration plugin:

http://www.nexentastor.org/projects/ups/repository

The plugin can then be uploaded into the NexentaStor public repository
and will be available to everyone who wants to use the AFP sharing
protocol.



Re: [zfs-discuss] eon or nexentacore or opensolaris

2009-05-26 Thread Erast
Maybe what you are saying is true wrt. NexentaCore 2.0. But hey, think
about open-source principles and the development process. We do hope
that NexentaCore will become an official Debian distribution some day!
We are evolving, driven completely by the community here. Anyone can
participate, fix the bugs, and make it happen:

https://launchpad.net/distros/nexenta

As far as the commercial bits:

1. NexentaStor is still based off 1.x. Once the 2.x branch is more or
less polished, we will make a safe transition.

2. ON patches go through serious stress testing, not only by Nexenta but
also by the growing list of Nexenta partners, to ensure that the end
solution is absolutely stable and safe:


http://www.nexenta.com/partners

3. The development model of NexentaCore is indeed very much Debian-like.
However, NexentaStor is developed with different rules in mind - rules
of focused testing, conservative principles, and partner-wide openness.


4. Is Debian helping NexentaStor integrate stuff? Yes, absolutely!
There are lots of advantages here. Debian is NOT just package
management, as one might think - it is also a polished distribution
foundation. NexentaStor plugins, which are pretty much Debian packages,
are used to extend NexentaStor's capabilities (see the sketch after the
link below). Learn more:


http://www.nexenta.com/corp/index.php?option=com_jreviews&Itemid=112
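Since the plugins are ordinary Debian packages, here is a minimal sketch of how one might be assembled (the package name and contents are hypothetical, not an actual NexentaStor plugin layout):

# Lay out a trivial binary package and build it with dpkg-deb.
mkdir -p nexenta-myplugin/DEBIAN
cat > nexenta-myplugin/DEBIAN/control <<'EOF'
Package: nexenta-myplugin
Version: 0.1
Architecture: all
Maintainer: you@example.com
Description: skeleton of a NexentaStor plugin packaged as a .deb
EOF
mkdir -p nexenta-myplugin/usr/lib/nexenta-myplugin
dpkg-deb --build nexenta-myplugin nexenta-myplugin_0.1_all.deb

The resulting .deb installs and removes with the ordinary dpkg/apt-get machinery, which is exactly the advantage claimed above.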

C. Bergström wrote:

Anil Gulecha wrote:

On Sat, May 23, 2009 at 1:19 PM, Bogdan M. Maryniuk wrote:

On Sat, May 23, 2009 at 4:56 AM, Joe S wrote:

EON ZFS NAS
http://eonstorage.blogspot.com/

No idea.

NexentaCore Platform (v2.0 RC3)
http://www.nexenta.org/os/NexentaCore

Personally, I tried it a few times. For now, it is still too broken for
me and looks scary. The previous version is much more stable, but also
older. The newer v2.0 looks exactly like bleeding-edge Debian in the old
days: each time you run "apt-get upgrade" you have to use a shaman's
tambourine, dancing around the fireplace. I don't remember exactly, but
some packages are just broken and cannot find dependencies, the
installation crashes, pollutes your system, and cannot be restored
nicely, etc. However, once it is not that broken anymore, it should be a
great distribution with excellent package management, very convenient to
use.



Hi Bogdan,

Which particular packages were these? RC3 is quite stable, and all
server packages are solid. If you do face issues with a particular
one, we'd appreciate a bug report. All information on this is
helpful.
  
I've done some preliminary patch review of the core on-nexenta patches,
and I'd concur with putting Nexenta pretty low on the trusted list for
enterprise storage.  This is in addition to the packaging problems
you've pointed out.  As if the issues at hand were not enough, when I
sent an email to their dev list it was completely ignored.  Marketing
for Nexenta, as Anil points out, is strong, but like many other
distributions outside Sun there's still a lot of work to go.  I'm not
sure about EON's update delivery, but I believe it's just a minimal
repackaging of an OpenSolaris release.  This isn't the advocacy list, so
if you're interested in other alternatives feel free to email me off
list.


Cheers,


./Christopher

--
OSUNIX - Built from the best of OpenSolaris Technology
http://www.osunix.org



Re: [zfs-discuss] Add WORM to OpenSolaris

2009-04-25 Thread Erast

Something like this?

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=171&Itemid=112

Daniel P. Bath wrote:

Has anyone created an open-source plugin for WORM (Write Once, Read
Many) for OpenSolaris?
Any ideas on how hard it would be to create this?




Re: [zfs-discuss] Public ZFS API ?

2009-03-18 Thread Erast Benson
On Tue, 2009-03-17 at 14:53 -0400, Cherry Shu wrote:
> Are there any plans for an API that would allow ZFS commands, including
> snapshot/rollback, to be integrated with a customer's application?

Sounds like you are looking for an abstraction layer on top of an
integrated solution such as NexentaStor. Take a look at the API it
provides here:

http://www.nexenta.com/nexentastor-api

SA-API has bindings for C, C++, Perl, Python, and Ruby. The
documentation contains examples and samples demonstrating SA-API
applications in each of those languages. You can develop and run SA-API
applications on both Windows and Linux platforms.



Re: [zfs-discuss] AVS and ZFS demos - link broken?

2009-03-17 Thread Erast Benson
James,

there is also this demo:

http://www.nexenta.com/demos/auto-cdp.html

showing how the AVS/ZFS integration works in NexentaStor.

On Tue, 2009-03-17 at 10:25 -0600, James D. Rogers wrote:
> The links to the Part 1 and Part 2 demos on this page
> (http://www.opensolaris.org/os/project/avs/Demos/) appear to be
> broken.
> 
>  
> 
> http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/ 
> 
> http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/ 
> 
>  
> 
> James D. Rogers
> 
> NRA, GOA, DAD -- and I VOTE!
> 
> 2207 Meadowgreen Circle
> 
> Franktown, CO 80116
> 
>  
> 
> coyote_hunt...@msn.com
> 
> 303-688-0480
> 
> 303-885-7410 Cell (Working hours and when coyote huntin'!)
> 
>  
> 
> 



Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Erast Benson
Hi Stephen,

NexentaStor v1.1.5+ could be an alternative, I think. And it includes
the new, cool COMSTAR integration: the ZFS shareiscsi property actually
implements the COMSTAR iSCSI target "share" functionality, which is not
available in SXCE. http://www.nexenta.com/nexentastor-relnotes
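For reference, a minimal sketch of the property in use (the pool and volume names are hypothetical; on NexentaStor the resulting share is backed by COMSTAR):

# Create a 100GB zvol and export it as an iSCSI target in one step:
$ zfs create -V 100g tank/vol0
$ zfs set shareiscsi=on tank/vol0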

On Wed, 2009-03-04 at 07:07 +, Stephen Nelson-Smith wrote:
> Hi,
> 
> I recommended a ZFS-based archive solution to a client needing to have
> a network-based archive of 15TB of data in a remote datacentre.  I
> based this on an X2200 + J4400, Solaris 10 + rsync.
> 
> This was enthusiastically received, to the extent that the client is
> now requesting that their live system (15TB data on cheap SAN and
> Linux LVM) be replaced with a ZFS-based system.
> 
> The catch is that they're not ready to move their production systems
> off Linux - so web, db and app layer will all still be on RHEL 5.
> 
> As I see it, if they want to benefit from ZFS at the storage layer,
> the obvious solution would be a NAS system, such as a 7210, or
> something buillt from a JBOD and a head node that does something
> similar.  The 7210 is out of budget - and I'm not quite sure how it
> presents its storage - is it NFS/CIFS?  If so, presumably it would be
> relatively easy to build something equivalent, but without the
> (awesome) interface.
> 
> The interesting alternative is to set up Comstar on SXCE, create
> zpools and volumes, and make these available either over a fibre
> infrastructure, or iSCSI.  I'm quite excited by this as a solution,
> but I'm not sure if it's really production ready.
> 
> What other options are there, and what advice/experience can you share?
> 
> Thanks,
> 
> S.



Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Erast Benson
pNFS is NFS-centric, of course, and it is not yet stable, is it? BTW,
what is the ETA for the pNFS putback?

On Thu, 2008-10-16 at 12:20 -0700, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
> > It's interesting how the speed and optimisation of these maintenance
> > activities limit pool size.  It's not just full scrubs.  If the filesystem 
> > is
> > subject to corruption, you need a backup.  If the filesystem takes two 
> > months
> > to back up / restore, then you need really solid incremental backup/restore
> > features, and the backup needs to be a cold spare, not just a
> > backup---restoring means switching the roles of the primary and backup
> > system, not actually moving data.   
> 
> I'll chime in here with feeling uncomfortable with such a huge ZFS pool,
> and also with my discomfort of the ZFS-over-ISCSI-on-ZFS approach.  There
> just seem to be too many moving parts depending on each other, any one of
> which can make the entire pool unavailable.
> 
> For the stated usage of the original poster, I think I would aim toward
> turning each of the Thumpers into an NFS server, configure the head-node
> as a pNFS/NFSv4.1 metadata server, and let all the clients speak parallel-NFS
> to the "cluster" of file servers.  You'll end up with a huge logical pool,
> but a Thumper outage should result only in loss of access to the data on
> that particular system.  The work of scrub/resilver/replication can be
> divided among the servers rather than all living on a single head node.
> 
> Regards,
> 
> Marion
> 
> 



Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-14 Thread Erast Benson
James, all serious ZFS bug fixes have been back-ported to b85, as well
as the marvell and other SATA drivers. Not everything is possible to
back-port, of course, but I would say all the critical things are there.
This includes the ZFS ARC optimization patches, for example.

On Tue, 2008-10-14 at 22:33 +1000, James C. McPherson wrote:
> Gray Carper wrote:
> > Hey there, James!
> > 
> > We're actually running NexentaStor v1.0.8, which is based on b85. We 
> > haven't done any tuning ourselves, but I suppose it is possible that 
> > Nexenta did. If there's something specific you'd like me to look for, 
> > I'd be happy to.
> 
> Hi Gray,
> So build 85 - that's getting a bit long in the tooth now.
> 
> I know there have been *lots* of ZFS, Marvell SATA and iSCSI
> fixes and enhancements since then which went into OpenSolaris.
> I know they're in Solaris Express and the updated binary distro
> form of os2008.05 - I just don't know whether Erast and the
> Nexenta clan have included them in what they are releasing as 1.0.8.
> 
> Erast - could you chime in here please? Unfortunately I've got no
> idea about Nexenta.
> 
> 
> James C. McPherson
> --
> Senior Kernel Software Engineer, Solaris
> Sun Microsystems
> http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
> 



Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 19:42 -0400, Maurice Volaski wrote:
> >On Wed, 2008-09-10 at 19:10 -0400, Maurice Volaski wrote:
> >>  >On Wed, 2008-09-10 at 18:37 -0400, Maurice Volaski wrote:
> >>  >>  >On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
> >>  >>  >>  >On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
> >>  >>  >>  >>  A disadvantage, however, is that Sun StorageTek 
> >>Availability Suite
> >>  >>  >>  >>  (AVS), the DRBD equivalent in OpenSolaris, is much less
> >>  >>flexible than
> >>  >>  >>  >>  DRBD. For example, AVS is intended to replicate in one 
> >>direction,
> >>  >>  >>  >>  from a primary to a secondary, whereas DRBD can switch 
> >>on the fly.
> >>  >>  >>  >>  See
> >>  >>  >>  >> 
> >>http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
> >>  >>  >>  >>  for details on this.
> >>  >>  >>  >
> >>  >>  >>  >I would be curious to see production environments 
> >>"switching" direction
> >>  >>  >>  >on the fly at that low level... Usually some top-level 
> >>brain does that
> >>  >>  >>  >in context of HA fail-over and so on.
> >>  >>  >>
> >>  >>  >>  By switching on the fly, I mean if the primary services are taken
> >>  >>  >>  down and then brought up on the secondary, the direction of
> >>  >>  >>  synchronization gets reversed. That's not possible with 
> >>AVS because...
> >>  >>  >>
> >>  >>  >>  >well, AVS actually does reverse synchronization and does 
> >>it very good.
> >>  >>  >>
> >>  >>  >>  It's a one-time operation that "re-reverses" once it completes.
> >>  >>  >
> >>  >>  >When primary is repaired you want to have it on-line and retain the
> >>  >>  >changes made on the secondary.
> >>  >>
> >>  >>  Not necessarily. Even when the primary is ready to go back into
> >>  >>  service, I may not want to revert to it for one reason or another.
> >>  >>  That means I am without a live mirror because AVS' realtime mirroring
> >>  >>  is only one direction, primary to secondary.
> >>  >
> >>  >This why I tried to state that this is not realistic environment for
> >>  >non-shared storage HA deployments.
> >>
> >>  What's not realistic? DRBD's highly flexible ability to switch roles
> >>  on the fly is a huge advantage over AVS. But this is not to say AVS
> >>  is not realistic. It's just a limitation.
> >>
> >>  >DRBD trying to emulate shared-storage
> >>  >behavior at a wrong level where in fact usage of FC/iSCSI-connected
> >>  >storage needs to be considered.
> >>
> >>  This makes no sense to me. We're talking about mirroring the storage
> >>  of two physical and independent systems. How did the concept of
> >>  "shared storage" get in here?
> >
> >This is really outside of ZFS discussion now... But your point taken. If
> >you want mirror-like behavior of your 2-node cluster, you'll get some
> >benefits of DRBD but my point is that such solution trying to solve two
> >problems at the same time: replication and availability, which is in my
> >opinion plain wrong.
> 
> Uh, no, DRBD addresses only replication. Linux-HA (aka Heartbeat) 
> address availability. They can be an integrated solution and are to 
> some degree intended that way, so I have no idea where your opinion 
> is coming from.

Because, in my opinion, DRBD takes on some responsibility of the
management layer, if you will. The classic, predominant replication
schema in HA clusters is primary-backup (or master-slave), and the
backup is by definition not necessarily a system identical to the
primary. Having said that, it is noble for DRBD to implement role
switching, and not a bad idea for many small deployments.

> For replication, OpenSolaris is largely limited to using AVS, whose 
> functionality is limited, at least relative to DRBD. But there seems 
> to be a few options to implement availability, which should include 
> Linux-HA itself as it should run on OpenSolaris!

Everything is implementable, and I believe the AVS designers thought
about dynamic switching of roles, but they ended up with what we have
today; they likely discarded the idea.

AVS does not switch roles, and it forces IT admins to use it as a
primary-backup data protection service only.

> But relevant to the poster's initial question, ZFS is so far and away 
> more advanced than any Linux filesystem can even dream about that it 
> handily nullifies any disadvantage in having to run AVS.

Right.



Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 19:10 -0400, Maurice Volaski wrote:
> >On Wed, 2008-09-10 at 18:37 -0400, Maurice Volaski wrote:
> >>  >On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
> >>  >>  >On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
> >>  >>  >>  A disadvantage, however, is that Sun StorageTek Availability Suite
> >>  >>  >>  (AVS), the DRBD equivalent in OpenSolaris, is much less 
> >>flexible than
> >>  >>  >>  DRBD. For example, AVS is intended to replicate in one direction,
> >>  >>  >>  from a primary to a secondary, whereas DRBD can switch on the fly.
> >>  >>  >>  See
> >>  >>  >>  
> >> http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
> >>  >>  >>  for details on this.
> >>  >>  >
> >>  >>  >I would be curious to see production environments "switching" 
> >> direction
> >>  >>  >on the fly at that low level... Usually some top-level brain does 
> >> that
> >>  >>  >in context of HA fail-over and so on.
> >>  >>
> >>  >>  By switching on the fly, I mean if the primary services are taken
> >>  >>  down and then brought up on the secondary, the direction of
> >>  >>  synchronization gets reversed. That's not possible with AVS because...
> >>  >>
> >>  >>  >well, AVS actually does reverse synchronization and does it very 
> >> good.
> >>  >>
> >>  >>  It's a one-time operation that "re-reverses" once it completes.
> >>  >
> >>  >When primary is repaired you want to have it on-line and retain the
> >>  >changes made on the secondary.
> >>
> >>  Not necessarily. Even when the primary is ready to go back into
> >>  service, I may not want to revert to it for one reason or another.
> >>  That means I am without a live mirror because AVS' realtime mirroring
> >>  is only one direction, primary to secondary.
> >
> >This why I tried to state that this is not realistic environment for
> >non-shared storage HA deployments.
> 
> What's not realistic? DRBD's highly flexible ability to switch roles 
> on the fly is a huge advantage over AVS. But this is not to say AVS 
> is not realistic. It's just a limitation.
> 
> >DRBD trying to emulate shared-storage
> >behavior at a wrong level where in fact usage of FC/iSCSI-connected
> >storage needs to be considered.
> 
> This makes no sense to me. We're talking about mirroring the storage 
> of two physical and independent systems. How did the concept of 
> "shared storage" get in here?

This is really outside the ZFS discussion now... but your point is
taken. If you want mirror-like behavior for your 2-node cluster, you'll
get some benefits from DRBD, but my point is that such a solution tries
to solve two problems at the same time - replication and availability -
which is, in my opinion, plain wrong.



Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 18:37 -0400, Maurice Volaski wrote:
> >On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
> >>  >On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
> >>  >>  A disadvantage, however, is that Sun StorageTek Availability Suite
> >>  >>  (AVS), the DRBD equivalent in OpenSolaris, is much less flexible than
> >>  >>  DRBD. For example, AVS is intended to replicate in one direction,
> >>  >>  from a primary to a secondary, whereas DRBD can switch on the fly.
> >>  >>  See
> >>  >>  http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
> >>  >>  for details on this.
> >>  >
> >>  >I would be curious to see production environments "switching" direction
> >>  >on the fly at that low level... Usually some top-level brain does that
> >>  >in context of HA fail-over and so on.
> >>
> >>  By switching on the fly, I mean if the primary services are taken
> >>  down and then brought up on the secondary, the direction of
> >>  synchronization gets reversed. That's not possible with AVS because...
> >>
> >>  >well, AVS actually does reverse synchronization and does it very good.
> >>
> >>  It's a one-time operation that "re-reverses" once it completes.
> >
> >When primary is repaired you want to have it on-line and retain the
> >changes made on the secondary.
> 
> Not necessarily. Even when the primary is ready to go back into 
> service, I may not want to revert to it for one reason or another. 
> That means I am without a live mirror because AVS' realtime mirroring 
> is only one direction, primary to secondary.

This is why I tried to state that this is not a realistic environment
for non-shared-storage HA deployments. DRBD is trying to emulate
shared-storage behavior at the wrong level, where in fact the use of
FC/iSCSI-connected storage needs to be considered.



Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 15:00 -0400, Maurice Volaski wrote:
> >On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
> >>  A disadvantage, however, is that Sun StorageTek Availability Suite
> >>  (AVS), the DRBD equivalent in OpenSolaris, is much less flexible than
> >>  DRBD. For example, AVS is intended to replicate in one direction,
> >>  from a primary to a secondary, whereas DRBD can switch on the fly.
> >>  See
> >>  http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30
> >>  for details on this.
> >
> >I would be curious to see production environments "switching" direction
> >on the fly at that low level... Usually some top-level brain does that
> >in context of HA fail-over and so on.
> 
> By switching on the fly, I mean if the primary services are taken 
> down and then brought up on the secondary, the direction of 
> synchronization gets reversed. That's not possible with AVS because...
> 
> >well, AVS actually does reverse synchronization and does it very good.
> 
> It's a one-time operation that "re-reverses" once it completes.

When the primary is repaired, you want to have it online and retain the
changes made on the secondary. Your secondary did its job and switched
back to its secondary role. This HA fail-back cycle can be repeated as
many times as you need using the reverse sync command (see the sketch
below).
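A rough sketch of that cycle with the AVS CLI (the flags are quoted from memory of sndradm(1M); verify against the man page before relying on them):

# Reverse update sync: copy the changes made on the secondary back
# to the repaired primary:
$ sndradm -n -u -r
# Then resume normal replication, primary -> secondary:
$ sndradm -n -u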



Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
> A disadvantage, however, is that Sun StorageTek Availability Suite 
> (AVS), the DRBD equivalent in OpenSolaris, is much less flexible than 
> DRBD. For example, AVS is intended to replicate in one direction, 
> from a primary to a secondary, whereas DRBD can switch on the fly. 
> See 
> http://www.opensolaris.org/jive/thread.jspa?threadID=68881&tstart=30 
> for details on this.

I would be curious to see production environments "switching" direction
on the fly at that low level... Usually some top-level brain does that
in the context of HA fail-over and so on.

Well, AVS actually does reverse synchronization, and it does it very
well.



Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-10 Thread Erast Benson
Well, obviously it's a Linux vs. OpenSolaris question. The most serious
advantage of OpenSolaris is ZFS and its enterprise-level storage stack.
Linux is just not there yet.

On Wed, 2008-09-10 at 14:51 +0200, Axel Schmalowsky wrote:
> Hello list,
> 
> I hope that someone can help me on this topic.
> 
> I'd like to know where the *real* advantages of Nexenta/ZFS (i.e. 
> ZFS/StorageTek) over DRBD/Heartbeat are.
> I'm pretty new to this topic and hence do not have enough experience to judge 
> their respective advantages/disadvantages reasonably.
> 
> Any suggestion would be appreciated.
> 
> 



Re: [zfs-discuss] ?: any effort for snapshot management

2008-09-05 Thread Erast Benson
Steffen,

The most complete and serious ZFS snapshot management - integrated ZFS
send/recv and RSYNC replication with a CLI, integrated AVS, a GUI, and a
management server that provides a rich API for C/C++/Perl/Python/Ruby
integrators - is available here:

http://www.nexenta.com/nexentastor-overview

It's ZFS+ with a lot of reliability fixes: an enterprise-quality,
production-ready solution.

A demo of advanced CLI usage is here:

http://www.nexenta.com/demos/automated-snapshots.html 
http://www.nexenta.com/demos/auto-tier-basic.html

As a side note, I think that the disintegrated general-purpose scripting
available on the Internet simply cannot provide production quality and
ease of use.
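For comparison, a minimal sketch of the plain-ZFS mechanism such tools build on (the pool, dataset, and host names are hypothetical):

# Initial full replication of a snapshot to a remote box:
$ zfs snapshot tank/data@mon
$ zfs send tank/data@mon | ssh backuphost zfs recv -F backup/data
# Later, send only the delta between two snapshots:
$ zfs snapshot tank/data@tue
$ zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs recv backup/data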

On Fri, 2008-09-05 at 13:14 -0400, Steffen Weiberle wrote:
> I have seen Tim Foster's auto-snapshot and it looks interesting.
> 
> Is there a bug id or effort to deliver snapshot policy and space 
> management framework? Not looking for a GUI, although a CLI based UI 
> might be helpful. Customer needs something that allows the use of 
> snapshots on 100s of systems, and minimizes the administration to handle 
> disks filling up.
> 
> I imagine a component is a time- or condition-based auto-delete of older
> snapshot(s).
> 
> Thanks
> Steffen



[zfs-discuss] NexentaStor API & Windows SDK published

2008-08-04 Thread Erast Benson
Hey folks,

I just saw more cool news this morning - Nexenta Systems released
documentation for the remote API and a Windows SDK with demos for
accessing NexentaStor. The news itself:

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=154&Itemid=56

ZFS and the rest of the appliance functionality are abstracted via the
Nexenta Management Server (NMS) and available remotely via an API with
the following language bindings: C, C++, Perl, Python, and Ruby:

http://www.nexenta.com/nexentastor-api

Another cool feature worth mentioning is the plugin architecture.
There is no API for plugins available yet, but there are a number of
CDDL-licensed plugins available as examples here:

http://www.nexenta.com/nexentastor-plugins

Nice!



[zfs-discuss] Pogo Linux ships NexentaStor pre-installed boxes

2008-08-01 Thread Erast Benson
Hi folks,

I wanted to share some exciting news with you. Pogo Linux is shipping
boxes with NexentaStor pre-installed, like this 16TB-24TB one:

http://www.pogolinux.com/quotes/editsys?sys_id=3989

And here is the announcement:

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=129&Itemid=56

Pogo says: "Managed Storage – NetApp features without the price"...

Go OpenSolaris, Go!



Re: [zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

2008-02-19 Thread Erast Benson
http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect

On Tue, 2008-02-19 at 14:44 -0600, Bob Friesenhahn wrote:
> It would be useful if people here who have used iSCSI on top of ZFS 
> could share their performance experiences.  It is very easy to waste a 
> lot of time trying to realize unrealistic expectations.  Hopefully 
> iSCSI on top of ZFS normally manages to transfer much more than 
> 5MB/second!
> 
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
> 



Re: [zfs-discuss] ZFS replication strategies

2008-02-01 Thread Erast Benson
Take a look at NexentaStor - it's a complete 2nd-tier solution:

http://www.nexenta.com/products

AVS is nicely integrated via a management RPC interface, which connects
multiple NexentaStor nodes together and greatly simplifies AVS usage
with ZFS... See the demo here:

http://www.nexenta.com/demos/auto-cdp.html

On Fri, 2008-02-01 at 10:15 -0800, Vincent Fox wrote:
> Does anyone have any particularly creative ZFS replication strategies they 
> could share?
> 
> I have 5 high-performance Cyrus mail-servers, with about a Terabyte of 
> storage each of which only 200-300 gigs is used though even including 14 days 
> of snapshot space.
> 
> I am thinking about setting up a single 3511 with 4 terabytes of storage at a 
> remote site as a backup device for the content.  Struggling with how to 
> organize the idea of wedging 5 servers into the one array though.
> 
> The simplest way that occurs to me is one big RAID-5 storage pool with all disks. Then
> slice out 5 LUNs, each as its own ZFS pool. Then use zfs send & receive to
> replicate the pools.
> 
> Ideally I'd love it if ZFS directly supported the idea of rolling snapshots 
> out into slower secondary storage disks on the SAN, but in the meanwhile 
> looks like we have to roll our own solutions.



Re: [zfs-discuss] Issue fixing ZFS corruption

2008-01-23 Thread Erast Benson
Well, we had some problems with the si3124 driver, but with the driver
binary posted in this forum the problem seems to have been fixed. Later
we saw the same fix go into b72.

On Thu, 2008-01-24 at 05:11 +0300, Jonathan Stewart wrote:
> Jeff Bonwick wrote:
> > The Silicon Image 3114 controller is known to corrupt data.
> > Google for "silicon image 3114 corruption" to get a flavor.
> > I'd suggest getting your data onto different h/w, quickly.
> 
> I'll second this; the 3114 is a piece of junk if you value your data. I
> bought a 4-port LSI SAS card (yes, a bit pricey) and have had 0 problems
> since, and hot swap actually works. I never tried it with the 3114 I had;
> I'd just never seen it actually working before, so I was quite pleasantly
> surprised.
> 
> Jonathan



Re: [zfs-discuss] Issue fixing ZFS corruption

2008-01-23 Thread Erast Benson
I believe the issue has been fixed in snv_72+, no?

On Wed, 2008-01-23 at 16:41 -0800, Jeff Bonwick wrote:
> The Silicon Image 3114 controller is known to corrupt data.
> Google for "silicon image 3114 corruption" to get a flavor.
> I'd suggest getting your data onto different h/w, quickly.
> 
> Jeff
> 
> On Wed, Jan 23, 2008 at 12:34:56PM -0800, Bertrand Sirodot wrote:
> > Hi,
> > 
> > I have been experiencing corruption on one of my ZFS pools over the last
> > couple of days. I have tried running zpool scrub on the pool, but every time
> > it comes back with new files being corrupted. I would have thought that 
> > zpool scrub would have identified the corrupted files once and for all and 
> > would be fine afterwards. The feeling I have right now is that zpool scrub 
> > is actually spreading the corruption and won't stop until I have no more 
> > files on the file systems. 
> > 
> > I am running 5.11 snv_60 on an Asus M2A VM motherboard. I am using both the 
> > SATA controller on the motherboard and a Si3114 based controller. I have 
> > had the Si3114 controller for a couple of years now with no issue, that I 
> > know of.
> > 
> > Any idea? I was trying to salvage the situation, but it looks like I am 
> > going to have to destroy the pool and recreate it.
> > 
> > Thanks a lot in advance,
> > Bertrand.



[zfs-discuss] Integrated transactional upgrades with ZFS

2008-01-17 Thread Erast Benson
Hi guys,

A new article is available explaining in detail how enterprise-like
upgrades are integrated with the Nexenta Core Platform starting from
RC2, using ZFS capabilities and Debian APT (a minimal sketch of the idea
follows the link):

http://www.nexenta.org/os/TransactionalZFSUpgrades
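The core idea, stripped down to plain ZFS plus APT (the dataset name is hypothetical, and apt-clone automates this, including the boot-menu handling):

# Checkpoint the root filesystem, then upgrade:
$ zfs snapshot syspool/rootfs@pre-upgrade
$ apt-get update && apt-get upgrade
# If the upgrade misbehaves, roll straight back to the checkpoint:
$ zfs rollback syspool/rootfs@pre-upgrade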

What is NexentaCP?

NexentaCP is a minimal (core) foundation that can be used to quickly
build servers, desktops, and custom distributions tailored for
specialized applications such as NexentaStor. Like the NexentaOS desktop
distribution, NexentaCP combines a reliable, state-of-the-art kernel
with the GNU userland and the ability to integrate open source
components in no time. However, unlike the NexentaOS desktop
distribution, NexentaCP does not aim to provide a complete desktop. The
overriding objective for NexentaCP is a stable foundation.

Enjoy!



Re: [zfs-discuss] Nexenta/Debian APT integrated with ZFS now...

2007-12-19 Thread Erast Benson
Thank you!

We are working on it. A new website is coming, as well as the next
release of NCP. Meanwhile, the old RC1 can be downloaded from:

http://archive.nexenta.org/releases

On Wed, 2007-12-19 at 18:01 -0800, MC wrote:
> > 2) Unstable APT integrated with ON build 79, give it a try!
> 
> Excellent progress!!  But your website is out of date and I cannot find a 
> NexentaCP link on the download page.  Only the old NexentaOS link.  Also you 
> should update the news page so it looks like the project is active :)



[zfs-discuss] Nexenta/Debian APT integrated with ZFS now...

2007-12-19 Thread Erast Benson
Hi All,

This is the road to NCP 1.0...

Our motto:

"Ubuntu makes the best Debian desktop platform - Nexenta makes the best
Debian server/storage platform."

Some of the latest Nexenta-related news:

1) The official Nexenta Core Platform (NCP) repository is now
http://apt.nexenta.org

2) The unstable APT is integrated with ON build 79 - give it a try!

3) apt-get is now fully integrated with ZFS cloning. A new management
tool is provided: apt-clone. Never lose your upgrades again!

4) I'm seeking developers who love Debian and will help us join the
Debian community. We've got a general agreement with Debian leaders, but
some work needs to be done; let's coordinate on the official Nexenta IRC
channel: #nexenta




Re: [zfs-discuss] NexentaCP Beta1-test2 (ZFS/Boot - manual partitioning support)

2007-06-28 Thread Erast Benson
Just use the "pkgadd -d" wrapper. It will auto-magically convert an SVR4
package to .deb(s) and install them on the fly. You can also use "pkgrm"
to remove them. A pkginfo wrapper is also available.

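For example (the package name is hypothetical):

# Convert and install an SVR4 package on the fly:
$ sudo pkgadd -d SUNWfoo.pkg
# Query and remove it through the matching wrappers:
$ pkginfo | grep SUNWfoo
$ sudo pkgrm SUNWfoo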
On Thu, 2007-06-28 at 16:38 +0200, Selim Daoud wrote:
> Superb job... the Synaptic package manager is really impressive.
> Is there a way to transform a Sun package into a Synaptic package?
> 
> selim
> 
> On 6/22/07, Al Hopper <[EMAIL PROTECTED]> wrote:
> > On Fri, 22 Jun 2007, Erast Benson wrote:
> >
> > > New unstable ISO of NexentaCP (Core Platform) available.
> > >
> > > http://www.gnusolaris.org/unstable-iso/ncp_beta1-test2-b67_i386.iso
> >
> > Also available at:
> >
> > http://www.genunix.org/distributions/gnusolaris/index.html
> >
> > > Changes:
> > >
> > > * ON B67 based
> > > * ZFS/Boot manual partitioning support implemented (in addition to
> > > auto-partitioning). Both, Wizard and FDisk types fully supported.
> > > * gcc/g++ now officially included on installation media
> > > * APT repository fixed
> > > * first official meta-package: nexenta-gnome
> > >
> > > After installation, those who needs GNOME environment, just type:
> > >
> > > $ sudo apt-get install nexenta-gnome
> > >
> > > Known bugs:
> > >
> > > * after fresh install APT caches needs to be re-created:
> > >
> > > $ sudo rm /var/lib/apt/*
> > > $ sudo apt-get update
> > > --
> > > Erast
> >
> > Regards,
> >
> > Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
> > Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
> > OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
> > http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
> 
-- 
Erast



[zfs-discuss] NexentaCP Beta1-test2 (ZFS/Boot - manual partitioning support)

2007-06-22 Thread Erast Benson
A new unstable ISO of NexentaCP (Core Platform) is available.

http://www.gnusolaris.org/unstable-iso/ncp_beta1-test2-b67_i386.iso

Changes:

* Based on ON B67
* ZFS/Boot manual partitioning support implemented (in addition to
auto-partitioning). Both Wizard and FDisk types are fully supported.
* gcc/g++ now officially included on the installation media
* APT repository fixed
* first official meta-package: nexenta-gnome

After installation, those who need the GNOME environment can just type:

$ sudo apt-get install nexenta-gnome

Known bugs:

* after a fresh install, the APT caches need to be re-created:

$ sudo rm /var/lib/apt/*
$ sudo apt-get update

-- 
Erast



[zfs-discuss] Re: [osol-discuss] Re: Announcing NexentaCP(b65) with ZFS/Boot integrated installer

2007-06-07 Thread Erast Benson
On Thu, 2007-06-07 at 16:26 -0400, Francois Saint-Jacques wrote:
> On Wed, Jun 06, 2007 at 11:51:08PM -0700, Erast Benson wrote:
> > More details on NexentaCP will be available soon...
> 
> Is it based on Alpha7?

Alpha7 is the desktop-oriented ISO; however, they share the same main
APT repository, i.e., Dapper/LTS.

So far the core team has agreed on the following major decisions:

1) NexentaCP will follow Ubuntu/LTS releases only;
2) NexentaCP's main set of packages shipped on the ISO will be greatly
reduced and will contain only a highly tested "base" minimum;
3) NexentaCP will offer network-type installations using the main
(LTS-based) or third-party repositories via the installer or
after-install wizards.

FYI, Martin mentioned some of the "main" goals of this move during the
LinuxTag conference: http://martinman.net/

-- 
Erast



[zfs-discuss] Announcing NexentaCP(b65) with ZFS/Boot integrated installer

2007-06-06 Thread Erast Benson
Announcing a new direction of open-source NexentaOS development:
NexentaCP (Nexenta Core Platform).

NexentaCP is a Dapper/LTS-based core operating system platform
distributed as a single-CD ISO; it integrates Installer/ON/NWS/Debian
and provides the basis for network-type installations via main or
third-party APTs (NEW).

The first "unstable" b65-based ISO with a ZFS/Boot-capable installer is
available as usual at:
http://www.gnusolaris.org/unstable-iso/ncp_beta1-test1-b65_i386.iso

Please give it a try and start building your own APT repositories and
communities today!

Note: this version of the installer supports ZFS/Boot installations on a
single disk or a 2+-disk mirror configuration. For now, only the "Auto"
partitioning mode can be used for ZFS root partition creation.

More details on NexentaCP will be available soon...

-- 
Erast



Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Erast Benson
e, with VxFS being set to
> 100%, then UFS runs at 2.5% of the speed and ZFS at 13.8% of the speed;
> for OLTP, UFS is 4.8% and ZFS 26.7%; however, in DWR, where there are
> 100% reads and no writing, performance is similar, with UFS at 101.2%
> and ZFS at 100.2% of the speed of VxFS.
> 
>  
> 
> [inline chart of the benchmark results omitted]
> 
>  
> 
>  
> 
> Given these performance problems, quite obviously VxFS quite
> rightly deserves to be the file system of choice, even with a cost
> premium. If anyone has any insight into why I am seeing, consistently,
> these types of very disappointing numbers, I would very much appreciate
> your comments. The numbers are very disturbing, as they indicate
> that write performance has issues. Please take into account that this
> benchmark is performed on non-tuned file systems, specifically at the
> customer's request, as this is likely the way they would be deployed in
> their production environments.
> 
>  
> 
> Maybe I should be configuring my workload differently for VDBench – if
> so, does anyone have any ideas on this?
> 
>  
> 
> Unfortunately, I have weeks worth of test data to back up these
> numbers and would enjoy the opportunity to discuss these results in
> detail to discover if my methodology has problems or if it is the file
> system.
> 
>  
> 
> Thanks for your time.
> 
>  
> 
> [EMAIL PROTECTED]
> 
> 416.801.6779
> 
>  
> 
> You can always tell who the Newfoundlanders are in Heaven. They're
> the ones who want to go home
> 
>  
> 
> 
-- 
Erast



Re: [zfs-discuss] Data Management API

2007-03-20 Thread Erast Benson
On Tue, 2007-03-20 at 09:29 -0700, Erast Benson wrote:
> On Tue, 2007-03-20 at 16:22 +, Darren J Moffat wrote:
> > Robert Milkowski wrote:
> > > Hello devid,
> > > 
> > > Tuesday, March 20, 2007, 3:58:27 PM, you wrote:
> > > 
> > > d> Does ZFS have a Data Management API to monitor events on files and
> > > d> to store arbitrary attribute information with a file? Any answer on
> > > d> this would be really appreciated.
> > > 
> > > IIRC there's a file event mechanism being developed - something more
> > > general, which should work with other file systems too. I have no idea
> > > of its status or whether someone has even started coding it.
> > > 
> > > Your second question - no, you can't.
> > 
> > Yes you can, and it has been there since even before ZFS existed - see
> > fsattr(5). It isn't ZFS specific but a generic attribute extension to
> > the filesystems, currently supported by ufs, nfs, zfs, tmpfs.
> 
> apparently fsattr is not part of OpenSolaris or at least I can't find
> it..

Oh, this is an API...
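For the record, a minimal sketch of the generic interface fsattr(5) describes, driven from the shell with runat(1) (the file and attribute names are hypothetical):

# Attach an arbitrary named attribute to a regular file, then read it back:
$ echo "owner=erast" > /tmp/meta
$ runat report.txt cp /tmp/meta tags
$ runat report.txt ls -l
$ runat report.txt cat tags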

-- 
Erast



Re: [zfs-discuss] Data Management API

2007-03-20 Thread Erast Benson
On Tue, 2007-03-20 at 16:22 +, Darren J Moffat wrote:
> Robert Milkowski wrote:
> > Hello devid,
> > 
> > Tuesday, March 20, 2007, 3:58:27 PM, you wrote:
> > 
> > d> Does ZFS have a Data Management API to monitor events on files and
> > d> to store arbitrary attribute information with a file? Any answer on
> > d> this would be really appreciated.
> > 
> > IIRC there's a file event mechanism being developed - something more
> > general, which should work with other file systems too. I have no idea
> > of its status or whether someone has even started coding it.
> > 
> > Your second question - no, you can't.
> 
> Yes you can, and it has been there since even before ZFS existed - see
> fsattr(5). It isn't ZFS specific but a generic attribute extension to
> the filesystems, currently supported by ufs, nfs, zfs, tmpfs.

Apparently fsattr is not part of OpenSolaris, or at least I can't find
it...

-- 
Erast



Re: [zfs-discuss] ZFS and Solaris as a VMWare guest

2007-03-12 Thread Erast Benson
On Mon, 2007-03-12 at 20:53 -0600, James Dickens wrote:
> 
> 
> On 3/12/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: 
> What issues, if any, are likely to surface with using Solaris
> inside vmware as a guest os, if I choose to use ZFS?
> 
> works great in vmware server, IO rates suck. 
> 
> 
> I'm assuming that ZFS's ability to maintain data integrity 
> will prevail and protect me from any problems that the
> addition of vmware might introduce.
> 
> No problems so far; I created two virtual disks and a concat. It's just
> a toy/test bed for Nexenta. The only problem I have with Nexenta is that
> 64-bit mode crashes on boot. b55 may have fixed it, who knows.

It's the ae driver. Murayama fixed it recently in the unstable branch.
If you don't want to upgrade to the latest, you could change your VMware
settings to use the e1000g driver instead. Or just upgrade myamanet-ae
from unstable, like:

$ sudo apt-get install myamanet-ae

> 
> Are there likely to be any issues with disk drive IO
> performance?
> 
> I'm getting 11MB/s on bonnie++; the disks are backed by SATA drives
> on an Ultra 20 2.6GHz, and the guest has 512MB allocated.
> 
> Not exactly a speed demon - it would get about 130MB/s on the raw
> hardware.
> 
> 
> James Dickens
> uadmin.blogspot.com
> 
> 
>  
> 
> The concern here is with comments on how ZFS likes to
> "own spindles" so that it can properly schedule I/O and
> maximise performance.
> 
> Any other gotchas, such as the extra vmware layer doing
> buffering that ZFS isn't aware of, etc? 
> 
> If there are problems, are they likely to be any
> better/different
> when using ZFS and Solaris as a Xen domU?
> 
> Darren
> 
-- 
Erast



Re: [zfs-discuss] Re: ZFS as root FS

2006-11-28 Thread Erast Benson
On Tue, 2006-11-28 at 09:46 +, Darren J Moffat wrote:
> Lori Alt wrote:
> > Latest plan is to release zfs boot with U5.  It definitely isn't going 
> > to make U4.
> > We have new prototype bits, but they haven't been putback yet.  There are
> > a  number of design decisions that have hinged on syncing up our strategy
> > with other projects, or allowing some other projects to "gel".  Main
> > dependencies:  Xen, some sparc boot changes, and zones upgrade.  It's
> > coming together and I hope we can have some new bits putback shortly
> > after the first of the year.
> 
> Any chance of you setting up a repository on OpenSolaris.org with the 
> prototype bits in source so that people can build them and test them out ?
> 
> For some of us the most interesting part of this is the bits in ON not 
> the installer bits - particularly those people interested in building 
> their own distros of OpenSolaris.

+1

SchiliX, BeleniX, Nexenta, and Martux have their own installers and
boot environments anyway, so it would be *really* nice if you guys could
open up the ZFS root ON bits.

-- 
Erast



Re: [zfs-discuss] Re: Re: ZFS for Linux 2.6

2006-11-07 Thread Erast Benson
On Tue, 2006-11-07 at 10:30 -0800, Akhilesh Mritunjai wrote:
> > > Yuen L. Lee wrote:
> > opensolaris could be a nice NAS filer. I posted
> > my question on "How to build a NAS box" asking for
> > instructions on how to build a Solaris NAS box.
> > It looks like everyone is busy. I haven't got any
> > response yet. By any chance, do you have any
> 
> Hi Yuen
> 
> May I suggest that a better question would have been "How to build a
> minimal Nevada distribution?". I'm sure it would have gotten more
> responses, as it is both a more general and a more relevant question.
> 
> Apart from that unasked-for advice, if my memory serves right the
> BeleniX folks (Moinak and gang) were discussing a similar thing in a
> thread some time back... chasing them down might be a good idea ;-)
> 
> I found some articles on the net on how to build a minimal image of
> Solaris with networking. Packages relating to storage (ZFS, iSCSI, etc.)
> can be added to it later. The minimal system with the required
> components is admittedly heavy - about 200MB... but that shouldn't be an
> issue for a *NAS* box. I googled "minimal Solaris configuration" and
> found several articles.

An alternative way would be to simply use the NexentaOS InstallCD and
select the "Minimal Profile" during installation.

-- 
Erast
