Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread Sriram Narayanan
On Sat, Feb 16, 2013 at 10:17 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
 wrote:
> In the absence of any official response, I guess we just have to assume this
> list will be shut down, right?
>
> So I guess we just have to move to the illumos mailing list, as Deirdre
> suggests?
>

Or, given that this is a weekend, we could assume that someone within
Oracle will see this mail only on Monday morning Pacific Time, send out
some mails internally, and then be able to respond in public only by
Wednesday evening Pacific Time at best.

-- Sriram

>
>
>
>
>
>
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris)
> Sent: Friday, February 15, 2013 11:00 AM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL
>
>
>
> So, I hear, in a couple weeks' time, opensolaris.org is shutting down.  What
> does that mean for this mailing list?  Should we all be moving over to
> something at illumos or something?
>
>
>
> I'm going to encourage somebody in an official capacity at opensolaris to
> respond...
>
> I'm going to discourage unofficial responses, like, illumos enthusiasts etc
> simply trying to get people to jump this list.
>
>
>
> Thanks for any info ...
>
>



-- 
Belenix: www.belenix.org


[zfs-discuss] Any recommendations on Perc H700 controller on Dell Rx10 ?

2012-03-10 Thread Sriram Narayanan
Hi folks:

At work, I have an R510, and R610 and an R710 - all with the H700 PERC
controller.

Based on experiments, it seems there is no way to bypass the PERC
controller - one can only access the individual disks if each is set up
as a single-disk RAID0 volume.

This brings me to ask some questions:
a. Is it fine (in terms of an intelligent controller getting in the way
of ZFS) to have the PERC controller present each drive as a RAID0
volume?
b. Would there be any errors from the PERC doing things that ZFS is
not aware of, and could that cause issues later?

-- Sriram


Re: [zfs-discuss] bad seagate drive?

2011-09-11 Thread Sriram Narayanan
It'd still be worth reseating the SATA cables on the backplane, as
Krunal recommended. Once the resilvering completes, of course ;)

-- Sriram

On 9/12/11, Matt Harrison  wrote:
> On 11/09/2011 18:32, Krunal Desai wrote:
>> On Sep 11, 2011, at 13:01 , Richard Elling wrote:
>>> The removed state can be the result of a transport issue. If this is a
>>> Solaris-based
>>> OS, then look at "fmadm faulty" for a diagnosis leading to a removal. If
>>> none,
>>> then look at "fmdump -eV" for errors relating to the disk. Last, check
>>> the "zpool
>>> history" to make sure one of those little imps didn't issue a "zpool
>>> remove"
>>> command.
>>
>> Definitely check your cabling; a few of my drives disappeared like this as
>> 'REMOVED', turned out to be some loose SATA cables on my backplane.
>>
>> --khd
>
> Thanks guys,
>
> I reinstalled the drive after testing on the windows machine and it
> looks fine now. By the time I'd got on to the console it had already
> started resilvering. All done now and hopefully it will stay like that
> for a while.
>
> Thanks again, saved me some work

-- 
Sent from my mobile device

==
Belenix: www.belenix.org


Re: [zfs-discuss] zfs scripts

2011-09-09 Thread Sriram Narayanan
Plus, you'll need an & character at the end of each command.
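
For example, a minimal sketch assuming a POSIX shell (the snapshot and
path names are taken from your mail):

#!/bin/sh
# Start both sends in the background so they run concurrently,
# then block until both have finished.
zfs send pool/filesystem1@100911 > /backup/filesystem1.snap &
zfs send pool/filesystem2@100911 > /backup/filesystem2.snap &
wait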

-- Sriram

On 9/9/11, Tomas Forsman  wrote:
> On 09 September, 2011 - cephas maposah sent me these 0,4K bytes:
>
>> i am trying to come up with a script that incorporates other scripts.
>>
>> eg
>> zfs send pool/filesystem1@100911 > /backup/filesystem1.snap
>> zfs send pool/filesystem2@100911 > /backup/filesystem2.snap
>
> #!/bin/sh
> zfs send pool/filesystem1@100911 > /backup/filesystem1.snap &
> zfs send pool/filesystem2@100911 > /backup/filesystem2.snap
>
> ..?
>
>> i need to incorporate these 2 into a single script with both commands
>> running concurrently.
>
> /Tomas
> --
> Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
> |- Student at Computing Science, University of Umeå
> `- Sysadmin at {cs,acc}.umu.se

-- 
Sent from my mobile device

==
Belenix: www.belenix.org


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Sriram Narayanan
I just learned from the Phoronix website that KQ Infotech has stopped
working on ZFS for Linux, but that their github repo is still active.

Also, zfsonlinux.org, mentioned earlier in this thread, is seeing
active development.

-- Sriram

On 6/14/11, Sriram Narayanan  wrote:
> There's also ZFS from KQInfotech.
>
> -- Sriram
>
> On 6/14/11, David Magda  wrote:
>> On Tue, June 14, 2011 08:15, Jim Klimov wrote:
>>> Hello,
>>>
>>>A college friend of mine is using Debian Linux on his desktop,
>>> and wondered if he could tap into ZFS goodness without adding
>>> another server in his small quiet apartment or changing the
>>> desktop OS. According to his research, there are some kernel
>>> modules for Debian which implement ZFS, or a FUSE variant.
>>
>> Besides FUSE, there's also this:
>>
>> http://zfsonlinux.org/
>>
>> Btrfs also has many ZFS-like features:
>>
>> http://en.wikipedia.org/wiki/Btrfs
>>
>>>Can anyone comment how stable and functional these are?
>>> Performance is a secondary issue, as long as it does not
>>> lead to system crashes due to timeouts, etc. ;)
>>
>> A better bet would probably be to check out the lists of the porting
>> projects themselves. Most of the folks on zfs-discuss are probably people
>> that use ZFS on platforms that have more official support for it
>> (OpenSolaris-based stuff and FreeBSD).
>>
>>
>
> --
> Sent from my mobile device
>
> ==
> Belenix: www.belenix.org
>

-- 
Sent from my mobile device

==
Belenix: www.belenix.org


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Sriram Narayanan
There's also ZFS from KQInfotech.

-- Sriram

On 6/14/11, David Magda  wrote:
> On Tue, June 14, 2011 08:15, Jim Klimov wrote:
>> Hello,
>>
>>A college friend of mine is using Debian Linux on his desktop,
>> and wondered if he could tap into ZFS goodness without adding
>> another server in his small quiet apartment or changing the
>> desktop OS. According to his research, there are some kernel
>> modules for Debian which implement ZFS, or a FUSE variant.
>
> Besides FUSE, there's also this:
>
> http://zfsonlinux.org/
>
> Btrfs also has many ZFS-like features:
>
> http://en.wikipedia.org/wiki/Btrfs
>
>>Can anyone comment how stable and functional these are?
>> Performance is a secondary issue, as long as it does not
>> lead to system crashes due to timeouts, etc. ;)
>
> A better bet would probably be to check out the lists of the porting
> projects themselves. Most of the folks on zfs-discuss are probably people
> that use ZFS on platforms that have more official support for it
> (OpenSolaris-based stuff and FreeBSD).
>
>

-- 
Sent from my mobile device

==
Belenix: www.belenix.org


Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Sriram Narayanan
To add:
Even if you have great faith in ZFS, a backup helps in dealing with the unknown.
Consider:
- multiple disk failures that you are somehow unable to respond to.
- hardware failures (power supplies, motherboard, RAM).
- damage to the building.
- having to recreate everything elsewhere - even another system - for
a special reason.

ECC RAM will help ensure that the data handed to ZFS is error free.
ZFS, in turn, will be able to detect errors introduced while writing
to, or reading back from, the storage medium.

There are still issues such as disks reporting that data has been
written, but not having written it yet. Could someone elaborate a bit
more on this aspect, please?

-- Sriram

On 11/16/10, Edward Ned Harvey  wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Toby Thain
>>
>> The corruption will at least be detected by a scrub, even in cases where
> it
>> cannot be repaired.
>
> Not necessarily.  Let's suppose you have some bad memory, and no ECC.  Your
> application does 1 + 1 = 3.  Then your application writes the answer to a
> file.  Without ECC, the corruption happened in memory and went undetected.
> Then the corruption was written to file, with a correct checksum.  So in
> fact it's not filesystem corruption, and ZFS will correctly mark the
> filesystem as clean and free of checksum errors.
>
> In conclusion:
>
> Use ECC if you care about your data.
> Do backups if you care about your data.
>
> Don't be a cheapskate, or else, don't complain when you get bitten by lack
> of adequate data protection.
>

-- 
Sent from my mobile device

==
Belenix: www.belenix.org


Re: [zfs-discuss] using ZFS on new system

2010-11-05 Thread Sriram Narayanan
On Fri, Nov 5, 2010 at 6:41 PM, Chris Marquardt
 wrote:
> I just got a new system and want to use ZFS.  I want it to look like it did on 
> the old system.  I'm not a systems person and I did not setup the current 
> system.  The guy who did no longer works here.  Can I do zfs list and zpool 
> list and get ALL the information I need to do this?
>

You can use zfs list and zpool list to find out what ZFS file systems
you currently have.

There are other zfs and zpool commands that will let you do other
things with your file systems.
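
For example, something along these lines would capture most of the old
layout (the pool name "tank" is just a placeholder - substitute what
zpool list shows):

zpool list                    # pool names, sizes and usage
zpool status                  # each pool's vdev and disk layout
zfs list -o name,mountpoint,quota,compression   # filesystems and key properties
zfs get -s local all tank     # only the properties set explicitly on "tank"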

Do you remember what your old system was set up to do?


> Thanks,
> Chris



-- 
Belenix: www.belenix.org


Re: [zfs-discuss] is this pool recoverable?

2010-03-20 Thread Sriram Narayanan
On Sun, Mar 21, 2010 at 12:32 AM, Miles Nordin  wrote:
>>>>>> "sn" == Sriram Narayanan  writes:
>
>    sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view
>
> yeah, but he has no slog, and he says 'zpool clear' makes the system
> panic and reboot, so even from way over here that link looks useless.
>
> Patrick, maybe try a newer livecd from genunix.org like b130 or later
> and see if the panic is fixed so that you can import/clear/export the
> pool.  The new livecd's also have 'zpool import -F' for Fix Harder
> (see manpage first).  Let us know what happens.
>

Yes, I realized that after I posted to the list, and I replied again
asking him to use the OpenSolaris live CD. I just noticed that I
replied directly rather than to the list.

-- Sriram
-
Belenix: www.belenix.org


Re: [zfs-discuss] is this pool recoverable?

2010-03-20 Thread Sriram Narayanan
On Sat, Mar 20, 2010 at 9:19 PM, Patrick Tiquet  wrote:
> Also, I tried to run zpool clear, but the system crashes and reboots.

Please see if this link helps
http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view

-- Sriram
-
Belenix: www.belenix.org


Re: [zfs-discuss] OpenSolaris to Ubuntu

2009-12-29 Thread Sriram Narayanan
Each of these problems that you faced can be solved. Please ask for
help on each of these via separate emails to osol-discuss and you'll
get help.

I say so because I'm moving my infrastructure to opensolaris for these
services, among others.

-- Sriram

On 12/29/09, Duane Walker  wrote:
> I tried running an OpenSolaris server so I could use ZFS but SMB Serving
> wasn't reliable (it would only work for about 15 minutes). I also couldn't
> get Cacti working (No PHP-SNMP support and I tried building PHP with SNMP
> but it failed).
>
> So now I am going to run Ubuntu with RAID1 drives.  I am trying to transfer
> the files from my zpool (I have the drive in a USB - SATA chassis).
>
> I want to mount the pool and then volume without destroying the files if
> possible.
>
> If I create a pool will it destroy the contents of the pool?
>
> From reading the doco and the forums it looks like "zpool import rpool
> /dev/sdc" may be what I want?
>
> I did a "zpool import" but it didn't show the drive.  It was part of a
> mirror maybe "zpool import -D"?
>
> I have built zfs-fuse and it seems to be working.

-- 
Sent from my mobile device


Re: [zfs-discuss] Can I destroy a Zpool without importing it?

2009-12-27 Thread Sriram Narayanan
Also, if you don't care about the existing pool and want to create a
new pool on the same devices, you can go ahead and do so.

The format command will list the storage devices available to you.
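
A minimal sketch (the device names below are only examples - use the
ones that format reports):

format                                  # lists the c#t#d# device names
zpool create -f newpool c1t0d0 c1t1d0   # -f overwrites the old pool's labels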

-- Sriram

On 12/27/09, Sriram Narayanan  wrote:
> OpenSolaris has a newer version of ZFS than Solaris. What you have is
> a pool that was not marked as exported for use on a different OS
> install.
>
> Simply force import the pool using zpool import -f
>
> -- Sriram
>
> On 12/27/09, Havard Kruger  wrote:
>> Hi, in the process of building a new fileserver and I'm currently playing
>> around with various operating systems, I created a pool in Solaris, before I
>> decided to try OpenSolaris as well, so I installed OpenSolaris 2009.06, but
>> I forgot to destroy the pool I created in Solaris, so now I can't import it
>> because it's a newer version of ZFS in Solaris than it is in OpenSolaris.
>>
>> And I can not seem to find a way to destroy the pool without importing it
>> first. I guess I could format the drives in another OS, but that is a lot
>> more work than it should be. Is there any way to do this in OpenSolaris?
>
> --
> Sent from my mobile device
>

-- 
Sent from my mobile device


Re: [zfs-discuss] Can I destroy a Zpool without importing it?

2009-12-27 Thread Sriram Narayanan
OpenSolaris has a newer version of ZFS than Solaris. What you have is
a pool that was not marked as exported for use on a different OS
install.

Simply force import the pool using zpool import -f
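
For example (the pool name is a placeholder - running zpool import with
no arguments will show the real one):

zpool import            # lists pools that are visible but not imported
zpool import -f tank    # force the import despite the missing export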

-- Sriram

On 12/27/09, Havard Kruger  wrote:
> Hi, in the process of building a new fileserver and I'm currently playing
> around with various operating systems, I created a pool in Solaris, before I
> decided to try OpenSolaris as well, so I installed OpenSolaris 2009.06, but
> I forgot to destroy the pool I created in Solaris, so now I can't import it
> because it's a newer version of ZFS in Solaris than it is in OpenSolaris.
>
> And I can not seem to find a way to destroy the pool without importing it
> first. I guess I could format the drives in another OS, but that is a lot
> more work than it should be. Is there any way to do this in OpenSolaris?

-- 
Sent from my mobile device


Re: [zfs-discuss] How to destroy your system in funny way with ZFS

2009-12-27 Thread Sriram Narayanan
You could revert to the @install snapshot (via the live CD) and see if
that works for you.
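
A rough sketch of the steps from the live CD (the dataset names are
assumptions - check zfs list -t snapshot for the real ones, and note
that zfs rollback -r destroys any snapshots taken after @install):

zpool import -f rpool
zfs list -t snapshot
zfs rollback -r rpool/ROOT/opensolaris@install
zpool export rpool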

-- Sriram

On 12/27/09, Tomas Bodzar  wrote:
> So I booted from Live CD and then :
>
> zpool import
> pfexec zpool import -f rpool
> pfexec zfs set compression=off rpool
> pfexec zpool export rpool
>
> and reboot but still same problem.

-- 
Sent from my mobile device


Re: [zfs-discuss] ZFS send | verify | receive

2009-12-04 Thread Sriram Narayanan
If feasible, you may want to generate MD5 sums on the streamed output
and then use these for verification.
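
A sketch of the idea, assuming the Solaris digest(1) utility (md5sum
would do the same on Linux); the dataset and file names are illustrative:

zfs send pool/fs@snap | tee /backup/fs.snap | digest -a md5 > /backup/fs.snap.md5
# later, verify the stored stream against the recorded sum:
digest -a md5 < /backup/fs.snap | diff - /backup/fs.snap.md5 && echo stream OK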

-- Sriram

On 12/5/09, Edward Ned Harvey  wrote:
>> Depending of your version of OS, I think the following post from Richard
>> Elling
>> will be of great interest to you:
>> -
>> http://richardelling.blogspot.com/2009/10/check-integrity-of-zfs-send-streams.
>> html
>
> Thanks!  :-)
> No, wait! 
>
> According to that page, if you "zfs receive -n" then you should get a 0 exit
> status for success, and 1 for error.
>
> Unfortunately, I've been sitting here and testing just now ...  I created a
> "zfs send" datastream, then I made a copy of it and toggled a bit in the
> middle to make it corrupt ...
>
> I found that the "zfs receive -n" always returns 0 exit status, even if the
> data stream is corrupt.  In order to get the "1" exit status, you have to
> get rid of the "-n" which unfortunately means writing the completely
> restored filesystem to disk.
>
> I've sent a message to Richard to notify him of the error on his page.  But
> it would seem, the zstreamdump must be the only way to verify the integrity
> of a stored data stream.  I haven't tried it yet, and I'm out of time for
> today...

-- 
Sent from my mobile device


Re: [zfs-discuss] ZFS storage server hardware

2009-11-20 Thread Sriram Narayanan
On Wed, Nov 18, 2009 at 3:24 AM, Bruno Sousa  wrote:
> Hi Ian,
>
> I use the Supermicro SuperChassis 846E1-R710B, and i added the JBOD kit that
> has :
>
> Power Control Card
>
> SAS 846EL2/EL1 BP External Cascading Cable
>
> SAS 846EL1 BP 1-Port Internal Cascading Cable
>
> I don't do any monitoring in the JBOD chassis..

I have some really newbie questions here about such a chassis:
- Do we need to buy a motherboard as well?
- Which motherboard model do you have for such a chassis?
- Does the motherboard accept dual power supplies?

> Bruno
>
> Ian Allison wrote:
>
> Hi Bruno,
>
> Bruno Sousa wrote:
>
> Hi,
>
> I currently have a 1U server (Sun X2200) with 2 LSI HBA attached to a
> Supermicro JBOD chassis each one with 24 disks , SATA 1TB, and so far so
> good..
> So i have a 48 TB raw capacity, with a mirror configuration for NFS
> usage (Xen VMs) and i feel that for the price i paid i have a very nice
> system.
>
> Sounds good. I understand from
>
> http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg27248.html
>
> That you need something like supermicro's CSE-PTJBOD-CB1 to cable the drive
> trays up, do you do anything about monitoring the power supply?
>
> Cheers,
> Ian.
>
>
>
>


[zfs-discuss] The ZFS FAQ needs an update

2009-10-18 Thread Sriram Narayanan
All:

Given that the latest S10 update includes user quotas, the FAQ here
[1] may need an update

-- Sriram

[1] http://opensolaris.org/os/community/zfs/faq/#zfsquotas


[zfs-discuss] A note on Apache using ZFS

2009-09-07 Thread Sriram Narayanan
http://www.scmagazineuk.com/Apache-publishes-detailed-report-about-security-breach-with-aims-to-prevent-a-recurrence/article/148282/

-- Sriram


[zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-09-07 Thread Sriram Narayanan
Folks:

I gave a presentation last weekend on how one could use Zones, ZFS and
Crossbow to recreate deployment scenarios on one's computer (to the
extent possible).

I've received the following question, and would like to ask the ZFS
Community for answers.

-- Sriram


-- Forwarded message --
From: Ritesh Raj Sarraf 
Date: Mon, Sep 7, 2009 at 2:20 PM
Subject: [ilugb] Does ZFS support Hole Punching/Discard
To: ilug-bengal...@googlegroups.com


Thanks to Sriram for the nice walk through on "Beyond localhost".

There was one item I forgot to ask. Does ZFS support Hole Punching ?

After pushing off to BP is when I remembered of this issue. Here's a link about
this issue and its state in Linux.
http://lwn.net/Articles/293658/

Ritesh
--
Ritesh Raj Sarraf
RESEARCHUT - http://www.researchut.com
"Necessity is the mother of invention."




Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Sriram Narayanan
On Thu, Jun 11, 2009 at 1:45 AM, Arthur Bundo wrote:
> I want to thank you for your quick response.
>
> Regarding the learning curve, i really don't have enough time to go in deep 
> of the things anymore, i just like the stability of the Solaris platform in 
> general, and i used it at home years from now.
>
> All i need now is some out of the box installation OS.
> I see a lot of things have changed in OSolaris, regarding Belenix, i 
> appreciate very much your efforts on that distro, but i don't know their 
> internals and simply put, it is about trust  and Opensolaris is supported 
> under the Sun name, and that is enough for me.
>

Heh.. most of the OpenSolaris 2008.n distro is based on work done on
Belenix. The same person who created Belenix worked closely with the
OpenSolaris distro team to create that distro. We continue to help out
as and when we get time away from work.

> I am not a developer or sys-admin or anything connected to OS in general, i 
> just want stability and backward compatibility.
>

I think Richard Elling pointed out something worth investigating -
what does the present /export/home folder contain ?

-- Sriram


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Sriram Narayanan
On Wed, Jun 10, 2009 at 4:35 AM, Arthur Bundo wrote:
> I cant login as root anymore with su , as x user i cant execute almost 
> anything as sys to do some maintenance , only in single mode at boot,

After you log on as the user x (let's call this user "arthur"), see if
you can run "pfexec su -" and if you get a # in the command prompt. If
you do, then you can still rescue your data.

Alternatively, boot from the Belenix live CD, run zpool import -f and
see if the pool gets imported.

> name of system is "unknown" now, just got tired of this thing now, i don't 
> want to learn solaris, dont have time for that , just wanted to use it since 
> the broadband connection and the java environment seemed much more responsive 
> than linux and gnome too behaves much better in OpenSolaris, on previous 
> releases i did have sound via OSS now not anymore etc etc and a lot of small 
> things i see changed from release 59 and dont have the time to go deeply. 
> this snapshot thing is a killer, but i immediately run into problems with it.
>

Snapshots are an awesome feature indeed.

In case you are interested, try booting from Belenix and see if it
detects your sound device. You can take this discussion to
belenix-discuss, since zfs-discuss is for the zfs filesystem.

> thank you guys for your time and advice since it was all for free.
> good luck with this monster, see you all in 6 months
>

I wouldn't be so quick to dismiss OpenSolaris as a monster, much less
ZFS :) There's sometimes a learning curve (no matter how close to zero
it may be).

All you need to do is calmly read the instructions given earlier in
this thread, and also try to use shorter sentences and full stops.

> Arthur
> --

-- Sriram


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-09 Thread Sriram Narayanan
On Wed, Jun 10, 2009 at 2:14 AM, Arthur Bundo wrote:
> i reread my post, it makes dizzy even me, right.
>
> the problem i have is this, i had a user x with home directory /export/home/x 
> .
> some days before i upgraded from 101 to 111, and on nautilus i tried to make 
> snapshots of / and /export/home/x and when i reboot some time after i 
> couldn't login normally, and all i get is "can not mount directory 
> /export/home not empty" so i have no X, i just log as user x and tried to 
> open X with pfexec gdm where is select failsafe mode and there i open 
> nautilus as root in order to try to manage the snapshots, on the snapshots of 
> the "/export/home/x" directory on earlier dates is empty, on the last 
> snapshot is there. my problem is simple . i just need the stuff in 
> /export/home/x and a working opensolaris.
>
> i will try anything you guys here say but just don't want to rm the 
> /export/home/x .
> thanks guys i know it is my fault and my ignorance.

Some tips:
- Try breaking your message into smaller sentences with full stops.
- Do not try to use X, it's useless for you.

Since you see the command line environment:
zfs list | grep home

Do you see /export/home/x as a separate entry ?

rpool/export/home   10.7G  52.1G  10.6G  /export/home
rpool/export/home@install   88.5K  -   154K  -

That's what I see on my system - I have /export/home which belongs to
the pool called rpool. I do not have a separate ZFS filesystem for my
user "sriram". That is, /export/home contains "sriram", "abcd",
"demouser", etc. These are separate home directories, but they are
folders and not individual file systems.

Is that what you have ?

Please reply, and hopefully, someone else will respond to your reply.
It's 2:30 am in my time zone, and I'm going to sleep now.

Keep those tips in mind !

-- Sriram


Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Sriram Narayanan
On Thu, Mar 12, 2009 at 2:12 AM, Erik Trimble  wrote:


> On the SAN, create (2) LUNs - one for your primary data, and one for
> your snapshots/backups.
>
> On hostA, create a zpool on the primary data LUN (call it zpool A), and
> another zpool on the backup LUN (zpool B).  Take snapshots on A, then
> use 'zfs send' and 'zfs receive' to copy the clone/snapshot over to
> zpool B. then 'zpool export B'

Shouldn't this be 'zpool export A' ?

-- Sriram


Re: [zfs-discuss] RePartition OS disk, give some to zpool

2009-03-06 Thread Sriram Narayanan
On Sat, Mar 7, 2009 at 8:27 AM, Harry Putnam  wrote:
> I'm still a little confused about the various versions but I guess
> since I installed from the official opensolaris 2008.11, which gave me
> 101b.  And then updated to dev (208) that would be Indiana right?
>

That'd be build 108, I think.

> So getting to the actual task... Can that drive be partitioned without
> destroying the installation on it?  If so, can an Installation be
> tarred to another disk and simply tarred back once the partitioning is
> done?

You cannot non-destructively repartition. If you have any data on that
disk, move it off (to the other disks, for example), reinstall into a
smaller partition, and then copy that data back.

-- Sriram


Re: [zfs-discuss] How to make a ZFS pool with discs of the other machines of the LAN?

2009-03-06 Thread Sriram Narayanan
On Sat, Mar 7, 2009 at 11:26 AM, Sriram Narayanan  wrote:
> On Sat, Mar 7, 2009 at 11:23 AM, Sriram Narayanan  wrote:



>> I intend to experiment with iSCSI later when I free up some machines
>> for such an experiment.
>
> My only tip for Linux based iSCSI clients would be that you should use
> CentOS 5.2 based dm (device multi path) for connecting to iSCSI using
> multipath.

Sorry, dm -> device mapper. multipath is one of the features that dm provides.

-- Sriram


Re: [zfs-discuss] How to make a ZFS pool with discs of the other machines of the LAN?

2009-03-06 Thread Sriram Narayanan
On Sat, Mar 7, 2009 at 11:23 AM, Sriram Narayanan  wrote:
> On Sat, Mar 7, 2009 at 11:05 AM, Thiago C. M. Cordeiro | World Web
>  wrote:
>> Hi!
>>
>>  Today I have ten computers with Xen and Linux, each with 2 discs of 500G in 
>> raid1, each node sees only its own raid1 volume, I do not have live motion 
>> of my virtual machines... and moving the data from one hypervisor to another 
>> is a pain task...
>>
>>  Now that I discovered this awesome file system! I want that the ZFS manages 
>> all my discs in a network environment.
>>
>>  But I don't know the best way to make a pool using all my 20 discs in one 
>> big pool with 10T of capacity.
>>
>>  My first contact with Solaris, was with the OpenSolaris 2008.11, as a 
>> virtual machine (paravirtual domU) on a Linux (Debian 5.0) dom0. I also have 
>> more opensolaris on real machines to make the tests...
>>
>>  I'm thinking in export all my 20 discs, through the AoE protocol, and in my 
>> dom0 that I'm running the opensolaris domU (in HA through the Xen), I will 
>> make the configuration file for it (zfs01.cfg) with 20 block devices of 500G 
>> and inside the opensolaris domu, I will share the pool via iSCSI targets 
>> and/or NFS back to the domUs of my cluster...  Is this a good idea?
>
> I share a three disk pool over NFS for some VMWare ESXi based hosting.
> There is considerably high disk I/O caused by the apps that run on
> these VMs. ZFS + NFS is working fine for me.
>
> I intend to experiment with iSCSI later when I free up some machines
> for such an experiment.

My only tip for Linux based iSCSI clients would be that you should use
CentOS 5.2 based dm (device multi path) for connecting to iSCSI using
multipath. One of the dm developers at Netapp recently gave a talk at
a LUG meet we hosted at our office, and he explained how Netapp and
Redhat put in a lot of effort to ensure that dm on Redhat is stable.

After some experiments at work, I have concluded that dm on CentOS 5.2
is much more stable than on any other Linux distro.
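
For reference, a minimal sketch of that setup on a CentOS 5-era system
(package and service names from memory, so treat them as assumptions):

yum install device-mapper-multipath
# edit /etc/multipath.conf - in particular, trim the default blacklist
service multipathd start
chkconfig multipathd on
multipath -ll     # show the multipathed devices and their paths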

>
> -- Sriram
>


Re: [zfs-discuss] How to make a ZFS pool with discs of the other machines of the LAN?

2009-03-06 Thread Sriram Narayanan
On Sat, Mar 7, 2009 at 11:05 AM, Thiago C. M. Cordeiro | World Web
 wrote:
> Hi!
>
>  Today I have ten computers with Xen and Linux, each with 2 discs of 500G in 
> raid1, each node sees only its own raid1 volume, I do not have live motion of 
> my virtual machines... and moving the data from one hypervisor to another is 
> a pain task...
>
>  Now that I discovered this awesome file system! I want that the ZFS manages 
> all my discs in a network environment.
>
>  But I don't know the best way to make a pool using all my 20 discs in one 
> big pool with 10T of capacity.
>
>  My first contact with Solaris, was with the OpenSolaris 2008.11, as a 
> virtual machine (paravirtual domU) on a Linux (Debian 5.0) dom0. I also have 
> more opensolaris on real machines to make the tests...
>
>  I'm thinking in export all my 20 discs, through the AoE protocol, and in my 
> dom0 that I'm running the opensolaris domU (in HA through the Xen), I will 
> make the configuration file for it (zfs01.cfg) with 20 block devices of 500G 
> and inside the opensolaris domu, I will share the pool via iSCSI targets 
> and/or NFS back to the domUs of my cluster...  Is this a good idea?

I share a three disk pool over NFS for some VMWare ESXi based hosting.
There is considerably high disk I/O caused by the apps that run on
these VMs. ZFS + NFS is working fine for me.

I intend to experiment with iSCSI later when I free up some machines
for such an experiment.

-- Sriram


Re: [zfs-discuss] zfs with PERC 6/i card?

2009-03-04 Thread Sriram Narayanan
On Wed, Mar 4, 2009 at 5:29 AM, Julius Roberts  wrote:
>>> I would like to hear if anyone is using ZFS with this card and how you set
>>> it up, and what, if any, issues you've had with that set up.
>>
>> However I would expect that if you could present 8 raid0 luns to
>> the host then that should be at least a decent config to start
>> using for ZFS.
>
> I can confirm that we are doing that here (with 3 drives) and it's
> been fine for almost a year now.
>

I've done exactly this myself.

I have two 200 GB SATA disks for the OS, and four 146 GB SAS disks for
the data pool. I've just downloaded and installed Nexenta.

-- Sriram


[zfs-discuss] Pointer needed: blog post on integrating ZFS with the Linux kernel

2009-02-23 Thread Sriram Narayanan
Hello all:

I recall that some time last year, someone from Sun wrote a two-part
blog post calling for a technical discussion on the effort involved in
making ZFS work on the Linux kernel.

May I have a pointer to that blog post, please ?

-- Ram


Re: [zfs-discuss] ZFS on SAN?

2009-02-15 Thread Sriram Narayanan
On Mon, Feb 16, 2009 at 9:11 AM, Sanjeev  wrote:
> Sendai,
>
> On Fri, Feb 13, 2009 at 03:21:25PM -0800, Andras Spitzer wrote:
>> Hi,
>>
>> When I read the ZFS manual, it usually recommends to configure redundancy at 
>> the ZFS layer, mainly because there are features that will work only with 
>> redundant configuration (like corrupted data correction), also it implies 
>> that the overall robustness will improve.
>>
>> My question is simple, what is the recommended configuration on SAN (on 
>> high-end EMC, like the Symmetrix DMX series for example) where usually the 
>> redundancy is configured at the array level, so most likely we would use 
>> simple ZFS layout, without redundancy?
>
> From my experience, this is a bad idea. I have seen a couple of cases with such
> config (no redundancy at ZFS level) where the connection between the HBA and 
> the
> storage was flaky. And there was no way for ZFS to recover. I agree that MPxIO
> or any other multipathing handles failure of links. But, that in itself is not
> sufficient.
>

So what would you recommend then, Sanjeev ?
- multiple ZFS pools running on a SAN ?
- An S10 box or boxes that provide ZFS backed iSCSI ?

-- Sriram


Re: [zfs-discuss] A question on "non-consecutive disk failures"

2009-02-08 Thread Sriram Narayanan
On Sun, Feb 8, 2009 at 1:29 AM, Frank Cusack  wrote:
>
> what mirror?  there is no mirror.  you have a raidz.  you can have 1
> disk failure.

Thanks for the correction. I was thinking RAIDZ, but typed "mirror". I
have only RAIDZs on my servers.

>
>> - if disks A and C fail, then I will be able to read from disks B
>> and D. Is this understanding correct?
>
> no.  you will lose all the data if 2 disks fail.
>
> The part of the slides you are referring to is in reference to ditto
> blocks, which allow failure of PARTS of a SINGLE disk.
>

Thanks. I've started to read the various ZFS documentation.


-- Sriram


Re: [zfs-discuss] A question on "non-consecutive disk failures"

2009-02-08 Thread Sriram Narayanan
On Sun, Feb 8, 2009 at 1:56 AM, Peter Tribble  wrote:

> No. That quote is part of the discussion of ditto blocks.
>
> See the following:
>
> http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
>

Thank you, Peter.

-- Sriram


Re: [zfs-discuss] Nested ZFS file systems are not visible over an NFS export

2009-02-07 Thread Sriram Narayanan
An update:

I'm using VMWare ESX 3.5 and VMWare ESXi 3.5 as the NFS clients.

I'm using zfs set sharenfs=on datapool/vmwarenfs to make that ZFS file
system accessible over NFS.
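
For example, a sketch (sharenfs is inherited by child filesystems, but
an NFSv3 client still sees each child as a separate mount point, which
is why they show up as empty folders):

zfs set sharenfs=on datapool/vmwarenfs
zfs get -r sharenfs datapool/vmwarenfs   # confirm the children inherit the share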

-- Sriram
On Sun, Feb 8, 2009 at 12:07 AM, Sriram Narayanan  wrote:
> Hello:
>
> I have the following zfs structure
> datapool/vmwarenfs - which is available over NFS
>
> I have some ZFS filesystems as follows:
> datapool/vmwarenfs/basicVMImage
> datapool/vmwarenfs/basicVMImage@snapshot
> datapool/vmwarenfs/VMImage01 -> zfs cloned from basicVMImage@snapshot
> datapool/vmwarenfs/VMImage02 -> zfs cloned from basicVMImage@snapshot
>
> These are accessible via NFS as /datapool/vmwarenfs
>with the subfolders VMImage01 and VMImage02
>
> What's happening right now:
> a. When I connect to datapool/vmwarenfs over NFS,
> - the contents of /datapool/vmwarenfs are visible and usable
> - VMImage01 and VMImage02 appear as empty sub-folders at the
> paths /datapool/vmwarenfs/VMImage01 and /datapool/vmwarenfs/VMImage02,
> but their contents are not visible.
>
> b. When I explicitly share VMImage01 and VMImage02 via NFS, then
> /datapool/vmwarenfs/VMImage01 -> usable as a separate NFS share
> /datapool/vmwarenfs/VMImage02 -> usable as a separate NFS share
>
> What I'd like to have:
> - attach over NFS to /datapool/vmwarenfs
> - view the ZFS filesystems VMImage01 and VMImage02 as sub folders
> under /datapool/vmwarenfs
>
> If needed, I can move VMImage01 and VMImage02 from datapool/vmwarenfs,
> and even re-create them elsewhere.
>
> -- Sriram
>


[zfs-discuss] A question on "non-consecutive disk failures"

2009-02-07 Thread Sriram Narayanan
From the presentation "ZFS - The last word in filesystems", page 22:
"In a multi-disk pool, ZFS survives any non-consecutive disk failures"

Questions:
If I have a 3 disk RAIDZ with disks A, B and C, then:
- if disk B fails, then will I be able to continue to read data if
disks A and C are still available?
If I have a 4 disk RAIDZ with disks A, B, C, and D, then:
- if disks A and B fail, then I won't be able to read from the mirror
any more. Is this understanding correct?
- if disks A and C fail, then I will be able to read from disks B
and D. Is this understanding correct?

-- Sriram


[zfs-discuss] Alternatives to increasing the number of copies on a ZFS snapshot

2009-02-07 Thread Sriram Narayanan
How do I set the number of copies on a snapshot ? Based on the error
message, I believe that I cannot do so.
I already have a number of clones based on this snapshot, and would
like the snapshot to have more copies now.
For higher redundancy and peace of mind, what alternatives do I have ?
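
For example, a sketch of the alternatives (the names are illustrative;
note that copies is a filesystem property, applies only to data written
after it is set, and cannot be set on a snapshot):

zfs set copies=2 datapool/fs                            # ditto copies for future writes
zfs send datapool/fs@snap | zfs receive backuppool/fs   # a second, independent copy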

-- Sriram