Re: [zfs-discuss] zfs dynamic lun expansion

2007-07-04 Thread yu larry liu
If you are using the whole LUN as your vdev in the zpool and using an EFI 
label, you can export the zpool, relabel the LUNs (using the new capacity), 
and import the zpool. You should then be able to see the increased size.

FYI, a dynamic LUN expansion feature is under test and will be 
available soon.
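
The sequence above can be sketched as follows; the pool name "tank" and the 
device names are illustrative, so substitute your own:

```shell
# Export the pool so the labels can be rewritten safely.
zpool export tank

# In format's expert mode, select each LUN and relabel it (EFI)
# so the label covers the new capacity.
format -e c2t0d0
format -e c2t1d0

# Re-import the pool; it should now report the larger size.
zpool import tank
zpool list tank
```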

Larry

ganesh wrote:
> Hi,
> I had 2 LUNs in a ZFS mirrored config.
> I increased the size of both LUNs by x GB and offlined/onlined the 
> individual LUNs in the zpool, and also tried an export/import of the zpool, 
> but I am unable to see the increased size. What would I need to do to see 
> the increased size, or is it not possible yet?
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   


Re: [zfs-discuss] ZFS, iSCSI + Mac OS X Tiger (globalSAN iSCSI)

2007-07-04 Thread Nathan Kroenert - Server ESG
Hey there -

This is very likely completely unrelated, but here goes anyhoo...

I have noticed with some particular ethernet adapters (e1000g in my 
case) and large MTU sizes (8K) that things (most anything that really 
pushes the interface) sometimes stop for no good reason on my x86 
Solaris boxes. After a stall, I'm able to reconnect after a short time 
and it works for a while again... (I really must get around to properly 
reproducing the problem and logging a bug too...)

I'd be curious to know if setting the MTU to 1500 on both systems makes 
any difference at all.
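
For what it's worth, a quick runtime test on the Solaris side might look like 
the sketch below (the e1000g0 interface name is an assumption; adjust for 
your hardware):

```shell
# Temporarily set the MTU back to the standard 1500 bytes.
ifconfig e1000g0 mtu 1500

# Confirm the change took effect.
ifconfig e1000g0 | grep mtu
```

Note that persistent jumbo-frame settings for e1000g usually live in the 
driver's configuration file (e1000g.conf), so this is only a runtime test.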

Note that I have only observed this with my super cheap adapters at 
home. I'm yet to see it (though I'm also yet to try really hard) on the more 
expensive ones at work...

Again - Likely nothing to do with your problem, but hey. It has made a 
difference for me before...

Cheers.

Nathan.


George wrote:
> I have set up an iSCSI ZFS target that seems to connect properly from 
> the Microsoft Windows initiator in that I can see the volume in MMC Disk 
> Management.
> 
>  
> When I shift over to Mac OS X Tiger with globalSAN iSCSI, I am able to 
> set up the Targets with the target name shown by `iscsitadm list target` 
> and when I actually connect or "Log On" I see that one connection exists 
> on the Solaris server.  I then go on to the Sessions tab in globalSAN 
> and I see the session details and it appears that data is being 
> transferred via the PDUs Sent, PDUs Received, Bytes, etc.  HOWEVER, the 
> connection then appears to terminate on the Solaris side; if I check it a 
> few minutes later it shows no connections, but the Mac OS X initiator 
> still shows connected, although no more traffic appears to be flowing in 
> the Session Statistics dialog area.
> 
>  
> Additionally, when I then disconnect the Mac OS X initiator, it seems to 
> drop fine on the Mac OS X side (even though the Solaris side has shown 
> it gone for a while); however, when I reconnect or Log On again, it seems 
> to spin infinitely on the "Target Connect..." dialog.  Solaris is, 
> interestingly, showing 1 connection while this apparent issue (spinning 
> beachball of death) is going on with globalSAN.  Even killing the Mac OS 
> X process doesn't seem to get me full control again as I have to restart 
> the system to kill all processes (unless I can hunt them down and `kill 
> -9` them which I've not successfully done thus far).
> 
> Has anyone dealt with this before who might be able to assist, or at 
> least throw some further information my way to help me troubleshoot this?
> 
>  
> 
>  
> Thanks much,
> 
>  
> -George
> 
> 
> 
> 


[zfs-discuss] zfs dynamic lun expansion

2007-07-04 Thread ganesh
Hi,
I had 2 LUNs in a ZFS mirrored config.
I increased the size of both LUNs by x GB and offlined/onlined the 
individual LUNs in the zpool, and also tried an export/import of the zpool, 
but I am unable to see the increased size. What would I need to do to see 
the increased size, or is it not possible yet?
 
 


[zfs-discuss] Re: ZFS and Firewire/USB enclosures

2007-07-04 Thread Jeff Thompson
Besides the one you mention, bug 6560174 also shows the problems I've
seen with ZFS on firewire.  (This bug also shows the blank status page.)

Is there any way to know if these will be addressed?

Thanks,
- Jeff

>> I still haven't got any "warm and fuzzy" responses
>> yet solidifying ZFS in combination with Firewire or USB enclosures.
> 
> I was unable to use zfs (i.e. "zpool create") or ufs ("mkfs -F ufs") on
> firewire devices, because scsa1394 would hang the system as
> soon as multiple concurrent write commands were submitted to it.
> 
> I filed bug 6445725 (which disappeared in the scsa1394
> bugs.opensolaris.org black hole), submitted a fix and
> requested a sponsor for the fix[*], but not much has happened
> with fixing this problem in opensolaris.  
> 
> There is no such problem with USB mass storage devices.
> 
> [*] http://www.opensolaris.org/jive/thread.jspa?messageID=46190



Re: [zfs-discuss] data structures in ZFS

2007-07-04 Thread Wout Mertens
> A data structure view of ZFS is now available:
> http://www.opensolaris.org/os/community/zfs/structures/
> 
> We've only got one picture up right now (though it's a juicy one!),  
> but let us know what you're interested in seeing, and
> we'll try to make that happen.

Well it's a nice picture, thanks! (could you also make a .svg version 
available?)

As a zfs-code-illiterate interested bystander, it would be nice if the various 
objects were annotated with short descriptions of their intent. The same goes 
for the pointers to objects in other layers: what is each pointer keeping track 
of, and what kind of information passes between the layers?

I'd also love to see what happens on the block level when you take a snapshot. 
What objects end up pointing to the same things etc.

And if you could make it into a 3d zooming movie with change-by-change 
animations that'd be awesome! ;-)

But this is certainly a lot easier to grok than the linear descriptions in the 
on-disk format guide. Thanks!

Wout.
 
 


[zfs-discuss] ZFS, iSCSI + Mac OS X Tiger (globalSAN iSCSI)

2007-07-04 Thread George

I have set up an iSCSI ZFS target that seems to connect properly from the
Microsoft Windows initiator in that I can see the volume in MMC Disk
Management.

When I shift over to Mac OS X Tiger with globalSAN iSCSI, I am able to set
up the Targets with the target name shown by `iscsitadm list target` and
when I actually connect or "Log On" I see that one connection exists on the
Solaris server.  I then go on to the Sessions tab in globalSAN and I see the
session details and it appears that data is being transferred via the PDUs
Sent, PDUs Received, Bytes, etc.  HOWEVER, the connection then appears to
terminate on the Solaris side; if I check it a few minutes later it shows no
connections, but the Mac OS X initiator still shows connected, although no
more traffic appears to be flowing in the Session Statistics dialog area.


Additionally, when I then disconnect the Mac OS X initiator, it seems to drop
fine on the Mac OS X side (even though the Solaris side has shown it gone
for a while); however, when I reconnect or Log On again, it seems to spin
infinitely on the "Target Connect..." dialog.  Solaris is, interestingly,
showing 1 connection while this apparent issue (spinning beachball of death)
is going on with globalSAN.  Even killing the Mac OS X process doesn't seem
to get me full control again as I have to restart the system to kill all
processes (unless I can hunt them down and `kill -9` them which I've not
successfully done thus far).

Has anyone dealt with this before who might be able to assist, or at least
throw some further information my way to help me troubleshoot this?
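
In case it helps anyone reproduce this, the Solaris-side connection state can 
be watched with something like the sketch below (the 30-second polling 
interval is arbitrary):

```shell
# List targets verbosely; the output includes the connection count
# for each target.
iscsitadm list target -v

# Poll the connection state to catch the moment the session drops.
while true; do
    date
    iscsitadm list target -v | grep -i connection
    sleep 30
done
```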




Thanks much,


-George


Re: [zfs-discuss] ZFS and VXVM/VXFS

2007-07-04 Thread przemolicc
On Mon, Jul 02, 2007 at 12:30:32PM -0700, Richard Elling wrote:
> Magesh R wrote:
> > We are looking at the alternatives to VXVM/VXFS. One of the feature
> > which we liked in Veritas, apart from the obvious ones is the
> > ability to call the disks by name and group them in to a disk group.
> >
> > Especially in SAN based environment where the disks may be shared by
> > multiple machines, it is very easy to manage them by disk group
> > names rather than cxtxdx numbers.
> >
> > Does zfs offer such capabilities?
>
> ZFS greatly simplifies disk management.  I would argue that it eliminates
> the need for vanity naming or some features of disk groups.  I suggest you
> read through the docs on how to administer and set up ZFS, try a few
> examples, and then ask specific questions.
>
> Nit: You confused me with "disks may be shared by multiple machines",
> because LUNs have no protection below the LUN level, and if your disk
> is a LUN, then sharing it leaves the data unprotected.  Perhaps you are
> speaking of LUNs on a RAID array?
>   -- richard

Richard,
It is rather rare that in "SAN based environment" you are given access
to particular disks. :-)

Magesh,
in the ZFS world you create pools, and each pool has its own name. From then
on you refer to each pool (and each filesystem within it) by name.
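
For instance (the pool and device names below are purely illustrative):

```shell
# Create a mirrored pool named "tank" from two LUNs.
zpool create tank mirror c1t0d0 c1t1d0

# Create a filesystem inside it.
zfs create tank/home

# From here on, everything is addressed by name, not cXtXdX:
zfs list tank/home
zpool status tank
```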

Regards
przemol

--
http://przemol.blogspot.com/




Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-04 Thread Jure Pečar
On Tue, 3 Jul 2007 10:26:20 -0500
Albert Chin <[EMAIL PROTECTED]> wrote:

> Or maybe someone knows of cheap SSD storage that
> could be used for the ZIL?

If 4 GB is enough for you, take a look at the Gigabyte i-RAM:
http://www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180

Linux folks say it's a bit quirky (it sets a 'device failure' bit or something 
like that, so you have to comment out some checking code in the kernel to get 
it working), but it works extremely well in my home box and in a few production 
mail servers.
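
Once such a device is visible to the system, attaching it as a separate intent 
log (per PSARC 2007/171) might look like the sketch below; the pool name 
"tank" and the device c3t0d0 are assumptions:

```shell
# Add the fast device as a dedicated log (slog) vdev.
zpool add tank log c3t0d0

# The log device should now show up in the pool layout.
zpool status tank
```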

-- 

Jure Pečar
http://jure.pecar.org