Re: [zfs-discuss] ZFS with SAN's and HA

2010-09-03 Thread Peter Karlsson

Hi Michael,

Have a look at this blog/white paper,
http://blogs.sun.com/TF/entry/new_white_paper_practicing_solaris, for an
example of how to use an iSCSI target from a NAS device as storage. You
can just replace the Tomcat/MySQL HA services with HA NFS and you have
what you are looking for.


/peter

On 8/27/10 11:25 , Michael Dodwell wrote:

Lao,

I had a look at HAStoragePlus etc., and from what I understand that's to
mirror local storage across 2 nodes so services can access it, 'DRBD
style'.

Having read through the documentation on the Oracle site, the cluster
software, from what I gather, is about how to cluster services together
(Oracle/Apache etc.), and again any documentation I've found on storage is
about how to duplicate local storage to multiple hosts for HA failover. I
can't really see anything on clustering services to use shared storage/ZFS
pools.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS issue with ZFS

2010-08-16 Thread Peter Karlsson



On 8/14/10 11:49 , Phillip Bruce (Mindsource) wrote:

Peter,

What would you expect for root?
That is the user I am using.


root is by default mapped to anon (nobody), unless you specifically export
the filesystem with an option that allows root on one or more clients to
be mapped to local root on the server.


zfs set sharenfs=rw,root=host zpool/fs/to/export

where host is a ':'-separated list of hosts.

Alternatively, if you want root from any host to be mapped to root on
the server (a bad idea), you can do something like this:


zfs set sharenfs=rw,anon=0 zpool/fs/to/export

to allow root access from all hosts.
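As a hedged illustration of the first variant (the dataset name tank/export and the client/server names here are made up):

```shell
# On the server: allow root on two named clients to act as root
zfs set sharenfs=rw,root=client1:client2 tank/export
share                       # confirm the share options took effect

# On client1, as root: mount and verify root is not squashed to nobody
mount -F nfs server:/tank/export /mnt
touch /mnt/rootfile
ls -l /mnt/rootfile         # owner should be root, not nobody
```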

/peter


Like I already stated, it is NOT a UID or GID issue.
Both systems are the same.


Try as a different user that has the same UID on both systems and has
write access to the directory in question.




Phillip
____
From: Peter Karlsson [peter.k.karls...@oracle.com]
Sent: Friday, August 13, 2010 7:23 PM
To: zfs-discuss@opensolaris.org; Phillip Bruce (Mindsource)
Subject: Re: [zfs-discuss] NFS issue with ZFS

Hi Phillip,

What are the permissions on the directory you are trying to write to, and
what user are you using on the client system? It's most likely a UID
mapping issue between the client and the server.

/peter

On 8/14/10 3:19 , Phillip Bruce wrote:

I have Solaris 10 U7 exporting a ZFS filesystem.
The client is Solaris 9 U7.

I can mount the filesystem just fine, but I am unable to write to it.
showmount -e shows my mount is set for everyone.
The dfstab file has the rw option set.

So what gives?

Phillip





Re: [zfs-discuss] NFS issue with ZFS

2010-08-16 Thread Peter Karlsson

Hi Phillip,

What are the permissions on the directory you are trying to write to, and
what user are you using on the client system? It's most likely a UID
mapping issue between the client and the server.


/peter

On 8/14/10 3:19 , Phillip Bruce wrote:

I have Solaris 10 U7 exporting a ZFS filesystem.
The client is Solaris 9 U7.

I can mount the filesystem just fine, but I am unable to write to it.
showmount -e shows my mount is set for everyone.
The dfstab file has the rw option set.

So what gives?

Phillip



Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-04 Thread Peter Karlsson

Hi Matt,

I don't know if it's recommended or not, but I've been doing it for close
to 3 years on my OpenSolaris laptop. It has saved me a few times, like
last week when my internal drive died :)


/peter

On 2010-05-04 20.33, Matt Keenan wrote:

Hi,

Just wondering whether mirroring a USB drive with main laptop disk for
backup purposes is recommended or not.

Current setup, single root pool set up on 200GB internal laptop drive :

$ zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0


I have a 320GB external USB drive which I'd like to configure as a
mirror of this root pool (I know it will only use 200GB of the external
one; not worried about that).

Plan would be to connect the USB drive once or twice a week, let it
resilver, and then disconnect it again. Keeping the USB drive connected
24/7 would, AFAIK, cause performance issues for the laptop.

This would have the added benefit of the USB drive being bootable.

- Recommended or not ?
- Are there known issues with this type of setup ?


cheers

Matt



Re: [zfs-discuss] question about which build to install

2010-01-21 Thread Peter Karlsson
Wrong list, but anyhow: I was able to install b128 and then upgrade to
b130. I had to relink some OpenGL files to get Compiz to work, but apart
from that it looks OK.


/peter

On 2010-01-22 11.03, Thomas Burgess wrote:

I installed b130 on my server, and I'm being hit by this bug:
http://defect.opensolaris.org/bz/show_bug.cgi?id=13540

Where I can't log into GNOME. I've been trying to deal with it, hoping
that a workaround would show up.

If there IS a workaround, I'd love to have it... if not, I'm wondering:

is there another version I can downgrade to? I'm pretty new to
OpenSolaris and I've tried to Google for this answer, but I can't
find it.


my zpool is version 22.

thanks for any help.




Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-22 Thread Peter Karlsson



Trevor Pretty wrote:


OK, I've also got an S7000 simulator as a VM, and it seems to have done
what I would expect.


7000# zfs get sharesmb pool-0/local/trevors_stuff/tlp
NAME                            PROPERTY  VALUE                   SOURCE
pool-0/local/trevors_stuff/tlp  sharesmb  name=trevors_stuff_tlp  inherited from pool-0/local/trevors_stuff


7000# cd /var/ak/shares/web/export/tlp/.zfs
7000# ls shares/
trevors_stuff_tlp

It also has sharemgr, which seems to be missing in S10.

Yep, it would, as it's OpenSolaris based :)

/peter



Trevor Pretty wrote:

Team

Am I missing something? First off, I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.

I'm doing all this via VirtualBox (Vista host) and I've set up the
network (I believe), as I can ping, ssh and telnet from Vista into the
S10 virtual machine at 192.168.56.101.

I've set sharesmb on. But there seem to be none of the CIFS commands
you get in OpenSolaris, and when I point a file browser (or whatever
it's called in Windows) at \\192.168.56.101 I can't access it.

I would also expect a file name in .zfs/shares like it says in the man
pages, but there is none.


What have I missed? RTFMs more than welcome :-)


Details.

bash-3.00# zfs get sharesmb sam_pool/backup
NAME PROPERTY  VALUE SOURCE
sam_pool/backup  sharesmb  onlocal


bash-3.00# ls -al /sam_pool/backup/.zfs
total 3
dr-xr-xr-x   3 root root   3 Aug 11 14:26 .
drwxr-xr-x   2 root root   8 Aug 18 09:52 ..
dr-xr-xr-x   2 root root   2 Aug 11 14:26 snapshot


bash-3.00# ifconfig -a
lo0: flags=2001000849 mtu 
8232 index 1

inet 127.0.0.1 netmask ff00
e1000g0: flags=1004843 mtu 
1500 index 2

inet 192.168.56.101 netmask ff00 broadcast 192.168.56.255
ether 8:0:27:84:cb:f5


bash-3.00# cat /etc/release
   Solaris 10 10/09 s10x_u8wos_08a X86
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 16 September 2009




  


===
www.eagle.co.nz
This email is confidential and may be legally privileged.
If received in error please destroy and immediately notify us.


Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-22 Thread Peter Karlsson



Trevor Pretty wrote:



Tim Cook wrote:



On Sun, Nov 22, 2009 at 4:18 PM, Trevor Pretty 
mailto:trevor_pre...@eagle.co.nz>> wrote:


Team

Am I missing something? First off, I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.

I'm doing all this via VirtualBox (Vista host) and I've set up
the network (I believe), as I can ping, ssh and telnet from Vista
into the S10 virtual machine at 192.168.56.101.

I've set sharesmb on. But there seem to be none of the CIFS
commands you get in OpenSolaris, and when I point a file browser
(or whatever it's called in Windows) at \\192.168.56.101 I can't
access it.

I would also expect a file name in .zfs/shares like it says in the
man pages, but there is none.

What have I missed? RTFMs more than welcome :-)


Details.

bash-3.00# zfs get sharesmb sam_pool/backup
NAME PROPERTY  VALUE SOURCE
sam_pool/backup  sharesmb  onlocal


bash-3.00# ls -al /sam_pool/backup/.zfs
total 3
dr-xr-xr-x   3 root root   3 Aug 11 14:26 .
drwxr-xr-x   2 root root   8 Aug 18 09:52 ..
dr-xr-xr-x   2 root root   2 Aug 11 14:26 snapshot


bash-3.00# ifconfig -a
lo0: flags=2001000849
mtu 8232 index 1
  inet 127.0.0.1 netmask ff00
e1000g0: flags=1004843
mtu 1500 index 2
  inet 192.168.56.101 netmask ff00 broadcast 192.168.56.255
  ether 8:0:27:84:cb:f5


bash-3.00# cat /etc/release
 Solaris 10 10/09 s10x_u8wos_08a X86
 Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
  Use is subject to license terms.
 Assembled 16 September 2009


I thought I had heard forever ago that the native CIFS implementation
wouldn't ever be put back to Solaris 10, due to the fact that it makes
significant changes to the kernel. Maybe I'm crazy, though.


I would think an ls would tell you if it was or not.  Do you see this 
output when you run a '/bin/ls -dV'?


root# /bin/ls -dV /
drwxr-xr-x  26 root root  35 Nov 15 10:58 /
 owner@:--:---:deny
 owner@:rwxp---A-W-Co-:---:allow
 group@:-w-p--:---:deny
 group@:r-x---:---:allow
  everyone@:-w-p---A-W-Co-:---:deny
  everyone@:r-x---a-R-c--s:---:allow
 




--
--Tim

Yep!

bash-3.00# /bin/ls -dV /
drwxr-xr-x  46 root root  63 Nov 23 11:41 /
owner@:--:--:deny
owner@:rwxp---A-W-Co-:--:allow
group@:-w-p--:--:deny
group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow
bash-3.00#
Nope, this is just ZFS ACLs; it doesn't have anything to do with the CIFS
server.


/peter


I think the server is in but not the client; however, I can't find
sharemgr either.











Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-22 Thread Peter Karlsson

Hi Trevor,

The native CIFS/SMB stuff was never backported to S10, so you would have
to use Samba on your S10 VM.


Cheers,
Peter

Trevor Pretty wrote:

Team

Am I missing something? First off, I normally play around with
OpenSolaris & it's been a while since I played with Solaris 10.

I'm doing all this via VirtualBox (Vista host) and I've set up the
network (I believe), as I can ping, ssh and telnet from Vista into the
S10 virtual machine at 192.168.56.101.

I've set sharesmb on. But there seem to be none of the CIFS commands
you get in OpenSolaris, and when I point a file browser (or whatever
it's called in Windows) at \\192.168.56.101 I can't access it.

I would also expect a file name in .zfs/shares like it says in the man
pages, but there is none.


What have I missed? RTFMs more than welcome :-)


Details.

bash-3.00# zfs get sharesmb sam_pool/backup
NAME PROPERTY  VALUE SOURCE
sam_pool/backup  sharesmb  onlocal


bash-3.00# ls -al /sam_pool/backup/.zfs
total 3
dr-xr-xr-x   3 root root   3 Aug 11 14:26 .
drwxr-xr-x   2 root root   8 Aug 18 09:52 ..
dr-xr-xr-x   2 root root   2 Aug 11 14:26 snapshot


bash-3.00# ifconfig -a
lo0: flags=2001000849 mtu 
8232 index 1

   inet 127.0.0.1 netmask ff00
e1000g0: flags=1004843 mtu 
1500 index 2

   inet 192.168.56.101 netmask ff00 broadcast 192.168.56.255
   ether 8:0:27:84:cb:f5


bash-3.00# cat /etc/release
  Solaris 10 10/09 s10x_u8wos_08a X86
  Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
   Use is subject to license terms.
  Assembled 16 September 2009






Re: [zfs-discuss] ZFS flar image.

2009-09-14 Thread Peter Karlsson

Hi Greg,

We did a hack along those lines when we installed 100 Ultra 27s that were
used during J1, but we automated the process by using AI to install a
bootstrap image that had an SMF service that pulled over the zfs send
file, created a new BE, and received the send file into the new BE. It
worked fairly OK; there were a few things we had to run a few scripts to
fix, but by and large it was smooth. I really need to get that blog entry
done :)
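This wasn't the exact script from the lab, but the receive-into-a-new-BE step can be sketched roughly as follows; the dataset, BE, and server names are all made up, and a received BE usually needs some fix-ups afterwards (which is what the extra scripts were for):

```shell
# On the golden system: snapshot the root filesystem and save the stream
zfs snapshot rpool/ROOT/golden@deploy
zfs send rpool/ROOT/golden@deploy > /net/imageserver/images/golden.zfs

# On each target, run from the bootstrap image's SMF service:
zfs receive rpool/ROOT/newbe < /net/imageserver/images/golden.zfs
zfs set canmount=noauto rpool/ROOT/newbe   # behave like a normal BE
beadm activate newbe                       # boot into the received BE
init 6
```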


/peter

Greg Mason wrote:
As an alternative, I've been taking a snapshot of rpool on the golden 
system, sending it to a file, and creating a boot environment from the 
archived snapshot on target systems. After fiddling with the snapshots 
a little, I then either appropriately anonymize the system or provide 
it with its identity. When it boots up, it's ready to go.


The only downfall to my method is that I still have to run the full 
OpenSolaris installer, and I can't exclude anything in the archive.


Essentially, it's a poor man's flash archive.

-Greg

cindy.swearin...@sun.com wrote:

Hi RB,

We have a draft of the ZFS/flar image support here:

http://opensolaris.org/os/community/zfs/boot/flash/

Make sure you review the Solaris OS requirements.

Thanks,

Cindy

On 09/14/09 11:45, RB wrote:
Is it possible to create a flar image of a ZFS root filesystem to
install it on other machines?



Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Peter Karlsson
Or you could use Tim Foster's ZFS snapshot service:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_now_with

/peter

On Jun 6, 2008, at 14:07, Tobias Exner wrote:

> Hi,
>
> I'm thinking about the following situation and I know there are some
> things I have to understand:
>
> I want to use two SUN-Servers with the same amount of storage capacity
> on both of them and I want to replicate the filesystem ( zfs )
> incrementally two times a day from the first to the second one.
>
> I know that the zfs send/receive commands will do the job, but I don't
> understand exactly how zfs will know what have to be transferred..  
> Is it
> the difference to the last snapshot?
>
> If yes, does that mean that I have to keep all snapshots to achieve an
> "incremental-forever" configuration?  --> That's my goal!
>
>
>
>
> regards,
>
> Tobias Exner
>
>


Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Peter Karlsson
Hi Tobias,

I did this for a large lab we had last month; I have it set up
something like this.

zfs snapshot  [EMAIL PROTECTED]
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh server2 zfs recv  rep_pool
ssh zfs destroy [EMAIL PROTECTED]
ssh zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
zfs destroy [EMAIL PROTECTED]
zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]

I was using this for a setup with one master system and 100 systems
that I replicated the zpool to, and I had scripts that were run when a
student logged out to do a zfs rollback [EMAIL PROTECTED] to reset it to a
known state, ready for a new student. I don't have the actual script I
used here right now, so I might have missed some flags, but you can see
the basic flow of it.
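Spelled out with concrete (hypothetical) names, the flow above amounts to keeping a rolling pair of snapshots on both sides, something like:

```shell
#!/bin/sh
# Incremental-forever replication sketch; "mypool", "rep_pool" and
# "server2" are placeholders. Assumes an initial full send already
# created rep_pool@prev on the destination.
POOL=mypool
DEST=server2

zfs snapshot $POOL@now
zfs send -i $POOL@prev $POOL@now | ssh $DEST zfs recv -F rep_pool

# Roll the two-snapshot window forward on both sides:
ssh $DEST zfs destroy rep_pool@prev
ssh $DEST zfs rename rep_pool@now rep_pool@prev
zfs destroy $POOL@prev
zfs rename $POOL@now $POOL@prev
```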

/peter

On Jun 6, 2008, at 14:07, Tobias Exner wrote:

> Hi,
>
> I'm thinking about the following situation and I know there are some
> things I have to understand:
>
> I want to use two SUN-Servers with the same amount of storage capacity
> on both of them and I want to replicate the filesystem ( zfs )
> incrementally two times a day from the first to the second one.
>
> I know that the zfs send/receive commands will do the job, but I don't
> understand exactly how zfs will know what has to be transferred.
> Is it the difference to the last snapshot?
>
> If yes, does that mean that I have to keep all snapshots to achieve an
> "incremental-forever" configuration?  --> That's my goal!
>
>
>
>
> regards,
>
> Tobias Exner
>
>


Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-08 Thread Peter Karlsson
Hi Hans,

I think what you are looking for is a combination of a snapshot and
zfs send/receive; that would give you an archive that you can use to
recreate your ZFS filesystems on your zpool at will at a later time.
So you can do something like this:

Create archive:
zfs snapshot -r [EMAIL PROTECTED]
zfs send -R [EMAIL PROTECTED] > mypool_archive.zfs

Restore from archive:
zpool create mynewpool disk1 disk2
zfs receive -d -F mynewpool < mypool_archive.zfs


Doing this will create an archive that contains all descendant
filesystems of mypool and that can be restored at a later time, without
depending on how the zpool is organized.

/peter

On May 7, 2008, at 23:31, Hans wrote:
> Thank you for your posting.
> Well, I still have problems understanding how a pool works.
> When I have one partition with ZFS like this:
> /dev/sda1 -> ZFS
> /dev/sda2 -> ext2
> the only pool is on the sda1 device. In this way I can back it up
> with the dd command.
> Now I try to understand:
> when I have 2 ZFS partitions like this:
> /dev/sda1 -> ZFS
> /dev/sda2 -> ZFS
> /dev/sda3 -> ext2
> I cannot copy only sda1 with dd and leave sda2, because I would
> destroy the pool.
> Is it possible to separate two partitions in such a way that I can
> back up one separately?
> The normal Linux way is that every partition is mounted into the
> file-system tree, but each is free to store data differently; so on
> Linux you can mount an ext3 and a reiserfs together into one
> file-system tree.
> ZFS is different: it spreads data over the partitions in whatever way
> is best for ZFS. Maybe I can compare it a little with a RAID 0,
> where data is spread over several hard disks. On a RAID 0 it is
> impossible to back up one hard disk and restore it; in the same way I
> cannot back up one ZFS partition and leave the other ZFS partitions.
> Well, I think a snapshot is not what I want.
> I want an image that I can use for any problem, so I can install a
> new version of Solaris, install software, and then say... not
> good, restore image. Or whatever I want.
>
>
> This message posted from opensolaris.org


Re: [zfs-discuss] zpool shared between OSX and Solaris on a MacBook Pro

2008-02-19 Thread Peter Karlsson
On Feb 19, 2008, at 17:27, Darren J Moffat wrote:

> Peter Karlsson wrote:
>> Hi,
>> I got my MacBook Pro set up to dual boot between Solaris and OSX, and
>> I have created a zpool to use as shared storage for documents etc.
>> However, I got this strange thing when trying to access the zpool from
>> Solaris: only root can see it?? I created the zpool on OSX, as it
>> uses an old version of the on-disk format; if I create a zpool on
>> Solaris, all users can see it. Strange.
>
> What do you mean by "only root can see it"
>
As root:
-bash-3.2# ls -ld /zpace
drwxr-xr-x   8 root root   9 Feb 19 12:28 /zpace

As myself:
 ls -ld /zpace
/zpace: Permission denied
bash-3.2$ cd /zpace
bash: cd: /zpace: Permission denied

So I create a zpool on Solaris
 -bash-3.2# zpool create ztst /export/home/tst/a /export/home/tst/b 
/export/home/tst/c

bash-3.2$ ls -ld /ztst
drwxr-xr-x   2 root root   2 Feb 19 17:23 /ztst
bash-3.2$ cd /ztst

So that works, so something is strange with the zpool I created in OSX

> All files are owned by root ?

Nope, some files are owned by other users.
>
> Users don't see the datasets with zfs list ?
Can:

bash-3.2$ /sbin/zpool list
NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
zpace61G   715M  60.3G 1%  ONLINE  -

>
> Users don't see the mounted filesystems with df ?
Nope:
/zpace (zpace ):124462673 blocks 124462673 files
df: cannot statvfs /zpace/DB: Permission denied
df: cannot statvfs /zpace/Download: Permission denied
df: cannot statvfs /zpace/demo: Permission denied

>
> Users don't even see the pool with zpool status ?
Can:
bash-3.2$ /sbin/zpool status
  pool: zpace
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
zpace   ONLINE   0 0 0
  c1d0p4ONLINE   0 0 0

errors: No known data errors
bash-3.2$ /sbin/zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
zpace715M  59.3G   586K  /zpace
zpace/DB29.5K  59.3G  29.5K  /zpace/DB
zpace/Download   648M  59.3G   648M  /zpace/Download
zpace/demo  66.2M  59.3G  66.2M  /zpace/demo
>
>
>
> This looks strange:
>
> zpace  delegation   off default

That's the default on OSX, as I created the file system on OSX

On Solaris it reports delegation on
bash-3.2$ /sbin/zpool get all zpace
NAME   PROPERTY VALUE   SOURCE
zpace  size 61G -
zpace  used 715M-
zpace  available60.3G   -
zpace  capacity 1%  -
zpace  altroot  -   default
zpace  health   ONLINE  -
zpace  guid 2692302108782490543  -
zpace  version  6   local
zpace  bootfs   -   default
zpace  delegation   on  default
zpace  autoreplace  off default
zpace  cachefile-   default
zpace  failmode waitdefault
>
>
> The default is "on" not off.
>
> What build of Solaris are you using ?
snvx_b80

>
>
> Also see this:
>
> zpace/demo  mountpoint  /Volumes/zpace/demodefault

It was created from OSX; I should note that.




[zfs-discuss] zpool shared between OSX and Solaris on a MacBook Pro

2008-02-18 Thread Peter Karlsson
Hi,

I got my MacBook Pro set up to dual boot between Solaris and OSX, and I
have created a zpool to use as shared storage for documents etc.
However, I got this strange thing when trying to access the zpool from
Solaris: only root can see it?? I created the zpool on OSX, as it uses
an old version of the on-disk format; if I create a zpool on Solaris,
all users can see it. Strange.

Any ideas on what might be the issue here??

Cheers,
Peter

root# zpool get all zpace
NAME  PROPERTY VALUE   SOURCE
zpace  bootfs   -   default
zpace  autoreplace  off default
zpace  delegation   off default

root# zfs get all zpace/demo
NAMEPROPERTY   VALUE  SOURCE
zpace/demo  typefilesystem -
zpace/demo  creationSat Feb 16 13:25 2008  -
zpace/demo  used66.2M  -
zpace/demo  available   59.3G  -
zpace/demo  referenced  66.2M  -
zpace/demo  compressratio   1.00x  -
zpace/demo  mounted yes-
zpace/demo  quota   none   default
zpace/demo  reservation none   default
zpace/demo  recordsize  128K   default
zpace/demo  mountpoint  /Volumes/zpace/demodefault
zpace/demo  sharenfsoffdefault
zpace/demo  checksumon default
zpace/demo  compression offdefault
zpace/demo  atime   on default
zpace/demo  devices on default
zpace/demo  execon default
zpace/demo  setuid  on default
zpace/demo  readonlyoffdefault
zpace/demo  zoned   offdefault
zpace/demo  snapdir hidden default
zpace/demo  aclmode groupmask  default
zpace/demo  aclinherit  secure default
zpace/demo  canmounton default
zpace/demo  shareiscsi  offdefault
zpace/demo  xattr   on default
zpace/demo  copies  1  default
zpace/demo  version 2  -



Re: [zfs-discuss] Re: External drive enclosures + Sun Server for massstorage

2007-01-22 Thread Peter Karlsson

Hi Frank,

Try man devfsadm; it will update the device trees with your new disk
drives. disks is an older command that does about the same thing.
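A minimal example using the standard Solaris commands:

```shell
# Rebuild /dev and /devices entries for newly attached disks:
devfsadm -c disk

# Confirm the new drives are visible (format lists all disks, then
# exits when stdin is closed):
echo | format
```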


Cheers,
Peter

Frank Cusack wrote:
On January 22, 2007 12:12:19 PM -0600 Brian Hechinger 
<[EMAIL PROTECTED]> wrote:

On Mon, Jan 22, 2007 at 09:39:19AM -0800, Frank Cusack wrote:

On January 21, 2007 12:15:22 AM -0200 Toby Thain <[EMAIL PROTECTED]>
wrote:
> To be clear: the X2100 drives are neither "hotswap" nor "hotplug" under
> Solaris. Replacing a failed drive requires a reboot.

Also, adding a drive that wasn't present at boot requires a reboot.


This couldn't possibly be true, unless we've taken major steps backwards,
as this has always been possible (at least on SPARC).


It is true.  Try it.

[Sorry to send a reply to a personal mail back to the list, but your
email address bounces

450 <[EMAIL PROTECTED]>: Recipient address rejected: Domain 
not found]


-frank