[zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Ian Brown
Hello, 
I try to create a zfs file system according to 
Creating a Basic ZFS File System section of 
Creating a Basic ZFS File System document of SUN.

The problem is that the device has a UFS filesystem on the partition
I am trying to work with; it is in fact empty and does not contain any
files that I need.

So:

zpool create tank /dev/dsk/c1d0s6
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1d0s6 contains a ufs filesystem.
/dev/dsk/c1d0s6 is normally mounted on /MyPartition according to 
  /etc/vfstab. Please remove this entry to use this device.

So I removed this entry from /etc/vfstab and also unmounted the
/MyPartition partition.
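
(For reference, that cleanup amounts to something like the following, using the
mount point and slice named above:

   # umount /MyPartition
   # vi /etc/vfstab     (delete the line that mounts c1d0s6 on /MyPartition)

after which the slice is no longer referenced anywhere.)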

Then I tried: 

zpool create -f tank /dev/dsk/c1d0s6
internal error: No such device
Abort (core dumped)

But:
 zpool list gives: 
NAME     SIZE    USED   AVAIL    CAP  HEALTH   ALTROOT
tank    1.94G   51.5K   1.94G     0%  ONLINE   -

is there any reason for this internal error: No such device ?
Is there something wrong here which I should do in a different way ? 


from man zpool create -f
 The command  verifies  that  each  device  specified  is
 accessible  and  not currently in use by another subsys-
 tem. There  are  some  uses,  such  as  being  currently
 mounted, or specified as the dedicated dump device, that
 prevents a device from ever being  used  by  ZFS.  Other
 uses,  such as having a preexisting UFS file system, can
 be overridden with the -f option.
 ...
 ...
  -f

 Forces use of vdevs, even if they appear in  use  or
 specify  a  conflicting  replication  level. Not all
 devices can be overridden in this manner.

Ian
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shared ZFS pools

2006-12-06 Thread Albert Shih
 On 06/12/2006 at 05:05:55+0100, Flemming Danielsen wrote:
 Hi
 I have 2 questions on use of ZFS.
 How do I ensure I have site redundancy using zfs pools? As I see it, we only
 ensure mirroring between 2 disks. I have 2 HDS, one on each site, and I want
 to be able to lose one of them and my pools should still be running. For instance:
  
 I have created 2 luns on each site (A and B) named AA, AB and BA, BB. I then
 create my pool and mirror AA to BA and AB to BB. If I lose site B, hosting BA
 and BB, can I be sure they do not hold both copies of any data?
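
(For illustration, a pool laid out that way would be created with something
like the following, where AA, AB, BA and BB stand in for the actual LUN device
names:

   # zpool create sitepool mirror AA BA mirror AB BB

Each top-level mirror then has one half on site A and one half on site B, so
losing either site still leaves a complete copy of all data.)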
  
I'm asking a question (maybe stupid): what do you use to attach the disks on the 2
different sites? Are you using FC attachment?

Regards.


--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7th floor, plateau D, office 10
Heure local/Local time:
Wed Dec 6 09:06:30 CET 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Wee Yeh Tan

Ian,

The first error is correct in that zpool create will not, unless
forced, create a file system if it knows that another filesystem
resides on the target vdev.

The second error was caused by your removal of the slice.

What I find disconcerting is that the zpool was created anyway.
Can you provide the output of zpool status and the disk's
partition table?  If it is indeed carved from c1d0s6, can you destroy
the pool and see whether the same creation sequence creates the
zpool again?


--
Just me,
Wire ...

On 12/6/06, Ian Brown [EMAIL PROTECTED] wrote:

Hello,
I try to create a zfs file system according to
Creating a Basic ZFS File System section of
Creating a Basic ZFS File System document of SUN.

The problem is that the device has a UFS filesystem on the partition
I am trying to work with; it is in fact empty and does not contain any
files that I need.

So:

zpool create tank /dev/dsk/c1d0s6
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1d0s6 contains a ufs filesystem.
/dev/dsk/c1d0s6 is normally mounted on /MyPartition according to
  /etc/vfstab. Please remove this entry to use this device.

So I removed this entry from /etc/vfstab and also unmounted the
/MyPartition partition.

Then I tried:

zpool create -f tank /dev/dsk/c1d0s6
internal error: No such device
Abort (core dumped)

But:
 zpool list gives:
NAME     SIZE    USED   AVAIL    CAP  HEALTH   ALTROOT
tank    1.94G   51.5K   1.94G     0%  ONLINE   -

is there any reason for this internal error: No such device ?
Is there something wrong here which I should do in a different way ?


from man zpool create -f
 The command  verifies  that  each  device  specified  is
 accessible  and  not currently in use by another subsys-
 tem. There  are  some  uses,  such  as  being  currently
 mounted, or specified as the dedicated dump device, that
 prevents a device from ever being  used  by  ZFS.  Other
 uses,  such as having a preexisting UFS file system, can
 be overridden with the -f option.
 ...
 ...
  -f

 Forces use of vdevs, even if they appear in  use  or
 specify  a  conflicting  replication  level. Not all
 devices can be overridden in this manner.

Ian


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Jim Davis
We have two aging Netapp filers and can't afford to buy new Netapp gear, 
so we've been looking with a lot of interest at building NFS fileservers 
running ZFS as a possible future approach.  Two issues have come up in the 
discussion


- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very 
nicely).  Mirroring is an alternative, but when you're on a tight budget 
losing N/2 disk capacity is painful.


- The default scheme of one filesystem per user runs into problems with 
linux NFS clients; on one linux system, with 1300 logins, we already have 
to do symlinks with amd because linux systems can't mount more than about 
255 filesystems at once.  We can of course just have one filesystem 
exported, and make /home/student a subdirectory of that, but then we run 
into problems with quotas -- and on an undergraduate fileserver, quotas 
aren't optional!
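
(To frame the quota point: the usual ZFS answer is one filesystem per user
precisely because quotas are set per filesystem, e.g., hypothetically, and
assuming a tank/home filesystem already exists:

   # zfs create tank/home/student1
   # zfs set quota=2g tank/home/student1

which is exactly the layout that collides with the Linux mount limit described
above.)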


Neither of these problems is necessarily a showstopper, but both make the 
transition more difficult.  Any progress that could be made on them 
would help sites like us make the switch sooner.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Al Hopper
On Wed, 6 Dec 2006, Jim Davis wrote:

 We have two aging Netapp filers and can't afford to buy new Netapp gear,
 so we've been looking with a lot of interest at building NFS fileservers
 running ZFS as a possible future approach.  Two issues have come up in the
 discussion

 - Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
 nicely).  Mirroring is an alternative, but when you're on a tight budget
 losing N/2 disk capacity is painful.

 - The default scheme of one filesystem per user runs into problems with
 linux NFS clients; on one linux system, with 1300 logins, we already have
 to do symlinks with amd because linux systems can't mount more than about
 255 filesystems at once.  We can of course just have one filesystem
 exported, and make /home/student a subdirectory of that, but then we run
 into problems with quotas -- and on an undergraduate fileserver, quotas
 aren't optional!

 Neither of these problems are necessarily showstoppers, but both make the
 transition more difficult.  Any progress that could be made with them
 would help sites like us make the switch sooner.

The showstopper might be performance - since the Netapp has nonvolatile
memory - which greatly accelerates NFS operations.  A good strategy is to
build a ZFS test system and determine if it provides the NFS performance
you expect in your environment.  Remember that ZFS likes inexpensive
SATA disk drives - so a test system will be kind to your budget and the
hardware is re-usable when you decide to deploy ZFS.  And you may very
well find other, unintended uses for that test system.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Darren J Moffat

On Wed, 6 Dec 2006, Jim Davis wrote:

We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach.  Two issues have come up in the
discussion

- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
nicely).  Mirroring is an alternative, but when you're on a tight budget
losing N/2 disk capacity is painful.


You can add more disks to a pool that uses raid-z; you just can't
add disks to an existing raid-z vdev.

The following config was done in two steps:

$ zpool status
  pool: cube
 state: ONLINE
 scrub: scrub completed with 0 errors on Mon Dec  4 03:52:18 2006
config:

NAME STATE READ WRITE CKSUM
cube ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c5t0d0   ONLINE   0 0 0
c5t1d0   ONLINE   0 0 0
c5t2d0   ONLINE   0 0 0
c5t3d0   ONLINE   0 0 0
c5t4d0   ONLINE   0 0 0
c5t5d0   ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c5t8d0   ONLINE   0 0 0
c5t9d0   ONLINE   0 0 0
c5t10d0  ONLINE   0 0 0
c5t11d0  ONLINE   0 0 0
c5t12d0  ONLINE   0 0 0
c5t13d0  ONLINE   0 0 0


The targets t0 through t5 were added initially; many days
later the targets t8 through t13 were added.

The fact that these are all on the same controller isn't relevant.

This is actually what you want with raid-z anyway; in my case above
it wouldn't be good for performance to have 12 disks in the top-level
raid-z.
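
A sketch of the two steps (the zpool commands themselves are not in the
original mail; the device names are taken from the status output above):

   # zpool create cube raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
   ...days later...
   # zpool add cube raidz c5t8d0 c5t9d0 c5t10d0 c5t11d0 c5t12d0 c5t13d0

The zpool add command grows the pool by a second raidz1 top-level vdev, and
new writes are dynamically striped across both.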


- The default scheme of one filesystem per user runs into problems with
linux NFS clients; on one linux system, with 1300 logins, we already have
to do symlinks with amd because linux systems can't mount more than about
255 filesystems at once.  We can of course just have one filesystem
exported, and make /home/student a subdirectory of that, but then we run
into problems with quotas -- and on an undergraduate fileserver, quotas
aren't optional!


So how can OpenSolaris help you with a Linux kernel restriction
on the number of mounts ?

Hey I know, get rid of the Linux boxes and replace them with OpenSolaris
based ones ;-)

Seriously, what are you expecting OpenSolaris and ZFS/NFS in particular 
to be able to do about a restriction in Linux ?


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Torrey McMahon
Still ... I don't think a core file is appropriate. Sounds like a bug is 
in order if one doesn't already exist. (zpool dumps core when  missing 
devices are used perhaps?)


Wee Yeh Tan wrote:

Ian,

The first error is correct in that zpool create will not, unless
forced, create a file system if it knows that another filesystem
resides on the target vdev.

The second error was caused by your removal of the slice.

What I find disconcerting is that the zpool was created anyway.
Can you provide the output of zpool status and the disk's
partition table?  If it is indeed carved from c1d0s6, can you destroy
the pool and see whether the same creation sequence creates the
zpool again?




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird thing with zfs

2006-12-06 Thread Krzys

Thanks so much.. anyway, resilvering worked its way through and I got everything resolved:
zpool status -v
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors

  pool: mypool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Dec  5 13:48:31 2006
config:

NAMESTATE READ WRITE CKSUM
mypool2 ONLINE   0 0 0
  raidz ONLINE   0 0 0
c3t0d0  ONLINE   0 0 0
c3t1d0  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t3d0  ONLINE   0 0 0
c3t4d0  ONLINE   0 0 0
c3t5d0  ONLINE   0 0 0
c3t6d0  ONLINE   0 0 0

errors: No known data errors

I did not change any cables or anything, just rebooted... I will look into 
replacing the cables (those are the short scsi cables).. anyway this is so weird, and the 
original disk that I replaced seems to be good as well.. it must be a connectivity 
problem... but what's weird is that I had it running for months without 
problems...


Regards and thanks to all for help.

Chris



On Tue, 5 Dec 2006, Richard Elling wrote:


BTW, there is a way to check what the SCSI negotiations resolved to.
I wrote about it once in a BluePrint
http://www.sun.com/blueprints/0500/sysperfnc.pdf
See page 11
-- richard

Richard Elling wrote:

This looks more like a cabling or connector problem.  When that happens
you should see parity errors and transfer rate negotiations.
 -- richard

Krzys wrote:

Ok, so here is an update

I did restart my system; I powered it off and powered it on. Here is a screen 
capture of my boot. I certainly do have some hard drive issues and will 
need to take a look at them... But I got my disk back visible to the 
system and zfs is doing resilvering again


Rebooting with command: boot
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  
File and args:
SunOS Release 5.10 Version Generic_118833-24 64-bit
Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Hostname: chrysek
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
SCSI bus DATA IN phase parity error
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
Target 6 reducing sync. transfer rate
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd5):
Error for Command: read(10)               Error Level: Retryable
Requested Block: 286732066                Error Block: 286732066
Vendor: SEAGATE                           Serial Number: 3HY14PVS
Sense Key: Aborted Command
ASC: 0x48 (initiator detected error message received), ASCQ: 0x0, 
FRU: 0x2

WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
SCSI bus DATA IN phase parity error
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
Target 3 reducing sync. transfer rate
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd23):
Error for Command: read(10)               Error Level: Retryable
Requested Block: 283623842                Error Block: 283623842
Vendor: SEAGATE                           Serial Number: 3HY8HS7L
Sense Key: Aborted Command
ASC: 0x48 (initiator detected error message received), ASCQ: 0x0, 
FRU: 0x2

WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
SCSI bus DATA IN phase parity error
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
Target 5 reducing sync. transfer rate
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd25):
Error for Command: read(10)               Error Level: Retryable
Requested Block: 283623458                Error Block: 283623458
Vendor: SEAGATE                           Serial Number: 3HY0LF18
Sense Key: Aborted Command
ASC: 0x48 (initiator detected error message received), ASCQ: 0x0, 
FRU: 0x2

/kernel/drv/sparcv9/zpool symbol avl_add multiply defined
/kernel/drv/sparcv9/zpool symbol assfail3 multiply defined
WARNING: kstat_create('unix', 0, 'dmu_buf_impl_t'): namespace collision
mypool2/d3 uncorrectable error
checking ufs filesystems
/dev/rdsk/c1t0d0s7: is logging.

chrysek console login: VERITAS SCSA Generic Revision: 3.5c
Dec  5 13:01:38 chrysek root: CAPTURE_UPTIME ERROR: /var/opt/SUNWsrsrp 
missing
Dec  5 13:01:38 chrysek root: CAPTURE_UPTIME ERROR: /var/opt/SUNWsrsrp 
missing

Dec  5 13:01:46 chrysek VERITAS: No proxy found.
Dec  5 13:01:52 chrysek vmd[546]: ready for connections
Dec  5 13:01:53 chrysek VERITAS: No proxy found.
Dec  5 13:01:54 

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Rob



You can add more disks to a pool that is in raid-z you just can't
add disks to the existing raid-z vdev.
 


cd /usr/tmp
# create ten 100MB sparse files to use as vdevs
mkfile -n 100m 1 2 3 4 5 6 7 8 9 10
# start with a single 3-device raidz vdev
zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
zpool status t
zfs list t
# add a second top-level vdev (raidz2 this time; -f overrides the
# mismatched replication level warning)
zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6 /usr/tmp/7
zpool status t
zfs list t
# add a plain single-device vdev plus a hot spare
zpool add t /usr/tmp/8 spare /usr/tmp/9
zpool status t
zfs list t
# turn the single device into a mirror by attaching another file
zpool attach t /usr/tmp/8 /usr/tmp/10
zpool status t
zfs list t
sleep 10
# remove one backing file, scrub to detect the damage, then replace it
rm /usr/tmp/5
zpool scrub t
sleep 3
zpool status t
mkfile -n 100m 5
zpool replace t /usr/tmp/5
zpool status t
sleep 10
zpool status t
# swap the members of the first raidz vdev for larger files, one at a time
zpool offline t /usr/tmp/1
mkfile -n 200m 1
zpool replace t /usr/tmp/1
zpool status t
sleep 10
zpool status t
zpool offline t /usr/tmp/2
mkfile -n 200m 2
zpool replace t /usr/tmp/2
zfs list t
sleep 10
zpool offline t /usr/tmp/3
mkfile -n 200m 3
zpool replace t /usr/tmp/3
sleep 10
zfs list t
# clean up
zpool destroy t
rm 1 2 3 4 5 6 7 8 9 10

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread eric kustarz

Jim Davis wrote:
We have two aging Netapp filers and can't afford to buy new Netapp gear, 
so we've been looking with a lot of interest at building NFS fileservers 
running ZFS as a possible future approach.  Two issues have come up in 
the discussion


- Adding new disks to a RAID-Z pool (Netapps handle adding new disks 
very nicely).  Mirroring is an alternative, but when you're on a tight 
budget losing N/2 disk capacity is painful.


What about adding a whole new RAID-Z vdev and dynamically striping across 
the RAID-Zs?  Your capacity and performance will go up with each RAID-Z 
vdev you add.


Such as:
# zpool create swim raidz /var/tmp/dev1 /var/tmp/dev2 /var/tmp/dev3
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
swim   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev1  ONLINE   0 0 0
/var/tmp/dev2  ONLINE   0 0 0
/var/tmp/dev3  ONLINE   0 0 0

errors: No known data errors
# zpool add swim raidz /var/tmp/dev4 /var/tmp/dev5 /var/tmp/dev6
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
swim   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev1  ONLINE   0 0 0
/var/tmp/dev2  ONLINE   0 0 0
/var/tmp/dev3  ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev4  ONLINE   0 0 0
/var/tmp/dev5  ONLINE   0 0 0
/var/tmp/dev6  ONLINE   0 0 0

errors: No known data errors
#
# zpool add swim raidz /var/tmp/dev7 /var/tmp/dev8 /var/tmp/dev9
# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
swim   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev1  ONLINE   0 0 0
/var/tmp/dev2  ONLINE   0 0 0
/var/tmp/dev3  ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev4  ONLINE   0 0 0
/var/tmp/dev5  ONLINE   0 0 0
/var/tmp/dev6  ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
/var/tmp/dev7  ONLINE   0 0 0
/var/tmp/dev8  ONLINE   0 0 0
/var/tmp/dev9  ONLINE   0 0 0

errors: No known data errors
#




- The default scheme of one filesystem per user runs into problems with 
linux NFS clients; on one linux system, with 1300 logins, we already 
have to do symlinks with amd because linux systems can't mount more than 
about 255 filesystems at once.  We can of course just have one 
filesystem exported, and make /home/student a subdirectory of that, but 
then we run into problems with quotas -- and on an undergraduate 
fileserver, quotas aren't optional!


Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b

Look for section B3, "Why can't I mount more than 255 NFS file systems 
on my client? Why is it sometimes even less than 255?".
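
For reference, the wildcard automount maps that the FAQ's suggestion boils
down to look roughly like this (server name and paths are made up for
illustration):

   /etc/auto.master:   /home   /etc/auto.home
   /etc/auto.home:     *   -rw,hard,intr   fileserver:/export/home/&

so only the home directories actually in use are mounted at any one time.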


Let us know if that works or doesn't work.

Also, ask for reasoning/schedule on when they are going to fix this on 
the linux NFS alias (i believe its [EMAIL PROTECTED] ).  Trond 
should be able to help you.  If going to OpenSolaris clients is not an 
option, then i would be curious to know why.


eric



Neither of these problems are necessarily showstoppers, but both make 
the transition more difficult.  Any progress that could be made with 
them would help sites like us make the switch sooner.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams

Hi Luke,

We've been using MPXIO (STMS) with ZFS quite solidly for the past few
months. Failover is instantaneous when a write operation occurs
after a path is pulled. Our environment is similar to yours: dual-FC
ports on the host, and 4 FC ports on the storage (2 per controller).
Depending on your gear, using MPXIO is ridiculously simple. For us it
was as simple as enabling it on our T2000; the Opteron boxes just came
up.
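
(For anyone curious what the enabling step involves on stock Solaris 10, as a
general pointer rather than a detail from Jason's setup, it is normally just:

   # stmsboot -e

followed by the reboot the command asks for.)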

Best Regards,
Jason

On 12/6/06, Luke Schwab [EMAIL PROTECTED] wrote:

Hi,

I am running Solaris 10 with ZFS and I do not have STMS multipathing enabled. I have 
dual FC connections to storage using two ports on an Emulex HBA.

The Solaris ZFS admin guide says that a ZFS file system monitors disks 
by their path and their device ID. If a disk is switched between controllers, 
ZFS will be able to pick up the disk on a secondary controller.

I tested this theory by creating a zpool on the first controller and then 
pulling the cable on the back of the server. The server took about 3-5 minutes 
to fail over. But it did fail over!!

My question is, can ZFS be configured to detect path changes more quickly? I would 
like to configure ZFS to fail over within a reasonable amount of time, like 1-2 
seconds vs. 1-5 minutes.

Thanks,

ljs


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Re: Snapshots impact on performance

2006-12-06 Thread Chris Gerhard
One of our file servers internally to Sun that reproduces this running nv53 
here is the dtrace output:

  unix`mutex_vector_enter+0x120
  zfs`metaslab_group_alloc+0x1a0
  zfs`metaslab_alloc_dva+0x10c
  zfs`metaslab_alloc+0x3c
  zfs`zio_dva_allocate+0x54
  zfs`zio_write_compress+0x248
  zfs`arc_write+0xec
  zfs`dbuf_sync+0x698
  zfs`dnode_sync+0x2ec
  zfs`dmu_objset_sync_dnodes+0x60
  zfs`dmu_objset_sync+0x78
  zfs`dsl_dataset_sync+0xc
  zfs`dsl_pool_sync+0x64
  zfs`spa_sync+0x1b4
  zfs`txg_sync_thread+0x120
  unix`thread_start+0x4
8

  unix`mutex_vector_enter+0x120
  zfs`metaslab_group_alloc+0x1a0
  zfs`metaslab_alloc_dva+0x10c
  zfs`metaslab_alloc+0x3c
  zfs`zio_alloc_blk+0x34
  zfs`zil_lwb_write_start+0xdc
  zfs`zil_commit_writer+0x2ac
  zfs`zil_commit+0x68
  zfs`zfs_fsync+0xa4
  genunix`fop_fsync+0x14
  nfssrv`rfs3_setattr+0x4ec
  nfssrv`common_dispatch+0x3c8
  rpcmod`svc_getreq+0x20c
  rpcmod`svc_run+0x1ac
  nfs`nfssys+0x18c
  unix`syscall_trap32+0xcc
8

  genunix`avl_walk+0x3c
  zfs`metaslab_ff_alloc+0x90
  zfs`space_map_alloc+0x10
  zfs`metaslab_group_alloc+0x200
  zfs`metaslab_alloc_dva+0x10c
  zfs`metaslab_alloc+0x3c
  zfs`zio_alloc_blk+0x34
  zfs`zil_lwb_write_start+0xdc
  zfs`zil_lwb_commit+0x94
  zfs`zil_commit_writer+0x1e4
  zfs`zil_commit+0x68
  zfs`zfs_putpage+0x1d8
  genunix`fop_putpage+0x1c
  nfssrv`rfs3_commit+0x130
  nfssrv`common_dispatch+0x4ec
  rpcmod`svc_getreq+0x20c
  rpcmod`svc_run+0x1ac
  nfs`nfssys+0x18c
  unix`syscall_trap32+0xcc
8

  genunix`avl_walk+0x40
  zfs`metaslab_ff_alloc+0x90
  zfs`space_map_alloc+0x10
  zfs`metaslab_group_alloc+0x200
  zfs`metaslab_alloc_dva+0x10c
  zfs`metaslab_alloc+0x3c
  zfs`zio_alloc_blk+0x34
  zfs`zil_lwb_write_start+0xdc
  zfs`zil_commit_writer+0x2ac
  zfs`zil_commit+0x68
  zfs`zfs_putpage+0x1d8
  genunix`fop_putpage+0x1c
  nfssrv`rfs3_commit+0x130
  nfssrv`common_dispatch+0x4ec
  rpcmod`svc_getreq+0x20c
  rpcmod`svc_run+0x1ac
  nfs`nfssys+0x18c
  unix`syscall_trap32+0xcc
8

  genunix`avl_walk+0x4c
  zfs`metaslab_ff_alloc+0x90
  zfs`space_map_alloc+0x10
  zfs`metaslab_group_alloc+0x200
  zfs`metaslab_alloc_dva+0x10c
  zfs`metaslab_alloc+0x3c
  zfs`zio_dva_allocate+0x54
  zfs`zio_write_compress+0x248
  zfs`arc_write+0xec
  zfs`dbuf_sync+0x698
  zfs`dnode_sync+0x2ec
  zfs`dmu_objset_sync_dnodes+0x60
  zfs`dmu_objset_sync+0x50
  zfs`dsl_dataset_sync+0xc
  zfs`dsl_pool_sync+0x64
  zfs`spa_sync+0x1b4
  zfs`txg_sync_thread+0x120
  unix`thread_start+0x4
8

  zfs`fletcher_2_native+0x2c
  zfs`arc_cksum_verify+0x64
  zfs`arc_buf_thaw+0x38
  zfs`dbuf_dirty+0x10c
  zfs`dmu_write_uio+0xc4
  zfs`zfs_write+0x3ac
  genunix`fop_write+0x20
  nfssrv`rfs3_write+0x3d8
  nfssrv`common_dispatch+0x3c8
  rpcmod`svc_getreq+0x20c
  rpcmod`svc_run+0x1ac
  nfs`nfssys+0x18c
  unix`syscall_trap32+0xcc
8

  zfs`fletcher_2_native+0x2c
  zfs`arc_cksum_verify+0x64
  zfs`arc_buf_destroy+0x1c
  zfs`arc_evict+0x1f0
  zfs`arc_adjust+0xf8
  zfs`arc_kmem_reclaim+0x100
  zfs`arc_kmem_reap_now+0x20
  zfs`arc_reclaim_thread+0xdc
  unix`thread_start+0x4
8

  zfs`fletcher_2_native+0x2c
  zfs`arc_cksum_compute+0x6c
  zfs`dbuf_rele+0x40
  zfs`dmu_buf_rele_array+0x34
  zfs`dmu_write_uio+0x13c
  zfs`zfs_write+0x3ac
  genunix`fop_write+0x20
  nfssrv`rfs3_write+0x3d8
  nfssrv`common_dispatch+0x3c8
  rpcmod`svc_getreq+0x20c
  rpcmod`svc_run+0x1ac
  nfs`nfssys+0x18c
  unix`syscall_trap32+0xcc
8

  unix`disp_getwork+0x7c
 

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Douglas Denny

On 12/6/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:

We've been using MPXIO (STMS) with ZFS quite solidly for the past few
months. Failover is  instantaneous when a write operations occurs
after a path is pulled. Our environment is similar to yours, dual-FC
ports on the host, and 4 FC ports on the storage (2 per controller).
Depending on your gear using MPXIO is ridiculously simple. For us it
was as simple as enabling it on our T2000, the Opteron boxes just came
up.


Jason,

Could you tell me more about your configuration? Do you have multiple
LUNs defined? Do you mirror/raidz these LUNs?

-Doug
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams

Hi Doug,

The configuration is a T2000 connected to a StorageTek FLX210 array
via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z
the LUNs across 3 array volume groups. For performance reasons we're
in the process of changing to striped zpools across RAID-1 volume
groups. The performance issue is more a reflection on the array than
ZFS. Though RAID-Z tends to be more chatty IOPS-wise than typical
RAID-5.
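
(As a sketch of what the striped layout looks like on the ZFS side, with
placeholder device names rather than the actual LUNs in Jason's setup:

   # zpool create dbpool c6t0d0 c6t1d0

i.e. each hardware RAID-1 volume group is handed to ZFS as a plain top-level
vdev, and ZFS dynamically stripes across them while the array handles the
mirroring.)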

Overall, we've been VERY happy with ZFS. The scrub feature has saved a
lot of time tracking down a corruption issue that cropped up in one of
our databases. Helped prove it wasn't ZFS or the storage.

Does this help?

Best Regards,
Jason

On 12/6/06, Douglas Denny [EMAIL PROTECTED] wrote:

On 12/6/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:
 We've been using MPXIO (STMS) with ZFS quite solidly for the past few
 months. Failover is  instantaneous when a write operations occurs
 after a path is pulled. Our environment is similar to yours, dual-FC
 ports on the host, and 4 FC ports on the storage (2 per controller).
 Depending on your gear using MPXIO is ridiculously simple. For us it
 was as simple as enabling it on our T2000, the Opteron boxes just came
 up.

Jason,

Could you tell me more about you configuration? Do you have multiple
LUNs defined? Do you mirror/raidz these LUNs?

-Doug


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Edward Pilatowicz
On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:
 We have two aging Netapp filers and can't afford to buy new Netapp gear,
 so we've been looking with a lot of interest at building NFS fileservers
 running ZFS as a possible future approach.  Two issues have come up in the
 discussion

 - Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
 nicely).  Mirroring is an alternative, but when you're on a tight budget
 losing N/2 disk capacity is painful.

 - The default scheme of one filesystem per user runs into problems with
 linux NFS clients; on one linux system, with 1300 logins, we already have
 to do symlinks with amd because linux systems can't mount more than about
 255 filesystems at once.  We can of course just have one filesystem
 exported, and make /home/student a subdirectory of that, but then we run
 into problems with quotas -- and on an undergraduate fileserver, quotas
 aren't optional!


well, if the mount limitation is imposed by the linux kernel you might
consider trying to run linux in a zone on solaris (via BrandZ).  Since
BrandZ allows you to execute linux programs on a solaris kernel you
shouldn't have a problem with limits imposed by the linux kernel.
brandz currently ships in solaris express (or solaris express
community release) build snv_49 or later.
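
(very roughly, and going from the BrandZ documentation rather than anything
in this thread, setting up an lx-branded zone looks something like:

   # zonecfg -z lxzone
   zonecfg:lxzone> create -t SUNWlx
   zonecfg:lxzone> set zonepath=/zones/lxzone
   zonecfg:lxzone> exit
   # zoneadm -z lxzone install -d /path/to/linux_fs_image.tar
   # zoneadm -z lxzone boot

with the template name, install media and exact options as described on the
brandz page below.)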

you can find more info on brandz here:
http://opensolaris.org/os/community/brandz/

ed
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Douglas Denny

On 12/6/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:

The configuration is a T2000 connected to a StorageTek FLX210 array
via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z
the LUNs across 3 array volume groups. For performance reasons we're
in the process of changing to striped zpools across RAID-1 volume
groups. The performance issue is more a reflection on the array than
ZFS. Though RAID-Z tends to be more chatty IOPS-wise than typical
RAID-5.


Thanks Jason,

Yes, this does help. I think you are doing all raid through ZFS. The
disk array is being used as an FC JBOD.

Thanks!

-Doug
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Darren J Moffat

Edward Pilatowicz wrote:

On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:

We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach.  Two issues have come up in the
discussion

- Adding new disks to a RAID-Z pool (Netapps handle adding new disks very
nicely).  Mirroring is an alternative, but when you're on a tight budget
losing N/2 disk capacity is painful.

- The default scheme of one filesystem per user runs into problems with
linux NFS clients; on one linux system, with 1300 logins, we already have
to do symlinks with amd because linux systems can't mount more than about
255 filesystems at once.  We can of course just have one filesystem
exported, and make /home/student a subdirectory of that, but then we run
into problems with quotas -- and on an undergraduate fileserver, quotas
aren't optional!



well, if the mount limitation is imposed by the linux kernel you might
consider trying running linux in zone on solaris (via BrandZ).  Since
BrandZ allows you to execute linux programs on a solaris kernel you
shoudn't have a problem with limits imposed by the linux kernel.
brandz currently ships in an solaris express (or solaris express
community release) build snv_49 or later.


Another alternative is to pick an OpenSolaris based distribution that 
looks and feels more like Linux.  Nexenta might do that for you.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Eric Kustarz

Jim Davis wrote:

eric kustarz wrote:



What about adding a whole new RAID-Z vdev and dynamicly stripe across 
the RAID-Zs?  Your capacity and performance will go up with each 
RAID-Z vdev you add.



Thanks, that's an interesting suggestion.



Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b



Yes.  On our undergrad timesharing system (~1300 logins) we actually hit 
that limit with a standard automounting scheme.  So now we make static 
mounts of the Netapp /home space and then use amd to make symlinks to 
the home directories.  Ugly, but it works.


Ug indeed.





Also, ask for reasoning/schedule on when they are going to fix this on 
the linux NFS alias (i believe its [EMAIL PROTECTED] ).  Trond 
should be able to help you.



It's item 9 (last) on their medium priority list, according to 
http://www.linux-nfs.org/priorities.html.  That doesn't sound like a fix 
is coming soon.


Hmm, looks like that list is a little out of date, i'll ask trond to 
update it.





If going to OpenSolaris clients is not an option, then i would be 
curious to know why.



Ah, well... it was a Solaris system for many years.  And we were mostly 
a Solaris shop for many years.  Then Sun hardware got too pricey, and 
fast Intel systems got cheap but at the time Solaris support for them 
lagged and Linux matured and...  and now Linux is entrenched. It's a 
story other departments here could tell.  And at other universities too 
I'll bet.  So the reality is we have to make whatever we run on our 
servers play well with Linux clients.


Ok, can i ask a favor then?  Could you try one OpenSolaris client 
(should work fine on the existing hardware you have) and let us know if 
that works better/worse for you?  And as Ed just mentioned, i would be 
really interested if BrandZ fits your needs (then you could have one+ 
zone with a linux userland and opensolaris kernel).


eric


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams

Hi Luke,

That's really strange. We did the exact same thing moving between two
hosts (export/import) and it took maybe 10 secs. How big is your
zpool?

Best Regards,
Jason

On 12/6/06, Luke Schwab [EMAIL PROTECTED] wrote:

Doug,

I should have posted the reason behind this posting.

I have 2 v280's in a clustered environment and I am
going to attempt to failover (migrate) the zpool
between the machines. I tried the following two
configurations:

1. I used ZFS with STMS(mpxio) enabled. I then
exported a zpool and imported it onto another machine.
The second machine took 6 minutes to import the zpool.
(Maybe I am configuring something wrong??) Do you use
exports/imports??

2. In a second configuration I disabled STMS(mpxio)
and exported a zpool and imported it onto the other
machine again. The second machine then only took 50
seconds to import the zpool.

When dealing with clusters, we have a 5 minute
failover requirement on the entire cluster to move.
Therefore, it would be ideal to not have STMS(mpxio)
enabled on the machines.

Luke Schwab



--- Jason J. W. Williams [EMAIL PROTECTED]
wrote:

 Hi Doug,

 The configuration is a T2000 connected to a
 StorageTek FLX210 array
 via Qlogic QLA2342 HBAs and Brocade 3850 switches.
 We currently RAID-Z
 the LUNs across 3 array volume groups. For
 performance reasons we're
 in the process of changing to striped zpools across
 RAID-1 volume
 groups. The performance issue is more a reflection
 on the array than
 ZFS. Though RAID-Z tends to be more chatty IOPS-wise
 than typical
 RAID-5.

 Overall, we've been VERY happy with ZFS. The scrub
 feature has saved a
 lot of time tracking down a corruption issue that
 cropped up in one of
 our databases. Helped prove it wasn't ZFS or the
 storage.

 Does this help?

 Best Regards,
 Jason

 On 12/6/06, Douglas Denny [EMAIL PROTECTED]
 wrote:
  On 12/6/06, Jason J. W. Williams
 [EMAIL PROTECTED] wrote:
   We've been using MPXIO (STMS) with ZFS quite
 solidly for the past few
   months. Failover is  instantaneous when a write
 operations occurs
   after a path is pulled. Our environment is
 similar to yours, dual-FC
   ports on the host, and 4 FC ports on the storage
 (2 per controller).
   Depending on your gear using MPXIO is
 ridiculously simple. For us it
   was as simple as enabling it on our T2000, the
 Opteron boxes just came
   up.
 
  Jason,
 
  Could you tell me more about you configuration? Do
 you have multiple
  LUNs defined? Do you mirror/raidz these LUNs?
 
  -Doug
 



__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zpool import takes to long with large numbers of file systems

2006-12-06 Thread Luke Schwab
I, too, experienced a long delay while importing a zpool on a second machine. I 
do not have any filesystems in the pool. Just the Solaris 10 Operating system, 
Emulex 10002DC HBA, and a 4884 LSI array (dual attached). 

I don't have any file systems created but when STMS(mpxio) is enabled I see 

# time zpool import testpool
real 6m41.01s
user 0m.30s
sys 0m0.14s

When I disable STMS(mpxio), the times are much better but still not that great? 

# time zpool import testpool
real 1m15.01s
user 0m.15s
sys 0m0.35s

Are these normal symptoms??

Can anyone explain why I too see delays even though I don't have any file 
systems in the zpool?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS failover without multipathing

2006-12-06 Thread Luke Schwab
I simply created a zpool with an array disk like

hosta# zpool create testpool c6tnumd0   // runs within a second
hosta# zpool export testpool   // runs within a second
hostb# zpool import testpool   // takes 5-7 minutes

If STMS(mpxio) is disabled, it takes 45-60 seconds. I tested this with 
LUNs of size 10GB and 100MB and got similar results on both LUNs.

However, I am not using LUN masking, and when I run a format command I can see all 
of the luns on the array (about 40 of them); all together they are about 1TB 
in size.

Maybe the problem is that there are many paths/luns to check when importing the 
zpool. But why do I get faster times when I disable STMS(mpxio)??

It is strange. I may try my testing on another array that has only a few luns 
and see what happens, or enable LUN masking. This might help also??? Any 
thoughts?
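
(One experiment that might narrow this down, offered as a thought rather than
something suggested in the thread: zpool import accepts -d to restrict its
device search to a given directory, which limits how many device nodes it has
to probe, e.g.

   # mkdir /pooldevs
   # ln -s /dev/dsk/c6tnumd0* /pooldevs/
   # zpool import -d /pooldevs testpool

If that is fast, the time is going into scanning all ~40 LUNs rather than into
the import itself.)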
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Managed to corrupt my pool

2006-12-06 Thread Eric Schrock
On Wed, Dec 06, 2006 at 12:35:58PM -0800, Jim Hranicky wrote:
  If those are the original path ids, and you didn't
  move the disks on the bus?  Why is the is_spare flag
 
 Well, I'm not sure, but these drives were set as spares in another pool 
 I deleted -- should I have done something to the drives (fdisk?) before
 rearranging it?
 
 The rest of the options are spitting out a bunch of stuff I'll be
 glad to post links to, but if the problem is that the drives are
 erroneously marked as spares I'll re-init them and start over.

There are known issues with the way spares are tracked and recorded on
disk that can result in a variety of strange behavior in exceptional
circumstances.  We are working on resolving these issues.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS failover without multipathing

2006-12-06 Thread James C. McPherson

Luke Schwab wrote:

I simply created a zpool with an array disk like

hosta# zpool create testpool c6tnumd0   // runs within a second
hosta# zpool export testpool            // runs within a second
hostb# zpool import testpool            // takes 5-7 minutes

If STMS(mpxio) is disabled, it takes from 45-60 seconds. I tested this
with LUNs of size 10GB and 100MB. I got simular results on both LUNs.

However, I am not LUN masking and when I run a format command I can see
all of the luns on the array (about 40 of them). and all together they
are about 1TB in size.

Maybe the problem is that there are many paths/luns to check when
importing the zpool. but why do I get fater times when I disable
STMS(mpxio)??

It is strange, I may try my testing on another array that has only a few
luns and see what happens. Or enable LUN masking. This might help also???
Any thoughts.


First question to ask -- are you using the emlxs driver for
the Emulex card?

Second question -- are you up to date on the SAN Foundation
Kit (SFK) patches? I think the current version is 4.4.11. If
you're not running that version, I strongly recommend that
you upgrade your patch levels to it. Ditto for kernel, sd
and scsi_vhci.


James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2006-12-06 Thread Eric Boutilier

For background on what this is, see:

http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200

=
zfs-discuss 11/16 - 11/30
=

Size of all threads during period:

Thread size Topic
--- -
 29   poor NFS/ZFS performance
 19   system wont boot after zfs
 17   Production ZFS Server Death (06/06)
 13   Setting ACLs
 12   ZFS problems
 11   zfs corrupted my data!
 10   'legacy' vs 'none'
  9   zfs hot spare not automatically getting used
  9   raidz DEGRADED state
  8   Convert Zpool RAID Types
  7   bare metal ZFS ? How To ?
  7   ZFS goes catatonic when drives go dead?
  7   Tired of VxVM - too many issues and too  - Maybe ZFS as 
alternative
  6   ZFS as root FS
  6   SVM - UFS Upgrade
  6   Que: ZFS - Automatic Endian Adaptiveness
  5   shareiscsi supports breaks booting when a ZFS /usr filesystem is 
used
  5   ZFS/iSCSI target integration
  5   Size of raidz
  4   Root, umask and zfs
  4   How to backup/clone all filesystems *and* snapshots in a zpool?
  3   ZFS and EFI labels
  2   sharing a zfs file system
  2   listing zpools by id?
  2   Why Does zfs list and zpool list give different answers
  2   What happens when adding a mirror, or put a mirror offline/online
  2   Temporary mount Properties, small bug?
  2   Sun STK3320  ZFS
  2   Is there zfs configuration ? where is it?
  2   How do I obtain zfs with spare implementation?
  1   zpool import core
  1   zil_disable
  1   shareiscsi supports breaks booting when a ZFS /usr
  1   hi it's Huerta
  1   hi it's Farmer
  1   Zfs scrub and fjge interface on Prime Power
  1   ZFS caught resilvering when only one side of mirror persent
  1   Thoughts on patching + zfs root
  1   Rivas message
  1   Managed to corrupt my pool
  1   Another win for ZFS


Posting activity by person for period:

# of posts  By
--   --
 12   darren.moffat at sun.com (darren j moffat)
 11   richard.elling at sun.com (richard elling)
 10   rasputnik at gmail.com (dick davies)
 10   ceri at submonkey.net (ceri davies)
  8   roch.bourbonnais at sun.com (roch - pae)
  7   jfh at cise.ufl.edu (jim hranicky)
  7   casper.dik at sun.com (casper dik)
  7   al at logical-approach.com (al hopper)
  6   toby at smartgames.ca (toby thain)
  6   jasonjwwilliams at gmail.com (jason j. w. williams)
  5   jonathan.edwards at sun.com (jonathan edwards)
  5   dd-b at dd-b.net (david dyer-bennet)
  5   dclarke at blastwave.org (dennis clarke)
  4   tim.foster at sun.com (tim foster)
  4   peter at ifm.liu.se (peter eriksson)
  4   krzys at perfekt.net (krzys)
  4   calum.mackay at sun.com (calum mackay)
  4   betsy.schwartz at gmail.com (elizabeth schwartz)
  3   tmcmahon2 at yahoo.com (torrey mcmahon)
  3   sommerfeld at sun.com (bill sommerfeld)
  3   sanjeev.bagewadi at sun.com (sanjeev bagewadi)
  3   pjd at freebsd.org (pawel jakub dawidek)
  3   justin.conover at gmail.com (justin conover)
  3   jmlittle at gmail.com (joe little)
  3   james.c.mcpherson at gmail.com (james mcpherson)
  3   eric.schrock at sun.com (eric schrock)
  3   cindy.swearingen at sun.com (cindy swearingen)
  3   bill.moore at sun.com (bill moore)
  3   anantha.srirama at cdc.hhs.gov (anantha n. srirama)
  2   zfs at michael.mailshell.com (zfs)
  2   sean.w.oneill at sun.com (sean o'neill)
  2   peter.buckingham at sun.com (peter buckingham)
  2   nicolas.williams at sun.com (nicolas williams)
  2   nicholas.senedzuk at gmail.com (nicholas senedzuk)
  2   mritun+opensolaris at gmail.com (akhilesh mritunjai)
  2   mbarto at logiqwest.com (michael barto)
  2   matthew.sweeney at sun.com (matthew b sweeney - sun microsystems 
inc.)
  2   matthew.ahrens at sun.com (matthew ahrens)
  2   lori.alt at sun.com (lori alt)
  2   ktd at club-internet.fr (pierre chatelier)
  2   jk at tools.de (=?utf-8?q?j=c3=bcrgen_keil?=)
  2   jeanmarc.lacoste at ambre-systems.com (marlanne delasource)
  2   jay.sisodiya at sun.com (jay sisodiya)
  2   jamesd.wi at gmail.com (james dickens)
  2   fcusack at fcusack.com (frank cusack)
  2   elefante72 at hotmail.com (david 

[zfs-discuss] Re: Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Ian Brown
Hello, 
Thanks.
Here is the needed info: 
zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  c1d0s6ONLINE   0 0 0

errors: No known data errors

df -h returns:
Filesystem            Size  Used Avail Use% Mounted on
/dev/dsk/c1d0s0        70G   59G   11G  85% /
swap  2.3G  788K  2.3G   1% /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1
   70G   59G   11G  85% /lib/libc.so.1
swap  2.3G   20K  2.3G   1% /tmp
swap  2.3G   32K  2.3G   1% /var/run
/dev/dsk/c1d0s7   251M  1.1M  225M   1% /export/home


prtvtoc /dev/dsk/c1d0s0 returns:
* /dev/dsk/c1d0s0 partition map
*
* Dimensions:
* 512 bytes/sector
*  63 sectors/track
* 255 tracks/cylinder
*   16065 sectors/cylinder
*9728 cylinders
*9726 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00    8787555 147460635 156248189   /
       1      3    01      48195   4096575   4144769
       2      5    00          0 156248190 156248189
       6      0    00    4690980   4096575   8787554
       7      8    00    4144770    546210   4690979   /export/home
       8      1    01          0     16065     16064
       9      9    01      16065     32130     48194

I cannot destroy this pool; 
zpool destroy tank returns:
internal error: No such device
Abort (core dumped)
 

Regards,
Ian
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

