[zfs-discuss] 'sync' properties and write operations.

2010-08-28 Thread eXeC001er
Hi.

Can you explain to me:

1. A dataset has 'sync=always'.

If I write to a file on this dataset without requesting synchronous I/O,
does the system perform the write synchronously or asynchronously?

2. A dataset has 'sync=disabled'.

If I write to a file on this dataset with synchronous I/O requested, does
the system perform the write synchronously or asynchronously?



Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-28 Thread Tomas Ögren
On 27 August, 2010 - Darin Perusich sent me these 2,1K bytes:

 Hello All,
 
 I'm sure this has been discussed previously but I haven't been able to
 find an answer to this. I've added another raidz1 vdev to an existing
 storage pool and the increased available storage isn't reflected in the
 'zfs list' output. Why is this?
 
 The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
 Generic_139555-08. The system does not have the latest patches, which
 might be the cure.
 
 Thanks!
 
 Here's what I'm seeing.
 zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1

Just FYI, a two-disk raidz1 is an inefficient variant of a mirror: it needs
more CPU and gives lower performance.
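
For comparison, the equivalent mirror layout would be something like the
sketch below, reusing the device names from your example (adapt them to
your own disks):

# A two-way mirror gives the same usable capacity as a two-disk raidz1,
# with less CPU overhead and better read performance.
zpool create datapool mirror c1t50060E800042AA70d0 c1t50060E800042AA70d1

# Growing the pool later works the same way: add another mirrored pair.
zpool add datapool mirror c1t50060E800042AA70d2 c1t50060E800042AA70d3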

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-28 Thread Mattias Pantzare
On Sat, Aug 28, 2010 at 02:54, Darin Perusich
darin.perus...@cognigencorp.com wrote:
 Hello All,

 I'm sure this has been discussed previously but I haven't been able to find an
 answer to this. I've added another raidz1 vdev to an existing storage pool and
 the increased available storage isn't reflected in the 'zfs list' output. Why
 is this?

 The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
 Generic_139555-08. The system does not have the latest patches, which might be
 the cure.

 Thanks!

 Here's what I'm seeing.
 zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1

 zpool status
  pool: datapool
  state: ONLINE
  scrub: none requested
 config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0

 zfs list
 NAME       USED  AVAIL  REFER  MOUNTPOINT
 datapool   108K   196G    18K  /datapool

 zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3

 zpool status
  pool: datapool
  state: ONLINE
  scrub: none requested
 config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d2  ONLINE       0     0     0
            c1t50060E800042AA70d3  ONLINE       0     0     0

 zfs list
 NAME       USED  AVAIL  REFER  MOUNTPOINT
 datapool   112K   392G    18K  /datapool

I think you need to explain your problem in more detail: 392G is more than
196G, so the added capacity does show up in 'zfs list'.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-28 Thread eXeC001er
 On Sat, Aug 28, 2010 at 02:54, Darin Perusich
 darin.perus...@cognigencorp.com wrote:
  Hello All,
 
  I'm sure this has been discussed previously but I haven't been able to
  find an answer to this. I've added another raidz1 vdev to an existing
  storage pool and the increased available storage isn't reflected in the
  'zfs list' output. Why is this?
 
  The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
  Generic_139555-08. The system does not have the latest patches, which
  might be the cure.
 
  Thanks!
 
  Here's what I'm seeing.
  zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1
 
  zpool status
   pool: datapool
   state: ONLINE
   scrub: none requested
  config:
 
         NAME                       STATE     READ WRITE CKSUM
         datapool                   ONLINE       0     0     0
           raidz1                   ONLINE       0     0     0
             c1t50060E800042AA70d0  ONLINE       0     0     0
             c1t50060E800042AA70d1  ONLINE       0     0     0
 
  zfs list
  NAME       USED  AVAIL  REFER  MOUNTPOINT
  datapool   108K   196G    18K  /datapool
 
  zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3
 
  zpool status
   pool: datapool
   state: ONLINE
   scrub: none requested
  config:
 
         NAME                       STATE     READ WRITE CKSUM
         datapool                   ONLINE       0     0     0
           raidz1                   ONLINE       0     0     0
             c1t50060E800042AA70d0  ONLINE       0     0     0
             c1t50060E800042AA70d1  ONLINE       0     0     0
           raidz1                   ONLINE       0     0     0
             c1t50060E800042AA70d2  ONLINE       0     0     0
             c1t50060E800042AA70d3  ONLINE       0     0     0
 
  zfs list
  NAME       USED  AVAIL  REFER  MOUNTPOINT
  datapool   112K   392G    18K  /datapool


Darin, your pool is built from two raidz1 top-level vdevs, so the pool's
capacity is roughly 2 x the capacity of one raidz1 vdev, which is exactly
what 'zfs list' reports (196G before the add, 392G after).
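
Both of the following should show the growth after the 'zpool add'; note
that 'zpool list' reports raw size including parity, while 'zfs list'
shows the space usable by datasets:

# Raw pool capacity (parity space included):
zpool list datapool
# Usable space as seen by datasets:
zfs list datapool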




 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ufs root to zfs root liveupgrade?

2010-08-28 Thread LaoTsao 老曹

 hi all
I'm trying to learn how a UFS root to ZFS root Live Upgrade works.

I downloaded the VirtualBox image of s10u8; it comes up with a UFS root.
I then did the following:
add a new 16GB disk
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus, which shows zfsroot will be active on the next boot
init 6
But the system comes back up with the UFS root: lustatus shows ufsroot
active, and rpool is mounted but not used for booting.

Is this a known bug? I don't have access to SunSolve right now.
regards


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ufs root to zfs root liveupgrade?

2010-08-28 Thread Casper . Dik


  hi all
I'm trying to learn how a UFS root to ZFS root Live Upgrade works.

I downloaded the VirtualBox image of s10u8; it comes up with a UFS root.
I then did the following:
add a new 16GB disk
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus, which shows zfsroot will be active on the next boot
init 6
But the system comes back up with the UFS root: lustatus shows ufsroot
active, and rpool is mounted but not used for booting.


You'll need to boot from a different disk; I don't think that the
OS can change the boot disk (it can on SPARC but it can't on x86)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-28 Thread Eff Norwood
I can't think of an easy way to measure pages that have not yet been
consumed, since page management is an SSD controller function hidden from
the OS, and over-provisioning adds yet another variable on top of that. If
anyone would like to really dig into what goes on inside an SSD that makes
it a bad choice for a ZIL, you can start here:

http://en.wikipedia.org/wiki/TRIM_%28SSD_command%29

and

http://en.wikipedia.org/wiki/Write_amplification

Which will be more than you might have ever wanted to know. :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ufs root to zfs root liveupgrade?

2010-08-28 Thread LaoTsao 老曹

 thx
I tried detaching the old UFS disk and booting from the new zfsroot, but
it failed.
I reattached the ufsroot disk and found that rpool/boot/grub/menu.lst
contains

findroot (rootfs0,0,a) and not findroot (pool_rpool,0,a)

I'm not sure what the correct findroot entry is here; even after changing
it and trying to boot again, it fails with "cannot find file".
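
From what I can tell, a Live Upgrade ZFS-root entry in menu.lst is
supposed to look something like the sketch below (the title and BE name
are my guesses for this setup, and findroot only works if installgrub was
run on the new disk and the matching signature file exists under
/boot/grub/bootsign on the pool):

title zfsroot
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/zfsroot
# $ZFS-BOOTFS is filled in by GRUB with the selected boot dataset.
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive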

regards


On 8/28/2010 7:47 AM, casper@sun.com wrote:



  hi all
I'm trying to learn how a UFS root to ZFS root Live Upgrade works.

I downloaded the VirtualBox image of s10u8; it comes up with a UFS root.
I then did the following:
add a new 16GB disk
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus, which shows zfsroot will be active on the next boot
init 6
But the system comes back up with the UFS root: lustatus shows ufsroot
active, and rpool is mounted but not used for booting.


You'll need to boot from a different disk; I don't think that the
OS can change the boot disk (it can on SPARC but it can't on x86)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-28 Thread Ray Van Dolson
On Sat, Aug 28, 2010 at 05:50:38AM -0700, Eff Norwood wrote:
 I can't think of an easy way to measure pages that have not yet been
 consumed, since page management is an SSD controller function hidden from
 the OS, and over-provisioning adds yet another variable on top of that. If
 anyone would like to really dig into what goes on inside an SSD that makes
 it a bad choice for a ZIL, you can start here:
 
 http://en.wikipedia.org/wiki/TRIM_%28SSD_command%29
 
 and
 
 http://en.wikipedia.org/wiki/Write_amplification
 
 Which will be more than you might have ever wanted to know. :)

So has anyone on this list actually run into this issue?  Tons of
people use SSD-backed slog devices...

The theory sounds sound, but if it's not really happening much in
practice then I'm not too worried.  Especially since I can replace a
drive in my slog mirror for $400 or so if problems do arise (the
alternative being much more expensive DRAM-backed devices).

Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-28 Thread Mike Gerdts
On Sat, Aug 28, 2010 at 8:19 AM, Ray Van Dolson rvandol...@esri.com wrote:
 On Sat, Aug 28, 2010 at 05:50:38AM -0700, Eff Norwood wrote:
 I can't think of an easy way to measure pages that have not yet been
 consumed, since page management is an SSD controller function hidden from
 the OS, and over-provisioning adds yet another variable on top of that. If
 anyone would like to really dig into what goes on inside an SSD that makes
 it a bad choice for a ZIL, you can start here:

 http://en.wikipedia.org/wiki/TRIM_%28SSD_command%29

 and

 http://en.wikipedia.org/wiki/Write_amplification

 Which will be more than you might have ever wanted to know. :)

 So has anyone on this list actually run into this issue?  Tons of
 people use SSD-backed slog devices...

 The theory sounds sound, but if it's not really happening much in
 practice then I'm not too worried.  Especially since I can replace a
 drive in my slog mirror for $400 or so if problems do arise (the
 alternative being much more expensive DRAM-backed devices).

Presumably this problem is being worked...

http://hg.genunix.org/onnv-gate.hg/rev/d560524b6bb6

Notice that it implements:

866610  Add SATA TRIM support

With this in place, I would imagine the next step is for ZFS to issue
TRIM commands once ZIL entries have been committed to the data disks.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs-discuss Digest, Vol 58, Issue 117

2010-08-28 Thread Allen Eastwood

 hi all
 I'm trying to learn how a UFS root to ZFS root Live Upgrade works.
 
 I downloaded the VirtualBox image of s10u8; it comes up with a UFS root.
 I then did the following:
 add a new 16GB disk
 create zpool rpool
 run lucreate -n zfsroot -p rpool
 run luactivate zfsroot
 run lustatus, which shows zfsroot will be active on the next boot
 init 6
 But the system comes back up with the UFS root: lustatus shows ufsroot
 active, and rpool is mounted but not used for booting.
 
 
 You'll need to boot from a different disk; I don't think that the
 OS can change the boot disk (it can on SPARC but it can't on x86)

You can do it, but it's a pain.  Basically you have to boot in single-user, 
mount your ZFS on an alternate root, clear out and rebuild the /dev disk links, 
and fix up grub.

When you boot, GRUB should show you four boot choices. If you only see two,
I'll bet you forgot to run installgrub against the new disk.
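
If so, something like this should sort it out (the device name here is
only an example; use the slice that actually holds your new root pool):

# Install the GRUB stage1/stage2 loaders onto the disk carrying rpool,
# so the BIOS can boot from it.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0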

-A
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] mirrored pool unimportable (FAULTED)

2010-08-28 Thread Norbert Harder
Hi,

more than a year ago I created a mirrored ZFS pool consisting of 2x1TB
HDDs using the OS X 10.5 ZFS kernel extension (zpool version 8, ZFS
version 2). Everything went fine and I used the pool to store personal
stuff on it, like lots of photos and music. (So getting the data back is
not time critical, but still important to me.)

Later, since the development of the ZFS extension was discontinued, I
tried to move the pool to FreeBSD 8, where I detached one of the drives
to use it in another way. (Yes, a dumb idea, but I needed the disk and
thought I would be fine since it was a mirrored pool.)
After that the pool was no longer importable, neither on OS X nor on
FreeBSD. Since then I have tried to access my data with OpenSolaris (build
134), ZFS-Fuse on Ubuntu and FreeBSD 8.1, so far without any success.

I am aware of the -F, -D and -d options of 'zpool import' and also tried
symlinking in order to match the paths of the drives to those recorded in
the labels. Beyond that I didn't do much, because I was afraid of damaging
the data, which to my understanding should still be intact. I also have to
admit that low-level filesystem stuff is a bit above my head.

I really hope that someone here can give me a clue, or at least a hint
as to whether it's worth continuing to try...

Now here is what I tried on OpenSolaris:

(Drives connected via USB, ZFS partitions: /dev/dsk/c4t0d0s1 and
/dev/dsk/c4t0d1s1; symlinks: /dev/da0p2 -> /dev/dsk/c4t0d1s1 and
/dev/da2p2 -> /dev/dsk/c4t0d0s1)
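
For clarity, the symlinking I mean is along these lines (a sketch only;
the device names are the ones above). I understand one can also point
'zpool import -d' at a separate directory of links instead of /dev/dsk:

mkdir /tmp/devlinks
ln -s /dev/dsk/c4t0d1s1 /tmp/devlinks/da0p2
ln -s /dev/dsk/c4t0d0s1 /tmp/devlinks/da2p2
# -d makes zpool import scan only this directory for devices.
zpool import -d /tmp/devlinks -f Media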


j...@opensolaris:~# zpool import Media
cannot import 'Media': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.

j...@opensolaris:~# zpool import -fF -d /dev/dsk
  pool: Media
id: 6503452912318286686
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

Media   FAULTED  corrupted data
  c4t0d0s1  FAULTED  corrupted data

Labels:

j...@opensolaris:~# zdb -l /dev/dsk/c4t0d0s1

LABEL 0

version: 6
name: 'Media'
state: 1
txg: 262869
pool_guid: 6503452912318286686
hostid: 4220169081
hostname: 'mini.home'
top_guid: 18181370402585537036
guid: 18181370402585537036
vdev_tree:
type: 'disk'
id: 0
guid: 18181370402585537036
path: '/dev/da0p2'
whole_disk: 0
metaslab_array: 14
metaslab_shift: 30
ashift: 9
asize: 999856013312
DTL: 869
(LABEL 1 - 3 identical)

j...@opensolaris:~# zdb -l /dev/dsk/c4t0d1s1

LABEL 0

version: 6
name: 'Media'
state: 0
txg: 0
pool_guid: 6503452912318286686
hostid: 4220169081
hostname: 'mini.home'
top_guid: 18181370402585537036
guid: 17772452695039664796
vdev_tree:
type: 'mirror'
id: 0
guid: 18181370402585537036
whole_disk: 0
metaslab_array: 14
metaslab_shift: 30
ashift: 9
asize: 999856013312
children[0]:
type: 'disk'
id: 0
guid: 8869551029051110993
path: '/dev/da0p2'
whole_disk: 0
DTL: 869
children[1]:
type: 'disk'
id: 1
guid: 17772452695039664796
path: '/dev/da2p2'
whole_disk: 0
DTL: 868
create_txg: 0
(LABEL 1 - 3 identical)

Thanks,
Norbert
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirrored pool unimportable (FAULTED)

2010-08-28 Thread Alex Blewitt
On 28 Aug 2010, at 16:25, Norbert Harder n.har...@d3vnull.de wrote:

 Later, since the development of the ZFS extension was discontinued ...

The MacZFS project lives on at Google Code and http://github.com/alblue/mac-zfs

Not that it helps if the data has already become corrupted. 

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'sync' properties and write operations.

2010-08-28 Thread Robert Milkowski

On 28/08/2010 09:55, eXeC001er wrote:

Hi.

Can you explain to me:

1. A dataset has 'sync=always'.

If I write to a file on this dataset without requesting synchronous I/O,
does the system perform the write synchronously or asynchronously?




sync


2. A dataset has 'sync=disabled'.

If I write to a file on this dataset with synchronous I/O requested, does
the system perform the write synchronously or asynchronously?




async


The sync property takes effect immediately for all new writes, even if
the file was opened before the property was changed.
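
For example, something along these lines (the dataset name is just a
placeholder):

# Force every write to the dataset to be synchronous, regardless of how
# the application opened the file:
zfs set sync=always tank/data

# Or drop synchronous semantics entirely (synchronous requests are treated
# as asynchronous; use with care):
zfs set sync=disabled tank/data

# Check the current setting:
zfs get sync tank/data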


--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ufs root to zfs root liveupgrade?

2010-08-28 Thread Ian Collins

On 08/28/10 11:39 PM, LaoTsao 老曹 wrote:

 hi all
I'm trying to learn how a UFS root to ZFS root Live Upgrade works.

I downloaded the VirtualBox image of s10u8; it comes up with a UFS root.
I then did the following:
add a new 16GB disk
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus, which shows zfsroot will be active on the next boot
init 6
But the system comes back up with the UFS root: lustatus shows ufsroot
active, and rpool is mounted but not used for booting.


As Casper said, you have to change boot drive.

The easiest way to migrate to ZFS is to use a spare slice on the
original drive for the new pool. You can then mirror that off to another
drive.
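
Roughly along these lines (a sketch only; the slice and disk names are
made up, and the lucreate/luactivate steps stay as in your original list):

# Create the root pool on a spare slice of the original boot disk ...
zpool create rpool c1t0d0s7

# ... migrate the BE into it with Live Upgrade as before, then attach a
# slice of a second disk to turn rpool into a mirror.
zpool attach rpool c1t0d0s7 c1t1d0s0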


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] native ZFS on Linux

2010-08-28 Thread Miles Nordin
 aa == Anurag Agarwal anu...@kqinfotech.com writes:

aa * Currently we are planning to do a closed beta 

aa * Source code will be made available with release.

CDDL violation.

aa * We will be providing paid support for our binary
aa releases.

great, so long as your ``binary releases'' always include source that
matches the release exactly.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss