Re: [zfs-discuss] Upgrade from UFS - ZFS on a single disk?

2008-12-29 Thread Akhilesh Mritunjai
Seriously, if I had that many machines in the _field_ I'd ring my support rep directly.

Getting one step wrong in instructions provided on a forum might mean that 
you'd have to spend quite a long time fixing every box (or, worse, re-installing) 
one by one from scratch!

Get a support guy to walk you through this... every step documented and tested 
(twice! with power failures thrown into the mix)... this isn't a question for the 
forum.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Problem with time-slider

2008-12-29 Thread Charles
Hi

I'm a new user of OpenSolaris 2008.11, I switched from Linux to try the 
time-slider, but now when I execute the time-slider I get this message:

http://img115.imageshack.us/my.php?image=capturefentresansnomfx9.png


Thank you and happy new year ^^
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with time-slider

2008-12-29 Thread Mike Gerdts
On Mon, Dec 29, 2008 at 8:21 AM, Charles seriph...@gmail.com wrote:
 Hi

 I'm a new user of OpenSolaris 2008.11, I switched from Linux to try the 
 time-slider, but now when I execute the time-slider I get this message:

 http://img115.imageshack.us/my.php?image=capturefentresansnomfx9.png

Try running

svcs -v zfs/auto-snapshot

The last few lines of the log files mentioned in the output from the
above command may provide helpful hints.
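
For example, something along these lines (the instance and log path below are
only illustrations; use whatever svcs -xv actually reports on your system):

  svcs -xv svc:/system/filesystem/zfs/auto-snapshot:daily
  tail -20 /var/svc/log/system-filesystem-zfs-auto-snapshot:daily.log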

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with time-slider

2008-12-29 Thread Tim Foster
Hi there,

Charles wrote:
 I'm a new user of OpenSolaris 2008.11, I switched from Linux to try
 the time-slider, but now when I execute the time-slider I get this
 message:
 
 http://img115.imageshack.us/my.php?image=capturefentresansnomfx9.png

I wish we'd release-noted this particular problem. I'm willing to bet 
it's due to the same issue as mentioned under "Services inexplicably 
dropping to maintenance mode" at

http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_in_nv

  - there's a bug filed against ZFS for this behaviour,
http://bugs.opensolaris.org/view_bug.do?bug_id=6462803

  and a workaround you can use in the meantime.
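
A minimal sketch of that workaround, assuming the affected dataset is 
rpool/export/home (substitute whichever filesystem the service log actually 
complains about), would be something like:

  zfs unmount rpool/export/home
  zfs mount rpool/export/home
  svcadm clear svc:/system/filesystem/zfs/auto-snapshot:daily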

cheers,
tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with time-slider

2008-12-29 Thread Charles
Thank you all for your help.


First, I tried the command and this is what I get:

svcs -v zfs/auto-snapshot

STATE          NSTATE    STIME     CTID   FMRI
online         -         14:48:56  -      svc:/system/filesystem/zfs/auto-snapshot:weekly
online         -         14:48:57  -      svc:/system/filesystem/zfs/auto-snapshot:monthly
maintenance    -         14:48:59  -      svc:/system/filesystem/zfs/auto-snapshot:daily
maintenance    -         14:49:01  -      svc:/system/filesystem/zfs/auto-snapshot:hourly
maintenance    -         14:49:01  -      svc:/system/filesystem/zfs/auto-snapshot:frequent



And for timf: I have read your pages, but I don't understand what to do. In the 
bug report, the workaround is to unmount and mount the filesystem, but I don't 
know how to do this with zfs.

When I enter zfs list I get this:

NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  32,8G   424G  75,5K  /rpool
rpool/ROOT 5,15G   424G18K  legacy
rpool/ROOT/opensolaris 8,96M   424G  2,47G  /
rpool/ROOT/opensolaris-1   5,14G   424G  4,78G  /
rpool/dump 4,00G   424G  4,00G  -
rpool/export   19,7G   424G19K  /export
rpool/export/home  19,7G   424G19K  /export/home
rpool/export/home/charles  19,7G   424G  16,2G  /export/home/charles

I don't know what to unmount here.


thanks again for your help :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] disk resizing with zfs

2008-12-29 Thread Cyril Payet
Hello there,
I just wonder how to make zfs aware of a lun resizing, with no downtime
(Veritas vxdisk resize or MS diskpart can do this).
I'm quite sure that there were some threads on this but I can't find
them.
I do know that resizing a lun is not the common way to expand a pool
(since a simple zpool add vdev is possible).
 
Hope to hear from you soon.
Cyril.
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Upgrade from UFS - ZFS on a single disk?

2008-12-29 Thread Bob Friesenhahn
On Mon, 29 Dec 2008, Akhilesh Mritunjai wrote:

 Seriously, if I had that many machines in the _field_ I'd ring my support rep directly.

 Getting one step wrong in instructions provided on a forum might 
 mean that you'd have to spend quite a long time fixing every box (or, 
 worse, re-installing) one by one from scratch!

Since OpenSolaris is used as part of a packaged product, there is no 
doubt that the process used will be tested quite thoroughly before 
updating field sites.

 Get a support guy to walk you through this... every step documented and 
 tested (twice! with power failures thrown into the mix)... this isn't 
 a question for the forum.

There are no wrong questions here.  A paid support guy may lack the 
experience or sheer ingenuity of some of the folks here.  Clearly some 
ingenuity is required.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-29 Thread Marcelo Leal
Hello all...
 Can that be caused by some cache on the LSI controller? 
 Some flush that the controller or disk did not honour?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Cyril Payet
Hello there,
Hitachi USP-V (sold as the 9990V by Sun) provides thin provisioning, known
as Hitachi Dynamic Provisioning (HDP).
This gives a way to make the OS believe that a huge LUN is available
while its size is not physically allocated on the DataSystem side.
A simple example: 100GB seen by the OS but only 50GB physically
allocated in the frame, in a pool of physical devices (called an HDP pool).
 
The USP-V is now able to reclaim zero pages that are not used by a
filesystem.
It can then put them back into this physical pool, as many free 42MB
blocks.
 
As far as I know, when a file is deleted, zfs just stops referencing the
blocks associated with that file, much like an MMU does with RAM.
Blocks are not deleted, nor zeroed (which sounds very good for getting some
files back after a crash!).
 
Is there a way to transform - a posteriori or a priori - these
"unreferenced blocks" to "zero blocks" to make the HDS frame able to
reclaim them? I know that this will create some overhead...
 
It might lead to a smaller block allocation history but could be very
useful for zero-page reclaim.
I do hope that my question was clear enough...
Thanx for your hints,
 
Cyril Payet
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How ZFS decides if write to the slog or directly to the POOL

2008-12-29 Thread Marcelo Leal
Hello all,
 Some days ago I was looking at the code and saw a variable that seems to 
make a correlation between the size of the data and whether the data is written to 
the slog or directly to the pool. But I could not find it again, and I think it is 
way more complex than that.
  For example, if we have a pool of just two disks, it's fine to write to the 
slog (SSD). But if we have a 20-disk pool, writing to the slog may not be a 
good idea, don't you agree? And if someone has that configuration (20 disks and 
a slog), would the ZFS code not identify that and write directly to the pool?
 I'm asking this because I did some tests and it seems like the SSD became a 
bottleneck... and I had guessed that even if the admin made such a mistake, ZFS 
would have the logic to avoid writing to the intent log.
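
 For what it's worth, the variable I'm half-remembering may be 
zfs_immediate_write_sz in zil.c (that is only a guess on my part); on a live 
system it can at least be inspected with mdb:

  echo zfs_immediate_write_sz/E | mdb -k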

 Thanks a lot for your time!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Andrew Gabriel

Cyril Payet wrote:
 Hello there,
 Hitachi USP-V (sold as the 9990V by Sun) provides thin provisioning, known
 as Hitachi Dynamic Provisioning (HDP).
 This gives a way to make the OS believe that a huge LUN is available while
 its size is not physically allocated on the DataSystem side.
 A simple example: 100GB seen by the OS but only 50GB physically allocated
 in the frame, in a pool of physical devices (called an HDP pool).
 The USP-V is now able to reclaim zero pages that are not used by a
 filesystem.
 It can then put them back into this physical pool, as many free 42MB blocks.
 As far as I know, when a file is deleted, zfs just stops referencing the
 blocks associated with that file, much like an MMU does with RAM.
 Blocks are not deleted, nor zeroed (which sounds very good for getting some
 files back after a crash!).
 Is there a way to transform - a posteriori or a priori - these
 "unreferenced blocks" to "zero blocks" to make the HDS frame able to
 reclaim them? I know that this will create some overhead...
 It might lead to a smaller block allocation history but could be very
 useful for zero-page reclaim.
 I do hope that my question was clear enough...
 Thanx for your hints,
 Cyril Payet

Out of curiosity, is there any filesystem which zeros blocks as they
are freed up?

-- 
Andrew


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2008-12-29 Thread Vincent Fox
To answer original post, simple answer:

Almost all old RAID designs have holes in their logic where they are 
insufficiently paranoid on writes or reads, and sometimes both.  One example 
is the infamous RAID-5 write hole.

Look at the simple example of mirrored SVM versus ZFS on pages 15-16 of this 
presentation:

http://opensolaris.org/os/community/zfs/docs/zfs_last.pdf

Critical metadata is triple-duplicated, and all metadata is at least double-duplicated, 
even on a single-disk configuration.  Almost all other filesystems are kludges 
with insufficient paranoia by default, and only become sufficiently paranoid by 
twiddling knobs and adding things like EMC did.  After using ZFS for a while, 
there is no other filesystem as good.  I haven't played with Linux BTRFS, 
though; maybe it has some good stuff, but last I heard it was still in alpha.
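
If you want that same ditto-block treatment for your own data on a single 
disk, not just for metadata, there is a knob for it (the dataset name here is 
only an example):

  zfs set copies=2 tank/important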
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Redundancy for /opt by raidz?

2008-12-29 Thread Vincent Fox
Seems a lot simpler to create a multi-way mirror.
Then symlink your /opt/BIND or whatever off to the new place.

# zpool create opt-new mirror c3t40d0 c3t40d1 c3t40d2 c3t40d3
# zpool status
  pool: opt-new
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
opt-new  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c3t40d0  ONLINE   0 0 0
c3t40d1  ONLINE   0 0 0
c3t40d2  ONLINE   0 0 0
c3t40d3  ONLINE   0 0 0

errors: No known data errors

There is no problem with having more than 2 disks in a mirror.
If you have an old system with a bunch of old disks, I could see
a good reason to be this paranoid.
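
To illustrate the symlink step mentioned above (the path is just the BIND 
example from this thread):

# mv /opt/BIND /opt-new/BIND
# ln -s /opt-new/BIND /opt/BIND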
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Kees Nuyt
On Mon, 29 Dec 2008 19:17:27 +, Andrew Gabriel
agabr...@opensolaris.org wrote:

Out of curiosity, is there any filesystem which 
zeros blocks as they are freed up?

The native filesystem of the Fujitsu Siemens mainframe
operating system BS2000/OSD does that:

- if a file is deleted with the DELETE-FILE command, 
  using DESTROY=*YES parameter
- if a file that is deleted has its DESTROY attribute
  set to *YES (even if the DESTROY parameter isn't 
  used in the DELETE-FILE command).

I think the defragmentation tool (spaceopt) respects the
DESTROY attribute as well.

If my memory serves me well, the default value for the
DESTROY file attribute can be determined per volume set
(= catalog = filesystem).


-- 
  (  Kees Nuyt
  )
c[_]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Torrey McMahon
Cyril Payet wrote:
 Hello there,
 Hitachi USP-V (sold as the 9990V by Sun) provides thin provisioning, 
 known as Hitachi Dynamic Provisioning (HDP).
 This gives a way to make the OS believe that a huge LUN is 
 available while its size is not physically allocated on the 
 DataSystem side.
 A simple example: 100GB seen by the OS but only 50GB physically 
 allocated in the frame, in a pool of physical devices (called an HDP pool).
 The USP-V is now able to reclaim zero pages that are not used by 
 a filesystem.
 It can then put them back into this physical pool, as many free 42MB 
 blocks.
 As far as I know, when a file is deleted, zfs just stops referencing the 
 blocks associated with that file, much like an MMU does with RAM.
 Blocks are not deleted, nor zeroed (which sounds very good for getting 
 some files back after a crash!).
 Is there a way to transform - a posteriori or a priori - these 
 "unreferenced blocks" to "zero blocks" to make the HDS frame able to 
 reclaim them? I know that this will create some overhead...
 It might lead to a smaller block allocation history but could be 
 very useful for zero-page reclaim.
 I do hope that my question was clear enough...
 Thanx for your hints,

There are some mainframe filesystems that do such things. I think there 
was also an STK array - Iceberg[?] - that had similar functionality. 
However, why would you use ZFS on top of HDP? If the filesystem lets you 
grow dynamically, and the OS lets you add storage dynamically or grow 
the LUNs when the array does... what does HDP get you?

Serious question, as I get asked it all the time and I can't come up with 
a good answer outside of procedural things such as "We don't like to 
bother the storage guys" or "We thin provision everything no matter the 
app/fs/os" or choose your own adventure.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Tim
On Mon, Dec 29, 2008 at 6:09 PM, Torrey McMahon tmcmah...@yahoo.com wrote:


 There are some mainframe filesystems that do such things. I think there
 was also an STK array - Iceberg[?] - that had similar functionality.
 However, why would you use ZFS on top of HDP? If the filesystem lets you
 grow dynamically, and the OS lets you add storage dynamically or grow
 the LUNs when the array does... what does HDP get you?

 Serious question, as I get asked it all the time and I can't come up with
 a good answer outside of procedural things such as "We don't like to
 bother the storage guys" or "We thin provision everything no matter the
 app/fs/os" or choose your own adventure.


Assign your database admin who swears he needs 2TB on day one a 2TB LUN.  And six
months from now, when he's really only using 200GB, you aren't wasting 1.8TB
of disk on him.

I see it on a weekly basis.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Upgrade from UFS - ZFS on a single disk?

2008-12-29 Thread Richard Elling
The ZFS Administration Guide describes how to do this in the section
"Migrating a UFS Root File System to a ZFS Root File System
(Solaris Live Upgrade)".  Live Upgrade only applies to SXCE, though.
The OpenSolaris installer does not currently have the ability to do such
things.
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
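
A rough sketch of what that chapter walks through (the slice, pool and BE 
names here are only placeholders; follow the guide for the real procedure):

  zpool create rpool c0d0s5
  lucreate -n zfsBE -p rpool
  luactivate zfsBE
  init 6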
 -- richard


Josh Rivel wrote:
 So we have roughly 700 OpenSolaris snv_81 boxes out in the field.  We're 
 looking to upgrade them all to, probably, OpenSolaris 2008.11 or the latest 
 snv_10x build soon.  Currently all boxes have a single 80GB HD (these are 
 small appliance-type devices, so we can't add a second hard drive).  What 
 we'd like to do is figure out a way, via Live Upgrade, to upgrade to a newer 
 SXCE release (or OpenSolaris 2008.11) AND migrate the filesystems from UFS to 
 ZFS at the same time.

 The reason for wanting to go to ZFS is the filesystem improvements and 
 resiliency, as a lot of these boxes get power-cycled regularly.

 Here is the current partition table:

 /dev/dsk/c0d0s0 /  8gb
 /dev/dsk/c0d0s1 swap 1.5gb
 /dev/dsk/c0d0s3 /var 4gb
 /dev/dsk/c0d0s4 /backup 2gb
 /dev/dsk/c0d0s5 /luroot 8gb
 /dev/dsk/c0d0s6 /luvar 4gb
 /dev/dsk/c0d0s7 /export 45gb

 /backup, /luroot and /luvar are not in use.
 /export contains all the zones (each box has 3 non-global zones on it)

 I was thinking about removing the /backup, /luroot, and /luvar partitions and 
 using that space to create a ZFS rpool, then installing the new OS onto that, 
 but /export needs to stay, with the zones on it.

 An alternative option is to just re-install the boxes from scratch, and 
 devise a way to auto-configure them when they boot back up (they are all 
 remotely located and there's no console access to any of them, so that's a 
 bit tricky)

 Anyway, just wanted to toss the idea out and see if anyone has done a UFS-to-ZFS 
 in-place migration while doing a live upgrade (or post live upgrade is also an 
 option of course, as long as it can be scripted).

 Thanks!
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] disk resizing with zfs

2008-12-29 Thread Tim
On Mon, Dec 29, 2008 at 10:49 AM, Cyril Payet cyril.pa...@hds.com wrote:

  Hello there,
 I just wonder how to make zfs aware of a lun resizing, with no downtime
 (Veritas vxdisk resize or MS diskpart can do this).
 I'm quite sure that there were some threads on this but I can't find them.
 I do know that resizing a lun is not the common way to expand a pool (since
 a simple zpool add vdev is possible).

 Hope to hear from you soon.
 *Cyril.*



It should just pick it up automatically.  There were some bugs that required
a reboot, or a zpool export/import, but that's been fixed in the latest
versions of opensolaris.
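
If it doesn't get picked up on its own, the export/import dance mentioned 
above is roughly this (the pool name is only an example):

  zpool export datapool
  zpool import datapool
  zpool list datapool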

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Upgrade from UFS - ZFS on a single disk?

2008-12-29 Thread Josh Rivel
Richard,

Thank you for the tip. We are running SXCE currently (snv_81) on all the 
clients, and are probably going to stick with it.  We may run 2008.11 for the 
backend servers (those are already running ZFS).

Josh
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Torrey McMahon
On 12/29/2008 8:20 PM, Tim wrote:


 On Mon, Dec 29, 2008 at 6:09 PM, Torrey McMahon tmcmah...@yahoo.com 
 mailto:tmcmah...@yahoo.com wrote:


 There are some mainframe filesystems that do such things. I think there
 was also an STK array - Iceberg[?] - that had similar functionality.
 However, why would you use ZFS on top of HDP? If the filesystem lets you
 grow dynamically, and the OS lets you add storage dynamically or grow
 the LUNs when the array does... what does HDP get you?

 Serious question, as I get asked it all the time and I can't come up with
 a good answer outside of procedural things such as "We don't like to
 bother the storage guys" or "We thin provision everything no matter the
 app/fs/os" or choose your own adventure.


 Assign your database admin who swears he needs 2TB on day one a 2TB LUN.
 And six months from now, when he's really only using 200GB, you aren't 
 wasting 1.8TB of disk on him.

I run into the same thing, but once I say "I can add more space without 
downtime" they tend to smarten up. Also, ZFS will not reuse blocks in a, 
for lack of a better word, economical fashion. If you throw them a 2TB 
LUN, ZFS will allocate blocks all over the LUN even when they're only using a 
small fraction.

Unless you have, as the original poster mentioned, an empty-block 
reclaim, you'll have problems. UFS can show the same results, btw.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Richard Elling
Torrey McMahon wrote:
 On 12/29/2008 8:20 PM, Tim wrote:
   
 On Mon, Dec 29, 2008 at 6:09 PM, Torrey McMahon tmcmah...@yahoo.com 
 mailto:tmcmah...@yahoo.com wrote:


 There are some mainframe filesystems that do such things. I think there
 was also an STK array - Iceberg[?] - that had similar functionality.
 However, why would you use ZFS on top of HDP? If the filesystem lets you
 grow dynamically, and the OS lets you add storage dynamically or grow
 the LUNs when the array does... what does HDP get you?

 Serious question, as I get asked it all the time and I can't come up with
 a good answer outside of procedural things such as "We don't like to
 bother the storage guys" or "We thin provision everything no matter the
 app/fs/os" or choose your own adventure.


 Assign your database admin who swears he needs 2TB on day one a 2TB LUN.
 And six months from now, when he's really only using 200GB, you aren't 
 wasting 1.8TB of disk on him.
 

 I run into the same thing, but once I say "I can add more space without 
 downtime" they tend to smarten up. Also, ZFS will not reuse blocks in a, 
 for lack of a better word, economical fashion. If you throw them a 2TB 
 LUN, ZFS will allocate blocks all over the LUN even when they're only using a 
 small fraction.
   

Absolutely agree. 

Note: if you enable ZFS compression, zero-filled blocks will not exist
as part of the data set :-)
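
For example (dataset and file names are only illustrations), with compression 
on, a file full of zeros ends up occupying essentially no space on disk:

  zfs set compression=on tank/fs
  dd if=/dev/zero of=/tank/fs/zeros bs=1024k count=100
  du -h /tank/fs/zeros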
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Tim
On Mon, Dec 29, 2008 at 8:52 PM, Torrey McMahon tmcmah...@yahoo.com wrote:

 On 12/29/2008 8:20 PM, Tim wrote:

 I run into the same thing, but once I say "I can add more space without
 downtime" they tend to smarten up. Also, ZFS will not reuse blocks in a, for
 lack of a better word, economical fashion. If you throw them a 2TB LUN, ZFS
 will allocate blocks all over the LUN even when they're only using a small
 fraction.

 Unless you have, as the original poster mentioned, an empty-block reclaim,
 you'll have problems. UFS can show the same results, btw.


I'm not arguing anything towards his specific scenario.  You said you
couldn't imagine why anyone would ever want thin provisioning, so I told you
why.  Some admins do not have the luxury of trying to debate with other
teams they work with as to why they should do things a different way than
they want to ;)  That speaks nothing of the change control needed to even
get a LUN grown in some shops.

It's out there, it's being used, it isn't a good fit for zfs.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-29 Thread Sanjeev Bagewadi
Marcelo,

Marcelo Leal wrote:
 Hello all...
  Can that be caused by some cache on the LSI controller? 
  Some flush that the controller or disk did not honour?
   
More details on the problem would help. Can you please give the 
following details:
- zpool status
- zfs list -r
- The details of the directory:
  - How many entries does it have?
  - Which filesystem (of the zpool) does it belong to?

Thanks and regards,
Sanjeev.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Torrey McMahon
On 12/29/2008 10:36 PM, Tim wrote:


 On Mon, Dec 29, 2008 at 8:52 PM, Torrey McMahon tmcmah...@yahoo.com 
 mailto:tmcmah...@yahoo.com wrote:

 On 12/29/2008 8:20 PM, Tim wrote:

 I run into the same thing, but once I say "I can add more space
 without downtime" they tend to smarten up. Also, ZFS will not
 reuse blocks in a, for lack of a better word, economical fashion.
 If you throw them a 2TB LUN, ZFS will allocate blocks all over the
 LUN even when they're only using a small fraction.

 Unless you have, as the original poster mentioned, an empty-block
 reclaim, you'll have problems. UFS can show the same results, btw.


 I'm not arguing anything towards his specific scenario.  You said you 
 couldn't imagine why anyone would ever want thin provisioning, so I 
 told you why.  Some admins do not have the luxury of trying to debate 
 with other teams they work with as to why they should do things a 
 different way than they want to ;)  That speaks nothing of the change 
 control needed to even get a LUN grown in some shops.

 It's out there, it's being used, it isn't a good fit for zfs.

Right... I called those "process issues". Perhaps "organizational issues" 
would have been better?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Noob: Best way to replace a disk when you're out of internal connectors?

2008-12-29 Thread Larry Hastings
Hope I'm posting this in the right place.

I've got a RAIDZ2 volume made of 14 SATA 1TB drives.  The box they're in is 
absolutely packed full; I know of no way to add any additional drives, or 
internal SATA connectors.  I have a dying drive in the array (hereafter drive 
N).  Obviously I should replace it.  But how?

The best method would be to add the new drive, zpool replace drive N with 
drive 15, and remove the old drive.  But I'd have to use an external enclosure 
for drive 15, so I wouldn't want this to be the permanent layout.  I actually 
tried it this way, thinking that I could then remove the old drive N, move 
drive 15 into the chassis and give it drive N's connector, and go from there. 
 But I could not for the life of me figure out how to tell ZFS I moved drive 
15 to drive N.

I could swap the dying drive with the fresh drive, then run an in-place zpool 
replace.  But the drive isn't dead, it is merely dying.  That 
seems like overkill.
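
For reference, the in-place variant would be a one-liner along these lines 
(the pool and device names here are made up):

  zpool replace tank c2t3d0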

What's the best approach to replacing drives if I'm out of internal connectors?

Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home partition?

2008-12-29 Thread scott
At the risk of venturing off topic:

That looks like a good revisioning scheme. OpenSolaris creates a new (default) 
boot environment during the update process, which seems like a very cool 
feature. Seems like. When I update my 2008.11 install, nwam breaks, apparently 
a known bug (the workaround didn't work for me, probably due to inexperience). 
No problem, methinks, I just boot into the old environment. nwam still 
broken. I had naively assumed that the new BE was a delta; again, I know 
little.

Anyway, back to zfs: you didn't voice any alarm at my virtual home folder 
scheme. Regarding the root partition, I'll think it over, but given my luck with 
updates, I don't imagine doing any.

Thank you once again for all of your valuable input.

scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS capable GRUB install from within Linux?

2008-12-29 Thread David Abrahams

on Tue Nov 11 2008, Mario Goebbels me-AT-tomservo.cc wrote:

 Is it possible to install a GRUB that can boot a ZFS root, but install it 
 from within Linux?

 I was planning on getting a new unmanaged dedicated server, which however 
 only comes with Linux preinstalled. The thing has a software RAID1, so the 
 cunning plan was to break up the RAID1, install VirtualBox RDP and install 
 OpenSolaris onto the freed drive. The only roadblock is GRUB at that point, 
 which would still reside on the existing Linux disk. Once OpenSolaris has 
 been booted, I can recreate the mirror and do a proper GRUB install.

 Getting an IP-KVM attached to change BIOS settings is the kind of cost I'd 
 want to avoid here.

 Any ideas?

Hi Mario,

Did you get anywhere with this?  It occurs to me that I might get my
Linux/ZFS-fuse server to be entirely ZFS.  I'd really like to accomplish
that if possible.

TIA,

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using zfs mirror as a simple backup mechanism for time-slider.

2008-12-29 Thread Nicolas Williams
On Sat, Dec 27, 2008 at 02:29:58PM -0800, Ross wrote:
 All of which sound like good reasons to use send/receive and a 2nd zfs
 pool instead of mirroring.

Yes.

 Send/receive has the advantage that the receiving filesystem is
 guaranteed to be in a stable state.  How would you go about recovering

Among many other advantages.

 the system in the event of a drive failure though?  Would you have to
 replace the system drive, boot off a solaris DVD and then connect the
 external drive and send/receive it back?

You could boot from your backup, if you zfs sent it the relevant
snapshots of your boot datasets and installed GRUB on it.

So: replace main drive, boot from backup, backup the backup to the new
main drive, reboot from main drive.

If you want to only backup your user data (home directory) then yes,
you'd have to re-install, then restore the user data from the backup.

And yes, the restore procedure would involve a zfs send from the backup
to the new pool.
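
A rough sketch of the backup side of that (pool and device names are only 
examples):

  zfs snapshot -r rpool@backup1
  zfs send -R rpool@backup1 | zfs receive -Fd backup
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0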

 It won't be quick, but replacing a failed single boot drive never is.
 Would it be possible to combine the send/receive backup with a
 scripted installation saved on the external media?  Something that

That would be nice, primarily so that you needn't backup anything that
can be simply re-installed.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] lofiadm -d keeps a lock on file in an nbmand-mounted zfs

2008-12-29 Thread Kevin Sze
Hi,

Has anyone seen the following problem?

After lofiadm -d removes an association, the file is still locked and cannot 
be moved or deleted if it resides in a ZFS filesystem mounted with nbmand=on.

There are two ways to remove the lock: (1) remount the ZFS filesystem 
(unmount + mount); the lock is removed even if the nbmand=on option is given 
again, or (2) reboot the system.
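
For the record, the sequence that shows the problem is roughly this (dataset 
and file names are only examples):

  zfs set nbmand=on tank/test
  zfs unmount tank/test && zfs mount tank/test  # nbmand takes effect at mount time
  lofiadm -a /tank/test/image.iso               # prints a device, e.g. /dev/lofi/1
  lofiadm -d /dev/lofi/1
  rm /tank/test/image.iso                       # still fails; the file remains locked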

I don't have a system with UFS to test the nbmand mount-option to see if the 
problem exists for UFS as well.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss