Re: [zfs-discuss] Making 'zfs destroy' safer

2007-05-19 Thread Peter Schuller
 Rather than rehash this, again, from scratch.  Refer to a previous
 rehashing.
 http://www.opensolaris.org/jive/thread.jspa?messageID=15363

I agree that adding a -f requirement and/or an interactive prompt is not
a good solution. As has already been pointed out, my suggestion is
different.

zfs destroy is very general. Often, generality is good (e.g. in
programming languages). But when dealing with something this dangerous,
which by its very nature is likely to be used on live production data,
either manually or in scripts (that are not subject to a release
engineering process), I think it is useful to make it possible to be more
specific, so that the possible repercussions of a mistake are limited.

As an analogy - would you want rm to automatically do rm -rf when
invoked on a directory? Most people probably would not. The general
solution would be for rm to just do what you tell it to - remove whatever
you are pointing it at. But I think most would agree that things are safer
the way they work now.

That said, I am not suggesting removing existing functionality of
destroy, but providing a way to be more specific about your intended
actions in cases where you want to destroy snapshots or clones.
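
In the meantime, a small wrapper already gives much of that safety today. A
minimal sketch (assuming 'zfs get -H -o value type' prints "snapshot" for
snapshot datasets; the script name is made up):

#!/bin/sh
# destroy-snap: refuse to destroy anything that is not a snapshot
if [ $# -ne 1 ]; then
        echo "usage: $0 pool/fs@snap" >&2
        exit 2
fi
type=`zfs get -H -o value type "$1"` || exit 1
if [ "$type" != "snapshot" ]; then
        echo "refusing: $1 is a $type, not a snapshot" >&2
        exit 1
fi
exec zfs destroy "$1"

But having this built into the tool itself would of course be preferable to
every admin rolling their own.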

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller [EMAIL PROTECTED]'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org






Re: [zfs-discuss] Making 'zfs destroy' safer

2007-05-19 Thread Peter Schuller
 Apparently (and I'm not sure where this is documented), you can 'rmdir'
 a snapshot to remove it (in some cases).

OK, that would be useful, though I also don't like that it breaks
standard rmdir semantics.
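
Presumably this goes through the .zfs control directory, i.e. something like
(a sketch; the path assumes the default snapdir layout, and mysnap is a
made-up snapshot name):

rmdir /tank/fs/.zfs/snapshot/mysnap

I believe mkdir in the same directory can create a snapshot as well, but I
have not verified that.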

In any case, it does not work for me - but that was on FreeBSD.

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller [EMAIL PROTECTED]'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org






[zfs-discuss] Re: Making 'zfs destroy' safer

2007-05-19 Thread Chris Gerhard
You are not alone.

My preference would be for an optional -t option to zfs destroy:

zfs destroy -t snapshot tank/fs@snap

or

zfs destroy -t snapshot -r tank/fs

which would delete all the snapshots below tank/fs.
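
Until something like that exists, the closest approximation I can think of is
a one-liner along these lines (a sketch, assuming zfs list's -H, -o and -t
options; eyeball the list before piping it into anything destructive):

zfs list -H -o name -r -t snapshot tank/fs | xargs -n 1 zfs destroy

The nice property of -t snapshot is that nothing that isn't a snapshot can
ever match, which is exactly the guarantee the proposed flag would build into
zfs destroy itself.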
 
 


[zfs-discuss] New zfs pr0n server :)))

2007-05-19 Thread Diego Righi
Hi all, I just built a new zfs server for home and, being a long-time and avid
reader of this forum, I'm going to post my config specs and my benchmarks,
hoping they could be of some help to others :)

http://www.sickness.it/zfspr0nserver.jpg
http://www.sickness.it/zfspr0nserver.txt
http://www.sickness.it/zfspr0nserver.png
http://www.sickness.it/zfspr0nserver.pdf

Correct me if I'm wrong: from the benchmark results, I understand that this
setup is slow at writing but fast at reading (and this is perfect for my
usage: copying large files once and then only reading them). It also seems to
give the best performance at 128k, iirc due to the zfs stripe size (again,
correct me if I'm wrong :).
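
For what it's worth, 128k is also the default value of the ZFS recordsize
property, which can be inspected and tuned per filesystem (a sketch; tank is
a made-up pool name):

zfs get recordsize tank
zfs set recordsize=128k tank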

I'd happily try any other test, but if you suggest bonnie++, please also tell
me which version is the right one to use - there are too many of them and I
really can't figure out which to try!

tnx :)
 
 


Re: [zfs-discuss] Re: AVS replication vs ZFS send receive for odd-sized volume pairs

2007-05-19 Thread Torrey McMahon

John-Paul Drawneek wrote:

Yes, I am also interested in this.

We can't afford two super-fast setups, so we are looking at having a huge pile
of SATA disks act as a real-time backup for all our streams.

So what can AVS do, and what are its limitations?

Would just using zfs send and receive do, or does AVS make it all seamless?


Check out http://www.opensolaris.org/os/project/avs/Demos/
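
For the plain send/receive route, a minimal sketch looks like this (hostnames,
pool and snapshot names are made up; the second pair sends an incremental
stream on top of the first full one):

zfs snapshot tank/streams@mon
zfs send tank/streams@mon | ssh backuphost zfs receive backup/streams

zfs snapshot tank/streams@tue
zfs send -i mon tank/streams@tue | ssh backuphost zfs receive backup/streams

Note that this gives you point-in-time copies at whatever interval you take
snapshots, not the continuous block-level replication that AVS provides.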


Re: [zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-19 Thread Torrey McMahon

Jonathan Edwards wrote:


On May 15, 2007, at 13:13, Jürgen Keil wrote:


Would you mind also doing:

ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1

to see the raw performance of the underlying hardware.


This dd command is reading from the block device,
which might cache data, and probably splits requests
into maxphys pieces (which happens to be 56K on an
x86 box).


to increase this to, say, 8MB, add the following to /etc/system:

set maxphys=0x800000

and you'll probably want to increase sd_max_xfer_size as
well (it should be 256K on x86/x64) .. add the following to
/kernel/drv/sd.conf:

sd_max_xfer_size=0x800000;

then reboot to get the kernel and sd tunings to take.

---
.je

btw - the defaults on sparc:
maxphys = 128K
ssd_max_xfer_size = maxphys
sd_max_xfer_size = maxphys
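
After the reboot you can check that the maxphys tuning took with mdb (a
sketch, assuming mdb -k is available on the box):

echo 'maxphys/D' | mdb -k

which prints the live value in decimal (8388608 for 0x800000).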


Maybe we should file a bug to increase the max transfer request sizes?


[zfs-discuss] Re: Trying to understand zfs RAID-Z

2007-05-19 Thread Martin
 Quoth Steven Sim on Thu, May 17, 2007 at 09:55:37AM +0800:
 Gurus;
 I am exceedingly impressed by ZFS, although it is my humble opinion
 that Sun is not doing enough evangelizing for it.

 What else do you think we should be doing?

 David

I'll jump in here.  I am a huge fan of ZFS.  At the same time, I know about 
some of its warts.

ZFS hints at adding agility to data management and is a wonderful system. At
the same time, it operates on some assumptions which are antithetical to data
agility, including:
* inability to restripe online: add/remove data/parity disks
* inability to make effective use of varying-sized disks

In one breath ZFS says, "Look how well you can dynamically alter filesystem
storage."

In another breath ZFS says, "Make sure that your pools have identical spindles
and you have accurately predicted future bandwidth, access time, vdev size, and
parity disks, because you can't change any of that later."

I know, down the road you can tack new vdevs onto the pool, but that really 
misses the point.  Even so, if I accidentally add a vdev to a pool and then 
realize my mistake, I am sunk.  Once a vdev is added to a pool, it is attached 
to the pool forever.

Ideally I could provision a vdev, later decide that I need a disk/LUN from that
vdev, and simply remove the disk/LUN, decreasing the vdev capacity. I should
have the ability to decide that current redundancy needs are insufficient and
allocate *any* number of new parity disks. I should be able to have a
pool from a rack of 15x250GB disks and then later add a rack of 11x750GB disks
*to the vdev*, not by making another vdev.

I should have the luxury of deciding to put hot Oracle indexes on their own
vdev, deallocate spindles from an existing vdev, and put those indexes on the
new vdev. I should be able to change my mind later and put it all back.

Most important is the access time issue. Since there are no partial-stripe
reads in ZFS, the access time for a RAIDZ vdev is the same as single-disk
access time, no matter how wide the stripe is.
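
A back-of-the-envelope illustration (7 ms is a made-up but typical average
access time): a 9-disk RAIDZ vdev serves roughly 1/0.007, or about 140, random
reads per second in total, because every read touches the whole stripe. The
same nine disks as nine independent single-disk vdevs could in principle serve
about 9 * 140 = 1260.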

How to evangelize better?

Get rid of the glaring "you can't change it later" problems.

Another thought is that flash storage has all of the indicators of being a
disruptive technology described in "The Innovator's Dilemma". What this
means is that flash storage *will* take over hard disks. It is
inevitable. ZFS has a weakness with access times but handles single-block
corruption very nicely. ZFS also has the ability to do very wide RAIDZ
stripes, up to 256(?) devices, providing mind-numbing throughput.

Flash has near-zero access times and relatively low throughput.  Flash is also 
prone to single-block failures once the erase-limit has been reached for a 
block.

ZFS + Flash = near-zero access time, very high throughput and high data 
integrity.

To answer the question: get rid of the limitations and build a Thumper-like 
device using flash.  Market it for Oracle redo logs, temp space, swap space 
(flash is now cheaper than RAM), anything that needs massive throughput and 
ridiculous iops numbers, but not necessarily huge storage.

Each month, the cost of flash will fall 4% anyway, so get ahead of the curve 
now.

My 2 cents, at least.

Marty
 
 


Re: [zfs-discuss] New zfs pr0n server :)))

2007-05-19 Thread Ben Rockwood

Diego Righi wrote:

 Hi all, I just built a new zfs server for home and, being a long-time and avid
 reader of this forum, I'm going to post my config specs and my benchmarks,
 hoping they could be of some help to others :)
 [...]
 


Classy.  +1 for style. ;)

benr.