Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> I suppose we're all just wrong.

By George, you've got it!

- bill
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> > Now, not being a psychic myself, I can't state with authority that
> > Stefano really meant to ask the question that he posed rather than
> > something else.  In retrospect, I suppose that some of his
> > surrounding phrasing *might* suggest that he was attempting (however
> > unskillfully) to twist my comment about other open source solutions
> > being similarly enterprise-capable into a provably-false assertion
> > that those other solutions offered the *same* features that he
> > apparently considers so critical in ZFS rather than just
> > comparably-useful ones.  But that didn't cross my mind at the time:
> > I simply answered the question that he asked, and in passing also
> > pointed out that those features which he apparently considered so
> > critical might well not be.
>
> dear bill,
> my question was honest

That's how I originally accepted it, and I wouldn't have revisited the issue 
looking for other interpretations if two people hadn't obviously thought it 
meant something else.

For that matter, even if you actually intended it to mean something else, that 
doesn't imply that there was any devious intent.  In any event, what you 
actually asked was what I had referred to, and I told you:  it may not have met 
your personal goals for your own storage, but that wasn't relevant to the 
question that you asked (and that I answered).

Your English is so good that the possibility that it might be a second language 
had not occurred to me - but if so it would help explain any subtle 
miscommunication.

...

> if there are no alternatives to zfs,

As I explained, there are eminently acceptable alternatives to ZFS from any 
objective standpoint.

> I'd gladly stick with it,

And you're welcome to, without any argument from me - unless you try to 
convince other people that there are strong technical reasons to do so, in 
which case I'll challenge you to justify them in detail so that any hidden 
assumptions can be brought out into the open.

- bill
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Tim Cook
> Actually, it's central to the issue:  if you were
> capable of understanding what I've been talking about
> (or at least sufficiently humble to recognize the
> depths of your ignorance), you'd stop polluting this
> forum with posts lacking any technical content
> whatsoever.

I don't speak "full of myself," and apparently nobody else here does 
either, because nobody has a clue what you continue to ramble about.

> The question that was asked was answered - it's
> hardly my problem if you could not competently parse
> the question, or the answer, or the subsequent
> explanation (though your continuing drivel after
> those three strikes suggests that you may simply be
> ineducable).

Except nobody but you seems to be able to ascertain any sort of answer from 
your rambling response.  The question was simple, as an adequate answer would 
be.  You either aren't "literate" enough to understand the question, or you're 
wrong.  It's clearly the latter.

> No:  I answered his question and *also* observed that
> he probably really didn't know what he wanted (at
> least insofar as being able to *justify* the
> intensity of his desire for it).

Funny, the original poster and everyone else disagree with you.  But with 
such visions of grandeur, I suppose we're all just wrong.


> No one said that there were:  the real issue is that
> there's not much reason to care, since the available
> solutions don't need to be *identical* to offer
> *comparable* value (i.e., they each have different
> strengths and weaknesses and the net result yields no
> clear winner - much as some of you would like to
> believe otherwise).
> 

Right, so yet again, you were wrong.   Stop telling us what you think we need.  
Stop trying to impose your arrogant ASSumptions onto us.  WE don't care what 
YOU think WE need.


> Indeed, but it has become obvious that most of the
> reasons are non-technical in nature.  This place is
> fanboy heaven, where never is heard a discouraging
> word (and you're hip-deep in buffalo sh!t).

There you go.  You heard it here first folks.  Anyone who doesn't agree with 
bill is a fanboy.

> 
> Hell, I came here myself 18 months ago because ZFS
> seemed interesting, but found out that the closer I
> looked, the less interesting it got.  Perhaps it's
> not surprising that so many of you never took that
> second step:  it does require actual technical
> insight, which seems to be in extremely short supply
> here.
>

So leave.
 
> So short that it's not worth spending time here from
> any technical standpoint:  at this point I'm mostly
> here for the entertainment, and even that is starting
> to get a little tedious.
> 
> - bill

Oh bill, I think we both know your ego won't be able to stop without being 
banned or getting the *last word*.  Unfortunately you bring nothing to the 
table but arrogance, which hasn't been, and isn't, getting you very far.  Keep 
up the good work though.  Are you getting paid by word count, or by post?  I'm 
guessing word count, given the long-winded, content-void responses.
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> Literacy has nothing to do with the glaringly obvious
> BS you keep spewing.

Actually, it's central to the issue:  if you were capable of understanding what 
I've been talking about (or at least sufficiently humble to recognize the 
depths of your ignorance), you'd stop polluting this forum with posts lacking 
any technical content whatsoever.

> Rather than answer a question, which couldn't be answered,

The question that was asked was answered - it's hardly my problem if you could 
not competently parse the question, or the answer, or the subsequent 
explanation (though your continuing drivel after those three strikes suggests 
that you may simply be ineducable).

> because you were full of it, you tried to convince us all he really
> didn't know what he wanted.

No:  I answered his question and *also* observed that he probably really didn't 
know what he wanted (at least insofar as being able to *justify* the intensity 
of his desire for it).

...
 
> There aren't free alternatives in linux or freebsd
> that do what zfs does, period.

No one said that there were:  the real issue is that there's not much reason to 
care, since the available solutions don't need to be *identical* to offer 
*comparable* value (i.e., they each have different strengths and weaknesses and 
the net result yields no clear winner - much as some of you would like to 
believe otherwise).

> You can keep talking in circles till you're blue in the face, or I
> suppose your fingers go numb in this case, but the fact isn't going
> to change.  Yes, people do want zfs for any number of reasons, that's
> why they're here.

Indeed, but it has become obvious that most of the reasons are non-technical in 
nature.  This place is fanboy heaven, where never is heard a discouraging word 
(and you're hip-deep in buffalo sh!t).

Hell, I came here myself 18 months ago because ZFS seemed interesting, but 
found out that the closer I looked, the less interesting it got.  Perhaps it's 
not surprising that so many of you never took that second step:  it does 
require actual technical insight, which seems to be in extremely short supply 
here.

So short that it's not worth spending time here from any technical standpoint:  
at this point I'm mostly here for the entertainment, and even that is starting 
to get a little tedious.

- bill
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Jonathan Edwards

On Dec 6, 2007, at 00:03, Anton B. Rang wrote:

>> what are you terming as "ZFS' incremental risk reduction"?
>
> I'm not Bill, but I'll try to explain.
>
> Compare a system using ZFS to one using another file system -- say,  
> UFS, XFS, or ext3.
>
> Consider which situations may lead to data loss in each case, and  
> the probability of each such situation.
>
> The difference between those two sets is the 'incremental risk  
> reduction' provided by ZFS.

ah .. thanks Anton - so the next step would be to calculate the  
probability of occurrence, the impact to operation, and the return to  
service for each anticipated risk in a given environment in order to  
determine the size of the increment that constitutes the risk  
reduction that ZFS is providing.  Without this there's just a lot of  
hot air blowing around in here ..



excellent summary of risks - perhaps we should also consider the  
availability and transparency of the code to potentially mitigate  
future problems .. that's currently where i'm starting to see  
tremendous value in open and free raid controller solutions to help  
drive down the cost of implementation for this sort of data  
protection, instead of paying through the nose for closed  
hardware-based solutions (which still carry a great margin in  
licensing for dedicated storage vendors)

---
.je


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Stefano Spinucci
> Now, not being a psychic myself, I can't state with
> authority that Stefano really meant to ask the
> question that he posed rather than something else.
> In retrospect, I suppose that some of his
> surrounding phrasing *might* suggest that he was
> attempting (however unskillfully) to twist my
> comment about other open source solutions being
> similarly enterprise-capable into a provably-false
> assertion that those other solutions offered the
> *same* features that he apparently considers so
> critical in ZFS rather than just comparably-useful
> ones.  But that didn't cross my mind at the time:  I
> simply answered the question that he asked, and in
> passing also pointed out that those features which
> he apparently considered so critical might well not
>  be.

dear bill,
my question was honest and, as I stated before: I'm a linux user who 
discovered zfs and I'd like to use it to store (versioned and checksummed) 
valuable data.

then, if there are no alternatives to zfs, I'd gladly stick with it, and unless 
you have a *better* solution (repeat after me: important data, 1 laptop, three 
disks), please don't use my name further for your guessing at a hidden plot to 
expose the (evident) bias of your messages.

thanks

---
Stefano Spinucci
 
 


Re: [zfs-discuss] Inherited quota question

2007-12-05 Thread Rahul Mehta
Hi everyone,

I have been following this thread and I feel that this has been resolved in 
ZFS version 8, as follows:

bash-3.00# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
tank 266K   263G  32.0K  /tank
tank/bm 28.8K  5.00G  28.8K  /tank/bm
tank/rm 59.1K  5.00G  30.4K  /tank/rm
tank/rm/child1  28.8K  5.00G  28.8K  /tank/rm/child1
bash-3.00# zfs create tank/rm/child2 
bash-3.00# zfs set quota=10G tank/rm/child2
bash-3.00# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
tank 304K   263G  32.0K  /tank
tank/bm 28.8K  5.00G  28.8K  /tank/bm
tank/rm 89.5K  5.00G  32.0K  /tank/rm
tank/rm/child1  28.8K  5.00G  28.8K  /tank/rm/child1
tank/rm/child2  28.8K  5.00G  28.8K  /tank/rm/child2
bash-3.00# 


Here you can see that when I try to create a child with a quota higher than 
the parent's quota, the child's available space is still capped at the 
parent's limit (its own quota property should still read 10G - compare the 
output of "zfs get quota tank/rm/child2" - it is only the AVAIL column that 
stays at 5.00G).

This is my understanding of how the problem has been resolved (correct 
me if I am wrong); can anyone please explain what exactly has been 
done here?

Also, in the above example I would like to highlight the question of how ZFS 
handles two children with the same quota limit as the parent (i.e. does it 
mean the quota property isn't inherited from the parent?).

If anyone can explain this, I would appreciate it.

Thanks
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Anton B. Rang
> what are you terming as "ZFS' incremental risk reduction"?

I'm not Bill, but I'll try to explain.

Compare a system using ZFS to one using another file system -- say, UFS, XFS, 
or ext3.

Consider which situations may lead to data loss in each case, and the 
probability of each such situation.

The difference between those two sets is the 'incremental risk reduction' 
provided by ZFS.

So, for instance, assuming you're using ZFS RAID in the first case, and a 
traditional RAID implementation in the second case:

* Single-disk failure ==> same probability of occurrence, no data loss in 
either case.

* Double-disk failure ==> same probability of occurrence, no data loss in 
either case (assuming RAID6/RAIDZ2), or data loss in either case (assuming 
RAID5/RAIDZ)

* Uncorrectable read error ==> same probability of occurrence, no data loss in 
either case

* Single-bit error on the wire ==> same, no data loss in either case

* Multi-bit error on the wire, detected by CRC ==> same, no data loss

* Multi-bit error on the wire ==>
  This is the first interesting case (since it differs).
  This is a case where ZFS will correct the error, and the standard RAID will 
not.
  The probability of occurrence is hard to compute, since it depends on the 
distribution of
  bit errors on the wire, which aren't really independent.  Roughly, though, 
since the wire
  transfers usually use a 32-bit CRC, the probability of an undetected error is 
2^-32, or
  0.000 000 023 2%.  [You could ask whether this is true for real data. It 
appears to be; see
  "Performance of Checksums and CRCs over Real Data" by Stone, Greenwald, 
Partridge & Hughes. ]

* Error in the file system code ==>
  Another interesting case, but we don't have sufficient data to gauge 
probabilities.

* Undetected error in host memory ==> same probability of occurrence, same data 
loss.

* Undetected error in RAID memory ==> same probability, but data loss in 
non-ZFS case.
  We can estimate the probability of this, but I don't have current data.
  Single-bit errors were measured at a rate of 2*10^-12 on a number of systems 
in the
  mid-1990s (see "Single Event Upset at Ground Level" by Eugene Normand).  If 
the bits
  are separated spatially (as is normally done), the probability of a 
double-bit error is
  roughly 4*10^-24, and of a triple-bit error, 8*10^-36.  So an undetected 
error is very,
  VERY unlikely, at least from RAM cell effects.  But ZFS can correct it, if it 
happens.

* Failure of facility (e.g. fire, flood, power surge) ==> same/total loss of 
data.
  [ Total loss if you don't have a backup, of course. ]

... go on as desired.
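
For a quick sanity check of those figures, here is a small Python sketch (a 
back-of-envelope reproduction of the probabilities cited above, assuming 
independent errors - which, as noted, is only roughly true on the wire):

# Undetected multi-bit wire error: corruption slips past a 32-bit CRC
# with probability ~2^-32.
p_crc_escape = 2.0 ** -32
print(f"CRC escape: {p_crc_escape:.3e} = {p_crc_escape * 100:.10f}%")
# -> 2.328e-10 = 0.0000000233%

# RAM upsets: with a single-bit upset rate of ~2e-12 and spatially
# separated bits, multi-bit error rates are roughly products of the
# single-bit rate.
p1 = 2e-12
print(f"double-bit: {p1 ** 2:.0e}, triple-bit: {p1 ** 3:.0e}")
# -> double-bit: 4e-24, triple-bit: 8e-36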
 
 


Re: [zfs-discuss] ZFS write time performance question

2007-12-05 Thread Tim Cook
what firmware revision are you at?
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Jonathan Edwards
apologies in advance for prolonging this thread .. i had considered  
taking this completely offline, but thought of a few people at least  
who might find this discussion somewhat interesting .. at the least i  
haven't seen any mention of Merkle trees yet as the nerd in me yearns  
for ..

On Dec 5, 2007, at 19:42, bill todd - aka can you guess? wrote:

>> what are you terming as "ZFS' incremental risk reduction"? ..  
>> (seems like a leading statement toward a particular assumption)
>
> Primarily its checksumming features, since other open source  
> solutions support simple disk scrubbing (which given its ability to  
> catch most deteriorating disk sectors before they become unreadable  
> probably has a greater effect on reliability than checksums in any  
> environment where the hardware hasn't been slapped together so  
> sloppily that connections are flaky).

ah .. okay - at first reading "incremental risk reduction" seems to  
imply an incomplete approach to risk .. putting various creators' and  
marketing organizations' pride issues aside for a moment, ZFS is not a  
complete risk reduction - nor should it be billed as such.  However i  
do believe that an interesting use of the merkle tree with a sha256  
hash is somewhat of an improvement over conventional volume-based data  
scrubbing techniques, since there can be a unique integration between  
the hash tree for the filesystem block layout and a hierarchical data  
validation method.  In addition to finding unknown bad areas with the  
scrub, you're also doing relatively inexpensive data validation  
checks on every read.
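
for the curious, here is a minimal Merkle-tree sketch in Python (an 
illustrative toy under stated assumptions - it is not ZFS's actual on-disk 
layout, which hangs the checksums off block pointers):

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks) -> bytes:
    """Hash each block, then hash pairs of hashes up to a single root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:            # odd count: carry the last node up
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)

# Any flipped bit in any block changes the root, so one stored root
# validates the whole tree - and on each read, verifying the path from
# a leaf up to the root is relatively cheap.
blocks[2] = b"blocK2"                 # single-character corruption
assert merkle_root(blocks) != root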

> Aside from the problems that scrubbing handles (and you need  
> scrubbing even if you have checksums, because scrubbing is what  
> helps you *avoid* data loss rather than just discover it after it's  
> too late to do anything about it), and aside from problems deriving  
> from sloppy assembly (which tend to become obvious fairly quickly,  
> though it's certainly possible for some to be more subtle),  
> checksums primarily catch things like bugs in storage firmware and  
> otherwise undetected disk read errors (which occur orders of  
> magnitude less frequently than uncorrectable read errors).

sure - we've seen many transport errors, as well as firmware  
implementation errors .. in fact with many arrays we've seen data  
corruption issues with the scrub (particularly if the checksum is  
singly stored along with the data block) -  just like spam you really  
want to eliminate false positives that could indicate corruption  
where there isn't any.  if you take some time to read the on disk  
format for ZFS you'll see that there's a tradeoff that's done in  
favor of storing more checksums in many different areas instead of  
making more room for direct block pointers.

> Robert Milkowski cited some sobering evidence that mid-range arrays  
> may have non-negligible firmware problems that ZFS could often  
> catch, but a) those are hardly 'consumer' products (to address that  
> sub-thread, which I think is what applies in Stefano's case) and b)  
> ZFS's claimed attraction for higher-end (corporate) use is its  
> ability to *eliminate* the need for such products (hence its  
> ability to catch their bugs would not apply - though I can  
> understand why people who needed to use them anyway might like to  
> have ZFS's integrity checks along for the ride, especially when  
> using less-than-fully-mature firmware).

actually on this list we've seen a number of consumer level products  
including sata controllers, and raid cards (which are also becoming  
more commonplace in the consumer realm) that can be confirmed to  
throw data errors.  Code maturity issues aside, there aren't very  
many array vendors that are open-sourcing their array firmware - and  
if you consider zfs as a feature-set that could function as a  
multi-purpose storage array (systems are cheap) - i find it refreshing that  
everything that's being done under the covers is really out in the open.

> And otherwise undetected disk errors occur with negligible  
> frequency compared with software errors that can silently trash  
> your data in ZFS cache or in application buffers (especially in PC  
> environments:  enterprise software at least tends to be more stable  
> and more carefully controlled - not to mention their typical use of  
> ECC RAM).
>
> So depending upon ZFS's checksums to protect your data in most PC  
> environments is sort of like leaving on a vacation and locking and  
> bolting the back door of your house while leaving the front door  
> wide open:  yes, a burglar is less likely to enter by the back  
> door, but thinking that the extra bolt there made you much safer is  
> likely foolish.

granted - it's not an all-in-one solution, but by combining the  
merkle tree approach with the sha256 checksum along with periodic  
data scrubbing - it's a darn good approach .. particularly since it  
also tends to cost a lot less than what you might ...

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Stefano Spinucci
> > I have budget constraints, so I can use only user-level storage.
> >
> > until I discovered zfs I used subversion and git, but none of them
> > is designed to manage gigabytes of data, some to be versioned, some
> > to be unversioned.
> >
> > I can't afford silent data corruption and, if the final response is
> > "*now* there is no *real* opensource software alternative to zfs
> > automatic checksumming and simple snapshotting" I'll be a happy
> > solaris user (for data storage), a happy linux user (for everyday
> > work), and an unhappy offline windows user (for some video-related
> > activity I can't do with linux).
> 
> Note that I don't wish to argue for/against
> zfs/billtodd but
> the comment above about "no *real* opensource software alternative
> to zfs automatic checksumming and simple snapshotting" caught my
> eye.
> 
> There is an open source alternative for archiving
> that works
> quite well.  venti has been available for a few years
> now.
> It runs on *BSD, linux, macOS & plan9 (its native
> os).  It
> uses strong crypto checksums, stored separately from
> the data
> (stored in the pointer blocks) so you get a similar
> guarantee
> against silent data corruption as ZFS.
> 
> You can back up a variety of filesystems (ufs, hfs,
> ext2fs,
> fat) or use it to to backup a file tree.  Each backup
> results
> in a single 45 byte "score" containing the checksum
> of root
> pointer block.  Using this score you can retrieve the
> entire
> backup.  Further, it stores only one copy of a data
> block
> regardless of what files or which backup it may
> belong to. In
> effect every "full backup" is an incremental backup
> (only
> changed data blocks and changed or new ptr blocks are
> stored).
> 
> So it is really an "archival" server.  You don't take
> snapshots but you do a backup.  However you can nfs
> mount a
> venti and all your backups will show up under
> directories
> like ///.
> 
> Ideally you'd store a venti on RAID storage.  You can
> even
> copy a bunch of venti to another one, you can store
> its
> arenas on CDs or DVD and so on.
> 
> It is not as fast as ZFS nor anywhere near as easy to
> use and
> its intended use is not the same as ZFS (not a
> primary
> filesystem). But for what it does, it is not bad at
> all!
> 
> Unlike ZFS, it fits best where you have a fast
> filesystem for
> speed critical use, venti for backups and RAID for
> redundancy.
> 
> Google for "venti sean dorward".  If interested, go
> to
> http://swtch.com/plan9port/ and pick up plan9port (a
> collection of programs from plan9, not just venti).
>  See
> ttp://swtch.com/plan9port/man/man8/index.html for how
> to use
> venti.

thank you for the suggestion.

after reading something about venti I like its features and its frugality (no 
fuss, no hype, only a reliable fs).

however, having touched zfs before venti, I admit I like zfs more; 
furthermore, this gives me a reason to use opensolaris and maybe tomorrow dump 
linux entirely.

I'd like to have time to play with plan9port and maybe also with inferno, but 
for now the descent can wait.


> > I think for every fully digital person their own data are vital,
> > and almost everyone would reply "NONE" to your question "what level
> > of risk the user is willing to tolerate".
> 
> NONE is not possible.  It is a question of how much
> risk you
> are willing to tolerate for what cost.  Thankfully,
> these
> days you have a variety of choices and much much
> lower cost
> for a given degree of risk compared to just a few
> years ago!

I know zero risk is impossible, but a checksumming fs with snapshots (mirrored 
on two disks used alternately) is a good compromise for me (a 
professional-home user, with data I can't - or I'd like not to - lose).

bye

---
Stefano Spinucci
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Tim Cook
Literacy has nothing to do with the glaringly obvious BS you keep spewing.  
Rather than answer a question - which couldn't be answered, because you were 
full of it - you tried to convince us all he really didn't know what he wanted.

The assumption sure made an a$$ out of someone, but you should be used to 
painting yourself into a corner by now.

There aren't free alternatives in linux or freebsd that do what zfs does, 
period.  You can keep talking in circles till you're blue in the face, or I 
suppose your fingers go numb in this case, but the fact isn't going to change.  
Yes, people do want zfs for any number of reasons, that's why they're here.

You would think the fact zfs was ported to freebsd so quickly would've been a 
good first indicator that the functionality wasn't already there.  Then again, 
the glaringly obvious seems to consistently bypass you.  I'm guessing it's 
because there's no space left in the room... your head is occupying any and all 
available.

Nevermind, your ability to admit when you're wrong is only rivaled by your 
petty attempts at insults.  

If you'd like to answer Stefano's question, feel free.  If all you can muster 
is a Microsoftesque "you don't really know what you want", I suggest giving up 
now.
 
 


Re: [zfs-discuss] ZFS write time performance question

2007-12-05 Thread Anton B. Rang
This might have been affected by the cache flush issue -- if the 3310 flushes 
its NVRAM cache to disk on SYNCHRONIZE CACHE commands, then ZFS is penalizing 
itself.  I don't know whether the 3310 firmware has been updated to support the 
SYNC_NV bit.  It wasn't obvious on Sun's site where to download the latest 
firmware.

A quick glance through the OpenSolaris code indicates that ZFS & the sd driver 
have been updated to support this bit, but I didn't track down which release 
first introduced this functionality.
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Al Hopper
On Wed, 5 Dec 2007, Al Hopper wrote:

> On Wed, 5 Dec 2007, Eric Haycraft wrote:
>
> [... reformatted  ]
>
>> Why are we still feeding this troll? Paid trolls deserve no response and 
>> there is no value in continuing this thread. (And no guys, he isn't being 
>> paid by NetApp.. think bigger) The troll will continue to try to downplay 
>> features of zfs and the community will counter...and on and on.
>
> +1 - a troll
>
> Ques: does it matter why he's a troll?
> I don't think so but my best guess is that Bill is out of work, and, due 
> to the financial hardship, has had to cut his alzheimer's
> medication dosage in half.
>
> I could be wrong, with my guess, but as long as I keep seeing this "can you 
> guess?" question, I feel compelled to answer it.  :)
>
> Please feel free to offer your best "can you guess?" answer!
>
> Regards,
>
> Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
>   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
> OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
> http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
> Graduate from "sugar-coating school"?  Sorry - I never attended! :)
>

Followup: Don't you hate it when you have to follow up your own email!

I forgot to include the reference info that backs up my "best guess". 
Ref: http://www.alz.org/alzheimers_disease_what_is_alzheimers.asp

Quote: "Alzheimer's destroys brain cells, causing problems with 
memory, thinking and behavior severe enough to affect work, lifelong 
hobbies or social life. Alzheimer's gets worse over time, and it is 
fatal."

Quote: "Is the most common form of dementia, a general term for the 
loss of memory and other intellectual abilities serious enough to 
interfere with daily life."

Quote: "Just like the rest of our bodies, our brains change as we age. 
Most of us notice some slowed thinking and occasional problems 
remembering certain things. However, serious memory loss, confusion 
and other major changes in the way our minds work are not a normal 
part of aging. They may be a sign that brain cells are failing."

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Al Hopper
On Wed, 5 Dec 2007, can you guess? wrote:

[ snip - reformatted ]

> Changing ZFS's approach to snapshots from block-oriented to 
> audit-trail-oriented, in order to pave the way for a journaled 
> rather than shadow-paged approach to transactional consistency 
> (which then makes data redistribution easier to allow rebalancing 
> across not only local disks but across multiple nodes using 
> algorithmic rather than pointer-based placement) starts to get more 
> into a 'raze it to the ground and start over' mode, though - leaving 
> plenty of room for one or more extended postscripts to 'the last 
> word in file systems'.
>
> - bill
>

Beep; Beep; Beep, Beep, Beep, beep beep beep beep-beep-beep

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> On Tue, 4 Dec 2007, Stefano Spinucci wrote:
>
> >>> On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
> >> However, ZFS is not the *only* open-source approach which may
> >> allow that to happen, so the real question becomes just how it
> >> compares with equally inexpensive current and potential
> >> alternatives (and that would make for an interesting discussion
> >> that I'm not sure I have time to initiate tonight).
> >>
> >> - bill
> >
> > Hi bill, only a question:
> > I'm an ex linux user migrated to solaris for zfs and its
> > checksumming; you say there are other open-source alternatives
> > but, for a linux end user, I'm aware only of Oracle btrfs
> > (http://oss.oracle.com/projects/btrfs/), which is a Checksumming
> > Copy on Write Filesystem not in a final state.
> >
> > what *real* alternatives are you referring to???
> >
> > if I missed something tell me, and I'll happily stay with linux
> > with my data checksummed and snapshotted.
> >
> > bye
> >
> > ---
> > Stefano Spinucci
> >
>
> Hi Stefano,
>
> Did you get a *real* answer to your question?
> Do you think that this (quoted) message is a *real* answer?

Hi, Al - I see that you're still having difficulty understanding basic English, 
and your other recent technical-content-free drivel here suggests that you 
might be better off considering a career in janitorial work than in anything 
requiring even basic analytical competence.  But I remain willing to help you 
out with English until you can find the time to take a remedial course (though 
for help with finding a vocation more consonant with your abilities you'll have 
to look elsewhere).

Let's begin by repeating the question at issue, since failing to understand 
that may be at the core of your problem:

"what *real* alternatives are you referring to???"

Despite a similar misunderstanding by your equally-illiterate associate Mr. 
Cook, that was not a question about what alternatives provided the specific 
support in which Stefano was particularly interested (though in another part of 
my response to him I did attempt to help him understand why that interest might 
be misplaced).  Rather, it was a question about what *I* had referred to in an 
earlier post of mine, as you might also have gleaned from the first sentence of 
my response to that question ("As I said in the post to which you 
responded...") had what passes for your brain been even minimally engaged when 
you read it.

My response to that question continued by listing some specific features 
(snapshots, disk scrubbing, software RAID) available in Linux and FreeBSD that 
made them viable alternatives to ZFS for enterprise use (the context of that 
earlier post that I was being questioned about).  Whether Linux and FreeBSD 
also offer management aids I admitted I didn't know - though given ZFS's own 
limitations in this area such as the need to define mirror pairs and parity 
groups explicitly and the inability to expand parity groups it's not clear that 
lack thereof would constitute a significant drawback (especially since the 
management activities that their file systems require are comparable to what 
such enterprise installations are already used to dealing with).  And, in an 
attempt to forestall yet another round of babble, I then addressed the relative 
importance (or lack thereof) of several predictable "Yes, but ZFS also offers 
wonderful feature X..." responses.

Now, not being a psychic myself, I can't state with authority that Stefano 
really meant to ask the question that he posed rather than something else.  In 
retrospect, I suppose that some of his surrounding phrasing *might* suggest 
that he was attempting (however unskillfully) to twist my comment about other 
open source solutions being similarly enterprise-capable into a provably-false 
assertion that those other solutions offered the *same* features that he 
apparently considers so critical in ZFS rather than just comparably-useful 
ones.  But that didn't cross my mind at the time:  I simply answered the 
question that he asked, and in passing also pointed out that those features 
which he apparently considered so critical might well not be.

Once again, though, I've reached the limit of my ability to dumb down the 
discussion in an attempt to reach your level:  if you still can't grasp it, 
perhaps a friend will lend a hand.

- bill
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Kyle McDonald
can you guess? wrote:
>
> Primarily its checksumming features, since other open source solutions 
> support simple disk scrubbing (which given its ability to catch most 
> deteriorating disk sectors before they become unreadable probably has a 
> greater effect on reliability than checksums in any environment where the 
> hardware hasn't been slapped together so sloppily that connections are flaky).
>   
 From what I've read on the subject, that premise seems bad from the 
start.  I don't believe that scrubbing will catch all the types of 
errors that checksumming will.  There is a category of errors that are 
not caused by firmware, or any type of software: the hardware just 
doesn't write or read the correct bit value this time around.  Without a 
checksum there's no way for the firmware to know, and next time it very 
well may write or read the correct bit value from the exact same spot on 
the disk, so scrubbing is not going to flag this sector as 'bad'.

Now you may claim that this type of error happens so infrequently that 
it's not worth it.  You may think so since the number of bits you need to 
read or write to experience this is huge.  However, hard disk sizes are 
still increasing exponentially, and the data we users are storing on 
them is too.  I don't believe that the disk makers are making 
corresponding improvements in the bit error rates.  Therefore while it 
may not be a huge benefit today, it's good we have it today, because 
its value will increase as time goes on and drive sizes and data sizes 
increase.
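
To make that scaling argument concrete, a back-of-envelope sketch in Python 
(the error rate below is an assumed, purely hypothetical figure; the point is 
the trend, not the absolute numbers):

import math

ber = 1e-21        # assumed undetected errors per bit read (hypothetical)
bits_per_tb = 8e12

for tb_read in (1, 100, 10_000):
    bits = tb_read * bits_per_tb
    # P(at least one silent error) = 1 - (1 - ber)^bits, computed stably
    p = -math.expm1(bits * math.log1p(-ber))
    print(f"{tb_read:>6} TB read -> P(silent error) ~ {p:.1e}")

With the error rate held fixed, every jump in the amount of data read buys a 
proportional jump in the chance of a silent error - which is the sense in 
which the value of checksumming grows with drive and data sizes.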
> Aside from the problems that scrubbing handles (and you need scrubbing even 
> if you have checksums, because scrubbing is what helps you *avoid* data loss 
> rather than just discover it after it's too late to do anything about it), 
> and aside from problems 
Again I think you're wrong on the basis for your point.  The checksumming 
in ZFS (if I understand it correctly) isn't used only for detecting the 
problem.  If the ZFS pool has any redundancy at all, those same checksums 
can be used to repair that same data, thus *avoiding* the data loss.  I 
agree that scrubbing is still a good idea, but as discussed above it 
won't catch (and avoid) all the types of errors that checksumming can 
catch *and repair*.
> deriving from sloppy assembly (which tend to become obvious fairly quickly, 
> though it's certainly possible for some to be more subtle), checksums 
> primarily catch things like bugs in storage firmware and otherwise undetected 
> disk read errors (which occur orders of magnitude less frequently than 
> uncorrectable read errors).
>   
Sloppy assembly isn't the only place these errors can occur.  They can 
occur between the head and the platter, even with the best drive and 
controller firmware.
> Robert Milkowski cited some sobering evidence that mid-range arrays may have 
> non-negligible firmware problems that ZFS could often catch, but a) those are 
> hardly 'consumer' products (to address that sub-thread, which I think is what 
> applies in Stefano's case) and b) ZFS's claimed attraction for higher-end 
> (corporate) use is its ability to *eliminate* the need for such products 
> (hence its ability to catch their bugs would not apply - though I can 
> understand why people who needed to use them anyway might like to have ZFS's 
> integrity checks along for the ride, especially when using 
> less-than-fully-mature firmware).
>
>   
Every drive has firmware too. If it can be used to detect and repair 
array firmware problems, then it can be used by consumers to detect and 
repair drive firmware problems too.
> And otherwise undetected disk errors occur with negligible frequency compared 
> with software errors that can silently trash your data in ZFS cache or in 
> application buffers (especially in PC environments:  enterprise software at 
> least tends to be more stable and more carefully controlled - not to mention 
> their typical use of ECC RAM).
>
>   
As I wrote above, the undetected disk error rate is not improving 
(AFAIK) as fast as the disk sizes and data sizes that these drives are 
used for.  Therefore the value of this protection is increasing all the time.

Sure, it's true that something else that could trash your data without 
checksumming can still trash your data with it.  But making sure that the 
data gets unmangled when it can be is still worth something, and the 
improvements you point out as needed in other components would be 
pointless (according to your argument) if something like ZFS didn't also 
exist.

> So depending upon ZFS's checksums to protect your data in most PC 
> environments is sort of like leaving on a vacation and locking and bolting 
> the back door of your house while leaving the front door wide open:  yes, a 
> burglar is less likely to enter by the back door, but thinking that the extra 
> bolt there made you much safer is likely foolish.
>
>> .. are you just trying to say that without multiple copies of data
>> in multiple physical locations you're not really accomplishing a
>> more complete risk reduction

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Bakul Shah
> I have budget constraints, so I can use only user-level storage.
>
> until I discovered zfs I used subversion and git, but none of them is
> designed to manage gigabytes of data, some to be versioned, some to
> be unversioned.
>
> I can't afford silent data corruption and, if the final response is
> "*now* there is no *real* opensource software alternative to zfs
> automatic checksumming and simple snapshotting" I'll be a happy
> solaris user (for data storage), a happy linux user (for everyday
> work), and an unhappy offline windows user (for some video-related
> activity I can't do with linux).

Note that I don't wish to argue for/against zfs/billtodd but
the comment above about "no *real* opensource software alternative
to zfs automatic checksumming and simple snapshotting" caught my eye.

There is an open source alternative for archiving that works
quite well.  venti has been available for a few years now.
It runs on *BSD, linux, macOS & plan9 (its native os).  It
uses strong crypto checksums, stored separately from the data
(stored in the pointer blocks) so you get a similar guarantee
against silent data corruption as ZFS.

You can back up a variety of filesystems (ufs, hfs, ext2fs,
fat) or use it to to backup a file tree.  Each backup results
in a single 45 byte "score" containing the checksum of root
pointer block.  Using this score you can retrieve the entire
backup.  Further, it stores only one copy of a data block
regardless of what files or which backup it may belong to. In
effect every "full backup" is an incremental backup (only
changed data blocks and changed or new ptr blocks are
stored).

So it is really an "archival" server.  You don't take
snapshots but you do a backup.  However you can nfs mount a
venti and all your backups will show up under directories
like ///.

Ideally you'd store a venti on RAID storage.  You can even
copy a bunch of venti to another one, you can store its
arenas on CDs or DVD and so on.

It is not as fast as ZFS nor anywhere near as easy to use and
its intended use is not the same as ZFS (not a primary
filesystem). But for what it does, it is not bad at all!

Unlike ZFS, it fits best where you have a fast filesystem for
speed critical use, venti for backups and RAID for
redundancy.

Google for "venti sean dorward".  If interested, go to
http://swtch.com/plan9port/ and pick up plan9port (a
collection of programs from plan9, not just venti).  See
http://swtch.com/plan9port/man/man8/index.html for how to use
venti.
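
For a feel of how scores work, here is a toy content-addressed store in the 
spirit of venti, in Python (illustrative only - real venti has append-only 
arenas, an index, and pointer blocks, none of which this toy models):

import hashlib

class TinyVenti:
    def __init__(self):
        self._store = {}              # score -> block

    def write(self, block: bytes) -> bytes:
        """Store a block and return its score.  Identical blocks get
        identical scores, so they are stored only once - which is why
        every 'full backup' costs only the blocks that changed."""
        score = hashlib.sha1(block).digest()
        self._store.setdefault(score, block)
        return score

    def read(self, score: bytes) -> bytes:
        block = self._store[score]
        # The address *is* the checksum, so corruption can't go
        # unnoticed on the way back out.
        assert hashlib.sha1(block).digest() == score
        return block

v = TinyVenti()
s1 = v.write(b"some data")
s2 = v.write(b"some data")            # dedup: same score, no new space
assert s1 == s2 and v.read(s1) == b"some data"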

> I think for every fully digital person their own data are vital, and
> almost everyone would reply "NONE" to your question "what level of
> risk the user is willing to tolerate".

NONE is not possible.  It is a question of how much risk you
are willing to tolerate for what cost.  Thankfully, these
days you have a variety of choices and much much lower cost
for a given degree of risk compared to just a few years ago!


Re: [zfs-discuss] zfs mirroring question

2007-12-05 Thread Anton B. Rang
The file systems are striped between the two mirrors.  (If your disks are A, B, 
C, D then a single file's blocks would reside on disks A+B, then C+D, then A+B 
again.)

If you lose A and B, or C and D, you lose the whole pool.  (Hence if you have 
two power supplies, for instance, you'd probably want A+C to share a power 
supply, and B+D.)
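
In a simplified model (illustrative - ZFS actually stripes dynamically across 
vdevs rather than in strict round-robin, but the failure behavior is the same 
idea):

PAIRS = [("A", "B"), ("C", "D")]      # two two-way mirrors

def disks_for_block(n: int):
    # each block lives on both disks of one pair; consecutive blocks
    # alternate between the pairs
    return PAIRS[n % len(PAIRS)]

for n in range(4):
    print(f"block {n} -> disks {disks_for_block(n)}")

# Losing both disks of one pair loses every other block - i.e. the
# whole pool; losing one disk from each pair leaves every block readable.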
 
 


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> > I was trying to get you to evaluate ZFS's incremental risk
> > reduction *quantitatively* (and if you actually did so you'd likely
> > be surprised at how little difference it makes - at least if you're
> > at all rational about assessing it).
>
> ok .. i'll bite since there's no ignore feature on the list yet:
>
> what are you terming as "ZFS' incremental risk reduction"? .. (seems
> like a leading statement toward a particular assumption)

Primarily its checksumming features, since other open source solutions support 
simple disk scrubbing (which given its ability to catch most deteriorating disk 
sectors before they become unreadable probably has a greater effect on 
reliability than checksums in any environment where the hardware hasn't been 
slapped together so sloppily that connections are flaky).

Aside from the problems that scrubbing handles (and you need scrubbing even if 
you have checksums, because scrubbing is what helps you *avoid* data loss 
rather than just discover it after it's too late to do anything about it), and 
aside from problems deriving from sloppy assembly (which tend to become obvious 
fairly quickly, though it's certainly possible for some to be more subtle), 
checksums primarily catch things like bugs in storage firmware and otherwise 
undetected disk read errors (which occur orders of magnitude less frequently 
than uncorrectable read errors).

Robert Milkowski cited some sobering evidence that mid-range arrays may have 
non-negligible firmware problems that ZFS could often catch, but a) those are 
hardly 'consumer' products (to address that sub-thread, which I think is what 
applies in Stefano's case) and b) ZFS's claimed attraction for higher-end 
(corporate) use is its ability to *eliminate* the need for such products (hence 
its ability to catch their bugs would not apply - though I can understand why 
people who needed to use them anyway might like to have ZFS's integrity checks 
along for the ride, especially when using less-than-fully-mature firmware).

And otherwise undetected disk errors occur with negligible frequency compared 
with software errors that can silently trash your data in ZFS cache or in 
application buffers (especially in PC environments:  enterprise software at 
least tends to be more stable and more carefully controlled - not to mention 
their typical use of ECC RAM).

So depending upon ZFS's checksums to protect your data in most PC environments 
is sort of like leaving on a vacation and locking and bolting the back door of 
your house while leaving the front door wide open:  yes, a burglar is less 
likely to enter by the back door, but thinking that the extra bolt there made 
you much safer is likely foolish.
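
To put rough numbers on that analogy - the rates below are invented solely to 
show the shape of the argument, not measurements:

# Hypothetical annual data-loss risks for a PC (invented numbers).
risks = {
    "software bug / user error, no backup": 1e-3,
    "whole-disk failure, no redundancy":    5e-2,
    "silent error checksums would catch":   1e-8,
}
total = sum(risks.values())
reduced = total - risks["silent error checksums would catch"]
print(f"total risk:         {total:.4e}")
print(f"after checksumming: {reduced:.4e}")
print(f"relative reduction: {(total - reduced) / total:.1e}")   # ~2e-7

If the dominant risks stay unaddressed, zeroing out the smallest term barely 
moves the total; address the big ones first and the same term becomes a 
meaningful share of what remains.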

> .. are you just trying to say that without multiple copies of data
> in multiple physical locations you're not really accomplishing a
> more complete risk reduction

What I'm saying is that if you *really* care about your data, then you need to 
be willing to make the effort to lock and bolt the front door as well as the 
back door and install an alarm system:  if you do that, *then* ZFS's additional 
protection mechanisms may start to become significant (because you're 
eliminated the higher-probability risks and ZFS's extra protection then 
actually reduces the *remaining* risk by a significant percentage).

Conversely, if you don't care enough about your data to take those extra steps, 
then adding ZFS's incremental protection won't reduce your net risk by a 
significant percentage (because the other risks that still remain are so much 
larger).

Was my point really that unclear before?  It seems as if this must be at least 
the third or fourth time that I've explained it.

> 
> yes i have read this thread, as well as many of your
> other posts  
> around usenet and such .. in general i find your tone
> to be somewhat  
> demeaning (slightly rude too - but - eh, who's
> counting?  i'm none to  
> judge)

As I've said multiple times before, I respond to people in the manner they seem 
to deserve.  This thread has gone on long enough that there's little excuse for 
continued obtuseness at this point, but I still attempt to be pleasant as long 
as I'm not responding to something verging on being hostile.

> - now, you do know that we are currently in an era of collaboration
> instead of deconstruction right?

Can't tell it from the political climate, and corporations seem to be following 
that lead (I guess they've finally stopped just gazing in slack-jawed disbelief 
at what this administration is getting away with and decided to cash in on the 
opportunity themselves).

Or were you referring to something else?

> .. so i'd love to see the improvements on the many shortcomings
> you're pointing to and passionate about written up, proposed, and
> freely implemented :)

Then ask the ZFS developers to get on the stick:  fixing the fragmentation 
problem discussed elsewhere should be easy, and ...

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Al Hopper
On Wed, 5 Dec 2007, Eric Haycraft wrote:

[... reformatted  ]

> Why are we still feeding this troll? Paid trolls deserve no response 
> and there is no value in continuing this thread. (And no guys, he 
> isn't being paid by NetApp.. think bigger) The troll will continue 
> to try to downplay features of zfs and the community will 
> counter...and on and on.

+1 - a troll

Ques: does it matter why he's a troll?
I don't think so but my best guess is that Bill is out of work, 
and, due to the financial hardship, has had to cut his alzheimer's
medication dosage in half.

I could be wrong, with my guess, but as long as I keep seeing this 
"can you guess?" question, I feel compelled to answer it.  :)

Please feel free to offer your best "can you guess?" answer!

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Al Hopper
On Tue, 4 Dec 2007, Stefano Spinucci wrote:

>>> On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
>> However, ZFS is not the *only* open-source approach
>> which may allow that to happen, so the real question
>> becomes just how it compares with equally inexpensive
>> current and potential alternatives (and that would
>> make for an interesting discussion that I'm not sure
>> I have time to initiate tonight).
>>
>> - bill
>
> Hi bill, only a question:
> I'm an ex linux user migrated to solaris for zfs and its checksumming; you 
> say there are other open-source alternatives but, for a linux end user, I'm 
> aware only of Oracle btrfs (http://oss.oracle.com/projects/btrfs/), which is 
> a Checksumming Copy on Write Filesystem not in a final state.
>
> what *real* alternatives are you referring to???
>
> if I missed something tell me, and I'll happily stay with linux with my data 
> checksummed and snapshotted.
>
> bye
>
> ---
> Stefano Spinucci
>

Hi Stefano,

Did you get a *real* answer to your question?
Do you think that this (quoted) message is a *real* answer?

 can you guess? ---

Message-ID: <[EMAIL PROTECTED]>
Date: Tue, 04 Dec 2007 22:19:54 PST
From: can you guess? <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org
In-Reply-To: <[EMAIL PROTECTED]>
Subject: Re: [zfs-discuss] Yager on ZFS
List-Id: 

> > > On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
> > However, ZFS is not the *only* open-source approach which may allow
> > that to happen, so the real question becomes just how it compares
> > with equally inexpensive current and potential alternatives (and
> > that would make for an interesting discussion that I'm not sure I
> > have time to initiate tonight).
> > 
> > - bill
> 
> Hi bill, only a question:
> I'm an ex linux user migrated to solaris for zfs and
> its checksumming;

So the question is:  do you really need that feature (please quantify that need 
if you think you do), or do you just like it because it makes you feel all warm 
and safe?

Warm and safe is definitely a nice feeling, of course, but out in the real 
world of corporate purchasing it's just one feature out of many 'nice to haves' 
- and not necessarily the most important.  In particular, if the *actual* risk 
reduction turns out to be relatively minor, that nice 'feeling' doesn't carry 
all that much weight.

> you say there are other open-source alternatives but, for a linux
> end user, I'm aware only of Oracle btrfs
> (http://oss.oracle.com/projects/btrfs/), which is a Checksumming Copy
> on Write Filesystem not in a final state.
> 
> what *real* alternatives are you referring to???

As I said in the post to which you responded, I consider ZFS's ease of 
management to be more important (given that even in high-end installations 
storage management costs dwarf storage equipment costs) than its real but 
relatively marginal reliability edge, and that's the context in which I made my 
comment about alternatives (though even there if ZFS continues to require 
definition of mirror pairs and parity groups for redundancy that reduces its 
ease-of-management edge, as does its limitation to a single host system in 
terms of ease-of-scaling).

Specifically, features like snapshots, disk scrubbing (to improve reliability 
by dramatically reducing the likelihood of encountering an unreadable sector 
during a RAID rebuild), and software RAID (to reduce hardware costs) have been 
available for some time in Linux and FreeBSD, and canned management aids would 
not be difficult to develop if they don't exist already.  The dreaded 'write 
hole' in software RAID is a relatively minor exposure (since it only 
compromises data if a system crash or UPS failure - both rare events in an 
enterprise setting - sneaks in between a data write and the corresponding 
parity update and then, before the array has restored parity consistency in the 
background, a disk dies) - and that exposure can be reduced to seconds by a 
minuscule amount of NVRAM that remembers which writes were active (or to zero 
with somewhat more NVRAM to remember the updates themselves in an inexpensive 
hardware solution).
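
The write hole itself is easy to demonstrate in a toy model (Python; one byte 
per 'disk', purely pedagogical):

from functools import reduce

data = [0x11, 0x22, 0x33]                   # three one-byte data "disks"
parity = reduce(lambda a, b: a ^ b, data)   # XOR parity: consistent

data[0] = 0x99   # new data lands on disk 0...
# ...crash here: the matching parity update never happens (the hole)

# Later, disk 1 dies; rebuild its byte from survivors + stale parity:
rebuilt = parity ^ data[0] ^ data[2]
print(hex(rebuilt))                   # 0xaa - not the 0x22 stored there
assert rebuilt != 0x22                # an *unrelated* disk's data is lost

Note that the corrupted reconstruction hits data that wasn't even being 
written - which is why remembering in-flight writes in a little NVRAM, as 
described above, closes the window.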

The real question is usually what level of risk an enterprise storage user is 
willing to tolerate.  At the paranoid end of the scale reside the users who 
will accept nothing less than z-series or Tandem-/Stratus-style end-to-end 
hardware checking from the processor traces on out - which rules out most 
environments that ZFS runs in (unless Sun's N-series telco products might fill 
the bill:  I'm not very familiar with them).  And once you get down into users 
of commodity processors, the risk level of using stable and robust file systems 
that lack ZFS's additional integrity checks is comparable to the risk inherent 
in the rest of the system (at least if the systems are carefully constructed, 
which should be a given in an enterprise setting) - so other open-source 
solutions are definitely in play there ...

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Tim Cook
That would require coming up with something solid.  Much like his 
generalization that snapshotting and checksumming already exist for linux - 
yet when he was called out, he responded with a 20-page rant because no such 
solution exists.  It's far easier to condescend when called out on your BS 
than to actually answer the question.  If there were such a solution 
available, it would've been a one-line response.

IE: sure, xfs has checksumming and snapshotting today in linux!!111  

But alas, nothing does exist, which is exactly why there's so much interest in 
zfs.  "but most consumers won't need what it provides" is a cop-out, as he 
knows.  Just like *most consumers* don't need more than 128kbit/sec of 
bandwidth, and *most consumers* didn't need bigger than a 10MB hard drive.  It 
turns out people tend to use the technology AFTER it's developed.  OF COURSE 
the need is a niche right now, just like every other technology before it.  It 
HAS to be by the very nature that people can't use what they don't have.

10 years ago I couldn't download an entire CD without waiting a couple days, 
and shockingly enough, there was no *consumer need* to do so.  Go figure, 10 
years later, the bandwidth is there, and there's a million other technologies 
built up around it.

But I digress, he's already assured us all he loves ZFS and isn't just trolling 
these forums.  Clearly that statement trumps any and all actions that preceded 
it.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Seperate ZIL

2007-12-05 Thread Al Hopper
On Wed, 5 Dec 2007, Brian Hechinger wrote:

> I will be putting 4 500GB SATA disks in my Ultra80.  I currently have
> two 10K rpm 73G SCSI disks in it with 10G for the OS (UFS) and the
> remaining space for a ZFS pool (the two remaining partitions are setup
> in a mirror).
>
> Would it be worth my while to move all the data off of the zfs partitions
> of the 73G disks and use those partitions for ZIL?  Would I really gain
> any performance from that?

Hi Brian,

I don't think you'll see any worthwhile improvement.  For a ZIL 
device, you really need something like a (small) SAS 15k RPM 3.5" 
drive - which will sustain 700 to 900 IOPS (my number - open to 
argument) - or a RAM disk or one of these [1].
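
If you do want to experiment anyway, a dedicated log device is easy to add - 
a sketch only, with hypothetical pool and device names:

   # zpool add tank log c2t0d0    (dedicate c2t0d0 as a separate ZIL device)
   # zpool status tank            (the log device shows up in its own section)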

10K RPM SCSI disks will get (best case) 350 to 400 IOPS.  Remember, 
the main issue with legacy SCSI is that (SCSI) commands are sent 
8-bits wide at 5Mbits/Sec - for backwards compatibility.  You simply 
can't send enough commands over a SCSI bus to busy out a modern 10k 
RPM SCSI drive.

BTW, this is easy to verify: just buy the same drive, one with a SCSI 
interface and one with a SAS (or FC) interface, and in about 5 
minutes, you'll be looking for a victim ^H^H^H^H^H^H buyer for the 
SCSI drive.

PS: LsiLogic just updated their SAS HBAs and have a couple of products 
very reasonably priced IMHO.  Combine that with a (single ?) Fujitsu 
MAX3xxxRC (where xxx represents the size) and you'll be wearing a big 
smile every time you work on a system so equipped.

Tell Santa that you want an LsiLogic SAS HBA and some SAS disks for 
Xmas! :)

[1] Finally, someone built a flash SSD that rocks (and they know how 
fast it is judging by the pricetag):
http://www.tomshardware.com/2007/11/21/mtron_ssd_32_gb/
http://www.anandtech.com/storage/showdoc.aspx?i=3167

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Jonathan Edwards

On Dec 5, 2007, at 17:50, can you guess? wrote:

>> my personal-professional data are important (this is
>> my valuation, and it's an assumption you can't
>> dispute).
>
> Nor was I attempting to:  I was trying to get you to evaluate ZFS's  
> incremental risk reduction *quantitatively* (and if you actually  
> did so you'd likely be surprised at how little difference it makes  
> - at least if you're at all rational about assessing it).

ok .. i'll bite since there's no ignore feature on the list yet:

what are you terming as "ZFS' incremental risk reduction"? .. (seems  
like a leading statement toward a particular assumption) .. are you  
just trying to say that without multiple copies of data in multiple  
physical locations you're not really accomplishing a more complete  
risk reduction?

yes i have read this thread, as well as many of your other posts  
around usenet and such .. in general i find your tone to be somewhat  
demeaning (slightly rude too - but - eh, who's counting?  i'm not one to  
judge) - now, you do know that we are currently in an era of  
collaboration instead of deconstruction right? .. so i'd love to see  
the improvements on the many shortcomings you're pointing to and are  
passionate about written up, proposed, and freely implemented :)

---
.je
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Tim Spriggs


can you guess? wrote:
>> he isn't being
>> paid by NetApp.. think bigger
>
> O frabjous day!  Yet *another* self-professed psychic, but one whose internal 
> voices offer different counsel.
>
> While I don't have to be psychic myself to know that they're *all* wrong 
> (that's an advantage of fact-based rather than faith-based opinions), a 
> battle-of-the-incompetents would be amusing to watch (unless it took place in 
> a realm which no mere mortals could visit).
>
> - bill

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> he isn't being
> paid by NetApp.. think bigger

O frabjous day!  Yet *another* self-professed psychic, but one whose internal 
voices offer different counsel.

While I don't have to be psychic myself to know that they're *all* wrong 
(that's an advantage of fact-based rather than faith-based opinions), a 
battle-of-the-incompetents would be amusing to watch (unless it took place in a 
realm which no mere mortals could visit).

- bill
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> my personal-professional data are important (this is
> my valuation, and it's an assumption you can't
> dispute).

Nor was I attempting to:  I was trying to get you to evaluate ZFS's incremental 
risk reduction *quantitatively* (and if you actually did so you'd likely be 
surprised at how little difference it makes - at least if you're at all 
rational about assessing it).

...

> I think for every fully digital people own data are
> vital, and almost everyone would reply "NONE" at your
> question "what level of risk user is willing to
> tolerate".

The fact that appears to escape people like you is that there is *always* some 
risk, and you *have* to tolerate it (or not save anything at all).  Therefore 
the issue changes to just how *much* risk you're willing to tolerate for a 
given amount of effort.

(There's also always the possibility of silent data corruption, even if you use 
ZFS - because it only eliminates *some* of the causes of such corruption.  If 
your data is corrupted in RAM during the period when ZFS is not watching over 
it, for example, you're SOL.)

How to *really* protect valuable data has already been thoroughly discussed in 
this thread, though you don't appear to have understood it.  It takes multiple 
copies (most of them off-line), in multiple locations, with verification of 
every copy operation and occasional re-verification of the stored content - and 
ZFS helps with only part of one of these strategies (reverifying the integrity 
of your on-line copy).  If you don't take the rest of the steps, ZFS's 
incremental protection is virtually useless, because the risk of data loss from 
causes that ZFS doesn't protect against is so much higher than the incremental 
protection that it provides (i.e., you may *feel* noticeably better protected 
but you're just kidding yourself).  If you *do* take the rest of the steps, 
then it takes little additional effort to revalidate your on-line content as 
well as the off-line copies, so all ZFS provides is a small reduction in effort 
to achieve the same (very respectable) level of protection that other 
solutions can achieve when manual steps are taken to reverify the on-line copy 
as well as the off-line copies.
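
For the verification steps nothing fancy is needed - a sketch only, using the 
Solaris digest(1) utility with obviously hypothetical paths:

   $ digest -a sha256 /data/archive.tar > /backup1/archive.tar.sha256
                       (record a checksum when the copy is made)
   $ digest -a sha256 /backup1/archive.tar
                       (re-run it later and compare against the recorded value)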

Try to step out of your "my data is valuable" rut and wrap your mind around the 
fact that ZFS's marginal contribution to its protection, real though it may be, 
just isn't very significant in most environments compared to the rest of the 
protection solution that it *doesn't* help with.  That's why I encouraged you 
to *quantify* the effect that ZFS's protection features have in *your* 
environment (along with its other risks that ZFS can't ameliorate):  until you 
do that, you're just another fanboy (not that there's anything wrong with that, 
as long as you don't try to present your personal beliefs as something of more 
objective validity).

- bill
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Seperate ZIL

2007-12-05 Thread Robert Milkowski
Hello Brian,

Wednesday, December 5, 2007, 9:15:10 PM, you wrote:

BH> I will be putting 4 500GB SATA disks in my Ultra80.  I currently have
BH> two 10K rpm 73G SCSI disks in it with 10G for the OS (UFS) and the
BH> remaining space for a ZFS pool (the two remaining partitions are setup
BH> in a mirror).

BH> Would it be worth my while to move all the data off of the zfs partitions
BH> of the 73G disks and use those partitions for ZIL?  Would I really gain
BH> any performance from that?

BH> -brian

In a very specific scenario - maybe.
In most cases - I doubt it; I would say you are more likely to lose some
performance.

-- 
Best regards,
 Robert Milkowski   mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with Memory Sticks

2007-12-05 Thread Paul Gress
Constantin Gonzalez wrote:
> Hi Paul,
>
> yes, ZFS is platform agnostic and I know it works in SANs.
>
> For the USB stick case, you may have run into labeling issues. Maybe
> Solaris SPARC did not recognize the x64 type label on the disk (which
> is strange, because it should...).
>
> Did you try making sure that ZFS creates an EFI label on the disk?
> You can check this by running zpool status and then the devices should
> look like c6t0d0 without the s0 part.
>
> If you want to force this, you can create an EFI label on the USB disk
> by hand by saying fdisk -E /dev/rdsk/cxtxdx.
>
> Hope this helps,
>Constantin
>
>
OK, tried some things you said.

This is the volume formatted on the PC (W2100z); the volume is named 
"Radical-Vol":

# /usr/sbin/zpool import -f Radical-Vol
cannot import 'Radical-Vol': one or more devices is currently unavailable

# /usr/sbin/zpool import
  pool: Radical-Vol
id: 3051993120652382125
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

Radical-Vol  UNAVAIL   insufficient replicas
  c7t0d0s0  UNAVAIL   corrupted data



Here's the device:

$ rmformat
Looking for devices...
 1. Logical Node: /dev/rdsk/c1t2d0s2
Physical Node: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0
Connected Device: SONY DVD RW DRU-720A  JY02
Device Type: 
 2. Logical Node: /dev/rdsk/c7t0d0s2
Physical Node: /[EMAIL PROTECTED],70/[EMAIL PROTECTED],2/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Connected Device: USB 2.0  Flash Disk   1.00
Device Type: Removable

Following your command:

$ /opt/sfw/bin/sudo /usr/sbin/zpool status
  pool: Rad_Disk_1
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
Rad_Disk_1  ONLINE   0 0 0
  c0t1d0ONLINE   0 0 0

errors: No known data errors




The stick's pool obviously doesn't show up here, since it isn't mounted.

And last, the fdisk command:


# fdisk -E /dev/rdsk/c7t0d0
fdisk: Cannot stat device /dev/rdsk/c7t0d0



But this device works currently on my Solaris PC's, the W2100z and a 
laptop of mine.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs mirroring question

2007-12-05 Thread Dick Davies
On Dec 5, 2007 9:54 PM, Brian Lionberger <[EMAIL PROTECTED]> wrote:
> I create two ZFS filesystems on one pool of four disks with two mirrors, 
> such as...
>
> zpool create tank mirror disk1 disk2 mirror disk3 disk4
>
> zfs create tank/fs1
> zfs create tank/fs2
>
> Are fs1 and fs2 striped across all four disks?

Yes - they're striped across both mirrors (and so across all 4 disks).

> If two disks fail that represent a 2-way mirror, do I lose data?

Hell yes.
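
You can watch the striping for yourself - a sketch, using the pool from your 
example (disk names hypothetical):

# zpool status tank         (shows the two mirror vdevs)
# zpool iostat -v tank 5    (per-vdev I/O: writes land on both mirrors)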


-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs mirroring question

2007-12-05 Thread Brian Lionberger
I create two ZFS filesystems on one pool of four disks with two mirrors, such 
as...

zpool create tank mirror disk1 disk2 mirror disk3 disk4

zfs create tank/fs1
zfs create tank/fs2

Are fs1 and fs2 striped across all four disks?
If two disks fail that represent a 2-way mirror, do I lose data?

Brian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Seperate ZIL

2007-12-05 Thread Brian Hechinger
I will be putting 4 500GB SATA disks in my Ultra80.  I currently have
two 10K rpm 73G SCSI disks in it with 10G for the OS (UFS) and the
remaining space for a ZFS pool (the two remaining partitions are setup
in a mirror).

Would it be worth my while to move all the data off of the zfs partitions
of the 73G disks and use those partitions for ZIL?  Would I really gain
any performance from that?

-brian
-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance with Oracle

2007-12-05 Thread Jason J. W. Williams
Seconded. Redundant controllers can mean you get one controller that
locks them both up, as much as it means you've got a backup.

Best Regards,
Jason

On Mar 21, 2007 4:03 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> JS wrote:
> > I'd definitely prefer owning a sort of SAN solution that would basically 
> > just be trays of JBODs exported through redundant controllers, with 
> > enterprise level service. The world is still playing catch up to integrate 
> > with all the possibilities of zfs.
>
> It was called the A5000, later A5100 and A5200.  I've still
> got the scars and Torrey looks like one of the X-men.  If you think
> that a disk drive vendor can write better code than an OS/systems
> vendor, then you're due for a sad realization.
>   -- richard
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Eric Haycraft
Why are we still feeding this troll? Paid trolls deserve no response and there 
is no value in continuing this thread. (And no guys, he isn't being paid by 
NetApp.. think bigger) The troll will continue to try to downplay features of 
zfs and the community will counter...and on and on.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Stefano Spinucci
> > > > On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
> As I said in the post to which you responded, I
> consider ZFS's ease of management to be more
> important (given that even in high-end installations
> storage management costs dwarf storage equipment
> costs) than its real but relatively marginal
> reliability edge, and that's the context in which I
> made my comment about alternatives (though even there
> if ZFS continues to require definition of mirror
> pairs and parity groups for redundancy that reduces
> its ease-of-management edge, as does its limitation
> to a single host system in terms of
> ease-of-scaling).
> 
> Specifically, features like snapshots, disk scrubbing
> (to improve reliability by dramatically reducing the
> likelihood of encountering an unreadable sector
> during a RAID rebuild), and software RAID (to reduce
> hardware costs) have been available for some time in
> Linux and FreeBSD, and canned management aids would
> not be difficult to develop if they don't exist
> already.  The dreaded 'write hole' in software RAID
> is a relatively minor exposure (since it only
> compromises data if a system crash or UPS failure -
> both rare events in an enterprise setting - sneaks in
> between a data write and the corresponding parity
> update and then, before the array has restored parity
> consistency in the background, a disk dies) - and
> that exposure can be reduced to seconds by a
> minuscule amount of NVRAM that remembers which writes
> were active (or to zero with somewhat more NVRAM to
> remember the updates themselves in an inexpensive
> hardware solution).
> 
> The real question is usually what level of risk an
> enterprise storage user is willing to tolerate.  At
> the paranoid end of the scale reside the users who
> will accept nothing less than z-series or
> Tandem-/Stratus-style end-to-end hardware checking
> from the processor traces on out - which rules out
> most environments that ZFS runs in (unless Sun's
> N-series telco products might fill the bill:  I'm not
> very familiar with them).  And once you get down into
> users of commodity processors, the risk level of
> using stable and robust file systems that lack ZFS's
> additional integrity checks is comparable to the risk
> inherent in the rest of the system (at least if the
> systems are carefully constructed, which should be a
> given in an enterprise setting) - so other
> open-source solutions are definitely in play there.
> 
> All things being equal, of course users would opt for
> even marginally higher reliability - but all things
> are never equal.  If using ZFS would require changing
> platforms or changing code, that's almost certainly a
> show-stopper for enterprise users.  If using ZFS
> would compromise performance or require changes in
> management practices (e.g., to accommodate
> file-system-level quotas), those are at least
> significant impediments.  In other words, ZFS has its
> pluses and minuses just as other open-source file
> systems do, and they *all* have the potential to
> start edging out expensive proprietary solutions in
> *some* applications (and in fact have already started
> to do so).
> 
> When we move from 'current' to 'potential'
> alternatives, the scope for competition widens.
> Because it's certainly possible to create a file
> system that has all of ZFS's added reliability but
> runs faster, scales better, incorporates additional
> useful features, and is easier to manage.  That
> discussion is the one that would take a lot of time
> to delve into adequately (and might be considered
> off topic for this forum - which is why I've tried
> to concentrate here on improvements that ZFS could
> actually incorporate without turning it upside
>  down).
> 
> - bill

my personal-professional data are important (this is my valuation, and it's an 
assumption you can't dispute).

my data are only digital and rapidly changing, and thus I cannot print them.

I have budget constraints, so I can use only user-level storage.

until I discovered zfs I used subversion and git, but neither of them is 
designed to manage gigabytes of data, some to be versioned, some to be 
unversioned.

I can't afford silent data corruption and, if the final response is "*now* 
there is no *real* opensource software alternative to zfs automatic 
checksumming and simple snapshotting", I'll be a happy solaris user (for data 
storage), a happy linux user (for everyday work), and an unhappy offline 
windows user (for some video-related activity I can't do with linux).

PS

I think for every fully digital person one's own data are vital, and almost 
everyone would reply "NONE" to your question "what level of risk the user is 
willing to tolerate".

bye

---
Stefano Spinucci
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance with Oracle

2007-12-05 Thread Selim Daoud
basically you would add a ZFS redundancy level if you want to be
protected from silent data corruption (data corruption that could
occur somewhere along the IO path):

- XP12000 has all the features to protect from hardware failure (no-SPOF)
- ZFS has all the features to protect from silent data corruption
(no-SPOC, C=corruption)

this may seem like over-protection, but it's the price to pay when dealing
with large amounts of data nowadays
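
a minimal sketch (hypothetical LUN names): split the storage into two LUNs
and let ZFS mirror them, then re-verify checksums periodically

# zpool create tank mirror c6t0d0 c6t1d0
# zpool scrub tank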

selim

-- 
--
Blog: http://fakoli.blogspot.com/

On Dec 4, 2007 2:54 PM, Sean Parkinson <[EMAIL PROTECTED]> wrote:
> So, if your array is something big like an HP XP12000, you wouldn't just make 
> a zpool of one big LUN (LUSE volume), you'd split it in two and make a mirror 
> when creating the zpool?
>
> If the array has redundancy built in, you're suggesting to add another layer 
> of redundancy using ZFS on top of that?
>
> We're looking to use this in our environment. Just wanted some clarification.
>
>
> This message posted from opensolaris.org
> ___
>
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
> >> I suspect ZFS will change that game in the future.  In
> >> particular for someone doing lots of editing,
> >> snapshots can help recover from user error.
> >
> > Ah - so now the rationalization has changed to snapshot support.
> > Unfortunately for ZFS, snapshot support is pretty commonly available
> 
> We can cherry pick features all day. People choose ZFS for the
> combination (as well as its unique features).

Actually, based on the self-selected and decidedly unscientific sample of ZFS 
proponents that I've encountered around the Web lately, it appears that people 
choose ZFS in large part because a) they've swallowed the "Last Word In File 
Systems" viral marketing mantra hook, line, and sinker (that's in itself not 
all that surprising, because the really nitty-gritty details of file system 
implementation aren't exactly prime topics of household conversation - even 
among the technically inclined), b) they've incorporated this mantra into their 
own self-image (the 'fanboy' phenomenon - but at least in the case of existing 
Sun customers this is also not very surprising, because dependency on a vendor 
always tends to engender loyalty - especially if that vendor is not doing all 
that well and its remaining customers have become increasingly desperate for 
good news that will reassure them), and/or c) they're open-source zealots 
who've been sucked in by Jonathan's recent attempt to turn the patent dispute 
with NetApp into something more profound than the mundane inter-corporation 
spat which it so clearly is.

All of which certainly helps explain why so many of those proponents are so 
resistant to rational argument:  their zeal is not technically based, just 
technically rationalized (as I was pointing out in the post to which you 
responded) - much more like the approach of a (volunteer) marketeer with an 
agenda than like that of an objective analyst (not to suggest that *no one* 
uses ZFS based on an objective appreciation of the trade-offs involved in doing 
so, of course - just that a lot of its more vociferous supporters apparently 
don't).

- bill
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread can you guess?
...

> >> Hi bill, only a question:
> >> I'm an ex linux user migrated to solaris for zfs and
> >> its checksumming;
> >
> > So the question is:  do you really need that feature (please
> > quantify that need if you think you do), or do you just like it
> > because it makes you feel all warm and safe?
> >
> > Warm and safe is definitely a nice feeling, of course, but out in
> > the real world of corporate purchasing it's just one feature out of
> > many 'nice to haves' - and not necessarily the most important.  In
> > particular, if the *actual* risk reduction turns out to be
> > relatively minor, that nice 'feeling' doesn't carry all that much
> > weight.
> 
> On the other hand, it's hard to argue for risk *increase* (using
> something else)...

And no one that I'm aware of was doing anything like that:  what part of the 
"All things being equal" paragraph (I've left it in below in case you missed it 
the first time around) did you find difficult to understand?

- bill

...

> > All things being equal, of course users would opt for even
> > marginally higher reliability - but all things are never equal.  If
> > using ZFS would require changing platforms or changing code, that's
> > almost certainly a show-stopper for enterprise users.  If using ZFS
> > would compromise performance or require changes in management
> > practices (e.g., to accommodate file-system-level quotas), those
> > are at least significant impediments.  In other words, ZFS has its
> > pluses and minuses just as other open-source file systems do, and
> > they *all* have the potential to start edging out expensive
> > proprietary solutions in *some* applications (and in fact have
> > already started to do so).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with Memory Sticks

2007-12-05 Thread Constantin Gonzalez
Hi Paul,

yes, ZFS is platform agnostic and I know it works in SANs.

For the USB stick case, you may have run into labeling issues. Maybe
Solaris SPARC did not recognize the x64 type label on the disk (which
is strange, because it should...).

Did you try making sure that ZFS creates an EFI label on the disk?
You can check this by running zpool status and then the devices should
look like c6t0d0 without the s0 part.

If you want to force this, you can create an EFI label on the USB disk
by hand by saying fdisk -E /dev/rdsk/cxtxdx.
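
Alternatively, just hand ZFS the whole disk when creating the pool and it
will write the EFI label itself - a sketch with hypothetical names (add -f
only if an old label has to be overwritten):

# zpool create mypool cxtxdx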

Hope this helps,
Constantin


Paul Gress wrote:
> OK, I've been putting off this question for a while now, but it's eating 
> at me, so I can't hold off any more.  I have a nice 8 gig memory stick 
> I've formatted with the ZFS file system.  Works great on all my Solaris 
> PC's, but refuses to work on my Sparc processor.  So I've formatted it on 
> my Sparc machine (Blade 2500), works great there now, but not on my 
> PC's.  Re-formatted it on my PC, doesn't work on Sparc, and so on and so on.
> 
> I thought it was a file system that could go back and forth between both 
> architectures.  
> So when will this compatibility be here, or if it's possible now, what 
> is the secret?
> 
> Paul
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Constantin GonzalezSun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Success Stories

2007-12-05 Thread Roshan Perera
Hi All,

I am after some ZFS success stories from the ZFS community - stories of 
replacing Veritas VM/FS with ZFS in larger data-volume environments.  If you 
have any, please let me know the size and type of storage used, and the 
applications, e.g. DB etc. 

Likewise, I will take the hit of any horror stories, if there are any. (Hope 
not! :-))

Appreciate your help.

rgds

Roshan


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Toby Thain

On 5-Dec-07, at 4:19 AM, can you guess? wrote:

>>>> On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
>>> However, ZFS is not the *only* open-source approach
>>> which may allow that to happen, so the real question
>>> becomes just how it compares with equally inexpensive
>>> current and potential alternatives (and that would
>>> make for an interesting discussion that I'm not sure
>>> I have time to initiate tonight).
>>>
>>> - bill
>>
>> Hi bill, only a question:
>> I'm an ex linux user migrated to solaris for zfs and
>> its checksumming;
>
> So the question is:  do you really need that feature (please  
> quantify that need if you think you do), or do you just like it  
> because it makes you feel all warm and safe?
>
> Warm and safe is definitely a nice feeling, of course, but out in  
> the real world of corporate purchasing it's just one feature out of  
> many 'nice to haves' - and not necessarily the most important.  In  
> particular, if the *actual* risk reduction turns out to be  
> relatively minor, that nice 'feeling' doesn't carry all that much  
> weight.

On the other hand, it's hard to argue for risk *increase* (using  
something else)...

--Toby

>
>> you say there are other open-source
>> alternatives but, for a linux end user, I'm aware
>> only of Oracle btrfs
>> (http://oss.oracle.com/projects/btrfs/), which is a
>> checksumming copy-on-write filesystem not yet in a
>> final state.
>>
>> what *real* alternatives are you referring to???
>
> As I said in the post to which you responded, I consider ZFS's ease  
> of management to be more important (given that even in high-end  
> installations storage management costs dwarf storage equipment  
> costs) than its real but relatively marginal reliability edge, and  
> that's the context in which I made my comment about alternatives  
> (though even there if ZFS continues to require definition of mirror  
> pairs and parity groups for redundancy that reduces its ease-of- 
> management edge, as does its limitation to a single host system in  
> terms of ease-of-scaling).
>
> Specifically, features like snapshots, disk scrubbing (to improve  
> reliability by dramatically reducing the likelihood of encountering  
> an unreadable sector during a RAID rebuild), and software RAID (to  
> reduce hardware costs) have been available for some time in Linux  
> and FreeBSD, and canned management aids would not be difficult to  
> develop if they don't exist already.  The dreaded 'write hole' in  
> software RAID is a relatively minor exposure (since it only  
> compromises data if a system crash or UPS failure - both rare  
> events in an enterprise setting - sneaks in between a data write  
> and the corresponding parity update and then, before the array has  
> restored parity consistency in the background, a disk dies) - and  
> that exposure can be reduced to seconds by a minuscule amount of  
> NVRAM that remembers which writes were active (or to zero with  
> somewhat more NVRAM to remember the updates themselves in an  
> inexpensive hardware solution).
>
> The real question is usually what level of risk an enterprise  
> storage user is willing to tolerate.  At the paranoid end of the  
> scale reside the users who will accept nothing less than z-series  
> or Tandem-/Stratus-style end-to-end hardware checking from the  
> processor traces on out - which rules out most environments that  
> ZFS runs in (unless Sun's N-series telco products might fill the  
> bill:  I'm not very familiar with them).  And once you get down  
> into users of commodity processors, the risk level of using stable  
> and robust file systems that lack ZFS's additional integrity checks  
> is comparable to the risk inherent in the rest of the system (at  
> least if the systems are carefully constructed, which should be a  
> given in an enterprise setting) - so other open-source solutions  
> are definitely in play there.
>
> All things being equal, of course users would opt for even  
> marginally higher reliability - but all things are never equal.  If  
> using ZFS would require changing platforms or changing code, that's  
> almost certainly a show-stopper for enterprise users.  If using ZFS  
> would compromise performance or require changes in management  
> practices (e.g., to accommodate file-system-level quotas), those  
> are at least significant impediments.  In other words, ZFS has its  
> pluses and minuses just as other open-source file systems do, and  
> they *all* have the potential to start edging out expensive  
> proprietary solutions in *some* applications (and in fact have  
> already started to do so).
>
> When we move from 'current' to 'potential' alternatives, the scope  
> for competition widens.  Because it's certainly possible to create  
> a file system that has all of ZFS's added reliability but runs  
> faster, scales better, incorporates additional useful features, and  
> is easier to manage.  That discussion is the one that would take a lot of  
> time to delve into adequately (and might be considered off topic for this  
> forum - which is why I've tried to concentrate here on improvements that  
> ZFS could actually incorporate without turning it upside down).
>
> - bill

Re: [zfs-discuss] Yager on ZFS

2007-12-05 Thread Toby Thain

On 4-Dec-07, at 9:35 AM, can you guess? wrote:

> Your response here appears to refer to a different post in this  
> thread.
>
>> I never said I was a typical consumer.
>
> Then it's unclear how your comment related to the material which  
> you quoted (and hence to which it was apparently responding).
>
>> If you look around photo forums, you'll see an
>> interest the digital workflow which includes long
>> term storage and archiving.  A chunk of these users
>> will opt for an external RAID box (10%? 20%?).  I
>> suspect ZFS will change that game in the future.  In
>> particular for someone doing lots of editing,
>> snapshots can help recover from user error.
>
> Ah - so now the rationalization has changed to snapshot support.   
> Unfortunately for ZFS, snapshot support is pretty commonly available

We can cherry pick features all day. People choose ZFS for the  
combination (as well as its unique features).

--Toby

> (e.g., in Linux's LVM - and IIRC BSD's as well - if you're looking  
> at open-source solutions) so anyone who actually found this feature  
> important has had access to it for quite a while already.
>
> And my original comment which you quoted still obtains as far as  
> typical consumers are concerned.
>
> - bill
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS boot and GPT (Guid Partition Table - part of EFI spec.)

2007-12-05 Thread Tomas Dzik
Hi,
I would like to ask whether it is possible to have my root pool (that is, the 
zpool for the root filesystem) on a GPT partition.  From the documentation, it 
looks like I need to have a Solaris fdisk partition on my disk and a VTOC 
inside that partition.  Is that true?
If it is true, is there any project for adding GPT support to ZFS boot?
The reason I am asking is that you can use an fdisk partition only on disks 
under 1 TB (because there is a 32-bit number for the number of sectors in an 
fdisk partition).  And now 500 GB internal disks are quite common in modern 
computers.  So, without this support we will not be able to install Solaris on 
disks bigger than 1 TB.
From what I found till now, 64-bit Windows supports GPT, some Linux 
distributions support GPT, and Mac OS X supports GPT as well.  In Solaris, I 
just found that if you add a whole disk to a zpool, zfs creates 1 big GPT 
partition over the whole disk and adds it into the zpool.  I also found that 
Solaris uses the same namespace for VTOC slices and for GPT partitions (which 
is a little bit confusing).  So, there is some support for GPT in Solaris.  
But what about zfs boot?

Thanks,

Tomas Dzik
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Simultaneous access to a single ZFS volume

2007-12-05 Thread Mertol Ozyoney
Hi;

When will ZFS support multiple servers accessing the same file system?

Best regards

Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email   [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss