On Tue, Jul 11, 2006 at 11:03:17PM -0400, David Abrahams wrote:
> How can RAID-Z preserve transactional semantics when a single
> FS block write requires writing to multiple physical devices?

ZFS uses a technique that's been used in databases for years: phase
trees. First you write all ...
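The preview above cuts off, but the general shape of the answer can be sketched: ZFS never overwrites live blocks in place. New data (and, for RAID-Z, parity) are first written to free space, and only then does a single atomic update of the root pointer (the überblock) make the new tree visible. Here is a minimal, hypothetical model of that two-step commit; the `Disk` class, `commit` helper, and block addresses are illustrative, not ZFS's actual on-disk structures:

```python
class Disk:
    """Toy model: a pool of blocks plus one atomically-updatable root pointer."""
    def __init__(self):
        self.blocks = {}   # address -> data
        self.root = None   # the single pointer that is updated in place

def commit(disk, new_blocks, new_root_addr):
    # Phase 1: write every new data/parity block to a *free* address.
    # A crash here leaves the old tree fully intact.
    for addr, data in new_blocks.items():
        assert addr not in disk.blocks, "copy-on-write: never overwrite live data"
        disk.blocks[addr] = data
    # Phase 2: one atomic root-pointer update makes the new state live.
    disk.root = new_root_addr

disk = Disk()
commit(disk, {10: b"new data", 11: b"new parity"}, 10)
assert disk.root == 10
```

Because the root-pointer flip is the only in-place write, a torn multi-device stripe write before that flip is simply unreferenced garbage, not corruption.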

Robert Milkowski <[EMAIL PROTECTED]> writes:
> Hello David,
>
> Tuesday, July 11, 2006, 8:34:10 PM, you wrote:
>
> DA> Hi,
>
> DA> I've been trying to understand how transactional writes work in
> DA> RAID-Z. I think I understand the ZFS system for transactional writes
> DA> in general (the only place I could find that info was wikipedia;
> DA> someone should fix that!). ...

Sean Meighan wrote:
I made sure the path is clean; I also qualified the paths. The time varies
from 0.5 seconds to 15 seconds. If I just do a "timex pwd", it always
seems to be fast. We are using csh.
Here's a simple dscript to figure out how long each syscall is taking:

#!/usr/sbin/dtrace -FCs
/* Print the duration of each system call made by the target process.
   Run as: ./syscalls.d -p <pid> */
syscall:::entry
/pid == $target/
{
        self->ts = timestamp;
}

syscall:::return
/self->ts/
{
        printf("%s: %d ns", probefunc, timestamp - self->ts);
        self->ts = 0;
}

Hello David,
Tuesday, July 11, 2006, 8:34:10 PM, you wrote:
DA> Hi,
DA> I've been trying to understand how transactional writes work in
DA> RAID-Z. I think I understand the ZFS system for transactional writes
DA> in general (the only place I could find that info was wikipedia;
DA> someone should fix that!). ...

> Well, glue a beard on me and call me Nostradamus:
http://www.sun.com/servers/x64/x4500/arch-wp.pdf
http://www.cooldrives.com/8-channel-8-port-sata-pci-card.html
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mail

Hi,
I've been trying to understand how transactional writes work in
RAID-Z. I think I understand the ZFS system for transactional writes
in general (the only place I could find that info was wikipedia;
someone should fix that!). For RAID-Z it seems to me that the only
way to make it transactional ...
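For context, the part of the answer that usually resolves this question: RAID-Z performs full-stripe writes of variable width, so a block's data and its parity are written together to free space in one logical operation, and copy-on-write (see above) means a torn stripe write can never damage live data. A toy illustration of XOR parity over a stripe; `make_stripe` is a hypothetical helper, not ZFS code:

```python
def make_stripe(data_columns):
    """Build a RAID-Z-style stripe: the data columns plus one XOR parity column.

    Every column must have the same length; the stripe width varies with
    the number of columns, mirroring RAID-Z's variable-width stripes.
    """
    parity = bytes(len(data_columns[0]))
    for col in data_columns:
        parity = bytes(a ^ b for a, b in zip(parity, col))
    return data_columns + [parity]

stripe = make_stripe([b"\x01\x02", b"\x04\x08"])
assert stripe[-1] == b"\x05\x0a"  # XOR of the two data columns
```

Because the whole stripe is always written fresh, there is no read-modify-write window on existing parity, which is the usual source of the RAID-5 "write hole."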

I made sure the path is clean; I also qualified the paths. The time varies
from 0.5 seconds to 15 seconds. If I just do a "timex pwd", it always
seems to be fast. We are using csh.
itsm-mpk-2% env
HOME=/app/canary
PATH=/usr/bin:/usr/local/bin:/usr/sbin
LOGNAME=canary
HZ=100
TERM=xterm
TZ=US/Pacific
SH

On Jul 9, 2006, at 12:42 PM, Richard Elling wrote:
Ok, so I only managed data centers for 10 years. I can count on 2 fingers
the times this was useful to me. It is becoming less useful over time
unless your recovery disk is exactly identical to the lost disk. This
may sound easy, but it isn't ...

Richard Elling wrote:
> Michael Schuster - Sun Microsystems wrote:
>> Sean Meighan wrote:
>>> I am not sure if this is a ZFS, Niagara, or something else issue. Does
>>> someone know why commands have the latency shown below?
>>>
>>> *1) do a ls of a directory. 6.9 seconds total, truss only shows .07
>>> seconds.* ...

Michael Schuster - Sun Microsystems wrote:
Sean Meighan wrote:
I am not sure if this is a ZFS, Niagara, or something else issue. Does
someone know why commands have the latency shown below?
*1) do a ls of a directory. 6.9 seconds total, truss only shows .07
seconds.*
[...]
this may be an issue with your $PATH. Do you see the same behaviour ...

Well, glue a beard on me and call me Nostradamus:
http://blogs.sun.com/roller/page/jonathan?entry=the_rise_of_the_general
On 03/07/06, Dick Davies <[EMAIL PROTECTED]> wrote:
With ZFS officially supported now, I'd say The Stars Are Right
--
Rasputin :: Jack of All Trades - Master of Nuns

Sean Meighan wrote:
I am not sure if this is a ZFS, Niagara, or something else issue. Does
someone know why commands have the latency shown below?
*1) do a ls of a directory. 6.9 seconds total, truss only shows .07
seconds.*
[...]
this may be an issue with your $PATH. Do you see the same behaviour ...
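One quick way to test the $PATH theory is to time a listing of each PATH entry separately; a hung automounted or NFS-backed directory will stand out as one large per-directory delta, while a clean local PATH will return in milliseconds. A hedged sketch (the `time_path_lookups` helper is hypothetical; the sample PATH matches the env output earlier in the thread):

```python
import os
import time

def time_path_lookups(path=None):
    """Return {directory: seconds} for listing each entry on a PATH string."""
    results = {}
    for d in (path or os.environ.get("PATH", "")).split(":"):
        start = time.monotonic()
        try:
            os.listdir(d)
        except OSError:
            pass  # missing directory: near-instant, still recorded
        results[d] = time.monotonic() - start
    return results

for d, secs in time_path_lookups("/usr/bin:/usr/local/bin:/usr/sbin").items():
    print(f"{d}: {secs:.3f}s")
```

If every entry is fast here but interactive commands are still slow, the delay is more likely in the shell itself (csh rebuilding its command hash) than in PATH traversal.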