Welcome to the club, Andy...
I tried several times to attract the attention of the community to the dramatic
performance degradation (about 3x) of the NFS/ZFS vs. NFS/UFS combination -
without any result: http://www.opensolaris.org/jive/thread.jspa?messageID=98592 [1], http://www.opensolari
On Thu, Mar 22, 2007 at 01:21:04PM -0700, Frank Cusack wrote:
> Does anyone have a 6140 expansion shelf that they can hook directly to
> a host? Just wondering if this configuration works. Previously I
> thought the expansion connector was proprietary but now I see it's
> just fibre channel.
The
> Hi,
>
>
> so far, discussing filesystem code via opensolaris
> means a certain
> "specialization", in the sense that we do have:
>
> zfs-discuss
> ufs-discuss
> fuse-discuss
>
> Likewise, there are ZFS, NFS and UFS communities
> (though I can't quite
> figure out if we have nfs-discuss ?).
On Tue, 2007-04-17 at 17:25 -0500, Shawn Walker wrote:
> > I would think the average person would want
> > to have access to 1000s of DVDs / CDs within
> > a small box versus taking up the full wall.
>
> This is already being done now, and most of the companies doing it a
Remember that Solaris Express can only be distributed by authorized parties.
On 20/04/07, MC <[EMAIL PROTECTED]> wrote:
> Now the original question by MC I believe was about providing a
VMware and/or Xen image with the guest OS being snv_62 with / as zfs.
This is true.
I'm not sure what Jim meant abo
On Sat, Apr 21, 2007 at 11:14:02AM +1200, Ian Collins wrote:
>
> >Sure. However, it's somewhat cheaper to buy 100 MB/sec of local-attached
> >tape than 100 MB/sec of long-distance networking. (The pedant in me points
> >out that you also need to move the tape to the remote site, which isn't
>
A new driver was released recently that has support for ZFS; check the Coraid
website for details.
We have a Coraid at work that we are testing and hope to (eventually) put on
our production network. We're running Solaris 9, so I'm not sure how comparable
our results are with your situation. An
[EMAIL PROTECTED] wrote:
I suspect that if you have a bottleneck in your system, it would be due
to the available bandwidth on the PCI bus.
Mm. yeah, it's what I was worried about, too (mostly through ignorance
of the issues), which is why I was hoping HyperTransport and PCIe were
going to give
On 20-Apr-07, at 5:54 AM, Tim Thomas wrote:
Hi Wee
I run a setup of SAM-FS for our main file server and we loved the
backup/restore parts that you described.
That is great to hear.
The main concern I have with SAM fronting the entire conversation is
data integrity. Unlike ZFS, SAMFS does
But a tape in a van is a very high bandwidth connection :)
Australia used to get its usenet feed on FedExed 9-tracks.
--lyndon
The two most common elements in the universe are Hydrogen and stupidity.
-- Harlan Ellison
Anton B. Rang wrote:
>>You need exactly the same bandwidth as with any other
>>classical backup solution - it doesn't matter how at the end you need
>>to copy all those data (differential) out of the box regardless if it's
>>a tape or a disk.
>>
>>
>
>Sure. However, it's somewhat cheaper to b
Matty wrote:
On 4/20/07, George Wilson <[EMAIL PROTECTED]> wrote:
This is a high priority for us and is actively being worked.
Vague enough for you. :-) Sorry I can't give you anything more exact
than that.
Hi George,
If ZFS is supposed to be part of "open"solaris, then why can't the
communi
I'm not sure about the workload, but I did configure the volumes with the block
size in mind... didn't seem to do much. It could be due to the fact that I'm basically
HW raid then zfs raid and I just don't know the equation to define a smarter
blocksize. Seems like if I have 2 arrays with 64kb striped to
[EMAIL PROTECTED] said:
> The 6120 isn't the same as a 6130/6140/6540. The instructions referenced
> above won't work on a T3/T3+/6120/6320
Sigh. I can't keep up (:-). Thanks for the correction.
Marion
My initial reaction is that the world has got by without
[email|cellphone|
other technology] for a long time ... so not a big deal.
Well, I did say I viewed it as an indefensible position :-)
Now shall we debate if the world is a better place because of cell
phones :-P
Yeah, I saw that post about the other arrays, but none for this EOL'd hunk of
metal. I have some 6130's, but hopefully by the time they are implemented we
will have retired this nfs stuff and stepped into zvol iscsi targets.
Thanks anyways... back to the drawing board on how to resolve this!
-And
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
We have been combing the message boards and it looks like there was a lot of
talk about this interaction of zfs+nfs back in November and before, but since then
I have not seen much. It seems the only fix up to that date was to disable the
zil; is that sti
[EMAIL PROTECTED] said:
> We have been combing the message boards and it looks like there was a lot of
> talk about this interaction of zfs+nfs back in November and before, but since then
> I have not seen much. It seems the only fix up to that date was to disable the
> zil, is that still the case? Did any
When you say rewrites, can you give more detail? For example, are you
rewriting in 8K chunks, random sizes, etc? The reason I ask is because
ZFS will, by default, use 128K blocks for large files. If you then
rewrite a small chunk at a time, ZFS is forced to read 128K, modify the
small chunk you'
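As a rough sketch of the tuning that implies (the dataset name here is made up, and the property only applies to files written after it is set), matching the recordsize to the application's write size avoids that read-modify-write cycle:

    # hypothetical dataset; 8K matches an application that rewrites in 8K chunks
    zfs set recordsize=8K tank/db
    zfs get recordsize tank/db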
Good deal. We'll have a race to build a VM image, then :)
Knowing that this is a planned feature and the ZFS team is actively working on
it answers my question more than expected. Thanks.
Tony:
> Now to another question related to Anton's post. You mention that
> directIO does not exist in ZFS at this point. Are there plans to
> support DirectIO; any functionality that will simulate directIO or
> some other non-caching ability suitable for critical systems such as
> databases if t
Adam:
> Hi, hope you don't mind if I make some portions of your email public in
> a reply--I hadn't seen it come through on the list at all, so it's no
> duplicate to me.
I don't mind at all. I had hoped to avoid sending the list a duplicate
e-mail, although it looks like my first post never m
On April 20, 2007 9:54:07 AM +0100 Tim Thomas <[EMAIL PROTECTED]> wrote:
My initial reaction is that the world has got by without file systems
that can do [end-to-end data integrity] for a long time...so I don't see
the absence of this as a big deal.
How about
My initial reaction is that the w
> You need exactly the same bandwidth as with any other
> classical backup solution - it doesn't matter how at the end you need
> to copy all those data (differential) out of the box regardless if it's
> a tape or a disk.
Sure. However, it's somewhat cheaper to buy 100 MB/sec of local-attached ta
On Fri, Apr 20, 2007 at 12:25:30PM -0700, MC wrote:
>
> > I will setup a VM image that can be downloaded (I hope to get it done
> tomorrow, but if not definitely by early next week) and played with
> by anyone who is interested.
>
> That would be golden, Brian. Let me know if you can't get suita
We are having a really tough time accepting the performance of the ZFS and NFS
interaction. I have tried so many different ways trying to make it work (even
zfs set:zil_disable 1) and I'm still nowhere near the performance of using a
standard NFS mounted UFS filesystem - insanely slow; especial
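For what it's worth, on the builds I've looked at that switch is a kernel tunable rather than a zfs property, so the sketch below is roughly how people set it; and the usual caveat applies that disabling the ZIL throws away the synchronous-write guarantees NFS clients depend on, so treat it as a diagnostic step, not a fix:

    * /etc/system sketch (takes effect after a reboot); disables the ZFS intent log globally
    set zfs:zil_disable = 1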
> So if someone has a real world workload where having the ability to purposely
> not cache user
> data would be a win, please let me know.
Multimedia streaming is an obvious one.
For databases, it depends on the application, but in general the database will
do a better job of selecting which d
> Now the original question by MC I believe was about providing a
VMware and/or Xen image with the guest OS being snv_62 with / as zfs.
This is true.
I'm not sure what Jim meant about the host system needing to support zfs.
Maybe you're on a different page, Jim :)
> I will setup a VM image that can b
On 4/20/07, George Wilson <[EMAIL PROTECTED]> wrote:
This is a high priority for us and is actively being worked.
Vague enough for you. :-) Sorry I can't give you anything more exact
than that.
Hi George,
If ZFS is supposed to be part of "open"solaris, then why can't the
community get additio
Tim Thomas wrote:
I don't know enough about how ZFS manages memory other than what I have
seen on this alias (I just joined a couple of weeks ago) which seems to
indicate it is a memory hog...as is VxFS so we are in good company. I
am not against keeping data in memory so long as it has also b
Thanks to all for the helpful comments and questions.
[EMAIL PROTECTED] said:
> Isn't MPXIO support determined by the HBA and hard drive identification (not by the
> enclosure)? At least I don't see how the enclosure should matter, as long as
> it has 2 active paths. So if you add the drive vendor info into /
It does not work. I did try to remove every snap and I ended up destroying that
pool altogether and had to resend it all. My goal is to use zfs send/receive
for backup purposes to a big storage system that I have, and keep snaps. I don't
care if the file system is mounted or not but I want to have abil
Hello everyone, I have a strange issue and I am not sure why this is happening.
syncing file systems... done
rebooting...
SC Alert: Host System has Reset
Probing system devices
Probing memory
Probing I/O buses
Sun Fire V240, No Keyboard
Copyright 2006 Sun Microsystems, Inc. All rights reserved.
Tony Galway writes:
> Anton & Roch,
>
> Thank you for helping me understand this. I didn't want
> to make too many assumptions that were unfounded and then
> incorrectly relay that information back to clients.
>
> So if I might just repeat your statements, so my slow mind is sure it
> unders
Hi, hope you don't mind if I make some portions of your email public in
a reply--I hadn't seen it come through on the list at all, so it's no
duplicate to me.
Johansen wrote:
> Adam:
>
> Sorry if this is a duplicate, I had issues sending e-mail this morning.
>
> Based upon your CPU choices, I t
Hello Robert,
Friday, April 20, 2007, 4:54:33 PM, you wrote:
RM> Perhaps fsinfo::: could help but it's not on current s10 - I hope it
RM> will be in U4 as it looks that it works with zfs (without manually
RM> looking into vnodes, etc.):
Well, it's already in s10! (122641)
I missed that... :)
-
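For the curious, a rough (untested) sketch of using fsinfo for that, assuming the server threads run in the nfsd process so the execname predicate catches them:

    #!/usr/sbin/dtrace -s
    /* bytes read/written per file by nfsd, via the fsinfo provider */
    fsinfo:::read, fsinfo:::write
    /execname == "nfsd"/
    {
            @[probename, args[0]->fi_pathname] = sum(args[1]);
    }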
Hello eric,
Friday, April 20, 2007, 4:01:46 PM, you wrote:
ek> On Apr 18, 2007, at 9:33 PM, Robert Milkowski wrote:
>> Hello Robert,
>>
>> Thursday, April 19, 2007, 1:57:38 AM, you wrote:
>>
>> RM> Hello nfs-discuss,
>>
>> RM> Does anyone have a dtrace script (or any other means) to
>> track
Anton B. Rang wrote:
If you're using this for multimedia, do some serious testing first. ZFS tends to have
"bursty" write behaviour, and the worst-case latency can be measured in seconds. This has
been improved a bit in recent builds but it still seems to "stall" periodically.
I had wondered
Hi,
Krzys wrote:
> Ok, so the -F option is not in U3; is there any way to replicate a file system
> and not be able to mount it automatically? So when I do zfs send/receive
> it won't be mounted and changes would not be made so that further
> replications could be possible? What I did notice was that if
Richard Elling wrote:
Does anyone have a clue as to where the bottlenecks are going to be
with this:
16x hot swap SATAII hard drives (plus an internal boot drive)
Be sure to check the actual bandwidth of the drives when installed in the
final location. We have been doing some studies on the
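A quick-and-dirty way to do that check (the device name is just an example; use the raw device of a drive that carries no live data):

    # rough per-drive sequential read rate from the raw device
    time dd if=/dev/rdsk/c1t0d0s2 of=/dev/null bs=1024k count=1024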
On Apr 20, 2007, at 10:47 AM, Anton B. Rang wrote:
ZFS uses caching heavily as well; much more so, in fact, than UFS.
Copy-on-write and direct i/o are not related. As you say, data gets
written first, then the metadata which points to it, but this isn't
anything like direct I/O. In particu
Tony Galway writes:
> Let me elaborate slightly on the reason I ask these questions.
>
> I am performing some simple benchmarking, and during this a file is
> created by sequentially writing 64k blocks until the 100Gb file is
> created. I am seeing, and this is the exact same as VxFS, large p
Ok, so the -F option is not in U3; is there any way to replicate a file system and not
be able to mount it automatically? So when I do zfs send/receive it won't be
mounted and changes would not be made so that further replications could be
possible? What I did notice was that if I am doing zfs send/rec
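Not a definitive answer, but one sketch of the kind of setup people use (dataset names made up): keep the receiving dataset from ever being mounted, so nothing changes it between incremental receives:

    # initial full replication
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | zfs receive backup/data
    zfs set mountpoint=none backup/data

    # later, incremental update; -F (where the receive side supports it)
    # first rolls the target back to the most recent received snapshot
    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive -F backup/data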
Tony Galway writes:
> I have a few questions regarding ZFS, and would appreciate if someone
> could enlighten me as I work my way through.
>
> First write cache.
>
We often use write cache to designate the cache present at
the disk level. Let's call this "disk write cache".
Most FS will c
Hello eric,
Friday, April 20, 2007, 3:36:20 PM, you wrote:
>>
>> Has an analysis of most common storage system been done on how they
>> treat SYNC_NV bit and if any additional tweaking is needed? Would such
>> analysis be publicly available?
>>
ek> I am not aware of any analysis and would love t
Hello Anton,
Friday, April 20, 2007, 3:54:52 PM, you wrote:
ABR> To clarify, there are at least two issues with remote
ABR> replication vs. backups in my mind. (Feel free to joke about the state of
my mind! ;-)
ABR> The first, which as you point out can be alleviated with
ABR> snapshots, is th
On Apr 18, 2007, at 9:33 PM, Robert Milkowski wrote:
Hello Robert,
Thursday, April 19, 2007, 1:57:38 AM, you wrote:
RM> Hello nfs-discuss,
RM> Does anyone have a dtrace script (or any other means) to
track which
RM> files are open/read/write (ops and bytes) by nfsd? To make
things
R
To clarify, there are at least two issues with remote replication vs. backups
in my mind. (Feel free to joke about the state of my mind! ;-)
The first, which as you point out can be alleviated with snapshots, is the
ability to "go back" in time. If an accident wipes out a file, the missing file
ZFS uses caching heavily as well; much more so, in fact, than UFS.
Copy-on-write and direct i/o are not related. As you say, data gets written
first, then the metadata which points to it, but this isn't anything like
direct I/O. In particular, direct I/O avoids caching the data, instead
transfe
Anton & Roch,
Thank you for helping me understand this. I didn't want to make too many
assumptions that were unfounded and then incorrectly relay that information
back to clients.
So if I might just repeat your statements, so my slow mind is sure it
understands, and Roch, yes your assumption i
Let me elaborate slightly on the reason I ask these questions.
I am performing some simple benchmarking, and during this a file is created by
sequentially writing 64k blocks until the 100Gb file is created. I am seeing,
and this is the exact same as VxFS, large pauses while the system reclaims t
Has an analysis of most common storage system been done on how they
treat SYNC_NV bit and if any additional tweaking is needed? Would such
analysis be publicly available?
I am not aware of any analysis and would love to see it done (i'm
sure any vendors who are lurking on this list that supp
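In the meantime, the workaround people mention for arrays with battery-backed NVRAM that handle the flushes poorly is a host-side tunable; this is only a sketch, it exists only on fairly recent builds, and it is safe only when every device in every pool has protected write cache:

    * /etc/system sketch: stop ZFS from issuing cache-flush requests to the devices
    set zfs:zfs_nocacheflush = 1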
I have a few questions regarding ZFS, and would appreciate if someone could
enlighten me as I work my way through.
First write cache.
If I look at traditional UFS / VxFS type file systems, they normally cache
metadata to RAM before flushing it to disk. This helps increase their perceived
write
On Apr 19, 2007, at 12:50 PM, Ricardo Correia wrote:
eric kustarz wrote:
Two reasons:
1) cluttered the output (as the path name is variable length). We
could perhaps add another flag (-V or -vv or something) to display
the
ranges.
2) i wasn't convinced that output was useful, especially to
Hello Anton,
Friday, April 20, 2007, 9:02:12 AM, you wrote:
>> Initially I wanted a way to do a dump to tape like ufsdump. I
>> don't know if this makes sense anymore because the tape market is
>> crashing slowly.
ABR> It makes sense if you need to keep backups for more than a
ABR> handful of
Hello George,
Friday, April 20, 2007, 7:37:52 AM, you wrote:
GW> This is a high priority for us and is actively being worked.
GW> Vague enough for you. :-) Sorry I can't give you anything more exact
GW> than that.
Can you at least give us the list of features being developed?
Some answers to question
Hello Wee,
Friday, April 20, 2007, 5:20:00 AM, you wrote:
WYT> On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> You can limit how much memory zfs can use for its caching.
>>
WYT> Indeed, but that memory will still be locked. How can you tell the
WYT> system to be "flexible" with the c
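For completeness, the limiting itself is done with a tunable along these lines (a sketch; the name and availability depend on the build, and the value is in bytes):

    * /etc/system sketch: cap the ARC at 1 GB
    set zfs:zfs_arc_max = 0x40000000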
Adam Lindsay wrote:
> In asking about ZFS performance in streaming IO situations, discussion
> quite quickly turned to potential bottlenecks. By coincidence, I was
> wondering about the same thing.
>
> Richard Elling said:
>
>> We know that channels, controllers, memory, network, and CPU bottlenec
Hi Wee
I run a setup of SAM-FS for our main file server and we loved the
backup/restore parts that you described.
That is great to hear.
The main concern I have with SAM fronting the entire conversation is
data integrity. Unlike ZFS, SAMFS does not do end to end checksumming.
My initial rea
Hi,
>> 2. After going through the zfs-bootification, Solaris complains on
>> reboot that
>>/etc/dfs/sharetab is missing. Somehow this seems to have
>> fallen through
>>the cracks of the find command. Well, touching /etc/dfs/sharetab
>> just fixes
>>the issue.
>
> This is unrelate
You should definitely worry about the number of files when it comes to backup &
management. It will also make a big difference in space overhead.
A ZFS filesystem with 2^35 files will have a minimum of 2^44 bytes of overhead
just for the file nodes, which is about 16 TB.
If it takes about 20 ms
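The 2^44 figure follows from a roughly 512-byte on-disk node per file; a quick check:

    # 2^35 files x 512 bytes per file node
    echo '2^35 * 512' | bc    # 17592186044416 bytes = 2^44, about 16 TB as stated above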
If you're using this for multimedia, do some serious testing first. ZFS tends
to have "bursty" write behaviour, and the worst-case latency can be measured in
seconds. This has been improved a bit in recent builds but it still seems to
"stall" periodically.
(QFS works extremely well for streamin
> Initially I wanted a way to do a dump to tape like ufsdump. I
> don't know if this makes sense anymore because the tape market is
> crashing slowly.
It makes sense if you need to keep backups for more than a handful of years
(think regulatory requirements or scientific data), or if cost is im