On Fri, Jul 3 at 16:34, Erik Trimble wrote:
Ian Collins wrote:
Ross wrote:
[please keep some context for the email list]
Quick question to the more experienced guys here - how much space
would you end up with from 8 1.5TB drives in a raid-z array? Around
8-9TB?
Bearing in mind manufact
On 04/07/2009, at 1:49 PM, Ross Walker wrote:
I ran some benchmarks back when verifying this, but didn't keep them
unfortunately.
You can google: XFS Barrier LVM OR EVMS and see the threads about
this.
Interesting reading. Testing seems to show that either it's not
relevant or there is
As the subject says, I can't import a seemingly okay raidz pool and I really
need to as it has some information on it that is newer than the last backup
cycle :-( I'm really in a bind; I hope someone can help...
Background: A drive in a four-slice pool failed (I have to use slices due to a
mot
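A rough first-pass checklist for a pool that refuses to import (the device paths
and the pool name below are placeholders, not taken from the post above):
# List pools the system can find; point -d at the directory holding the slices
zpool import
zpool import -d /dev/dsk
# Inspect the ZFS labels on one of the member slices
zdb -l /dev/dsk/c1t0d0s0
# If the pool is listed but marked as in use by another system, force the import
zpool import -f tank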
Ross wrote:
Is that accounting for ZFS overhead? I thought it was more than that (but of
course, it's great news if not) :-)
A raidz2 pool with 8 500G drives showed 2.67TB free.
--
Ian.
Brent Jones wrote:
On Fri, Jul 3, 2009 at 8:31 PM, Ian Collins wrote:
Ian Collins wrote:
I was doing an incremental send between pools, the receive side is locked
up and no zfs/zpool commands work on that pool.
The stacks look different from those reported in the earlier "ZFS snapshot
On 04/07/2009, at 2:08 PM, Miles Nordin wrote:
iostat -xcnXTdz c3t31d0 1
on that device being used as a slog, a higher range of output looks
like:
extended device statistics
  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  0.0 1477.8    0.0 2955.
Is that accounting for ZFS overhead? I thought it was more than that (but of
course, it's great news if not) :-)
On Fri, 3 Jul 2009, Bob Friesenhahn wrote:
Copy Method                              Data Rate
=======================================  =========
cpio -pdum                               75 MB/s
cp -r                                    32 MB/s
tar -cf - . | (cd dest && tar -xf -)     26 MB/s
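One way such numbers might be gathered, sketched with placeholder paths
(/tank/src and /tank/dest are assumptions, not the poster's actual setup):
cd /tank/src
ptime sh -c 'find . -depth -print | cpio -pdum /tank/dest'
ptime sh -c 'cp -r . /tank/dest'
ptime sh -c 'tar -cf - . | (cd /tank/dest && tar -xf -)'
# Recreate or clear the destination between runs so caching does not skew results.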
> "jl" == James Lever writes:
jl> if I had disabled the ZIL, writes would have to go direct to
jl> disk (not ZIL) before returning, which would potentially be
jl> even slower than ZIL on zpool.
no, I'm all but certain you are confused.
jl> Has anybody been measuring the IOPS
I am still trying to determine why Solaris 10 (Generic_141415-03) ZFS
performs so terribly on my system. I blew a good bit of personal life
savings on this set-up but am not seeing performance anywhere near
what is expected. Testing with iozone shows that bulk I/O performance
is good. Testin
On Fri, Jul 3, 2009 at 8:31 PM, Ian Collins wrote:
> Ian Collins wrote:
>>
>> I was doing an incremental send between pools, the receive side is locked
>> up and no zfs/zpool commands work on that pool.
>>
>> The stacks look different from those reported in the earlier "ZFS snapshot
>> send/recv "h
On Fri, Jul 3, 2009 at 9:47 PM, James Lever wrote:
>
> On 04/07/2009, at 10:42 AM, Ross Walker wrote:
>
>> XFS on LVM or EVMS volumes can't do barrier writes due to the lack of
>> barrier support in LVM and EVMS, so it doesn't do a hard cache sync like it
>> would on a raw disk partition which make
Ian Collins wrote:
I was doing an incremental send between pools, the receive side is
locked up and no zfs/zpool commands work on that pool.
The stacks look different from those reported in the earlier "ZFS
snapshot send/recv "hangs" X4540 servers" thread.
Here is the process information fro
On 04/07/2009, at 10:42 AM, Ross Walker wrote:
XFS on LVM or EVMS volumes can't do barrier writes due to the lack
of barrier support in LVM and EVMS, so it doesn't do a hard cache
sync like it would on a raw disk partition, which makes the numbers
higher, BUT with battery-backed write cache
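A hedged way to check whether barriers are actually in effect on the Linux
box (exact kernel messages vary by version, so treat the grep as a heuristic):
# XFS normally logs a message at mount time if it has to disable barriers
dmesg | grep -i barrier
# Show the mount options currently in effect for the XFS filesystems
grep xfs /proc/mounts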
On Jul 3, 2009, at 8:20 PM, James Lever wrote:
On 03/07/2009, at 10:37 PM, Victor Latushkin wrote:
Slog in ramdisk is analogous to no slog at all and disabling the zil
(well, it may actually be a bit worse). If you say that your old
system is 5 years old, the difference in the above numbers may be due
On 03/07/2009, at 10:37 PM, Victor Latushkin wrote:
Slog in ramdisk is analogous to no slog at all and disabling the zil
(well, it may actually be a bit worse). If you say that your old
system is 5 years old, the difference in the above numbers may be due to
differences in CPU and memory speed, and so it
Ian Collins wrote:
Ross wrote:
[please keep some context for the email list]
Quick question to the more experienced guys here - how much space
would you end up with from 8 1.5TB drives in a raid-z array? Around
8-9TB?
Bearing in mind manufacturer TB != real TB, each drive will give about
On Fri, Jul 3, 2009 at 7:34 AM, James Lever wrote:
> Hi Mertol,
>
> On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
>
>> ZFS SSD usage behaviour depends heavily on the access pattern, and for asynch
>> ops ZFS will not use SSDs. I'd suggest you disable the SSDs, create a
>> ram disk and use it as a SL
On 07/03/09 14:42, Bob Friesenhahn wrote:
I had never heard of the TCP_CORK socket option before. There is an
excellent summary at "http://www.baus.net/on-tcp_cork". The description
includes mention that mis-using TCP_CORK could cause a socket hang at
the end of the transfer or if the appl
On Fri, 3 Jul 2009, Eric Schrock wrote:
This sneaky line:
811:setsockopt(13, tcp, TCP_CORK, 0x08047AC4, 4, SOV_DEFAULT) = 0
As well as the fact that you are using proftpd. The sight of TCP_CORK still
triggers some deep fight or flight reaction in my ani
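If you want to watch a live proftpd transfer toggle TCP_CORK, something like
the following truss invocation should show the setsockopt calls (the PID is a
placeholder for the data-transfer process):
truss -f -t setsockopt -p 12345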
On 07/03/09 14:16, Jim Leonard wrote:
This is probably:
6837719 TCP tx might hang when tcp_cork option is set
Fixed in build 115. This is a generic networking bug and doesn't have
anything to do with ZFS. If you build proftpd with TCP_CORK off you
won't have this problem.
Wow, that was it,
Martin Englund wrote:
I'm wondering if someone has tried using Sans Digital's Tower Raid
TR8M[1] with ZFS (I'm especially curious about the bundled 2-port
eSATA PCIe Host Bus Adapter)
The port multiplier issue will probably prevent this from working right
now, as someone else has already menti
> This is probably:
>
> 6837719 TCP tx might hang when tcp_cork option is set
>
> Fixed in build 115. This is a generic networking bug and doesn't have
> anything to do with ZFS. If you build proftpd with TCP_CORK off you
> won't have this problem.
Wow, that was it, thanks! What in the t
Joe Locker wrote:
Hi there,
As very much a new convert to OpenSolaris & similar environments, I've been trying to get my head around how ZFS relates to NFS when dealing with user/group permissions.
In my case I have set up an Opensolaris system with a zfs pool with sharenfs
on. Now, I woul
Ross wrote:
[please keep some context for the email list]
Quick question to the more experienced guys here - how much space would you end
up with from 8 1.5TB drives in a raid-z array? Around 8-9TB?
Bearing in mind manufacturer TB != real TB, each drive will give about
1.35TB of formatted
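Back-of-envelope arithmetic for the original question, assuming single-parity
raidz over 8 drives (7 data drives) and ignoring ZFS metadata overhead:
echo 'scale=2; 7 * 1.5 * 1000^4 / 1024^4' | bc
# prints roughly 9.54, i.e. about 9.5 "real" TB usable before ZFS overhead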
It sounds like this tower uses a port multiplier to multiplex 8 drives onto two
eSATA cables. I know SATA port multiplier support is being worked on, but I'm
not sure if it is done yet:
http://opensolaris.org/jive/thread.jspa?threadID=84068&tstart=0
On 07/03/09 13:06, Jim Leonard wrote:
I'm having a pretty serious issue with 200906 with simple operations that used
to work fine on nv_79. The problem I'm trying to solve right now is FTP
transfers from a ZFS filesystem using proftpd as a server that pause for over
ten minutes with no disce
I'm wondering if someone has tried using Sans Digital's Tower Raid TR8M[1] with
ZFS (I'm especially curious about the bundled 2-port eSATA PCIe Host Bus
Adapter)
It seems like a very good expansion tower as it holds up to 8 SATA disks, but
before I dish out $395 I'd like to know that it works
I'm having a pretty serious issue with 200906 with simple operations that used
to work fine on nv_79. The problem I'm trying to solve right now is FTP
transfers from a ZFS filesystem using proftpd as a server that pause for over
ten minutes with no discernible cause. When the transfer hangs,
This is something that I've run into as well across various installs
very similar to the one described (PE2950 backed by an MD1000). I
find that overall the write performance across NFS is absolutely
horrible on 2008.11 and 2009.06. Worse, I use iSCSI under 2008.11 and
it's just fine with
> "vl" == Victor Latushkin writes:
vl> Above results make me question whether your Linux NFS server
vl> is really honoring synchronous semantics or not...
Any idea how to test it?
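One hedged test, with placeholder paths: time a burst of small synchronous
writes over NFS and compare against what a single disk could plausibly commit
per second.
# On the Linux server, check whether any export is marked "async":
exportfs -v
# From a Linux client with GNU dd (/mnt/nfs is a placeholder mount point):
time sh -c 'i=0; while [ $i -lt 200 ]; do dd if=/dev/zero of=/mnt/nfs/f$i bs=8k count=1 oflag=sync 2>/dev/null; i=`expr $i + 1`; done'
# Several hundred of these per second from a single-disk backend would suggest
# the server is acknowledging writes before they reach stable storage.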
On Sat, 4 Jul 2009, Tristan Ball wrote:
Is the system otherwise responsive during the zfs sync cycles?
I ask because I think I'm seeing a similar thing - except that it's not only
other writers that block, it seems like other interrupts are blocked.
Pinging my zfs server in 1s intervals resu
On Fri, 3 Jul 2009, Victor Latushkin wrote:
On 02.07.09 22:05, Bob Friesenhahn wrote:
On Thu, 2 Jul 2009, Zhu, Lejun wrote:
Actually it seems to be 3/4:
3/4 is an awful lot. That would be 15 GB on my system, which explains why
the "5 seconds to write" rule is dominant.
3/4 is 1/8 * 6, w
On Fri, 3 Jul 2009, James Lever wrote:
I did some tests with a ramdisk slog and the write IOPS seemed to run
about the 4k/s mark vs about 800/s when using the SSD as slog and 200/s
without a slog.
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge
Red herring...
Actually, I had compression=gzip-9 enabled on that filesystem, which is
apparently too much for the old Xeons in that server (it's a Dell
1850). The CPU was sitting at 100% kernel time while it tried to
compress + sync.
Switching to compression=off or compression=on (lzjb) ma
With regard to http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle
I would have thought that if you had enough data to be written, it is
worth just writing it, and not waiting X seconds or trying to adjust
things so it only takes 5 seconds
For example, different disk buses have differe
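A hedged DTrace sketch for watching how long each pool sync actually takes, so
the 5-second target can be checked directly (spa_sync is an internal kernel
function, so probe availability may differ between builds):
dtrace -n 'fbt::spa_sync:entry { self->t = timestamp; }
  fbt::spa_sync:return /self->t/ {
    @["spa_sync duration (ms)"] = quantize((timestamp - self->t) / 1000000);
    self->t = 0; }'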
Hi there,
As very much a new convert to OpenSolaris & similar environments, I've been
trying to get my head around how ZFS relates to NFS when dealing with
user/group permissions.
In my case I have set up an Opensolaris system with a zfs pool with sharenfs
on. Now, I would like several Lin
Is the system otherwise responsive during the zfs sync cycles?
I ask because I think I'm seeing a similar thing - except that it's not
only other writers that block, it seems like other interrupts are
blocked. Pinging my zfs server in 1s intervals results in large delays
while the system sync
o_0 So you've got 8 drives that are all completely separate? Are the drives
completely full? Do you have any space at all?
When you say 2TB of empty drives, how many drives, what capacities?
It may be possible to come up with something, but you have to bear in mind that
you'll lose some spa
with 2TB of empty drives to use as a virgin raid-z. And 8 separate full 1.5TB NTFS
drives that are in no way linked, mirrored, or raided. What would be my best data
migration strategy?
I also apologise if this comes across noobish, but the fact is that's what I am
when it comes to this.
And once again t
Absolutely no way to do that without wiping the data and restoring it from your
backup server.
There isn't any in-place conversion from NTFS to ZFS, and in any case with
those drives you would be highly advised to go for raid-z or raid-z2. You'll
end up with less capacity, but far less risk of
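If a full set of empty drives can eventually be freed up, the target pool might
look something like this (device names are placeholders, and the copy itself
still has to come over the network from the Windows box, e.g. via an SMB/CIFS
share):
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
zfs create tank/data
zfs set compression=on tank/data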
Xen Dar wrote:
1st thx for the quick response.
Current config is Windows XP with 8 separate non-raided 1.5TB SATA NTFS drives
That doesn't look hopeful. What's more, I would highly recommend *against*
running ZFS without either mirroring or raidz.
Are these drives all actually full or can you
On 02.07.09 22:05, Bob Friesenhahn wrote:
On Thu, 2 Jul 2009, Zhu, Lejun wrote:
Actually it seems to be 3/4:
3/4 is an awful lot. That would be 15 GB on my system, which explains
why the "5 seconds to write" rule is dominant.
3/4 is 1/8 * 6, where 6 is worst-case inflation factor (for rai
1st thx for the quick response.
Current config is Windows XP with 8 separate non-raided 1.5TB SATA NTFS drives.
My backup data is stored on another server with hardware RAID 5 and FreeNAS;
this won't come into the equation.
I currently don't have an OpenSolaris install. The current do's and don'ts of t
On 03.07.09 15:34, James Lever wrote:
Hi Mertol,
On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
ZFS SSD usage behaviour depends heavily on the access pattern and for
asynch ops ZFS will not use SSDs. I'd suggest you disable the SSDs,
create a ram disk and use it as a SLOG device to compare the
Xen Dar wrote:
I currently have 10TB of data on an NTFS Windows system and I would like to
move it to ZFS on OpenSolaris, without having to buy an extra 10TB to do the
transfer. If anyone has a method for doing this I would really appreciate any
help.
You will need to describe the physical
I currently have 10TB of data on an NTFS Windows system and I would like to
move it to ZFS on OpenSolaris, without having to buy an extra 10TB to do the
transfer. If anyone has a method for doing this I would really appreciate any
help.
Hi
Hope this is the correct forum for my questions.
I have 3 x M9000s and 2 x M5000s, on which I would like to use ZFS root/boot
disks with support for Dynamic Reconfiguration.
Sun SPARC Enterprise M8000/M9000 Servers Product Notes for XCP Version 1040
reports this issue as being broken.
Sun SP
Hi Mertol,
On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
ZFS SSD usage behaviour depends heavily on the access pattern and for
asynch ops ZFS will not use SSDs. I'd suggest you disable the
SSDs, create a ram disk and use it as a SLOG device to compare the
performance. If performance doesn't
Hej Henrik,
On 03/07/2009, at 8:57 PM, Henrik Johansen wrote:
Have you tried running this locally on your OpenSolaris box - just to
get an idea of what it could deliver in terms of speed? Which NFS
version are you using?
Most of the tests shown in my original message are local except the
Hi,
James Lever wrote:
Hi All,
We have recently acquired hardware for a new fileserver and my task,
if I want to use OpenSolaris (osol or sxce) on it, is for it to perform
at least as well as Linux (and our 5 year old fileserver) in our
environment.
Our current file server is a whitebox
Hi James,
ZFS SSD usage behaviour depends heavily on the access pattern, and for asynch ops
ZFS will not use SSDs.
I'd suggest you disable the SSDs, create a ram disk and use it as a SLOG
device to compare the performance. If performance doesn't change, it means
that the measurement method has some fla
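A minimal sketch of that experiment, assuming a spare 1 GB of RAM and a pool
named tank (both placeholders); note Victor's caveat elsewhere in this thread
that a ramdisk slog is effectively the same as running without a ZIL:
ramdiskadm -a slogtest 1g
zpool add tank log /dev/ramdisk/slogtest
# ... run the benchmark, then undo it (log device removal needs a recent
# enough pool version):
zpool remove tank /dev/ramdisk/slogtest
ramdiskadm -d slogtest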
It really depends on what you're going to be doing with it. The project that I
feel really benefits from the latest versions right now is CIFS - there's so
much going into it that it's worth running the latest and greatest.
We've been running various versions of OpenSolaris and sxce for some time
On 03/07/2009, at 5:03 PM, Brent Jones wrote:
Are you sure the slog is working right? Try disabling the ZIL to see
if that helps with your NFS performance.
If your performance increases a hundredfold, I'm suspecting the slog
isn't performing well, or even doing its job at all.
The slog appears
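For a test like that on this vintage of (Open)Solaris, the ZIL could be turned
off temporarily via the zil_disable tunable; this changes sync semantics, so it
is only for measurement, and the setting is consulted when a dataset is mounted,
so remount or export/import the pool afterwards:
echo zil_disable/W0t1 | mdb -kw
# or persistently in /etc/system (then reboot):  set zfs:zil_disable = 1
# turn it back on when done:
echo zil_disable/W0t0 | mdb -kw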
On Thu, Jul 2, 2009 at 11:39 PM, James Lever wrote:
> Hi All,
>
> We have recently acquired hardware for a new fileserver and my task, if I
> want to use OpenSolaris (osol or sxce) on it, is for it to perform at least
> as well as Linux (and our 5 year old fileserver) in our environment.
>
> Our cur