'Edward Ned Harvey' wrote:
From: Henrik Johansen [mailto:hen...@scannet.dk]
The 10g models are stable - the R905s in particular are real workhorses.
You would generally consider all your machines stable now?
Can you easily pdsh to all those machines?
Yes - the only problem c
22.3396771
wcnt        0
wlastupdate 2334856.43951113
wlentime    0.103487047
writes      510
wtime       0.069508209
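(Raw counters like the ones above are kstat I/O statistics; if you want to pull
them yourself, kstat(1M) will dump them. The filter below is only an example -
check kstat -l for the exact module and statistic names your system exposes.)

$ kstat -p | egrep 'wcnt|wtime|wlentime|writes'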
2010/5/17 Henrik Johansen
pool (zpool iostat).
I want to view statistics for each file system on the pool. Is it
possible?
See fsstat(1M)
--
Darren J Moffat
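(For anyone who hasn't used it: fsstat(1M) takes mount points, or an fstype,
plus an optional interval, so per-filesystem numbers are a one-liner. The
dataset paths below are just placeholders.)

$ fsstat /tank/home /tank/mail 5     # per-filesystem activity every 5 seconds
$ fsstat zfs 5                       # aggregate across all ZFS filesystems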
running smoothly. This single-price,
complete-system approach is ideal for companies running Solaris on Sun
hardware.
Sun System Service Plans != (Open)Solaris Support subscriptions
But thank you for the scare, Chicken Little.
--Tim
On 02/22/10 03:35 PM, Jacob Ritorto wrote:
On 02/22/10 09:19, Henrik Johansen wrote:
On 02/22/10 02:33 PM, Jacob Ritorto wrote:
On 02/22/10 06:12, Henrik Johansen wrote:
Well - one thing that makes me feel a bit uncomfortable is the fact
that you can no longer buy OpenSolaris Support
ons are.
On 01/29/10 07:36 PM, Richard Elling wrote:
On Jan 29, 2010, at 12:45 AM, Henrik Johansen wrote:
On 01/28/10 11:13 PM, Lutz Schumann wrote:
While thinking about ZFS as the next-generation filesystem
without limits, I am wondering if the real world is ready for this
kind of incredible technology
been running for about a year with no major issues so
far. The only hiccups we've had were all HW related (no fun in firmware
upgrading 200+ disks).
Will you? :) Thanks, Robert
's.
Thanks!
jlc
Ross Walker wrote:
On Aug 5, 2009, at 2:49 AM, Henrik Johansen wrote:
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k
sequential write). It's a Dell PERC 6/e with 512MB onboard.
...
Ross Walker wrote:
On Aug 5, 2009, at 3:09 AM, Henrik Johansen wrote:
Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed
pushed closer to the disks, but th
flexibility and portability. If I remember correctly,
I think we're using the Adaptec 3085. I've pulled 465MB/s write and
1GB/s read off the MD1000 filled with SATA drives.
Regards,
Erik Ableson
+33.6.80.83.58.28
Sent from my iPhone
On 23 June 2009, at 21:18, Henrik Johansen
>> would be useful for ZFS to provide the option to not
>> load-share across huge VDEVs and use VDEV-level space allocators.
>>
>> Bob
>> ==
>> Bob Friesenhahn
>> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,
ke more specific recommendations.
> -- richard
Well, my choice of hardware is kind of limited by 2 things :
1. We are a 100% Dell shop.
2. We already have lots of enclosures that I would like to reuse for my project.
The HBA cards are SAS 5/E (LSI SAS1068 chipset) cards; the enclosures are
Dell MD1000 disk arrays.
> I'm just interested in understanding how zfs determined there was data
> corruption when I have checksums disabled and there were no
> non-retryable read errors reported in the messages file.
If the metadata is corrupt, how is ZFS going to find the data blocks on
disk?
> > I don't believe it w
> Is deleting the old files/directories in the ZFS file system
> sufficient or do I need to destroy/recreate the pool and/or file
> system itself? I've been doing the former.
The former should be sufficient; it's not necessary to destroy the pool.
-j
> -Still playing with 'recsize' values but it doesn't seem to be doing
> much... I don't think I have a good understanding of what exactly is being
> written...I think the whole file might be overwritten each time
> because it's in binary format.
The other thing to keep in mind is that the tunables li
> ZFS data buffers are attached to zvp; however, we still keep
> metadata in the crashdump. At least right now, this means that
> cached ZFS metadata has kvp as its vnode.
>
>Still, it's better than what you get currently.
I absolutely agree.
At one point, we discussed a
>I don't think it should be too bad (for ::memstat), given that (at
>least in Nevada), all of the ZFS caching data belongs to the "zvp"
>vnode, instead of "kvp".
ZFS data buffers are attached to zvp; however, we still keep metadata in
the crashdump. At least right now, this means that
> statfile1      988ops/s   0.0mb/s   0.0ms/op   22us/op-cpu
> deletefile1    991ops/s   0.0mb/s   0.0ms/op   48us/op-cpu
> closefile2     997ops/s   0.0mb/s   0.0ms/op    4us/op-cpu
> readfile1      997ops/s 139.8mb/s   0.2ms/op
> But note that, for ZFS, the win with direct I/O will be somewhat
> less. That's because you still need to read the page to compute
> its checksum. So for direct I/O with ZFS (with checksums enabled),
> the cost is W:LPS, R:2*LPS. Is saving one page of writes enough to
> make a difference? Pos
It's Columbia Pictures vs. Bunnell:
http://www.eff.org/legal/cases/torrentspy/columbia_v_bunnell_magistrate_order.pdf
The Register syndicated a Security Focus article that summarizes the
potential impact of the court decision:
http://www.theregister.co.uk/2007/08/08/litigation_data_retention/
You might also consider taking a look at this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-July/041760.html
Although I'm not certain, this sounds a lot like the other pool
fragmentation issues.
-j
On Wed, Aug 15, 2007 at 01:11:40AM -0700, Yaniv Aknin wrote:
> Hello friends,
>
You can do it either way. Eric Kustarz has a good explanation of how to
set up incremental send/receive on your laptop. The description is on
his blog:
http://blogs.sun.com/erickustarz/date/20070612
The technique he uses is applicable to any ZFS filesystem.
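The short version, for anyone who doesn't want to click through (pool,
filesystem and snapshot names below are made up):

  # initial full replication
  zfs snapshot tank/home@mon
  zfs send tank/home@mon | zfs recv backup/home

  # later runs only ship the changes since the previous snapshot
  zfs snapshot tank/home@tue
  zfs send -i tank/home@mon tank/home@tue | zfs recv backup/home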
-j
On Wed, Aug 08, 2007 at
Index: usr/src/uts/common/io/sata/adapters/si3124/si3124.c
--- /ws/onnv-clone/usr/src/uts/common/io/sata/adapters/si3124/si3124.c Mon Nov 13 23:20:01 2006
+++ /export/johansen/si-fixes/usr/src/uts/common/io/sata/adapters/si3124/si3124.c Tue Jul 17 14:37:17 2007
@@ -22,11
> But now I have another question.
> How will 8k blocks impact performance?
When tuning recordsize for things like databases, we try to recommend
that the customer's recordsize match the I/O size of the database
record.
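Concretely, for a database with an 8K page size that would be something like
the following (the dataset name is just an example, and the property only
applies to files created after the change):

  zfs set recordsize=8k tank/db
  zfs get recordsize tank/db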
I don't think that's the case in your situation. ZFS is clever enough
th
> it's been assigned CR 6566207 by Linda Bernal. Basically, if you look
> at si_intr and read the comments in the code, the bug is pretty
> obvious.
>
> si3124 driver's interrupt routine is incorrectly coded. The ddi_put32
> that clears the interrupts should be enclosed in an "else" block,
>
> When sequential I/O is done to the disk directly there is no performance
> degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk unless some kind of
filesystem is used. All the tests that Eric and I have p
Marko,
Matt and I discussed this offline some more and he had a couple of ideas
about double-checking your hardware.
It looks like your controller (or disks, maybe?) is having trouble with
multiple simultaneous I/Os to the same disk. It looks like prefetch
aggravates this problem.
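(One way to test the prefetch theory, assuming your build exposes the
zfs_prefetch_disable tunable, is to switch prefetch off and re-run the dd:)

  # in /etc/system, takes effect after a reboot:
  set zfs:zfs_prefetch_disable = 1

  # or on the live kernel:
  echo 'zfs_prefetch_disable/W 1' | mdb -kw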
When I asked M
At Matt's request, I did some further experiments and have found that
this appears to be particular to your hardware. This is not a general
32-bit problem. I re-ran this experiment on a 1-disk pool using a 32
and 64-bit kernel. I got identical results:
64-bit
==
$ /usr/bin/time dd if=/test
> >*sata_hba_list::list sata_hba_inst_t satahba_next | ::print
> >sata_hba_inst_t satahba_dev_port | ::array void* 32 | ::print void* |
> >::grep ".!=0" | ::print sata_cport_info_t cport_devp.cport_sata_drive |
> >::print -a sata_drive_info_t satadrv_features_support satadrv_settings
> >satadrv
> Each drive is freshly formatted with one 2G file copied to it.
How are you creating each of these files?
Also, would you please include the output from the isalist(1) command?
> These are snapshots of iostat -xnczpm 3 captured somewhere in the
> middle of the operation.
Have you double-che
Marko,
I tried this experiment again using 1 disk and got nearly identical
times:
# /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1
1+0 records in
1+0 records out
real       21.4
user        0.0
sys         2.4
$ /usr/bin/time dd if=/test/filebench/testfile of=/dev/
This certainly isn't the case on my machine.
$ /usr/bin/time dd if=/test/filebench/largefile2 of=/dev/null bs=128k count=1
1+0 records in
1+0 records out
real        1.3
user        0.0
sys         1.2
# /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1
1+0 re
A couple more questions here.
[mpstat]
> CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
> 00 0 3109 3616 316 1965 17 48 45 2450 85 0 15
> 10 0 3127 3797 592 2174 17 63 46 1760 84 0 15
> CPU minf mjf xcal
Tony:
> Now to another question related to Anton's post. You mention that
> directIO does not exist in ZFS at this point. Are there plans to
> support DirectIO; any functionality that will simulate directIO or
> some other non-caching ability suitable for critical systems such as
> databases if t
Adam:
> Hi, hope you don't mind if I make some portions of your email public in
> a reply--I hadn't seen it come through on the list at all, so it's no
> duplicate to me.
I don't mind at all. I had hoped to avoid sending the list a duplicate
e-mail, although it looks like my first post never m
Adam:
> Does anyone have a clue as to where the bottlenecks are going to be with
> this:
>
> 16x hot swap SATAII hard drives (plus an internal boot drive)
> Tyan S2895 (K8WE) motherboard
> Dual GigE (integral nVidia ports)
> 2x Areca 8-port PCIe (8-lane) RAID drivers
> 2x AMD Opteron 275 CPUs (2
> I've been seeing this failure to cap on a number of (Solaris 10 update
> 2 and 3) machines since the script came out (arc hogging is a huge
> problem for me, esp on Oracle). This is probably a red herring, but my
> v490 testbed seemed to actually cap on 3 separate tests, but my t2000
> testbed do
I suppose I should have been more forward about making my last point.
If the arc_c_max isn't set in /etc/system, I don't believe that the ARC
will initialize arc.p to the correct value. I could be wrong about
this; however, next time you set c_max, set c to the same value as c_max
and set p to ha
Something else to consider, depending upon how you set arc_c_max, you
may just want to set arc_c and arc_p at the same time. If you try
setting arc_c_max, and then setting arc_c to arc_c_max, and then set
arc_p to arc_c / 2, do you still get this problem?
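In other words, something along these lines - the 4 GB figure is only an
example, and the symbol names and write sizes should be double-checked against
your build before poking them with mdb -kw:

  echo 'arc_c_max/Z 0x100000000' | mdb -kw
  echo 'arc_c/Z 0x100000000' | mdb -kw
  echo 'arc_p/Z 0x80000000' | mdb -kw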
-j
On Thu, Mar 15, 2007 at 05:18:12PM -0
Gar. This isn't what I was hoping to see. Buffers that aren't
available for eviction aren't listed in the lsize count. It looks like
the MRU has grown to 10Gb and most of this could be successfully
evicted.
The calculation for determining if we evict from the MRU is in
arc_adjust() and looks so
This seems a bit strange. What's the workload, and also, what's the
output for:
> ARC_mru::print size lsize
> ARC_mfu::print size lsize
and
> ARC_anon::print size
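(These are run from an mdb session against the live kernel, e.g.:)

  # mdb -k
  > ARC_mru::print size lsize
  > ARC_mfu::print size lsize
  > ARC_anon::print size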
For obvious reasons, the ARC can't evict buffers that are in use.
Buffers that are available to be evicted should be on the mru or mf
> it seems there isn't an algorithm in ZFS that detects sequential writes;
> in a traditional fs such as UFS, that would trigger directio.
There is no directio for ZFS. Are you encountering a situation in which
you believe directio support would improve performance? If so, please
explain.
-j
> Would the logic behind ZFS take full advantage of a heavily multicored
> system, such as on the Sun Niagara platform? Would it utilize the
> 32 concurrent threads for generating its checksums? Has anyone
> compared ZFS on a Sun Tx000, to that of a 2-4 thread x64 machine?
Pete and I are workin
> And this feature is independent of whether or not the data is
> DMA'ed straight into the user buffer.
I suppose so, however, it seems like it would make more sense to
configure a dataset property that specifically describes the caching
policy that is desired. When directio implies different
> Note also that for most applications, the size of their IO operations
> would often not match the current page size of the buffer, causing
> additional performance and scalability issues.
Thanks for mentioning this, I forgot about it.
Since ZFS's default block size is configured to be larger th
> Basically speaking - there needs to be some sort of strategy for
> bypassing the ARC or even parts of the ARC for applications that
> may need to advise the filesystem of either:
> 1) the delicate nature of imposing additional buffering for their
> data flow
> 2) already well optimized applicatio
Robert:
> Better yet would be if memory consumed by ZFS for caching (dnodes,
> vnodes, data, ...) behaved similarly to the page cache, as with UFS, so
> applications would be able to get back almost all memory used for ZFS
> caches if needed.
I believe that a better response to memory pressure is a
> > Note that you'd actually have to verify that the blocks were the same;
> > you cannot count on the hash function. If you didn't do this, anyone
> > discovering a collision could destroy the colliding blocks/files.
>
> Given that nobody knows how to find sha256 collisions, you'd of course
> ne
This is CR 4894692: caching data in heap inflates crash dump.
I have a fix which I am testing now. It still needs review from
Matt/Mark before it's eligible for putback, though.
-j
On Fri, Nov 10, 2006 at 02:40:40PM -0800, Thomas Maier-Komor wrote:
> Hi,
>
> I'm not sure if this is the right f
Harley:
> Old 36GB drives:
>
> | # time mkfile -v 1g zeros-1g
> | zeros-1g 1073741824 bytes
> |
> | real    2m31.991s
> | user    0m0.007s
> | sys     0m0.923s
>
> Newer 300GB drives:
>
> | # time mkfile -v 1g zeros-1g
> | zeros-1g 1073741824 bytes
> |
> | real    0m8.425s
> | user    0m0.010
Harley:
> I had tried other sizes with much the same results, but
> hadn't gone as large as 128K. With bs=128K, it gets worse:
>
> | # time dd if=zeros-10g of=/dev/null bs=128k count=102400
> | 81920+0 records in
> | 81920+0 records out
> |
> | real    2m19.023s
> | user    0m0.105s
> | sys
ZFS uses a 128k block size. If you change dd to use a bs=128k, do you observe
any performance improvement?
> | # time dd if=zeros-10g of=/dev/null bs=8k count=102400
> | 102400+0 records in
> | 102400+0 records out
>
> | real    1m8.763s
> | user    0m0.104s
> | sys     0m1.759s
It's also wor
> 1) You should be able to limit your cache max size by
> setting arc.c_max. It's currently initialized to be
> phys-mem-size - 1GB.
Mark's assertion that this is not a best practice is something of an
understatement. ZFS was designed so that users/administrators wouldn't have to
configure tuna