On Tue, Jan 26, 2010 at 11:09:45PM -0800, Nick Rogers wrote:
> Can anyone clarify if I should be looking to disable TSO or TXCSUM, or both,
> or does disabling either one somehow work around the problem? Thanks a lot.
>
> On Mon, Jan 25, 2010 at 8:31 PM, Joshua Boyd wrote:
>
> > I've been having
Can anyone clarify if I should be looking to disable TSO or TXCSUM, or both,
or does disabling either one somehow work around the problem? Thanks a lot.
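For reference, either offload can be turned off independently with ifconfig, so you can test them one at a time. This is a hedged sketch: "em0" is a placeholder interface name (check `ifconfig -l` for yours), and which flag actually works around the problem is exactly what the poster is asking.

```shell
# Disable each offload separately to isolate which one triggers the problem.
# "em0" is an assumed interface name -- substitute your own.
ifconfig em0 -tso        # disable TCP segmentation offload
ifconfig em0 -txcsum     # disable transmit checksum offload
ifconfig em0 | grep options   # verify: TSO4/TXCSUM should no longer be listed
# To persist across reboots, add the flags to the interface line in /etc/rc.conf:
#   ifconfig_em0="DHCP -tso -txcsum"
```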
On Mon, Jan 25, 2010 at 8:31 PM, Joshua Boyd wrote:
> I've been having a similar problem with my network dropping completely on
> my
> 8-STABLE
On 2010-Jan-26 15:10:59 -0500, John Baldwin wrote:
>On Tuesday 26 January 2010 1:37:56 pm Marius Strobl wrote:
>> On Tue, Jan 26, 2010 at 09:46:44AM -0500, John Baldwin wrote:
>> > On Tuesday 26 January 2010 2:33:37 am Peter Jeremy wrote:
>> > > I have just upgraded to 8-STABLE/amd64 from about 18
On Tue, 26 Jan 2010 19:12:01 -0500 Damian Gerow
wrote about Re: immense delayed write to file system (ZFS and UFS2),
performance issues:
DG> Adrian Wontroba wrote:
DG> Having a script kick off and write to a disk will help so long as that
DG> disk is writable; if it's being used as a hot spare i
Here's what I got from one of my 2TB WD green drives. This one
is Firmware 01.00A01. Load_Cycle_Count is 26... seems under
control.
It gets hit with a lot of activity separated by a lot of time
(several minutes to several hours), depending on what is going on.
The box is
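The two SMART attributes being compared throughout this thread can be pulled with smartctl from sysutils/smartmontools. A minimal sketch; `/dev/ad4` is an assumed device node for an ata(4)-attached disk, so substitute whatever your drive shows up as:

```shell
# Print the vendor SMART attribute table and pick out the two values
# discussed in this thread (attribute 9 and attribute 193).
smartctl -A /dev/ad4 | egrep 'Power_On_Hours|Load_Cycle_Count'
```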
On Tue, Jan 26, 2010 at 07:49:05PM -0500, Damian Gerow wrote:
> Specific cases aside, writing to the FS is a workaround to a rather
> inconvenient issue. I, too, would like to see if the problem is fixed, not
> avoided, by using wdidle -- but I suspect I'll have to contact WD myself to
> get that
Dan Naumov wrote:
: >This drive is sitting, unused, with no filesystem, and I've performed
: >approximately zero writes to the disk.
: >
: >Having a script kick off and write to a disk will help so long as that
: >disk is writable; if it's being used as a hot spare in a raidz array, it's
: >not goi
On Tue, Jan 26, 2010 at 12:36:22PM -0800, Jack Vogel wrote:
> Great, if you can get the changes to me quickly I'd like to incorporate
> them.
>
Unfortunately I couldn't reproduce it on my box. I've tested both
IPv4 and IPv6 at the same time with netperf and it worked as
expected.
Reading the code
>I have a WD2003FYPS sitting in a system, to be used for testing. Bought it
>just before this thread started, and here's what it looks like right now:
>
>   9 Power_On_Hours   0x0032  100  100  000  Old_age  Always  -  508
> 193 Load_Cycle_Count 0x0032  200  200  000  Ol
Thank you, thank you, thank you!
Now I neither have to worry about premature death of my disks, nor do
I have to endure the loud clicking noises (I have a NAS with these in
my living room)!
If either of you (or both) want me to Paypal you money for a beer,
send me details offlist :)
- Sincerely
Adrian Wontroba wrote:
: How about using the "write every 5 seconds" python script posted earlier
: in this thread by e...@tefre.com? Works nicely for me and stops the load
: cycle count increase.
I have a WD2003FYPS sitting in a system, to be used for testing. Bought it
just before this thread s
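The "write every 5 seconds" workaround recommended above can be sketched roughly as below. This is an assumption-laden reconstruction, not the original script posted by e...@tefre.com: the file path, interval, and function name are all placeholders. The point is simply to keep the drive busy enough that its idle timer never fires and parks the heads.

```python
import os
import time

def keep_awake(path="/tank/.keepalive", interval=5, iterations=None):
    """Touch a file periodically so the drive never idles long enough to park.

    iterations=None loops forever; pass a number for a bounded run.
    """
    n = 0
    while iterations is None or n < iterations:
        with open(path, "w") as f:
            f.write(str(time.time()))
            f.flush()
            os.fsync(f.fileno())  # force the write out to the disk, not just the cache
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval)
```

Note this only helps while the target filesystem is mounted and writable; as pointed out earlier in the thread, it does nothing for a hot spare that is not receiving writes.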
On Wed, Jan 27, 2010 at 01:15:17AM +0200, Dan Naumov wrote:
> Can anyone confirm that using the WDIDLE3 utility on the 2TB WD20EADS
> discs will not cause any issues if these disks are part of a ZFS
> mirror pool? I do have backups of data, but I would rather not spend
> the time rebuilding the ent
Can anyone confirm that using the WDIDLE3 utility on the 2TB WD20EADS
discs will not cause any issues if these disks are part of a ZFS
mirror pool? I do have backups of data, but I would rather not spend
the time rebuilding the entire system and restoring enormous amounts
of data over a 100mbit net
Great, if you can get the changes to me quickly I'd like to incorporate
them.
BTW, I have merged your igb changes into my code and it's very stable; should
see that checked in for 7.3 shortly.
Thanks for your hard work Pyun!
Jack
On Tue, Jan 26, 2010 at 12:33 PM, Pyun YongHyeon wrote:
> On Tu
On Tue, Jan 26, 2010 at 12:22:01PM -0800, Jack Vogel wrote:
> Well, what our testers do is assign BOTH an ipv4 and ipv6 address to an
> interface,
> then netperf runs over both, I don't know the internal details but I assume
> both TCP
> and UDP are going over ipv6.
>
> Prior to your change there
Well, what our testers do is assign BOTH an ipv4 and ipv6 address to an
interface,
then netperf runs over both, I don't know the internal details but I assume
both TCP
and UDP are going over ipv6.
Prior to your change there is IPv6 handling code in the tx checksum
routine, so I
assume the hardwar
On Tue, Jan 26, 2010 at 11:55:00AM -0800, Jack Vogel wrote:
> I've tried this patch, and it completely breaks IPv6 offloads, which DO work
> btw,
> our testers have a netperf stress test that does both ipv4 and ipv6, and
> that test
> fails 100% after this change.
>
> I could go hacking at it myse
On Tuesday 26 January 2010 1:37:56 pm Marius Strobl wrote:
> On Tue, Jan 26, 2010 at 09:46:44AM -0500, John Baldwin wrote:
> > On Tuesday 26 January 2010 2:33:37 am Peter Jeremy wrote:
> > > I have just upgraded to 8-STABLE/amd64 from about 18 hours ago and am
> > > now getting regular (the followi
I've tried this patch, and it completely breaks IPv6 offloads, which DO work
btw,
our testers have a netperf stress test that does both ipv4 and ipv6, and
that test
fails 100% after this change.
I could go hacking at it myself but as its your code Pyun would you like to
resolve this issue?
Regard
On Wed, 27 Jan 2010 03:53:20 +0900 Tommi Lätti wrote about
Re: immense delayed write to file system (ZFS and UFS2), performance
issues:
TL> Well AFAIK WD certifies that there's no extra risk involved unless you
TL> go over 300,000 park cycles. On the other hand, my 9-month 1.5 TB green
TL> drive h
Hi--
On Jan 26, 2010, at 10:45 AM, Dan Naumov wrote:
> 9 Power_On_Hours 0x0032 100 100 000 Old_age
> Always - 136
> 193 Load_Cycle_Count 0x0032 199 199 000 Old_age
> Always - 5908
>
> The disks are of exact same model and look to be same firmware. Should
> I be worried that the newer disk has
On Tue, 26 Jan 2010 20:45:10 +0200
Dan Naumov wrote:
> The disks are of exact same model and look to be same firmware. Should
> I be worried that the newer disk has, in 136 hours reached a higher
> Load Cycle count twice as big as on the disk that's 5253 hours old?
There's a similar problem with
On 01/26/10 02:06, M. Vale wrote:
If you upgrade to FreeBSD 8 you have to remove the package libusb from your
system.
So remove the libusb package that HAL installs and then rebuild hal,
something like:
portmaster -rRfp hal-0.5.11_26
First time through just 'portmaster -r hal-0.5.11_26' is en
> You're welcome. I just feel as bad for you as for everyone else who
> has bought these obviously Windoze optimized harddrives. Unfortunately
> neither wdidle3 nor an updated firmware is available or functioning on
> the latest models in the Green series. At least that's what I've read
> from othe
On Tue, Jan 26, 2010 at 09:46:44AM -0500, John Baldwin wrote:
> On Tuesday 26 January 2010 2:33:37 am Peter Jeremy wrote:
> > I have just upgraded to 8-STABLE/amd64 from about 18 hours ago and am
> > now getting regular (the following pair of messages about every
> > minute) complaints as follows:
>
On Tue, 19 Jan 2010 10:28:58 +0100
Morgan Wesström wrote:
> Garrett Moore wrote:
> > The drives being discussed in my related thread (regarding poor
> > performance) are all WD Green drives. I have used wdidle3 to set
> > all of my drive timeouts to 5 minutes. I'll see what sort of
> > difference
On Tue, 26 Jan 2010, Dmitry Morozovsky wrote:
DM> Dear colleagues,
DM>
DM> stable/7 as of yesterday. Operation not permitted errors during
DM> `rsync -avH --fileflags' to ZFS. Investigating:
DM>
DM> r...@woozle:/usr/ports# la -io
DM> /w/backup/woozle/var/log/www/woozle/.access_log.9.gz.*
DM> 7
:I'm experiencing the same thing, except in my case it's most noticeable
:when writing to a USB flash drive with a FAT32 filesystem. It slows the
:entire system down, even if the data being written is coming from cache
:or a memory file system.
:
:I don't know if it's related. I'm running 8-ST
On Tue, 26 Jan 2010, Artem Belevich wrote:
AB> > will do, thank you. is fletcher4 faster?
AB> Not necessarily. But it does work much better as a checksum. See
AB> following link for the details.
AB>
AB> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6740597
Yes, I already read some a
Dear colleagues,
stable/7 as of yesterday. Operation not permitted errors during
`rsync -avH --fileflags' to ZFS. Investigating:
r...@woozle:/usr/ports# la -io
/w/backup/woozle/var/log/www/woozle/.access_log.9.gz.*
73613 -rw-r--r-- 1 root wheel - 1200326 Sep 25 2005
/w/backup/woozle/var/lo
No, it hasn't, I need time to look it over and be convinced of what he was
doing.
Jack
On Tue, Jan 26, 2010 at 9:14 AM, Nick Rogers wrote:
> looks like the patch mentioned in kern/141843 has not been applied to the
> tree?
>
> On Tue, Jan 26, 2010 at 9:00 AM, Nick Rogers wrote:
>
> > Is it ad
> will do, thank you. is fletcher4 faster?
Not necessarily. But it does work much better as a checksum. See
following link for the details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6740597
--Artem
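Switching a dataset's checksum algorithm is a one-line property change. A hedged sketch; "tank/data" is an assumed dataset name:

```shell
# Switch the checksum algorithm for one dataset and confirm it took effect.
# "tank/data" is an assumed dataset name -- substitute your own.
zfs set checksum=fletcher4 tank/data
zfs get checksum tank/data
# Note: the new algorithm applies only to blocks written from now on;
# existing blocks keep the checksum they were originally written with.
```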
looks like the patch mentioned in kern/141843 has not been applied to the
tree?
On Tue, Jan 26, 2010 at 9:00 AM, Nick Rogers wrote:
> Is it advisable to patch 8.0-RELEASE kernel sources with the latest
> (CURRENT) em driver (i.e., src/sys/dev/e1000)? It looks like there are some
> updates to the
On Tue, 26 Jan 2010 08:59:27 -0800 Chuck Swiger wrote
about Re: ZFS "zpool replace" problems:
CS> As a general matter of maintaining RAID systems, however, the approach
CS> to upgrading drive firmware on members of a RAID array should be to
CS> take down the entire container and offline the drive
Is it advisable to patch 8.0-RELEASE kernel sources with the latest
(CURRENT) em driver (i.e., src/sys/dev/e1000)? It looks like there are some
updates to the driver since 8.0-RELEASE that may fix some problems?
On Mon, Jan 25, 2010 at 8:31 PM, Joshua Boyd wrote:
> I've been having a similar pro
On Tue, 26 Jan 2010 08:46:19 -0800 Jeremy Chadwick
wrote about Re: ZFS "zpool replace" problems:
JC> - zpool offline
JC> - atacontrol detach ataX (where X = channel associated with disk)
JC> - Physically remove bad disk
JC> - Physically insert new disk
JC> - Wait 15 seconds for stuff to settle
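The procedure quoted above can be sketched as a command sequence. This is an illustrative reconstruction under assumed names: "tank", "ad6", and "ata3" are placeholders for your pool, the failing disk's device node, and its ATA channel; the quoted steps cut off before the reattach, so the tail is inferred from the usual ata(4) workflow of the era:

```shell
zpool offline tank ad6    # take the failing member out of the pool
atacontrol detach ata3    # detach the channel the disk hangs off
# ...physically swap the disk, then wait ~15 seconds for things to settle...
atacontrol attach ata3    # re-probe the channel so the new disk appears
zpool replace tank ad6    # resilver onto the new disk in the same slot
zpool status tank         # watch resilver progress
```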
Hi--
On Jan 26, 2010, at 8:25 AM, Gerrit Kühn wrote:
> CS> There's your problem-- the Silicon Image 3112/4 chips are remarkably
> CS> buggy and exhibit data corruption:
>
> Hm, sure?
I'm sure that the SII 3112 is buggy.
I am not sure that it is the primary or only cause of the problems you descr
Thanks a lot. That's a bummer. What are the chances of getting something like
that worked into arp(8) permanently?
On Tue, Jan 26, 2010 at 4:41 AM, Ruslan Ermilov wrote:
> On Mon, Jan 25, 2010 at 07:01:46PM -0800, Nick Rogers wrote:
> > Before 8.0-RELEASE, if I ran netstat -rn, it listed a separa
On Tue, Jan 26, 2010 at 04:03:20PM +0100, Gerrit Kühn wrote:
> On Tue, 26 Jan 2010 06:30:21 -0800 Jeremy Chadwick
> wrote about Re: ZFS "zpool replace" problems:
> JC> 2) How did you attach ad18? Did you tell the system about it using
> JC>atacontrol? If so, what commands did you use?
>
> Y
On Tue, 26 Jan 2010 08:27:37 -0800 Jeremy Chadwick
wrote about Re: ZFS "zpool replace" problems:
JC> Well, to be fair, we can't be 100% certain he got bit by that bug.
JC> It's possible/likely, but we don't know for certain at this point. We
JC> also don't know what brand hard disks he had conne
On Tue, Jan 26, 2010 at 08:15:27AM -0800, Chuck Swiger wrote:
> Hi--
>
> On Jan 26, 2010, at 7:03 AM, Gerrit Kühn wrote:
> [ ... ]
> > atap...@pci0:3:6:0: class=0x010401 card=0x02409005 chip=0x02401095
> > rev=0x02 hdr=0x00 vendor = 'Silicon Image Inc (Was: CMD Technology
> > Inc)' device
On Tue, 26 Jan 2010 08:15:27 -0800 Chuck Swiger wrote
about Re: ZFS "zpool replace" problems:
CS> > Meanwhile I took out the ad18 drive again and tried to use a
CS> > different drive. But that was listed as "UNAVAIL" with corrupted
CS> > data by zfs.
CS> There's your problem-- the Silicon Image
Hi--
On Jan 26, 2010, at 7:03 AM, Gerrit Kühn wrote:
[ ... ]
> atap...@pci0:3:6:0: class=0x010401 card=0x02409005 chip=0x02401095
> rev=0x02 hdr=0x00 vendor = 'Silicon Image Inc (Was: CMD Technology
> Inc)' device = 'SATA/Raid controller(2XSATA150) (SIL3112)'
> class = mass sto
Hi,
On Sat, Jan 23, 2010 at 04:20:54PM -0800, Kevin Oberman wrote:
> Also, at the start of this thread, the OP said that he did a buildkernel
> and a buildworld. This is broken and may produce a non-bootable
> kernel. Always buildworld and then buildkernel so that the new toolchain
> is used to bu
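The ordering the poster insists on can be sketched as the canonical source-upgrade sequence: world first (which builds the new toolchain), then the kernel with that toolchain. A hedged outline of the standard steps, not a substitute for reading /usr/src/Makefile and UPDATING:

```shell
cd /usr/src
make buildworld      # builds the new toolchain first, then the world with it
make buildkernel     # uses the freshly built toolchain, not the host's
make installkernel
# reboot to the new kernel (single-user mode is the conservative choice), then:
make installworld
mergemaster          # merge updated configuration files in /etc
```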
On Tue, Jan 26, 2010 at 09:49:45AM -0500, John Baldwin wrote:
> On Tuesday 26 January 2010 4:29:05 am Dmitry Sivachenko wrote:
> > Hello!
> >
> > I recompiled recent RELENG_7 and I get the following panic after
> > trying to kldload if_nve (interesting stack frames are 12, 13, 14 I guess).
> > Pre
On Tuesday 26 January 2010 4:29:05 am Dmitry Sivachenko wrote:
> Hello!
>
> I recompiled recent RELENG_7 and I get the following panic after
> trying to kldload if_nve (interesting stack frames are 12, 13, 14 I guess).
> Previous version of RELENG_7 (compiled in the middle of December)
> worked fi
On Tuesday 26 January 2010 2:33:37 am Peter Jeremy wrote:
> I have just upgraded to 8-STABLE/amd64 from about 18 hours ago and am
> now getting regular (the following pair of messages about every
> minute) complaints as follows:
>
> kernel: uma_zalloc_arg: zone "mbuf" with the following non-sleepab
On Tue, 26 Jan 2010 06:30:21 -0800 Jeremy Chadwick
wrote about Re: ZFS "zpool replace" problems:
JC> I'm removing the In-Reply-To mail headers for this thread, as you've
JC> now hijacked it for a different purpose. Please don't do this; start
JC> a new thread altogether. :-)
Thanks. You're per
I'm removing the In-Reply-To mail headers for this thread, as you've now
hijacked it for a different purpose. Please don't do this; start a new
thread altogether. :-)
On Tue, Jan 26, 2010 at 02:57:20PM +0100, Gerrit Kühn wrote:
> I am still busy replacing RE2-disks with updated drives. I came ac
On Tue, 19 Jan 2010 03:24:49 -0800 Jeremy Chadwick
wrote about Re: immense delayed write to file
system (ZFS and UFS2), performance issues:
JC> So which drive models above are experiencing a continual increase in
JC> SMART attribute 193 (Load Cycle Count)? My guess is that some of the
JC> WD Cav
On Tue, 26 Jan 2010, Alexander Leidinger wrote:
AL> > Well, after updating to fresh system scrub finished without errors, and
AL> > now
AL> > rsync is running, now copied 15G out of 150.
AL>
AL> You may want to switch the checksum algorithm to fletcher4. It (fletcher4
AL> the default instead of f
On Tue, Jan 26, 2010 at 01:00:29PM +0100, Bartosz Stec wrote:
> W dniu 2010-01-26 10:29, Dmitry Sivachenko pisze:
> > Hello!
> >
> > I recompiled recent RELENG_7 and I get the following panic after
> > trying to kldload if_nve (interesting stack frames are 12, 13, 14 I guess).
> > Previous version
On Mon, Jan 25, 2010 at 07:01:46PM -0800, Nick Rogers wrote:
> Before 8.0-RELEASE, if I ran netstat -rn, it listed a separate route for
> each host on the network, along with its MAC address. For example ...
>
> 172.20.172.17    00:02:b3:2f:64:6a  UHLW  1  105712  1500  vlan17  259
2010/1/26 Jeremy Chadwick :
> As mentioned a while back on the list[1], I worked on getting atacontrol
> to spit out SMART statistics for ATA disks. Specifically, this would be
> those using the standard ata(4) layer (including ataahci.ko and
> similar), but not ahci(4) (ahci.ko), which uses ATA/C
W dniu 2010-01-26 10:29, Dmitry Sivachenko pisze:
Hello!
I recompiled recent RELENG_7 and I get the following panic after
trying to kldload if_nve (interesting stack frames are 12, 13, 14 I guess).
Previous version of RELENG_7 (compiled in the middle of December)
worked fine. Last few days I wa
On Monday, 25 of January 2010 11:57:44 Torfinn Ingolfsen wrote:
> On Sun, 24 Jan 2010 23:47:46 +
>
> Krzysztof Dajka wrote:
> > I did check my keyboard with FreeBSD 7.2 and it wasn't supported either.
> > Xev also didn't return anything.
>
> Did you try this: http://www.freshports.org/misc/ho
Quoting Dmitry Morozovsky (from Tue, 26 Jan 2010
01:16:28 +0300 (MSK)):
On Mon, 25 Jan 2010, Dmitry Morozovsky wrote:
DM> PJD> > I had a crash during rsync to ZFS today:
DM> PJD>
DM> PJD> Do you have recent 7-STABLE? Not sure if it was the same before MFC,
DM>
DM> r...@woozle:/var/crash# u
2010/1/25 Dan Langille
>
> On Mon, January 25, 2010 7:20 am, Ruben van Staveren wrote:
> >
> > On 13 Nov 2009, at 2:59, Dan Langille wrote:
> >
> >> -BEGIN PGP SIGNED MESSAGE-
> >> Hash: SHA1
> >>
> >> After upgrading to 8.0-PRERELEASE today, I'm seeing hald at 100% on both
> >> my laptop
Hello!
I recompiled recent RELENG_7 and I get the following panic after
trying to kldload if_nve (interesting stack frames are 12, 13, 14 I guess).
Previous version of RELENG_7 (compiled in the middle of December)
worked fine. Last few days I was trying to re-cvsup and always get the
same panic.
On 01/25/10 19:53, Jeremy Chadwick wrote:
That's just the thing -- I/O transactions, not to mention ZFS itself,
are CPU-bound. If you start seeing slow I/O as a result of the Atom's
limitations, I don't think there's anything that can be done about it.
Choose wisely. :-)
It's not *that* terr