Re: packet loss on ixgbe using vlans and ipv6

2010-07-19 Thread John Hay
On Mon, Jul 19, 2010 at 01:46:18PM -0700, Jeremy Chadwick wrote: > On Mon, Jul 19, 2010 at 10:25:42PM +0200, John Hay wrote: > > I have a Dell T710 with 4 X 10G ethernet interfaces (2 X Dual port Intel > > 82599 cards). It is running FreeBSD RELENG_8 last updated on July 13. > > > > What I see is

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Adam Vande More
On Mon, Jul 19, 2010 at 9:07 PM, Dan Langille wrote: > I think it's because you pull the old drive, boot with the new drive, >> the controller re-numbers all the devices (ie da3 is now da2, da2 is >> now da1, da1 is now da0, da0 is now da6, etc), and ZFS thinks that all >> the drives have changed
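The renumbering problem described above (da3 becoming da2, and so on) is commonly avoided by giving each disk a stable GPT label and building the pool from those labels instead of raw daN names. A minimal sketch, assuming GPT-labeled disks (the device and label names here are illustrative, not from the thread):

```shell
# Hypothetical sketch: give each disk a stable GPT label so ZFS does not
# depend on the kernel's daN numbering (names are illustrative assumptions).
gpart create -s gpt da2
gpart add -t freebsd-zfs -l disk2 da2
# The pool is then built from the persistent /dev/gpt/* names, which
# survive controller renumbering across reboots:
zpool create tank raidz1 gpt/disk0 gpt/disk1 gpt/disk2
```

With labels, pulling and reinserting drives changes only the daN assignments, not the gpt/diskN names ZFS records in the pool.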

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Dan Langille
On 7/19/2010 12:15 PM, Freddie Cash wrote: On Mon, Jul 19, 2010 at 8:56 AM, Garrett Moore wrote: So you think it's because when I switch from the old disk to the new disk, ZFS doesn't realize the disk has changed, and thinks the data is just corrupt now? Even if that happens, shouldn't the pool

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Daniel O'Connor
On 20/07/2010, at 10:55, Clifton Royston wrote: > The space sacrificed is trivial compared to the convenience and safety > net. > > I think I got both those suggestions on this list, and I would hope > (assume?) that they have equivalents under ZFS. I partitioned my ZFS disks using GPT so I cou
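The "space sacrificed" suggestion above refers to deliberately partitioning each disk slightly smaller than its full capacity, so a replacement drive that comes up a few hundred MB short (as happens later in this thread) still fits. A hedged sketch, with sizes and names that are illustrative assumptions rather than values from the thread:

```shell
# Sketch: leave slack at the end of each disk so a marginally smaller
# replacement still fits (size and names are illustrative assumptions).
gpart create -s gpt da5
gpart add -t freebsd-zfs -l disk5 -s 1397G da5   # a few GB short of 1.5 TB
# A replacement then targets the partition, not the raw disk:
zpool replace tank da5 gpt/disk5
```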

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Clifton Royston
On Mon, Jul 19, 2010 at 06:28:16PM -0400, Garrett Moore wrote: > Well thank you very much Western Digital for your absolutely pathetic RMA > service sending me an inferior drive. I'll call tomorrow and see what can be > done; I'm going to insist on these 00R6B0 drives being sent back, and being > g

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
Well thank you very much Western Digital for your absolutely pathetic RMA service sending me an inferior drive. I'll call tomorrow and see what can be done; I'm going to insist on these 00R6B0 drives being sent back, and being given a drive of >= 1,500,301,910,016 bytes capacity. At least now I le
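Before accepting an RMA replacement into the pool, the exact byte capacity can be checked against the old drive's 1,500,301,910,016 bytes with diskinfo. A minimal sketch (the device name is an assumption):

```shell
# Sketch: verify the replacement drive's exact capacity before resilvering.
# diskinfo is standard FreeBSD; da3 is an assumed device name.
diskinfo -v da3
# The "mediasize in bytes" line must be >= the drive being replaced,
# or zpool replace will refuse the smaller disk.
```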

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Adam Vande More
On Mon, Jul 19, 2010 at 5:04 PM, Garrett Moore wrote: > Well, hotswapping worked, but now I have a totally different problem. Just > for reference: > # zpool offline tank da3 > # camcontrol stop da3 > > # camcontrol rescan all > <'da3 lost device, removing device entry'> > # camcontrol rescan all

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Freddie Cash
On Mon, Jul 19, 2010 at 3:04 PM, Garrett Moore wrote: > Well, hotswapping worked, but now I have a totally different problem. Just Yay. :) > for reference: > # zpool offline tank da3 > # camcontrol stop da3 > > # camcontrol rescan all > <'da3 lost device, removing device entry'> > # camcontrol

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
Well, hotswapping worked, but now I have a totally different problem. Just for reference: # zpool offline tank da3 # camcontrol stop da3 # camcontrol rescan all <'da3 lost device, removing device entry'> # camcontrol rescan all <'da3 at mpt0 ...', so new drive was found! yay> # zpool replace tank
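The hotswap sequence quoted in this message can be laid out as a block. The final replace line is truncated in the archive, so its arguments here are an assumption, not a reconstruction of the original mail:

```shell
# The sequence described in the message, sketched (device names are from
# the thread; the final replace arguments are an assumption).
zpool offline tank da3      # take the failing disk out of the pool
camcontrol stop da3         # spin it down before pulling it
# <physically swap the drives>
camcontrol rescan all       # first rescan: old da3 detaches
camcontrol rescan all       # second rescan: new drive attaches as da3
zpool replace tank da3      # begin resilvering onto the new disk
```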

Re: em(4) duplex problems with 82541EI on RELENG_8, -CURRENT on PowerEdge 1850

2010-07-19 Thread Doug Barton
On Mon, 19 Jul 2010, Brian A. Seklecki wrote: On Thu, 2010-07-15 at 10:53 -0700, Jack Vogel wrote: The fact that I WISH it to be MFC'd doesn't mean that I am actually given permission to do so. It seems 8.1 release was tagged on Saturday so we're proper-f* I can appreciate your frustra

Re: panic: handle_written_inodeblock: bad size

2010-07-19 Thread Mikhail T.
19.07.2010 07:31, Jeremy Chadwick wrote: If you boot the machine in single-user, and run fsck manually, are there any errors? Thanks, Jeremy... I wish, there was a way to learn, /which/ file-system is giving trouble... However, after sending the question out last night, I tried to pkg
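The single-user fsck suggested above might look like the following sketch; the device name is an illustrative assumption:

```shell
# Sketch: from single-user mode, check filesystems manually so the
# offending one can be identified (the explicit device is an assumption).
fsck -p                # preen every filesystem listed in /etc/fstab
fsck -y /dev/ada0s1a   # or check one filesystem, answering yes to repairs
```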

Re: packet loss on ixgbe using vlans and ipv6

2010-07-19 Thread Jeremy Chadwick
On Mon, Jul 19, 2010 at 10:25:42PM +0200, John Hay wrote: > I have a Dell T710 with 4 X 10G ethernet interfaces (2 X Dual port Intel > 82599 cards). It is running FreeBSD RELENG_8 last updated on July 13. > > What I see is packet loss (0 - 40%) on IPv6 packets in vlans, when the > machine is not t

Re: panic: handle_written_inodeblock: bad size

2010-07-19 Thread Jeremy Chadwick
On Mon, Jul 19, 2010 at 11:55:59AM -0400, Mikhail T. wrote: > 19.07.2010 07:31, Jeremy Chadwick wrote: > >If you boot the machine in single-user, and run fsck manually, are there > >any errors? > Thanks, Jeremy... I wish, there was a way to learn, /which/ > file-system is giving trouble... Ho

packet loss on ixgbe using vlans and ipv6

2010-07-19 Thread John Hay
Hi, I have a Dell T710 with 4 X 10G ethernet interfaces (2 X Dual port Intel 82599 cards). It is running FreeBSD RELENG_8 last updated on July 13. What I see is packet loss (0 - 40%) on IPv6 packets in vlans, when the machine is not the originator of the packets. Let me try to describe a little
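A vlan-over-ixgbe configuration of the kind described above might be set up as follows. The interface names, vlan tag, and address are illustrative assumptions, not the poster's actual configuration:

```shell
# Hedged sketch of a vlan on an ix (82599) interface; names, tag, and
# address are illustrative assumptions, not from the thread.
ifconfig vlan100 create
ifconfig vlan100 vlan 100 vlandev ix0
ifconfig vlan100 inet6 2001:db8::1/64 up
# Disabling hardware vlan tag offload is a common first troubleshooting
# step when vlan traffic misbehaves on a particular NIC:
ifconfig ix0 -vlanhwtag
```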

Re: deadlock or bad disk ? RELENG_8

2010-07-19 Thread Jeremy Chadwick
On Mon, Jul 19, 2010 at 08:37:50AM -0400, Mike Tancsa wrote: > At 11:34 PM 7/18/2010, Jeremy Chadwick wrote: > >> > >> yes, da0 is a RAID volume with 4 disks behind the scenes. > > > >Okay, so can you get full SMART statistics for all 4 of those disks? > >The adjusted/calculated values for SMART th

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread John Hawkes-Reed
On 19/07/2010 17:52, Garrett Moore wrote: I'm nervous to trust the hotswap features and camcontrol to set things up properly, but I guess I could try it. When I first set the system up before I put data on the array I tried the hotswap functionality and drives wouldn't always re-attach when reins

Re: deadlock or bad disk ? RELENG_8

2010-07-19 Thread Jeremy Chadwick
On Mon, Jul 19, 2010 at 08:41:40AM -0400, Mike Tancsa wrote: > At 11:58 PM 7/18/2010, Jeremy Chadwick wrote: > > >So I believe this indicates the message only gets printed during swapin, > >not swapout. Meaning it's happening during an I/O read from da0. > > Yes, and from my existing ssh session

Re: Strange video mode output with VESA

2010-07-19 Thread Jung-uk Kim
On Friday 16 July 2010 07:18 pm, Jung-uk Kim wrote: > On Friday 16 July 2010 03:22 pm, Jung-uk Kim wrote: > > On Friday 16 July 2010 03:00 pm, David DEMELIER wrote: > > > 2010/6/19 paradox : > > > >>On Wednesday 02 June 2010 04:25 pm, David DEMELIER wrote: > > > >>> Hi there, > > > >>> > > > >>> I

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
I'm nervous to trust the hotswap features and camcontrol to set things up properly, but I guess I could try it. When I first set the system up before I put data on the array I tried the hotswap functionality and drives wouldn't always re-attach when reinserted, even if I fiddled with camcontrol, bu

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Freddie Cash
On Mon, Jul 19, 2010 at 9:33 AM, Garrett Moore wrote: > I forgot to ask in the last email, is there a way to convert from Z1 to Z2 > without losing data? I actually have far more storage than I need so I'd > consider going to Z2. No, unfortunately it's not currently possible to change vdev types
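Since a raidz1 vdev cannot be converted to raidz2 in place, the usual route is replication to a freshly built raidz2 pool. A sketch under the assumption that enough spare disks exist for a second pool (pool, snapshot, and device names are illustrative):

```shell
# Sketch: migrate to raidz2 via snapshot replication (all names are
# illustrative assumptions; requires disks for a second pool).
zpool create newtank raidz2 da8 da9 da10 da11 da12 da13 da14 da15
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F -d newtank
# After verifying the copy, the old pool can be destroyed and its
# disks reused.
```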

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Freddie Cash
On Mon, Jul 19, 2010 at 9:32 AM, Garrett Moore wrote: > The data on the disks is not irreplaceable so if I lose the array it isn't > the end of the world but I would prefer not to lose it as it would be a pain > to get all of the data again. > > Freddie's explanation is reasonable, but any ideas w

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
I forgot to ask in the last email, is there a way to convert from Z1 to Z2 without losing data? I actually have far more storage than I need so I'd consider going to Z2. On Mon, Jul 19, 2010 at 12:18 PM, Adam Vande More wrote: > On Mon, Jul 19, 2010 at 10:56 AM, Garrett Moore wrote: > >> So you

RE: update on kern/145064?

2010-07-19 Thread Petr Holub
Dear stable list, > is there any update on bug hunting of the issue described here? > http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/145064 > I've attempted to install PC BSD 8.1-RC1 on my desktop and I'm facing > the same problem with the Marvell SATA driver. Therefore, PC BSD is > not installab

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
The data on the disks is not irreplaceable so if I lose the array it isn't the end of the world but I would prefer not to lose it as it would be a pain to get all of the data again. Freddie's explanation is reasonable, but any ideas why it didn't happen when I replaced my first dead drive (da5)? T

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Adam Vande More
On Mon, Jul 19, 2010 at 10:56 AM, Garrett Moore wrote: > So you think it's because when I switch from the old disk to the new disk, > ZFS doesn't realize the disk has changed, and thinks the data is just > corrupt now? Even if that happens, shouldn't the pool still be available, > since it's RAIDZ

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Freddie Cash
On Mon, Jul 19, 2010 at 8:56 AM, Garrett Moore wrote: > So you think it's because when I switch from the old disk to the new disk, > ZFS doesn't realize the disk has changed, and thinks the data is just > corrupt now? Even if that happens, shouldn't the pool still be available, > since it's RAIDZ1

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
So you think it's because when I switch from the old disk to the new disk, ZFS doesn't realize the disk has changed, and thinks the data is just corrupt now? Even if that happens, shouldn't the pool still be available, since it's RAIDZ1 and only one disk has gone away? I don't have / on ZFS; I'm o

Re: 8.1-PRERELEASE: CPU packages not detected correctly

2010-07-19 Thread Oliver Fromme
Oliver Fromme wrote: > Jung-uk Kim wrote: > > On Thursday 15 July 2010 01:56 pm, Andriy Gapon wrote: > > > on 15/07/2010 19:57 Oliver Fromme said the following: > > > > I patched topo_probe() so it calls topo_probe_0x4() after > > > > topo_probe_0xb() if cpu_cores is still 0. I think this >

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Freddie Cash
On Mon, Jul 19, 2010 at 8:21 AM, Garrett Moore wrote: > I have an 8-drive ZFS array consisting of WD15EADS drives. One of my disks > has started to fail, so I got a replacement disk. I have replaced a disk > before by: > >  zpool offline tank /dev/da5 > shutting down, swapping from old disk to new

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
Oops - shouldn't have forgotten that, sorry. FreeBSD leviathan 8.0-RELEASE FreeBSD 8.0-RELEASE #0: Sat Nov 21 15:02:08 UTC 2009 r...@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 On Mon, Jul 19, 2010 at 11:24 AM, Jeremy Chadwick wrote: > On Mon, Jul 19, 2010 at 11:21:38AM -0400, Ga

Re: Problems replacing failing drive in ZFS pool

2010-07-19 Thread Jeremy Chadwick
On Mon, Jul 19, 2010 at 11:21:38AM -0400, Garrett Moore wrote: > I have an 8-drive ZFS array consisting of WD15EADS drives. One of my disks > has started to fail, so I got a replacement disk. I have replaced a disk > before by: > > zpool offline tank /dev/da5 > shutting down, swapping from old di

Problems replacing failing drive in ZFS pool

2010-07-19 Thread Garrett Moore
I have an 8-drive ZFS array consisting of WD15EADS drives. One of my disks has started to fail, so I got a replacement disk. I have replaced a disk before by: zpool offline tank /dev/da5 shutting down, swapping from old disk to new disk booting zpool replace tank /dev/da5 This worked fine. Thi
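The cold-swap procedure the poster describes as having worked before can be sketched end-to-end; the preview is truncated in the archive, so the closing status check is an assumption:

```shell
# The procedure from the message, sketched (the final status check is
# an assumption added for completeness).
zpool offline tank /dev/da5
shutdown -p now
# <swap the old disk for the new disk, power the machine back on>
zpool replace tank /dev/da5
zpool status tank     # watch the resilver progress
```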

Re: em(4) duplex problems with 82541EI on RELENG_8, -CURRENT on PowerEdge 1850

2010-07-19 Thread Brian A. Seklecki
On Thu, 2010-07-15 at 10:53 -0700, Jack Vogel wrote: > The fact that I WISH it to be MFC'd doesn't mean that I am actually > given permission to do so. It seems 8.1 release was tagged on Saturday so we're proper-fucked (we will have to run local patches on all 1850s and 2850s for the duratio

Re: deadlock or bad disk ? RELENG_8

2010-07-19 Thread Sascha Holzleiter
> > just hangs, I guess because its having trouble reading from the disk. > If I hit CTRL+t, I see > > load: 0.00 cmd: csh 73167 [vnread] 22.32r 0.00u 0.00s 0% 3232k > load: 0.00 cmd: csh 73167 [vnread] 22.65r 0.00u 0.00s 0% 3232k > load: 0.00 cmd: csh 73167 [vnread] 22.96r 0.00u 0.00s 0% 3232

Re: deadlock or bad disk ? RELENG_8

2010-07-19 Thread Mike Tancsa
At 12:11 AM 7/19/2010, Jeremy Chadwick wrote: On Sun, Jul 18, 2010 at 08:58:44PM -0700, Jeremy Chadwick wrote: > I took a look at the RELENG_8 code responsible for printing this > message: src/sys/vm/swap_pager.c > > [...] > 1086 static int > 1087 swap_pager_getpages(vm_object_t object, vm_page_t

Re: deadlock or bad disk ? RELENG_8

2010-07-19 Thread Mike Tancsa
At 11:58 PM 7/18/2010, Jeremy Chadwick wrote: So I believe this indicates the message only gets printed during swapin, not swapout. Meaning it's happening during an I/O read from da0. Yes, and from my existing ssh sessions, it would _seem_ no disk IO was completing. ie I tried a killall -9

Re: deadlock or bad disk ? RELENG_8

2010-07-19 Thread Mike Tancsa
At 11:34 PM 7/18/2010, Jeremy Chadwick wrote: > > yes, da0 is a RAID volume with 4 disks behind the scenes. Okay, so can you get full SMART statistics for all 4 of those disks? The adjusted/calculated values for SMART thresholds won't be helpful here, one will need the actual raw SMART data. I

Re: panic: handle_written_inodeblock: bad size

2010-07-19 Thread Jeremy Chadwick
On Mon, Jul 19, 2010 at 02:40:29AM -0400, Mikhail T. wrote: > An 8.1-prerelease machine I have throws the panic in subject quite > often. Does anyone care? Is this evidence of some filesystem > corruption here, or a known problem that's (almost) solved already? > > The stacks all look the same: >

panic: handle_written_inodeblock: bad size

2010-07-19 Thread Mikhail T.
An 8.1-prerelease machine I have throws the panic in subject quite often. Does anyone care? Is this evidence of some filesystem corruption here, or a known problem that's (almost) solved already? The stacks all look the same: panic: handle_written_inodeblock: bad size ts_to_ct(1279145603

Re: Reporting Functional Server Models

2010-07-19 Thread Joel Dahl
On 18-07-2010 12:38, Sean Bruno wrote: > I spent some time last week validating the 7, 8 and -CURRENT on > different vendor hardware over here in my lab. > > Is there a current h/w compatibility list that folks are maintaining > that I can update with my findings? I don't think there is such a l