On Fri, Sep 29, 2017 at 8:23 PM, Pete French wrote:
> Hmm, by ntpd I think you mean the ntp client? You will have to disable
> timesync if you run the ntp client:
> sysctl hw.hvtimesync.sample_thresh=-1
> sysctl hw.hvtimesync.ignore_sync=1
>
> They interfere with each other.

Oh! Does this apply to machines in Azure Hyper-V as well as on standalone
Hyper-V?
On Fri, 29 Sep 2017 13:31:22 +0800, Sepherosa Ziehau wrote:

Hmm, by ntpd I think you mean the ntp client? You will have to disable
timesync if you run the ntp client:

sysctl hw.hvtimesync.sample_thresh=-1
sysctl hw.hvtimesync.ignore_sync=1

They interfere with each other.
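To make the two timesync-disabling settings survive a reboot they can go into /etc/sysctl.conf; a minimal sketch, where the sysctl names and values come from the advice above but the file placement is just the standard FreeBSD convention, not something stated in the thread:

```conf
# /etc/sysctl.conf -- stop the Hyper-V timesync driver from adjusting
# the clock, so it does not fight with ntpd (values from the thread)
hw.hvtimesync.sample_thresh=-1
hw.hvtimesync.ignore_sync=1
```

Setting them with sysctl(8) as shown in the message takes effect immediately; the file only matters at the next boot.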
On Fri, Sep 29, 2017 at 12:36 PM, Paul Koch wrote:

> On Thu, 14 Sep 2017 09:54:56 +0800, Sepherosa Ziehau wrote:
>> If you have any updates on this, please let me know. There is still
>> time for 10.4.

We are still playing around with this in the lab...
Running a similar setup as the customer:
Microsoft Windows Server 2012 R2 ...
On Thu, 14 Sep 2017 09:54:56 +0800, Sepherosa Ziehau wrote:
> If you have any updates on this, please let me know. There is still
> time for 10.4.

Still working on it. We are trying to replicate the FreeBSD 11.1
running in a Hyper-V VM setup in our test lab. We have ...
If you have any updates on this, please let me know. There is still
time for 10.4.

On Thu, Sep 7, 2017 at 11:04 PM, Paul Koch wrote:
> On Thu, 7 Sep 2017 13:51:11 +0800, Sepherosa Ziehau wrote:
>> Weird, your traffic pattern does not even belong to ...
On Thu, 7 Sep 2017 13:51:11 +0800, Sepherosa Ziehau wrote:
> Weird, your traffic pattern does not even belong to anything heavy.
> Sending is mainly UDP, which will never be able to saturate the TX
> buffer ring, causing the RXBUF ACK sending failure. This is weird.

It's a bit ...
Weird, your traffic pattern does not even belong to anything heavy.
Sending is mainly UDP, which will never be able to saturate the TX
buffer ring, causing the RXBUF ACK sending failure. This is weird.

Anyhow, make sure to test this patch:
https://people.freebsd.org/~sephe/hn_inc_txbr.diff
On Thu, Sep 7, 2017, Sepherosa Ziehau wrote:

Ignore the hn_dec_txdesc.diff; please try this one instead, it should be
more effective:
https://people.freebsd.org/~sephe/hn_inc_txbr.diff
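For anyone following along, testing such a .diff is plain patch(1) work. Below is a self-contained demonstration with a throwaway file and a made-up one-line diff; for the real thing you would fetch hn_inc_txbr.diff and run patch from the top of the FreeBSD source tree (conventionally /usr/src, with -p0 or -p1 depending on how the diff was generated), then rebuild the kernel. None of those paths are stated in the thread.

```shell
# Self-contained demo of applying a unified diff with patch(1).
# The file name and diff content here are invented for illustration.
cd "$(mktemp -d)"
printf 'old line\n' > hn.c

# A minimal unified diff replacing the single line of hn.c
cat > fix.diff <<'EOF'
--- hn.c
+++ hn.c
@@ -1 +1 @@
-old line
+new line
EOF

patch -p0 < fix.diff   # prints "patching file hn.c"
cat hn.c               # the file now reads "new line"
```

-p0 means the paths in the diff headers are used as-is relative to the current directory; a diff generated with `diff -r a/ b/` style prefixes would need -p1 instead.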
On Thu, 7 Sep 2017 10:22:40 +0800, Sepherosa Ziehau wrote:

Is it possible to tell me your workload? e.g. TX heavy or RX heavy.
TSO enabled or not? Details like how the send syscalls are issued will
be interesting. And your Windows version, including the patch level,
etc.

Please try the following patch:
https://people.freebsd.org/~sephe/hn_dec_txdesc.diff

Paul Koch (paul.koch at akips.com) wrote on Wed, 6 Sep 2017 09:33:26 UTC:
> We recently moved our software from 11.0-p9 to 11.1-p1, but looks like there
> is a regression in 11.1-p1 running on HyperV (Windows/HyperV 2012 R2) where
> the virtual hn0 interface hangs with the following kernel messages: ...
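The "how the send syscalls are issued" detail being asked for can be made concrete with a minimal sketch. This mirrors the mostly-UDP send pattern described earlier in the thread; the address, port, count, and datagram size are all hypothetical, not taken from the thread:

```python
import socket

def send_udp_burst(host="127.0.0.1", port=9999, count=100, size=512):
    """Issue `count` sendto() syscalls of `size` bytes each and return
    the total bytes handed to the kernel.  UDP has no retransmit queue,
    so each datagram passes through the TX buffer ring exactly once."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"x" * size
    total = 0
    try:
        for _ in range(count):
            # One sendto() == one send syscall; this per-call pattern is
            # the kind of workload detail the question above is after.
            total += sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return total
```

On the FreeBSD guest itself, `netstat -w 1 -I hn0` gives a quick read on whether the live workload is TX- or RX-heavy, and `ifconfig hn0` shows whether TSO appears among the enabled interface options.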
On 6/9/17 7:02 pm, Pete French wrote:
> We recently moved our software from 11.0-p9 to 11.1-p1, but looks like there
> is a regression in 11.1-p1 running on HyperV (Windows/HyperV 2012 R2) where
> the virtual hn0 interface hangs with the following kernel messages:
> hn0: on vmbus0
> hn0: Ethernet ...
On Wed, 6 Sep 2017 12:02:43 +0100, Pete French wrote:
> > We recently moved our software from 11.0-p9 to 11.1-p1, but looks like
> > there is a regression in 11.1-p1 running on HyperV ...
Not sure if -stable is the right mailing list for this one.

We recently moved our software from 11.0-p9 to 11.1-p1, but looks like there
is a regression in 11.1-p1 running on HyperV (Windows/HyperV 2012 R2) where
the virtual hn0 interface hangs with the following kernel messages:

hn0: on vmbus0
hn0: Ethernet address: 00:15:5d:31:21:0f
hn0: link ...