Jarod Wilson wrote:
>On 2017-11-02 9:11 PM, Jay Vosburgh wrote:
[...]
>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>> index 18b58e1376f1..6f89f9981a6c 100644
>> --- a/drivers/net/bonding/bond_main.c
>> +++ b/drivers/net/bonding/bond_main.c
>> @@ -2046,6 +2 [...]
On 2017-11-02 9:11 PM, Jay Vosburgh wrote:
Alex Sidorenko wrote:
[...]

I think I see the flaw in the logic.

1) bond_miimon_inspect finds link_state = 0, then makes a call to
bond_propose_link_state(BOND_LINK_FAIL), setting link_new_state to
BOND_LINK_FAIL. _inspect then sets s[...]
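The two-phase update Jay is walking through can be modeled outside the
kernel. Below is a minimal standalone sketch, not bond_main.c itself: the
names bond_propose_link_state, link and link_new_state come from the
thread; everything else, including the assumption that the commit phase
can be skipped for a whole pass (as happens when the monitor cannot take
RTNL and reschedules), is simplified for illustration.

    /*
     * Standalone sketch (not kernel code) of the propose/commit
     * link-state update discussed above. It shows the hazard: a
     * proposal made by _inspect that is never committed survives
     * into later passes, and a later commit can then apply a stale
     * BOND_LINK_FAIL even though the carrier has recovered.
     */
    #include <stdio.h>

    enum { BOND_LINK_UP, BOND_LINK_FAIL, BOND_LINK_DOWN, BOND_LINK_BACK };

    struct slave {
            int link;           /* committed state */
            int link_new_state; /* proposed state */
    };

    static void bond_propose_link_state(struct slave *s, int state)
    {
            s->link_new_state = state;
    }

    static void bond_commit_link_state(struct slave *s)
    {
            s->link = s->link_new_state;
    }

    int main(void)
    {
            struct slave s = { .link = BOND_LINK_UP,
                               .link_new_state = BOND_LINK_UP };

            /* Pass 1: carrier seen down; _inspect proposes FAIL,
             * but assume the commit phase never runs this pass. */
            bond_propose_link_state(&s, BOND_LINK_FAIL);

            /* Pass 2: carrier is back up. If the UP case does not
             * re-propose BOND_LINK_UP, the stale proposal remains
             * and a commit now brings the slave down anyway. */
            bond_commit_link_state(&s);

            printf("link=%d (BOND_LINK_FAIL=%d): stale proposal committed\n",
                   s.link, BOND_LINK_FAIL);
            return 0;
    }

Compiled and run, this commits BOND_LINK_FAIL even though the carrier was
back up by the second pass, the stale-proposal situation the numbered
stages above describe.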
On 2017-11-03 3:30 PM, Alex Sidorenko wrote:
Indeed, we do not print the slave's ->link_new_state on each entry - so it
is quite possible that we are at stage 6.

It is even possible that this has something to do with how NM
(NetworkManager) initially created the bonds.

The customer says that the problem occurs only once, after a host reboot;
after that, failover works fine [...]
Alex Sidorenko wrote:
Jay,

While the scenario you describe makes sense, it does not match what we see
in our tests.

The instrumentation prints info every time we enter bond_mii_monitor(),
bond_miimon_inspect() and bond_miimon_commit(), and every time we commit a
link state. And we print a message every time we propose [...]
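Alex's actual instrumentation patch is not shown in these excerpts. As a
rough illustration of the kind of trace being described, extended per the
message above to also print ->link_new_state on each entry, a standalone
mock might look like the following; the slave name, message format and
pass structure are invented.

    /*
     * Standalone mock (not the real instrumentation) of an entry
     * trace for the monitor phases. Including link_new_state on
     * every entry makes a stale proposal visible immediately.
     */
    #include <stdio.h>

    enum { BOND_LINK_UP, BOND_LINK_FAIL, BOND_LINK_DOWN, BOND_LINK_BACK };

    struct slave {
            const char *name;
            int link;               /* committed state */
            int link_new_state;     /* proposed state */
    };

    static void trace(const char *func, const struct slave *s)
    {
            printf("%s: slave %s link=%d link_new_state=%d\n",
                   func, s->name, s->link, s->link_new_state);
    }

    static void mii_monitor_pass(struct slave *s, int carrier, int can_commit)
    {
            trace("bond_mii_monitor", s);

            trace("bond_miimon_inspect", s);
            if (!carrier && s->link == BOND_LINK_UP)
                    s->link_new_state = BOND_LINK_FAIL;  /* propose */

            if (!can_commit)        /* e.g. could not take RTNL */
                    return;

            trace("bond_miimon_commit", s);
            if (s->link != s->link_new_state) {
                    printf("committing %s: %d -> %d\n",
                           s->name, s->link, s->link_new_state);
                    s->link = s->link_new_state;         /* commit */
            }
    }

    int main(void)
    {
            /* "eth0" is a placeholder name, not from the thread */
            struct slave s = { "eth0", BOND_LINK_UP, BOND_LINK_UP };

            mii_monitor_pass(&s, 0, 0); /* carrier down, commit skipped */
            mii_monitor_pass(&s, 1, 1); /* carrier back; stale FAIL commits */
            return 0;
    }

Running the two passes shows link_new_state=BOND_LINK_FAIL already on
entry to the second pass, which is exactly the signal that would
distinguish the stage-6 case from a fresh proposal.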
Alex Sidorenko wrote:
The problem has been found while trying to deploy RHEL7 on the HPE Synergy
platform; it is seen both in the customer's environment and in the HPE test
lab.

There are several bonds configured in TLB mode with miimon=100; all other
options are default. Slaves are connected to VirtualConnect modules.
Rebooting [...]
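For anyone trying to reproduce: a bond of the shape described (balance-tlb,
miimon=100, everything else default) can be hand-built with iproute2 as
below. The interface names are placeholders, and note that in the report
the bonds were created by NM rather than by hand, which may itself matter.

    # illustrative reproduction sketch; eth0/eth1 are placeholder names
    ip link add bond0 type bond mode balance-tlb miimon 100
    ip link set eth0 down
    ip link set eth0 master bond0
    ip link set eth1 down
    ip link set eth1 master bond0
    ip link set bond0 up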