RE: 6509 roaming disconnects part2 [7:32449]
In my past experience it's always best to let a device just keep running. I've always looked at it and compared it to other electrical devices: constant power is less damaging to electrical equipment than continuous off/on cycling, much like the life span of a light bulb. As we know, Cat 6509s are expensive light bulbs.

Chris

Puckette, Larry (TIFPC) wrote:

Hello again group. I have another question to propose to you, but first an updated history of the issue at hand. We have a 6509 that serves as the core of a server farm that has both NT and Unix boxes on it. In the beginning there were infrequent link drops between servers and the switch, with no pattern that would isolate a card or VLAN, etc., and then the frequency increased until it became a constant problem. Sniffer information gave very little to hang our hat on, with 99% of its findings being two messages:

  Too many retransmissions TCP
  octets/s: current value 932,384. High Threshold=500,000.

An example of the interesting messages in the switch's logging buffer:

IPPS6509 (enable) show logging buffer
2002 Jan 16 02:15:44 %PAGP-5-PORTFROMSTP:Port 8/23 left bridge port 8/23
2002 Jan 16 02:15:49 %PAGP-5-PORTTOSTP:Port 8/22 joined bridge port 8/22
2002 Jan 16 02:15:49 %PAGP-5-PORTFROMSTP:Port 6/23 left bridge port 6/23
2002 Jan 16 02:15:50 %SPANTREE-6-PORTFWD: Port 8/22 state in VLAN 172 changed to forwarding
2002 Jan 16 02:16:01 %PAGP-5-PORTTOSTP:Port 8/23 joined bridge port 8/23
2002 Jan 16 02:16:02 %SPANTREE-6-PORTFWD: Port 8/23 state in VLAN 172 changed to forwarding
2002 Jan 16 02:16:06 %PAGP-5-PORTTOSTP:Port 6/23 joined bridge port 6/23
2002 Jan 16 02:16:07 %SPANTREE-6-PORTFWD: Port 6/23 state in VLAN 172 changed to forwarding
2002 Jan 16 03:41:28 %PAGP-5-PORTFROMSTP:Port 8/17 left bridge port 8/17
2002 Jan 16 03:41:29 %PAGP-5-PORTFROMSTP:Port 7/16 left bridge port 7/16
2002 Jan 16 03:41:35 %SYS-6-CFG_CHG:Global block changed by SNMP/216.141.33.71/
2002 Jan 16 03:41:47 %PAGP-5-PORTTOSTP:Port 8/17 joined bridge port 8/17
2002 Jan 16 03:41:47 %PAGP-5-PORTTOSTP:Port 7/16 joined bridge port 7/16
2002 Jan 16 03:41:48 %SPANTREE-6-PORTFWD: Port 7/16 state in VLAN 172 changed to forwarding
2002 Jan 16 03:41:48 %SPANTREE-6-PORTFWD: Port 8/17 state in VLAN 172 changed to forwarding
2002 Jan 16 03:44:27 %PAGP-5-PORTFROMSTP:Port 8/17 left bridge port 8/17
2002 Jan 16 03:44:43 %PAGP-5-PORTTOSTP:Port 8/17 joined bridge port 8/17
2002 Jan 16 03:44:44 %SPANTREE-6-PORTFWD: Port 8/17 state in VLAN 172 changed to forwarding

But these had no consistency over time as to what port or group of ports was experiencing this. Some interesting 'show tech' information:

udp:
  0 incomplete headers
  0 bad data length fields
  2 bad checksums
  20839 socket overflows
  108568195 no such ports
tcp:
  111664 completely duplicate packets (6407 bytes)
  29 keepalive timeouts

Ok, if you're still with me... The customer dictated that we REPLACE the switch, but of course Cisco did not go for that; we did a scheduled reboot of the switch instead, and all problems have cleared. Now the customer wants a bi-monthly reboot of this switch scheduled to prevent the problem from recurring. My questions are: Is there any technical reason these scheduled reboots would be a bad idea? (Politics dictate that logical reasons don't apply.) Does anyone know of a previously proven fix for this problem, with documentation, that could be used in discussions of whether these scheduled reboots are necessary?

Thank you all in advance for any help.

Larry Puckette
Network Analyst CCNA, MCP, LANCP
Temple Inland
[EMAIL PROTECTED]
512/434-1838

Message Posted at: http://www.groupstudy.com/form/read.php?f=7i=32463t=32449
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]
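A quick way to test Larry's observation that the flaps show no per-port pattern is to count the %PAGP-5-PORTFROMSTP events per port in the logging buffer. A minimal Python sketch, using a few of the log lines quoted above (the parsing is an illustration, not a Cisco tool):

```python
import re
from collections import Counter

# Sample CatOS logging-buffer lines taken from the excerpt in this thread.
LOG = """\
2002 Jan 16 02:15:44 %PAGP-5-PORTFROMSTP:Port 8/23 left bridge port 8/23
2002 Jan 16 02:15:49 %PAGP-5-PORTFROMSTP:Port 6/23 left bridge port 6/23
2002 Jan 16 03:41:28 %PAGP-5-PORTFROMSTP:Port 8/17 left bridge port 8/17
2002 Jan 16 03:41:29 %PAGP-5-PORTFROMSTP:Port 7/16 left bridge port 7/16
2002 Jan 16 03:44:27 %PAGP-5-PORTFROMSTP:Port 8/17 left bridge port 8/17
"""

# A "flap" here is a port leaving the bridge (PORTFROMSTP).
FLAP = re.compile(r"%PAGP-5-PORTFROMSTP:Port (\d+/\d+)")

def count_flaps(log: str) -> Counter:
    """Return a Counter mapping mod/port -> number of PORTFROMSTP events."""
    return Counter(m.group(1) for m in FLAP.finditer(log))

if __name__ == "__main__":
    for port, n in count_flaps(LOG).most_common():
        print(port, n)
```

Run against a full day's buffer, this would show at a glance whether the flaps cluster on one module or port, or are truly roaming as described.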
RE: 6509 roaming disconnects part2 [7:32449]
Larry, you haven't given us much, but maybe you don't have much. One thing that may help ease the symptoms is to turn on portfast on the ports the servers are connected to. When a port does flap, it won't take as long for it to begin forwarding again.

You didn't mention what type of cards the servers are using. Are these 100 Mb/s or gig cards, and who makes them? There are known issues with certain cards and certain drivers. Are you using the latest drivers downloaded from the vendor's website? If these are gig cards, are they fiber or copper? If copper, could you have bad or old cable, or maybe the cables are routed over something causing EMI?

What about the OS on the Cat? Is it the latest available (it's up to 7.x now)? Is flow control turned on or off? You can set this separately for transmit and receive.

Did you try moving the server(s) to a different port on the switch? Did you get the same results? Is it possible to move the server(s) to a different blade in the Cat? What about to a different switch?

Your logs indicate the port is going up and down, and that Spanning Tree is doing its job, and not much else. As you can see, when troubleshooting issues on the list we need more info. This is just a small list to check, but maybe it will be helpful.

Rik

-----Original Message-----
From: Puckette, Larry (TIFPC) [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 18, 2002 10:10 AM
To: [EMAIL PROTECTED]
Subject: 6509 roaming disconnects part2 [7:32449]
<original message snipped>

Message Posted at: http://www.groupstudy.com/form/read.php?f=7i=32472t=32449
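For reference, Rik's portfast suggestion would be applied on CatOS along these lines. The port number is a placeholder for an actual server port; portfast should only go on ports connected to a single host, never on inter-switch links:

```
Console> (enable) set spantree portfast 8/23 enable
```

With portfast enabled, a recovering link skips the listening and learning states and begins forwarding immediately instead of waiting roughly 30 seconds.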
RE: 6509 roaming disconnects part2 [7:32449]
Agreed, Rik. In an attempt at brevity I left out some things, some of which I shouldn't have, like:

- These are all copper and set to 100/full.
- The drivers on the NICs were updated the last time we addressed this problem, but the NICs are not all the same; most are 3Com, though not standardized.
- Portfast has now been turned on for JUST ONE of these ports, mostly because of politics, so that when the issue comes back we can use it as evidence of whether it helps.
- The powers that be are software people and won't let us turn on flow control.
- The link drops were happening on all Ethernet modules, but not on all ports of a module at the same time; only some ports on different modules at the same time.
- This is a large data center; the wiring has been carefully routed, and EMI was considered during the electrical installation.
- The CatOS is already in the 7.x range but not the latest; that will be addressed soon.

I do appreciate your demeanor in pointing those out, and the attempt to help.

Larry Puckette
Network Analyst CCNA, MCP, LANCP
Temple Inland
[EMAIL PROTECTED]
512/434-1838

-----Original Message-----
From: Rik Guyler [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 18, 2002 10:16 AM
To: [EMAIL PROTECTED]
Subject: RE: 6509 roaming disconnects part2 [7:32449]
<quoted messages snipped>
RE: 6509 roaming disconnects part2 [7:32449]
In Cisco LAN Switching by Clark and Hamilton, pages 262-3 and 271-3, see the discussion of PortFast and of disabling Port Aggregation Protocol. On CCO, look for the command set port host, which changes several parameters in one shot: it sets channel mode to off, enables spanning-tree portfast, and sets trunk mode to off. Only an end-station port can accept this configuration. That should eliminate your logging messages, and it should speed reconnection in the case of a disconnect. You have already indicated that speed and duplex are hard coded on the switch and (I hope) on the NICs as well. I cannot comment on the reason for the initial disconnect. Sorry about the politics.

-----Original Message-----
From: Puckette, Larry (TIFPC) [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 18, 2002 9:10 AM
To: [EMAIL PROTECTED]
Subject: 6509 roaming disconnects part2 [7:32449]
<original message snipped>

Message Posted at: http://www.groupstudy.com/form/read.php?f=7i=32529t=32449
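On CatOS, the set port host macro Daniel describes would be applied per access port or port range, something like this (the range shown is a placeholder for the actual server ports):

```
Console> (enable) set port host 8/17
```

This has the same effect as issuing the channel, portfast, and trunk commands individually, which is why it is a convenient one-shot fix for server-facing ports.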
RE: 6509 roaming disconnects part2 [7:32449]
You should also look at set option debounce and set port debounce. These commands were added to deal with NIC vendors (3Com) who were straying from the IEEE Ethernet standards. Basically, electrical signals from the NIC would go link up/down/up/down, and the switch would see it as the card going up and down (silly Cisco!!). Debounce tweaks the tolerances for these NICs so Cisco will once again play nice with 3Com.

As an additional note to my 3Com bashing: two customers recently purchased hundreds of new PCs (manufacturer name withheld) that came with built-in 3Com NICs. Not a single PC will auto-negotiate properly. The cards all go to 100-half and the switch to 100-full, and when the switch is forced to 100-full the PCs still go 100-half. One customer was replacing Compaqs with Intel cards that auto-negotiated correctly 95% of the time. Will 3Com go bankrupt within 12 months?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Daniel Cotts
Sent: Friday, January 18, 2002 3:08 PM
To: [EMAIL PROTECTED]
Subject: RE: 6509 roaming disconnects part2 [7:32449]
<quoted messages snipped>
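For completeness, the debounce commands discussed above, together with the hard-coded speed/duplex settings mentioned earlier in the thread, would look roughly like this on CatOS. Port 8/17 is just an example taken from the logs, and availability of the debounce commands depends on the line card and CatOS release:

```
Console> (enable) set option debounce enable
Console> (enable) set port debounce 8/17 enable
Console> (enable) set port speed 8/17 100
Console> (enable) set port duplex 8/17 full
```

Note that hard-coding speed and duplex on the switch side only works if the NIC side is hard-coded to match; a forced switch port facing an auto-negotiating NIC produces exactly the 100-full/100-half mismatch described above.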