On 2020-05-28 11:33 AM, Alvin Starr via talk wrote:
If you're doing the cabling, then the cost of the termination and testing
equipment can become a stumbling block.
There is also the cable prep and cleaning that is needed with optical
fibre, which is not part of CATx.
When running fibre, patch cords are normally used. I've run in some
that were 35 m, so it's just a matter of run and plug in, though
cleaning is always a good idea. The outside plant guys at Rogers and
Bell would be splicing, though. You see them hiding in their trucks or
tents when they do that.
Try this consideration: cost per unit of bandwidth. Fibre is much
cheaper than copper at the higher bit rates, and of course at some point
you are past the point where copper is even usable, for example the
100 Mb and higher rates I mentioned.
I made an assumption from your comment about the cable being along the
LRT right of way.
But that being said, all cable along a single right of way is in
essence a single point of failure (think backhoe).
Multiple fibre bundles do not make that problem go away.
On the LRT, the conduits are buried in the concrete, so the backhoe guy
would really have to be determined to cut the fibre. ;-)
Still, Ethernet is not real-time.
There are a lot of things that can be done to make it more amenable to
human-scale real-time, but hard real-time operation is just not part of
the mix.
Once you hit a switch it gets worse.
There may be some switches out there with deterministic queuing, but I
don't know of them.
And neither is token ring. When you configure a managed switch, at
least the better ones, you can choose from different queuing methods,
such as round robin, priority ports, how much data can pass in each
cycle, and more. All you get with token ring is a bounded maximum wait time.
Well, you can do better than that with a managed switch. Also, what do
you mean by "worse"? If you have a single switch that connects all the
devices, then you can say that as soon as a frame comes in on port X, it
gets handled ahead of any other. Or you could put it into higher
priority queues, etc. You can't get much more deterministic than that.
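The two disciplines mentioned above can be sketched in a few lines. This is my own toy illustration, not any vendor's firmware; the `quantum` parameter stands in for the "how much data can pass in each cycle" knob:

```python
from collections import deque

# Frames are (name, size_in_bytes); queues[0] is the highest priority.

def strict_priority(queues):
    """Always drain the highest-priority non-empty queue first."""
    order = []
    for q in queues:
        while q:
            order.append(q.popleft()[0])
    return order

def round_robin(queues, quantum):
    """Visit queues in turn; each may send up to `quantum` bytes per cycle."""
    order = []
    while any(queues):
        for q in queues:
            budget = quantum
            while q and q[0][1] <= budget:
                name, size = q.popleft()
                budget -= size
                order.append(name)
    return order

hi = deque([("h1", 100), ("h2", 100)])
lo = deque([("l1", 100), ("l2", 100)])
print(strict_priority([hi, lo]))  # ['h1', 'h2', 'l1', 'l2']
```

With `round_robin` and a 100-byte quantum the same traffic comes out interleaved (`h1, l1, h2, l2`), which is why the choice of discipline matters for determinism.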
I still have some thinnet hardware in my junk pile.
And I have a 10 Mb hub. But how long has it been since it was last used?
As for "cheapness", a lot of that depends on the target market. The
stuff you might buy at Sayal or Canada Computers is not likely to find
its way into a data centre or telecom office (I've worked in both).
ATM was designed for telecom carriers and is not really suitable for LAN
use. With the cells being so small, there was a lot of header overhead,
and the cell size itself was a compromise between the different sizes
wanted in different parts of the world.
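The cell overhead is easy to put in numbers: a 53-byte ATM cell carries a 5-byte header and 48 bytes of payload, so the header alone eats about 9.4% of the line rate (my arithmetic, before counting any adaptation-layer padding):

```python
# ATM fixed cell format: 53 bytes total, 5-byte header, 48-byte payload.
CELL, HEADER = 53, 5
PAYLOAD = CELL - HEADER  # 48 bytes

overhead = HEADER / CELL  # fraction of line rate spent on headers
print(f"{overhead:.1%}")  # 9.4%
```

By comparison, an Ethernet frame amortizes its header over up to 1500 bytes of payload, which is part of why ATM never made sense as a LAN technology.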
Avionics is a whole different area. The Collins computers I mentioned
were designed for use on Navy ships and were built a whole lot
differently from the other systems I worked on. For example, they were
water cooled, and they were built to military spec. Since you were in
avionics, you are probably quite familiar with Collins.
---
Post to this mailing list talk@gtalug.org
Unsubscribe from this mailing list https://gtalug.org/mailman/listinfo/talk