Re: [Starlink] Why ISLs are difficult...

2022-09-01 Thread Dave Taht via Starlink
On Thu, Sep 1, 2022 at 3:00 PM Michael Richardson via Starlink
 wrote:
>
>
> Mike Puchol via Starlink  wrote:
> > In terms of ground station coverage, once the entire ISL “mesh” is
> > complete, you could find a satellite with gateway coverage somewhere,
> > all the time. The path will change every few minutes, as the satellite
> > linking to the gateway changes, but it’s in the order of minutes, not
> > seconds.
>
> And, it's clockwork as you've said, so it's not like our traditional routing
> protocols where failures are due to problems or errors.
>
> To my mind, I'd want to have a fourth laser so that one could always be
> making before breaking, but if it's fast enough then one can probably buffer
> the packets while the lasers move.  That's an evolution to my mind.

One of the big flaws in early wifi, persisting to today, was having
only 3 non-overlapping channels in the 2.4 GHz band.

https://mathworld.wolfram.com/Four-ColorTheorem.html

I used to fiddle with graph theory a lot; one of my favorites for
describing "optimum" resilient connectivity was the Blanuša snark:

https://en.wikipedia.org/wiki/Blanu%C5%A1a_snarks

which became the logo of the cerowrt project, and a subset of which
became the logo of the babel routing protocol.

> That creates spikes in latency though, and it would be wise to keep the
> maximum apparent bandwidth to some 95% (or something) of max in order to
> always have enough bandwidth to catch up. (By Theory of Constraints)
>
> >  Turning this into a global network in the shell: Even harder.
>
> > Agreed! If you equate this to an OSPF network with 4400 nodes, which
> > are reconfiguring themselves every few minutes, the task is not
> > trivial.
>
> OSPF is just not what I'd use :-)
> RPL (RFC6550) is probably better, but you'd still need a few tweaks since the
> parent selection is going to be predictable.

Tee-hee. I would use something derived from a DV protocol in general,
and, as I said, would use an L2 more amenable to movement, which is
something that ipv6 gloriously failed at. L3 identifiers would stay
fixed; the attachment points underneath change.

The effects of what seems to be a starlink link-state protocol (run
every 15s) are rather noticeable at the scale they are currently at,
and far more dynamic convergence is possible with DV. Any centralized
approach fails at distance.

I have not been paying attention: to what extent are any RTT-sensitive
metrics being used in production?

https://datatracker.ietf.org/doc/html/draft-ietf-babel-rtt-extension-00
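
(The draft's core idea: measure RTT to each neighbour via timestamps,
smooth it, and add a penalty to the link cost that is zero below some
rtt-min and saturates at rtt-max, so routes don't flap on every RTT
wobble. A toy sketch in Python - the constants are illustrative, not
the draft's defaults:)

RTT_MIN = 0.010       # below this, RTT adds no cost (seconds)
RTT_MAX = 0.120       # above this, the full penalty applies
MAX_RTT_PENALTY = 96  # metric units added at RTT_MAX

def smoothed_rtt(old, sample, alpha=0.84):
    """Exponentially weighted moving average of RTT samples."""
    return sample if old is None else alpha * old + (1 - alpha) * sample

def link_cost(base_cost, srtt):
    """Base DV cost plus a penalty growing linearly between RTT_MIN and RTT_MAX."""
    if srtt <= RTT_MIN:
        return base_cost
    frac = min(1.0, (srtt - RTT_MIN) / (RTT_MAX - RTT_MIN))
    return base_cost + int(MAX_RTT_PENALTY * frac)

# e.g. a 40 ms smoothed RTT on a base cost of 96:
# link_cost(96, 0.040) -> 96 + 26 = 122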

> > automatically adjust. Any calculation as to what links are established,
> > are active, etc. can be done on the ground and sent to the satellites
> > for execution, much in the same way that RF resource scheduling is done
> > centrally in 15 second blocks.
>
> SDN is great, but a self-healing control plane loop is better (as Rogers
> learnt on July 8 in Canada).
>
>
> --
> Michael Richardson. o O ( IPv6 IøT consulting )
>Sandelman Software Works Inc, Ottawa and Worldwide
>
>
>
>
> ___
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink



-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Why ISLs are difficult...

2022-09-01 Thread Michael Richardson via Starlink

Mike Puchol via Starlink  wrote:
> In terms of ground station coverage, once the entire ISL “mesh” is
> complete, you could find a satellite with gateway coverage somewhere,
> all the time. The path will change every few minutes, as the satellite
> linking to the gateway changes, but it’s in the order of minutes, not
> seconds.

And, it's clockwork as you've said, so it's not like our traditional routing
protocols where failures are due to problems or errors.

To my mind, I'd want to have a fourth laser so that one could always be
making before breaking, but if it's fast enough then one can probably buffer
the packets while the lasers move.  That's an evolution to my mind.

That creates spikes in latency though, and it would be wise to keep the
maximum apparent bandwidth to some 95% (or something) of max in order to
always have enough bandwidth to catch up. (By Theory of Constraints)
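
(Putting rough numbers on that: a back-of-envelope sketch with the link
rate R and the re-pointing gap T as free parameters,

  $B = R\,T, \qquad t_{\mathrm{catchup}} = \frac{B}{R - 0.95\,R} = 20\,T$

so a 100 ms gap means buffering 0.1 s worth of line rate, and then some
2 s of running on the 5% headroom to drain it.)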

>  Turning this into a global network in the shell: Even harder.

> Agreed! If you equate this to an OSPF network with 4400 nodes, which
> are reconfiguring themselves every few minutes, the task is not
> trivial.

OSPF is just not what I'd use :-)
RPL (RFC6550) is probably better, but you'd still need a few tweaks since the
parent selection is going to be predictable.

> automatically adjust. Any calculation as to what links are established,
> are active, etc. can be done on the ground and sent to the satellites
> for execution, much in the same way that RF resource scheduling is done
> centrally in 15 second blocks.

SDN is great, but a self-healing control plane loop is better (as Rogers learnt 
on July 8 in Canada).


--
Michael Richardson. o O ( IPv6 IøT consulting )
   Sandelman Software Works Inc, Ottawa and Worldwide






___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Tom Evslin via Starlink
I think manufacturing orbital datacenters in space is absolutely necessary. 
Then, at no point, is a heavy set of frames needed to hold the weight of the 
boards. Producing the chips in a true-vacuum, zero-gravity environment may also 
allow radically different designs.

-Original Message-
From: Starlink  On Behalf Of Michael 
Richardson via Starlink
Sent: Thursday, September 1, 2022 3:54 PM
To: Ulrich Speidel ; starlink@lists.bufferbloat.net
Subject: Re: [Starlink] Starlink "beam spread"


Is there any orbit other than GEO that would make CDNs in space useful?

While current Starlink satellites don't have lasers that could reach up to higher orbits, 
maybe a subsequent generation could have such a thing.  Maybe there could even 
be a standard which OneWeb/StarLink/??? could all agree to, and CDN satellites 
(with bigger solar panels and longer service lifetimes) could be built to.

Having said all of this, it sure seems that the better place today for CDNs
is within satellite-serviced villages. Some may even remember the Internet
Cache Protocol (ICP), which never really got anywhere (RFC 2186).

There are perhaps energy arguments for moving datacenters to space, but stuff 
just isn't reliable enough, and I'm sure it's a fail until you manufacture in 
space.





___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Michael Richardson via Starlink

David Fernández via Starlink wrote:
> If Starlink satellites are processing IP packets, shouldn't they be
> shown in traceroutes? They are not shown now, AFAIK.

Sadly, they aren't doing IP processing.
I don't think that they will ever decide to, for NIH reasons.
I suspect that their SDN hardware probably can, and I think that SRv6 is
probably ideal for their use, but...



___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Michael Richardson via Starlink

Is there any orbit other than GEO that would make CDNs in space useful?

While current Starlink satellites don't have lasers that could reach up to higher
orbits, maybe a subsequent generation could have such a thing.  Maybe there
could even be a standard which OneWeb/StarLink/??? could all agree to, and
CDN satellites (with bigger solar panels and longer service lifetimes) could
be built to.

Having said all of this, it sure seems that the better place today for CDNs
is within satellite-serviced villages. Some may even remember the Internet
Cache Protocol (ICP), which never really got anywhere (RFC 2186).

There are perhaps energy arguments for moving datacenters to space, but stuff
just isn't reliable enough, and I'm sure it's a fail until you manufacture in
space.






___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] starlink upload behavior

2022-09-01 Thread Dave Taht via Starlink
On Thu, Sep 1, 2022 at 12:24 PM Luis A. Cornejo
 wrote:
>
> Dave,
>
> Did you leave your ingress at 0?

I could never find an operating point that was suitable.

> How about ack filtering and overhead on egress? Do you mind sharing your tc 
> qdisc commands?

ack-filter works better than ack-filter-aggressive, though I switch
back and forth.

> Have you run the auto rate script?

I used to - but my own usage is principally bound by annoyance at
upload speeds, so I reverted to just using ack-filtering and a 6Mbit
rate on uploads for the sqm-scripts. See attached.

For new subscribers to this list, the genesis of it all was the idea
of doing a cerowrt II project targeted at making all of openwrt easily
capable of interoperating with starlink's products, and in at least a
few ways, superior. Despite pitching this in multiple directions, no
funding arrived, and Starlink stopped communicating with us at all...

https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit#heading=h.qev8j7cst4lc

and instead we've been poking at various subprojects in loose
formation, off of that list. Notably, the cake-autorate work hit 1.0
with some decent solutions for LTE/5G that I hope more people are
using; some details on that here:
https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379

Given how the starlink network is (d)evolving, and my continued,
fervent hope they are upgrading their dishys and downlinks to manage
congestion better, I went back to paid work and polishing up the
openwrt 22.03 release. (I think we fixed the last major wifi bug a
week ago).

LibreQos is experiencing a small explosion of popularity, and I hope
more small ISPs are leaping on that. When we started developing cake
in 2015, XDP and eBPF didn't exist, and HTB was too lock-contended to
scale much past a gigabit. We recently got LibreQos past 10Gbit and
5000 60/6 Mbit users on really cheap hardware. Next stop, 20Gbit!

https://github.com/rchac/LibreQoS

I picked up a pretty cool FPGA the other day, as the stop after that
is 100Gbit. ~6ns per packet is difficult.
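
(For scale, that figure is just minimum-size frames at line rate,

  $t_{\mathrm{pkt}} = \frac{(64 + 20)\,\mathrm{B} \times 8\,\mathrm{bit/B}}{100\,\mathrm{Gbit/s}} \approx 6.7\,\mathrm{ns}$

counting a 64-byte minimum Ethernet frame plus 20 bytes of preamble
and inter-frame gap.)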

A couple days worth of dishy cake stats:

tc -s qdisc show dev wlp3s0
qdisc cake 818c: root refcnt 2 bandwidth 6Mbit diffserv3 triple-isolate
 nonat nowash ack-filter split-gso rtt 100.0ms raw overhead 0
 Sent 1090433207 bytes 3498702 pkt (dropped 240608, overlimits 2592116 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 68Kb of 4Mb
 capacity estimate: 6Mbit
 min/max network layer size:           42 /  1514
 min/max overhead-adjusted size:       42 /  1514
 average network hdr offset:           14

                 Bulk  Best Effort      Voice
  thresh      375Kbit        6Mbit   1500Kbit
  target       48.4ms        5.0ms     12.1ms
  interval    143.4ms      100.0ms    107.1ms
  pk_delay        0us        3.2ms      384us
  av_delay        0us        415us       15us
  sp_delay        0us          3us        2us
  backlog          0b           0b         0b
  pkts              0      3732804       6506
  bytes             0   1105842362     295688
  way_inds          0       254569          0
  way_miss          0        32630         84
  way_cols          0            0          0
  drops             0           46          0
  marks             0          160          0
  ack_drop          0       240562          0
  sp_flows          0            2          0
  bk_flows          0            1          0
  un_flows          0            0          0
  max_len           0        13446        329
  quantum         300          300        300

For contrast, here are two days' worth of stats for a 60Mbit customer of
this wisp. Overall we see about a 1% drop rate...

qdisc cake c538: parent 7:1218 bandwidth unlimited diffserv4 triple-isolate
 nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 1068282731 bytes 6570868 pkt (dropped 187845, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 292608b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           60 /  1494
 min/max overhead-adjusted size:       60 /  1494
 average network hdr offset:           14

                 Bulk  Best Effort      Video      Voice
  thresh         0bit         0bit       0bit       0bit
  target          5ms          5ms        5ms        5ms
  interval      100ms        100ms      100ms      100ms
  pk_delay     1.08ms        196us      130us       44us
  av_delay       26us          4us        3us        2us
  sp_delay        0us          0us        1us        0us
  backlog          0b           0b         0b         0b
  pkts            258      6753105        320       5030
  bytes         15480   1081835162      24846     860791
  way_inds          0      4997640          0          0
  way_miss         90        63092        229        309
  way_cols          0            0

Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Dave Taht via Starlink
In general I have been assuming that the starlink mac layer was
"switched", didn't resemble ethernet much, and can run any protocol on
top. There are a lot of possible conceptions of this, but my
principal thought is that it looks like this:

dst, src, otherminimalstuff, encrypted payload..

The link to the sat itself is public-key encrypted, as is each
packet payload between dishy and ground.
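
(To make that concrete, a minimal sketch of such a frame - every field
name and size here is invented for illustration, nothing Starlink has
published:)

import struct

# Hypothetical switched-MAC frame: flat addresses plus an opaque,
# already-encrypted payload. Field sizes are guesses.
FRAME_HDR = struct.Struct("!6s6sH")   # dst, src, "otherminimalstuff" tag

def build_frame(dst: bytes, src: bytes, tag: int, ciphertext: bytes) -> bytes:
    """Header in the clear for switching; payload stays encrypted end to end."""
    return FRAME_HDR.pack(dst, src, tag) + ciphertext

def parse_frame(frame: bytes):
    dst, src, tag = FRAME_HDR.unpack_from(frame)
    return dst, src, tag, frame[FRAME_HDR.size:]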
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] starlink upload behavior

2022-09-01 Thread Luis A. Cornejo via Starlink
Dave,

Did you leave your ingress at 0?

How about ack filtering and overhead on egress? Do you mind sharing your tc
qdisc commands?

Have you run the auto rate script?

-Luis

On Wed, Aug 31, 2022 at 10:22 AM Dave Taht via Starlink <
starlink@lists.bufferbloat.net> wrote:

> I have just been leaving cake at 6Mbit on the upload and that fq and
> control make it much
> more tolerable for my mix of videoconferencing and uploads. That said,
> I hadn't looked
> at the native performance in a while which is markedly better than it
> was a few months ago, at least for this quick test run.
>
> If anyone can make sense of these... for the first 1/3 of the trace
> throughput is low and RTTs are *NICE*,
> the second looks like a sat switch - still nice... then another sat
> switch where I get full upload throughput
> and the latency grows significantly.
>
> If anyone is into packet captures:
>
> http://www.taht.net/~d/starlink-1-auto.cap # starlink through their wifi
> http://www.taht.net/~d/starlink-1-cake6.cap # starlink through their
> wifi, with cake bandwidth 6mbit on mine
>
> Graphs produced with xplot.
> ___
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Lin Han via Starlink
Hi, Ulrich,

I agree with you, even though I don't know if StarLink satellites will process IP 
packets. IETF may work on this area soon. IP is mandatory if some IP-based 
features, such as DNS, CDN, etc., are moved to the satellite. Also, from a 3GPP 
perspective, future LEO satellite networks should be IP-based. Then the NTN 
integration with 5G and the Internet can be done.
Check out my slides: 
https://datatracker.ietf.org/doc/slides-114-hotrfc-sessa-the-leo-satellite-networking-lin-han/
We will have a side meeting to discuss this at the next IETF 115 (London).

BRs.

Lin



From: Starlink  On Behalf Of Ulrich 
Speidel via Starlink
Sent: Wednesday, August 31, 2022 2:46 PM
To: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] Starlink "beam spread"


I work on the assumption that Starlink satellites are, or at least will 
eventually be, processing IP packets. For inter-satellite routing it's more or 
less a must-have unless you have some other packet switching protocol layered 
in between.
On 1/09/2022 2:51 am, David Fernández via Starlink wrote:
"DNS on Starlink satellites: Good idea, lightweight, and I'd suspect
maybe already in operation?"

Are the satellites processing IP packets? Are the ISLs even in
operation? I have been told Starlink satellites are transparent.


> Date: Thu, 1 Sep 2022 01:41:07 +1200
> From: Ulrich Speidel 
> 
> To: David Lang 
> Cc: Sebastian Moeller , Ulrich 
> Speidel via Starlink
> 
> Subject: Re: [Starlink] Starlink "beam spread"
> Message-ID: 
> <56e56b0f-07bd-fe0c-9434-2663ae9d4...@auckland.ac.nz>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Um, yes, but I think we're mixing a few things up here (trying to bundle
> responses here, so that's not just to you, David).
>
> In lieu of a reliable Starlink link budget, I'm going by this one:
>
> https://www.linkedin.com/pulse/quick-analysis-starlink-link-budget-potential-emf-david-witkowski/
>
> Parameters here are a little outdated but the critical one is the EIRP
> at the transmitter of up to ~97 dBm. Say we're looking at a 30 GHz Ka
> band signal over a 600 km path, which is more reflective of the current
> constellation. Then Friis propagation gives us a path loss of about 178
> dB, and if we pretend for a moment that Dishy is actually a 60 cm
> diameter parabolic dish, we're looking at around 45 dBi receive antenna
> gain. Probably a little less as Dishy isn't actually a dish.
>
> Then that gives us 97 dBm - 178 dB + 45 dB = -36 dBm at the ground
> receiver. Now I'm assuming here that this is for ALL user downlink beams
> from the satellite combined. What we don't really know is how many
> parallel signals a satellite multiplexes into these, but assuming at the
> moment a receive frontend bandwidth of about 100 MHz, noise power at the
> receiver should be around 38 pW or -74 dBm. That leaves Starlink around
> 38 dB of SNR to play with. Shannon lets us send up to just over 1.25
> Gb/s in that kind of channel, but then again that's just the Shannon
> limit, and in practice, we'll be looking at a wee bit less.
>
> That SNR also gives us an indication as to the signal separation Dishy
> needs to achieve from the beams from another satellite in order for that
> other satellite to re-use the same frequency. Note that this is
> significantly more than just the 3 dB that the 3 dB width of a beam
> gives us. The 3 dB width is what is commonly quoted as "beam width", and
> that's where you get those nice narrow angles. But that's just the width
> at which the beam drops to half its EIRP, not the width at which it can
> no longer interfere. For that, you need the 38 dB width - or thereabouts
> - if you can get it, and this will be significantly more than the 1.2
> degrees or so of 3dB beam width.
>
> But even if you worked with 1.2 degrees at a distance of 600 km and you
> assumed that sort of beam width at the satellite, it still gives you an
> >12 km radius on the ground within which you cannot reuse the downlink
> frequency from the same satellite. That's orders of magnitude more than
> the re-use spatial separation you can achieve in ground-based cellular
> networks. Note that the 0.1 deg beam "precision" is irrelevant here -
> that just tells me the increments in which they can point the beam, but
> not how wide it is and how intensity falls off with angle, or how bad
> the side lobes are.
>
> Whether 
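
(A quick numeric check of the budget quoted above, as a sketch: the
97 dBm EIRP, 600 km path, 30 GHz carrier, 60 cm dish, 100 MHz bandwidth
and 38 dB SNR are the quoted figures; the 50% aperture efficiency is my
assumption:)

import math

c, f, d = 3e8, 30e9, 600e3           # speed of light, carrier, path length
lam = c / f                          # 1 cm wavelength

fspl_db = 20 * math.log10(4 * math.pi * d / lam)           # Friis: ~177.5 dB

D, eta = 0.6, 0.5                    # 60 cm dish, assumed 50% efficiency
gain_db = 10 * math.log10(eta * (math.pi * D / lam) ** 2)  # ~42.5 dBi

rx_dbm = 97 - fspl_db + gain_db      # ~-38 dBm, near the quoted -36 dBm

B, snr_db = 100e6, 38
shannon = B * math.log2(1 + 10 ** (snr_db / 10))           # ~1.26 Gb/s

print(f"{fspl_db:.1f} dB loss, {gain_db:.1f} dBi gain, "
      f"{rx_dbm:.1f} dBm rx, {shannon/1e9:.2f} Gb/s Shannon")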

Re: [Starlink] Starlink Digest, Vol 18, Issue 1

2022-09-01 Thread David P. Reed via Starlink

Hi Sebastian -
regarding slow start and video, I was just thinking, and I came up with a 
possible answer to the slow start issue that is compatible with the Internet 
architecture. Read the whole piece below; I hope it makes sense.
Since the "bottleneck link" in Starlink will almost certainly be the downlink 
shared among multiple dishys, which to me is like the CMTS in a cable modem 
system, I think the reasoning is more general than Starlink.  Oddly, :-) the 
idea has to do with AQM and TCP flow control. Our common interest.
 
It's a concept, not a full implementation. It doesn't require massive 
rethinking of the whole Internet, which means that, unlike the QUIC project, it 
can be experimented with in just a part of the Internet, but it would require a 
video-server partner and a bottleneck-link operator partner.
 
Please steal and improve if you like it.
 
- David
 
> Date: Thu, 1 Sep 2022 09:58:16 +0200

> From: Sebastian Moeller 
> To: Ulrich Speidel 
> Cc: David Lang , Ulrich Speidel via Starlink
> 
> Subject: Re: [Starlink] Starlink "beam spread"
 

> I am prepared to eat crow on this in the future, but I am highly skeptical 
> about
> CDNs in space (in spite of it being a cool project from the technological 
> side).
>
You and me both...
> 
> *) As it looks, slow start is getting a bad rep from multiple sides, but I
> see no better alternative out there that solves the challenge slow-start
> tackles in a better way, namely gradual ramping and probing of sending
> rates/congestion windows to avoid collapse; this in turn means that short
> flows will never reach capacity, the solution to which might well be: use
> longer flows then...


I'm old enough to remember how the TCP slow start problem was first dealt with 
in HTTP pretty well (good enough for the time, and focused on the end-to-end 
picture of the WWW architecture, REST).
My friend Jim Gettys was involved, as were a couple of others - I think Jeff 
Mogul, too.
 
The answer was HTTP 1.1. That is, using a single TCP flow for all HTTP requests 
to a server. That would fatten the flow enough to get through slow start more 
quickly (aggregating many flows), and hold the flow open, assuming (almost 
always correctly) that between the user's clicks and the multiple parts of 
content, this would keep the flow out of slow start. (HTTP 1.1's multiplexing 
of requests onto a single TCP flow had other advantages, too - OS Internet 
stacks couldn't handle many simultaneous HTTP connections, adding more delay 
when a web page was sourced from many servers.)
 
As a "pretty good" solution, Akamai also solved the bulk of the Web's 
problem - NOT by caching at all, but by letting ISPs *purchase* server capacity 
closer to the bulk of their consumers, and letting that *purchased* capacity 
pay for Akamai's costs. This moved the function of what to cache "out of the 
network" to the edge, and didn't require ANY changes to HTTP (in fact, it 
tended to concentrate more traffic into a single HTTP 1.1 flow, to a shared 
Akamai server).
 
You may not see where I'm going with this yet - but actually it's an End-to-End 
Argument against putting "function into the network" when the function is about 
the end points. With HTTP 1.1, a small change *at the endpoints of the Web* was 
easier and simpler than some hypothetical "intelligence" in the network that 
would obviate "slow start". With Akamai, a small change that allowed heavily 
trafficked web services to "buy server capacity" in Akamai's server fabric - 
which was "at the endpoints of the Internet" itself, just distributed around the 
edges of the Internet - was good enough.
 
So, let's look at video traffic. Well, first of all, any response-time needs it 
has are mostly fixed by buffering at the receiver, not needing any fixes "in the 
network" - you want to receive video, allocate a big buffer and fill it at the 
start. (This is where slow start gets in the way, perhaps.)
I'm pretty sure *most* video watching doesn't go over RTP anymore - it goes 
over TCP now. [Dropped frames are not interpolated, but instead retransmitted, 
because the buffer is sufficient in all video watching gear at the viewing end. 
RTP is useful for two-way videoconferencing, but that's because conferencing 
has a ~100 msec RTT requirement, unless you are conferencing with Mars or the 
Moon.]
 
So "slow start" of some sort is the current solution that deals with the 
congestion burst that would disrupt others early in a video watching session.
 
But I think you are right - there are better ways to deal with the shared 
bottleneck link that a new video session might encounter. Something like slow 
start is needed ONLY if the bottleneck is *shared* among multiple users.
 
The problem is the potential of "starvation" of other flows at the bottleneck 
link along the video's path.
 
And this is a problem we now have one partial solution for, but it's not 
deployed where it needs to be. That is, in a word - "fair" queuing plus 
aggressive packet 

Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Darrell Budic via Starlink
If I recall correctly, Starlink has told us they encrypt from ground to ground. 
So whatever they do in the sat constellation, it’s an underlay network and our 
IP traffic never sees it. This doesn’t preclude ISL, but it does work against 
the concept of a CDN in space. Seems like it’s easier for them to get on more 
IXs and even host cache servers at the ground stations if they have enough data 
center space at them. Of course, there’s nothing preventing them from putting 
up a satellite that gets treated as an endpoint and does decryption, but then 
it’s just another end point on their underlay.

> On Sep 1, 2022, at 10:19 AM, David Fernández via Starlink 
>  wrote:
> 
> If Starlink satellites are processing IP packets, shouldn't they be
> shown in traceroutes? They are not shown now, AFAIK.
> 
> A transparent, geography-based routing could be possible, with a
> signal pass-through approach to the next satellite on a path
> connecting to a GW via ISL, if the satellite receiving traffic from a
> dishy does not have any GW in direct sight.
> 
>> Date: Thu, 1 Sep 2022 09:46:20 +1200
>> From: Ulrich Speidel 
>> To: starlink@lists.bufferbloat.net
>> Subject: Re: [Starlink] Starlink "beam spread"
>> Message-ID: <7a357510-2d61-dd4a-a59f-3d7d4bd37...@auckland.ac.nz>
>> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>> 
>> I work on the assumption that Starlink satellites are, or at least will
>> eventually be, processing IP packets. For inter-satellite routing it's
>> more or less a must-have unless you have some other packet switching
>> protocol layered in between.
>> 
>> On 1/09/2022 2:51 am, David Fernández via Starlink wrote:
>>> "DNS on Starlink satellites: Good idea, lightweight, and I'd suspect
>>> maybe already in operation?"
>>> 
>>> Are the satellites processing IP packets? Are the ISLs even in
>>> operation? I have been told Starlink satellites are transparent.
>>> 
>>> 
 Date: Thu, 1 Sep 2022 01:41:07 +1200
 From: Ulrich Speidel 
 To: David Lang 
 Cc: Sebastian Moeller , Ulrich Speidel via Starlink
 
 Subject: Re: [Starlink] Starlink "beam spread"
 Message-ID: <56e56b0f-07bd-fe0c-9434-2663ae9d4...@auckland.ac.nz>
 Content-Type: text/plain; charset=UTF-8; format=flowed
 
 Um, yes, but I think we're mixing a few things up here (trying to bundle
 responses here, so that's not just to you, David).
 
 In lieu of a reliable Starlink link budget, I'm going by this one:
 
 
>>> https://www.linkedin.com/pulse/quick-analysis-starlink-link-budget-potential-emf-david-witkowski/
>>> 
>>> 
 
 Parameters here are a little outdated but the critical one is the EIRP
 at the transmitter of up to ~97 dBm. Say we're looking at a 30 GHz Ka
 band signal over a 600 km path, which is more reflective of the current
 constellation. Then Friis propagation gives us a path loss of about 178
 dB, and if we pretend for a moment that Dishy is actually a 60 cm
 diameter parabolic dish, we're looking at around 45 dBi receive antenna
 gain. Probably a little less as Dishy isn't actually a dish.
 
 Then that gives us 97 dBm - 178 dB + 45 dB = -36 dBm at the ground
 receiver. Now I'm assuming here that this is for ALL user downlink beams
 from the satellite combined. What we don't really know is how many
 parallel signals a satellite multiplexes into these, but assuming at the
 moment a receive frontend bandwidth of about 100 MHz, noise power at the
 receiver should be around 38 pW or -74 dBm. That leaves Starlink around
 38 dB of SNR to play with. Shannon lets us send up to just over 1.25
 Gb/s in that kind of channel, but then again that's just the Shannon
 limit, and in practice, we'll be looking at a wee bit less.
 
 That SNR also gives us an indication as to the signal separation Dishy
 needs to achieve from the beams from another satellite in order for that
 other satellite to re-use the same frequency. Note that this is
 significantly more than just the 3 dB that the 3 dB width of a beam
 gives us. The 3 dB width is what is commonly quoted as "beam width", and
 that's where you get those nice narrow angles. But that's just the width
 at which the beam drops to half its EIRP, not the width at which it can
 no longer interfere. For that, you need the 38 dB width - or thereabouts
 - if you can get it, and this will be significantly more than the 1.2
 degrees or so of 3dB beam width.
 
 But even if you worked with 1.2 degrees at a distance of 600 km and you
 assumed that sort of beam width at the satellite, it still gives you an
> 12 km radius on the ground within which you cannot reuse the downlink
 frequency from the same satellite. That's orders of magnitude more than
 the re-use spatial separation you can achieve in 

Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread David Fernández via Starlink
If Starlink satellites are processing IP packets, shouldn't they be
shown in traceroutes? They are not shown now, AFAIK.

A transparent, geography-based routing could be possible, with a
signal pass-through approach to the next satellite on a path
connecting to a GW via ISL, if the satellite receiving traffic from a
dishy does not have any GW in direct sight.

> Date: Thu, 1 Sep 2022 09:46:20 +1200
> From: Ulrich Speidel 
> To: starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] Starlink "beam spread"
> Message-ID: <7a357510-2d61-dd4a-a59f-3d7d4bd37...@auckland.ac.nz>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
> I work on the assumption that Starlink satellites are, or at least will
> eventually be, processing IP packets. For inter-satellite routing it's
> more or less a must-have unless you have some other packet switching
> protocol layered in between.
>
> On 1/09/2022 2:51 am, David Fernández via Starlink wrote:
>> "DNS on Starlink satellites: Good idea, lightweight, and I'd suspect
>> maybe already in operation?"
>>
>> Are the satellites processing IP packets? Are the ISLs even in
>> operation? I have been told Starlink satellites are transparent.
>>
>>
>> > Date: Thu, 1 Sep 2022 01:41:07 +1200
>> > From: Ulrich Speidel 
>> > To: David Lang 
>> > Cc: Sebastian Moeller , Ulrich Speidel via Starlink
>> > 
>> > Subject: Re: [Starlink] Starlink "beam spread"
>> > Message-ID: <56e56b0f-07bd-fe0c-9434-2663ae9d4...@auckland.ac.nz>
>> > Content-Type: text/plain; charset=UTF-8; format=flowed
>> >
>> > Um, yes, but I think we're mixing a few things up here (trying to bundle
>> > responses here, so that's not just to you, David).
>> >
>> > In lieu of a reliable Starlink link budget, I'm going by this one:
>> >
>> >
>> https://www.linkedin.com/pulse/quick-analysis-starlink-link-budget-potential-emf-david-witkowski/
>>
>> 
>> >
>> > Parameters here are a little outdated but the critical one is the EIRP
>> > at the transmitter of up to ~97 dBm. Say we're looking at a 30 GHz Ka
>> > band signal over a 600 km path, which is more reflective of the current
>> > constellation. Then Friis propagation gives us a path loss of about 178
>> > dB, and if we pretend for a moment that Dishy is actually a 60 cm
>> > diameter parabolic dish, we're looking at around 45 dBi receive antenna
>> > gain. Probably a little less as Dishy isn't actually a dish.
>> >
>> > Then that gives us 97 dBm - 178 dB + 45 dB = -36 dBm at the ground
>> > receiver. Now I'm assuming here that this is for ALL user downlink beams
>> > from the satellite combined. What we don't really know is how many
>> > parallel signals a satellite multiplexes into these, but assuming at the
>> > moment a receive frontend bandwidth of about 100 MHz, noise power at the
>> > receiver should be around 38 pW or -74 dBm. That leaves Starlink around
>> > 38 dB of SNR to play with. Shannon lets us send up to just over 1.25
>> > Gb/s in that kind of channel, but then again that's just the Shannon
>> > limit, and in practice, we'll be looking at a wee bit less.
>> >
>> > That SNR also gives us an indication as to the signal separation Dishy
>> > needs to achieve from the beams from another satellite in order for that
>> > other satellite to re-use the same frequency. Note that this is
>> > significantly more than just the 3 dB that the 3 dB width of a beam
>> > gives us. The 3 dB width is what is commonly quoted as "beam width", and
>> > that's where you get those nice narrow angles. But that's just the width
>> > at which the beam drops to half its EIRP, not the width at which it can
>> > no longer interfere. For that, you need the 38 dB width - or thereabouts
>> > - if you can get it, and this will be significantly more than the 1.2
>> > degrees or so of 3dB beam width.
>> >
>> > But even if you worked with 1.2 degrees at a distance of 600 km and you
>> > assumed that sort of beam width at the satellite, it still gives you an
>> > >12 km radius on the ground within which you cannot reuse the downlink
>> > frequency from the same satellite. That's orders of magnitude more than
>> > the re-use spatial separation you can achieve in ground-based cellular
>> > networks. Note that the 0.1 deg beam "precision" is irrelevant here -
>> > that just tells me the increments in which they can point the beam, but
>> > not how wide it is and how intensity falls off with angle, or how bad
>> > the side lobes are.
>> >
>> > Whether you can re-use the same frequency from another satellite to the
>> > same ground area is a good question. We really don't know the beam
>> > patterns that we get from the birds and from the Dishys, and without
>> > these it's difficult to say how much angular separation a ground station
>> > needs between two satellites using the same frequency in order to
>> > receive one but not be interfered with by the other. Basically, there
>> > are just 

Re: [Starlink] Why ISLs are difficult...

2022-09-01 Thread Dave Taht via Starlink
Perhaps related, but, regardless, very interesting: optical switching
using mems mirrors, with a picture of the chip that does it:

https://cloud.google.com/blog/topics/systems/the-evolution-of-googles-jupiter-data-center-network

...

More about the crazy things google is doing leveraging this is in the
sigcomm2022 paper, here:
https://dl.acm.org/doi/10.1145/3544216.3544265
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


[Starlink] Why ISLs are difficult...

2022-09-01 Thread Ulrich Speidel via Starlink
As this seems to have branched out... There are a whole bag of issues 
with ISLs and routing, really, and again we know diddly squat about 
what Starlink actually intend to do.


My 5 cents worth:

- Linking two satellites that follow each other on the same orbit is the 
easiest exercise. I gather that Starlink have ticked that one off. It's 
probably not too useful on its own for most real scenarios though: 
ground stations move through orbital planes. Also, two arbitrary ground 
stations between which one would want to forward will probably not be 
connectable by a chain of satellites all in the same orbital plane.
- Linking two satellites that are in different but adjacent orbital 
planes is one notch up but probably not a lot harder if you master 
gimbal / mirror control. You have some relative movement, but most of 
the time it's slow. Low-hanging fruit if it hasn't already been picked.
- Linking two satellites in range of each other that satisfy some 
arbitrary criterion (minimum distance, desired direction): A bit harder.

- Turning this into a global network in the shell: Even harder.

Let me elaborate a bit on this.

Let's assume we have one or more gimbals that allow us to point our 
space laser(s) at other satellites in range. Or a mirror arrangement - 
doesn't matter.


One unknown that we have is what the receiver side of these links will 
look like. As we'll see in a moment, this is actually quite important.


There are in principle two options for the receiver:

1) A receiver with a wide angle lens that can receive laser signals from 
multiple other satellites at once. This is a pretty simple arrangement 
and may not even need moving parts.


2) A receiver that gets pointed back at the transmitting satellite, 
perhaps with a telescopic zoom lens. This adds a little weight and could 
be on the same gimbal as a laser, so we could communicate both ways 
between the satellites. Moreover, the zoom lens would be like antenna 
gain in a link budget, so would allow a higher data rate between the 
satellites and / or less power.


Now 2) seems clearly superior, right, if we can handle a few extra grams? 
Then we could give each satellite n TX/RX gimbals and could, say, get 
each of our satellites to connect to its n nearest neighbours. And 
bingo, we'd have a network that spans the globe, right?


Not so simple. Two problems, and they're serious ones as it turns out:

A) What happens if one of our n nearest neighbours doesn't have us among 
its n nearest neighbours? Then they won't point their gimbal back at us. 
How do we resolve this?
B) If n=3 and I have Dave, Mike, and Brandon as my nearest neighbours, 
Dave's 3 nearest neighbours are Mike, Brandon and me, Mike's nearest 
neighbours are Dave, Brandon and me, and Brandon has Dave, Mike and me 
as his nearest neighbours, then David, Dick and Sebastian, who may be 
orbiting a bit further away from us, don't get to link to our elitist 
cluster, and our dream of a global network turns to dust.


Now, Problem B (which also occurs for outward links from clusters with 
receiver type 1) can be mitigated by requiring a minimum distance to a 
neighbour, but in combination with A), we seem to have a nasty little 
overlay graph problem to solve. Oh, and we'd want to do that in a 
distributed fashion if possible, and every few seconds from scratch, please.
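
(A toy illustration of problems A and B - this assumes nothing about
Starlink's actual algorithm, it just has every satellite pick its n
nearest peers and then keeps only the links both ends agreed on:)

import math
from itertools import combinations

def nearest(sats, me, n, min_dist=0.0):
    """Each satellite independently picks its n nearest peers, optionally
    ignoring anything closer than min_dist (the Problem B mitigation)."""
    others = sorted((s for s in sats if s != me),
                    key=lambda s: math.dist(sats[me], sats[s]))
    return [s for s in others if math.dist(sats[me], sats[s]) >= min_dist][:n]

def mutual_links(sats, n, min_dist=0.0):
    """Problem A: a link only exists if both ends chose each other."""
    choice = {s: set(nearest(sats, s, n, min_dist)) for s in sats}
    return {frozenset((a, b)) for a, b in combinations(sats, 2)
            if b in choice[a] and a in choice[b]}

# Problem B: a tight 4-clique plus 3 stragglers. With n=3 every mutual
# link stays inside its own cluster and the graph is disconnected.
sats = {"Dave": (0, 0), "Mike": (1, 0), "Brandon": (0, 1), "me": (1, 1),
        "David": (9, 9), "Dick": (10, 9), "Sebastian": (9, 10)}
links = mutual_links(sats, n=3)
# -> all links stay inside each cluster, none between: no global network.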


--

Dr. Ulrich Speidel

School of Computer Science

Room 303S.594 (City Campus)

The University of Auckland
u.spei...@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/




___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Ulrich Speidel via Starlink

On 1/09/2022 7:05 pm, Mike Puchol via Starlink wrote:


There is circumstantial evidence from a user in Nigeria who was 
getting service and exiting via London; there is no evidence that any 
of the gateways in Nigeria are operational, so ISL could have played a 
role: 
https://www.reddit.com/r/Starlink/comments/wwg0nc/starlink_speed_test_in_nigeria/ 




Where does it mention that it was exiting via London?

--


Dr. Ulrich Speidel

School of Computer Science

Room 303S.594 (City Campus)

The University of Auckland
u.spei...@auckland.ac.nz  
http://www.cs.auckland.ac.nz/~ulrich/




___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Ulrich Speidel via Starlink

On 1/09/2022 7:58 pm, Sebastian Moeller wrote:

Hi Ulrich,

focussing on the CDN part

Sure, we're not on the same song sheet there yet I guess.



On Aug 31, 2022, at 15:41, Ulrich Speidel  wrote:
[...]
CDNs & Co - are NOT just dumb economic optimisations to lower bit miles. They 
actually improve performance, and significantly so. A lower RTT between you and a 
server that you grab data from via TCP allows a much faster opening of the 
congestion window. With initial TCP cwnd's being typically 10 packets or around 15 
kB of data, having a server within 10 ms of your client means that you've 
transferred 15 kB after 5 ms, 45 kB after 10 ms, 105 kB after 15 ms, 225 kB after 
20 ms, and 465 kB after 25 ms. Make your RTT 100 ms, and it takes half a second to 
get to your 465 kB. Having a CDN server in close topological proximity also 
generally reduces the number of queues between you and the server at which packets 
can die an untimely early death, and generally, by taking load off such links, 
reduces the probability of this happening at a lot of queues. Bottom line: Having a 
CDN keeps your users happier. Also, live streaming and video conferencing aside, 
most video is not multicast or broadcast, but unicast.
[...]

Sure, that is a consequence of slow start*, but I argue that having 2ms or 20ms is not going to 
result in too noticeable a slow down, and bulk transfers like movies really do not care since DASH 
in all likelihood leaves slow-start for good after the initial ramp-up. Yes, 1ms versus 100ms makes 
a difference for interactive uses, but putting the CDN in space versus at the base station IMHO is 
a less clear improvement. Add to this that caches partly work by exploiting "locality", 
so e.g. for video streaming platforms I expect different countries to have different viewing 
profiles and hence different content needs to be cached, but satellites cover large 
"rings" around the world; meaning either you cache everything (or at least the content 
for your primary market) in space multiple times over so that your main service area is covered at 
all times or...

I am prepared to eat crow on this in the future, but I am highly skeptical 
about CDNs in space (in spite of it being a cool project from the technological 
side).


Did I propose putting CDNs in space? I merely discussed whether there 
was any merit in this, and the answer is no.


As for the merit of CDNs in general (feeding into terrestrial last mile 
networks), that debate has been settled long ago. The example I chose 
was for TCP slow start in general. The difference isn't between 2 and 20 
ms but between your cwnd opening so slowly that a ~500 kB transfer takes 
half a second rather than 25 ms. And that example is easily extended.


If you have a network console on your browser, open a website that 
people frequently visit and have a look at how many files your browser 
loads for that. Have a look at how large they are. Then divide the size 
of each by 1500 bytes to get the number of packets in the transfer. Do a 
ping to the server the request goes to, and get the RTT. Then allow 10 
packets during the first RTT, 20 packets during the 2nd RTT, 40 during 
the 3rd, 80 during the 4th, and so on, and ask yourself how long it'll 
take to get all those packets across. And then you'll notice quickly 
that for size ranges up to a few MB, it makes a big difference whether 
the RTT is a few ms or a couple of hundred ms. A lot of common web site 
elements are in the order of a few 100 kB these days, and that means a 
few RTTs.
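
(The same arithmetic as a sketch you can run against your own console
numbers; the initial window of 10 packets and per-RTT doubling are the
simplifying assumptions from the example above:)

def rtts_to_transfer(size_bytes, rtt_s, mss=1500, iw=10):
    """RTTs until size_bytes arrive, with the cwnd doubling each RTT
    from an initial window of iw packets (classic slow start, no loss)."""
    pkts = -(-size_bytes // mss)   # ceiling division
    cwnd, sent, rtts = iw, 0, 0
    while sent < pkts:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts, rtts * rtt_s

# A 465 kB object is 310 packets -> 5 RTTs: ~half a second at 100 ms RTT.
print(rtts_to_transfer(465_000, 0.100))   # (5, 0.5)
print(rtts_to_transfer(465_000, 0.010))   # (5, 0.05)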


And yes, really small transfers done in less than one initial cwnd are a 
bit of an issue. You can see that on really crowded trunk GEO satellite 
links to a remote ISP. Because these small flows don't really back off, 
they are the only sort of flow that gets through, and as the load on the 
link increases, these can gobble up all the capacity while large flows die.



*) As it looks, slow start is getting a bad rep from multiple sides, but I see 
no better alternative out there that solves the challenge slow-start tackles 
in a better way, namely gradual ramping and probing of sending rates/congestion 
windows to avoid collapse; this in turn means that short flows will never reach 
capacity, the solution to which might well be: use longer flows then...


--

Dr. Ulrich Speidel

School of Computer Science

Room 303S.594 (City Campus)

The University of Auckland
u.spei...@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/




___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] CDNs in space!

2022-09-01 Thread Brandon Butterworth via Starlink
On Wed Aug 31, 2022 at 04:24:21PM -0400, David P. Reed via Starlink wrote:
> Having looked into this a lot, CDNs don't account for very much Internet 
> traffic

It is sufficient for many ISPs to host the VoD suppliers' caches
(including ours) for free.

> That doesn't mean that CDN servers don't help a fair bit

I think the CDN industry, like anti-virus software vendors, has done
us a disservice and made themselves self-perpetuating. Their goal to
interpose themselves and take a chunk of cash has taken some pressure
off backbone growth (and the cash that would have paid for it).

CDNs may have helped at the time, but the 40 / 100G decision was due
to the delayed need for 100G (on servers, but that rolls into the network
too), so it's hard to say if we'd have gone faster sooner or been delayed
waiting for technology instead.

Regardless it's left us with people feeling they need to pay CDNs for
large traffic needs and that leads to slower backbone growth.

> Also, CDN's need to be BIG to hold all the videos that people might
> choose to watch at any particular time.

They don't need to hold everything, just the most highly requested
set, enough to save enough bandwidth to be worth the expense. SSDs are large
now too (100TB, but good luck space-qualifying that). Being global is going to
make that harder, e.g. our content is largely UK-limited, so it is a waste of
space flying over other countries.
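
(For a feel of why the most-requested set is enough: a sketch that
assumes Zipf-distributed popularity - an assumption about VoD
catalogues generally, not a BBC statistic:)

def zipf_hit_ratio(catalogue, cached, s=0.8):
    """Fraction of requests served from cache when the `cached` most
    popular of `catalogue` items are held, with Zipf(s) popularity."""
    weights = [1 / (rank ** s) for rank in range(1, catalogue + 1)]
    return sum(weights[:cached]) / sum(weights)

# Caching 1% of a 100k-item catalogue already serves roughly a third
# of all requests under these assumptions:
print(f"{zipf_hit_ratio(100_000, 1_000):.0%}")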
  
> So I'm just pointing out that the business case for CDN's in space to
> merely solve Starlink's potential issues is probably not great

I agree, we're just mulling over if, and why, that might change.

> The idea that everyone watches TV and the same few seconds of content
> of a few shows that are extraordinarily popular - well, that dog don't
> hunt. It doesn't justify multicast either.

People do, but not on the Internet; a lot of people still watch linear
TV, despite the claims of Internet VoD suppliers who would have
you think they are the only game now.

For the BBC, about 5 to 10% of viewing is VoD. We've not moved the
remaining ~90% to the internet yet, as there is no multicast (some
FTTH providers are deploying it internally though), and CDNs are
a poor approximation to multicast.

> So let's improve the discussion here. The Internet, for the
> forseeable future, at the edge, is unicast.

Forever, most likely.

We aim to move all that linear over, but slowly, to allow the net to grow
with it. We could not move it in one go today (though we're a step closer
in the UK with the national fibre plan) except where there is multicast.
In moving it, people's habits may change, and it may all become time-dilated
into VoD.

> Starlink isn't a media company. It doesn't want to own all the content,
> or even host all the content.

Give them time. Everyone looks to move up the stack, where there is more
money.

> One thing is clear - Starlink isn't the Internet of the future. It's
> filling a niche (a large one, but a niche).

Yes, that's where we started some of this discussion, some think they
are an alternative and for some they are as needs are diverse.

brandon
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Brandon Butterworth via Starlink
On Wed Aug 31, 2022 at 02:34:04AM -0700, David Lang wrote:
> On Wed, 31 Aug 2022, Brandon Butterworth via Starlink wrote:
> 
> >With Starlink capacity being multiplexed per Dishy and uplink
> >and downlink capacity equal on each satellite there doesn't appear
> >to be any sharing gain to be had there warranting a CDN in space.
> 
> don't forget that there are also the laser links, they could link you to a 
> shared space CDN, and they also 'complicate' the uplink/downlink 
> calculations for any one satellite.

That was the subject of the following paragraphs. I agree that is
likely the key enabler for a space CDN.

Some have mentioned that SSD density is too high for space.

We're used to some hard errors in flash; is the space error rate too
high to cope with, even with increased sparing?

Or is it the soft error rate that is too high? At least for a CDN the
soft rate is less of an issue, as it is invalidating cache entries all
the time; this is just a new cause of invalidation that requires detecting,
and perhaps a less-than-whole-file invalidation for more efficient replacement.

brandon
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread David Lang via Starlink

On Wed, 31 Aug 2022, David P. Reed wrote:

I'm not going to reason from "intersatellite" routing being operational until 
they offer it in operation. It's feasible, sort of. Laser beam aiming is quite 
different from phased array beam steering, and though they may have tested it 
between two satellites, that makes it a "link technology", not a network. (You 
can steer a laser beam by moving lightweight mirrors, I know. But tracking 
isn't so easy when both satellites are moving relative to each other - it 
seems like way beyond the technology base that Starlink has put in its 
satellites so far. But who knows.)


They have been launching laser enabled satellites for a while now. I suspect
that the only way we will really know when they are enabled is when we see
coverage expand to the poles and mid-ocean (unless they make announcements about
it)

I doubt that they would be launching laser enabled satellites that could not
track each other.

David Lang
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Sebastian Moeller via Starlink
Hi Ulrich,

focussing on the CDN part

> On Aug 31, 2022, at 15:41, Ulrich Speidel  wrote:
> [...]
> CDNs & Co - are NOT just dumb economic optimisations to lower bit miles. They 
> actually improve performance, and significantly so. A lower RTT between you 
> and a server that you grab data from via TCP allows a much faster opening of 
> the congestion window. With initial TCP cwnd's being typically 10 packets or 
> around 15 kB of data, having a server within 10 ms of your client means that 
> you've transferred 15 kB after 5 ms, 45 kB after 10 ms, 105 kB after 15 ms, 
> 225 kB after 20 ms, and 465 kB after 25 ms. Make your RTT 100 ms, and it 
> takes half a second to get to your 465 kB. Having a CDN server in close 
> topological proximity also generally reduces the number of queues between you 
> and the server at which packets can die an untimely early death, and 
> generally, by taking load off such links, reduces the probability of this 
> happening at a lot of queues. Bottom line: Having a CDN keeps your users 
> happier. Also, live streaming and video conferencing aside, most video is not 
> multicast or broadcast, but unicast.
> [...]

Sure, that is a consequence of slow start*, but I argue that having 2ms or 20ms 
is not going to result in too noticeable a slow down, and bulk transfers like 
movies really do not care since DASH in all likelihood leaves slow-start for 
good after the initial ramp-up. Yes, 1ms versus 100ms makes a difference for 
interactive uses, but putting the CDN in space versus at the base station IMHO 
is a less clear improvement. Add to this that caches partly work by exploiting 
"locality", so e.g. for video streaming platforms I expect different countries 
to have different viewing profiles and hence different content needs to be 
cached, but satellites cover large "rings" around the world; meaning either you 
cache everything (or at least the content for your primary market) in space 
multiple times over so that your main service area is covered at all times or...

I am prepared to eat crow on this in the future, but I am highly skeptical 
about CDNs in space (in spite of it being a cool project from the technological 
side).


*) As it looks, slow start is getting a bad rep from multiple sides, but I see 
no better alternative out there that solves the challenge slow-start tackles 
in a better way, namely gradual ramping and probing of sending rates/congestion 
windows to avoid collapse; this in turn means that short flows will never reach 
capacity, the solution to which might well be: use longer flows then...
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


Re: [Starlink] Starlink "beam spread"

2022-09-01 Thread Mike Puchol via Starlink
The reason for -not- offering service in any given country is, 
primarily, regulatory (see the South Africa case). Once they
> I'm not going to reason from "intersatellite" routing being operational until 
> they offer it in operation. It's feasible, sort of. Laser beam aiming is 
> quite different from phased array beam steering, and though they may have 
> tested it between two satellites, that makes it a "link technology", not a 
> network. (You can steer a laser beam by moving lightweight mirrors, I know. 
> But tracking isn't so easy when both satellites are moving relative to each 
> other - it seems like way beyond the technology base that Starlink has put in 
> its satellites so far. But who knows.)

We operate several of these in Kenya: https://x.company/projects/taara

They offer 20 Gbps at distances of 20km, and they operate under considerably 
more vibration, motion, and scintillation than you have in space. They have no 
issue keeping track of each other once initial acquisition is made. SpaceX 
launched 10 satellites into polar orbit in Jan 2021, which it used to test and 
characterize the ISL optical heads - you could see them positioning the 
satellites in configurations to test side-looking (thus cross-plane), and at 
different altitudes (cross-shell), and even parallel links to characterize 
hardware differences (we did this with ours in Kenya too). It was fascinating 
to watch. I’m quite certain the least problem for Starlink (unless they made 
major boo-boos in hardware or software) is acquisition and tracking.

A very good book (but not cheap) on the topic is "Free Space Optical 
Communication” by Hemani Kaushal.
> As far as "intersatellite" routing being out there soon, well, there's no 
> evidence it's happening soon.

There is circumstantial evidence from a user in Nigeria who was getting 
service and exiting via London; there is no evidence that any of the gateways 
in Nigeria are operational, so ISL could have played a role: 
https://www.reddit.com/r/Starlink/comments/wwg0nc/starlink_speed_test_in_nigeria/

Best,

Mike
On Aug 31, 2022, 23:33 +0200, David Lang via Starlink 
, wrote:
> On Wed, 31 Aug 2022, David P. Reed wrote:
>
> > What's interesting to me is that their coverage map definitely doesn't cover
> > Africa, South America, Cuba, large parts of Asia, and it isn't planned - if
> > they had "mesh routing" working among satellites, those would be easy. But
> > instead, they seem to be focused on the satellite one-bounce architecture
> > (what the satellite industry calls "bent-pipe" however it is done).
>
> The countries covered in the coverage map seems to be as much or more 
> restricted
> by regulations as anything technical.
>
> David Lang
> ___
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
___
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink