Re: [atlas] Long measurement delays

2019-11-30 Thread scg
What I’m getting back is “[]”. I’m not sure if I’m doing the right query to get 
the “synchronizing” response, though. 
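
For reference, here is a minimal sketch of the two queries in question (Python 
with the requests library; the measurement ID is a placeholder). The status 
field lives on the measurement object itself, while the results endpoint just 
returns a list:

    # Minimal sketch: look at a one-off measurement's status and results
    # via the RIPE Atlas v2 API.  MSM_ID is a placeholder.
    import requests

    MSM_ID = 12345678  # placeholder measurement ID

    msm = requests.get(
        "https://atlas.ripe.net/api/v2/measurements/%d/" % MSM_ID,
        timeout=30,
    ).json()
    # While a measurement is still being scheduled, this can show up as
    # something like {'id': None, 'name': 'synchronizing'}.
    print(msm.get("status"))

    results = requests.get(
        "https://atlas.ripe.net/api/v2/measurements/%d/results/" % MSM_ID,
        timeout=30,
    ).json()
    print(results)  # an empty list ("[]") just means no results yet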

-Steve

> On Nov 30, 2019, at 8:58 AM, Stephane Bortzmeyer  wrote:
> 
> On Fri, Nov 29, 2019 at 10:07:33PM -0700,
> Steve Gibbard  wrote 
> a message of 35 lines which said:
> 
>> I’ve been seeing delayed query responses to one-off measurements
>> since November 22, with the exception of November 25.
> 
> Isn't it the "synchronizing" problem? (Measurement status is reported
> as "{'id': None, 'name': 'synchronizing'}".)
> 



Re: [atlas] Communication with probes' owners

2019-04-23 Thread scg
Having individual users contact other individual users about probe location 
problems doesn't seem like a very scalable solution. It leaves it somewhat 
random which issues get caught, and it may leave probe owners whose probes look 
atypical having to explain their situation over and over again to random 
people. 

I have a cron job that goes through the entire probe list every few hours and 
runs the IP addresses against the MaxMind GeoLite databases.  MaxMind has its 
own accuracy issues, but after a bunch of spot checking I decided that trusting 
the MaxMind answers was better than trusting the owner-reported information for 
the probes. 
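
For anyone curious, the check is roughly along these lines (a simplified 
sketch, not the exact script; the GeoLite2 database path and the 50 km 
threshold are placeholders, and it uses the requests and geoip2 libraries):

    # Simplified sketch: walk the public probe list and compare each
    # probe's reported coordinates against GeoLite2's answer for its
    # IPv4 address.  Database path and 50 km threshold are placeholders.
    import math
    import requests
    import geoip2.database
    import geoip2.errors

    reader = geoip2.database.Reader("GeoLite2-City.mmdb")

    def distance_km(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance in kilometres.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))

    url = "https://atlas.ripe.net/api/v2/probes/?status=1&page_size=500"
    while url:
        page = requests.get(url, timeout=60).json()
        for probe in page["results"]:
            ip, geom = probe.get("address_v4"), probe.get("geometry")
            if not ip or not geom:
                continue
            lon, lat = geom["coordinates"]  # Atlas stores [longitude, latitude]
            try:
                loc = reader.city(ip).location
            except geoip2.errors.AddressNotFoundError:
                continue
            if loc.latitude is None or loc.longitude is None:
                continue
            if distance_km(lat, lon, loc.latitude, loc.longitude) > 50:
                print("probe %d: owner location disagrees with GeoLite2" % probe["id"])
        url = page.get("next")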

If somebody wants to take a more systematic approach to getting the Atlas 
location data cleaned up, I’d be happy to share a diff. But I’d suggest that it 
be done by somebody with access to the database cleaning up things that look 
wrong, instead of bugging a bunch of individual probe owners. 

-Steve

Steve Gibbard

> On Apr 23, 2019, at 4:10 AM, Ponikierski, Grzegorz  
> wrote:
> 
> I thought about a simple web form available only to logged-in users of RIPE 
> Atlas. That way all private data stay hidden, and RIPE can rate-limit usage 
> of the form. The message itself can be sent to the probe's owner via email 
> from RIPE Atlas infra, so the sender's identity can also be hidden. If 
> somebody wants to switch to email communication, the form can also be used 
> to exchange email addresses.
>  
> Regards,
> Grzegorz
>  
> From: Martin Boissonneault 
> Date: Monday 2019-04-22 at 02:25
> To: Carsten Schiefner 
> Cc: "ripe-atlas@ripe.net" 
> Subject: Re: [atlas] Communication with probes' owners
>  
> The best might be for RIPE to contact the owner when the records don't match 
> what is detected from the probe? 
>  
> Some method to trigger a check could be added to the probe's profile, and 
> there would not be ANY chance of email abuse by throwaway accounts?
>  
> Allowing users to contact probe owners has to be VERY well made to avoid all 
> sorts of attacks and spam!
>  
> Martin Boissonneault
> Sent from my iPhone
> 
> On Apr 21, 2019, at 18:14, Carsten Schiefner  wrote:
> 
> On 21.04.2019 at 19:59, Dave . wrote:
> If this gets implemented, please add a checkbox where one can indicate 
> whether one is just a user, or can also get things fixed in the AS where the 
> probe is connected.
> Makes sense to me: +1.
>  
> Would a reminder every 1/2/3 month[s] then make sense, to confirm that this 
> is (still) the case, i.e. that the flag should stay set?
>  
> As the probe’s circumstances may change...
>  
> On Fri, 19 Apr 2019 at 12:37, Paolo Pozzan wrote:
> It seems like a good idea. I don't think this will be abused, and if it is, 
> it would be easy to point out the spammers.
> Would this also be useful for other kinds of messages?
>  
> Paolo


Re: [atlas] Why has probe growth stagnated?

2019-02-19 Thread scg
I think it’s worth first considering a couple of questions: what is the goal 
here, and what are the constraints on meeting that goal? 

If the goal is “lots and lots of probes in ever-increasing numbers,” then 
spinning up lots of VM probes would be great.  It would be an easy way to get 
probes in large numbers cheaply and efficiently.  But if the goal is to do 
actual network performance measurements from the perspective of the end users 
who actually use the Internet, that doesn’t help much. 

Where Atlas really shines is in the huge number of measurement points on end 
user connections all over the world.  Need to understand what the network looks 
like to users on some ISP in Venezuela?   Atlas probably has a probe, and can 
tell you that. 

Here we run into the issue that the low-hanging fruit has mostly been picked.  For 
instance, I could plug in a probe at my house, but it would be the third Atlas 
probe on Comcast in Oakland, California, and wouldn’t really add anything 
(thus, I have a probe that I’ve been carrying around in my bag for the last few 
months waiting until I have time to plug it in somewhere more interesting). But 
there are still a lot of smaller ISPs that don’t have Atlas probes despite 
having enough end users for measurements to matter, probably because they don’t 
have any customers who are part of the global network operations community.  It 
should be possible to get probes installed in a bunch of those, but it would 
require both available probe hardware and a targeted effort.

My second question is what the constraints are on sending out new probes.  Is 
there a shortage at the supplier, or is this just something that needs funding?

-Steve


> On Feb 14, 2019, at 11:22 AM, Jared Mauch  wrote:
> 
> I think it’s quite easy to get a VM these days as well, so the needs have 
> perhaps changed somewhat.
> 
> I know that hosting a VM anchor is a lot easier now, and people may have an 
> easier time hosting a VM than a probe in some cases.
> 
> - Jared
> 
>> On Feb 14, 2019, at 12:13 PM, James Gannon  wrote:
>> 
>> Hard to get new probes these days.
>> 
>> On 14.02.19, 18:10, "ripe-atlas on behalf of Hank Nussbacher" 
>>  wrote:
>> 
>>   On 12/02/2019 18:22, Hank Nussbacher wrote:
>> 
>>   As I am preparing my presentation I went to the stats page:
>>   https://atlas.ripe.net/results/maps/network-coverage/
>>   and found that, even though user growth continues upward, as does the number 
>>   of anchor probes, the number of actual probes has more or less tapered off 
>>   as of mid-2017 and has leveled off close to 10,000 probes.  Why is that?
>>   Since Nov 2015, when we passed the 9000-probe mark, probe growth has been 
>>   negligible.
>>   Why have all these new users (20,000 new users since Nov 2015!) not added 
>>   probes?
>>   What are we doing wrong to entice users to install probes?
>> 
>>   Regards,
>>   Hank
>> 
>>> I have been invited to a large CS department at a university to give a 
>>> 40-minute intro to RIPE Atlas: what it is, how it works, how you get 
>>> credits, how many probes there are, what an anchor is, where they are 
>>> located, how the GUI works, what types of measurements one can do, etc.  
>>> Very, very introductory - just to whet their appetite.  A basic intro to 
>>> RIPE Atlas.
>>> So I looked in:
>>> https://atlas.ripe.net/resources/training-and-materials/
>>> and didn't find anything (PS the webinar link is broken).
>>> I am sure there must be some PPT/PDF presentation out there for this.
>>> Pointers?
>>> 
>>> Thanks,
>>> Hank
>> 
>> 
>> 
>> 
>> 
> 
> 



Re: [atlas] Global Traceroute

2018-07-17 Thread scg



> On Jul 16, 2018, at 1:49 PM, Vladislav V. Prodan  wrote:
> 
> Hello.
> 
> Thank you for your work.
> 

Thanks for the feedback!

> I will summarize my wishes:
> 
> 1) After clicking the "Submit" button, the page should show an indicator or
> the text "Loading", so that the user realizes the request is not fast and
> doesn't hurry to leave the page.

Agreed.  This is on my to-do list.  
> 
> 2) If there is only one probe in the selected ASN, then after selecting the
> ASN, that probe should be selected automatically.

This sounds like a good idea.  I’ll work on it. 

> 
> 3) It would be desirable that, for the request
> https://www.globaltraceroute.com/?probe=19178, the top fields were
> automatically filled in for probe #19178.
> 
> 4) It would be desirable that, for the request
> https://www.globaltraceroute.com/?probe=19178&target=1.1.1.1, the top fields
> were automatically filled in for probe #19178, "Target Address" was set to
> 1.1.1.1, and the traceroute request was submitted.
> 
> 5) It would be desirable that the request
> https://www.globaltraceroute.com/?country=UK&target=1.1.1.1 picked a random
> probe from the selected location (UK), filled in the top fields for that
> probe, set "Target Address" to 1.1.1.1, and automatically submitted the
> request to build the route.
> 
> 6) I would like it to work correctly in console programs such as curl, lynx and wget.
> 
I’m curious about the use case for this. 

Using the Atlas API, if you already know the probe ID, you do the traceroute 
with two HTTP transactions. The first one creates the measurement and returns 
JSON containing a measurement ID.  The second, 30 seconds to a minute later 
(thus the slowness of the web app to return results), sends the measurement ID 
and retrieves JSON containing the results (a rough sketch of the two calls 
follows the list below). What I’m adding is:

- making it easier to find the right probes
- turning two requests into one
- supplying Atlas credits to pay for the traceroute
- reformatting the JSON output into a traditional text-based traceroute output, 
which is easier for humans to read but maybe less useful if you’re generating 
the traceroutes from a machine. 
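
To make the two transactions concrete, here is a rough sketch (not the actual 
Global Traceroute code; the API key, probe ID, and target are placeholders):

    # Rough sketch of the two HTTP transactions described above (not the
    # actual Global Traceroute code).  API key, probe ID, and target are
    # placeholders.
    import time
    import requests

    API_KEY = "your-atlas-api-key"   # placeholder
    PROBE_ID = 19178
    TARGET = "1.1.1.1"

    # Transaction 1: create the one-off measurement; the reply carries the
    # new measurement ID.
    create = requests.post(
        "https://atlas.ripe.net/api/v2/measurements/",
        params={"key": API_KEY},
        json={
            "definitions": [{
                "type": "traceroute",
                "af": 4,
                "target": TARGET,
                "protocol": "ICMP",
                "description": "one-off traceroute",
            }],
            "probes": [{
                "type": "probes",
                "value": str(PROBE_ID),
                "requested": 1,
            }],
            "is_oneoff": True,
        },
        timeout=30,
    )
    create.raise_for_status()
    msm_id = create.json()["measurements"][0]

    # Transaction 2: give the probe time to run the traceroute, then fetch
    # the results (this wait is where most of the delay comes from).
    time.sleep(60)
    results = requests.get(
        "https://atlas.ripe.net/api/v2/measurements/%d/results/" % msm_id,
        timeout=30,
    ).json()
    if results:
        for hop in results[0]["result"]:
            print(hop)
    else:
        print("no results yet; try again shortly")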

Are you looking for a faster way to do manual requests and get human-readable 
output, trying to point an automated system at it, or some hybrid of the two?  
And if automated, is the human-readable output ideal, or would you be better 
off dealing with a more machine-readable format?

> 7) A note at the end of the route when the "Target Address" is an anycast
> address, especially for Facebook, Google, YouTube and Cloudflare IPs.

I’m curious about the use case again. Also, is there a good source for that 
data, or would this be adding them one at a time as I discover them?

> 
> 8) reCAPTCHA is certainly good against abuse, but per-account limits on the
> number of requests would be more accountable. It could be possible to raise
> the request limits after authorization through Google or Facebook.

This is largely an issue of resources. Thanks to a generous donor, I have 
enough credits for more than a million traceroutes.  If I run through that due 
to human users, that will mean this is far more successful than I expect it to 
be, and there should be no problem either getting more donated, or coming up 
with a revenue model to buy them through Atlas sponsorship. But if I open it up 
for machine-generated measurements, it wouldn’t be that difficult for a single 
user to run through a million measurements. 

So, I’m certainly happy to accommodate measurement by machine or in mass 
quantities, but need to figure out how to make it sustainable. I have a few 
models in mind for that, but again it largely depends on the use cases. 
> 
> 9) I would like correct ASN recognition for private ("gray") IPs, e.g. 10.137.128.1
> (10.137.128.1) [AS ???]
> 
> 10) I would like correct ASN recognition for certain other IPs, e.g. 185.1.50.68
> (185.1.50.68) [AS ???] 12.581 ms
> 

The IP address to ASN mapping comes from MaxMind’s GeoLite2 ASN database. 
Putting in an override for RFC1918 would be pretty easy.  Other corrections 
should go through MaxMind; that will fix a lot more than just this, and I don’t 
think it’s scalable for me to track down every error in MaxMind. 
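
Something along these lines is what I mean by an RFC1918 override (a sketch 
using the geoip2 library; the database path is a placeholder):

    # Sketch: ASN lookup against the GeoLite2 ASN database, with an
    # override so private (RFC1918 etc.) addresses get a label instead
    # of "AS ???".  Database path is a placeholder.
    import ipaddress
    import geoip2.database
    import geoip2.errors

    reader = geoip2.database.Reader("GeoLite2-ASN.mmdb")

    def asn_label(ip_str):
        addr = ipaddress.ip_address(ip_str)
        if addr.is_private:
            # Never in the public ASN database, so don't bother looking.
            return "[private address]"
        try:
            rec = reader.asn(ip_str)
            return "[AS%d %s]" % (rec.autonomous_system_number,
                                  rec.autonomous_system_organization)
        except geoip2.errors.AddressNotFoundError:
            return "[AS ???]"

    print(asn_label("10.137.128.1"))  # -> [private address]
    print(asn_label("8.8.8.8"))       # e.g. [AS15169 GOOGLE]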

Thanks again for the feedback.  It’s really useful. 

-Steve