So I get the impression that everybody on this list is pretty comfortable with 
the approach of using this script to help ensure that the points entered in 
JOSM have good coordinates and correct addresses, to the best knowledge of the 
armchair mapper. It clearly isn't a waterfall import. If there are no more 
objections, on Friday I'll send an email to the imports mailing list describing 
the approach. Given that the script can be run by anyone and is not part of a 
bulk import done by a single user, I agree that we need some way to indicate 
the source. However, instead of tagging each object, why don't we just add a 
`source:import=NSW LPI Web Services` tag to the changeset? That way anybody 
viewing the history will know.
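For what it's worth, here's a minimal sketch of what that would look like at 
the API level. The changeset tags (including `source:import`) are sent in the 
XML body when the changeset is opened via the OSM API 0.6 `changeset/create` 
call; in JOSM you'd just add the same key/value in the changeset tags table of 
the upload dialog. The `created_by` value below is a hypothetical script name, 
not anything agreed on:

```python
# Sketch: the XML body a client would PUT to /api/0.6/changeset/create
# so the import source is recorded on the changeset itself, not on each object.
import xml.etree.ElementTree as ET

osm = ET.Element("osm")
changeset = ET.SubElement(osm, "changeset")
# "address-helper-script" is a placeholder name for the script in question.
ET.SubElement(changeset, "tag", k="created_by", v="address-helper-script")
ET.SubElement(changeset, "tag", k="comment", v="Add NSW address points")
ET.SubElement(changeset, "tag", k="source:import", v="NSW LPI Web Services")

print(ET.tostring(osm, encoding="unicode"))
```

That keeps the per-object tags clean while still leaving an auditable trail in 
the changeset history.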

Given that there are 3.8 million addresses in total in NSW, assuming it took 1 
second for somebody to add an address, it would take roughly 44 days of 
non-stop work to add every single address. This is not exactly an exciting 
task! We can 
probably cover the city and immediate suburbs relatively quickly, but maybe it 
is worthwhile investigating the bulk import a bit more. Perhaps once Andrew 
Harvey finishes his work on openaddresses, we can use that data dump and follow 
the New Zealand approach of importing bit by bit - we can divide the dataset 
into an alphabetical list of suburbs, and then treat each suburb's import 
separately.
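The back-of-envelope numbers work out like this (a rough sketch only; the 
per-address time is a guess, and the 8-hour figure is just to show what a more 
realistic mapping schedule would imply):

```python
# Back-of-envelope estimate for the manual address-entry effort.
TOTAL_ADDRESSES = 3_800_000   # approximate NSW address count
SECONDS_PER_ADDRESS = 1       # optimistic guess at per-address entry time

total_seconds = TOTAL_ADDRESSES * SECONDS_PER_ADDRESS
days_nonstop = total_seconds / 86_400       # 24-hour days, no breaks
days_at_8h = total_seconds / (8 * 3_600)    # 8-hour mapping days

print(f"~{days_nonstop:.0f} days non-stop, ~{days_at_8h:.0f} eight-hour days")
```

Either way it's far too much for one person, which is why splitting the 
dataset by suburb and spreading the work out makes sense.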

Ideas?

Dion Moult

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On June 18, 2018 9:23 PM, Dion Moult <d...@thinkmoult.com> wrote:

> On June 18, 2018 8:56 PM, Warin <61sundow...@gmail.com> wrote:
>
>> On 18/06/18 20:30, Andrew Harvey wrote:
>>
>>> On 18 June 2018 at 19:21, Dion Moult <d...@thinkmoult.com> wrote:
>>>
>>>> Thanks Andrew for your reply!
>>>>
>>>> 1. Thanks for the link to the import guidelines. My responses to the 
>>>> import guidelines below:
>>>
>>> First up I think any changesets that import addresses in this way should 
>>> have an extra changeset tag so if we need to we can identify which 
>>> changesets did the import (so more than just source=LPI NSW Base Map). 
>>> Something like import=NSW Address Points or something.
>>
>> source:import=LPI API via ?? something like that?
>
> Sure thing, I would be happy to if that is the appropriate thing to do :)
>
>>> I'm not sure about separating the address with a ";" like 
>>> https://www.openstreetmap.org/way/593297556/history#map=19/-33.78072/151.06688&layers=N,
>>>  could they not be two separate points? If it's a duplex, then I'd do it as 
>>> a single building with addr:housenumber=11, then, if you want, two nodes 
>>> inside the building for 11A and 11B.
>>
>> I have had separate buildings for A and B that share a common wall. In some 
>> instances I have 11 then 11A .. but no B.
>
> Thanks for the advice! I've fixed it to use two nodes. However, please note 
> that that particular building was not mapped as part of my import script 
> proposal. That was mapped previously by me completely manually. If I had used 
> the import script it would have created two nodes, one for 11A and one for 
> 11B.
>
>>> While I don't think there's anything wrong with 2/18 as a first pass, eg 
>>> https://www.openstreetmap.org/node/5667899003, I think it's better to use 
>>> addr:unit=2 addr:housenumber=18.
>
> Thanks :) I was wondering what was a better way of doing that. Fixed :) Again 
> as above this was mapped manually by me and not using the script.
>
>>>>  1. I am aware that big automatic updates can cause problems. I will only 
>>>> import addr:housenumber and addr:street and a single node.
>>>
>>> What are you planning on doing where the address in already in OSM? I think 
>>> in this case we should just not import that point and leave the existing 
>>> OSM addresses.
>>
>> Depends .. I have come across addresses that were out of sequence. Contacted 
>> the still-active mapper (moved to Germany) and had no response .. after some 
>> months I simply deleted them.
>> So it is worth checking whether the new data is 'better' than the present 
>> OSM data.
>
> With my proposal of a semi-automated approach, every single new address will 
> have to be explicitly decided upon by a human mapper. A human mapper can 
> decide whether to import the point (when the existing data looks bad, based 
> on the LPI Base Map raster background) or to leave the existing OSM addresses.
>
>>>
>>>
>>>> 2. Yes, you are absolutely right that this is not a huge automatic import 
>>>> - it relies on a human choosing what addresses to add and a human 
>>>> submitting it as a change. All it does is automate the address lookup and 
>>>> make sure that the node is neatly positioned at the correct location.
>>>
>>>> 3. It looks like you're grabbing their entire dataset. That would be the 
>>>> alternative approach, doing a data dump, then importing that dump. This 
>>>> can import a lot more addresses, but is also much more complex. Is it 
>>>> worth pursuing? What do you reckon?
>>>
>>> Oh I'm not suggesting that. It makes sense for the OpenAddresses project to 
>>> use a complete extract, but as you might have seen in the openaddresses 
>>> ticket there's a lot of problems trying to dump the data, so your approach 
>>> of doing it bit by bit should work much better for an OSM import.
>
> Sounds good! Sorry for the misunderstanding :)
>
>>>
>>>
>>>> 4. It seems odd that they would provide an API but would prevent anything 
>>>> from using it.
>>>> 5. Looks like they are doing the big data import. See 3.
>>>
>>> Not quite, they did it using the approach you've described: broke it down 
>>> into pieces and manually imported everything.
>>
>> It might be good to do one section and let people have a look at it?
>> I do think you'll find it repetitive. Maybe take a break and map something 
>> else for a while.
>>
>> Good Luck.
>
> Yes, an incremental approach followed by regular review sounds good.
_______________________________________________
Talk-au mailing list
Talk-au@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-au