Re: [OSRM-talk] Shared memory with multiple OSRM files
Apologies, I've just seen in the wiki that this is possible in later versions! Here's the link for anyone interested:
https://github.com/Project-OSRM/osrm-backend/wiki/Configuring-and-using-Shared-Memory#using-shared-memory

Kind regards,
Kieran Caplice

On 01/11/2018 12:56, Kieran Caplice wrote:

Hello,

We are having difficulty pre-processing planet data using the foot profile, so our option is instead to pre-process different regions separately. Is it possible to load multiple .osrm files into memory and use libosrm to run operations on them?

Kind regards,
Kieran Caplice

_______________________________________________
OSRM-talk mailing list
OSRM-talk@openstreetmap.org
https://lists.openstreetmap.org/listinfo/osrm-talk
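For anyone following the wiki link above: recent osrm-backend versions let each dataset live in its own named shared-memory block, selected per process. A sketch of the workflow, assuming a version new enough to support the `--dataset-name` flag described on that wiki page (region names and ports below are made up; verify the flags against your installed version):

```shell
# Load two regional extracts into separately named shared-memory blocks
osrm-datastore --dataset-name region/europe europe.osrm
osrm-datastore --dataset-name region/asia asia.osrm

# Run one osrm-routed per block (libosrm's EngineConfig exposes the
# equivalent options for embedding)
osrm-routed --shared-memory --dataset-name region/europe --port 5000 &
osrm-routed --shared-memory --dataset-name region/asia --port 5001 &
```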
[OSRM-talk] Shared memory with multiple OSRM files
Hello,

We are having difficulty pre-processing planet data using the foot profile, so our option is instead to pre-process different regions separately. Is it possible to load multiple .osrm files into memory and use libosrm to run operations on them?

Kind regards,
Kieran Caplice
Re: [OSRM-talk] osrm-datastore error code 21
Hi Daniel,

Many thanks for the reply again. An update: it turns out that when I rebooted the machine to fix the issue with running osrm-datastore across different users, I had forgotten that changes made at runtime with sysctl are not persisted after rebooting, so all is working fine now after adding the kernel.shmall and kernel.shmmax properties to /etc/sysctl.conf again.

Kind regards,
Kieran Caplice

On 29/01/18 18:07, Daniel Patterson wrote:

Hi Kieran,

The problem is definitely occurring when trying to allocate the shared memory block. This line from your strace output shows the error happening:

    shmget(0x10001b9, 96772768369, IPC_CREAT|0644) = -1 EINVAL (Invalid argument)

I suspect the "code 21 (EISDIR)" message we're printing out here is wrong or misleading. Can you try playing with the constants in this test program?

---BEGIN test.c---
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

//#define MEMORY_SIZE 96772768369
#define MEMORY_SIZE (1024*1024)
#define KEY_PATH "/tmp/osrm.lock"

int main(void)
{
    key_t tok = ftok(KEY_PATH, 0);

    // Original call that was failing:
    //int result = shmget(0x10001b9, 96772768369, IPC_CREAT|0644);
    int result = shmget(tok, MEMORY_SIZE, IPC_CREAT|0644);
    if (result == -1) {
        printf("shmget returned -1: errno is %d: %s\n", errno, strerror(errno));
    } else {
        printf("shmget worked - cleaning up\n");
        shmctl(result, IPC_RMID, NULL);
    }
    return 0;
}
---END test.c---

Try different values of MEMORY_SIZE, and also try uncommenting the original shmget line that I've included, to see if that works standalone and what messages you get. Compiling the test program should be as simple as "gcc test.c".

daniel

On Mon, Jan 29, 2018 at 1:48 AM, Kieran Caplice <kieran.capl...@temetra.com> wrote:

Hi Julien/Daniel,

Thanks for the replies. I had read that GitHub issue prior to emailing the list, and it did solve the initial error I was having, which was due to running osrm-datastore as root and later as user "osrm". Rebooting the machine solved this, as it did for you, Julien.
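The fix above in config form: runtime sysctl changes vanish on reboot, while entries in /etc/sysctl.conf (or a file under /etc/sysctl.d/) persist. A sketch only; the values are examples sized for the ~97 GB allocation in this thread, not recommendations, and note that shmmax is in bytes while shmall is counted in pages (usually 4096 bytes):

```
# /etc/sysctl.conf -- persisted across reboots; apply immediately with `sysctl -p`
kernel.shmmax = 103079215104   # max size of one segment, bytes (96 GiB, example)
kernel.shmall = 25165824       # total shared memory, in 4096-byte pages (96 GiB, example)
```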
But after that I'm faced with this issue.

@Daniel: /tmp contains the two files:

-rw-rw-r-- 1 osrm osrm 0 Jan 26 16:34 osrm-datastore.lock
-rw-rw-r-- 1 osrm osrm 0 Jan 26 16:34 osrm.lock

Output of the strace:

root@htzh /opt/osrm # su - osrm -c "strace osrm-datastore /opt/osrm/data/planet-latest/planet-latest.osrm"
...
open("/opt/osrm/data/planet-latest/planet-latest.osrm.tld", O_RDONLY) = 5
read(5, "OSRN\5\17\0M\25\17\0\0\0\0\0\0\1\0\3\0\1\1\3\0\1\0\v\0\1\1\v\0"..., 8191) = 8191
close(5) = 0
stat("/opt/osrm/data/planet-latest/planet-latest.osrm.partition", 0x7ffcde3c8d20) = -1 ENOENT (No such file or directory)
stat("/opt/osrm/data/planet-latest/planet-latest.osrm.cells", 0x7ffcde3c8d20) = -1 ENOENT (No such file or directory)
stat("/opt/osrm/data/planet-latest/planet-latest.osrm.cell_metrics", 0x7ffcde3c8d20) = -1 ENOENT (No such file or directory)
stat("/opt/osrm/data/planet-latest/planet-latest.osrm.mldgr", 0x7ffcde3c8d20) = -1 ENOENT (No such file or directory)
ioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0
write(1, "[info] Allocating shared memory "..., 57[info] Allocating shared memory of 96772768369 bytes
) = 57
stat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=4096, ...}) = 0
stat("/tmp/osrm.lock", {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
stat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=4096, ...}) = 0
stat("/tmp/osrm.lock", {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
shmget(0x10001b9, 96772768369, IPC_CREAT|0644) = -1 EINVAL (Invalid argument)
ioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0
write(2, "\33[31m[error] Error while attempt"..., 88[error] Error while attempting to allocate shared memory: Invalid argument, code 21) = 88
write(2, "\33[0m", 4) = 4
write(2, "\n", 1) = 1
write(2, "terminate called after throwing "..., 48terminate called after throwing an instance of ') = 48
...
Can you provide any further insight into what the problem might be? By the way, we're using the latest release version (v5.15.0), built from source on 16.04.

Kind regards,
Kieran Caplice

On 26/01/18 17:48, Julien Coupey wrote:

Hi, Not sure if you're hitting the same problem here, but I recall a related dis
Re: [OSRM-talk] Expecting time for osrm-contract for planet
Hi Daniel,

Can you clarify whether you use Docker at Mapbox? What kind of server do you have (if not the one described on the wiki page)? Overnight, the process has gone to just 75% complete from 65% yesterday (41 hours in total now), and it is back to maxing CPU again.

Kind regards,
Kieran Caplice

On 21/09/17 17:17, Daniel Patterson wrote:

OSRM supports *two* core routing algorithms: CH and MLD. The `osrm-contract` tool generates the CH dataset, but you can use the MLD pipeline instead with:

    osrm-extract -p profiles/bicycle.lua yourmap.osm.pbf
    osrm-partition yourmap.osrm
    osrm-customize yourmap.osrm
    osrm-routed -a MLD yourmap.osrm

This sequence of tools should be significantly quicker than osrm-contract; the price you pay is that routing requests are about 5x slower (still pretty fast though!). The reason MLD exists is the `osrm-customize` step: it allows you to import traffic data very quickly and update the routing graph (~1 minute for North America).

It's hard to say exactly what's going wrong with osrm-contract here. At Mapbox we run `osrm-contract` daily over the latest planet with the bicycle profile without a problem; however, Alex and others have reported issues with what seem like hangs on much smaller datasets, which we've been unable to reproduce so far.

The runtime of osrm-contract is affected by how much hierarchy exists in the data: the more similar the edge speeds (as in foot) and the more edges there are, the slower it gets, often in a non-linear fashion. The car profile has a very hierarchical structure (many different road speeds), so it fits well into the CH, and the construction algorithm doesn't need to compare as many options.

daniel

On Thu, Sep 21, 2017 at 9:03 AM, Kieran Caplice <kieran.capl...@temetra.com> wrote:

We're actually looking for the best of both car and foot, so in my head, bicycle would be the happy medium (though I could be completely wrong on this).
Kind regards,
Kieran Caplice

On 21/09/17 16:53, Alex Farioletti wrote:

I've run into the same issues, and now I just use Metro Extracts of the areas that I need for the bike stuff I do, and it reduces the time significantly.

Alex Farioletti
415.312.1674
tcbcourier.com

On Thu, Sep 21, 2017 at 8:49 AM, Kieran Caplice <kieran.capl...@temetra.com> wrote:

[...]
Re: [OSRM-talk] Expecting time for osrm-contract for planet
I take back what I said: the contract process has advanced 5%! Looks like it's now using more RAM than CPU, so I'll give it a bit more time before judging it prematurely again :-)

Kind regards,
Kieran Caplice

On 21/09/17 17:03, Kieran Caplice wrote:

We're actually looking for the best of both car and foot, so in my head, bicycle would be the happy medium (though I could be completely wrong on this).

Kind regards,
Kieran Caplice

On 21/09/17 16:53, Alex Farioletti wrote:

[...]
Re: [OSRM-talk] Expecting time for osrm-contract for planet
Thanks Daniel. I'm using the bicycle profile, so based on what you've said I would expect somewhere up to 36 hours to be likely. However, this is the current output, after 25h40m:

[info] Input file: /data/1505492056/planet-latest.osrm
[info] Threads: 12
[info] Reading node weights.
[info] Done reading node weights.
[info] Loading edge-expanded graph representation
[info] merged 2379332 edges out of 152432
[info] initializing node priorities... ok.
[info] preprocessing 389797971 (90%) nodes...
[info] . 10% . 20% . 30% . 40% . 50% . 60%

It hasn't advanced past 60% in the last 2-3 hours. It is, however, maxing CPU and has been using approximately the same amount of RAM since it started.

Kind regards,
Kieran Caplice

On 21/09/17 16:39, Daniel Patterson wrote:

Hi Kieran,

The contraction time will be slow: many, many hours for the whole planet. Typically for the car profile it's about 12 hours, but if you use bike or foot, or your own profile, it can get a lot bigger. If you've changed the travel speeds, that can have a big effect too. 24 hours is not unheard of, but whether it's legitimate will depend a lot on the details.

daniel

On Thu, Sep 21, 2017 at 7:00 AM, Kieran Caplice <kieran.capl...@temetra.com> wrote:

Hi all,

Could anyone give an approximate estimate of the time required to run osrm-contract on planet data on a 12-thread, 256 GB RAM, SSD machine? The osrm-extract process finished in 232 minutes, but the contract has now been running solidly for 24 hours and appears to be stuck at 60% on "preprocessing nodes".
Were there any files I should maybe have cleared before trying to run it again? I'm using the Docker image to run the command (osrm/osrm-backend:latest):

time docker run -t -v /opt/osrm/data:/data osrm/osrm-backend osrm-contract /data/1505492056/planet-latest.osrm

Kind regards,
Kieran Caplice
[OSRM-talk] Expecting time for osrm-contract for planet
Hi all,

Could anyone give an approximate estimate of the time required to run osrm-contract on planet data on a 12-thread, 256 GB RAM, SSD machine? The osrm-extract process finished in 232 minutes, but the contract has now been running solidly for 24 hours and appears to be stuck at 60% on "preprocessing nodes". All 12 cores are generally maxed out, and the process is using nearly 90 GB of RAM.

This is the second time I've run the contract process, as my SSH connection to the server dropped the first time and the process wasn't running in a screen etc., so I assumed after the 40-odd hours it had been running that the connection drop caused it to hang, but now I'm not so sure. Were there any files I should maybe have cleared before trying to run it again?

I'm using the Docker image to run the command (osrm/osrm-backend:latest):

time docker run -t -v /opt/osrm/data:/data osrm/osrm-backend osrm-contract /data/1505492056/planet-latest.osrm

Kind regards,
Kieran Caplice
Re: [OSRM-talk] Current server requirements for planet
Thank you all very much.

Kind regards,
Kieran Caplice

On 05/07/17 22:19, Daniel Hofmann wrote:

For the record: because we're seeing this question pop up every now and again, I just added a disk and memory requirements page to our wiki: https://github.com/Project-OSRM/osrm-backend/wiki. Feel free to adapt and/or add your own findings.

Cheers,
Daniel J H

On Wed, Jul 5, 2017 at 8:30 PM, Daniel Patterson <dan...@mapbox.com> wrote:

For preprocessing: the demo server uses about 175 GB of RAM to preprocess the planet, and around 280 GB of STXXL disk space (you'll also need 35 GB for the planet file, and 40-50 GB for the generated datafiles). For the foot profile, the latest number I have is about 248 GB of RAM; everything else is proportionally larger. The profile you choose has a big impact on size: the foot profile includes a lot more ways/roads/paths than the car profile, so it needs more resources. The cycling profile sits somewhere in between.

For runtime: you should be able to route on the planet with 64 GB of RAM. We basically just load all the files into memory, so whatever the output file size from preprocessing is, that's roughly how much RAM you'll need (minus the size of the `.fileIndex` file, which is `mmap()`-ed and read on demand).

On Wed, Jul 5, 2017 at 7:45 AM, Artur Bialecki <abiale...@intellimec.com> wrote:

In my case the disk space used is 102 GB, and about 64 GB of RAM while running with 30 threads. I run a non-standard profile, though, that returns additional data; not sure if that affects the footprint.

Artur...

-----Original Message-----
From: Kieran Caplice [mailto:kieran.capl...@temetra.com]
Sent: Wednesday, July 05, 2017 7:48 AM
To: osrm-talk@openstreetmap.org
Subject: [OSRM-talk] Current server requirements for planet

Hello,

What are the current recommended RAM and disk requirements for running an OSRM planet server? Thanks in advance.

Kind regards,
Kieran Caplice
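Daniel's rule of thumb above (runtime RAM is roughly the total size of the generated datafiles, minus the mmap()-ed `.fileIndex`) is easy to turn into a quick check before provisioning a server. A hypothetical helper, not part of OSRM itself:

```python
import glob
import os

def estimate_routed_ram(osrm_base: str) -> int:
    """Rough RAM needed by osrm-routed for a dataset, per the rule of
    thumb above: the sum of all generated file sizes for this dataset,
    minus the .fileIndex file, which is mmap()-ed and read on demand.

    osrm_base is the path prefix, e.g. "/data/planet-latest.osrm".
    """
    total = sum(os.path.getsize(p) for p in glob.glob(osrm_base + "*"))
    index = osrm_base + ".fileIndex"
    if os.path.exists(index):
        total -= os.path.getsize(index)
    return total
```

Usage would be `estimate_routed_ram("/data/planet-latest.osrm")`, compared against the machine's available memory with some headroom.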
Re: [OSRM-talk] Current server requirements for planet
Thanks. I saw that, but we're not really interested in using cloud-based VMs such as AWS or Azure, and the specs might differ somewhat between bare metal and VMs, so I'm really looking for opinions from people who have a planet server running, and what hardware they're using.

Kind regards,
Kieran Caplice

On 05/07/17 12:59, Ricardo Pereira wrote:

You can get an idea of the requirements and resulting performance by checking this, if the information is still current: https://github.com/Project-OSRM/osrm-backend/wiki/Demo-server

Cheers

On Wed, 5 Jul 2017 at 13:49 Kieran Caplice <kieran.capl...@temetra.com> wrote:

Hello,

What are the current recommended RAM and disk requirements for running an OSRM planet server? Thanks in advance.

Kind regards,
Kieran Caplice
[OSRM-talk] Current server requirements for planet
Hello,

What are the current recommended RAM and disk requirements for running an OSRM planet server? Thanks in advance.

Kind regards,
Kieran Caplice
[OSRM-talk] Required RAM for planet
Hi all,

Just enquiring whether anyone has an up-to-date value for the RAM required to extract/prepare the planet PBF on OSRM v4.9.1? Below is the output from extraction on a 64 GB RAM machine:

[info] Input file: planet-latest.osm.pbf
[info] Profile: profile.lua
[info] Threads: 12
[info] Using script profile.lua
[STXXL-MSG] STXXL v1.3.1 (release)
[STXXL-MSG] 1 disks are allocated, total space: 25 MiB
[info] Parsing in progress..
[info] input file generated by planet-dump-ng 1.1.3
[info] timestamp: 2016-02-29T01:59:57Z
[info] Using turn restrictions
[info] Found 3 exceptions to turn restrictions:
[info]   motorcar
[info]   motor_vehicle
[info]   vehicle
[info] Parsing finished after 3630.62 seconds
[info] Raw input contains 3240515916 nodes, 333181812 ways, and 4046532 relations, and 0 unknown entities
[extractor] Sorting used nodes... ok, after 301.697s
[extractor] Erasing duplicate nodes ... ok, after 262.325s
[extractor] Sorting all nodes ... ok, after 3419.92s
[extractor] Building node id map ... ok, after 1674.42s
[extractor] setting number of nodes ... ok
[extractor] Confirming/Writing used nodes ... ok, after 837.176s
[info] Processed 578533637 nodes
[extractor] Sorting edges by start... ok, after 1919.47s
[extractor] Setting start coords ... ok, after 2657.74s
[extractor] Sorting edges by target ... ok, after 1894.56s
[extractor] Computing edge weights... ok, after 2911.29s
[extractor] Sorting edges by renumbered start ... ok, after 1864.9s
[extractor] Writing used edges ... ok, after 557.274s
[extractor] setting number of edges ... ok
[info] Processed 610970822 edges
[extractor] Sorting used ways ... ok, after 89.8886s
[extractor] Sorting 491077 restriction. by from... ok, after 0.906043s
[extractor] Fixing restriction starts ... ok, after 39.6105s
[extractor] Sorting restrictions. by to ... ok, after 0.734124s
[extractor] Fixing restriction ends ... ok, after 40.6053s
[info] usable restrictions: 459264
[extractor] writing street name index ... ok, after 3.3452s
[info] extraction finished after 23181.9s
[info] Generating edge-expanded graph representation
[info] - 459264 restrictions.
[info] Importing n = 578533637 nodes
[info] - 157152 bollard nodes, 792260 traffic lights
[info] and 610970822 edges
[info] Graph loaded ok and has 610970822 edges
[warn] std::bad_alloc

From my reading, this is caused by running out of RAM. The only files created were:

-rw-r--r-- 1 root root  25G Mar  7 18:57 planet-latest.osrm
-rw-r--r-- 1 root root 118M Mar  7 19:00 planet-latest.osrm.names
-rw-r--r-- 1 root root  15M Mar  7 19:00 planet-latest.osrm.restrictions
-rw-r--r-- 1 root root   20 Mar  7 12:33 planet-latest.osrm.timestamp

Which obviously means preparing gives the following:

[info] Input file: planet-latest.osrm
[info] Profile: profile.lua
[info] Threads: 12
[info] Loading edge-expanded graph representation
[info] Opening planet-latest.osrm.ebg
[warn] [exception] osrm input file misses magic number. Check or reprocess the file

Thanks.

Kind regards,
Kieran Caplice
Re: [OSRM-talk] osrm-extract taking hours to complete
I realise now I did in fact send my last email to the list, rather than to Patrick directly; no harm done! The info might be useful to others anyway.

Thanks Björn, that's very helpful. Our extract took just over 7 hours yesterday, which isn't as long as I thought it would take, so we'll probably just schedule it to run every weekend or so and move the files to the correct location when finished.

Kind regards,
Kieran Caplice

On 03/03/16 09:23, Björn Semm wrote:

Hi Kieran,

We run an OSRM update (planet) once a week on a central instance and copy the generated files to different environments.

osrm@box:~$ ./osrm-update-planet-files.sh
Checking for md5sum [OK]
Checking for osrm-extract [OK]
Checking for osrm-prepare [OK]
Checking for tar [OK]
Checking for wget [OK]
Downloading planet-latest.osm.pbf.md5 ... [OK]
Downloading http://planet.osm.org/pbf/planet-latest.osm.pbf ... [OK]
Verifying md5 checksum of planet-latest.osm.pbf ... [OK]
Starting osrm-extract at Wed Mar 2 11:57:41 CET 2016...
Finished osrm-extract at Thu Mar 3 00:21:34 CET 2016!
Starting osrm-prepare at Thu Mar 3 00:21:34 CET 2016...
Finished osrm-prepare at Thu Mar 3 09:21:23 CET 2016!
Removing old extracts from /data/current ... empty [OK]
Copying new generated files to /data/current ... [OK]
Renaming files in /data/current with Prefix 201609 ... [OK]
Creating md5 checksum over all 201609_planet-latest* ... [OK]
Compressing 201609_planet-latest* to 201609_planet-latest.tar.gz ... [OK]
Determining if test or prod env is the target ... TEST [OK]
Copying new generated files to /mnt/osrm-extract (TEST) ... [OK]
Cleaning up /mnt/osrm-extract ... [OK]
Cleanup /data/planet-latest.osm.pbf ... [OK]

On a VM with 96 GB RAM, 4 cores, and RAID5 (HDD), it took about 12.5 hours to extract and 9 hours to prepare. Swap is 100 GB; the STXXL config is disk=/data/stxxl,25,syscall. We currently use version 4.9.0 of osrm-backend.

BR
Björn

From: Kieran Caplice <kieran.capl...@temetra.com>
Sent: Wednesday, 2 March 2016 18:23
To: osrm-talk@openstreetmap.org
Subject: Re: [OSRM-talk] osrm-extract taking hours to complete

Hi Patrick,

That makes sense then. It's obvious the process is just going to take upwards of 8-10 hours for us in that case. Thanks for the help.

Kind regards,
Kieran Caplice

On 02/03/16 17:01, Patrick Niklaus wrote:

Hey Kieran,

There have been a lot of structural changes (e.g. moving code from osrm-prepare into osrm-extract) that probably invalidate those numbers. We also support 64-bit OSM IDs now, which sadly uses a lot more disk space; I think STXXL needs something like 200 GB. On our setup we have a turnaround of about 6 hours for the planet dataset on an SSD (car profile; any other profile needs significantly longer).

You should probably think about upgrading your hard drives, as this is IO-bound. At your current read/write speed it will already take more than an hour just to write 200 GB of data once, and we scan it at least twice just for pre-processing.

Cheers,
Patrick

On Wed, Mar 2, 2016 at 5:51 PM, Kieran Caplice <kieran.capl...@temetra.com> wrote:

Hello,

I'm currently extracting the planet PBF (~31 GB), and it's been running for hours. I notice the "Running OSRM" wiki page says "On a Core i7 with 8GB RAM and (slow) 5400 RPM Samsung SATA hard disks it took about 65 minutes to do so from a PBF formatted planet", which is making me wonder why it's taking so long on our server. Below are some example output messages:

[info] Parsing finished after 3584.35 seconds
[extractor] Erasing duplicate nodes ... ok, after 319.091s
[extractor] Sorting all nodes ... ok, after 3632.87s
[extractor] Building node id map ... ok, after 2025.29s
[extractor] Confirming/Writing used nodes ... ok, after 1096.24s
[extractor] Sorting edges by start... ok, after 2000.08s

Some STXXL errors were output, as I set the disk size to 100 GB thinking it was enough, but I didn't think it would cause such slowdowns as this, considering extracting the Europe PBF also takes hours without the STXXL errors.

Server specs:

Ubuntu 14.04
Intel Xeon CPU E5-1650 v3 @ 3.50GHz (hex-core with HT)
64 GB RAM @ 2133 MHz
2 TB Western Digital Enterprise 7200 RPM hard drive

At the moment, disk IO is averaging around 35-40 MB/s R/W (~90%). Does anyone have any ideas as to what might be going on? Or is it normal to take this long without an SSD? Thanks in advance.

Kind regards,
Kieran Caplice
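Patrick's numbers above suggest sizing the STXXL scratch space well beyond the 100 GB that triggered the errors in this thread. In STXXL 1.x the disk layout is read from a `.stxxl` file in the working directory; a sketch only, with the path and size as example values (check your STXXL version for the exact capacity units; Björn's config elsewhere in this thread uses the same disk=path,size,access format):

```
# .stxxl in osrm-extract's working directory
# format: disk=<path>,<capacity>,<access method>
disk=/data/stxxl,250000,syscall
```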
Re: [OSRM-talk] osrm-extract taking hours to complete
Hi Patrick, Just wanted to ping you off-list. I just had a question about your update processhow often if at all do you update your PBF data, and how do you manage the process? We're thinking of just setting a cron job to download and extract the new PBF every weekend to a temporary folder, stop the server, move the new data to the correct location and start the server again. Kind regards, Kieran Caplice On 02/03/16 17:01, Patrick Niklaus wrote: Hey Kieran, there have been a lot of structural changes (e.g. moving code from osrm-prepare into osrm-extract) that probably invalidate that numbers. Also we support 64bit OSM ids now, which sadly uses a lot more disk space. I think stxxl need like 200GB. I think on our setup we have a turn-around of 6 hours for the planet dataset on an SSD setup (car profile, any other profile needs significantly longer). You should probably think about updating your hard drives as this is IO bound. At your current read/write speed it will already take more than an hour to just write 200GB of data once. We scan it at least twice just for pre-processing. Cheers, Patrick On Wed, Mar 2, 2016 at 5:51 PM, Kieran Caplice <kieran.capl...@temetra.com> wrote: Hello, I'm currently extracting the planet PBF (~31 GB), and it's been running for hours. I notice in the "Running OSRM" wiki page, it says " On a Core i7 with 8GB RAM and (slow) 5400 RPM Samsung SATA hard disks it took about 65 minutes to do so from a PBF formatted planet", which is making me wonder why it's taking so long on our server. Below are some example output messages: [info] Parsing finished after 3584.35 seconds [extractor] Erasing duplicate nodes ... ok, after 319.091s [extractor] Sorting all nodes ... ok, after 3632.87s [extractor] Building node id map ... ok, after 2025.29s [extractor] Confirming/Writing used nodes ... ok, after 1096.24s [extractor] Sorting edges by start... 
ok, after 2000.08s Some stxxl errors were output, as I set the disk size to 100 GB thinking it was enough - but I didn't think it would cause such slowdowns as this, considering extracting the Europe PBF also takes hours without the stxxl errors. Server specs: Ubuntu 14.04, Intel Xeon CPU E5-1650 v3 @ 3.50GHz (hex-core with HT), 64 GB RAM @ 2133 MHz, 2 TB Western Digital Enterprise 7200 RPM hard drive. At the moment, disk IO is averaging around 35-40 MB/s R/W (~90%). Does anyone have any ideas as to what might be going on? Or is it normal to take this long without an SSD? Thanks in advance. Kind regards, Kieran Caplice ___ OSRM-talk mailing list OSRM-talk@openstreetmap.org https://lists.openstreetmap.org/listinfo/osrm-talk
Re: [OSRM-talk] osrm-extract taking hours to complete
Hi Patrick, That makes sense then. It's obvious the process is just going to take upwards of 8-10 hours for us in that case. Thanks for the help. Kind regards, Kieran Caplice On 02/03/16 17:01, Patrick Niklaus wrote: Hey Kieran, there have been a lot of structural changes (e.g. moving code from osrm-prepare into osrm-extract) that probably invalidate those numbers. Also, we support 64-bit OSM ids now, which sadly uses a lot more disk space. I think stxxl needs something like 200 GB. I think on our setup we have a turn-around of 6 hours for the planet dataset on an SSD setup (car profile; any other profile needs significantly longer). You should probably think about upgrading your hard drives, as this is IO bound. At your current read/write speed it will already take more than an hour just to write 200 GB of data once, and we scan it at least twice just for pre-processing. Cheers, Patrick On Wed, Mar 2, 2016 at 5:51 PM, Kieran Caplice <kieran.capl...@temetra.com> wrote: Hello, I'm currently extracting the planet PBF (~31 GB), and it's been running for hours. I notice the "Running OSRM" wiki page says "On a Core i7 with 8GB RAM and (slow) 5400 RPM Samsung SATA hard disks it took about 65 minutes to do so from a PBF formatted planet", which makes me wonder why it's taking so long on our server. Below are some example output messages: [info] Parsing finished after 3584.35 seconds [extractor] Erasing duplicate nodes ... ok, after 319.091s [extractor] Sorting all nodes ... ok, after 3632.87s [extractor] Building node id map ... ok, after 2025.29s [extractor] Confirming/Writing used nodes ... ok, after 1096.24s [extractor] Sorting edges by start... ok, after 2000.08s Some stxxl errors were output, as I set the disk size to 100 GB thinking it was enough - but I didn't think it would cause such slowdowns as this, considering extracting the Europe PBF also takes hours without the stxxl errors. 
Server specs: Ubuntu 14.04, Intel Xeon CPU E5-1650 v3 @ 3.50GHz (hex-core with HT), 64 GB RAM @ 2133 MHz, 2 TB Western Digital Enterprise 7200 RPM hard drive. At the moment, disk IO is averaging around 35-40 MB/s R/W (~90%). Does anyone have any ideas as to what might be going on? Or is it normal to take this long without an SSD? Thanks in advance. Kind regards, Kieran Caplice ___ OSRM-talk mailing list OSRM-talk@openstreetmap.org https://lists.openstreetmap.org/listinfo/osrm-talk
[OSRM-talk] osrm-extract taking hours to complete
Hello, I'm currently extracting the planet PBF (~31 GB), and it's been running for hours. I notice the "Running OSRM" wiki page says "On a Core i7 with 8GB RAM and (slow) 5400 RPM Samsung SATA hard disks it took about 65 minutes to do so from a PBF formatted planet", which makes me wonder why it's taking so long on our server. Below are some example output messages: [info] Parsing finished after 3584.35 seconds [extractor] Erasing duplicate nodes ... ok, after 319.091s [extractor] Sorting all nodes ... ok, after 3632.87s [extractor] Building node id map ... ok, after 2025.29s [extractor] Confirming/Writing used nodes ... ok, after 1096.24s [extractor] Sorting edges by start... ok, after 2000.08s Some stxxl errors were output, as I set the disk size to 100 GB thinking it was enough - but I didn't think it would cause such slowdowns as this, considering extracting the Europe PBF also takes hours without the stxxl errors. Server specs: Ubuntu 14.04, Intel Xeon CPU E5-1650 v3 @ 3.50GHz (hex-core with HT), 64 GB RAM @ 2133 MHz, 2 TB Western Digital Enterprise 7200 RPM hard drive. At the moment, disk IO is averaging around 35-40 MB/s R/W (~90%). Does anyone have any ideas as to what might be going on? Or is it normal to take this long without an SSD? Thanks in advance. Kind regards, Kieran Caplice ___ OSRM-talk mailing list OSRM-talk@openstreetmap.org https://lists.openstreetmap.org/listinfo/osrm-talk
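[Editor's note: for anyone hitting the same stxxl errors, STXXL reads its scratch-disk configuration from a `.stxxl` file in the working directory (or the file named by the `STXXLCFG` environment variable). A minimal example, sized to roughly 250 GB to leave headroom over the ~200 GB Patrick mentions elsewhere in this thread; in the STXXL versions of that era the capacity was given as a plain number of MiB, and the path is of course site-specific:]

```
# .stxxl - one line per scratch disk:
# disk=<path>,<capacity in MiB>,<access method>
disk=/var/tmp/stxxl,250000,syscall
```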
Re: [OSRM-talk] Server API change
Hi Patrick, I'm assuming both of the limits you mentioned will be configurable via command-line options? Kind regards, Kieran Caplice On 20/12/15 18:27, Patrick Niklaus wrote: Hey! In preparation for the OSRM v4.9.0 release, we merged a few pull requests to our development branch. These changes are now reflected on the demo server: - The status code for `ok` was changed from `status: 0` to `status: 200` - There is an additional status code `208` for no segment found (with regard to failed coordinate snapping) - `viaroute` has a maximum limit of 500 coordinates - `trip` has a maximum limit of 100 coordinates - The `locate` service is gone; use `nearest` instead. You can expect the official 4.9.0 release soon, as we sort out the last bugs. Cheers, Patrick ___ OSRM-talk mailing list OSRM-talk@openstreetmap.org https://lists.openstreetmap.org/listinfo/osrm-talk
Re: [OSRM-talk] Shortest route given start and end point
Hi Helder, Of course - I meant to include a link in my previous email! We'll more than likely be going with VROOM (https://github.com/jcoupey/vroom), which runs on top of OSRM. I've made it web-accessible with a simple Java Spring Boot app. Kind regards, Kieran Caplice On 16/12/15 21:14, Helder Alves wrote: Dear Kieran, Do you want to share that solution with the list? :-) -- Helder Alves On 16/12/2015 3:13 PM, "Kieran Caplice" <kieran.capl...@temetra.com> wrote: Hi all, Thanks for the replies. I think we have found a solution. Kind regards, Kieran Caplice On 09/12/15 21:37, Daniel Patterson wrote: Hi Kieran, You're correct, OSRM doesn't currently implement the query you want, but all the data you need to answer the question is in the response of the `/table` API. In theory, supporting this exact situation (fixed start/end nodes) should be a fairly simple change to the trip plugin. With the addition of a URL parameter to indicate that it's not a round trip, we could insert a dummy node between the start/end points with 0 weight, and this should find a path with the properties you want once we discard the dummy node at the end. Changes here should be mostly limited to the `plugins/trip.cpp` file, adding some entries to the distance table before performing the TSP search. Even without this feature, you could test OSRM with a couple of thousand points as a full round trip; performance for the query would be roughly the same, though I have no idea how it would handle 1000s of points. A brute-force search is absolutely unfeasible (it's limited to 10 nodes inside OSRM), so it would use the Farthest Insertion algorithm, which we've had good results with for 10s to 100s of points, but I don't know if it's been tested with 1000s. I suspect it's probably still going to be slow; you're asking some pretty computationally expensive questions here. 
daniel On Dec 9, 2015, at 2:38 AM, Kieran Caplice <kieran.capl...@temetra.com> wrote: Hello, At the moment we're using the MapQuest Optimize Route API (http://www.mapquestapi.com/directions/#optimized), which, given a list of points, computes the shortest route using the first point as the start and the last point as the end. This is exactly the functionality we're looking for, but MapQuest is quite expensive, slow, and doesn't support large batches (we need to support a couple of thousand points). From what I've been told, OSRM doesn't support this - it only supports travelling salesman (trip), using the same start and end point, or viaroute, which doesn't do any optimisation. I'm wondering how easy/possible it would be to implement in OSRM, or whether there is any pre/post-processing we can do to achieve this? Thanks in advance. Kind regards, Kieran Caplice ___ OSRM-talk mailing list OSRM-talk@openstreetmap.org https://lists.openstreetmap.org/listinfo/osrm-talk
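[Editor's note: the dummy-node trick Daniel describes can be sketched in a few lines. This is an illustrative, hypothetical implementation - a brute-force solver over a small symmetric cost matrix, not OSRM's Farthest Insertion code: append a dummy node with zero cost to and from the fixed start and end points, solve the round trip through it, then drop it to recover the open path.]

```python
# Sketch (not OSRM code): fixed start/end "open" TSP via a dummy node.
from itertools import permutations

BIG = 10**9  # effectively infinite cost


def open_tsp(dist, start, end):
    """Shortest path visiting every node, from `start` to `end`.
    `dist` is a small symmetric square cost matrix."""
    n = len(dist)
    # Append dummy node n: free to travel between it and start/end,
    # prohibitively expensive everywhere else.
    ext = [row[:] + [BIG] for row in dist] + [[BIG] * (n + 1)]
    ext[n][start] = ext[start][n] = 0
    ext[n][end] = ext[end][n] = 0
    best, best_cost = None, None
    # Enumerate round trips anchored at the dummy node; the zero-cost
    # edges force start and end next to it, so dropping it afterwards
    # leaves the open path we want.
    for perm in permutations(range(n)):
        cost = (ext[n][perm[0]]
                + sum(ext[perm[i]][perm[i + 1]] for i in range(n - 1))
                + ext[perm[-1]][n])
        if best_cost is None or cost < best_cost:
            best, best_cost = list(perm), cost
    if best[0] != start:  # symmetric matrix: reversing preserves cost
        best.reverse()
    return best, best_cost
```

In a real deployment the brute-force loop would be replaced by the Farthest Insertion heuristic Daniel mentions; the dummy-node transformation itself is the same either way.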
[OSRM-talk] Shortest route given start and end point
Hello, At the moment we're using the MapQuest Optimize Route API (http://www.mapquestapi.com/directions/#optimized), which, given a list of points, computes the shortest route using the first point as the start and the last point as the end. This is exactly the functionality we're looking for, but MapQuest is quite expensive, slow, and doesn't support large batches (we need to support a couple of thousand points). From what I've been told, OSRM doesn't support this - it only supports travelling salesman (trip), using the same start and end point, or viaroute, which doesn't do any optimisation. I'm wondering how easy/possible it would be to implement in OSRM, or whether there is any pre/post-processing we can do to achieve this? Thanks in advance. Kind regards, Kieran Caplice ___ OSRM-talk mailing list OSRM-talk@openstreetmap.org https://lists.openstreetmap.org/listinfo/osrm-talk
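[Editor's note: as Daniel points out earlier in this thread, the `table` service returns the full cost matrix that any TSP post-processing needs. A minimal, hypothetical sketch of building such a request; note it uses the v5 URL scheme, which postdates this 2015 thread - the v4 server of the time took repeated `loc=lat,lon` query parameters instead - and the host and coordinates are placeholders.]

```python
def table_url(host, coords, profile="driving"):
    """Build an OSRM v5 `table` request URL from (lon, lat) pairs.

    The JSON response contains a `durations` matrix that can feed a
    TSP heuristic; fetching it is left to e.g. urllib.request.
    """
    coord_str = ";".join("{},{}".format(lon, lat) for lon, lat in coords)
    return "{}/table/v1/{}/{}".format(host, profile, coord_str)
```

For a couple of thousand points, note that the server's `table` size limit would also have to be raised from its default.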