Thanks Anselm. This is an interesting mock-up, a little more refined than the enkin.net hack. Here are a few quick thoughts:
1. Google has already released mobile Street View (see http://googlemobile.blogspot.com/2008/09/street-view-and-walking-directions-come.html). AR geotags are sure to follow; they're already partially supported by KML (see http://code.google.com/apis/maps/documentation/services.html#StreetviewOverlays).

2. Creating an image DNS is a great idea, and it's necessary for 3D geopositioning by matching viewable points against a pointcloud database, perhaps achievable by an OSM-style process. In the meantime, experimenting with existing data seems like a path forward. Earthmine.com has offered the hacker community experimental access to their very detailed database of 3D pointclouds for cities in the western US (not PDX, alas); they claim 20 cm/pixel 3D location accuracy. (A rough matching sketch is tacked on at the end of this message.)

3. There's enormous work ahead in working out a usable UI for sorting through what will be abundant visible data and media draped across the real world, viewable through our mobile viewfinders.

4. So far there is no standard markup for KML placemark docs. They are NOT, but should be, fully functional web objects. Yeah, I know you can insert HTML, but can you implement AJAX in a KML placemark? (See the sample placemark tacked on at the end of this message.)

5. We'd better move quick; the handheld AR meme is in the air. The Tonchidot demo shows that the mainstream TechCrunch VC crowd is now dialed in and very stoked about the possibilities of the viewfinder as the next browser.

- Mike

Anselm Hook wrote:
> I must say that although it is expected, and pointed out earlier in
> many of our respective social cartography groups, and clearly heads
> and shoulders above the other offerings in the space - it is so well
> done that it is almost hard to even be jealous:
>
> http://www.techcrunch.com/2008/09/17/tonchidot-madness-the-video/
>
> Reminds me quite a bit of the hands on siggraph demo's I used to try
> way back in the last century before it was quite clear that the
> gibsonian future was the baseline. The siggraph demos did very
> similar things - you would put on a bulky heads up display and all of
> a sudden your real world would be instrumented with digital media -
> pinned to a cumbersome qr-code instead of just brute force image
> recognition using SIFT or something as this one appears to do - but it
> was still pretty freakishly cool at the time....
>
> Here - in this demo - it is just so sweetly cute to see that these
> folks 'get it' and skipped ahead of the tedium of having to 'click' or
> engage in a complicated negotiation to get the instrumented reality
> display up....
>
> It would be so cool if we could inject our own perception into that
> reality - if we could have a kind of image dns... as mentioned previously.
>
> To go into rant mode for a second -> basically these people are
> recolonizing our fucking reality... They're the gatekeepers on the
> new way that people will SEE. That's a crazy power. I want me some
> of that. :-)
>
> - me
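P.S. re point 2: to make the image-DNS idea a little more concrete, here is a rough, untested sketch of the lookup I have in mind: grab a frame from the viewfinder, extract SIFT features, match them against descriptors served up by a purely hypothetical point_db whose stored features carry known 3D world coordinates, then solve for the camera pose. The OpenCV calls are real, but point_db.query_nearby() and the intrinsics matrix K are stand-ins of my own; Earthmine's actual interface will look nothing like this.

import cv2
import numpy as np

def locate_camera(photo_path, point_db, K):
    # photo_path: snapshot from the phone's viewfinder
    # point_db:   hypothetical 'image DNS' service; given a rough GPS fix it
    #             returns SIFT descriptors (float32 array) plus the 3D world
    #             position of each stored feature
    # K:          3x3 intrinsics matrix for the phone camera (assumed known)
    img = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)

    db_descriptors, db_xyz = point_db.query_nearby()  # assumed API

    # Match snapshot features against the database features.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(descriptors, db_descriptors, k=2)

    # Keep only confident matches (Lowe ratio test).
    good = [m for m, n in pairs if m.distance < 0.7 * n.distance]
    if len(good) < 6:
        return None, None  # too few correspondences to trust a pose

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in good])
    world_pts = np.float32([db_xyz[m.trainIdx] for m in good])

    # 2D-3D pose estimation; RANSAC shrugs off the remaining bad matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(world_pts, image_pts, K, None)
    return (rvec, tvec) if ok else (None, None)

If something like this ran on the handset (or server-side against the uploaded frame), the returned rotation and translation would pin the viewfinder into the pointcloud's coordinate frame, which is exactly what you need before draping media over it.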

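P.P.S. re point 4: here is the kind of thing I mean, a minimal placemark whose description balloon carries arbitrary HTML plus a script block that attempts an XMLHttpRequest. The KML wrapper is legal as far as I can tell, but whether the script ever executes is entirely up to the client; as far as I know most viewers strip or ignore scripts in balloons, which is exactly why placemarks aren't yet full web objects. The coordinates and the fetch URL are just placeholders.

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>AJAX-in-a-balloon test</name>
    <!-- HTML goes in fine via CDATA; script support is up to the viewer -->
    <description><![CDATA[
      <h3>Example geotag</h3>
      <div id="live-data">loading...</div>
      <script type="text/javascript">
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
          if (xhr.readyState == 4 && xhr.status == 200) {
            document.getElementById("live-data").innerHTML = xhr.responseText;
          }
        };
        xhr.open("GET", "http://example.com/geotags/nearby", true);
        xhr.send(null);
      </script>
    ]]></description>
    <Point>
      <coordinates>-122.675,45.512,0</coordinates>
    </Point>
  </Placemark>
</kml>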