yes - and collecting google street view etc.... so it's clearly pretty much a done deal... this business of augmented reality.
earlier this summer a few friends and i built an image recognition service at imagewiki.org under the auspices of our hacker group makerlab - we were playing with similar ideas using off-the-shelf parts... and we saw some implications for sure... ( although as usual damned if we could convince anybody else on the planet to support the ideas financially ).

i think the real issue now is who will own the metadata associated with images. creative commons licenses apply to the images themselves, and trademarks protect reproduction of images - companies vociferously protect trademarks - but it's also clear that there is going to be new legal terrain over what an image lookup resolves to, exactly the way there are fights over dns right now... when pointing your phone at mcdonalds always resolves to a website talking about industrial agribiz and pollution, that is not good for mcdonalds - and they'll clearly put a stop to it... pretty much instantly.

...so clearly if we the so-called people want to have a voice in all this we will have to establish an image commons; not a commons of licenses, but a commons of the actual 'search events' as they connect to image requests... it's pretty obvious... exactly the same way wikipedia is non-corporate [ i was about to say non-partisan, but of course it has its own politics - the main point however is that those politics are not revenue driven, and so are not perverted toward subverting human values in quite the same way money does ].

ideally even, to go further, image search events should be federated. right now google handles search requests in secret - i mean seo experts can game it, but nobody can officially be part of search. if google federated search so that domain experts could get in between requests and responses, it would create an explosion of search vendors - companies such as powerset could provide smarter search... the parallel here is that the guggenheim or the tate could provide very in-depth responses to certain visual search queries, much better than a generalist service... and/or of course an infinite number of small specialized vendors: if you asked about a certain piece of graffiti you could be connected to some pseudonymous representative of said piece - if you asked about a store, the store itself could be the respondent... there could be a subjective landscape of peer-scored returns, moderated by a trust network... etc... ( a rough sketch of what i mean is below ).

anyway, all pretty much totally obvious, pretty boring implications, as i am sure you all see... tedious actually to walk through it... but all told, in sum, it'll still be pretty cool to pop on those glasses and see that instrumented reality....
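to make that concrete, here's a toy sketch of the 'image commons' lookup idea. everything in it is hypothetical - the registry, the resolver names, the exact-hash fingerprint standing in for real SIFT-style matching. it's the shape of the thing, not an implementation of anything that exists:

# toy sketch of a federated image lookup / image commons - all names hypothetical
import hashlib
from dataclasses import dataclass, field

def fingerprint(image_bytes):
    # placeholder: a real system would extract local features (SIFT etc.) and
    # match approximately; an exact hash only resolves byte-identical images
    return hashlib.sha256(image_bytes).hexdigest()

@dataclass
class Answer:
    source: str        # who answered - a museum, the store itself, a tagger
    text: str          # what the image resolves to
    peer_score: float  # trust-network weighting, 0..1

@dataclass
class Commons:
    # fingerprint -> list of resolver callables; in reality this would be a
    # shared, non-corporate registry of search events, not an in-memory dict
    resolvers: dict = field(default_factory=dict)

    def register(self, fp, resolver):
        self.resolvers.setdefault(fp, []).append(resolver)

    def lookup(self, image_bytes):
        fp = fingerprint(image_bytes)
        answers = [resolve(fp) for resolve in self.resolvers.get(fp, [])]
        # peer-scored, trust-moderated ordering: best answers first
        return sorted(answers, key=lambda a: a.peer_score, reverse=True)

if __name__ == "__main__":
    commons = Commons()
    storefront = b"...pixels of a storefront..."
    fp = fingerprint(storefront)
    commons.register(fp, lambda _: Answer("the store itself", "hours, specials", 0.6))
    commons.register(fp, lambda _: Answer("neighborhood wiki", "history of the block", 0.9))
    for a in commons.lookup(storefront):
        print(a.peer_score, a.source, "->", a.text)

the interesting design question isn't the matching, it's who runs the registry and who gets to score whom - that's the commons part.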
- me

On Wed, Sep 17, 2008 at 8:46 PM, Kevin Elliott <[EMAIL PROTECTED]> wrote:
> I'm pretty sure this has been stated before, but Earthmine
> ( http://www.earthmine.com ) offers a platform and API capable of
> providing 3D datasets of the street (something that street view can't
> do). In discussions with them, they've said that technically they could
> map inside of buildings, private roads, and off-road regions (their
> capture platform is cheap!).
>
> This would be an ideal platform to make the iPhone app a reality.
>
> -Kevin
>
> On Sep 17, 2008, at 4:54 PM, Mike Liebhold wrote:
>
> > Thanks Anselm,
> >
> > This is an interesting mock-up, a little more refined than the
> > enkin.net hack. Here are a few quick thoughts:
> >
> > 1. Google has already released mobile street view ( see
> > http://googlemobile.blogspot.com/2008/09/street-view-and-walking-directions-come.html ).
> > AR geotags are sure to follow. They're already partially supported by
> > KML; see
> > http://code.google.com/apis/maps/documentation/services.html#StreetviewOverlays
> >
> > 2. Creating an image DNS is a great idea, and necessary for 3D
> > geopositioning by matching viewable points with a pointcloud database -
> > perhaps achievable by an OSM-style process. In the meantime,
> > experimenting with existing data seems like a path forward:
> > Earthmine.com has offered the hacker community experimental access to
> > their very detailed database of 3D pointclouds for cities in the
> > western US (not pdx, alas); they claim 20cm/pixel 3D location accuracy.
> >
> > 3. There's enormous work ahead working out a usable UI for sorting
> > through what will be abundant visible data and media draped across the
> > real world, viewable through our mobile viewfinders.
> >
> > 4. So far there is no standard markup for KML placemark docs - they
> > are NOT, but should be, fully functional web objects. Yah, I know you
> > can insert html, but can you implement AJAX in a KML placemark?
> >
> > 5. We better move quick - the handheld AR meme is in the air. The
> > Tonchidot demo demonstrates that the mainstream TechCrunch VC crowd is
> > now dialed in and very stoked by the possibilities of the viewfinder
> > as the next browser.
> >
> > - Mike
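( re mike's point 2 above - here's the crudest possible toy of 'matching viewable points with a pointcloud database'. everything in it is invented for illustration - the descriptor names, the coordinates, the lookup - and a real system would match SIFT-style descriptors approximately and solve a proper camera pose rather than average a few matched points: )

# toy sketch: rough geopositioning against a georeferenced pointcloud database
# all data and names below are made up for illustration

# hypothetical db: feature descriptor -> (lat, lon) of the surveyed point
POINTCLOUD_DB = {
    "corner-of-brick-facade": (45.5231, -122.6765),
    "neon-sign-edge":         (45.5233, -122.6767),
    "fire-hydrant-top":       (45.5230, -122.6768),
}

def estimate_position(observed_descriptors):
    # average the positions of matched points as a crude location estimate;
    # a real system would solve for full camera pose from many 2d-3d matches
    matches = [POINTCLOUD_DB[d] for d in observed_descriptors if d in POINTCLOUD_DB]
    if not matches:
        return None
    lat = sum(p[0] for p in matches) / len(matches)
    lon = sum(p[1] for p in matches) / len(matches)
    return lat, lon

if __name__ == "__main__":
    seen = ["neon-sign-edge", "corner-of-brick-facade", "unknown-blob"]
    print(estimate_position(seen))  # roughly (45.5232, -122.6766)

the hard parts are obviously the approximate matching and the pose solve - which is exactly why a shared pointcloud layer / image dns matters.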
> >
> > Anselm Hook wrote:
> >> I must say that although it is expected, and pointed out earlier in
> >> many of our respective social cartography groups, and clearly head
> >> and shoulders above the other offerings in the space - it is so well
> >> done that it is almost hard to even be jealous:
> >>
> >> http://www.techcrunch.com/2008/09/17/tonchidot-madness-the-video/
> >>
> >> Reminds me quite a bit of the hands-on siggraph demos I used to try
> >> way back in the last century, before it was quite clear that the
> >> gibsonian future was the baseline. The siggraph demos did very
> >> similar things - you would put on a bulky heads-up display and all of
> >> a sudden your real world would be instrumented with digital media -
> >> pinned to a cumbersome qr-code instead of just brute-force image
> >> recognition using SIFT or something, as this one appears to do - but
> >> it was still pretty freakishly cool at the time....
> >>
> >> Here - in this demo - it is just so sweetly cute to see that these
> >> folks 'get it' and skipped ahead of the tedium of having to 'click'
> >> or engage in a complicated negotiation to get the instrumented
> >> reality display up....
> >>
> >> It would be so cool if we could inject our own perception into that
> >> reality - if we could have a kind of image dns... as mentioned
> >> previously.
> >>
> >> To go into rant mode for a second -> basically these people are
> >> recolonizing our fucking reality... They're the gatekeepers on the
> >> new way that people will SEE. That's a crazy power. I want me some
> >> of that. :-)
> >>
> >> - me

--
anselm
415 215 4856
http://hook.org http://makerlab.com http://meedan.net
_______________________________________________
Geowanking mailing list
[email protected]
http://lists.burri.to/mailman/listinfo/geowanking
