Hi guys,

Quite a flood of messages, after I thought people weren't interested! I have done much of what you're talking about, and can offer some information and practical advice.

On 12/06/2007, at 9:58 PM, Josh Knauer wrote:
The State of Connecticut alone has terabytes of photos, taken in both directions on every state road each year for many years. This data is unambiguously in the public domain. We all should be working to get more of it on the web and make sure it stays there!

That could be a way of getting the boring stuff out of the way: photos taken from the road contain some useful information but take a lot of effort to collect. It's a source of data with limited appeal, though, and I stopped doing it after collecting 60,000 images.

I was surprised to see that Google thought that stuff was worth posting in its current state, and concluded that it was a lame-ass reply to Microsoft for Where 2.0. However, they can extract building facades with Stanford's processing, so they should be back in the race soon enough.

On 13/06/2007, at 2:45 AM, Rich Gibson wrote:
I think we need to do the following...
-capture images
-process images
-store images
-provide an api to access the images
-provide a viewer that knows how to access that api to display them.

That's pretty much it.

Here are the exif 'gps' tags:
http://www.sno.phy.queensu.ca/~phil/exiftool/TagNames/GPS.html

No need for that, since the processed images will be different from the source images. GPS is only useful for getting a ballpark location, as the positioning is approximate and drops out in urban locations. I use GPS to roughly shove stuff onto the map, then manually position key photos and string the rest out along lines or Bézier curves.
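
For anyone who wants to try the same trick, here is a rough Python sketch of the "string the rest out" step, using straight lines rather than Bézier curves to keep it short. The anchor indices and coordinates are made-up examples, not real data:

    # Rough sketch: spread unplaced photos evenly between hand-placed key photos.
    # key_photos is a time-ordered list of (sequence_index, (lat, lon)) anchors.

    def interpolate_positions(num_photos, key_photos):
        positions = [None] * num_photos
        for (i0, (lat0, lon0)), (i1, (lat1, lon1)) in zip(key_photos, key_photos[1:]):
            span = i1 - i0
            for i in range(i0, i1 + 1):
                t = (i - i0) / float(span)
                positions[i] = (lat0 + t * (lat1 - lat0),
                                lon0 + t * (lon1 - lon0))
        return positions

    # 11 photos; the first, sixth and last placed by hand on the map.
    anchors = [(0, (41.7600, -72.6800)), (5, (41.7610, -72.6760)), (10, (41.7630, -72.6720))]
    for pos in interpolate_positions(11, anchors):
        print(pos)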

(I assume we will also use the standard EXIF tags for things like lens
focal length, and when the shot was taken, etc)

Those will be fixed values for a whole run of photos. For example, with my 360 lens I set the exposure and shutter speed once and that's it for the entire run. What really matters is knowing the time of each photo, as that allows correlation with the GPS trackpoints... after that, the manual adjustments are going to be within 10 to 20 metres.
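
To show what I mean by the correlation step, here is a minimal Python sketch. It assumes you've already parsed the GPX log into a time-sorted list of (unix_time, lat, lon) fixes and converted the EXIF DateTimeOriginal to the same clock (camera clocks drift, so measure the offset first); all the numbers below are invented:

    import bisect

    # Minimal sketch: place a photo on the GPS track by its capture time.
    # track = [(unix_time, lat, lon), ...] sorted by time.

    def locate_photo(photo_time, track):
        times = [t for t, lat, lon in track]
        i = bisect.bisect_left(times, photo_time)
        if i == 0:
            return track[0][1:]          # photo taken before the track started
        if i == len(track):
            return track[-1][1:]         # photo taken after the track ended
        t0, lat0, lon0 = track[i - 1]
        t1, lat1, lon1 = track[i]
        f = (photo_time - t0) / float(t1 - t0)   # interpolate between the two fixes
        return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))

    track = [(1000, 41.7600, -72.6800), (1001, 41.7601, -72.6799), (1002, 41.7602, -72.6798)]
    print(locate_photo(1001.5, track))   # roughly (41.76015, -72.67985)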

On 13/06/2007, at 3:50 AM, SteveC wrote:
360 degree lens == kiss goodbye to your resolution.

But say hello to no stitching problems, no parallax problems, and no exposure difference problems. Also say hello to being able to make panoramas while moving, instead of using a tripod and pano-head mount for each one - which gets very painful when you're trying to make thousands.

Using a 12MP camera means that the final result from the 360 lens isn't half bad. Any blurring or loss of resolution is actually an advantage for privacy reasons, so in practice I reduce the resolution of the 360 shots even further.
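
The reduction itself is nothing clever; something along these lines does the job (PIL here purely as an example, and the target width and directory names are placeholders):

    import glob, os
    from PIL import Image

    # Sketch: batch-downsample panoramas so faces and number plates stay unreadable.
    TARGET_WIDTH = 2048

    if not os.path.isdir("panos_small"):
        os.mkdir("panos_small")

    for path in glob.glob("panos/*.jpg"):
        img = Image.open(path)
        w, h = img.size
        if w > TARGET_WIDTH:
            img = img.resize((TARGET_WIDTH, h * TARGET_WIDTH // w), Image.ANTIALIAS)
        img.save(os.path.join("panos_small", os.path.basename(path)), quality=85)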

Really, buy 5-6 cameras.

I really wouldn't recommend this. There is no way of mounting 5 to 6 cameras that avoids parallax problems, and no automatic (or manual) stitcher is capable of joining the resulting photos together. Even the Street View approach puts the cameras as close together as possible, and the results they're getting don't look any better than a single 360 lens.

Think of it this way: googlers might be zombie non-talkers under the biggest NDA in the universe, but they are pretty smart and put time and effort into figuring out the best way to do this. They chose 5-6 cameras or whatever it was.

Google isn't a person or a brain... there are people working for Google, and we're just as smart as those people. They're buying everything that they have (other than the original Map-Reduce), so it's quite obvious that all the brains are outside of Google despite their hiring practices.

On 13/06/2007, at 4:29 AM, Artem Pavlenko wrote:
Anyway, building panoramas is not really interesting. Think bigger and start building 3D.

Building in 3D is a dead end that Microsoft and Google are wasting their time on, and it will eventually crap out. The only reason they're doing it that way is that it's a quick way to get large amounts of coverage, but the problems with using 3D or VRML for virtual reality far outweigh the up-front savings.

Taking a photograph is something that anyone can do, and arranging those photographs is the best way to build a virtual reality. The downside is the huge amount of bandwidth needed, and the upside is that this kind of effort is only possible through community and open methods.

On 13/06/2007, at 5:14 AM, Tom Longson (nym) wrote:
As soon as you start thinking 3D, either you're trying to do Photosynth or you need a bigger budget to start doing LIDAR, as far as I know.

That's right... and LIDAR has terrible resolution (128x128 or 256x256)... so you get these blobby points with higher-resolution imagery imposed over the top, plus all the problems of expensive gear, the average person not having one, and the enormous processing cost of 3D point samples.

Photosynth is interesting, but much more complicated than they needed to make it. Instead of processing a random sample of pictures and trying to auto-correlate them, there's nothing wrong with simply putting the right information into a few key photos. Once some photos have the right information, extra photos can be auto-correlated to those definitive data points.
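
To be concrete about the auto-correlation, it's just ordinary feature matching against a key photo; a rough sketch with OpenCV's SIFT (my choice of library for the example, not a statement about what Photosynth actually does) would be:

    import cv2   # OpenCV, used here purely for illustration

    # Sketch: tie an unplaced photo to an already-georeferenced key photo by
    # matching SIFT feature points. Turning the matches into a position
    # (homography, relative bearing, ...) is a separate step.

    def match_to_key(key_path, new_path, min_matches=20):
        sift = cv2.SIFT_create()
        img_key = cv2.imread(key_path, cv2.IMREAD_GRAYSCALE)
        img_new = cv2.imread(new_path, cv2.IMREAD_GRAYSCALE)
        kp1, des1 = sift.detectAndCompute(img_key, None)
        kp2, des2 = sift.detectAndCompute(img_new, None)
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
        return good if len(good) >= min_matches else []

    good = match_to_key("key_pano.jpg", "unplaced_pano.jpg")
    print("%d usable correspondences" % len(good))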

On 13/06/2007, at 6:24 AM, SteveC wrote:
I spent a ton of time in 3D when the new hotness was VRML. I wrote things to take a set of photos and magically turn them in to models and so on. At its base it seemed like you spent 1000% more development effort to make a 3d model of your building/city for a 1% improvement in usability. You got some factor of whizzyness, it's a great educational tool, but no real ROI in its own space.

That's right... so don't bother.

On 13/06/2007, at 7:50 AM, Tom Longson (nym) wrote:
Anyways, this is digressing a lot, anyone have experience setting up multiple cameras to capture panoramas? Anyone have experience stitching these images?

Don't bother. Use two types of panoramas... a 360 lens for quick snaps where you don't really care and just want to cover large areas quickly, and a normal camera with a pano-head for high-quality panoramas where it matters.

On 13/06/2007, at 8:34 AM, Andrew Larcombe wrote:
- there needs to be some compensation for potential differences in exposure between shots on adjacent cameras (a KISS approach might have 3 parallel cameras for each angle, one with auto exposure, one higher, one lower, with post-processing to determine which of the shots to use from each camera based on exif tags, histogram analysis etc...)

You can't bracket photos while moving, so that's only an option for a normal camera on a pano-head mount. The only way to capture while moving is to use a 360 lens, or to go through the painful hassle of multiple cameras (and the resulting parallax problems, etc).

On 13/06/2007, at 8:43 AM, Rich Gibson wrote:
I'm going out on a limb here to say that I think doing open street
views is (primarily) a project in hacking community rather than code.

Yes, that's right. We'd need a huge number of people to go out and take a lot of panoramas of their areas, geolocate them, and make them available under some sort of Creative Commons license.

http://panotools.sourceforge.net/

This is only useful for panoramas taken from a single position with a pano-head mount. It won't help with capturing large areas while moving.

-capturing location - 'obviously' GPS, but I suspect we'd like AGPS
resolution, and probably a compass independent of the GPS track (ie.

Use GPS for the ballpark, and manually align key panoramas against orthorectified aerial imagery.
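
The only maths involved in the ballpark overlay is projecting each GPS fix into pixel coordinates on the aerial imagery. Assuming the imagery is in the usual spherical-Mercator tile scheme (yours may well not be), that's just:

    import math

    # Sketch: project a GPS fix into spherical-Mercator pixel coordinates so it
    # can be drawn on top of tiled aerial imagery for manual alignment.

    def latlon_to_pixel(lat, lon, zoom, tile_size=256):
        n = tile_size * (2 ** zoom)   # world width in pixels at this zoom level
        x = (lon + 180.0) / 360.0 * n
        lat_rad = math.radians(lat)
        y = (1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n
        return x, y

    print(latlon_to_pixel(41.76, -72.68, 17))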

-viewer - panorama viewers exist already, but we would like some cool
map based interface next to the panoramas.

I can provide Flash 9 viewer code, as mentioned previously.

On 13/06/2007, at 10:10 AM, Tom Longson (nym) wrote:
How long does it usually take to process a pano?

If you're making a stitched pano from multiple shots, then it takes up to about 10 minutes per pano, as you have to manually verify and correct the places where the SIFT algorithm mis-aligns feature points. If you're using a single-shot 360 lens, then you can drag'n'drop the whole batch (10,000 of them) and leave it running overnight (about 10 seconds per pano).
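
For anyone wondering what the per-pano processing actually is: with a one-shot optic it's basically a polar unwrap of the donut-shaped image into a flat strip. A rough numpy sketch follows; the centre and radii are per-rig calibration values (placeholders here), and a real run would want bilinear rather than nearest-pixel sampling:

    import numpy as np
    from PIL import Image

    # Sketch: unwrap the 'donut' a one-shot 360 optic produces into a flat strip.
    CX, CY = 1500, 1000          # donut centre in the source image (placeholder)
    R_INNER, R_OUTER = 250, 950  # usable ring of the donut (placeholder)
    OUT_W, OUT_H = 3600, 700     # output strip size

    def unwrap(src_img):
        src = np.asarray(src_img)
        ys, xs = np.mgrid[0:OUT_H, 0:OUT_W]
        r = R_OUTER - (R_OUTER - R_INNER) * ys / float(OUT_H)   # top of strip = outer edge
        theta = 2.0 * np.pi * xs / OUT_W                         # one full turn across the strip
        sx = np.clip((CX + r * np.cos(theta)).astype(int), 0, src.shape[1] - 1)
        sy = np.clip((CY + r * np.sin(theta)).astype(int), 0, src.shape[0] - 1)
        return Image.fromarray(src[sy, sx])

    unwrap(Image.open("raw_donut.jpg")).save("strip.jpg")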

On 13/06/2007, at 9:15 AM, Mike Liebhold wrote:
Actually there -is- an important question of code. Does creating an "open" streetview imply creating new notation or markup? Or simply applying the new features in KML2.2?

Having seen how VRML died under the weight of standards-creating bureaucracy, how about we avoid any attempt to set standards until we have one project up and running and working? Then we can extract the features and schema worth preserving in a standard at some later point.

On 13/06/2007, at 10:00 AM, Neil Havermale wrote:
There are a large heap of patents in this area of technology; you had
better ruminate before you encourage others to invade that IP.

And that's the best news about all of this... none of this is protected by IP because it was all done back in the 70s. Welcome to a slice of computing history:

        http://www.honeylocust.com/vr/info/aspen.html
        http://en.wikipedia.org/wiki/Aspen_Movie_Map
        http://www.naimark.net/writing/spie97.html
        http://www.naimark.net/projects/aspen/aspen_v1.html

And I'll let Rich Gibson have the last word:

On 13/06/2007, at 11:48 AM, Rich Gibson wrote:
Let me put on my thoughtful hat...okay, done.

F*** 'their' IP.

We'll just think of new ways of doing the same things if there is any impediment.

Steve.
