Re: [Bf-committers] Blender and laser scanner - amendment

2013-04-30 Thread Bastien Montagne
Hi,

Indeed, adding a color CD layer to vertices is not a big deal. For 
rendering, I think the simplest solution would be to tweak particles a 
bit (we already have billboards, we would just have to make them able to 
take their color from actual vertices and not only from loops). A system 
with as many particles as vertices, emitted from those vertices all at 
once and with lifetime = scene duration, is a suitable way to render a 
point cloud. With a decent modern computer, Blender can render several 
millions of such billboards with no problem. :)
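
(For illustration, a rough bpy sketch of that setup -- untested, and assuming
the 2.6x/2.7x particle API where render_type='BILLBOARD' still exists;
property names are from that era:)

    import bpy

    obj = bpy.context.object
    scene = bpy.context.scene

    # One particle system on the point-cloud object.
    obj.modifiers.new("PointCloudBillboards", type='PARTICLE_SYSTEM')
    settings = obj.particle_systems[-1].settings

    settings.count = len(obj.data.vertices)    # as many particles as vertices
    settings.emit_from = 'VERT'                # emitted from the vertices
    settings.use_emit_random = False
    settings.frame_start = scene.frame_start   # all emitted at once...
    settings.frame_end = scene.frame_start
    settings.lifetime = scene.frame_end - scene.frame_start + 1  # ...for the whole scene
    settings.physics_type = 'NO'               # particles stay on their vertices
    settings.render_type = 'BILLBOARD'         # render each point as a billboard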

And we could also have some tool to convert point clouds to actual 
meshes + textures, which would give us objects that render much faster...

So, especially if Jace already did part of the work (btw, Jace, any 
patch, branch or so available?), getting a basic, usable and renderable 
solution should not be *that* hard!

Best regards,
Bastien

On 30/04/2013 01:30, Isaac Lenton wrote:
 Hi

 I looked into this for a project I was working on for visualising point
 clouds generated from computational physics simulations.  I recall that
 I managed to get the points rendering in Blender Render with a few quick
 hacks and duplicating the colour data onto individual vertices (maybe
 not the best idea).  I ended up switching to VTK and ParaView for most
 of my scientific data visualisation, but I would be interested if
 someone were to implement a basic pipeline as described by Jace Priester
 but with a step 6:

  6) Support for community add-ons (through Python) for performing
  basic data/point processing/rendering operations.

 ParaView provides a large selection of operators, some of which would be
 trivial to implement in Blender if a pipeline were set up.

 This idea might also be of interest to Shabbir Marzban, whose recent
 GSoC email to the mailing list included a proposal to integrate Bundler:

  1) Automated rigid structure reconstruction from set of
  un-ordered images: A very common structure from motion (SFM)
  tool, Bundler[1] can be used to recover 3D point cloud from such
  images.

 Thanks,
 Isaac Lenton

 PS: The mailing list seems to be blocking some of my emails, not sure if
 this will get through.

 On Mon, 2013-04-29 at 15:02 -0700, Jace Priester wrote:
 I wrote in point cloud support for Blender over a year ago, but I have been
 told repeatedly by the developers that there is no interest in the
 community and therefore they aren't interested in implementing it.
 Hopefully another voice will help with that.

 The code isn't extensive as is. It requires a custom data layer to support
 true per-vertex color (as Blender currently implements vertex color as part
 of face data - not vertex data). There's a bit of extra code in the mesh
 drawing functions to access the colors in the custom data layer, and then
 it draws points to the screen using the set colors. As a bonus, it also
 supports per-vertex sizing which allows for point distance attenuation
 (which is necessary for a good appearance). My current code does *not*
 integrate with any rendering engine - it only works in the editor window -
 and I gave up trying to get help taking it any further, as I don't use it
 that often anyway.
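
(For comparison, a rough, untested sketch of the same colored-point drawing
done from Python with today's gpu module, assuming Blender 3.4+ where the
builtin shader is named 'FLAT_COLOR'; the patch itself does this in the C
drawing code with GLSL:)

    import bpy
    import gpu
    from gpu_extras.batch import batch_for_shader

    # Dummy data; in practice these come from the imported point cloud.
    coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    colors = [(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)]

    shader = gpu.shader.from_builtin('FLAT_COLOR')
    batch = batch_for_shader(shader, 'POINTS', {"pos": coords, "color": colors})

    def draw_points():
        gpu.state.point_size_set(4.0)  # constant size; no distance attenuation here
        batch.draw(shader)

    handle = bpy.types.SpaceView3D.draw_handler_add(
        draw_points, (), 'WINDOW', 'POST_VIEW')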

 If a developer knowledgeable enough about BI/cycles wants to hop in and
 help with finishing this, I'd be happy to collaborate. I see no need for
 Blender to be a point cloud editing program. MeshLab and others are better
 suited to that, as that is specifically what they are built for. But the
 ability to import and display point cloud data, in my opinion, is vital to
 Blender being usable in more fields than it is now. So that said, I think
 short term and easily attainable goals should be to

 1) import point cloud data in a couple of common formats (I have done this
 already; see the sketch after this list)
 2) display points with colors and distance attenuation (also done)
 3) export point cloud data (also done)
 4) some user interface controls for turning colors on/off, adjusting
 displayed vertex size and attenuation (have not done any of this)
 5) support for rendering, at least in BI
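
(As a rough sketch of what 1) can look like from Python -- untested, reading a
plain ASCII "x y z" file into a mesh of loose vertices, and assuming a 2.8+
bpy API for linking the object; the actual importer discussed here is C code:)

    import bpy

    def import_xyz(filepath, name="PointCloud"):
        # Read one "x y z" triple per line; real scanner formats vary a lot.
        points = []
        with open(filepath) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 3:
                    points.append(tuple(float(v) for v in parts[:3]))

        # A mesh with vertices only (no edges, no faces) is enough for a cloud.
        mesh = bpy.data.meshes.new(name)
        mesh.from_pydata(points, [], [])
        obj = bpy.data.objects.new(name, mesh)
        bpy.context.collection.objects.link(obj)
        return obj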



 On Mon, Apr 29, 2013 at 2:19 PM, Antoine Cottin acot...@ultiscan.com wrote:

 Hi y'all,

 Sorry for the repost; this might be obvious or not, but after re-reading my
 previous post, I feel the need to mention that our goal is not to produce
 a tool dedicated to our sole use. Our goal is to develop a tool for us and
 the Blender community that will be released under the GPL.

 Cheers.
 Antoine

 ###

 My name is Antoine Cottin, I'm a research scientist specialized in active
 remote sensing applied to earth sciences. For the last 8 years I've been
 working for/with a laser scanner manufacturer, developing processing tool
 chains for laser scanner data. I have extensive knowledge of the various
 laser scanner systems (hardware and software) and their associated data
 formats.

 I'm the founder of Ultiscan, a France-based company specialized in laser
 scanning services.

 

Re: [Bf-committers] Blender and laser scanner - amendment

2013-04-30 Thread Jace Priester
A custom data layer was recommended by ideasman and it serves the purpose
very well. I added a method for Python to interact with the CDL and flash
new data to it all in one shot instead of updating it one index at a time,
which means color import has essentially no noticeable impact on import
time. By using GLSL to draw the points in the editor, I also don't notice
any slowdown when drawing the editor viewport.
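
(A rough sketch of the "one shot" idea in today's Python API -- untested, and
using the modern point-domain color attributes rather than the patch's own
custom data layer: foreach_set hands the whole flat buffer over in one call
instead of assigning per index.)

    import bpy

    mesh = bpy.context.object.data

    # Per-vertex (point-domain) color attribute, 3.2+ API.
    attr = mesh.color_attributes.get("Col") or \
           mesh.color_attributes.new("Col", 'FLOAT_COLOR', 'POINT')

    # Placeholder colors, one RGBA per vertex.
    colors = [(1.0, 1.0, 1.0, 1.0)] * len(mesh.vertices)

    # Slow path: one index at a time.
    #   for i, col in enumerate(colors):
    #       attr.data[i].color = col

    # Fast path: flatten and write everything in one call.
    flat = [c for col in colors for c in col]
    attr.data.foreach_set("color", flat)
    mesh.update()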

You guys are underestimating the difficulty of converting point clouds to
textured meshes. This is an area which is still in research and development
in many other programs and research projects. There are entire software
libraries and programs devoted to that task and frankly all of them fail
pretty horribly given anything except laboratory-grade clean point cloud
data. Under normal circumstances with a normal amount of noise and holes in
the data, point clouds do not mesh well. I would be all for adding it to
Blender if there were mature code to plug in, but there is not, and it's a
bad idea to merge in a solution that is going to produce poor results most
of the time.
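
(For reference, this is roughly what those dedicated tools do; an untested
sketch with Open3D's Poisson reconstruction, an external library named here
only as an example, with placeholder file names:)

    import open3d as o3d

    # Load a scan and estimate normals, which Poisson reconstruction requires.
    pcd = o3d.io.read_point_cloud("scan.ply")
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

    # Poisson surface reconstruction; quality degrades quickly with noise and
    # holes, which is exactly the failure mode described above.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)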

Additionally, structure from motion is also far beyond the scope of
Blender. SFM point clouds work much better than point cloud to mesh, but
again there are entire software libraries and programs devoted to the task.
Blender has absolutely no mechanisms as-is for handling any part of an SFM
pipeline.

As far as:
6) Support for community add-ons (through Python) for performing
basic data/point processing/rendering operations.

...well, that's what bpy is there for. Vertex data is easily accessible
as-is.
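
(For example, an untested snippet that pulls every vertex coordinate out of a
mesh in one call, processes it, and writes it back -- the kind of bulk access
an add-on pipeline would build on:)

    import bpy
    import numpy as np

    mesh = bpy.context.object.data

    # Pull all vertex coordinates into a flat NumPy buffer in one call.
    co = np.empty(len(mesh.vertices) * 3, dtype=np.float32)
    mesh.vertices.foreach_get("co", co)
    co = co.reshape(-1, 3)

    # Example operation: recenter the cloud on its centroid.
    co -= co.mean(axis=0)

    # Write the modified coordinates back in one call.
    mesh.vertices.foreach_set("co", co.ravel())
    mesh.update()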

I hate to be the one who shoots down all the ideas, but what you're talking
about amounts to massive changes in the direction of the program. That isn't
going to happen without backing from the core developers, which I assure you
isn't there. Restricting this to import-export-render of point colors is a
very attainable goal.


Re: [Bf-committers] Blender and laser scanner - amendment

2013-04-30 Thread Antoine Cottin
I agree that Blender is not a point cloud editor, but adding point cloud 
visualization would be a nice addition, and it should stick to import-export-render.
Concerning SFM, I agree that there are dedicated software packages out there that do 
the job more or less nicely; however, SFM and camera tracking with 3D 
reconstruction use the same approach: photogrammetry.
---
Ultiscan SAS
6 rue de Molsheim
67000 Strasbourg
France
www.ultiscan.com






Re: [Bf-committers] Blender and laser scanner - amendment

2013-04-30 Thread Jace Priester
Sort of. Camera tracking is based on following points through a video. As
it is in Blender, that means manually marking points, automatic tracking,
and manual adjustment to clean it up. SFM from photos is based on
determining arbitrary and automatic point correspondence rather than 2D
point tracking. I suppose that means that to implement SFM, Blender already
has at least some of the needed code, but it's still missing point
correspondence entirely. I'd skip all the slow methods and look at
SiftGPU/SURF for quick point correspondence determination. As a bonus this
would probably be a great help to the camera tracking portion as well.
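
(A concrete, untested sketch of that correspondence step using OpenCV's CPU
SIFT rather than SiftGPU/SURF themselves; the image file names are placeholders:)

    import cv2

    img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute descriptors in both views.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Nearest-neighbour matching with Lowe's ratio test to drop ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # These (kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) pairs are the 2D
    # correspondences an SfM solver would then triangulate into a 3D point cloud.
    pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]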

My last set of point cloud code was for some early 2.6 version, and custom
data layers have been added since then, which breaks my code. I'll put
together a patch for the current version and get back to you.





--
Jace Priester
Threespace Imaging
jacepries...@threespaceimaging.com
559-284-0904
--


Re: [Bf-committers] Blender and laser scanner - amendment

2013-04-29 Thread Isaac Lenton
Hi

I looked into this for a project I was working on for visualising point
clouds generated from computational physics simulations.  I recall that
I managed to get the points rendering in Blender Render with a few quick
hacks and duplicating the colour data onto individual vertices (maybe
not the best idea).  I ended up switching to VTK and ParaView for most
of my scientific data visualisation, but I would be interested if
someone were to implement a basic pipeline as described by Jace Priester
but with a step 6:

6) Support for community add-ons (through Python) for performing
basic data/point processing/rendering operations.

ParaView provides a large selection of operators, some of which would be
trivial to implement in Blender if a pipeline were set up.

This idea might also be of interest to Shabbir Marzban, whose recent
GSoC email to the mailing list included a proposal to integrate Bundler:

1) Automated rigid structure reconstruction from set of
un-ordered images: A very common structure from motion (SFM)
tool, Bundler[1] can be used to recover 3D point cloud from such
images.

Thanks,
Isaac Lenton

PS: The mailing list seems to be blocking some of my emails, not sure if
this will get through.

On Mon, 2013-04-29 at 15:02 -0700, Jace Priester wrote:
 I wrote in point cloud support for Blender over a year ago, but I have been
 told repeatedly by the developers that there is no interest in the
 community and therefore they aren't interested in implementing it.
 Hopefully another voice will help with that.
 
 The code isn't extensive as is. It requires a custom data layer to support
 true per-vertex color (as Blender currently implements vertex color as part
 of face data - not vertex data). There's a bit of extra code in the mesh
 drawing functions to access the colors in the custom data layer, and then
 it draws points to the screen using the set colors. As a bonus, it also
 supports per-vertex sizing which allows for point distance attenuation
 (which is necessary for a good appearance). My current code does *not*
 integrate with any rendering engine - it only works in the editor window -
 and I gave up trying to get help taking it any further, as I don't use it
 that often anyway.
 
 If a developer knowledgeable enough about BI/cycles wants to hop in and
 help with finishing this, I'd be happy to collaborate. I see no need for
 Blender to be a point cloud editing program. MeshLab and others are better
 suited to that, as that is specifically what they are built for. But the
 ability to import and display point cloud data, in my opinion, is vital to
 Blender being usable in more fields than it is now. So that said, I think
 short term and easily attainable goals should be to
 
 1) import point cloud data in a couple common formats (I have done this
 already)
 2) display points with colors and distance attenuation (also done)
 3) export point cloud data (also done)
 4) some user interface controls for turning colors on/off, adjusting
 displayed vertex size and attenuation (have not done any of this)
 5) support for rendering, at least in BI
 
 
 
 On Mon, Apr 29, 2013 at 2:19 PM, Antoine Cottin acot...@ultiscan.com wrote:
 
  Hi y'all,
 
  Sorry for the repost; this might be obvious or not, but after re-reading my
  previous post, I feel the need to mention that our goal is not to produce
  a tool dedicated to our sole use. Our goal is to develop a tool for us and
  the Blender community that will be released under the GPL.
 
  Cheers.
  Antoine
 
  ###
 
  My name is Antoine Cottin, I'm a research scientist specialized in active
  remote sensing applied to earth sciences. For the last 8 years I've been
  working for/with a laser scanner manufacturer, developing processing tool
  chains for laser scanner data. I have extensive knowledge of the various
  laser scanner systems (hardware and software) and their associated data
  formats.
 
  I'm the founder of Ultiscan, a France-based company specialized in laser
  scanning services.
 
  Our main objective/activity at Ultiscan, besides laser scanning assets, is
  data fusion between laser scanner data (point clouds) and 3D assets for 3D
  rendering and animation. Here is an example of one of our latest projects.
  The purpose of this project was to merge a 3D asset (the front gate of the
  heritage building) with a point cloud of the whole building, to show the
  client how the restoration of the gate of the building would look (some
  work on texturing is still required):
 
  https://www.dropbox.com/s/u6bj80fi8ajqq8r/00_Gruss_Fusion_100p_Black.mp4
 
  The gate was meshed from the laser scanner data, then
  retopologized (from ~2500k faces to 600 faces), textured and rendered with
  Blender. The point cloud animation was done with a commercial software package.
  Then the two pieces were merged together. To do that, we used Blender for
  the camera tracking, the render and the compositing of the final