Hi,
You could try vtkIterativeClosestPointTransform
(http://www.vtk.org/doc/release/5.4/html/a00927.html).
Regards,
Jochen
From: Keshav Chintamani [mailto:[email protected]]
Sent: Tuesday, 14 December 2010 10:01
To: Wegner Ingmar
Cc: Neuhaus Jochen; [email protected]
Subject: Re: [mitk-users] Landmark Registration: scaling, shearing, rotation
and translation in the MITK IGT
Hi Ingmar,
Thanks for the code snippet. I have built a filter using this method and will
be testing it today. Unfortunately, there is no ICP support in the existing
VTK landmark transform, but we can live without it.
Regards,
-Keshav
Keshav Chintamani,
Systems Engineer
Space Applications Services NV
Leuvensesteenweg 325
1932 Zaventem
Belgium
Tel: +32 2 721 54 84 Fax: +32 2 721 54 44
URL: http://www.spaceapplications.com
________________________________
From: "Wegner Ingmar" <[email protected]>
To: "Keshav Chintamani" <[email protected]>, "Neuhaus
Jochen" <[email protected]>, [email protected]
Sent: Monday, 13 December, 2010 2:17:06 PM
Subject: Re: [mitk-users] Landmark Registration: scaling, shearing, rotation
and translation in the MITK IGT
Hi Keshav,
I have a code snippet that shows the usage of vtkLandmarkTransform. The
resulting matrix is pushed into the geometry of the data node, which
transforms the surface accordingly. We use Similarity mode; for your case I
changed the snippet to Affine.
Best Regards,
Ingmar
// needed for vtkLandmarkTransform
vtkPoints* vPointsTarget = vtkPoints::New();
vtkPoints* vPointsSource = vtkPoints::New();
mitk::PointSet::Pointer fixedPointSet(trackerFiducials);
mitk::PointSet::Pointer movingPointSet(imageFiducials);
mitk::Point3D point;
// copy mitk points to vtkPoints
for (int i = 0; i < fixedPointSet->GetSize(); ++i)
{
  point = fixedPointSet->GetPoint(i);
  vPointsTarget->InsertNextPoint(point[0], point[1], point[2]);
  point = movingPointSet->GetPoint(i);
  vPointsSource->InsertNextPoint(point[0], point[1], point[2]);
}
vtkLandmarkTransform* transform = vtkLandmarkTransform::New();
// set source and target points on the transform
transform->SetSourceLandmarks(vPointsSource);
transform->SetTargetLandmarks(vPointsTarget);
//transform->SetModeToSimilarity(); // rotation, translation, uniform scaling
transform->SetModeToAffine();       // affine: adds shearing and anisotropic scaling
vtkMatrix4x4* matrix = transform->GetMatrix(); // GetMatrix() updates the transform
// geometry of the surface to be transformed
mitk::Geometry3D::Pointer g3d = surface->GetData()->GetGeometry();
g3d->Compose(matrix); // transform surface
transform->Delete();  // release the transform (it owns matrix, so use it first)
vPointsTarget->Delete();
vPointsSource->Delete();
From: Keshav Chintamani [mailto:[email protected]]
Sent: Friday, 10 December 2010 15:04
To: Neuhaus Jochen; [email protected]
Subject: [mitk-users] Landmark Registration: scaling, shearing, rotation and
translation in the MITK IGT
Hi Jochen,
We are using the mitkNavigationDataLandmarkTransformFilter from the MITK IGT
module to generate a transform between a source image (virtual human) and
corresponding target points on the real model. The method we follow is:
1) Set source point set which follows this structure:
Number of points=6
p 0.0 0.0 0.0
p 0.0 -7.5 0.0
p 0.0 -15.0 0.0
p 9.0 -15.0 0.0
p 9.0 -7.5 0.0
p 9.0 0.0 0.0
This data is in centimeters.
2) Get target points: the positions of 6 NDI markers corresponding to the
above points. Currently the 6 NDI markers are positioned with the same
geometry as the source data, on a flat surface.
3) Set the target and source points on the
NavigationDataLandmarkTransformFilter instance.
4) Update the transform.
The code snippet describes our general approach:
m_LandmarkTransformFilter = mitk::NavigationDataLandmarkTransformFilter::New();
mitk::PointSet::Pointer m_SourcePointSet = mitk::PointSet::New();
mitk::PointSet::Pointer m_TargetPointSet = mitk::PointSet::New();
LoadSourcePoints(m_SourcePointSet);
while (1)
{
  // Get 6 ordered points from NDI
  LoadTargetPoints(m_TargetPointSet);
  // Get the first point in the source point set (this is 0,0,0)
  mitk::PointSet::PointType pointonVirtual;
  m_SourcePointSet->GetPointIfExists(0, &pointonVirtual);
  m_virtualModelInputND->SetPosition(pointonVirtual);
  m_virtualModelInputND->Modified();
  m_LandmarkTransformFilter->SetSourceLandmarks(m_SourcePointSet);
  m_LandmarkTransformFilter->SetTargetLandmarks(m_TargetPointSet);
  m_LandmarkTransformFilter->SetInput(0, m_virtualModelInputND);
  m_LandmarkTransformFilter->Update();
  mitk::NavigationData* transformedND = m_LandmarkTransformFilter->GetOutput(0);
}
We were hoping you could help us with some questions.
1) Does the NavigationDataLandmarkTransformFilter handle scaling? For example,
if the source data is in meters and the target data is in millimeters, or if
the source data is scaled by some factor.
Only when we matched the source and target measurement units (source/target
scale = 1) did we get a perfect transform. When the source data was in a
different unit, the output was offset in an arbitrary direction. I have
attached some pictures: the red ball should align with the circled purple
ball, which happens only when the source and target images are in the same
units/spacing.
2) Is there a different filter we can use to get a landmark transform between
landmarks on a generic 3D human model and landmarks on the target body? This
landmark transform will have to include scaling, shearing, rotation and
translation.
Any help with these questions is greatly appreciated. Looking forward to
hearing your comments.
Best Regards,
-Keshav
_______________________________________________
mitk-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/mitk-users