Greetings all,

I wanted to see if someone could advise as to the most efficient use of LibLAS 
for the following:


I have a series of 200-500 MB .las files with classified points.


I have a C++ program that reads .txt files and doesn't handle them well above 
about 10 MB. I have written some Python to batch the C++ app over multiple 
tiles. Obviously, using LibLAS to load the .las files directly into the app 
would be best, but due to time/resource constraints that's not going to happen. 


I need to subset the large tiles spatially and by return number/classification. My 
idea at the moment is to write a script to batch las2las2 to produce a subset 
.las file for each combination of spatial extent and point class, then use the 
Python API to turn those into text files that are digestible by the app (rough 
sketch below). 
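Roughly what I have in mind for the batching step is below. The extent/class 
flag names are from memory and may differ between libLAS versions (I'd check 
las2las2 --help), and the tile extents and filenames are just placeholders:

import subprocess

# One las2las2 call per (tile extent, class) combination.
extents = {
    "tile_a": (655000, 4330000, 656000, 4331000),  # minx miny maxx maxy (placeholder)
    "tile_b": (656000, 4330000, 657000, 4331000),
}
classes = [2, 5]  # e.g. ground, high vegetation

for name, (minx, miny, maxx, maxy) in extents.items():
    for cls in classes:
        subprocess.check_call([
            "las2las2",
            "--input", "big_tile.las",                       # placeholder input
            "--output", "%s_class%d.las" % (name, cls),
            "--extent", "%d %d %d %d" % (minx, miny, maxx, maxy),
            "--keep-classes", str(cls),
        ])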


My question (finally) is whether this is the best way to approach the problem 
from an efficiency standpoint. I have used the Python API to read through the 
points in the original (large) .las files and spit out text files matching my 
criteria, but it's very brute-force and slow. 
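For reference, that brute-force pass is essentially the following (attribute 
names as I understand the liblas point API; the extent, class, and filenames 
are placeholders):

from liblas import file as lasfile

# Read every point, keep those matching the extent/class/return criteria,
# and write "x y z" lines to a text file the C++ app can read.
minx, miny, maxx, maxy = 655000, 4330000, 656000, 4331000  # placeholder extent
keep_class, keep_return = 2, 1

las = lasfile.File("big_tile.las", mode="r")   # placeholder input
out = open("subset.txt", "w")
for p in las:
    if (minx <= p.x <= maxx and miny <= p.y <= maxy
            and p.classification == keep_class
            and p.return_number == keep_return):
        out.write("%.2f %.2f %.2f\n" % (p.x, p.y, p.z))
out.close()
las.close()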


Thanks in advance,


Peter


-- 
Peter Tittmann
UC Davis
Geography Graduate Group
