I have only ever come up with hacky ways to accomplish this - i.e.
using the xyz coordinates of the endpoints, multiplied by different
factors, to create a serial number for each line, sorting by that
serial, and removing lines whose serial matches the previous one in
the list. I am sure this is not the best way - different endpoints
can occasionally produce the same serial, so it is not perfectly
accurate - but it seems to do the trick. Check out the file "select
unique lines.ghx" for an implementation of the technique. If anyone
has a more computationally robust way of doing this, I am all ears - I
have long wondered if there was a better way. Can the "select
duplicate" function be accessed via scripting?

Andrew

On Apr 22, 9:14 pm, oompa_l <[email protected]> wrote:
> I have a definition file that produces duplicate lines, because of
> adjacencies between neighbouring polygons - are there any strategies
> that anyone can think of to remove the duplicates?
>
> thanks
