Thanks very much for your clarifications, Bob, although I'm not sure I understand your correction.
Roger

> On Mar 14, 2022, at 8:48 AM, Bob Sneidar via use-livecode
> <use-livecode@lists.runrev.com> wrote:
>
> Actually I must correct myself. That will not work because the unique value
> column (typically an autoincrementing integer) will not be unique for each
> record. Instead, assuming your lines of text are in a column called
> "textdata"
>
> SELECT textdata UNIQUE FROM...
>
> Bob S
>
>
>> On Mar 14, 2022, at 08:29, Bob Sneidar via use-livecode
>> <use-livecode@lists.runrev.com> wrote:
>>
>> They depend on the fact that arrays cannot have duplicate keys. Dumping the
>> data into an SQL database and querying using the UNIQUE statement would do
>> it too.
>>
>> SELECT * UNIQUE from ...
>>
>> Bob S
>>
>>
>>> On Mar 13, 2022, at 13:16, Roger Guay via use-livecode
>>> <use-livecode@lists.runrev.com> wrote:
>>>
>>> Thank you, Jacqueline, Alex and Terry. Very interesting new (for me) methods
>>> that I would never have come up with on my own.
>>>
>>> Roger
>>>
>>>> On Mar 13, 2022, at 1:05 PM, J. Landman Gay via use-livecode
>>>> <use-livecode@lists.runrev.com> wrote:
>>>>
>>>> On 3/12/22 8:54 PM, Roger Guay via use-livecode wrote:
>>>>> I have a field with about a thousand lines with many duplicate lines, and
>>>>> I want to delete the duplicates. Seems like this should be simple but I
>>>>> am running around in circles. Can anyone help me with this?
>>>>
>>>> Making the list into an array is the easiest way but, as mentioned, it will
>>>> destroy the original order. If the order is important then you can restore
>>>> it with a custom sort function.
>>>> Here's my test handlers:
>>>>
>>>> on mouseUp
>>>>    put fld 1 into tData -- we keep this as a reference to the original order
>>>>    put tData into tTrimmedData -- this one will change
>>>>    split tTrimmedData by cr as set -- removes duplicates
>>>>    put keys(tTrimmedData) into tTrimmedData -- convert to a text list
>>>>    sort tTrimmedData numeric by origOrder(each, tData)
>>>>    put tTrimmedData into fld 1
>>>> end mouseUp
>>>>
>>>> function origOrder pWord, @pData
>>>>    set wholematches to true -- may not matter, depends on the data
>>>>    return lineoffset(pWord, pData)
>>>> end origOrder
>>>>
>>>> Field 1 contains lines in random order with duplicates.
>>>>
>>>> --
>>>> Jacqueline Landman Gay | jac...@hyperactivesw.com
>>>> HyperActive Software | http://www.hyperactivesw.com

_______________________________________________
use-livecode mailing list
use-livecode@lists.runrev.com
Please visit
this url to subscribe, unsubscribe and manage your subscription preferences: http://lists.runrev.com/mailman/listinfo/use-livecode
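[Editor's note: standard SQL spells the de-duplication keyword DISTINCT, not UNIQUE, and it goes before the column list rather than after it. A minimal sketch using Python's built-in sqlite3 module; the table name "lines" and the sample rows are illustrative, while "textdata" follows Bob's suggested column name:

```python
import sqlite3

# In-memory database with one text column, as Bob describes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lines (textdata TEXT)")
con.executemany(
    "INSERT INTO lines (textdata) VALUES (?)",
    [("apple",), ("pear",), ("apple",), ("plum",), ("pear",)],
)

# DISTINCT collapses duplicate rows in the result set.
rows = [r[0] for r in con.execute("SELECT DISTINCT textdata FROM lines")]
print(rows)  # three values, one per distinct line; order is not guaranteed
```

Note that, like the array trick, DISTINCT makes no promise about preserving the original row order, so the same re-sorting concern applies.]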
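[Editor's note: Jacqueline's technique, collapsing duplicates through a structure that forbids duplicate keys and then sorting the survivors back by their first position in the original text, carries over to other languages. A Python sketch of the same idea; the sample data is illustrative:

```python
def dedupe_keep_order(text: str) -> str:
    lines = text.split("\n")
    unique = set(lines)              # like `split ... as set`: duplicates and order both go
    # Restore original order by each line's first occurrence,
    # playing the role of the origOrder() sort function.
    ordered = sorted(unique, key=lines.index)
    return "\n".join(ordered)

sample = "pear\napple\npear\nplum\napple"
print(dedupe_keep_order(sample))  # pear, apple, plum -- first occurrences, original order
```

The `lines.index` lookup is O(n) per line, which is fine at the thousand-line scale in question; for much larger inputs, `dict.fromkeys(lines)` de-duplicates while preserving insertion order in a single pass.]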