If the data is in a database and the rows are truly duplicates (i.e. keys
always map to the same values), you could probably remove the duplicates in
the query itself, before loading the data into Python.

The DISTINCT keyword will do this for you, i.e.

SELECT DISTINCT * FROM x;

Otherwise, a GROUP BY over every column achieves the same result.
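As a rough sketch of both approaches, using sqlite3 as a stand-in for pymssql
(the table and column names here are made up for illustration):

```python
import sqlite3

# In-memory table with exact duplicate rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (key TEXT, value INTEGER)")
conn.executemany("INSERT INTO x VALUES (?, ?)",
                 [("a", 1), ("a", 1), ("b", 2), ("b", 2), ("c", 3)])

# DISTINCT drops the duplicate rows in the database itself.
rows = conn.execute("SELECT DISTINCT * FROM x").fetchall()
print(len(rows))  # 3 unique rows instead of 5

# Grouping by every column gives the same rows back.
grouped = conn.execute(
    "SELECT key, value FROM x GROUP BY key, value").fetchall()
print(sorted(rows) == sorted(grouped))  # True
```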


On 21 February 2014 10:55, David Crisp <dcr...@netspace.net.au> wrote:

> Thanks for the thoughts!
>
> In this case I am reading in data from a SQL database query (pymssql) and
> feeding it into an Excel spreadsheet (xlwt) line by line, which means I have
> to assemble the dict line by line.
>
> And yes, the name I chose in the email to represent what I was doing was
> generic. The one I am actually using is representative of the data it's
> holding! :)  As opposed to a generic word which could actually be a
> keyword!
>
> And yes, there's quite a lot of data. Many of these queries return 3000
> rows of data!
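If the duplicates can't be filtered in SQL, the same line-by-line loop can
deduplicate as it goes. A minimal sketch (the row data is made up, and the
commented-out write call stands in for the real pymssql cursor and xlwt sheet):

```python
# Hypothetical (key, value) tuples, as a pymssql cursor might yield them.
rows = [("a", 1), ("a", 1), ("b", 2), ("c", 3), ("b", 2)]

seen = {}          # the dict, assembled line by line as in the original loop
out_row = 0        # would be the xlwt row counter
for key, value in rows:
    if key in seen:        # already written this key: skip the duplicate
        continue
    seen[key] = value
    # sheet.write(out_row, 0, key); sheet.write(out_row, 1, value)
    out_row += 1

print(seen)     # {'a': 1, 'b': 2, 'c': 3}
print(out_row)  # 3 rows written instead of 5
```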
>
> Regards,
> David
>
>
> On Fri, 21 Feb 2014, William ML Leslie wrote:
>
>> On 21/02/2014 9:40 am, "Anthony Briggs" <anthony.bri...@gmail.com> wrote:
>>>
>>> You can also use the dict() function or dictionary comprehensions to
>>> create your dictionary:
>>>
>>>   item = dict( (key, value) for key, value in list )
>>>
>> Otherwise written:
>>
>>   item = dict(list)
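For what it's worth, the two spellings really are equivalent: dict() accepts
any iterable of (key, value) pairs directly, so the generator expression is
redundant. (The name `pairs` below is illustrative; note that `list` in the
quoted snippet shadows the built-in, so a different name is safer.)

```python
pairs = [("a", 1), ("b", 2), ("a", 3)]  # illustrative data

# dict() consumes an iterable of pairs as-is; the wrapper adds nothing.
assert dict((k, v) for k, v in pairs) == dict(pairs)

# When keys repeat, later pairs win, which quietly drops duplicates:
print(dict(pairs))  # {'a': 3, 'b': 2}
```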
>
> _______________________________________________
> melbourne-pug mailing list
> melbourne-pug@python.org
> https://mail.python.org/mailman/listinfo/melbourne-pug
>