Thanks a lot! That's what I consider "elegant". :) I could not figure it 
out myself... THX.
 

On Tuesday, April 12, 2016 at 11:17:36 AM UTC+2, Jakob de Maeyer wrote:
>
> Hey Salvador,
>
> if you don't scrape too many items (too many as in "cannot fit into your 
> memory"), just save the items in an attribute of the pipeline and write 
> them out on the spider close signal:
>
> import json
>
> class MyPipeline(object):
>
>     def __init__(self):
>         self.items = []
>
>     def process_item(self, item, spider):
>         # Scrapy calls this once per scraped item; just collect them.
>         self.items.append(item)
>         return item
>
>     def close_spider(self, spider):
>         # Called once when the spider finishes: write everything in one
>         # go, e.g. as JSON with a wrapping "header" object (adjust the
>         # format and file name to whatever you need).
>         with open('items.json', 'w') as f:
>             json.dump({'header': {'spider': spider.name},
>                        'items': [dict(i) for i in self.items]}, f)
>
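> To actually use this, the pipeline still has to be enabled in the project 
> settings. A minimal sketch, assuming the class lives in 
> myproject/pipelines.py (the dotted path is just a placeholder for your own 
> project layout):
>
> # settings.py: the number is the pipeline order (0-1000, lower runs first)
> ITEM_PIPELINES = {
>     'myproject.pipelines.MyPipeline': 300,
> }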
>
> Cheers,
> -Jakob
>
>
>
> On Sunday, April 10, 2016 at 11:04:44 AM UTC+1, Salvad0r wrote:
>>
>> I would like to act on all items, in other words collect all items and 
>> write them once to a file, adding a header wrapping all the items. A good 
>> place for this seems to be the pipelines, but there items are only 
>> handled one by one.
>>
>>  
>>
>> I found this solution: "How to access all scraped items in Scrapy item 
>> pipeline?" 
>> https://stackoverflow.com/questions/12768247/how-to-access-all-scraped-items-in-scrapy-item-pipeline
>>
>>
>> But that way seems more complex than necessary. Is there a smarter, 
>> shorter, easier or more elegant way? THX! 
>>
>
