Hi Michael,

The way I've implemented the DB is by creating a db_layer file in the same 
directory as settings.py.  
That db_layer file uses MySQLdb to manage the db connection, and contains a 
number of methods that take data as params and fire off the SQL to store 
it in the db.
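
Roughly, the db_layer looks something like this (a minimal sketch -- the 
connection details, table and column names here are just placeholders, not 
my actual schema):

    # db_layer.py -- lives alongside settings.py
    import MySQLdb

    _conn = None

    def get_connection():
        """Lazily open and reuse a single MySQL connection."""
        global _conn
        if _conn is None:
            _conn = MySQLdb.connect(host='localhost', user='scrapy',
                                    passwd='secret', db='products')
        return _conn

    def save_product(name, price, url):
        """Fire off the SQL to store one product row."""
        conn = get_connection()
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO products (name, price, url) VALUES (%s, %s, %s)",
            (name, price, url),
        )
        conn.commit()
        cur.close()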
While the spider is running, it keeps values of interest in the product 
item, and ultimately the pipeline is responsible for aggregating those 
values and calling the methods from the db_layer.
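
The pipeline side is along these lines (again a sketch; the item fields 
name/price/url are hypothetical):

    # pipelines.py
    import db_layer

    class ProductDbPipeline(object):

        def process_item(self, item, spider):
            # Aggregate/clean whatever the spider collected, then
            # persist it through the db_layer methods.
            db_layer.save_product(item['name'], item['price'], item['url'])
            return item

    # Enabled in settings.py with something like:
    # ITEM_PIPELINES = {'myproject.pipelines.ProductDbPipeline': 300}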

I always try my best not to meddle with the guts of Scrapy itself, but if 
you want to react to this data while the spider is running, perhaps a 
custom middleware is the answer? A rough sketch of that idea is below.
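
For example, a spider middleware can watch items as the spider yields 
them, before the pipelines ever see them (untested sketch; class and 
module names are made up):

    # middlewares.py
    from scrapy.item import Item

    class ProductWatchMiddleware(object):

        def process_spider_output(self, response, result, spider):
            for element in result:
                if isinstance(element, Item):
                    # React to the scraped data here: logging,
                    # alerts, pushing to a queue, etc.
                    spider.logger.info("Got an item from %s" % response.url)
                yield element

    # Enabled in settings.py with something like:
    # SPIDER_MIDDLEWARES = {
    #     'myproject.middlewares.ProductWatchMiddleware': 543,
    # }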

Anyway, hope that helps.
