Cheers all,

I need the ability to log to a database.  I will be running multiple spider 
instances on different servers over extended periods of time, and relying on 
separate log files for each spider is not realistic.  So far, the current 
suggestions seem to point to writing a custom log observer for the Twisted 
framework 
(https://groups.google.com/forum/#!searchin/scrapy-users/logging/scrapy-users/cdXzKFNCB7g/Jy510pzq9OIJ).
  
Unfortunately, logging in Twisted 
<http://twistedmatrix.com/documents/current/core/howto/logging.html> is 
geared toward files, and there is no documented mechanism for creating a 
database logger (nor am I equipped with the resources for brain surgery on 
Twisted).  What I am considering is extending ScrapyFileLogObserver 
<https://github.com/scrapy/scrapy/blob/master/scrapy/log.py#L37> so that 
whenever a spider logs to a file it similarly logs to a database.  
While this would necessitate always logging to a file and might impact disk 
I/O, I can't see another approach that can be done in a reasonable amount 
of time.
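For what it's worth, since a Twisted log observer is just a callable that 
receives an event dict, one option that avoids modifying 
ScrapyFileLogObserver at all is to register a *second* observer that writes 
to a database, alongside the normal file observer.  Here is a minimal 
sketch using SQLite from the standard library; the table name and columns 
are purely illustrative, and in real use you would register the instance 
with twisted.python.log.addObserver:

```python
import sqlite3
import time


class DatabaseLogObserver:
    """Sketch of a Twisted-style log observer that mirrors log events
    into a SQLite table.  A Twisted observer is simply a callable taking
    an event dict, so an instance of this class can be registered with
    twisted.python.log.addObserver() next to the existing file observer.
    The 'log' table and its columns are illustrative, not a fixed schema.
    """

    def __init__(self, db_path):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS log "
            "(ts REAL, system TEXT, is_error INTEGER, message TEXT)"
        )

    def __call__(self, event):
        # Twisted event dicts carry the message as a tuple of parts.
        text = " ".join(str(part) for part in event.get("message", ()))
        self.conn.execute(
            "INSERT INTO log VALUES (?, ?, ?, ?)",
            (
                event.get("time", time.time()),
                str(event.get("system", "-")),
                1 if event.get("isError") else 0,
                text,
            ),
        )
        self.conn.commit()
```

Then, somewhere during startup, something like 
``twisted.python.log.addObserver(DatabaseLogObserver("spider_logs.db"))`` 
would leave the file logging untouched while every event is also persisted 
to the database.  (For heavy logging you would probably want batched 
inserts rather than a commit per event, but that is an optimization.)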

If there is a cleaner way of implementing logging to a database I would 
definitely appreciate the advice.

Thanks in advance for any feedback.

-- 
You received this message because you are subscribed to the Google Groups 
"scrapy-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/scrapy-users.
For more options, visit https://groups.google.com/d/optout.
