Hello, I'm thinking about moving to Python/Scrapy from Perl.

So I'm wondering what's under the hood:
Is it a single process, or separate forks/threads for the scheduler/downloader/engine/etc.?
Can I have a blocking spider/parser and a separate non-blocking downloader?
What's the memory usage for a single clean Scrapy spider?
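
To make the last two questions concrete, here is roughly what I mean by a "clean" spider (a minimal sketch; the class name and URL are just placeholders):

import scrapy


class CleanSpider(scrapy.Spider):
    # Hypothetical minimal spider: this is the baseline I'd want to
    # measure memory usage for.
    name = "clean"
    start_urls = ["http://example.com"]

    def parse(self, response):
        # If callbacks like this run in a single event-loop process,
        # any blocking work here would presumably stall everything,
        # even if the downloads themselves are asynchronous -- that's
        # what I'm asking about.
        yield {"title": response.css("title::text").get()}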


