How do you know the process is not starting?

On Wednesday, March 9, 2016 at 7:59:47 AM UTC-7, 林子言 wrote:
>
> I want to call the spider from Django, but it turns out it does not run in 
> the main thread.
>
> I use it like this:
>
> from scrapy.crawler import CrawlerProcess
> from scrapy.utils.project import get_project_settings
>
> def func():
>     process = CrawlerProcess(get_project_settings())
>     # 'followall' is the name of one of the spiders of the project.
>     process.crawl('followall', domain='scrapinghub.com')
>     process.start()  # the script will block here until the crawling is finished
>
> The process will not start. I searched about it, and Twisted says that 
> passing installSignalHandlers=0 to the reactor will solve it.
>
> But how can I add this to the process? Is there any solution? Many thanks.
>
>
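For reference, one way to apply that Twisted setting is to switch from 
CrawlerProcess to Scrapy's CrawlerRunner and drive the reactor yourself, 
passing installSignalHandlers=0 to reactor.run() so no signal handlers are 
installed outside the main thread. A minimal sketch, reusing the 'followall' 
spider and project settings from the quoted code (not a confirmed fix for 
this particular setup):

    from twisted.internet import reactor
    from scrapy.crawler import CrawlerRunner
    from scrapy.utils.project import get_project_settings

    def run_crawl():
        # CrawlerRunner, unlike CrawlerProcess, does not install signal
        # handlers or a shutdown hook itself, so it can be driven from a
        # non-main thread (e.g. inside a Django view).
        runner = CrawlerRunner(get_project_settings())
        d = runner.crawl('followall', domain='scrapinghub.com')
        d.addBoth(lambda _: reactor.stop())  # stop the reactor when the crawl ends
        # installSignalHandlers=0 tells Twisted not to install signal handlers,
        # which only work in the main thread.
        reactor.run(installSignalHandlers=0)

Note that the Twisted reactor can only be started once per process, so this 
approach suits a one-shot crawl rather than repeated calls per request.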
