Already figured it out:

import requests

def start_job(crawler_name):
    # schedule.json accepts a "setting" parameter that overrides any Scrapy
    # setting for this run; that's how JOBDIR reaches the spider.
    payload = {
        "project": settings['BOT_NAME'],
        "spider": crawler_name,
        "setting": "JOBDIR=%s" % settings['JOBSDIR'],
    }
    # url is the scrapyd base URL, e.g. "http://localhost:6800/"
    response = requests.post("%sschedule.json" % url, data=payload)
    return response.json()
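For reference, here is a sketch of what the call above actually sends; the project name, JOBSDIR value, spider name, and scrapyd URL below are made-up placeholders, not values from my project:

```python
# Placeholder stand-ins for the real settings object and scrapyd URL.
settings = {"BOT_NAME": "myproject", "JOBSDIR": "crawls"}
url = "http://localhost:6800/"

payload = {
    "project": settings["BOT_NAME"],
    "spider": "somespider",
    "setting": "JOBDIR=%s" % settings["JOBSDIR"],
}

# requests.post(url + "schedule.json", data=payload) posts this as a
# form-encoded body, i.e. the same request as:
#   curl http://localhost:6800/schedule.json \
#        -d project=myproject -d spider=somespider -d setting=JOBDIR=crawls
print(payload["setting"])  # JOBDIR=crawls
```

So the `-s JOBDIR=...` flag from the command line simply becomes a `setting=JOBDIR=...` form field in the scheduling request.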

On Thursday, September 4, 2014 at 3:34:44 PM UTC+8, tim feirg wrote:
>
> Hello scrapy users, I'd like to have persistence in my spider project 
> while using scrapyd, but I don't quite know how to do it. On the 
> command line, there is this command:
>
> scrapy crawl somespider -s JOBDIR=crawls/somespider-1
>
>
> but when it comes to scrapyd I can't figure out how to pass these 
> arguments. Please help?
>

-- 
You received this message because you are subscribed to the Google Groups 
"scrapy-users" group.