Define your own make_requests_from_url(); CrawlSpider inherits it from the base spider, so your override is picked up automatically.

from scrapy.http import Request

def make_requests_from_url(self, url):
    # Attach the language cookie to every request built from start_urls.
    return Request(url, cookies={'lang': 'en'}, dont_filter=True)
Or you could define start_requests() instead of start_urls; see the sketch below.
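For example, a minimal start_requests() (rough sketch; the URL is a placeholder):

def start_requests(self):
    # Attach the language cookie to the initial request;
    # 'http://www.example.com/' stands in for the real start URL.
    yield Request('http://www.example.com/', cookies={'lang': 'en'})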
The cookie middleware merges cookies to/from the same cookiejar for subsequent
requests, so you only need to set them once.
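Putting it together for the cookies in the question below (a rough sketch; the spider name and URLs are made up):

from scrapy.spider import BaseSpider
from scrapy.http import Request

class LangSpider(BaseSpider):
    name = 'lang_example'  # made-up name for illustration

    def start_requests(self):
        # The first request sets the site's language cookies; the
        # middleware stores them in its jar once the response arrives.
        yield Request('http://www.example.com/',
                      cookies={'code_pays': '2', 'code_region': '0'})

    def parse(self, response):
        # No cookies= argument needed here: the middleware merges the
        # stored cookies into every later request to the same site.
        yield Request('http://www.example.com/next-page')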
On Friday, 28 January 2011 16:46:14 UTC+2, Rosa Luna wrote:
>
> Hi,
>
> I'm trying to scrape some data from a site that uses a cookie for the
> language of the site.
>
> How do I pass this cookie value for the language to my spider:
>
> Cookie: code_pays=2; code_region=0;
>
> In the spider?
>
> I don't know where to set up the CookiesMiddleware shown here:
> http://doc.scrapy.org/topics/downloader-middleware.html
>
> Thanks for helping a new "lost" Scrapy user ;-)
>
> Scrapy is awesome!
>
>