I haven't, I thought allow=() would do the same, but I'm going to try it straight 
away and let you know, thanks!
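
For reference, here's roughly the Rule I'm going to try. It's just a sketch with 
placeholder names and domains, and I'm assuming the callback shouldn't be called 
parse, since CrawlSpider uses that method internally:

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class ExampleSpider(CrawlSpider):
        name = 'example'                       # placeholder name
        allowed_domains = ['example.com']      # placeholder domain
        start_urls = ['http://example.com/']

        rules = (
            # allow every link, parse each page, and keep following links
            Rule(LinkExtractor(allow=('.*',)), callback='parse_item', follow=True),
        )

        # callback deliberately not named 'parse': CrawlSpider uses parse itself
        def parse_item(self, response):
            self.logger.info('Visited %s', response.url)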

On Tuesday, 18 August 2015 17:34:57 UTC+1, Travis Leleu wrote:
>
> Did you try setting allow=('.*',) in your Rule definition?
>
> On Tue, Aug 18, 2015 at 8:32 AM, Isaac Perez <[email protected]> wrote:
>
>> Hi,
>>
>> I'm having some problems with the CrawlSpiders I'm creating: none of them 
>> follows links more than one level deep when the rule has a callback defined. 
>> So far I've sort of managed to get it working by specifying the links for 
>> the categories in start_urls.
>> But this spider:
>> http://pastebin.com/rRLAP1X1
>>
>> is not even following the links from the start pages.
>> It only crawls the start URLs and that's it.
>>
>> I'm not sure what I'm doing wrong, as I understand that if you have 
>> follow=True in the rule, it will crawl, scrape and follow.
>> But this doesn't seem to be happening for any of my spiders.
>> For the pasted spider, what is missing that keeps it from even following the 
>> links from the start_urls?
>>
>> Any ideas?
>>
>> Thanks,
>> Isaac
>>
>
>
