Does scrapyd (from git) run with scrapy 1.x at all?
I don't see an open issue to track that.

On Monday, June 22, 2015 at 06:38:55 UTC-3, Vasco wrote:
>
> Hi Jose, 
>
> You are right, I can install 0.24 with pip. The download section on the 
> pypi page doesn't list any versions other than 1.0.0. I had assumed this 
> was an exhaustive list of the versions available in the repo, but it 
> apparently only shows the latest release. My bad.
>
> thanks,
> Vasco
>
>
>> Hi Vasco, are you sure older versions of scrapy have been removed from 
>> pypi? 
>> I have just installed scrapy 0.24 and scrapyd without problems on a new 
>> virtualenv.
>>
>> Regards,
>>
>> José
>>
>> On Sat, Jun 20, 2015 at 10:57 AM, Vasco <[email protected]> wrote:
>>
>>> Hi Julia, 
>>>
>>> Congrats to you and the other contributors for reaching this milestone! 
>>> The release notes show some very interesting changes! In particular, do 
>>> per-spider settings mean we can have different pipelines for different 
>>> spiders within the same project? For example, I currently have a lot of 
>>> different projects that differ only in one or two pipelines, but they share 
>>> a lot of pipelines too (which I now define in a separate package that I 
>>> make available to all projects). If I understand things correctly, with 1.0 
>>> I could put all spiders in one project and specify different pipeline paths 
>>> for each spider (something like the sketch below). If that understanding is 
>>> correct, is this something you would typically suggest users do in 1.0?
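>>>
>>> To make the question concrete, here is a minimal sketch of what I have in 
>>> mind (the spider name and pipeline paths are made up, and I am assuming 
>>> ITEM_PIPELINES can be overridden per spider through custom_settings):
>>>
>>> import scrapy
>>>
>>> class BlogSpider(scrapy.Spider):
>>>     name = 'blog'
>>>     # shared pipelines plus one spider-specific pipeline, enabled only
>>>     # for this spider rather than project-wide in settings.py
>>>     custom_settings = {
>>>         'ITEM_PIPELINES': {
>>>             'common.pipelines.CleanupPipeline': 300,
>>>             'myproject.pipelines.BlogPipeline': 400,
>>>         },
>>>     }
>>>
>>>     def parse(self, response):
>>>         pass  # parsing logic omitted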
>>>
>>> I noticed that scrapy 0.24 has been completely removed from pypi. For me 
>>> this created a small issue because 1.0 breaks the scrapyd package available 
>>> in pypi. What are your thoughts on keeping a 0.24 build available on pypi 
>>> so users can install that version if 1.0 breaks their code? 
>>>
>>> I also noticed that the scrapyd package on pypi hasn't been updated in 
>>> almost two years. Is using scrapyd to manage scrapy spider versions/runs 
>>> considered current best practice?
>>>
>>> Congrats again! Best,
>>> Vasco
>>>
>>>
>>>
>>> On Saturday, June 20, 2015 at 1:07:02 AM UTC+2, Julia Medina wrote:
>>>>
>>>> After nearly a month of testing candidates, we've finally reached the 
>>>> desired stability to roll out Scrapy 1.0. As announced in the first 
>>>> candidate for this release 
>>>> <https://groups.google.com/d/msg/scrapy-users/Ebf0SDHUAFo/x353GrVWdocJ>, 
>>>> 1.0 brings a lot of improvements, but more importantly, it represents a 
>>>> milestone that marks a new stage of maturity for Scrapy.
>>>>
>>>> You can check our Release Notes 
>>>> <http://scrapy.readthedocs.org/en/stable/news.html#release-notes> 
>>>> detailing some of the introduced changes, as well as the whole 
>>>> Changelog <http://scrapy.readthedocs.org/en/stable/news.html#changelog> in 
>>>> the project's docs <http://scrapy.readthedocs.org/>. This little 
>>>> snippet, brought up in the first announcement, will give you a quick 
>>>> glance at some of those changes:
>>>>
>>>> import scrapy
>>>>
>>>> class MySpider(scrapy.Spider):
>>>>     # …
>>>>     custom_settings = {
>>>>         'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
>>>>     }
>>>>
>>>>     def parse(self, response):
>>>>         for href in response.xpath('//h2/a/@href').extract():
>>>>             full_url = response.urljoin(href)
>>>>             yield scrapy.Request(full_url, callback=self.parse_post)
>>>>
>>>>     def parse_post(self, response):
>>>>         yield {
>>>>             'title': response.xpath('//h1').extract_first(),
>>>>             'body': response.xpath('//div[@class="content"]').extract_first(),
>>>>         }
>>>>
>>>> Upgrade to 1.0 by running:
>>>>
>>>>     $ pip install --upgrade Scrapy
>>>>
>>>> Since this is a stable release, pip will fetch this version anytime 
>>>> Scrapy is installed, unless explicitly told otherwise.
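>>>>
>>>> For example, if you need to stay on the 0.24 series for now (say, because 
>>>> of a dependency that isn't 1.0-ready yet), you can pin the version 
>>>> explicitly:
>>>>
>>>>     $ pip install "Scrapy<1.0"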
>>>>
>>>> As a final note, we want to thank all our developers and users again for 
>>>> contributing to shaping a release we're really proud of; Scrapy's 
>>>> community never ceases to amaze us :)
>>>>
>>>> Happy hacking!
>>>>
>>
>>
