I tried the -s flag, but I'm still seeing INFO log lines:
```
$> scrapy crawl detail -s LOG_LEVEL=WARNING
2014-09-05 16:40:46-0400 [scrapy] INFO: Scrapy 0.24.4 started (bot: detail)
2014-09-05 16:40:46-0400 [scrapy] INFO: Optional features available: ssl,
http11
2014-09-05 16:40:46-0400 [scrapy] INFO: Overridden settings:
{'NEWSPIDER_MODULE': 'crawler.spiders', 'LOG_LEVEL': 'WARNING',
'SPIDER_MODULES': ['crawler.spiders'], 'BOT_NAME': 'chrome_store_crawler',
'USER_AGENT': 'MagicMikeBot (+http://www.magicmike.io)', 'DOWNLOAD_DELAY':
0.3}
2014-09-05 16:40:47-0400 [scrapy] INFO: Enabled extensions: LogStats,
TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-09-05 16:40:48-0400 [scrapy] INFO: Enabled downloader middlewares:
HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware,
RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware,
HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware,
ChunkedTransferMiddleware, DownloaderStats
2014-09-05 16:40:48-0400 [scrapy] INFO: Enabled spider middlewares:
HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware,
UrlLengthMiddleware, DepthMiddleware
2014-09-05 16:40:48-0400 [scrapy] INFO: Enabled item pipelines:
CsvExporterPipeline
2014-09-05 16:40:48-0400 [detail] INFO: Spider opened
......
```
Are there any settings that could conflict with this? I'm running
Scrapy v0.24.4.
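In the meantime I may just filter the stream on the cron side before it
gets emailed. A minimal sketch in plain Python (the script name is mine,
and it assumes the default log format shown above, where the severity
name follows the [component] tag):

```
#!/usr/bin/env python
# filter_log.py -- hypothetical helper: pass through only the
# WARNING-and-up lines from stdin so cron emails just those.
import sys

# Severity names as they appear in Scrapy 0.24's default log lines,
# e.g. "2014-09-05 16:40:46-0400 [scrapy] INFO: ..."
KEEP = ('WARNING:', 'ERROR:', 'CRITICAL:')

for line in sys.stdin:
    if any(level in line for level in KEEP):
        sys.stdout.write(line)
```

The cron entry would then be something like
`scrapy crawl detail 2>&1 | python filter_log.py`. It's a blunt
instrument (continuation lines of a traceback wouldn't match), but it
would keep the emails small.
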
On Friday, September 5, 2014 3:12:04 PM UTC-4, Nicolás Alejandro Ramírez
Quiros wrote:
>
> Review your code again because the settings are working fine.
> https://gist.github.com/nramirezuy/e75d8c041b07a8edb44f
>
> On Friday, September 5, 2014 11:57:56 UTC-3, Hartley Brody wrote:
>>
>> I'm running Scrapy as a cron job, so all output sent to stdout gets
>> emailed to me at the end of the day; that email currently runs to
>> dozens of MB. Most of the log lines are INFO messages that I'm trying
>> to suppress, but I still want WARNING, ERROR, and CRITICAL messages
>> printed to stdout so that those get emailed to me.
>>
>> I know about the logging settings, and am currently using:
>>
>> ```
>> LOG_LEVEL = 'WARNING'
>> LOG_FILE = '/path/to/scrapy.log'
>> LOG_STDOUT = False
>> ```
>>
>> in my `settings.py`. These settings seem to be doing the right thing in
>> terms of the log *file* -- only logging the right messages -- but I'm still
>> seeing everything (including INFO) printed to stdout. I've also tried
>> running the scraper with the `scrapy crawl <spider> -L WARNING` flag, but
>> I'm still seeing INFO messages on stdout.
>>
>> Is there a setting I'm missing that controls what gets sent to
>> stdout? I don't want to pipe it to /dev/null, since I still want
>> WARNING and up on stdout, but I don't see a way to do this.
>>
>
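PS: Since 0.24's logging is still Twisted-based, another angle I've seen
suggested is attaching a second log observer that writes only WARNING
and up to stderr, and then letting cron discard stdout. Roughly like
this (untested against 0.24, so treat the API details as assumptions):

```
import sys

from scrapy import log
from scrapy.spider import Spider


class DetailSpider(Spider):
    name = 'detail'

    def __init__(self, *args, **kwargs):
        super(DetailSpider, self).__init__(*args, **kwargs)
        # Add an extra observer so WARNING and up also land on stderr,
        # independent of what LOG_FILE/LOG_LEVEL do with stdout.
        log.ScrapyFileLogObserver(sys.stderr, level=log.WARNING).start()
```

With that in place, the cron job could redirect stdout to /dev/null and
keep stderr, which cron still emails.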