The standard way to do that would be to implement a new Feed Storage and add
it to your project via the FEED_STORAGES setting
(http://doc.scrapy.org/en/latest/topics/feed-exports.html#std:setting-FEED_STORAGES).
You can read the code of the built-in feed storages here as inspiration:
https://github.com/scrapy/scrapy/blob/master/scrapy/contrib/feedexport.py
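
For example, a minimal sketch of a storage that POSTs the finished feed to
your web server could look something like this. Treat it as an illustration
only: the "httppost" scheme, the module path and the upload URL are made-up
placeholders, and it assumes a JSON feed. It builds on BlockingFeedStorage
from scrapy.contrib.feedexport, which the built-in S3 and FTP storages also
subclass.

# myproject/feedstorage.py  (hypothetical module)
import urllib2

from scrapy.contrib.feedexport import BlockingFeedStorage


class HttpPostFeedStorage(BlockingFeedStorage):
    """Upload the exported feed to a web server with a single HTTP POST."""

    def __init__(self, uri):
        # uri is the FEED_URI value, e.g. "httppost://www.example.com/upload-feed"
        self.url = uri.replace('httppost://', 'http://', 1)

    def _store_in_thread(self, file):
        # BlockingFeedStorage calls this in a thread once the crawl finishes,
        # passing the temporary file that holds the exported feed
        file.seek(0)
        request = urllib2.Request(self.url, file.read(),
                                  {'Content-Type': 'application/json'})
        urllib2.urlopen(request)
        file.close()

Then register the scheme and point the feed at it in settings.py:

FEED_STORAGES = {
    'httppost': 'myproject.feedstorage.HttpPostFeedStorage',
}
FEED_URI = 'httppost://www.example.com/upload-feed'
FEED_FORMAT = 'json'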


On Wed, Apr 23, 2014 at 9:18 AM, Shaun Marchant <[email protected]> wrote:

> Hi All,
>
> I have my project running as planned and can successfully output from the
> command line to a JSON file. Instead of trying to get the data into a
> database, I thought it would be better to use the Feed Export option within
> Scrapy to upload my JSON file to the web server (allowing the web server to
> parse it more quickly at run time rather than querying a database).
>
> Having read the docs (
> http://doc.scrapy.org/en/latest/topics/feed-exports.html), I'm not very
> clear on how I would go about implementing this, and wondered if anyone
> knew of any decent examples that would get me going.
>
>
> Cheers.
>
