Absolutely.  Love to set up a VM for my server.  I just had a "duh" moment
when I did it.  No harm, no foul.
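For anyone who finds this thread later: whichever Camel component you end up with, the underlying idea is just copying the response stream to disk in small chunks so the full payload never sits in memory. A plain-Java sketch of that idea (stand-alone, no Camel; the class name, buffer size, and the in-memory stand-in stream are my own, not from the thread):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamToFile {

    // Copy an InputStream to a file in fixed-size chunks, so at most
    // one buffer's worth of the payload is in memory at any time.
    static long copy(InputStream in, Path target) throws IOException {
        long total = 0;
        byte[] buffer = new byte[8192];  // 8 KB chunk size (arbitrary choice)
        try (OutputStream out = Files.newOutputStream(target)) {
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
                total += n;
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // In practice 'in' would come from an HTTP connection
        // (e.g. url.openConnection().getInputStream()); a small
        // ByteArrayInputStream stands in so the example is self-contained.
        InputStream in = new ByteArrayInputStream("hello, large file".getBytes());
        Path target = Path.of("target-download.bin");
        long written = copy(in, target);
        System.out.println(written + " bytes written");
    }
}
```

As I understand Quinn's route below, `disableStreamCache=true` on the http4 endpoint is what keeps Camel from buffering the whole response, so the file endpoint can do essentially this copy.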

On Fri, Sep 2, 2016 at 10:00 AM, Quinn Stevenson <
qu...@pronoia-solutions.com> wrote:

> Sorry - I wanted to put in an example that worked, and download something
> big to make sure it was streaming.  Hopefully you needed a new CentOS image
> :-)
>
>
>
> > On Sep 2, 2016, at 8:58 AM, Brad Johnson <brad.john...@mediadriver.com>
> wrote:
> >
> > Neat.  I accidentally clicked on the link and Chrome downloaded the ISO
> > for me.  Are you propagating Trojan horses here?  Heh.
> >
> > On Fri, Sep 2, 2016 at 9:56 AM, Quinn Stevenson <
> > qu...@pronoia-solutions.com> wrote:
> >
> >> I think something like this might work for you
> >>
> >> <route>
> >>    <from uri="direct://trigger-download" />
> >>    <log message="Download Triggered" />
> >>    <to uri="http4://buildlogs.centos.org/rolling/7/isos/x86_64/CentOS-7-x86_64-DVD.iso?disableStreamCache=true" />
> >>    <log message="Writing File" />
> >>    <to uri="file://target/download" />
> >> </route>
> >>
> >>> On Sep 2, 2016, at 8:51 AM, Brad Johnson <brad.john...@mediadriver.com> wrote:
> >>>
> >>> Hmmm. That could be a problem if it doesn't actually chunk.  I thought
> >>> it read the entire chunk into memory before letting you read it.  So if
> >>> the chunk size is 10 MB it would download that whole 10 MB and then let
> >>> you read, then fetch the next 10 MB and let you read.  But that may not
> >>> be the case.  I haven't worked with it much so can't say.  I do know
> >>> it's exceptionally fast.
> >>>
> >>> The chunking almost seems pointless if it doesn't work that way.
> >>>
> >>> On Fri, Sep 2, 2016 at 9:27 AM, S Ahmed <sahmed1...@gmail.com> wrote:
> >>>
> >>>> Brad, that page says this: "Notice Netty4 HTTP reads the entire stream
> >>>> into memory using io.netty.handler.codec.http.HttpObjectAggregator to
> >>>> build the entire full http message. But the resulting message is still
> >>>> a stream based message which is readable once."
> >>>>
> >>>> On Fri, Sep 2, 2016 at 10:26 AM, S Ahmed <sahmed1...@gmail.com> wrote:
> >>>>
> >>>>> Thanks.
> >>>>>
> >>>>> Just to be clear, I don't run the server I'm downloading the file
> >>>>> from.  I want to download files that are very large, but stream them
> >>>>> so they are not held in memory before being written to disk.  I want
> >>>>> to stream the download straight to a file and not hold the entire
> >>>>> file in memory.
> >>>>>
> >>>>> Is Netty for the server portion or the client?
> >>>>>
> >>>>> On Fri, Sep 2, 2016 at 12:34 AM, Brad Johnson <brad.john...@mediadriver.com> wrote:
> >>>>>
> >>>>>> http://camel.apache.org/netty4-http.html
> >>>>>>
> >>>>>> Look at Netty and see if that works.  It can control chunk size, but
> >>>>>> it is also streaming in any case, so you may not even need to be
> >>>>>> concerned about it.
> >>>>>>
> >>>>>> Brad
> >>>>>>
> >>>>>> On Thu, Sep 1, 2016 at 8:53 PM, S Ahmed <sahmed1...@gmail.com> wrote:
> >>>>>>
> >>>>>>> Does it have to be FTP?  I just need HTTP.
> >>>>>>>
> >>>>>>> On Thu, Sep 1, 2016 at 5:31 PM, Quinn Stevenson <qu...@pronoia-solutions.com> wrote:
> >>>>>>>
> >>>>>>>> Check out the section on the ftp component page about “Using a
> >>>>>>>> Local Work Directory”
> >>>>>>>> (http://people.apache.org/~dkulp/camel/ftp2.html) - I think that
> >>>>>>>> may be what you’re after.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> On Sep 1, 2016, at 9:30 AM, S Ahmed <sahmed1...@gmail.com> wrote:
> >>>>>>>>>
> >>>>>>>>> Hello,
> >>>>>>>>>
> >>>>>>>>> Is there an example of how to download a large file in chunks and
> >>>>>>>>> save the file as it downloads?
> >>>>>>>>>
> >>>>>>>>> The goal is not to hold the entire file in memory and then save
> >>>>>>>>> it to disk.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Thanks.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>
> >>
>
>
