Unable to connect from camel-ftp to apache mina-sshd

2016-09-02 Thread Goyal, Arpit
Hi Colleagues,

I am trying a very simple scenario where I host Apache mina-sshd as an SFTP 
server in my unit test. Does anyone have an idea why the connection to the SFTP 
server always fails?

Camel ftp - 2.16.3
Apache mina sshd - 1.2.0

Regards,
Arpit.


My Camel Route in test case
-
from("direct:xxx").process(new MyProcessor()).to("sftp://localhost:9696?username=...&password=&[options]")


ERROR:
---
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot 
connect to sftp://dummy@localhost:9696
at 
org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:146)
at 
org.apache.camel.component.file.remote.RemoteFileProducer.connectIfNecessary(RemoteFileProducer.java:209)
at 
org.apache.camel.component.file.remote.RemoteFileProducer.recoverableConnectIfNecessary(RemoteFileProducer.java:201)
at 
org.apache.camel.component.file.remote.RemoteFileProducer.preWriteCheck(RemoteFileProducer.java:133)
at 
org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:113)
Caused by: com.jcraft.jsch.JSchException: failed to send channel request
at com.jcraft.jsch.Request.write(Request.java:65)
at com.jcraft.jsch.RequestSftp.request(RequestSftp.java:47)
at com.jcraft.jsch.ChannelSftp.start(ChannelSftp.java:237)
at com.jcraft.jsch.Channel.connect(Channel.java:152)
at 
org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:130)
... 74 more

Test case parent class - all children start the Camel route given above
--
import java.io.File;
import java.io.IOException;
import java.util.Arrays;

import org.apache.sshd.common.NamedFactory;
import org.apache.sshd.common.file.virtualfs.NativeFileSystemFactory;
import org.apache.sshd.server.Command;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.auth.password.PasswordAuthenticator;
import org.apache.sshd.server.auth.password.PasswordChangeRequiredException;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.server.scp.ScpCommandFactory;
import org.apache.sshd.server.session.ServerSession;
import org.apache.sshd.server.subsystem.sftp.SftpSubsystem;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;

public abstract class AbstractSftpServerTest {

  private static SshServer sftpServer;
  private static final String TEMP_FOLDER = System.getProperty("java.io.tmpdir");
  private static File tempFolder;
  private static File tempFile;

  protected static final int PORT = 9696;

  @BeforeClass
  public static void beforeClass() throws IOException {
    sftpServer = SshServer.setUpDefaultServer();
    sftpServer.setHost("localhost");
    tempFolder = new File(TEMP_FOLDER);
    tempFile = File.createTempFile("server", ".key", tempFolder);
    sftpServer.setPort(PORT);
    sftpServer.setFileSystemFactory(new NativeFileSystemFactory());
    sftpServer.setCommandFactory(new ScpCommandFactory());
    sftpServer.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(tempFile));

    sftpServer.setPasswordAuthenticator(new PasswordAuthenticator() {
      @Override
      public boolean authenticate(String username, String password,
          ServerSession session) throws PasswordChangeRequiredException {
        return true;
      }
    });
    sftpServer.start();
  }

  @AfterClass
  public void afterClass() throws IOException {
    sftpServer.stop();
    tempFile.delete();
  }
}
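For what it's worth, the JSch error in the stack trace ("failed to send channel request" while opening the sftp channel) is consistent with the server never registering an SFTP subsystem: the setup above configures SCP via ScpCommandFactory but never calls setSubsystemFactories, even though SftpSubsystem and NamedFactory are imported. A hedged sketch of the missing registration (API names as in sshd-core 1.2.0; verify against your version):

```java
import java.util.Collections;

import org.apache.sshd.common.NamedFactory;
import org.apache.sshd.server.Command;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.subsystem.sftp.SftpSubsystemFactory;

public class SftpSubsystemSetup {

    // Registers the SFTP subsystem on an embedded mina-sshd server.
    // Without a registered "sftp" subsystem the SSH connection itself
    // succeeds, but the client's channel request for sftp is refused,
    // which JSch surfaces as "failed to send channel request".
    public static void enableSftp(SshServer server) {
        server.setSubsystemFactories(
            Collections.<NamedFactory<Command>>singletonList(new SftpSubsystemFactory()));
    }
}
```

Calling this in beforeClass(), before sftpServer.start(), should let the Camel route's sftp channel open.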


Re: Netty Server vs Jetty Server

2016-09-02 Thread Ranx
I think maybe a better explanation of what I'd like to accomplish is in
order. The first part is about CXF itself and the second is about Netty. The
following is an example of a service I've set up that uses a single
interface called PaymentServicesAPI, which is nothing more than an interface
that extends other interfaces like PaymentAuthorization, PaymentSale,
PaymentRefund, etc. So I only have to set this up once, and then it routes to
whatever bundle is listening on the route that matches the operationName.
This works fine, but has the obvious problem that if I want to move the
bundle that implements the service associated with one of the operations,
the aggregate interface still exposes that interface.

To build bundles as portable microservices this is a problem.

Each of the interfaces is decorated with SOAP and REST annotations. What
I'd like to do is be able to set up the CXF REST and SOAP servers with
providers, security, etc. in a single bundle, and then in my individual
bundles reference it in the same way that I see in the Netty example:

http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions

The CXF example: I don't want to add the cxf:rsServer and SOAP server to
every bundle just to expose the common interface.

It seems likely there's a way to share this information in much the same way
the bootstrap options are shared in Netty, but I haven't seen an example of it.
In the example below, PaymentServicesAPI is the one that extends all the
others. What I'd like to do is set up the cxf:rsServer in a single bundle and
set the service class in the consuming bundles. More like this:

In that case the cxf:rsServer wouldn't actually bind the service class
until the individual bundle requested it. I was able to roll my own version
by hand-coding an OSGi service interface that gets exported from the server
bundle and lets individual bundles send registration notices, at which point
it adds (or removes) the endpoint. But I'm wondering if there's just
something I'm missing.

[The blueprint XML example was stripped by the mail archive; the surviving
fragments show a camelContext in the http://camel.apache.org/schema/blueprint
namespace that takes ${body[0]} and routes to direct-vm:${header.operationName}.]
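For reference, a hedged reconstruction of the kind of dispatch route the surviving fragments (`${body[0]}` and `direct-vm:${header.operationName}`) suggest; the endpoint id and element placement are assumptions, not the original XML:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <!-- assumed cxfrs endpoint id; the original was stripped -->
      <from uri="cxfrs:bean:rsServer"/>
      <setBody>
        <simple>${body[0]}</simple>
      </setBody>
      <!-- dispatch to whichever bundle listens on the operation name -->
      <recipientList>
        <simple>direct-vm:${header.operationName}</simple>
      </recipientList>
    </route>
  </camelContext>
</blueprint>
```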
--
View this message in context: 
http://camel.465427.n5.nabble.com/Netty-Server-vs-Jetty-Server-tp5787145p5787152.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Re: downloading large files in chunks

2016-09-02 Thread Quinn Stevenson
The way I know it’s streaming is by running the route.

You’ll see the log entries (“Download Triggered” and “Writing File”) fairly 
close together.  Then if you watch the filesystem, you’ll see the file size on 
disk growing.  Also, I’m using default JVM parameters, so the heap isn’t big 
enough for the entire file to fit into memory and I’m not getting an OOM 
Exception.  I only downloaded about 500 MB, but that should’ve been enough to 
blow the JVM with my settings.

If you want to see it behave without streaming, change 
“disableStreamCache=true” to “disableStreamCache=false” (or just remove it from 
the URI - false is the default).
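As a sketch, the kind of route being described might look like the following; the download URL and file names are placeholders, not taken from the original post, and disableStreamCache behavior should be verified against your Camel version:

```java
import org.apache.camel.builder.RouteBuilder;

// Hedged sketch of a streaming download route.
public class LargeDownloadRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer://download?repeatCount=1")
            .log("Download Triggered")
            // With disableStreamCache=true Camel hands the raw HTTP response
            // stream to the route instead of caching it first, so the file
            // endpoint writes bytes to disk as they arrive rather than
            // buffering the whole body in memory.
            .to("http4://example.com/big.iso?disableStreamCache=true")
            .log("Writing File")
            .to("file://target/downloads?fileName=big.iso");
    }
}
```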

I’d have to think about how to write an integration test for this.

> On Sep 2, 2016, at 9:26 AM, S Ahmed  wrote:
> 
> Also, is there a way for me to test if the endpoint supports streaming?
> I'm on OSX so any open source tools to test this?
> 
> On Fri, Sep 2, 2016 at 11:11 AM, S Ahmed  wrote:
> 
>> I'm just the consumer (downloading), the file can be anywhere like s3 or
>> centos.org!
>> 
>> 
>> 
>> On Fri, Sep 2, 2016 at 11:09 AM, Brad Johnson <
>> brad.john...@mediadriver.com> wrote:
>> 
>>> By the way S. Ahmed, do you have control of both ends of this I mean
>>> client/server or are you just on the client/consumer side?
>>> 
>>> On Fri, Sep 2, 2016 at 10:01 AM, Brad Johnson <
>>> brad.john...@mediadriver.com>
>>> wrote:
>>> 
 Absolutely.  Love to set up a VM for my server.  I just had a "duh"
>>> moment
 when I did it.  No harm, no foul.
 
 On Fri, Sep 2, 2016 at 10:00 AM, Quinn Stevenson <
 qu...@pronoia-solutions.com> wrote:
 
> Sorry - I wanted to put in an example that worked, and download
> something big to make sure it was streaming.  Hopefully you needed a
>>> new
> CentOS image :-)
> 
> 
> 
>> On Sep 2, 2016, at 8:58 AM, Brad Johnson <
>>> brad.john...@mediadriver.com>
> wrote:
>> 
>> Neat.  I accidentally clicked on the link and Chrome downloaded the
>>> ISO
> for
>> me.  Are you propagating Trojan horses here?  Heh.
>> 
>> On Fri, Sep 2, 2016 at 9:56 AM, Quinn Stevenson <
> qu...@pronoia-solutions.com
>>> wrote:
>> 
>>> I think something like this might work for you
>>> 
>>> 
>>>   
>>>   
>>>   
>>>   
>>>   
>>> 
>>> 
 On Sep 2, 2016, at 8:51 AM, Brad Johnson <
> brad.john...@mediadriver.com>
>>> wrote:
 
 Hmmm. That could be a problem if it doesn't actually chunk.  I
> thought it
 read the entire chunk into memory before letting you read it.  So
>>> if
> the
 chunk size is 10mb it would download that whole 10mb and then let
>>> you
>>> read,
 then fetch the next 10mb and let you read.  But that may not be the
>>> case. I
 haven't worked with it much so can't say.  I do know it's
> exceptionally
 fast.
 
 The chunking almost seems pointless if it doesn't work that way.
 
 On Fri, Sep 2, 2016 at 9:27 AM, S Ahmed 
>>> wrote:
 
> Brad, that page says this: "Notice Netty4 HTTP reads the entire
> stream
>>> into
> memory using io.netty.handler.codec.http.HttpObjectAggregator to
> build
>>> the
> entire full http message. But the resulting message is still a
>>> stream
>>> based
> message which is readable once."
> 
> On Fri, Sep 2, 2016 at 10:26 AM, S Ahmed 
> wrote:
> 
>> Thanks.
>> 
>> Just to be clear, I don't run the server where I am downloading
>>> the
>>> file.
>> I want to download files that are very large, but stream them so
> they
>>> are
>> not held in memory and then written to disk.  I want to stream
>>> the
> download
>> straight to a file and not hold the entire file in memory.
>> 
>> Is Netty for the server portion or the client?
>> 
>> On Fri, Sep 2, 2016 at 12:34 AM, Brad Johnson <
>> brad.john...@mediadriver.com> wrote:
>> 
>>> http://camel.apache.org/netty4-http.html
>>> 
>>> Look at netty and see if that works.  It can control chunk size
> but it
> is
>>> also streaming in any case so you may not even need to be
>>> concerned
> about
>>> it.
>>> 
>>> Brad
>>> 
>>> On Thu, Sep 1, 2016 at 8:53 PM, S Ahmed 
> wrote:
>>> 
 Does it have to be ftp, I just need http?
 
 On Thu, Sep 1, 2016 at 5:31 PM, Quinn Stevenson <
 qu...@pronoia-solutions.com
> wrote:
 
> Check out the section on the ftp component page about “Using a
> Local
>>> Work
> Directory” (http://people.apache.org/~dkulp/camel/ftp2.html <
> http://people.apache.org/~dkulp/camel/ftp2.html>) - I think
>>> that

Re: downloading large files in chunks

2016-09-02 Thread Brad Johnson
https://netty.io/4.0/api/io/netty/handler/codec/http/HttpChunkedInput.html

That's why I thought the Camel Netty with chunked would only read the
entire stream of the specified chunk size in.
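A hedged sketch of how that Netty class is typically used on the server side, for context (handler wiring abbreviated to the relevant calls; names are assumptions, and the pipeline must already contain a ChunkedWriteHandler):

```java
import java.io.File;
import java.io.RandomAccessFile;

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.HttpChunkedInput;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.stream.ChunkedFile;

public class ChunkedFileResponse {

    // Writes a file as a chunked HTTP response. ChunkedFile reads the
    // file 8 KB at a time, so only one chunk is in memory at once;
    // HttpChunkedInput wraps each chunk as HTTP chunked-transfer content.
    public static void write(ChannelHandlerContext ctx, File file) throws Exception {
        HttpResponse response =
            new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        ctx.write(response);
        ctx.writeAndFlush(new HttpChunkedInput(
            new ChunkedFile(new RandomAccessFile(file, "r"), 0, file.length(), 8192)));
    }
}
```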

Re: downloading large files in chunks

2016-09-02 Thread S Ahmed
Also, is there a way for me to test if the endpoint supports streaming?
I'm on OSX so any open source tools to test this?


Netty Server vs Jetty Server

2016-09-02 Thread Ranx
When I look at how the shared netty server works, it seems very much like
what I'd want for microservice bundles in an OSGi environment.

http://camel.apache.org/netty-http-server-example.html

Can it be used with CXF? Are there any examples of using this with CXF? 

If not, can one create that sort of shared server with unique endpoints in
Jetty?





--
View this message in context: 
http://camel.465427.n5.nabble.com/Netty-Server-vs-Jetty-Server-tp5787145.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Re: Problem with objects not being released from memory.

2016-09-02 Thread litian
We finally figured out yesterday where we went wrong, after a week of trying
different things. It turns out the mock statements hold the objects in memory
forever. It was fine while testing, since a few thousand messages don't fill
up the memory. After removing the mocks from everything, all the memory for
the HAPI messages is cleaned up as expected. Thank you!
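Consistent with this diagnosis: Camel's MockEndpoint retains every exchange it receives so tests can assert on them, which looks exactly like a leak outside a short test run. If the mocks must stay in the route, retention can be capped instead of removing them; a sketch using MockEndpoint's retention setters (verify the methods exist in your Camel version):

```java
import org.apache.camel.component.mock.MockEndpoint;

public class MockRetention {

    // Caps what a MockEndpoint keeps in memory: retain none of the
    // earliest exchanges and only the 10 most recent, instead of
    // holding a reference to every message ever received.
    public static void capRetention(MockEndpoint mock) {
        mock.setRetainFirst(0);
        mock.setRetainLast(10);
    }
}
```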



--
View this message in context: 
http://camel.465427.n5.nabble.com/Problem-with-objects-not-being-released-from-memory-tp5787010p5787144.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Re: downloading large files in chunks

2016-09-02 Thread S Ahmed
I'm just the consumer (downloading), the file can be anywhere like s3 or
centos.org!




Re: downloading large files in chunks

2016-09-02 Thread Brad Johnson
By the way S. Ahmed, do you have control of both ends of this I mean
client/server or are you just on the client/consumer side?


Re: Problem with objects not being released from memory.

2016-09-02 Thread Quinn Stevenson
I’ve used HAPI quite a bit, and I’ve never noticed a memory leak with it 
(doesn’t mean it isn’t there though).

Can you tell what object/class is holding the objects?  

When you said “they’re never cleaned up by the GC”, I’m assuming you forced a 
GC?



> On Aug 30, 2016, at 1:28 PM, litian  wrote:
> 
> Hi All,
> 
> I have a problem with a program where it appears that objects are never
> released from memory. We are using Apache Camel and the HL7 component to
> read in HL7 messages. Everything is working, however after a few thousand
> messages, the program slows down and eventually stops working. We use
> JProfiler to determine where the issue may be coming from. We noticed that
> there are Object[], ArrayList, and other HL7 objects that are never cleaned
> by the GC and they just kept on growing. When the message gets unmarshaled
> it creates all of the objects like arrays, etc.
> 
> Currently we are using Camel 2.15.0 with Hapi 2.2.
> 
> The code we have for the message processing:
> 
>    HL7DataFormat hl7 = new HL7DataFormat();
> 
>    HapiContext hapiContext = new DefaultHapiContext();
>    hapiContext.getParserConfiguration().setValidating(false);
>    hl7.setHapiContext(hapiContext);
> 
>    from("mina2:tcp://" + server + ":" + port + "?sync=true&codec=#hl7codec")
>        .unmarshal(hl7)
>        .onException(Exception.class)
>            .handled(true)
>            .transform(ack())
>        .end()
>        .validate(messageConforms())
>        .choice()
>            .when(header("CamelHL7TriggerEvent").isEqualTo("A01"))
>                .to("mock:a01").beanRef("adtMessageHandler", "handleAdmit")
>                .to("mock:a19").transform(ack())
>            .when(header("CamelHL7TriggerEvent").isEqualTo("A02"))
>                .to("mock:a02").beanRef("adtMessageHandler", "handleTransfer")
>                .to("mock:a19").transform(ack())
>            .when(header("CamelHL7TriggerEvent").isEqualTo("A03"))
>                .to("mock:a03").beanRef("adtMessageHandler", "handleDischarge")
>                .to("mock:a19").transform(ack())
> 
> ...
> 
>        .end()
>        .marshal(hl7);
> 
> We tried many things like onCompletion() and process() but the objects
> continue to stay in memory. Any suggestions would be greatly appreciated.
> Thank you in advance. 
> 
> 
> 
> --
> View this message in context: 
> http://camel.465427.n5.nabble.com/Problem-with-objects-not-being-released-from-memory-tp5787010.html
> Sent from the Camel - Users mailing list archive at Nabble.com.



Re: downloading large files in chunks

2016-09-02 Thread Brad Johnson
Absolutely.  Love to set up a VM for my server.  I just had a "duh" moment
when I did it.  No harm, no foul.

> >>> qu...@pronoia-solutions.com
>  wrote:
> >>>
>  Check out the section on the ftp component page about “Using a
> Local
> >> Work
>  Directory” (http://people.apache.org/~dkulp/camel/ftp2.html <
>  http://people.apache.org/~dkulp/camel/ftp2.html>) - I think that
>  may
> >> be
>  what you’re after.
> 
> 
> > On Sep 1, 2016, at 9:30 AM, S Ahmed 
> wrote:
> >
> > Hello,
> >
> > Is there an example of how to download a large file in chunks and
> >> save
>  the
> > file as the file downloads.
> >
> > The goal is not to hold the entire file in memory and then save
> it
> >> to
>  disk.
> >
> >
> > Thanks.
> 
> 
> >>>
> >>
> >
> >
> 
> >>
> >>
>
>
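The behavior everyone in this thread is after - writing the download to disk as it arrives instead of buffering the whole file - reduces, underneath any framework, to copying through a small fixed-size buffer. A minimal JDK-only sketch of that pattern (the class name, buffer size, and the in-memory source/sink are illustrative, not from the thread):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedDownload {

    // Copy an InputStream to an OutputStream through a fixed-size buffer,
    // so at most chunkSize bytes of the payload are in memory at once.
    static long copyInChunks(InputStream in, OutputStream out, int chunkSize) throws IOException {
        byte[] buffer = new byte[chunkSize];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the remote stream; with HTTP this would be
        // url.openConnection().getInputStream(), and the sink a FileOutputStream.
        InputStream source = new ByteArrayInputStream(new byte[1_000_000]);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copyInChunks(source, sink, 8192);
        System.out.println(copied); // prints 1000000
    }
}
```

With a real HTTP client, `source` would be the response body stream and `sink` a `FileOutputStream`; the loop itself is what keeps memory usage bounded regardless of file size.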


Re: downloading large files in chunks

2016-09-02 Thread Quinn Stevenson
Sorry - I wanted to put in an example that worked, and download something big 
to make sure it was streaming.  Hopefully you needed a new CentOS image :-)





Re: downloading large files in chunks

2016-09-02 Thread Brad Johnson
Neat.  I accidentally clicked on the link and Chrome downloaded the ISO for
me.  Are you propagating Trojan horses here?  Heh.



Re: downloading large files in chunks

2016-09-02 Thread Brad Johnson
Hmmm. That could be a problem if it doesn't actually chunk.  I thought it
read the entire chunk into memory before letting you read it.  So if the
chunk size is 10mb it would download that whole 10mb and then let you read,
then fetch the next 10mb and let you read.  But that may not be the case. I
haven't worked with it much so can't say.  I do know it's exceptionally
fast.

The chunking almost seems pointless if it doesn't work that way.



Re: downloading large files in chunks

2016-09-02 Thread Brad Johnson
By the way, I don't know if you said or not but do you control both sides
of this or just the consumer side?



Re: downloading large files in chunks

2016-09-02 Thread Quinn Stevenson
I think something like this might work for you











Re: Unmarshal fixed length Binary data

2016-09-02 Thread Brad Johnson
I second Beanio. I've used it for fixed length multi-line records and it is
fabulous.

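For anyone following the BeanIO suggestion, a BeanIO fixed-length stream is described by a mapping file along the lines of the sketch below (the stream, record, class, and field names are illustrative, not from the thread). Note that BeanIO, like Bindy, parses character streams, so truly binary records may still need custom handling.

```xml
<beanio xmlns="http://www.beanio.org/2012/03">
  <!-- One fixed-length record type: 10-char id followed by an 8-char amount -->
  <stream name="records" format="fixedlength">
    <record name="record" class="example.MyRecord">
      <field name="id" length="10"/>
      <field name="amount" length="8"/>
    </record>
  </stream>
</beanio>
```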


Re: Unmarshal fixed length Binary data

2016-09-02 Thread Quinn Stevenson
Have you looked at the BeanIO DataFormat? (http://camel.apache.org/beanio.html)



Re: downloading large files in chunks

2016-09-02 Thread S Ahmed
Brad, that page says this: "Notice Netty4 HTTP reads the entire stream into
memory using io.netty.handler.codec.http.HttpObjectAggregator to build the
entire full http message. But the resulting message is still a stream based
message which is readable once."



Re: downloading large files in chunks

2016-09-02 Thread S Ahmed
Thanks.

Just to be clear, I don't run the server where I am downloading the file. I
want to download files that are very large, but stream them so they are not
held in memory and then written to disk.  I want to stream the download
straight to a file and not hold the entire file in memory.

Is Netty for the server portion or the client?

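To tie the "Using a Local Work Directory" suggestion to a concrete endpoint: setting localWorkDirectory on the ftp consumer makes Camel stream the remote file into a temporary file under that directory while downloading, instead of holding it in memory. A sketch of the URI, with placeholder host, credentials, and paths:

```
ftp://user@host/downloads?password=secret&binary=true&localWorkDirectory=/tmp/ftp-work
```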


Re: Adding options to default idempotent File Repo

2016-09-02 Thread Quinn Stevenson
You are correct - to customize the configuration for an Idempotent Repository, 
you need to create a bean for the repository and set its specific properties.

The Idempotent Repository used by the file component is pluggable - it just 
needs to implement the org.apache.camel.spi.IdempotentRepository interface.  
Because the implementation is arbitrary, its options can't be configured via the file URI. 

BTW - the file component uses the MemoryIdempotentRepository by default 

HTH

> On Aug 22, 2016, at 7:57 AM, JSmith  wrote:
> 
> How do I add options to the default Idempotent File Repository using the
> Spring DSL?
> 
> Basically I want to change the "eager" option to false because it seems like
> it will fix my issue of not inserting a name into the Repo if I set
> Exchange.ROUTE_STOP = true. 
> 
> But it seems the only way is to have a messy block of code like: 
> 
> <bean id="fileStore"
>       class="org.apache.camel.processor.idempotent.FileIdempotentRepository">
>   ...
> </bean>
> 
> ...
> 
> <idempotentConsumer messageIdRepositoryRef="fileStore" eager="false">
>   <simple>${file:name}-${file:modified}</simple>
>   ...
> </idempotentConsumer>
> 
> 
> 
> Right?  That's the only way to manipulate the default options for the File
> Idempotent Consumer?  Or am I missing something here?  
> 
> 
> 
> 
> --
> View this message in context: 
> http://camel.465427.n5.nabble.com/Adding-options-to-default-idempotent-File-Repo-tp5786675.html
> Sent from the Camel - Users mailing list archive at Nabble.com.



Unmarshal fixed length Binary data

2016-09-02 Thread kaustubhkane
Hi,

We have fixed-length binary data.

I am looking at the Bindy Data Format and found that it supports
unmarshalling of fixed-length records. 

I looked at the implementation of the Bindy fixed-length support in the
Camel source code (BindyFixedLengthDataFormat.java and
BindyFixedLengthFactory.java). 

In the createModel function in BindyFixedLengthDataFormat.java I found that
it assumes one record per line (i.e. each record is on a separate line),
which means each record ends with a newline character so that the next
record starts on a new line. 

For this reason, it seems to me that the Bindy Data Format will not be able
to unmarshal fixed-length binary data. Binary files have no lines or rows -
you don't read binary data line by line; that only works with text files.

Could someone please provide details on which Data Format can be used to
unmarshal fixed-length binary data?

I am looking for a data format to which I can provide a model, which it then
uses to read the data and create a POJO for me - just like Bindy does. 

Regards,
Kaustubh Kane



--
View this message in context: 
http://camel.465427.n5.nabble.com/Unmarshal-fixed-length-Binary-data-tp5787125.html
Sent from the Camel - Users mailing list archive at Nabble.com.


Mongodb Persistent tail tracking: How to chose an increasing field for tracking

2016-09-02 Thread jpeschke
Hello,
Perhaps this is more a MongoDB issue, but maybe somebody has an idea:

The Camel MongoDB endpoint supports the persistent tail tracking feature
(which means that it stores the value of an arbitrary increasing field in a
document to reset the tailable cursor to this document when
restarting/restting the cursor). The field can be for example a timestamp.

Now my problem is:
How do I get such a constantly increasing field if I don't have one? MongoDB
doesn't have an "auto increment" data type, so currently I use a counters
collection which generates sequences of increasing ids (as suggested in
https://docs.mongodb.com/v3.0/tutorial/create-an-auto-incrementing-field/#auto-increment-counters-collection)

However, this leads to race conditions in a multi-threaded/multi-machine
environment, because "sequence id generation" and "insertion" are not a
single atomic step:

- Consider two servers (A and B), each inserting a document. A gets the
first sequence number (e.g. 1), but B inserts its document (with sequence
number 2) earlier, so the natural (= insertion) order in the capped
collection is now "2 - 1".
- Since the tailable cursor consumes the capped collection in natural order,
it starts with the document with id 2. If the cursor is reset/regenerated at
exactly this point, the persistent tail tracker stores "2" as the last
processed document, although "1" was never processed, so 1 is lost :(.

Sounds academic, I know, but it leads to some annoying errors in our
application :(

Thank you for any ideas.

Best regards,
Joerg





--
View this message in context: 
http://camel.465427.n5.nabble.com/Mongodb-Persistent-tail-tracking-How-to-chose-an-increasing-field-for-tracking-tp5787124.html
Sent from the Camel - Users mailing list archive at Nabble.com.
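For reference, the tail-tracking side of this is configured on the camel-mongodb endpoint URI; the race described above lives in how the tracked field is generated, not in the tracker itself. A sketch with placeholder bean, database, collection, field, and tracker names:

```
mongodb:myMongoBean?database=mydb&collection=events&tailTrackIncreasingField=seq&persistentTailTracking=true&persistentId=eventsTracker
```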