Problem with camel-aws S3 when multiPartUpload is set to true

2018-01-04 Thread kretin
I created a simple Camel route to poll for files in a local directory and 
upload them to a Ceph (S3) server at my University. I am using Apache Camel 
2.20.0 with the camel-aws S3 component. When I set multiPartUpload=false (the 
default) in the URI, everything works fine, but if I change it to 
multiPartUpload=true, it fails. 

I know there is nothing wrong with my S3 secret or access key, because 
everything works when I set multiPartUpload=false (and there are no crazy 
plus (+) characters in the keys that would need to be escaped). 

Here is the stack trace:

com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; 
Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 
tx002e9edee-005a4ed3d2-2213a2-uky-campus-1; S3 Extended Request ID: 
2213a2-uky-campus-1-uky)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1592)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1257)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1029)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:741)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:715)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:697)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:665)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:647)
 ~[aws-java-sdk-core-1.11.186.jar:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:511) 
~[aws-java-sdk-core-1.11.186.jar:?]
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4227) 
~[aws-java-sdk-s3-1.11.186.jar:?]
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4174) 
~[aws-java-sdk-s3-1.11.186.jar:?]
at 
com.amazonaws.services.s3.AmazonS3Client.abortMultipartUpload(AmazonS3Client.java:2928)
 ~[aws-java-sdk-s3-1.11.186.jar:?]
at 
org.apache.camel.component.aws.s3.S3Producer.processMultiPart(S3Producer.java:181)
 ~[camel-aws-2.20.0.jar:2.20.0]
at org.apache.camel.component.aws.s3.S3Producer.process(S3Producer.java:84) 
~[camel-aws-2.20.0.jar:2.20.0]
at ...

My camel-context.xml looks like:


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
          http://www.springframework.org/schema/beans
          http://www.springframework.org/schema/beans/spring-beans.xsd
          http://camel.apache.org/schema/spring
          http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
      <from uri="file://target/sendToS3"/>
      <setHeader headerName="CamelAwsS3Key">
        <simple>testMultiPart/${in.header.CamelFileName}</simple>
      </setHeader>
      <setHeader headerName="CamelAwsS3ContentLength">
        <simple>${in.header.CamelFileLength}</simple>
      </setHeader>
      <!-- bucket name, endpoint and credentials elided -->
      <to uri="aws-s3://...?multiPartUpload=true"/>
    </route>
  </camelContext>
</beans>

Again, everything works fine if I set multiPartUpload=false in the above 
camel-context.xml.

I have tried a lot of things like:

• setting the CamelAwsS3ContentMD5 header to the MD5 hash of the file 
(which doesn't make sense for multi-part files)
• various settings for the partSize parameter
• different sized files from very large to very small
• setting the system property: 
System.setProperty("com.amazonaws.services.s3.disablePutObjectMD5Validation", 
"true");
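
One more configuration-level idea that could be added to this list (an assumption on my part, not something from the original post): with Ceph/RadosGW, a 403 SignatureDoesNotMatch that appears only on multipart calls is sometimes caused by the signature version or addressing style the AWS SDK negotiates. A sketch of forcing SigV2 and path-style access on the client handed to camel-aws; the endpoint URL and class name below are placeholders:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientFactory {

    public static AmazonS3 create() {
        // Force the older S3 signer (SigV2); some Ceph gateways reject
        // the SigV4 signatures the SDK uses for multipart operations.
        ClientConfiguration cfg = new ClientConfiguration();
        cfg.setSignerOverride("S3SignerType");

        return AmazonS3ClientBuilder.standard()
                .withClientConfiguration(cfg)
                // Placeholder endpoint; the region is ignored by most Ceph setups
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://s3.example.edu", "us-east-1"))
                // Path-style addressing (bucket in the path, not the hostname)
                .withPathStyleAccessEnabled(true)
                .build();
    }
}
```

The resulting client could then be registered as a Spring bean and referenced from the aws-s3 endpoint via its amazonS3Client option, so the endpoint no longer builds its own client.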

If I turn on trace debugging for Camel, it doesn't help much:

[d #2 - file://target/sendToS3/] S3Producer TRACE 
Initiating multipart upload 
[com.amazonaws.services.s3.model.InitiateMultipartUploadRequest@3731147a] from 
exchange [Exchange[ID-Toucan-local-1515115111374-0-1]]...
[d #2 - file://target/sendToS3/] S3Producer TRACE Uploading 
part [1] for testMultiPart/testfile.zip
[d #2 - file://target/sendToS3/] DefaultErrorHandler TRACE Is 
exchangeId: ID-Toucan-local-1515115111374-0-1 interrupted? false
[d #2 - file://target/sendToS3/] DefaultErrorHandler TRACE Is 
exchangeId: ID-Toucan-local-1515115111374-0-1 done? false
[d #2 - file://target/sendToS3/] DefaultErrorHandler TRACE 
isRunAllowed() -> true (Run allowed if we are not stopped/stopping)
[d #2 - file://target/sendToS3/] DefaultExceptionPolicyStrategy TRACE Finding 
best suited 

Re: File idempotent store problem

2018-01-04 Thread Onder SEZGIN
There are points I agree and disagree with.
IMHO this can stay as a documented limitation, because using the cache as a
FIFO is a fairly advanced use of the idempotent repository feature, and it
would require taking on all the burdens of such an implementation, diving
into Caffeine, for a single minor feature. That would be too much to
implement out of the box in camel-core. If you require such an advanced
feature, the idempotent repository is pluggable, so at first sight, and
IMHO, it could be implemented as a custom FIFO-based cache.
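
For anyone who does want to go down that pluggable route, a minimal FIFO-bounded cache can be built on a plain LinkedHashMap without pulling in Caffeine at all. This is only a sketch (the class and method names are mine, not Camel API) that a custom IdempotentRepository could delegate to:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// FIFO-bounded key set: insertion order is preserved, and once maxSize
// is exceeded the oldest entry is evicted. Persisting keysInInsertionOrder()
// and replaying it on restart reproduces the same cache state.
public class FifoIdempotentCache<K> {

    private final Map<K, Boolean> map;

    public FifoIdempotentCache(final int maxSize) {
        // accessOrder = false -> pure insertion (FIFO) order;
        // removeEldestEntry evicts the oldest key past maxSize
        this.map = new LinkedHashMap<K, Boolean>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Boolean> eldest) {
                return size() > maxSize;
            }
        };
    }

    /** Returns true only if the key was not already present. */
    public boolean add(K key) {
        return map.put(key, Boolean.TRUE) == null;
    }

    public boolean contains(K key) {
        return map.containsKey(key);
    }

    /** Keys in first-added order -- safe to write to the file store. */
    public Iterable<K> keysInInsertionOrder() {
        return map.keySet();
    }
}
```

Note that re-adding an existing key does not move it to the back of the queue, which is exactly the FIFO behaviour discussed below.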

On Wed, 3 Jan 2018 at 22:51, Krzysztof Hołdanowicz wrote:

> Hi,
>
> regarding CAMEL-12058, I don't know if you are aware of all the
> consequences of the wrong order in the idempotent file store.
> The wrong order in the file is not the problem itself, as long as elements
> are added and evicted at runtime, because Caffeine provides an API for
> ordering, e.g.:
>
>- @Override public Map<K, V> coldest(int limit)
>- @Override public Map<K, V> hottest(int limit)
>- @Override public Map<K, V> oldest(int limit)
>- @Override public Map<K, V> youngest(int limit)
>
> However, the consequences of this appear after a RESTART. The in-memory
> cache then does not contain the proper entries (once the max size limit
> has been reached), because it does not load elements from hottest to
> coldest but in the file's entry order, so some files are consumed multiple
> times. It means the current implementation of the file idempotent store is
> no longer usable at all. Ignoring the issue (CAMEL-12058) means that Camel
> does not provide any working implementation of a file idempotent store, as
> the current behaviour is completely wrong and causes the same file to be
> consumed multiple times after the max size limit is reached and the
> application is restarted.
>
>
> Regards
> Kris
>
> sob., 2 gru 2017 o 15:14 użytkownik Krzysztof Hołdanowicz <
> holdanow...@gmail.com> napisał:
>
> > I don't know if I understood you correctly.
> > Instead of looping via cache.keySet(), do you mean looping via
> > Map.Entry<K, V> entry : cache.entrySet(), or cache.forEach((k, v) -> {...})?
> >
> > If yes, what is the difference? Isn't it the same unordered collection?
> > If Caffeine returns an unordered collection, how can we get ordered entries?
> > Isn't it related to:
> > https://github.com/ben-manes/caffeine/issues/86
> >
> > Shouldn't we use a kind of LinkedHashMap implementation?
> >
> > Regards
> > Kris
> >
> > wt., 28 lis 2017 o 18:40 użytkownik Claus Ibsen-2 [via Camel] <
> > ml+s465427n5815878...@n5.nabble.com> napisał:
> >
> >> Hi
> >>
> >> Ah well spotted.
> >> I think we should for loop via Map.Entry (or (k,v) via lambda) which I
> >> think will be in the correct order.
> >>
> >> You are welcome to log a JIRA. And also work on unit test and patch.
> >> http://camel.apache.org/contributing
> >>
> >> On Tue, Nov 28, 2017 at 8:55 AM, Krzysztof Hołdanowicz
> >> <[hidden email]> wrote:
> >>
> >> > Hi all,
> >> >
> >> > I recently noticed that there is wrong entry order in file using
> >> > FileIdempotentRepository implementation.
> >> > The effect is that instead of having order like:
> >> >
> >> > file1.txt.20171123
> >> > file2.txt.20171123
> >> > file1.txt.20171124
> >> > file3.txt.20171125
> >> > file2.txt.20171126
> >> >
> >> > we have:
> >> >
> >> > file1.txt.20171123
> >> > file1.txt.20171124
> >> > file2.txt.20171123
> >> > file2.txt.20171126
> >> > file3.txt.20171125
> >> >
> >> > where date extension represents order in which particular file was
> >> consumed
> >> > by the idempotent file consumer.
> >> > As a consequence, instead of initializing the memory cache with the
> >> > newest values, it is (probably) initialized in the hash-based
> >> > iteration order written out by the trunkStore method, and we consume
> >> > the same file more than once:
> >> >
> >> > protected void trunkStore() {
> >> >     LOG.info("Trunking idempotent filestore: {}", fileStore);
> >> >     FileOutputStream fos = null;
> >> >     try {
> >> >         fos = new FileOutputStream(fileStore);
> >> >         for (String key : cache.keySet()) {   // <-- iteration order is the problem
> >> >             fos.write(key.getBytes());
> >> >             fos.write(STORE_DELIMITER.getBytes());
> >> >         }
> >> >     } catch (IOException e) {
> >> >         throw ObjectHelper.wrapRuntimeCamelException(e);
> >> >     } finally {
> >> >         IOHelper.close(fos, "Trunking file idempotent repository", LOG);
> >> >     }
> >> > }
> >> >
> >> > LRUCache:
> >> >
> >> > @Override
> >> > public Set<K> keySet() {
> >> >     return map.keySet();
> >> > }
> >> >
> >> > where previously it was:
> >> >
> >> > @Override
> >> > public Set<K> keySet() {
> >> >     return map.ascendingKeySet();
> >> > }
> >> >
> >> > Regards
> >> > Kris
> >> > --
> >> >
> >> > Pozdrawiam
> >> >
> >> > Krzysztof Hołdanowicz
> >>
> >>
> >>
> >> --
> >> Claus Ibsen
> >> 

Re: Error sending email from Camel application

2018-01-04 Thread Charles Berger
Anyone able to help with this please?
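
One hedged observation, not from the thread itself: the failing conversion below is of the value queue://emailQueue, i.e. an ActiveMQQueue object that arrived as a JMS header and cannot be stringified into a mail header. A common workaround sketch is to strip the JMS transport headers before the mail producer runs; the constants are the ones from the original route:

```java
from(ACTIVEMQ_EMAIL_QUEUE)
    .routeId(ROUTE_EMAIL_NOTIFICATIONS)
    .convertBodyTo(SingleImageModel.class)
    // Strip JMS transport headers (JMSDestination, JMSReplyTo, ...) that
    // arrive as ActiveMQ objects, so the mail binding does not try to
    // convert them into mail header strings.
    .removeHeaders("JMS*")
    // set subject, from address & to address
    .setHeader("subject", constant(EMAIL_SUBJECT))
    .setHeader("to", simple("${body.email}"))
    .setHeader("from", constant(EMAIL_FROM))
    // format the message body
    .to(VELOCITY_EMAIL)
    .log("${body}")
    // send email
    .to(SMTP_URL)
    .end();
```

Whether JMS* is the right pattern depends on which headers actually carry non-string values in this exchange; that is an assumption here.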

On Thu, Dec 21, 2017 at 6:03 PM, Charles Berger wrote:
> Hi,
>
> I have the following route in my application which sends an email
> based on a template filled out with data from the SingleImageModel
> class:
>
> from(ACTIVEMQ_EMAIL_QUEUE)
> .routeId(ROUTE_EMAIL_NOTIFICATIONS)
> .convertBodyTo(SingleImageModel.class)
> // set subject, from address & to address
> .setHeader("subject", constant(EMAIL_SUBJECT))
> .setHeader("to", simple("${body.email}"))
> .setHeader("from", constant(EMAIL_FROM))
> // format the message body
> .to(VELOCITY_EMAIL)
> .log("${body}")
> // send email
> .to(SMTP_URL)
> .end();
>
> When it tries to execute the SMTP step the message fails with the
> following error:
>
> 2017-12-21 17:30:08,034 []
> org.apache.camel.processor.DefaultErrorHandler ERROR - Failed delivery
> for (MessageId: ID-iusa16025-local-1513877322283-0-13 on ExchangeId:
> ID-iusa16025-local-1513877322283-0-11). Exhausted after delivery
> attempt: 1 caught: org.apache.camel.TypeConversionException: Error
> during type conversion from type: java.lang.String to the required
> type: java.lang.String with value queue://emailQueue due
> com.fasterxml.jackson.databind.JsonMappingException: No serializer
> found for class java.util.Vector$1 and no properties discovered to
> create BeanSerializer (to avoid exception, disable
> SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain:
> org.apache.activemq.command.ActiveMQQueue["reference"]->javax.naming.Reference["all"])
>
> The stacktrace is:
>
> org.apache.camel.TypeConversionException: Error during type conversion
> from type: java.lang.String to the required type: java.lang.String
> with value queue://emailQueue due
> com.fasterxml.jackson.databind.JsonMappingException: No serializer
> found for class java.util.Vector$1 and no properties discovered to
> create BeanSerializer (to avoid exception, disable
> SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain:
> org.apache.activemq.command.ActiveMQQueue["reference"]->javax.naming.Reference["all"])
> at 
> org.apache.camel.impl.converter.BaseTypeConverterRegistry.createTypeConversionException(BaseTypeConverterRegistry.java:667)
> at 
> org.apache.camel.impl.converter.BaseTypeConverterRegistry.convertTo(BaseTypeConverterRegistry.java:158)
> at org.apache.camel.component.mail.MailBinding.asString(MailBinding.java:717)
> at 
> org.apache.camel.component.mail.MailBinding.appendHeadersFromCamelMessage(MailBinding.java:398)
> at 
> org.apache.camel.component.mail.MailBinding.populateMailMessage(MailBinding.java:117)
> at org.apache.camel.component.mail.MailProducer.process(MailProducer.java:58)
> at 
> org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
> at 
> org.apache.camel.processor.SendProcessor$2.doInAsyncProducer(SendProcessor.java:178)
> at 
> org.apache.camel.impl.ProducerCache.doInAsyncProducer(ProducerCache.java:445)
> at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:173)
> at 
> org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:181)
> at 
> org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
> at 
> org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
> at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
> at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
> at 
> org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
> at 
> org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97)
> at 
> org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:112)
> at 
> org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:719)
> at 
> org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:679)
> at 
> org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:649)
> at 
> org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:317)
> at 
> org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:255)
> at 
> org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1166)
> at 
> org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1158)
> at 
>