[GitHub] flume pull request: change FLUME_HOME to fix the LIBRARY NOT FOUND...

2014-10-16 Thread shadyxu
GitHub user shadyxu opened a pull request:

https://github.com/apache/flume/pull/7

change FLUME_HOME to fix the LIBRARY NOT FOUND bug

Assigning `FLUME_HOME` to `$(cd $(dirname $0)/..; pwd)` will cause an 
`org.apache.flume.node.Application not found` error. This can be fixed by 
eliminating the `; pwd` part.
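For illustration, the two assignments can be compared in a standalone sketch. This is not the actual flume-ng launcher; the paths and script name below are stand-ins created in a throwaway directory:

```shell
set -e
base=$(mktemp -d)                 # stand-in for the Flume install root
mkdir -p "$base/bin"
script="$base/bin/flume-ng"       # hypothetical launcher path

# Original form: absolute path, resolved in a subshell with pwd
FLUME_HOME=$(cd "$(dirname "$script")/.."; pwd)
echo "with pwd:    $FLUME_HOME"

# Patched form: same directory, expressed relatively, without "; pwd"
FLUME_HOME="$(dirname "$script")/.."
echo "without pwd: $FLUME_HOME"
```

Both point at the same directory; only the first collapses the trailing `/..` into a canonical absolute path.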

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shadyxu/flume trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flume/pull/7.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #7


commit 8e28d18e34bba651c4b3237fe70260c536693113
Author: shadyxu 
Date:   2014-10-17T02:59:53Z

change FLUME_HOME to fix the LIBRARY NOT FOUND bug




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLUME-2507) File Channel cannot support massive capacities

2014-10-16 Thread Hari Shreedharan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174668#comment-14174668
 ] 

Hari Shreedharan commented on FLUME-2507:
-

Note that this does not make the channel unusable; it just causes checkpoints 
to fail, so all replays would be full replays.

> File Channel cannot support massive capacities
> --
>
> Key: FLUME-2507
> URL: https://issues.apache.org/jira/browse/FLUME-2507
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>
> In the File Channel, we map the checkpoint file. Java cannot map a single 
> file of more than 2GB, since MappedByteBuffer is indexed by an int. So even 
> if we get a LongBuffer using asLongBuffer (since that is only a view on the 
> real buffer), checkpointing will fail when data is being put, with a nasty 
> exception that looks like:
> {code}
> 2014-10-16 13:48:54,684 (Log-BackgroundWorker-FileChannel1) [ERROR - 
> org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1188)] 
> General error in checkpoint worker
> java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkIndex(Buffer.java:512)
>   at java.nio.DirectLongBufferS.put(DirectLongBufferS.java:270)
>   at 
> org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:269)
>   at 
> org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:145)
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:991)
>   at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:968)
>   at org.apache.flume.channel.file.Log.access$200(Log.java:75)
>   at org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1183)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {code}
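The 2GB limit described above can be reproduced in isolation. The sketch below is not Flume code; it just shows that FileChannel.map takes a long size yet rejects anything above Integer.MAX_VALUE, because the resulting MappedByteBuffer is indexed by int. No large file is created; the size check fails before any mapping happens.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapLimit {
    // Returns true if a mapping of the given size is accepted.
    static boolean canMap(long size) throws IOException {
        Path tmp = Files.createTempFile("maplimit", ".bin");
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
            return true;
        } catch (IllegalArgumentException e) {
            return false; // map() rejects size > Integer.MAX_VALUE
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("1KB mappable:   " + canMap(1024));
        System.out.println("2GB+1 mappable: " + canMap((long) Integer.MAX_VALUE + 1));
    }
}
```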



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-2507) File Channel cannot support massive capacities

2014-10-16 Thread Hari Shreedharan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174665#comment-14174665
 ] 

Hari Shreedharan commented on FLUME-2507:
-

No, we can just map the file in separate segments: 0 to 2^25, (2^25) + 1 to 
... etc. Right now, we already handle the case where a partial checkpoint is 
written (we update the mappedBuffer while the OS is syncing some other part to 
disk) by writing out an incomplete marker and then writing out a complete 
marker when we are done; that should take care of some buffers getting fsynced 
and others not.
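The segmented-mapping idea might look roughly like the sketch below. This is illustrative only, not the eventual Flume patch: the class, method names, and the 1MB demo segment size are assumptions. Real code would use segments near 2GB and must keep the segment size a multiple of 8 so that aligned longs never straddle a segment boundary.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class SegmentedMap {
    static final long SEGMENT = 1 << 20; // 1MB segments for the demo

    // Map one large file as a series of fixed-size buffers.
    static List<MappedByteBuffer> mapSegments(FileChannel ch, long fileSize)
            throws IOException {
        List<MappedByteBuffer> segs = new ArrayList<>();
        for (long off = 0; off < fileSize; off += SEGMENT) {
            long len = Math.min(SEGMENT, fileSize - off);
            segs.add(ch.map(FileChannel.MapMode.READ_WRITE, off, len));
        }
        return segs;
    }

    // Route an aligned long write/read to the segment owning its offset.
    static void putLong(List<MappedByteBuffer> segs, long offset, long value) {
        segs.get((int) (offset / SEGMENT)).putLong((int) (offset % SEGMENT), value);
    }

    static long getLong(List<MappedByteBuffer> segs, long offset) {
        return segs.get((int) (offset / SEGMENT)).getLong((int) (offset % SEGMENT));
    }

    // Round-trips a value through the third segment and returns what was read.
    static long roundTrip() throws IOException {
        Path tmp = Files.createTempFile("seg", ".bin");
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            List<MappedByteBuffer> segs = mapSegments(ch, 3 * SEGMENT);
            putLong(segs, 2 * SEGMENT + 8, 42L); // logical offset in segment 2
            return getLong(segs, 2 * SEGMENT + 8);
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // prints 42
    }
}
```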




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-2507) File Channel cannot support massive capacities

2014-10-16 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174291#comment-14174291
 ] 

Roshan Naik commented on FLUME-2507:


does this mean we need multiple memory mapped files ?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLUME-2507) File Channel cannot support massive capacities

2014-10-16 Thread Hari Shreedharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Shreedharan updated FLUME-2507:

Description: 
In the File Channel, we map the checkpoint file. Java cannot map a single file 
of more than 2GB, since MappedByteBuffer is indexed by an int. So even if we 
get a LongBuffer using asLongBuffer, checkpointing will fail when data is 
being put, with a nasty exception that looks like:
{code}
2014-10-16 13:48:54,684 (Log-BackgroundWorker-FileChannel1) [ERROR - 
org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1188)] General 
error in checkpoint worker
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:512)
at java.nio.DirectLongBufferS.put(DirectLongBufferS.java:270)
at 
org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:269)
at 
org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:145)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:991)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:968)
at org.apache.flume.channel.file.Log.access$200(Log.java:75)
at org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1183)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}

  was:
In the File Channel, we map the checkpoint file. Java cannot seem to be able to 
map a single file of more than 2GB, since MappedByteBuffer is indexed on it. So 
even if we get a LongBuffer using asLongBuffer, the checkpointing will fail 
when the data is being put, with a nasty exception that looks like:
{code}
2014-10-16 13:48:54,684 (Log-BackgroundWorker-FileChannel1) [ERROR - 
org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1188)] General 
error in checkpoint worker
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:512)
at java.nio.DirectLongBufferS.put(DirectLongBufferS.java:270)
at 
org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:269)
at 
org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:145)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:991)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:968)
at org.apache.flume.channel.file.Log.access$200(Log.java:75)
at org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1183)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}



[jira] [Updated] (FLUME-2507) File Channel cannot support massive capacities

2014-10-16 Thread Hari Shreedharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Shreedharan updated FLUME-2507:

Description: 
In the File Channel, we map the checkpoint file. Java cannot map a single file 
of more than 2GB, since MappedByteBuffer is indexed by an int. So even if we 
get a LongBuffer using asLongBuffer (since that is only a view on the real 
buffer), checkpointing will fail when data is being put, with a nasty 
exception that looks like:
{code}
2014-10-16 13:48:54,684 (Log-BackgroundWorker-FileChannel1) [ERROR - 
org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1188)] General 
error in checkpoint worker
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:512)
at java.nio.DirectLongBufferS.put(DirectLongBufferS.java:270)
at 
org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:269)
at 
org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:145)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:991)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:968)
at org.apache.flume.channel.file.Log.access$200(Log.java:75)
at org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1183)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}

  was:
In the File Channel, we map the checkpoint file. Java cannot seem to be able to 
map a single file of more than 2GB, since MappedByteBuffer is indexed on int. 
So even if we get a LongBuffer using asLongBuffer, the checkpointing will fail 
when the data is being put, with a nasty exception that looks like:
{code}
2014-10-16 13:48:54,684 (Log-BackgroundWorker-FileChannel1) [ERROR - 
org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1188)] General 
error in checkpoint worker
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:512)
at java.nio.DirectLongBufferS.put(DirectLongBufferS.java:270)
at 
org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:269)
at 
org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:145)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:991)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:968)
at org.apache.flume.channel.file.Log.access$200(Log.java:75)
at org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1183)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}



[jira] [Created] (FLUME-2507) File Channel cannot support massive capacities

2014-10-16 Thread Hari Shreedharan (JIRA)
Hari Shreedharan created FLUME-2507:
---

 Summary: File Channel cannot support massive capacities
 Key: FLUME-2507
 URL: https://issues.apache.org/jira/browse/FLUME-2507
 Project: Flume
  Issue Type: Bug
Reporter: Hari Shreedharan


In the File Channel, we map the checkpoint file. Java cannot map a single file 
of more than 2GB, since MappedByteBuffer is indexed by an int. So even if we 
get a LongBuffer using asLongBuffer, checkpointing will fail when data is 
being put, with a nasty exception that looks like:
{code}
2014-10-16 13:48:54,684 (Log-BackgroundWorker-FileChannel1) [ERROR - 
org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1188)] General 
error in checkpoint worker
java.lang.IndexOutOfBoundsException
at java.nio.Buffer.checkIndex(Buffer.java:512)
at java.nio.DirectLongBufferS.put(DirectLongBufferS.java:270)
at 
org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:269)
at 
org.apache.flume.channel.file.FlumeEventQueue.checkpoint(FlumeEventQueue.java:145)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:991)
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:968)
at org.apache.flume.channel.file.Log.access$200(Log.java:75)
at org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1183)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLUME-2501) Updating HttpClient lib version to ensure compat with Solr

2014-10-16 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated FLUME-2501:
---
Attachment: FLUME-2501.v2.patch

Ok, that helps build confidence. This new patch upgrades the versions to 4.2.5.
Unit tests ran OK with this.
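In Maven terms, pinning both libraries to the same release might look like the fragment below. This is illustrative only; Flume's actual pom may manage these versions differently (e.g. through version properties), and the exact httpcore version paired with httpclient 4.2.5 should be checked against the patch.

```xml
<!-- Illustrative sketch, not the actual Flume pom: pin the
     HttpComponents libraries to the release Solr expects. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.2.5</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpcore</artifactId>
      <version>4.2.5</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```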

> Updating HttpClient lib version to ensure compat with Solr
> --
>
> Key: FLUME-2501
> URL: https://issues.apache.org/jira/browse/FLUME-2501
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.5.0.1
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Attachments: FLUME-2501.patch, FLUME-2501.v2.patch
>
>
> A mismatch between the httpclient and httpcore libs pulled in by Flume and 
> the ones that come with Solr causes errors at runtime:
> {code}
> 2014-10-13 19:52:32,042 (lifecycleSupervisor-1-1) [DEBUG - 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:106)]
>  Creating new http client, 
> config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false
> 2014-10-13 19:52:32,225 (lifecycleSupervisor-1-1) [ERROR - 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)]
>  Unable to start SinkRunner: { 
> policy:org.apache.flume.sink.DefaultSinkProcessor@4752b854 counterGroup:{ 
> name:null counters:{} } } - Exception follows.
> java.lang.NoSuchFieldError: DEF_CONTENT_CHARSET
>   at 
> org.apache.http.impl.client.DefaultHttpClient.setDefaultHttpParams(DefaultHttpClient.java:175)
>   at 
> org.apache.http.impl.client.DefaultHttpClient.createHttpParams(DefaultHttpClient.java:158)
>   at 
> org.apache.http.impl.client.AbstractHttpClient.getParams(AbstractHttpClient.java:448)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.setFollowRedirects(HttpClientUtil.java:251)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientConfigurer.configure(HttpClientConfigurer.java:58)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.configureClient(HttpClientUtil.java:133)
>   at 
> org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:109)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.(HttpSolrServer.java:161)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrServer.(HttpSolrServer.java:138)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.(ConcurrentUpdateSolrServer.java:122)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.(ConcurrentUpdateSolrServer.java:114)
>   at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.(ConcurrentUpdateSolrServer.java:104)
>   at 
> org.kitesdk.morphline.solr.SafeConcurrentUpdateSolrServer.(SafeConcurrentUpdateSolrServer.java:39)
>   at 
> org.kitesdk.morphline.solr.SafeConcurrentUpdateSolrServer.(SafeConcurrentUpdateSolrServer.java:35)
>   at 
> org.kitesdk.morphline.solr.SolrLocator.getLoader(SolrLocator.java:116)
>   at 
> org.kitesdk.morphline.solr.LoadSolrBuilder$LoadSolr.(LoadSolrBuilder.java:70)
>   at 
> org.kitesdk.morphline.solr.LoadSolrBuilder.build(LoadSolrBuilder.java:52)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.buildCommand(AbstractCommand.java:303)
>   at 
> org.kitesdk.morphline.base.AbstractCommand.buildCommandChain(AbstractCommand.java:250)
>   at org.kitesdk.morphline.stdlib.Pipe.(Pipe.java:46)
>   at org.kitesdk.morphline.stdlib.PipeBuilder.build(PipeBuilder.java:40)
>   at org.kitesdk.morphline.base.Compiler.compile(Compiler.java:126)
>   at org.kitesdk.morphline.base.Compiler.compile(Compiler.java:55)
>   at 
> org.apache.flume.sink.solr.morphline.MorphlineHandlerImpl.configure(MorphlineHandlerImpl.java:101)
>   at 
> org.apache.flume.sink.solr.morphline.MorphlineSink.start(MorphlineSink.java:97)
>   at 
> org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
>   at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA

[jira] [Updated] (FLUME-2500) Add a channel that uses Kafka

2014-10-16 Thread Hari Shreedharan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Shreedharan updated FLUME-2500:

Attachment: FLUME-2500.patch

Latest patch - on rb.

> Add a channel that uses Kafka 
> --
>
> Key: FLUME-2500
> URL: https://issues.apache.org/jira/browse/FLUME-2500
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Hari Shreedharan
> Attachments: FLUME-2500.patch, FLUME-2500.patch
>
>
> Here is the rationale:
> - Kafka gives us an HA channel, which means a dead agent does not affect 
> the data in the channel - thus reducing delivery delay.
> - Kafka is used by many companies - it would be a good idea to use Flume to 
> pull data from Kafka and write it to HDFS/HBase etc. 
> This channel is not going to be useful where Kafka is not already in use, 
> since it brings the operational overhead of maintaining two systems, but 
> where Kafka is already deployed, this is a good way to integrate Kafka and 
> Flume.
> Here is a scratch implementation: 
> https://github.com/harishreedharan/flume/blob/kafka-channel/flume-ng-channels/flume-kafka-channel/src/main/java/org/apache/flume/channel/kafka/KafkaChannel.java
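As a sketch of how an agent might wire in such a channel: the fully qualified type name below matches the scratch implementation's package, but the other property keys are assumptions for illustration, not the final configuration surface (the real keys live in KafkaChannelConfiguration).

```properties
# Hypothetical agent configuration; property names other than "type"
# are assumptions and may not match the committed patch.
a1.channels = kc
a1.channels.kc.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.kc.brokerList = kafka1:9092,kafka2:9092
a1.channels.kc.zookeeperConnect = zk1:2181
a1.channels.kc.topic = flume-channel
```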



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 26820: FLUME-2500. Kafka Channel

2014-10-16 Thread Hari Shreedharan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26820/
---

(Updated Oct. 16, 2014, 8:22 p.m.)


Review request for Flume.


Bugs: FLUME-2500
https://issues.apache.org/jira/browse/FLUME-2500


Repository: flume-git


Description
---

Add a channel that uses Kafka


Diffs
-

  flume-ng-channels/flume-kafka-channel/pom.xml PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/main/java/org/apache/flume/channel/kafka/KafkaChannel.java
 PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/main/java/org/apache/flume/channel/kafka/KafkaChannelConfiguration.java
 PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/test/java/org/apache/flume/channel/kafka/TestKafkaChannel.java
 PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/test/resources/kafka-server.properties
 PRE-CREATION 
  flume-ng-channels/flume-kafka-channel/src/test/resources/log4j.properties 
PRE-CREATION 
  flume-ng-channels/flume-kafka-channel/src/test/resources/zookeeper.properties 
PRE-CREATION 
  flume-ng-channels/pom.xml dc8dbc6 
  flume-ng-sinks/flume-ng-kafka-sink/pom.xml 746a395 
  
flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/util/KafkaConsumer.java
 1c98922 
  
flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/util/TestUtil.java
 8855c53 
  pom.xml 4f550d3 

Diff: https://reviews.apache.org/r/26820/diff/


Testing
---

Added tests that simulate a Kafka cluster.


Thanks,

Hari Shreedharan



Review Request 26820: FLUME-2500. Kafka Channel

2014-10-16 Thread Hari Shreedharan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/26820/
---

Review request for Flume.


Repository: flume-git


Description
---

Add a channel that uses Kafka


Diffs
-

  flume-ng-channels/flume-kafka-channel/pom.xml PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/main/java/org/apache/flume/channel/kafka/KafkaChannel.java
 PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/main/java/org/apache/flume/channel/kafka/KafkaChannelConfiguration.java
 PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/test/java/org/apache/flume/channel/kafka/TestKafkaChannel.java
 PRE-CREATION 
  
flume-ng-channels/flume-kafka-channel/src/test/resources/kafka-server.properties
 PRE-CREATION 
  flume-ng-channels/flume-kafka-channel/src/test/resources/log4j.properties 
PRE-CREATION 
  flume-ng-channels/flume-kafka-channel/src/test/resources/zookeeper.properties 
PRE-CREATION 
  flume-ng-channels/pom.xml dc8dbc6 
  flume-ng-sinks/flume-ng-kafka-sink/pom.xml 746a395 
  
flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/util/KafkaConsumer.java
 1c98922 
  
flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/util/TestUtil.java
 8855c53 
  pom.xml 4f550d3 

Diff: https://reviews.apache.org/r/26820/diff/


Testing
---

Added tests that simulate a Kafka cluster.


Thanks,

Hari Shreedharan



[jira] [Commented] (FLUME-2501) Updating HttpClient lib version to ensure compat with Solr

2014-10-16 Thread wolfgang hoschek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174143#comment-14174143
 ] 

wolfgang hoschek commented on FLUME-2501:
-

Similar reason here - Flume in CDH 5 uses 4.2.5.

>   at org.kitesdk.morphline.stdlib.Pipe.(Pipe.java:46)
>   at org.kitesdk.morphline.stdlib.PipeBuilder.build(PipeBuilder.java:40)
>   at org.kitesdk.morphline.base.Compiler.compile(Compiler.java:126)
>   at org.kitesdk.morphline.base.Compiler.compile(Compiler.java:55)
>   at 
> org.apache.flume.sink.solr.morphline.MorphlineHandlerImpl.configure(MorphlineHandlerImpl.java:101)
>   at 
> org.apache.flume.sink.solr.morphline.MorphlineSink.start(MorphlineSink.java:97)
>   at 
> org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
>   at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-2501) Updating HttpClient lib version to ensure compat with Solr

2014-10-16 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174140#comment-14174140
 ] 

Roshan Naik commented on FLUME-2501:


Wanted to keep version changes to a minimum to avoid unexpected breakage since 
other components also depend on these libs. Did some tests with the settings in 
the patch and it seemed to work out fine.


[jira] [Created] (FLUME-2506) LoadBalancingRpcClient should allow for preferred hosts in selector

2014-10-16 Thread brian remedios (JIRA)
brian remedios created FLUME-2506:
-

 Summary: LoadBalancingRpcClient should allow for preferred hosts 
in selector
 Key: FLUME-2506
 URL: https://issues.apache.org/jira/browse/FLUME-2506
 Project: Flume
  Issue Type: Improvement
  Components: Client SDK
Reporter: brian remedios


The current HostSelector interface relies on an iterator to return the next 
host without regard to the incoming event data. For clients that use internal 
caches to speed up processing, or are otherwise optimized for specific events, it 
would be useful if host selection could depend on a quick examination of the event 
by the selector beforehand. Allowing support for host preferences (stickiness) 
would keep individual caches much tighter, but would necessitate replacing the 
iterator with something that is given the actual event:

{code}
public void append(Event event) throws EventDeliveryException {

  boolean eventSent = false;
  HostProvider provider = selector.getHostProvider();

  HostInfo info = provider.hostFor(event);   // host-specific per event
  int attempts = 0;

  while (info != null && attempts < MAX) {
    attempts++;

    try {
      RpcClient client = getClient(info);
      client.append(event);
      eventSent = true;
      break;

    } catch (Exception ex) {
      selector.informFailure(info);
      LOGGER.warn("Failed to send event to host " + info, ex);
      info = provider.hostFor(event); // try another (if available)
    }
  }

  if (!eventSent) {
    throw new EventDeliveryException("Failed to send event after " + attempts + " attempts");
  }
}
{code}
The new HostProvider interface could wrap the current round-robin and random 
allocation strategies by default to avoid breaking existing code, while other 
implementations could take advantage of event specificity to choose the host.
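As a hypothetical sketch of how such a HostProvider might behave — every name here 
(HostProvider, Event, the provider classes) is an assumption for illustration, not 
code from Flume — the default wrapper preserves round-robin behaviour while a sticky 
variant keys the host off the event:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the proposed event-aware host selection.
public class HostProviderSketch {

    // Stand-in for Flume's Event; only the body matters for this sketch.
    interface Event { byte[] getBody(); }

    // The proposed interface: host selection may inspect the event.
    interface HostProvider { String hostFor(Event event); }

    // Default wrapper: ignores the event and round-robins, so existing
    // selector behaviour is preserved for code that does not care.
    static class RoundRobinProvider implements HostProvider {
        private final List<String> hosts;
        private final AtomicInteger next = new AtomicInteger();
        RoundRobinProvider(List<String> hosts) { this.hosts = hosts; }
        public String hostFor(Event event) {
            return hosts.get(Math.floorMod(next.getAndIncrement(), hosts.size()));
        }
    }

    // Sticky wrapper: hashes the event body so the same event key always
    // maps to the same host, keeping per-host caches tight.
    static class StickyProvider implements HostProvider {
        private final List<String> hosts;
        StickyProvider(List<String> hosts) { this.hosts = hosts; }
        public String hostFor(Event event) {
            return hosts.get(Math.floorMod(Arrays.hashCode(event.getBody()), hosts.size()));
        }
    }

    public static void main(String[] args) {
        List<String> hosts = Arrays.asList("h1", "h2", "h3");
        Event e = () -> "user-42".getBytes();

        HostProvider rr = new RoundRobinProvider(hosts);
        System.out.println(rr.hostFor(e)); // h1
        System.out.println(rr.hostFor(e)); // h2

        HostProvider sticky = new StickyProvider(hosts);
        // Same event always resolves to the same host.
        System.out.println(sticky.hostFor(e).equals(sticky.hostFor(e))); // true
    }
}
```

The sticky mapping here is a naive hash; a real implementation would likely hash a 
header or key rather than the whole body.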





[jira] [Commented] (FLUME-2500) Add a channel that uses Kafka

2014-10-16 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173787#comment-14173787
 ] 

Jarek Jarcec Cecho commented on FLUME-2500:
---

Could you also post the patch to review board [~hshreedharan]?

> Add a channel that uses Kafka 
> --
>
> Key: FLUME-2500
> URL: https://issues.apache.org/jira/browse/FLUME-2500
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Hari Shreedharan
> Attachments: FLUME-2500.patch
>
>
> Here is the rationale:
> - Kafka does give an HA channel, which means a dead agent does not affect the 
> data in the channel - thus reducing the delay of delivery.
> - Kafka is used by many companies - it would be a good idea to use Flume to 
> pull data from Kafka and write it to HDFS/HBase etc. 
> This channel is not going to be useful for cases where Kafka is not already 
> used, since it brings in the operational overhead of maintaining two systems, but 
> if Kafka is already in use, this is a good way to integrate Kafka and Flume.
> Here is a scratch implementation: 
> https://github.com/harishreedharan/flume/blob/kafka-channel/flume-ng-channels/flume-kafka-channel/src/main/java/org/apache/flume/channel/kafka/KafkaChannel.java
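
A minimal agent configuration for the proposed channel might look like the sketch 
below; the channel class name comes from the linked scratch implementation, but the 
property names (brokerList, zookeeperConnect, topic) are assumptions modeled on 
typical Kafka clients and should be checked against the final patch:

{noformat}
# sketch only - property names are assumptions, verify against the patch
agent.channels = kc
agent.channels.kc.type = org.apache.flume.channel.kafka.KafkaChannel
agent.channels.kc.brokerList = kafka-broker:9092
agent.channels.kc.zookeeperConnect = zk-host:2181
agent.channels.kc.topic = flume-channel
{noformat}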





[jira] [Commented] (FLUME-1585) Document the load balancing sink processor

2014-10-16 Thread Ashish Paliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173631#comment-14173631
 ] 

Ashish Paliwal commented on FLUME-1585:
---

[~roshan_naik] priority and penalty options are available for Failover Sink 
Processor.
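
For reference, a failover sink group using those options looks roughly like this 
(sink names are placeholders; a higher priority value marks the preferred sink, 
and maxpenalty caps the backoff applied to a failed sink):

{noformat}
agent.sinkgroups.group-0.sinks = sink-0 sink-1
agent.sinkgroups.group-0.processor.type = failover
agent.sinkgroups.group-0.processor.priority.sink-0 = 10
agent.sinkgroups.group-0.processor.priority.sink-1 = 5
agent.sinkgroups.group-0.processor.maxpenalty = 10000
{noformat}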

> Document the load balancing sink processor
> --
>
> Key: FLUME-1585
> URL: https://issues.apache.org/jira/browse/FLUME-1585
> Project: Flume
>  Issue Type: Documentation
>Reporter: Mike Percy
>Assignee: Ashish Paliwal
>
> We need to document the load balancing sink processor, including backoff 
> options.
> Example config:
> {noformat}
> # test file
> agent.channels = ch-0
> agent.sources = src-0
> agent.sinks = sink-0 sink-1 sink-2
> agent.sinkgroups = group-0
> agent.channels.ch-0.type = memory
> agent.channels.ch-0.capacity = 1
> agent.sources.src-0.type = netcat
> agent.sources.src-0.channels = ch-0
> agent.sources.src-0.bind = 0.0.0.0
> agent.sources.src-0.port = 10002
> agent.sinkgroups.group-0.sinks = sink-0 sink-1 sink-2
> agent.sinkgroups.group-0.processor.type = load_balance
> agent.sinkgroups.group-0.processor.selector = round_robin
> agent.sinkgroups.group-0.processor.backoff = true
> agent.sinks.sink-0.type = avro
> agent.sinks.sink-0.channel = ch-0
> agent.sinks.sink-0.hostname = 127.0.0.1
> agent.sinks.sink-0.port = 999
> agent.sinks.sink-1.type = avro
> agent.sinks.sink-1.channel = ch-0
> agent.sinks.sink-1.hostname = 127.0.0.1
> agent.sinks.sink-1.port = 999
> agent.sinks.sink-2.type = logger
> agent.sinks.sink-2.channel = ch-0
> {noformat}





[jira] [Comment Edited] (FLUME-2501) Updating HttpClient lib version to ensure compat with Solr

2014-10-16 Thread wolfgang hoschek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173554#comment-14173554
 ] 

wolfgang hoschek edited comment on FLUME-2501 at 10/16/14 9:13 AM:
---

How about updating both httpclient and httpcore to 4.2.5? I know that this 
combination works in production, whereas I don't know about 4.2.3.


was (Author: whoschek):
How about updating to httpclient to 4.2.5 instead of 4.2.3? Reason is because I 
know that the former works in production, whereas I don't know about the latter.
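
Expressed as pom.xml changes, pinning both artifacts to the suggested version would 
look roughly like the sketch below (version per the comment above; verify it matches 
what the target Solr release ships):

{code}
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.2.5</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.2.5</version>
</dependency>
{code}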


[jira] [Commented] (FLUME-2501) Updating HttpClient lib version to ensure compat with Solr

2014-10-16 Thread wolfgang hoschek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173554#comment-14173554
 ] 

wolfgang hoschek commented on FLUME-2501:
-

How about updating httpclient to 4.2.5 instead of 4.2.3? I know that the former 
works in production, whereas I don't know about the latter.


[jira] [Commented] (FLUME-2503) hadoop-1 profile is broken in Morphline Sink

2014-10-16 Thread wolfgang hoschek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173505#comment-14173505
 ] 

wolfgang hoschek commented on FLUME-2503:
-

+1. The patch looks good to me, and unit tests continue to pass under both the 
hadoop-1 and hadoop-2 profiles.

> hadoop-1 profile is broken in Morphline Sink
> 
>
> Key: FLUME-2503
> URL: https://issues.apache.org/jira/browse/FLUME-2503
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Hari Shreedharan
> Attachments: FLUME-2503.patch
>
>
> kite-morphlines-solr-core test-jar must also exclude hadoop-common:
> [INFO] Downloading: 
> https://repository.apache.org/releases/org/eclipse/jetty/jetty-xml/8.1.8.v20121106/jetty-xml-8.1.8.v20121106.pom
> [INFO] Downloading: 
> http://repo.maven.apache.org/maven2/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
> [INFO] Downloading: 
> https://oss.sonatype.org/content/repositories/releases/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
> [INFO] Downloading: 
> https://repository.apache.org/releases/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
> [INFO] Downloading: 
> https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
> [INFO] Downloading: 
> http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
> [INFO] Downloading: 
> http://repository.jboss.org/nexus/content/groups/public/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
> [INFO] Downloading: 
> http://maven.twttr.com/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
> [INFO] Downloading: 
> http://maven.restlet.org/org/apache/hadoop/hadoop-common/1.0.1/hadoop-common-1.0.1.pom
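
The fix described in the summary amounts to adding an exclusion on the test-jar 
dependency, roughly like this sketch (coordinates assumed from the artifact names 
above, not copied from the actual patch):

{code}
<dependency>
  <groupId>org.kitesdk</groupId>
  <artifactId>kite-morphlines-solr-core</artifactId>
  <type>test-jar</type>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}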





[jira] [Commented] (FLUME-2486) TestExecSource fails on some environments

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173497#comment-14173497
 ] 

Hudson commented on FLUME-2486:
---

SUCCESS: Integrated in Flume-trunk-hbase-98 #38 (See 
[https://builds.apache.org/job/Flume-trunk-hbase-98/38/])
FLUME-2486. TestExecSource fails on some environments (hshreedharan: 
http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git&a=commit&h=f99adaabcf9d6c6b8f4c8fe5895fe478c8307694)
* flume-ng-core/src/test/java/org/apache/flume/source/TestExecSource.java


> TestExecSource fails on some environments
> -
>
> Key: FLUME-2486
> URL: https://issues.apache.org/jira/browse/FLUME-2486
> Project: Flume
>  Issue Type: Bug
>Affects Versions: v1.5.0.1
>Reporter: Santiago M. Mola
>Assignee: Santiago M. Mola
>  Labels: easyfix, patch
> Fix For: v1.6.0
>
> Attachments: FLUME-2486-0.patch
>
>
> This seems to occur on slow platforms (e.g. Travis). A workaround already 
> existed for Windows.
> https://travis-ci.org/Stratio/flume/jobs/36817397#L6143
> testShellCommandSimple(org.apache.flume.source.TestExecSource) Time elapsed: 
> 1254 sec <<< FAILURE!
> java.lang.AssertionError: array lengths differed, expected.length=5 
> actual.length=0
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.junit.internal.ComparisonCriteria.assertArraysAreSameLength(ComparisonCriteria.java:70)
> at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:34)
> at org.junit.Assert.internalArrayEquals(Assert.java:416)
> at org.junit.Assert.assertArrayEquals(Assert.java:168)
> at org.junit.Assert.assertArrayEquals(Assert.java:185)
> at 
> org.apache.flume.source.TestExecSource.runTestShellCmdHelper(TestExecSource.java:360)
> at 
> org.apache.flume.source.TestExecSource.testShellCommandSimple(TestExecSource.java:152)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
> at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
> at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)





[jira] [Commented] (FLUME-2486) TestExecSource fails on some environments

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173487#comment-14173487
 ] 

Hudson commented on FLUME-2486:
---

UNSTABLE: Integrated in flume-trunk #679 (See 
[https://builds.apache.org/job/flume-trunk/679/])
FLUME-2486. TestExecSource fails on some environments (hshreedharan: 
http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git&a=commit&h=f99adaabcf9d6c6b8f4c8fe5895fe478c8307694)
* flume-ng-core/src/test/java/org/apache/flume/source/TestExecSource.java


> TestExecSource fails on some environments
> -
>
> Key: FLUME-2486
> URL: https://issues.apache.org/jira/browse/FLUME-2486
> Project: Flume
>  Issue Type: Bug
>Affects Versions: v1.5.0.1
>Reporter: Santiago M. Mola
>Assignee: Santiago M. Mola
>  Labels: easyfix, patch
> Fix For: v1.6.0
>
> Attachments: FLUME-2486-0.patch
>
>
> This seems to occur on slow platforms (e.g. Travis). A workaround already 
> existed for Windows.
> https://travis-ci.org/Stratio/flume/jobs/36817397#L6143
> testShellCommandSimple(org.apache.flume.source.TestExecSource) Time elapsed: 1254 sec <<< FAILURE!
> java.lang.AssertionError: array lengths differed, expected.length=5 actual.length=0
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.internal.ComparisonCriteria.assertArraysAreSameLength(ComparisonCriteria.java:70)
> at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:34)
> at org.junit.Assert.internalArrayEquals(Assert.java:416)
> at org.junit.Assert.assertArrayEquals(Assert.java:168)
> at org.junit.Assert.assertArrayEquals(Assert.java:185)
> at org.apache.flume.source.TestExecSource.runTestShellCmdHelper(TestExecSource.java:360)
> at org.apache.flume.source.TestExecSource.testShellCommandSimple(TestExecSource.java:152)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
> at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
> at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
> at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
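The failure mode above — `assertArrayEquals` seeing an empty array because the test asserts before the exec'd shell command has produced output on a slow platform — is a classic fixed-sleep race. The actual fix is in the attached FLUME-2486-0.patch and is not reproduced here; as a general illustration only (hypothetical names `waitFor`, `timeoutMs`, `pollMs`, not Flume's API), a bounded polling wait removes the dependence on machine speed:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/**
 * Illustrative sketch, not the FLUME-2486 patch: poll for a condition
 * with a deadline instead of asserting after a fixed sleep.
 */
public class WaitFor {

    // Polls the condition every pollMs until it returns true or
    // timeoutMs elapses; returns the final state of the condition.
    static boolean waitFor(Supplier<Boolean> condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            TimeUnit.MILLISECONDS.sleep(pollMs);
        }
        return condition.get(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms, well within the 2 s budget,
        // so this succeeds on both fast and slow machines.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50, 2000, 10);
        System.out.println(ok ? "condition met" : "timed out");
    }
}
```

In a test like `runTestShellCmdHelper`, the condition would be "the channel has drained the expected number of events"; the assertion on the array contents then runs only once the data is known to be present, which is the same idea the existing Windows workaround applied.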



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is still unstable: flume-trunk #679

2014-10-16 Thread Apache Jenkins Server
See