Re: [DISCUSS] Flume 1.8 release proposal

2017-09-12 Thread Ralph Goers

> On Sep 12, 2017, at 7:46 AM, Tristan Stevens  wrote:
> 
> FWIW:
> I’m not overly in favour of rushing a release out when we have many JIRAs
> with patches submitted, especially when some of those are +1’d by multiple
> reviewers.

Rushing a release? It has been a year since the last release. For most open 
source projects that is an eternity.

> 
> This doesn’t speak of a mature and active community process - I’d much
> rather that we pause a little, have a big effort by the committers to get
> these patches in - especially as this will improve buy-in for those who
> have spent time developing patches. I, for one, would be much more likely
> to be a more active member of the community if I could see my patches
> getting committed, rather than spending years in ‘patch available’.
> 
> Would anyone be in favour of a concerted push to get some of these types of
> JIRAs committed?

From my experience you will not gain anything. People have had a year to fix 
the things that interest them. Waiting a bit longer just means more things will 
come in that people think are important.

The solution is the same as it always has been. Release early and release 
often.  You wouldn’t even be suggesting this if you knew another release would 
be coming next month.

Ralph


[jira] [Commented] (FLUME-3053) More than one sink can get events from a single channel, but the user guide doesn't mention it.

2017-09-12 Thread Liam (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16164087#comment-16164087
 ] 

Liam commented on FLUME-3053:
-

Hi,
  When the sink is slower than the source, we can attach multiple sinks to a 
single channel, which gives better throughput. The user guide doesn't mention 
this, so people who have just started using Flume may not know about the 
feature, so I think it should be added to the documentation. :D

> More than one sink can get events from a single channel, but the user guide 
> doesn't mention it. 
> ---
>
> Key: FLUME-3053
> URL: https://issues.apache.org/jira/browse/FLUME-3053
> Project: Flume
>  Issue Type: Documentation
>  Components: Docs
>Reporter: Liam
>Priority: Minor
> Fix For: 1.9.0
>
>
> We can improve the throughput by configuring more than one sink for a single 
> channel, for example:
> server.sources = r1 
> server.sinks =  k1 k2 k3 
> server.channels = c1 
> server.sinks.k1.channel = c1
> server.sinks.k2.channel = c1
> server.sinks.k3.channel = c1
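
For reference, a complete single-channel, multi-sink agent definition might look 
like the sketch below. The netcat source, memory channel and null sinks are 
placeholders chosen only to make the example self-contained; substitute the real 
component types as needed.

{code}
server.sources = r1
server.channels = c1
server.sinks = k1 k2 k3

# Placeholder source; any source type works the same way.
server.sources.r1.type = netcat
server.sources.r1.bind = 0.0.0.0
server.sources.r1.port = 44444
server.sources.r1.channels = c1

server.channels.c1.type = memory
server.channels.c1.capacity = 10000

# Placeholder sinks; all three drain the same channel, and each sink runner
# pulls events independently, so a single slow sink no longer limits throughput.
server.sinks.k1.type = null
server.sinks.k1.channel = c1
server.sinks.k2.type = null
server.sinks.k2.channel = c1
server.sinks.k3.type = null
server.sinks.k3.channel = c1
{code}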



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: Flume-trunk-hbase-1 #323

2017-09-12 Thread Apache Jenkins Server
See 


Changes:

[denes] FLUME-3046. Kafka Sink and Source Configuration Improvements

--
[...truncated 229.58 KB...]
[INFO] Deleting 

[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) 
@ flume-ng-clients ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
flume-ng-clients ---
[INFO] 
[INFO] --- apache-rat-plugin:0.11:check (verify.rat) @ flume-ng-clients ---
[INFO] 52 implicit excludes (use -debug for more details).
[INFO] Exclude: **/.idea/
[INFO] Exclude: **/*.iml
[INFO] Exclude: **/nb-configuration.xml
[INFO] Exclude: .git/
[INFO] Exclude: patchprocess/
[INFO] Exclude: .gitignore
[INFO] Exclude: .repository/
[INFO] Exclude: **/*.diff
[INFO] Exclude: **/*.patch
[INFO] Exclude: **/*.avsc
[INFO] Exclude: **/*.avro
[INFO] Exclude: **/docs/**
[INFO] Exclude: **/test/resources/**
[INFO] Exclude: **/.settings/*
[INFO] Exclude: **/.classpath
[INFO] Exclude: **/.project
[INFO] Exclude: **/target/**
[INFO] Exclude: **/derby.log
[INFO] Exclude: **/metastore_db/
[INFO] 1 resources included (use -debug for more details)
[INFO] Rat check: Summary of files. Unapproved: 0 unknown: 0 generated: 0 
approved: 1 licence.
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (verify) @ flume-ng-clients ---
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
flume-ng-clients ---
[INFO] Installing 
 
to 

[INFO] 
[INFO] 
[INFO] Building Flume NG Log4j Appender 1.8.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ 
flume-ng-log4jappender ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) 
@ flume-ng-log4jappender ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
flume-ng-log4jappender ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ 
flume-ng-log4jappender ---
[INFO] Compiling 3 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
flume-ng-log4jappender ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 7 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ 
flume-ng-log4jappender ---
[INFO] Compiling 3 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ 
flume-ng-log4jappender ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.flume.clients.log4jappender.TestLoadBalancingLog4jAppender
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.305 sec
Running org.apache.flume.clients.log4jappender.TestLog4jAppenderWithAvro
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.115 sec
Running org.apache.flume.clients.log4jappender.TestLog4jAppender
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.105 sec

Results :

Tests run: 20, Failures: 0, Errors: 0, Skipped: 0

[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:3.0.0:jar (default-jar) @ flume-ng-log4jappender ---
[INFO] Building jar: 

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Flume checkstyle project 

[jira] [Commented] (FLUME-3046) Kafka Sink and Source Configuration Improvements

2017-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163646#comment-16163646
 ] 

Hudson commented on FLUME-3046:
---

FAILURE: Integrated in Jenkins build Flume-trunk-hbase-1 #323 (See 
[https://builds.apache.org/job/Flume-trunk-hbase-1/323/])
FLUME-3046. Kafka Sink and Source Configuration Improvements (denes: 
[http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git=commit=54e2728a8e141ee63704018c4497bbe083c0f75f])
* (edit) flume-ng-doc/sphinx/FlumeUserGuide.rst
* (edit) 
flume-ng-sinks/flume-ng-kafka-sink/src/main/java/org/apache/flume/sink/kafka/KafkaSinkConstants.java
* (edit) 
flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java
* (edit) 
flume-ng-sources/flume-kafka-source/src/main/java/org/apache/flume/source/kafka/KafkaSource.java
* (edit) 
flume-ng-sources/flume-kafka-source/src/main/java/org/apache/flume/source/kafka/KafkaSourceConstants.java
* (edit) 
flume-ng-sinks/flume-ng-kafka-sink/src/main/java/org/apache/flume/sink/kafka/KafkaSink.java
* (edit) 
flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/TestKafkaSink.java


> Kafka Sink and Source Configuration Improvements
> 
>
> Key: FLUME-3046
> URL: https://issues.apache.org/jira/browse/FLUME-3046
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Jeff Holoman
>Assignee: Tristan Stevens
> Fix For: 1.8.0
>
>
> Currently the Kafka Source sets the header for the topic. The sink reads this 
> value, rather than the statically defined topic value. We should fix this so 
> that you can either change the topic header that is used, or just choose to 
> prefer the statically defined topic in the sink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Flume 1.8 release proposal

2017-09-12 Thread Tristan Stevens
Hi Denes,
+1 from me.

I’m really in support of a much more regular release cadence, especially
including maintenance releases. If there’s anything I can do to help burn
down the backlog over the coming months, then please let me know.

Let’s ship this one, then get cracking with 1.8.1.

Tristan

On 12 September 2017 at 17:15:05, Denes Arvay (de...@cloudera.com) wrote:

Hi Tristan,

I understand your concerns. I do think, though, that most of the low
hanging fruits have been committed already and there are no uncommitted
tickets with explicit +1s.

I'd like to propose to continue with the release process as planned, given
44 tickets have been resolved since the 1.7 release, plus a couple of minor
changes have been committed without Jira tickets.
I think the community has become a bit more active than it was some months
ago, and I'd like to use this momentum to schedule releases more often. It'd
be nice to introduce maintenance releases (e.g. release 1.8.1 in about a
month) and have more frequent minor releases - let's say one every 3 months.
A faster release cadence would mean that doing a release takes a lot less
effort, and we'd do the Jira grooming more often, which eventually could lead
to a clean backlog and would let us set realistic expectations for
contributors and users on when their changes or requested features will get
into a release.
But unfortunately we are not there yet, and I think this is the reason why
some patches have been in Patch available state for months, which - I do
agree - is frustrating.

I think working through this release process is a step in the direction
outlined above, so I'd like to move forward with it.

What do you think?

Thanks,
Denes


On Tue, Sep 12, 2017 at 4:46 PM Tristan Stevens 
wrote:

> FWIW:
> I’m not overly in favour of rushing a release out when we have many JIRAs
> with patches submitted, especially when some of those are +1’d by multiple
> reviewers.
>
> This doesn’t speak of a mature and active community process - I’d much
> rather that we pause a little, have a big effort by the committers to get
> these patches in - especially as this will improve buy-in for those who
> have spent time developing patches. I, for one, would be much more likely
> to be a more active member of the community if I could see my patches
> getting committed, rather than spending years in ‘patch available’.
>
> Would anyone be in favour of a concerted push to get some of these types of
> JIRAs committed?
>
> Tristan
>
> On 12 September 2017 at 14:39:45, Ferenc Szabo (fsz...@cloudera.com)
> wrote:
>
> FYI:
> jira issues have been moved to the new fix versions:
> 1.8.1:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.8.1
> 1.9.0:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.9.0
>
>
> On Tue, Sep 12, 2017 at 12:17 PM, Mike Percy  wrote:
>
> > I added the additional versions earlier today but neglected to notify the
> > list until just now. :)
> >
> > +1 on the plan. Thanks for keeping us updated and continuing to drive
> this
> > release, Denes!
> >
> > Mike
> >
> > On Tue, Sep 12, 2017 at 3:04 AM, Denes Arvay  wrote:
> >
> > > Hi Donat,
> > >
> > > Thanks for your help.
> > > Ferenc Szabo is already working on the retargeting, but it's definitely
> a
> > > good advice to do it in bulk to avoid spamming the lists.
> > >
> > > We have the following action items:
> > > - retarget the tickets
> > > - fix the blockers: there is only one left which I'm aware of:
> > > https://issues.apache.org/jira/browse/FLUME-3174. To fix that
> upgrading
> > > the
> > > joda-time is in progress: https://github.com/apache/flume/pull/169
> > > - the user guide is broken (netcat udp source's table is malformed),
> I'm
> > > fixing it
> > > - https://github.com/apache/flume/pull/168 needs to be committed. I've
> > > seen
> > > that you've already commented on it, thank you, will reply soon.
> > (Spoiler:
> > > a lot of effort, unfortunately)
> > > - some minor changes need to be done in the documentation (e.g. fixing
> > the
> > > copyright dates, removing/updating the version references in the user
> > > guide). If anybody in the community feels like doing it I'd be more
> than
> > > happy to review & commit the changes.
> > > - once these are done I'm going to create the 1.8 branch and create the
> > RC1
> > > release artifact. I'll announce the branching in advance to the dev@
> > list.
> > >
> > > Thank you,
> > > Denes
> > >
> > >
> > > On Tue, Sep 12, 2017 at 11:18 AM Bessenyei Balázs Donát <
> > bes...@apache.org
> > > >
> > > wrote:
> > >
> > > > Hi Denes,
> > > >
> > > > It seems to me that 1.8.1 and 1.9.0 releases already exist in our
> JIRA.
> > > >
> > > > Regarding the retargeting: I'd be happy to batch-edit the necessary
> > > > tickets in order to avoid spamming the mailing lists.
> > > > Once you have a list of 

[jira] [Commented] (FLUME-3046) Kafka Sink and Source Configuration Improvements

2017-09-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163558#comment-16163558
 ] 

ASF GitHub Bot commented on FLUME-3046:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flume/pull/105


> Kafka Sink and Source Configuration Improvements
> 
>
> Key: FLUME-3046
> URL: https://issues.apache.org/jira/browse/FLUME-3046
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Jeff Holoman
>Assignee: Tristan Stevens
> Fix For: 1.8.0
>
>
> Currently the Kafka Source sets the header for the topic. The sink reads this 
> value, rather than the statically defined topic value. We should fix this so 
> that you can either change the topic header that is used, or just choose to 
> prefer the statically defined topic in the sink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLUME-3046) Kafka Sink and Source Configuration Improvements

2017-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163557#comment-16163557
 ] 

ASF subversion and git services commented on FLUME-3046:


Commit 54e2728a8e141ee63704018c4497bbe083c0f75f in flume's branch 
refs/heads/trunk from [~tmgstev]
[ https://git-wip-us.apache.org/repos/asf?p=flume.git;h=54e2728 ]

FLUME-3046. Kafka Sink and Source Configuration Improvements

This patch fixes the infinite loop between Kafka source and Kafka sink
by introducing the following configuration parameters in those components:
- topicHeader in Kafka source to specify the name of the header in which it
  stores the topic the event came from.
- setTopicHeader in Kafka source to control whether the topic name is stored
  in the given header.
- topicHeader in Kafka sink to configure the name of the header that
  specifies which topic to send the event to.
- allowTopicOverride in Kafka sink to control whether the target topic's name
  can be overridden by the specified header.

This closes #105

Reviewers: Attila Simon

(Tristan Stevens via Denes Arvay)
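
For illustration, an agent snippet exercising the new parameters could look like 
this (the parameter names come from the commit message above; the component 
prefixes, topic names and values are assumptions):

{code}
# Source: keep storing the origin topic, but under a custom header name.
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.setTopicHeader = true
a1.sources.r1.topicHeader = sourceTopic

# Sink: never honour the topic header; always write to the static topic,
# which breaks the source -> sink loop described above.
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = processedEvents
a1.sinks.k1.allowTopicOverride = false
{code}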


> Kafka Sink and Source Configuration Improvements
> 
>
> Key: FLUME-3046
> URL: https://issues.apache.org/jira/browse/FLUME-3046
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Jeff Holoman
>Assignee: Tristan Stevens
> Fix For: 1.9.0
>
>
> Currently the Kafka Source sets the header for the topic. The sink reads this 
> value, rather than the statically defined topic value. We should fix this so 
> that you can either change the topic header that is used, or just choose to 
> prefer the statically defined topic in the sink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flume pull request #105: FLUME-3046: Kafka Sink and Source Configuration Imp...

2017-09-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flume/pull/105


---


Jenkins build is unstable: Flume-trunk-hbase-1 #322

2017-09-12 Thread Apache Jenkins Server
See 




[GitHub] flume pull request #170: Fix NetCat UDP Source table in FlumeUserGuide.rst

2017-09-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flume/pull/170


---


Re: [DISCUSS] Flume 1.8 release proposal

2017-09-12 Thread Denes Arvay
Hi Tristan,

I understand your concerns. I do think, though, that most of the low
hanging fruits have been committed already and there are no uncommitted
tickets with explicit +1s.

I'd like to propose to continue with the release process as planned, given
44 tickets have been resolved since the 1.7 release, plus a couple of minor
changes have been committed without Jira tickets.
I think the community has become a bit more active than it was some months
ago, and I'd like to use this momentum to schedule releases more often. It'd
be nice to introduce maintenance releases (e.g. release 1.8.1 in about a
month) and have more frequent minor releases - let's say one every 3 months.
A faster release cadence would mean that doing a release takes a lot less
effort, and we'd do the Jira grooming more often, which eventually could lead
to a clean backlog and would let us set realistic expectations for
contributors and users on when their changes or requested features will get
into a release.
But unfortunately we are not there yet, and I think this is the reason why
some patches have been in Patch available state for months, which - I do
agree - is frustrating.

I think working through this release process is a step in the direction
outlined above, so I'd like to move forward with it.

What do you think?

Thanks,
Denes


On Tue, Sep 12, 2017 at 4:46 PM Tristan Stevens 
wrote:

> FWIW:
> I’m not overly in favour of rushing a release out when we have many JIRAs
> with patches submitted, especially when some of those are +1’d by multiple
> reviewers.
>
> This doesn’t speak of a mature and active community process - I’d much
> rather that we pause a little, have a big effort by the committers to get
> these patches in - especially as this will improve buy-in for those who
> have spent time developing patches. I, for one, would be much more likely
> to be a more active member of the community if I could see my patches
> getting committed, rather than spending years in ‘patch available’.
>
> Would anyone be in favour of a concerted push to get some of these types of
> JIRAs committed?
>
> Tristan
>
> On 12 September 2017 at 14:39:45, Ferenc Szabo (fsz...@cloudera.com)
> wrote:
>
> FYI:
> jira issues have been moved to the new fix versions:
> 1.8.1:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.8.1
> 1.9.0:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.9.0
>
>
> On Tue, Sep 12, 2017 at 12:17 PM, Mike Percy  wrote:
>
> > I added the additional versions earlier today but neglected to notify the
> > list until just now. :)
> >
> > +1 on the plan. Thanks for keeping us updated and continuing to drive
> this
> > release, Denes!
> >
> > Mike
> >
> > On Tue, Sep 12, 2017 at 3:04 AM, Denes Arvay  wrote:
> >
> > > Hi Donat,
> > >
> > > Thanks for your help.
> > > Ferenc Szabo is already working on the retargeting, but it's definitely
> a
> > > good advice to do it in bulk to avoid spamming the lists.
> > >
> > > We have the following action items:
> > > - retarget the tickets
> > > - fix the blockers: there is only one left which I'm aware of:
> > > https://issues.apache.org/jira/browse/FLUME-3174. To fix that
> upgrading
> > > the
> > > joda-time is in progress: https://github.com/apache/flume/pull/169
> > > - the user guide is broken (netcat udp source's table is malformed),
> I'm
> > > fixing it
> > > - https://github.com/apache/flume/pull/168 needs to be committed. I've
> > > seen
> > > that you've already commented on it, thank you, will reply soon.
> > (Spoiler:
> > > a lot of effort, unfortunately)
> > > - some minor changes need to be done in the documentation (e.g. fixing
> > the
> > > copyright dates, removing/updating the version references in the user
> > > guide). If anybody in the community feels like doing it I'd be more
> than
> > > happy to review & commit the changes.
> > > - once these are done I'm going to create the 1.8 branch and create the
> > RC1
> > > release artifact. I'll announce the branching in advance to the dev@
> > list.
> > >
> > > Thank you,
> > > Denes
> > >
> > >
> > > On Tue, Sep 12, 2017 at 11:18 AM Bessenyei Balázs Donát <
> > bes...@apache.org
> > > >
> > > wrote:
> > >
> > > > Hi Denes,
> > > >
> > > > It seems to me that 1.8.1 and 1.9.0 releases already exist in our
> JIRA.
> > > >
> > > > Regarding the retargeting: I'd be happy to batch-edit the necessary
> > > > tickets in order to avoid spamming the mailing lists.
> > > > Once you have a list of actions you'd like to do, please let us know.
> > > >
> > > >
> > > > Thank you,
> > > >
> > > > Donat
> > > >
> > > > 2017-09-11 19:39 GMT+02:00 Denes Arvay :
> > > > > Hi All,
> > > > >
> > > > > I'd like to let you know that we are planning to cut the 1.8 branch
> > > > > tomorrow around 2am PDT.
> > > > > If you think that there any important tickets 

[GitHub] flume pull request #171: Set the copyright date dynamically in documentation...

2017-09-12 Thread adenes
GitHub user adenes opened a pull request:

https://github.com/apache/flume/pull/171

Set the copyright date dynamically in documentation footer

Display "2009-current year" instead of the hardcoded "2009-2012".

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/adenes/flume fix-copyright-year

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flume/pull/171.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #171


commit e9ba7c30543cf578673fbbd631f750b778dcf990
Author: Denes Arvay 
Date:   2017-09-12T15:44:13Z

Set the copyright date dynamically in documentation footer

Display "2009-current year" instead of the hardcoded "2009-2012".




---


Build failed in Jenkins: Flume-trunk-hbase-1 #321

2017-09-12 Thread Apache Jenkins Server
See 

--
[...truncated 308.65 KB...]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
failure
Failure executing javac, but could not parse the error:
An exception has occurred in the compiler (1.8.0_144). Please file a bug 
against the Java compiler via the Java bug reporting page 
(http://bugreport.java.com) after checking the Bug Database 
(http://bugs.java.com) for duplicates. Include your program and the following 
diagnostic in your report. Thank you.
java.lang.IncompatibleClassChangeError: vtable stub
at 
com.sun.tools.javac.comp.Resolve.findImmediateMemberType(Resolve.java:1932)
at com.sun.tools.javac.comp.Resolve.findMemberType(Resolve.java:1990)
at 
com.sun.tools.javac.comp.Resolve.findInheritedMemberType(Resolve.java:1961)
at com.sun.tools.javac.comp.Resolve.findMemberType(Resolve.java:1995)
at 
com.sun.tools.javac.comp.Resolve.findInheritedMemberType(Resolve.java:1961)
at com.sun.tools.javac.comp.Resolve.findType(Resolve.java:2059)
at com.sun.tools.javac.comp.Resolve.findIdent(Resolve.java:2110)
at com.sun.tools.javac.comp.Resolve.resolveIdent(Resolve.java:2384)
at com.sun.tools.javac.comp.Attr.visitIdent(Attr.java:3170)
at com.sun.tools.javac.tree.JCTree$JCIdent.accept(JCTree.java:2011)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3250)
at 
com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1897)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3250)
at 
com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1897)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3250)
at 
com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1897)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribArgs(Attr.java:674)
at com.sun.tools.javac.comp.Attr.visitApply(Attr.java:1816)
at 
com.sun.tools.javac.tree.JCTree$JCMethodInvocation.accept(JCTree.java:1465)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribExpr(Attr.java:625)
at com.sun.tools.javac.comp.Attr.visitExec(Attr.java:1593)
at 
com.sun.tools.javac.tree.JCTree$JCExpressionStatement.accept(JCTree.java:1296)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:645)
at com.sun.tools.javac.comp.Attr.attribStats(Attr.java:661)
at com.sun.tools.javac.comp.Attr.visitBlock(Attr.java:1124)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:909)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:645)
at com.sun.tools.javac.comp.Attr.visitIf(Attr.java:1582)
at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1269)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:645)
at com.sun.tools.javac.comp.Attr.attribStats(Attr.java:661)
at com.sun.tools.javac.comp.Attr.visitBlock(Attr.java:1124)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:909)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:645)
at com.sun.tools.javac.comp.Attr.visitTry(Attr.java:1354)
at com.sun.tools.javac.tree.JCTree$JCTry.accept(JCTree.java:1173)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:645)
at com.sun.tools.javac.comp.Attr.attribStats(Attr.java:661)
at com.sun.tools.javac.comp.Attr.visitBlock(Attr.java:1124)
at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:909)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:645)
at com.sun.tools.javac.comp.Attr.visitMethodDef(Attr.java:1013)
at com.sun.tools.javac.tree.JCTree$JCMethodDecl.accept(JCTree.java:778)
at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
at com.sun.tools.javac.comp.Attr.attribStat(Attr.java:645)
at com.sun.tools.javac.comp.Attr.attribClassBody(Attr.java:4364)
at com.sun.tools.javac.comp.Attr.attribClass(Attr.java:4272)
at 

Re: [DISCUSS] Flume 1.8 release proposal

2017-09-12 Thread Tristan Stevens
FWIW:
I’m not overly in favour of rushing a release out when we have many JIRAs
with patches submitted, especially when some of those are +1’d by multiple
reviewers.

This doesn’t speak of a mature and active community process - I’d much
rather that we pause a little, have a big effort by the committers to get
these patches in - especially as this will improve buy-in for those who
have spent time developing patches. I, for one, would be much more likely
to be a more active member of the community if I could see my patches
getting committed, rather than spending years in ‘patch available’.

Would anyone be in favour of a concerted push to get some of these types of
JIRAs committed?

Tristan

On 12 September 2017 at 14:39:45, Ferenc Szabo (fsz...@cloudera.com) wrote:

FYI:
jira issues have been moved to the new fix versions:
1.8.1:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.8.1
1.9.0:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.9.0


On Tue, Sep 12, 2017 at 12:17 PM, Mike Percy  wrote:

> I added the additional versions earlier today but neglected to notify the
> list until just now. :)
>
> +1 on the plan. Thanks for keeping us updated and continuing to drive
this
> release, Denes!
>
> Mike
>
> On Tue, Sep 12, 2017 at 3:04 AM, Denes Arvay  wrote:
>
> > Hi Donat,
> >
> > Thanks for your help.
> > Ferenc Szabo is already working on the retargeting, but it's definitely
a
> > good advice to do it in bulk to avoid spamming the lists.
> >
> > We have the following action items:
> > - retarget the tickets
> > - fix the blockers: there is only one left which I'm aware of:
> > https://issues.apache.org/jira/browse/FLUME-3174. To fix that upgrading
> > the
> > joda-time is in progress: https://github.com/apache/flume/pull/169
> > - the user guide is broken (netcat udp source's table is malformed),
I'm
> > fixing it
> > - https://github.com/apache/flume/pull/168 needs to be committed. I've
> > seen
> > that you've already commented on it, thank you, will reply soon.
> (Spoiler:
> > a lot of effort, unfortunately)
> > - some minor changes need to be done in the documentation (e.g. fixing
> the
> > copyright dates, removing/updating the version references in the user
> > guide). If anybody in the community feels like doing it I'd be more
than
> > happy to review & commit the changes.
> > - once these are done I'm going to create the 1.8 branch and create the
> RC1
> > release artifact. I'll announce the branching in advance to the dev@
> list.
> >
> > Thank you,
> > Denes
> >
> >
> > On Tue, Sep 12, 2017 at 11:18 AM Bessenyei Balázs Donát <
> bes...@apache.org
> > >
> > wrote:
> >
> > > Hi Denes,
> > >
> > > It seems to me that 1.8.1 and 1.9.0 releases already exist in our
JIRA.
> > >
> > > Regarding the retargeting: I'd be happy to batch-edit the necessary
> > > tickets in order to avoid spamming the mailing lists.
> > > Once you have a list of actions you'd like to do, please let us know.
> > >
> > >
> > > Thank you,
> > >
> > > Donat
> > >
> > > 2017-09-11 19:39 GMT+02:00 Denes Arvay :
> > > > Hi All,
> > > >
> > > > I'd like to let you know that we are planning to cut the 1.8 branch
> > > > tomorrow around 2am PDT.
> > > > If you think that there any important tickets targeted to 1.8 still
> > open
> > > > which needs to be reviewed and committed to get into the release,
> > please
> > > > let us know as soon as possible and we'll do our best to push it
> > through.
> > > >
> > > > The ones which couldn't get committed by the branching will be
> > retargeted
> > > > to 1.8.1 or 1.9, depending on their type (i.e. bug fixes will be
> > > retargeted
> > > > to 1.8.1, new features will be scheduled for 1.9).
> > > > For this I'd like to ask our PMC members to create these new
releases
> > in
> > > > Jira, or if it's possible to grant the required Jira permissions to
> me,
> > > I'd
> > > > be more than happy to do this.
> > > >
> > > > Thank you,
> > > > Denes
> > > >
> > > > On Mon, Sep 4, 2017 at 10:21 AM Denes Arvay 
> > wrote:
> > > >
> > > >> Hi Flume Community,
> > > >>
> > > >> Almost a year passed since we've released Flume 1.7.
> > > >> More than 50 commits were pushed since then, including
documentation
> > > >> fixes, many critical bug fixes and several important features, so
> I'd
> > > like
> > > >> to propose to publish the next minor release of Flume.
> > > >>
> > > >> I'd be happy to be the Release Manager with the help of Ferenc
Szabo
> > and
> > > >> Marcell Hegedus who have been quite active recently, and Balazs
> Donat
> > > >> Bessenyei who took the lion's share of the work during the
previous
> > > release
> > > >> - if both community and they are OK with it.
> > > >>
> > > >> Among others the following major changes will be included in the
> next
> > > >> release:
> > > >>
> > > >> Fixed bugs:
> > > >> - FLUME-2857. Make 

[jira] [Commented] (FLUME-3173) Upgrade joda-time

2017-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163031#comment-16163031
 ] 

Hudson commented on FLUME-3173:
---

FAILURE: Integrated in Jenkins build Flume-trunk-hbase-1 #320 (See 
[https://builds.apache.org/job/Flume-trunk-hbase-1/320/])
FLUME-3173. Upgrade joda-time to 2.9.9 (denes: 
[http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git=commit=d434d23dadc411e5d7486447316172c495d70f22])
* (edit) pom.xml


> Upgrade joda-time
> -
>
> Key: FLUME-3173
> URL: https://issues.apache.org/jira/browse/FLUME-3173
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: Azat Nizametdinov
>Assignee: Miklos Csanady
> Fix For: 1.8.0
>
>
> Flume 1.7 depends on joda-time version 2.1, which uses an outdated tz database.  
> For example, the following code
> {code}
> new org.joda.time.DateTime(
> org.joda.time.DateTimeZone.forID("Europe/Moscow")
> ).toString()
> {code}
> returns a time with offset {{+04:00}}, but the Moscow timezone has been UTC+3 since 2014.
> Furthermore, this version of joda-time does not allow specifying a custom tz 
> database folder, in contrast to newer versions.
> It affects {{RegexExtractorInterceptorMillisSerializer}}. Test to reproduce 
> the bug:
> {code}
> public void testMoscowTimezone() throws Exception {
> TimeZone.setDefault(TimeZone.getTimeZone("Europe/Moscow"));
> String pattern = "yyyy-MM-dd HH:mm:ss";
> SimpleDateFormat format = new SimpleDateFormat(pattern);
> String dateStr = "2017-09-10 10:00:00";
> Date expectedDate = format.parse(dateStr);
> RegexExtractorInterceptorMillisSerializer sut = new 
> RegexExtractorInterceptorMillisSerializer();
> Context context = new Context();
> context.put("pattern", pattern);
> sut.configure(context);
> assertEquals(String.valueOf(expectedDate.getTime()), 
> sut.serialize(dateStr));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: Flume-trunk-hbase-1 #320

2017-09-12 Thread Apache Jenkins Server
See 


Changes:

[denes] FLUME-3173. Upgrade joda-time to 2.9.9

--
[...truncated 223.36 KB...]
[INFO] --- apache-rat-plugin:0.11:check (verify.rat) @ flume-avro-source ---
[INFO] 51 implicit excludes (use -debug for more details).
[INFO] Exclude: **/.idea/
[INFO] Exclude: **/*.iml
[INFO] Exclude: **/nb-configuration.xml
[INFO] Exclude: .git/
[INFO] Exclude: patchprocess/
[INFO] Exclude: .gitignore
[INFO] Exclude: .repository/
[INFO] Exclude: **/*.diff
[INFO] Exclude: **/*.patch
[INFO] Exclude: **/*.avsc
[INFO] Exclude: **/*.avro
[INFO] Exclude: **/docs/**
[INFO] Exclude: **/test/resources/**
[INFO] Exclude: **/.settings/*
[INFO] Exclude: **/.classpath
[INFO] Exclude: **/.project
[INFO] Exclude: **/target/**
[INFO] Exclude: **/derby.log
[INFO] Exclude: **/metastore_db/
[INFO] 4 resources included (use -debug for more details)
[INFO] Rat check: Summary of files. Unapproved: 0 unknown: 0 generated: 0 
approved: 4 licence.
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (verify) @ flume-avro-source ---
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
flume-avro-source ---
[INFO] Installing 

 to 

[INFO] Installing 

 to 

[INFO] 
[INFO] 
[INFO] Building Flume legacy Thrift Source 1.8.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ flume-thrift-source 
---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) 
@ flume-thrift-source ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
flume-thrift-source ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ 
flume-thrift-source ---
[INFO] Compiling 5 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
flume-thrift-source ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ 
flume-thrift-source ---
[INFO] Compiling 1 source file to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ 
flume-thrift-source ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.flume.source.thriftLegacy.TestThriftLegacySource
Exception in thread "Thread-361" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.StringBuffer.toString(StringBuffer.java:669)
at java.io.BufferedReader.readLine(BufferedReader.java:359)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at org.codehaus.plexus.util.cli.StreamPumper.run(StreamPumper.java:129)

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[JENKINS] Recording test results
[INFO] 
[INFO] 
[INFO] Building Flume NG Clients 1.8.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- 

[jira] [Commented] (FLUME-3174) HdfsSink AWS S3A authentication does not work on JDK 8

2017-09-12 Thread Marcell Hegedus (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163028#comment-16163028
 ] 

Marcell Hegedus commented on FLUME-3174:


[~mcsanady] [~denes] Thanks for the fix! 
(I did not consider it critical because the workaround is just to replace 
the joda-time jar.)

> HdfsSink AWS S3A authentication does not work on JDK 8
> --
>
> Key: FLUME-3174
> URL: https://issues.apache.org/jira/browse/FLUME-3174
> Project: Flume
>  Issue Type: Bug
>Reporter: Marcell Hegedus
>Priority: Minor
> Fix For: 1.8.0
>
>
> Flume writing to S3A with the following Hdfs Sink configuration fails with 
> AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error 
> Code: 403 Forbidden...
> {code}
> a1.sinks = k1
> a1.sinks.k1.channel = c1
> a1.sinks.k1.type = hdfs
> a1.sinks.k1.hdfs.path = s3a://testflume/logs
> {code}
> AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are provided either in flume-env 
> or in core-site.xml and running "hdfs dfs -ls s3a://testflume/logs" works 
> properly.
> The cause and the fix is documented in 
> [hadoop-aws/index.md|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md#authentication-failures-when-running-on-java-8u60]
> {quote}A change in the Java 8 JVM broke some of the toString() string 
> generation of Joda Time 2.8.0, which stopped the Amazon S3 client from being 
> able to generate authentication headers suitable for validation by S3.
> Fix: Make sure that the version of Joda Time is 2.8.1 or later, or use a new 
> version of Java 8.{quote}
> Tested that authentication is successful with
> * JDK 7
> * JDK 8 + joda-time updated to v2.9.6.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (FLUME-3174) HdfsSink AWS S3A authentication does not work on JDK 8

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay resolved FLUME-3174.

   Resolution: Fixed
Fix Version/s: 1.8.0

I'm marking this resolved as the joda-time version has been bumped in 
FLUME-3173.

> HdfsSink AWS S3A authentication does not work on JDK 8
> --
>
> Key: FLUME-3174
> URL: https://issues.apache.org/jira/browse/FLUME-3174
> Project: Flume
>  Issue Type: Bug
>Reporter: Marcell Hegedus
>Priority: Minor
> Fix For: 1.8.0
>
>
> Flume writing to S3A with the following Hdfs Sink configuration fails with 
> AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error 
> Code: 403 Forbidden...
> {code}
> a1.sinks = k1
> a1.sinks.k1.channel = c1
> a1.sinks.k1.type = hdfs
> a1.sinks.k1.hdfs.path = s3a://testflume/logs
> {code}
> AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are provided either in flume-env 
> or in core-site.xml and running "hdfs dfs -ls s3a://testflume/logs" works 
> properly.
> The cause and the fix is documented in 
> [hadoop-aws/index.md|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md#authentication-failures-when-running-on-java-8u60]
> {quote}A change in the Java 8 JVM broke some of the toString() string 
> generation of Joda Time 2.8.0, which stopped the Amazon S3 client from being 
> able to generate authentication headers suitable for validation by S3.
> Fix: Make sure that the version of Joda Time is 2.8.1 or later, or use a new 
> version of Java 8.{quote}
> Tested that authentication is successful with
> * JDK 7
> * JDK 8 + joda-time updated to v2.9.6.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3135) property logger in org.apache.flume.interceptor.RegexFilteringInterceptor confused

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-3135:
---
Fix Version/s: 1.8.0

> property logger in org.apache.flume.interceptor.RegexFilteringInterceptor 
> confused
> --
>
> Key: FLUME-3135
> URL: https://issues.apache.org/jira/browse/FLUME-3135
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Peter Chen
>Priority: Minor
>  Labels: easyfix
> Fix For: 1.8.0
>
>
> *+org.apache.flume.interceptor.RegexFilteringInterceptor.java+*
> * *line 72-75:*
> the parameter to the getLogger method should be 
> RegexFilteringInterceptor.class
> {code:java}
> public class RegexFilteringInterceptor implements Interceptor {
>   private static final Logger logger = LoggerFactory
>   .getLogger(StaticInterceptor.class);
> {code}
> * *line 141-143:*
> the Javadoc also refers to the wrong class:
> {code:java}
>   /**
>* Builder which builds new instance of the StaticInterceptor.
>*/
> {code}
> :D
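
A sketch of the corresponding fix (only the logger declaration changes; the rest 
of the class stays as is):

{code:java}
public class RegexFilteringInterceptor implements Interceptor {
  // Log under this class's own name instead of StaticInterceptor's.
  private static final Logger logger = LoggerFactory
      .getLogger(RegexFilteringInterceptor.class);
{code}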



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3141) Small typo found in RegexHbaseEventSerializer.java

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-3141:
---
Fix Version/s: 1.8.0

> Small typo found in RegexHbaseEventSerializer.java
> --
>
> Key: FLUME-3141
> URL: https://issues.apache.org/jira/browse/FLUME-3141
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Miklos Csanady
>Assignee: Miklos Csanady
>Priority: Trivial
>  Labels: trivial
> Fix For: 1.8.0
>
>
> Trivial typo found in RegexHbaseEventSerializer.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3144) Improve Log4jAppender's performance by allowing logging collection of messages

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-3144:
---
Fix Version/s: 1.8.0

> Improve Log4jAppender's performance by allowing logging collection of messages
> --
>
> Key: FLUME-3144
> URL: https://issues.apache.org/jira/browse/FLUME-3144
> Project: Flume
>  Issue Type: Improvement
>Affects Versions: 1.7.0
>Reporter: Denes Arvay
>Assignee: Denes Arvay
> Fix For: 1.8.0
>
>
> Currently it's only possible to send events one by one with the Log4j 
> appender (except if the events are wrapped in an Avro record, which is quite 
> cumbersome and might need special handling on the receiving side).
> As the Log4j methods can handle any {{Object}}, I'd suggest improving the 
> Log4j appender to treat a {{Collection}} event as a special case and send its 
> content to Flume with one {{rpcClient.appendBatch()}} call.
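
A rough sketch of the proposed special case; {{rpcClient}}, the charset handling 
and the surrounding method are assumptions, while the {{RpcClient}} 
{{append()}}/{{appendBatch()}} calls and {{EventBuilder}} are existing Flume API:

{code:java}
Object message = event.getMessage();
if (message instanceof Collection) {
  List<Event> batch = new ArrayList<Event>();
  for (Object item : (Collection<?>) message) {
    batch.add(EventBuilder.withBody(String.valueOf(item).getBytes(StandardCharsets.UTF_8)));
  }
  // One RPC round trip for the whole collection.
  rpcClient.appendBatch(batch);
} else {
  // Existing one-by-one behaviour.
  rpcClient.append(EventBuilder.withBody(String.valueOf(message).getBytes(StandardCharsets.UTF_8)));
}
{code}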



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2752) Flume AvroSource leaks memory and eventually hits an OOM.

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2752:
---
Fix Version/s: 1.8.0

> Flume AvroSource leaks memory and eventually hits an OOM.
> ---
>
> Key: FLUME-2752
> URL: https://issues.apache.org/jira/browse/FLUME-2752
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: yinghua_zh
>Assignee: Attila Simon
> Fix For: 1.8.0
>
>
> If the Flume agent configures a nonexistent IP for the Avro source, the 
> following exception occurs:
> 2015-07-21 19:57:47,054 | ERROR | [lifecycleSupervisor-1-2] |  Unable to 
> start EventDrivenSourceRunner: { source:Avro source avro_source_21155: { 
> bindAddress: 51.196.27.32, port: 21155 } } - Exception follows.  | 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)
> org.jboss.netty.channel.ChannelException: Failed to bind to: 
> /51.196.27.32:21155
>   at 
> org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:297)
>   at org.apache.avro.ipc.NettyServer.(NettyServer.java:106)
>   at org.apache.flume.source.AvroSource.start(AvroSource.java:294)
>   at 
> org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
>   at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.BindException: Cannot assign requested address
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:437)
>   at sun.nio.ch.Net.bind(Net.java:429)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind(NioServerSocketPipelineSink.java:140)
>   at 
> org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket(NioServerSocketPipelineSink.java:90)
>   at 
> org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:64)
>   at org.jboss.netty.channel.Channels.bind(Channels.java:569)
>   at 
> org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:189)
>   at 
> org.jboss.netty.bootstrap.ServerBootstrap$Binder.channelOpen(ServerBootstrap.java:342)
>   at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:170)
>   at 
> org.jboss.netty.channel.socket.nio.NioServerSocketChannel.(NioServerSocketChannel.java:80)
>   at 
> org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:158)
>   at 
> org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:86)
>   at 
> org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:276)
> If the above exception keeps occurring for 2 hours and the agent JVM heap 
> (-Xmx) is 4G, an OutOfMemoryError will occur.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2905) NetcatSource - Socket not closed when an exception is encountered during start() leading to file descriptor leaks

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2905:
---
Fix Version/s: 1.8.0

> NetcatSource - Socket not closed when an exception is encountered during 
> start() leading to file descriptor leaks
> -
>
> Key: FLUME-2905
> URL: https://issues.apache.org/jira/browse/FLUME-2905
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
> Fix For: 1.8.0
>
> Attachments: FLUME-2905-0.patch, FLUME-2905-1.patch, 
> FLUME-2905-2.patch, FLUME-2905-3.patch, FLUME-2905-4.patch, 
> FLUME-2905-5.patch, FLUME-2905-6.patch
>
>
> During flume agent start-up, the flume configuration containing the 
> NetcatSource is parsed and the source's start() is called. If there is an 
> issue while binding the channel's socket to a local address (to configure the 
> socket to listen for connections), the following exception is thrown, but the 
> socket opened just before is not closed. 
> {code}
> 2016-05-01 03:04:37,273 ERROR org.apache.flume.lifecycle.LifecycleSupervisor: 
> Unable to start EventDrivenSourceRunner: { 
> source:org.apache.flume.source.NetcatSource{name:src-1,state:IDLE} } - 
> Exception follows.
> org.apache.flume.FlumeException: java.net.BindException: Address already in 
> use
> at org.apache.flume.source.NetcatSource.start(NetcatSource.java:173)
> at 
> org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
> at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:444)
> at sun.nio.ch.Net.bind(Net.java:436)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at org.apache.flume.source.NetcatSource.start(NetcatSource.java:167)
> ... 9 more
> {code}
> The source's start() is then called again leading to another socket being 
> opened but not closed and so on. This leads to file descriptor (socket) leaks.
> This can be easily reproduced as follows:
> 1. Set Netcat as the source in flume agent configuration.
> 2. Set the bind port for the netcat source to a port which is already in use. 
> e.g. in my case I used 50010 which is the port for DataNode's XCeiver 
> Protocol in use by the HDFS service.
> 3. Start flume agent and perform "lsof -p  | wc -l". Notice 
> the file descriptors keep on growing due to socket leaks with errors like: 
> "can't identify protocol".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3155) Use batch mode in mvn to fix Travis CI error

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-3155:
---
Fix Version/s: 1.8.0

> Use batch mode in mvn to fix Travis CI error
> 
>
> Key: FLUME-3155
> URL: https://issues.apache.org/jira/browse/FLUME-3155
> Project: Flume
>  Issue Type: Bug
>Reporter: Miklos Csanady
>Assignee: Miklos Csanady
> Fix For: 1.8.0
>
>
> The Travis build is failing with this error:
> The log length has exceeded the limit of 4 MB (this usually means that the 
> test suite is raising the same exception over and over).
> The job has been terminated
> The mvn -B switch should be used to reduce output verbosity.
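
For example, the Travis build command would gain the batch-mode flag along these 
lines (the exact goals and profiles in .travis.yml may differ):

{code}
mvn -B clean verify
{code}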



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2620) File channel throws NullPointerException if a header value is null

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2620:
---
Fix Version/s: 1.8.0

> File channel throws NullPointerException if a header value is null
> --
>
> Key: FLUME-2620
> URL: https://issues.apache.org/jira/browse/FLUME-2620
> Project: Flume
>  Issue Type: Bug
>  Components: File Channel
>Reporter: Santiago M. Mola
>Assignee: Marcell Hegedus
> Fix For: 1.8.0
>
> Attachments: FLUME-2620-0.patch, FLUME-2620-1.patch, 
> FLUME-2620-2.patch, FLUME-2620-3.patch, FLUME-2620-4.patch, 
> FLUME-2620-5.patch, FLUME-2620.patch, FLUME-2620.patch
>
>
> File channel throws NullPointerException if a header value is null.
> If this is intended, it should be reported correctly in the logs.
> Sample trace:
> org.apache.flume.ChannelException: Unable to put batch on required channel: 
> FileChannel chan { dataDirs: [/var/lib/ingestion-csv/chan/data] }
>   at 
> org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
>   at 
> org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:236)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flume.channel.file.proto.ProtosFactory$FlumeEventHeader$Builder.setValue(ProtosFactory.java:7415)
>   at org.apache.flume.channel.file.Put.writeProtos(Put.java:85)
>   at 
> org.apache.flume.channel.file.TransactionEventRecord.toByteBuffer(TransactionEventRecord.java:174)
>   at org.apache.flume.channel.file.Log.put(Log.java:622)
>   at 
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:469)
>   at 
> org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>   at 
> org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:80)
>   at 
> org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:189)
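
One possible defensive fix, sketched here only as an illustration (the actual 
patches may take a different approach), is to drop or report null-valued headers 
before they reach the protobuf builder:

{code:java}
Map<String, String> sanitized = new HashMap<String, String>();
for (Map.Entry<String, String> entry : event.getHeaders().entrySet()) {
  if (entry.getValue() == null) {
    // Surface the problem instead of failing later with an opaque NPE.
    LOG.warn("Dropping header '{}' because its value is null", entry.getKey());
  } else {
    sanitized.put(entry.getKey(), entry.getValue());
  }
}
event.setHeaders(sanitized);
{code}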



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3093) Groundwork for version changes in root pom

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-3093:
---
Fix Version/s: 1.8.0

> Groundwork for version changes in root pom
> --
>
> Key: FLUME-3093
> URL: https://issues.apache.org/jira/browse/FLUME-3093
> Project: Flume
>  Issue Type: Task
>Affects Versions: 1.7.0
>Reporter: Miklos Csanady
>Assignee: Miklos Csanady
>  Labels: dependency
> Fix For: 1.8.0
>
>
> Flume's root pom should have a parameter block where all the dependency 
> versions are listed. 
> This is the groundwork for later version bumps needed to overcome third-party 
> security vulnerabilities in a timely manner.
> This should be done in two steps: first a pure refactoring with no behaviour 
> change, then getting rid of unnecessarily divergent versions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2991) ExecSource command execution starts before starting the sourceCounter

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2991:
---
Fix Version/s: 1.8.0

> ExecSource command execution starts before starting the sourceCounter
> -
>
> Key: FLUME-2991
> URL: https://issues.apache.org/jira/browse/FLUME-2991
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Reporter: Denes Arvay
>Assignee: Denes Arvay
> Fix For: 1.8.0
>
> Attachments: FLUME-2991.patch
>
>
> It causes invalid data to appear in JMX monitoring and also makes 
> {{TestExecSource.testMonitoredCounterGroup}} fail or behave flakily.
> Starting the {{sourceCounter}} before submitting the {{ExecRunnable}} to the 
> {{executor}} solves the problem.
> https://github.com/apache/flume/blob/trunk/flume-ng-core/src/main/java/org/apache/flume/source/ExecSource.java#L176-L183
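
A sketch of the proposed ordering in {{ExecSource.start()}} (constructor 
arguments elided, field names assumed to match the linked code):

{code:java}
// Start the counter first so the very first events are already counted.
sourceCounter.start();
runnerFuture = executor.submit(new ExecRunnable(/* existing constructor args */));
super.start();
{code}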



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3175) Javadoc generation fails due to Java8's strict doclint

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3175:

Fix Version/s: (was: 1.8.1)
   1.8.0

> Javadoc generation fails due to Java8's strict doclint
> --
>
> Key: FLUME-3175
> URL: https://issues.apache.org/jira/browse/FLUME-3175
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Denes Arvay
> Fix For: 1.8.0
>
>
> I'd suggest turning it off for now, but in the long run the javadoc errors 
> should be fixed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLUME-2942) AvroEventDeserializer ignores header from spool source

2017-09-12 Thread Sebastian Alfers (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162944#comment-16162944
 ] 

Sebastian Alfers commented on FLUME-2942:
-

[~fszabo] Any plans for this to get attention in the future?

> AvroEventDeserializer ignores header from spool source
> --
>
> Key: FLUME-2942
> URL: https://issues.apache.org/jira/browse/FLUME-2942
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Sebastian Alfers
> Fix For: 1.8.1
>
> Attachments: FLUME-2942-0.patch
>
>
> I have a spool file source and use Avro for de-/serialization.
> In detail, serialized events store the topic of the Kafka sink in the header.
> When I load the events from the spool directory, the headers are ignored. 
> Please see: 
> https://github.com/apache/flume/blob/caa64a1a6d4bc97be5993cb468516e9ffe862794/flume-ng-core/src/main/java/org/apache/flume/serialization/AvroEventDeserializer.java#L122
> You can see that it uses the whole event as the body but does not distinguish 
> between the header and body encoded by Avro.
> Please verify that this is a bug.
> I fixed this by using the record that stores header and body separately.
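
A rough sketch of the suggested change, assuming the spooled files were written with the Flume Avro event schema (a record with a headers map and a body bytes field); the class name and the field names are assumptions, not the actual patch:

{code}
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

import org.apache.avro.generic.GenericRecord;
import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

/** Sketch only: maps a Flume-style Avro record to an Event, keeping the headers. */
public class HeaderAwareAvroRecordMapper {
  public Event toEvent(GenericRecord record) {
    Map<String, String> headers = new HashMap<String, String>();
    Object headerField = record.get("headers");              // assumed field name
    if (headerField instanceof Map) {
      for (Map.Entry<?, ?> e : ((Map<?, ?>) headerField).entrySet()) {
        headers.put(e.getKey().toString(), e.getValue().toString());
      }
    }
    ByteBuffer bodyBuffer = (ByteBuffer) record.get("body"); // assumed field name
    byte[] body = new byte[bodyBuffer.remaining()];
    bodyBuffer.duplicate().get(body);
    // body and headers are kept separate instead of serializing the whole record
    return EventBuilder.withBody(body, headers);
  }
}
{code}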



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Flume 1.8 release proposal

2017-09-12 Thread Ferenc Szabo
FYI:
jira issues have been moved to the new fix versions:
1.8.1:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.8.1
1.9.0:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLUME%20AND%20fixVersion%20%3D%201.9.0


On Tue, Sep 12, 2017 at 12:17 PM, Mike Percy  wrote:

> I added the additional versions earlier today but neglected to notify the
> list until just now. :)
>
> +1 on the plan. Thanks for keeping us updated and continuing to drive this
> release, Denes!
>
> Mike
>
> On Tue, Sep 12, 2017 at 3:04 AM, Denes Arvay  wrote:
>
> > Hi Donat,
> >
> > Thanks for your help.
> > Ferenc Szabo is already working on the retargeting, but it's definitely a
> > good advice to do it in bulk to avoid spamming the lists.
> >
> > We have the following action items:
> > - retarget the tickets
> > - fix the blockers: there is only one left which I'm aware of:
> > https://issues.apache.org/jira/browse/FLUME-3174. To fix that upgrading
> > the
> > joda-time is in progress: https://github.com/apache/flume/pull/169
> > - the user guide is broken (netcat udp source's table is malformed), I'm
> > fixing it
> > - https://github.com/apache/flume/pull/168 needs to be committed. I've
> > seen
> > that you've already commented on it, thank you, will reply soon.
> (Spoiler:
> > a lot of effort, unfortunately)
> > -  some minor changes need to be done in the documentation (e.g. fixing
> the
> > copyright dates, removing/updating the version references in the user
> > guide). If anybody in the community feels like doing it I'd be more than
> > happy to review & commit the changes.
> > - once these are done I'm going to create the 1.8 branch and create the
> RC1
> > release artifact. I'll announce the branching in advance to the dev@
> list.
> >
> > Thank you,
> > Denes
> >
> >
> > On Tue, Sep 12, 2017 at 11:18 AM Bessenyei Balázs Donát <
> bes...@apache.org
> > >
> > wrote:
> >
> > > Hi Denes,
> > >
> > > It seems to me that 1.8.1 and 1.9.0 releases already exist in our JIRA.
> > >
> > > Regarding the retargeting: I'd be happy to batch-edit the necessary
> > > tickets in order to avoid spamming the mailing lists.
> > > Once you have a list of actions you'd like to do, please let us know.
> > >
> > >
> > > Thank you,
> > >
> > > Donat
> > >
> > > 2017-09-11 19:39 GMT+02:00 Denes Arvay :
> > > > Hi All,
> > > >
> > > > I'd like to let you know that we are planning to cut the 1.8 branch
> > > > tomorrow around 2am PDT.
> > > > If you think that there any important tickets targeted to 1.8 still
> > open
> > > > which needs to be reviewed and committed to get into the release,
> > please
> > > > let us know as soon as possible and we'll do our best to push it
> > through.
> > > >
> > > > The ones which couldn't get committed by the branching will be
> > retargeted
> > > > to 1.8.1 or 1.9, depending on their type (i.e. bug fixes will be
> > > retargeted
> > > > to 1.8.1, new features will be scheduled for 1.9).
> > > > For this I'd like to ask our PMC members to create these new releases
> > in
> > > > Jira, or if it's possible to grant the required Jira permissions to
> me,
> > > I'd
> > > > be more than happy to do this.
> > > >
> > > > Thank you,
> > > > Denes
> > > >
> > > > On Mon, Sep 4, 2017 at 10:21 AM Denes Arvay 
> > wrote:
> > > >
> > > >> Hi Flume Community,
> > > >>
> > > >> Almost a year passed since we've released Flume 1.7.
> > > >> More than 50 commits were pushed since then, including documentation
> > > >> fixes, many critical bug fixes and several important features, so
> I'd
> > > like
> > > >> to propose to publish the next minor release of Flume.
> > > >>
> > > >> I'd be happy to be the Release Manager with the help of Ferenc Szabo
> > and
> > > >> Marcell Hegedus who have been quite active recently, and Balazs
> Donat
> > > >> Bessenyei who took the lion's share of the work during the previous
> > > release
> > > >> - if both community and they are OK with it.
> > > >>
> > > >> Among others the following major changes will be included in the
> next
> > > >> release:
> > > >>
> > > >> Fixed bugs:
> > > >> - FLUME-2857. Make Kafka Source/Channel/Sink restore default values
> > when
> > > >> live updating config
> > > >> - FLUME-2812. Fix semaphore leak causing java.lang.Error: Maximum
> > permit
> > > >> count exceeded in MemoryChannel
> > > >> - FLUME-3020. Improve HDFS Sink escape sequence substitution
> > > >> - FLUME-3027. Change Kafka Channel to clear offsets map after commit
> > > >> - FLUME-3049. Make HDFS sink rotate more reliably in secure mode
> > > >> - FLUME-3080. Close failure in HDFS Sink might cause data loss
> > > >> - FLUME-3085. HDFS Sink can skip flushing some BucketWriters, might
> > lead
> > > >> to data loss
> > > >> - FLUME-2752. Fix AvroSource startup resource leaks
> > > >> - FLUME-2905. Fix NetcatSource file descriptor leak if startup fails
> > > >>
> > > >> 

[jira] [Updated] (FLUME-2968) After transferring the entire file it adds a ‘\n’ to the file

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2968:
---
Fix Version/s: notrack

> After transferring the entire file it adds a ‘\n’ to the file
> --
>
> Key: FLUME-2968
> URL: https://issues.apache.org/jira/browse/FLUME-2968
> Project: Flume
>  Issue Type: Question
>Reporter: guangtaozhuang
> Fix For: notrack
>
>
> After transferring the entire file, a character is appended.
> I want to transfer the entire file to HDFS and calculate the MD5. But I found 
> that the MD5 of the source file and of the file in HDFS are different, and 
> that Flume adds a '\n' to the file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2966) NULL text in a TextMessage from a JMS source in Flume can lead to NPE

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2966:
---
Fix Version/s: 1.7.0

> NULL text in a TextMessage from a JMS source in Flume can lead to NPE
> -
>
> Key: FLUME-2966
> URL: https://issues.apache.org/jira/browse/FLUME-2966
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Siddharth Ahuja
>Assignee: Siddharth Ahuja
> Fix For: 1.7.0
>
> Attachments: App.java, FLUME-2966-0.patch, FLUME-2966-1.patch
>
>
> Code at 
> https://github.com/apache/flume/blob/trunk/flume-ng-sources/flume-jms-source/src/main/java/org/apache/flume/source/jms/DefaultJMSMessageConverter.java#L103
>  does not check for a NULL text in a TextMessage from a Flume JMS source. 
> This can lead to a NullPointerException here: 
> {code}textMessage.getText().getBytes(charset){code} while trying to 
> de-reference a null text from the textmessage.
> We should probably skip these like the NULL Objects in the ObjectMessage just 
> below at: 
> https://github.com/apache/flume/blob/trunk/flume-ng-sources/flume-jms-source/src/main/java/org/apache/flume/source/jms/DefaultJMSMessageConverter.java#L107.
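
A minimal sketch of the proposed guard (the helper name is made up; the real patch may perform the skip elsewhere in the converter):

{code}
import java.nio.charset.Charset;
import javax.jms.JMSException;
import javax.jms.TextMessage;

/** Sketch only: returns the body bytes, or null when the text is absent and the event should be skipped. */
public final class NullSafeTextMessages {
  private NullSafeTextMessages() { }

  public static byte[] bodyOrNull(TextMessage message, Charset charset) throws JMSException {
    String text = message.getText();          // may legally be null per the JMS spec
    return text == null ? null : text.getBytes(charset);
  }
}
{code}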



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2256) Generic JDBC Sink

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2256:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Generic JDBC Sink
> -
>
> Key: FLUME-2256
> URL: https://issues.apache.org/jira/browse/FLUME-2256
> Project: Flume
>  Issue Type: New Feature
>Reporter: Jeremy Karlson
>Assignee: Jeremy Karlson
> Fix For: 1.9.0
>
> Attachments: FLUME-2256.diff
>
>
> I've been working on a generic JDBC sink.  It needs a bit more testing, but I 
> think it's ready for review and feedback.  I have not yet updated the Flume 
> documentation, but I can / will if people are happy with this.
> Since the config file is how you’d interact with it, here’s a working example 
> from my source tree:
> {code}
> a.sinks.k.type=jdbc
> a.sinks.k.channel=c
> a.sinks.k.driver=com.mysql.jdbc.Driver
> a.sinks.k.url=jdbc:mysql://localhost:8889/flume
> a.sinks.k.user=username
> a.sinks.k.password=password
> a.sinks.k.batchSize=100
> a.sinks.k.sql=insert into twitter (body, timestamp) values (${body:string}, 
> ${header.timestamp:long})
> {code}
> The interesting part is the SQL statement.  You can put anything you want in 
> there - it will get converted to a prepared statement on execution.  The 
> Ant-ish tokens get parsed and replaced with parameters at startup.
> The tokens are three part.  For example, in:
> {code}
> ${body:string(UTF-8)}
> {code}
> The first is a place in the event to get the value from (“body”, 
> “header.foo”, or “custom”).  The second part ("string") is a type identifier 
> that converts into an appropriate JDBC parameter.  The third part (“UTF-8") 
> is a configuration string for that type, if needed.  As for types, so far 
> I’ve defined:
> body: string (with optional charset encoding), bytearray
> header: string, long, int, float, double, date (with mandatory date format 
> and optional timezone)
> Additionally, if none of those make you happy you can define you own 
> parameter converters:
> {code}
> ${custom:com.company.foo.MyConverter(optionaltextconfig)}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3053) one sink can get events form more than one channel,but the user guide dosent mentioned it.

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3053:

Fix Version/s: (was: 1.8.0)
   1.9.0

> one sink can get events form more than one channel,but the user guide dosent 
> mentioned it. 
> ---
>
> Key: FLUME-3053
> URL: https://issues.apache.org/jira/browse/FLUME-3053
> Project: Flume
>  Issue Type: Documentation
>  Components: Docs
>Reporter: Liam
>Priority: Minor
> Fix For: 1.9.0
>
>
> we can improve the throughput by  configure more than one sinks for one 
> channel,such as:
> server.sources = r1 
> server.sinks =  k1 k2 k3 
> server.channels = c1 
> server.sinks.k1.channel = c1
> server.sinks.k2.channel = c1
> server.sinks.k3.channel = c1



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3054) hflushOrSync method in HDFS Sink should not treat ClosedChannelException as an error

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3054:

Fix Version/s: (was: 1.8.0)
   1.9.0

> hflushOrSync method in HDFS Sink should not treat ClosedChannelException as 
> an error 
> -
>
> Key: FLUME-3054
> URL: https://issues.apache.org/jira/browse/FLUME-3054
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: Pan Yuxuan
> Fix For: 1.9.0
>
> Attachments: FLUME-3054.0001.patch
>
>
> When using the HDFS Sink with multiple threads, we see the following errors in the log:
> {code}
> 09 Feb 2017 13:44:14,721 ERROR [hdfs-hsProt6-call-runner-4] 
> (org.apache.flume.sink.hdfs.AbstractHDFSWriter.hflushOrSync:267)  - Error 
> while trying to hflushOrSync!
> 09 Feb 2017 14:54:48,271 ERROR 
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.sink.hdfs.AbstractHDFSWriter.isUnderReplicated:98)  - 
> Unexpected error while checking replication factor
> java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.flume.sink.hdfs.AbstractHDFSWriter.getNumCurrentReplicas(AbstractHDFSWriter.java:165)
>   at 
> org.apache.flume.sink.hdfs.AbstractHDFSWriter.isUnderReplicated(AbstractHDFSWriter.java:84)
>   at 
> org.apache.flume.sink.hdfs.BucketWriter.shouldRotate(BucketWriter.java:583)
>   at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:518)
>   at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)
>   at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>   at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedChannelException
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1665)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.getCurrentBlockReplication(DFSOutputStream.java:2151)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.getNumCurrentReplicas(DFSOutputStream.java:2140)
>   ... 11 more
> 09 Feb 2017 14:54:48,277 ERROR 
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.sink.hdfs.HDFSEventSink.process:459)  - process failed
> org.apache.flume.auth.SecurityException: Privileged action failed
>   at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:49)
>   at 
> org.apache.flume.auth.KerberosAuthenticator.execute(KerberosAuthenticator.java:63)
>   at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedChannelException
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1665)
>   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:104)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>   at java.io.DataOutputStream.write(DataOutputStream.java:107)
>   at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
>   at 
> org.apache.flume.serialization.BodyTextEventSerializer.write(BodyTextEventSerializer.java:71)
>   at 
> org.apache.flume.sink.hdfs.HDFSDataStream.append(HDFSDataStream.java:124)
>   at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:550)
>   at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:547)
>   at 
> org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1858)
>   at org.apache.flume.auth.UGIExecutor.execute(UGIExecutor.java:47)
>   ... 6 more
> 09 Feb 2017 14:54:48,280 ERROR 
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.SinkRunner$PollingRunner.run:160)  - Unable to deliver 
> event. Exception follows.
> org.apache.flume.EventDeliveryException: 
> org.apache.flume.auth.SecurityException: Privileged action failed
>   at 
> 

[jira] [Updated] (FLUME-3021) flume Elasticsearch 5.0 not support

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3021:

Fix Version/s: (was: 1.8.0)
   1.9.0

> flume Elasticsearch 5.0 not support
> ---
>
> Key: FLUME-3021
> URL: https://issues.apache.org/jira/browse/FLUME-3021
> Project: Flume
>  Issue Type: New Feature
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: tycho_yang
>Assignee: Yonghao Zou
> Fix For: 1.9.0
>
> Attachments: FLUME-3021-0.patch, FLUME-3021-1.patch, 
> screenshot-1.png, screenshot-2.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2771) HeaderAndBodyTextEventSerializer - provide a configurable way to set the multi-character event delimiter (instead of just a newline)

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2771:

Fix Version/s: (was: 1.8.0)
   1.9.0

> HeaderAndBodyTextEventSerializer - provide a configurable way to set the 
> multi-character event delimiter (instead of just a newline)
> 
>
> Key: FLUME-2771
> URL: https://issues.apache.org/jira/browse/FLUME-2771
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Reporter: Piotr Wiecek
>Priority: Minor
> Fix For: 1.9.0
>
>
> HeaderAndBodyTextEventSerializer - provide a configurable way to set the 
> multi-character event delimiter (instead of just a newline).
> Now we have something like this:
> {code:title=HeaderAndBodyTextEventSerializer.java|borderStyle=solid}
> @Override
> public void write(Event e) throws IOException {
> ...
> if (appendNewline) {
>out.write('\n');
> }
> }
> {code}
> It would be nice to have the ability to configure the message delimiter:
> {code:title=HeaderAndBodyTextEventSerializer.java|borderStyle=solid}
> @Override
> public void write(Event e) throws IOException {
> ...
>if (appendDelimiter) {
>   out.write(delimiter);
>}
> }
> {code}
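
If this were added, a file_roll sink could be configured roughly like this (the appendDelimiter/delimiter properties are hypothetical, mirroring the existing appendNewline):

{code}
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /var/log/flume-out
a1.sinks.k1.sink.serializer = header_and_text
# hypothetical properties from this proposal
a1.sinks.k1.sink.serializer.appendDelimiter = true
a1.sinks.k1.sink.serializer.delimiter = ||
{code}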



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3000) Update morphline solr sink to use solr-6

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3000:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Update morphline solr sink to use solr-6
> 
>
> Key: FLUME-3000
> URL: https://issues.apache.org/jira/browse/FLUME-3000
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.5.2
>Reporter: wolfgang hoschek
>Assignee: wolfgang hoschek
>Priority: Minor
> Fix For: 1.9.0
>
>
> Move flume from solr-4 to solr-6. This involves changing flume to depend on 
> the upcoming kite-1.2 release, which in turn used the solr-6 API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2960) Support Wildcards in directory name in TaildirSource

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2960:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Support Wildcards in directory name in TaildirSource
> 
>
> Key: FLUME-2960
> URL: https://issues.apache.org/jira/browse/FLUME-2960
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: tinawenqiao
>Assignee: tinawenqiao
>  Labels: wildcards
> Fix For: 1.9.0
>
> Attachments: FLUME-2960_1.patch, FLUME-2960_2.patch, 
> FLUME-2960_3.patch, FLUME-2960_4.patch, FLUME-2960_5.patch, 
> FLUME-2960_6.patch, FLUME-2960_7.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In our log management project, we want to track many log files like this:
> /app/dir1/log.*
>  /app/dir2/log.*
> ...
> /app/dirn/log.*
> But TaildirSource can't support wildcards in filegroup directory name. The 
> following config is expected:
> a1.sources.r1.filegroups.fg = /app/\*/log.*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2524) Adding an HTTP Sink

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2524:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Adding an HTTP Sink
> ---
>
> Key: FLUME-2524
> URL: https://issues.apache.org/jira/browse/FLUME-2524
> Project: Flume
>  Issue Type: New Feature
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: Jeff Guilmard
> Fix For: 1.9.0
>
> Attachments: FLUME-2524-0.patch, FLUME-2524-1.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Flume should have an HTTP Sink, with the following capabilities:
> - Using an up-to-date, performant HTTP client
> - Ability to load balance across multiple target servers (simple round robin)
> - Handling HTTP authentication
> - Using HTTP POST
> - Ability to send binary data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2530) Resource leaks found by Coverity tool

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2530:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Resource leaks found by Coverity tool
> -
>
> Key: FLUME-2530
> URL: https://issues.apache.org/jira/browse/FLUME-2530
> Project: Flume
>  Issue Type: Bug
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Fix For: 1.9.0
>
> Attachments: coverity.patch, FLUME-2530.patch
>
>
> A recent run of coverity on the Flume code base found some issues in various 
> components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3101) taildir source may endless loop when tail a file

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3101:

Fix Version/s: (was: 1.8.0)
   1.9.0

> taildir source may endless loop when tail a file
> 
>
> Key: FLUME-3101
> URL: https://issues.apache.org/jira/browse/FLUME-3101
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: hunshenshi
>  Labels: patch, taildirsource
> Fix For: 1.9.0
>
> Attachments: FLUME-3101-0.patch, FLUME-3101-1.patch, 
> FLUME-3101-2.patch, FLUME-3101-3.patch
>
>
> If there are many files in the path that need to be tailed, and one file is 
> written at *high frequency* (for example, there are file a, file b and file c 
> in the path, and file a is written at high frequency), *taildir can read 
> batchSize events from file a every time*. Then taildir will only read data 
> from file a and the other files will never be read, because 
> TaildirSource.tailFileProcess goes into an endless loop.
> code:
> {code:title=TaildirSource.java|borderStyle=solid}
> private void tailFileProcess(TailFile tf, boolean backoffWithoutNL)
> throws IOException, InterruptedException {
>   while (true) {
> // if events.size() >= batchSize the loop never breaks,
> // so the source keeps reading only this file (tf)
> if (events.size() < batchSize) {
>   break;
> }
>   }
> }
> {code}
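
One possible mitigation, sketched with simplified stand-in types (the attached patches may take a different approach), is to cap how many consecutive batches are read from one file before the other files get a turn:

{code}
import java.util.function.IntSupplier;

/** Sketch only: bounds consecutive reads from one file so a busy file cannot starve the rest. */
public final class BoundedTailLoop {
  private final int batchSize;
  private final long maxBatchCount;   // hypothetical knob, e.g. 10

  public BoundedTailLoop(int batchSize, long maxBatchCount) {
    this.batchSize = batchSize;
    this.maxBatchCount = maxBatchCount;
  }

  /** Reads batches via the supplied function until a short batch or the cap is hit. */
  public void drain(IntSupplier readBatch) {
    long batches = 0;
    while (true) {
      int eventsRead = readBatch.getAsInt();  // stands in for reading one batch from the file
      if (eventsRead < batchSize) {
        break;                                // file exhausted for now
      }
      if (++batches >= maxBatchCount) {
        break;                                // give the other tailed files a chance
      }
    }
  }
}
{code}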



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2981) Upgrade the Solr version to 5.5.2

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2981:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Upgrade the Solr version to 5.5.2
> -
>
> Key: FLUME-2981
> URL: https://issues.apache.org/jira/browse/FLUME-2981
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: Minoru Osuka
>Priority: Minor
> Fix For: 1.9.0
>
> Attachments: FLUME-2981-0.patch
>
>
> Currently flume-ng-morphline-solr-sink is using Solr 4.3.0. I propose to 
> upgrade to Solr 5.5.2 for Flume 1.7.0. Also Solr 5.5.2 requires Java 1.7 same 
> as Flume 1.7.0. 
> Solr 5.5.2 includes guava 14 but it also works with guava 11. There were no 
> problems when I confirmed it with guava 11.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2362) Memory mapping channel

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2362:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Memory mapping channel
> --
>
> Key: FLUME-2362
> URL: https://issues.apache.org/jira/browse/FLUME-2362
> Project: Flume
>  Issue Type: Improvement
>  Components: Channel
>Affects Versions: 1.5.0
>Reporter: Lining Sun
> Fix For: 1.9.0
>
> Attachments: FLUME-2362.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I've implemented a memory-mapping channel that has the same performance as 
> the memory channel and the same reliability as the file channel.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2616) Add Cassandra sink

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2616:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Add Cassandra sink
> --
>
> Key: FLUME-2616
> URL: https://issues.apache.org/jira/browse/FLUME-2616
> Project: Flume
>  Issue Type: New Feature
>  Components: Sinks+Sources
>Reporter: Santiago M. Mola
>Assignee: Santiago M. Mola
> Fix For: 1.9.0
>
> Attachments: FLUME-2616-0.patch
>
>
> A Cassandra sink would be a useful addition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3046) Kafka Sink and Source Configuration Improvements

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3046:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Kafka Sink and Source Configuration Improvements
> 
>
> Key: FLUME-3046
> URL: https://issues.apache.org/jira/browse/FLUME-3046
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Jeff Holoman
>Assignee: Tristan Stevens
> Fix For: 1.9.0
>
>
> Currently the Kafka Source sets the header for the topic. The sink reads this 
> value, rather than the statically defined topic value. We should fix this so 
> that you can either change the topic header that is used, or just choose to 
> prefer the statically defined topic in the sink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2802) Folder name interceptor

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2802:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Folder name interceptor
> ---
>
> Key: FLUME-2802
> URL: https://issues.apache.org/jira/browse/FLUME-2802
> Project: Flume
>  Issue Type: New Feature
>Reporter: Eran W
>Assignee: Eran W
>  Labels: reviewboard-missing
> Fix For: 1.9.0
>
> Attachments: flume-2802.patch, FLUME-2802.patch
>
>
> This interceptor retrieves the last folder name from the 
> SpoolDir.fileHeaderKey and sets it as the given folderKey.
> This allows users to set the target HDFS directory based on the source 
> directory instead of the whole path or file name. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2653) Allow inUseSuffix to be null/empty

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2653:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Allow inUseSuffix to be null/empty
> --
>
> Key: FLUME-2653
> URL: https://issues.apache.org/jira/browse/FLUME-2653
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.5.1
>Reporter: Andrew Jones
>Assignee: bimal tandel
>  Labels: docs-missing, hdfssink
> Fix For: 1.9.0
>
> Attachments: FLUME-2653.patch
>
>
> At the moment, it doesn't seem possible to set the suffix to null/empty. We've tried 
> {{''}} which just adds the quotes to the end, and setting to nothing, which 
> just uses the default {{.tmp}}.
> We want the _in use_ file to have the same name as the _closed_ file, so we 
> can read from files that are in use without the file moving from underneath 
> us. In our use case, we know that an in use file is still readable and 
> parseable, because it is just text with a JSON document per line.
> It looks like [the HDFS sink 
> code|https://github.com/apache/flume/blob/542b1695033d330eb00ae81713fdc838b88332b6/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/BucketWriter.java#L618]
>  can handle this change already, but at the moment there is no way to set the 
> {{bucketPath}} and {{targetPath}} to be the same.
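
For reference, this is the kind of configuration being asked for; today an empty value silently falls back to the .tmp default:

{code}
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events
# desired: no temporary suffix at all, so in-use and closed files share a name
a1.sinks.k1.hdfs.inUseSuffix =
{code}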



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2706) Camel source

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2706:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Camel source
> 
>
> Key: FLUME-2706
> URL: https://issues.apache.org/jira/browse/FLUME-2706
> Project: Flume
>  Issue Type: New Feature
>  Components: Sinks+Sources
>Reporter: David Greco
>Priority: Minor
> Fix For: 1.9.0
>
> Attachments: flume-2706.patch
>
>
> This component can start Camel routes; in this way it provides a very powerful 
> mechanism for ingesting data from virtually any source supported by Camel 
> http://camel.apache.org.
> This source can be configured either with a Camel URI or with an XML file 
> containing route specifications.
> The configuration is very simple, let's show a couple of examples:
> 1. Configuration by an URI:
> tier1.sources = source1
> tier1.channels = channel1
> tier1.sinks  = sink1
> tier1.sources.source1.type = org.apache.flume.source.camel.CamelSource
> tier1.sources.source1.sourceURI  = 
> twitter://streaming/sample?type=event===
> any URI supported by Camel components is valid. For Twitter, see 
> [here](http://camel.apache.org/twitter.html).
> 2. Configuration by an xml file:
> tier1.sources = source1
> tier1.channels  = channel1
> tier1.sinks = sink1
> tier1.sources.source1.type = org.apache.flume.source.camel.CamelSource
> tier1.sources.source1.routesFile = conf/routes.xml
> where the routes.xml can contain something like:
> <routes xmlns="http://camel.apache.org/schema/spring">
>   <route>
>     <from uri="twitter://streaming/sample?type=event&consumerKey=&consumerSecret=&accessToken="/>
>     <to uri="direct-vm://source1"/>
>   </route>
> </routes>
> Any route that wants to send data to Flume must have as its target endpoint a 
> URI with the following format: direct-vm://, so in 
> our case the CamelSource's name is source1 and consequently the endpoint URI 
> is: direct-vm://source1 as shown in the XML snippet above.
> The sourceURI property always takes precedence over the routesFile property.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2919) Upgrade the Solr version to 6.0.1

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2919:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Upgrade the Solr version to 6.0.1
> -
>
> Key: FLUME-2919
> URL: https://issues.apache.org/jira/browse/FLUME-2919
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: Minoru Osuka
> Fix For: 1.9.0
>
> Attachments: FLUME-2919-1.patch, FLUME-2919-2.patch, FLUME-2919.patch
>
>
> Flume morphline-solr-sink is using Solr 4.3.0. Recently, Solr 6.0.1 has been 
> released. I propose to upgrade to it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2938) JDBC Source

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2938:

Fix Version/s: (was: 1.8.0)
   1.9.0

> JDBC Source
> ---
>
> Key: FLUME-2938
> URL: https://issues.apache.org/jira/browse/FLUME-2938
> Project: Flume
>  Issue Type: New Feature
>  Components: Sinks+Sources
>Affects Versions: 1.8.0
>Reporter: Lior Zeno
> Fix For: 1.9.0
>
>
> The idea is to allow migrating data from SQL stores to NoSQL stores or HDFS 
> for archiving purposes.
> This source will get a statement to execute and a scheduling policy. It will 
> be able to fetch timestamped data by performing range queries on a 
> configurable field (this can fetch data with incremental id as well). For 
> fault-tolerance, the last fetched value can be checkpointed to a file.
> Dealing with large datasets can be done via the fetch_size parameter. (Ref: 
> https://docs.oracle.com/cd/A87860_01/doc/java.817/a83724/resltse5.htm)
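
A configuration sketch of how such a source might look (all property names below are hypothetical, since the source does not exist yet):

{code}
a1.sources.r1.type = jdbc
a1.sources.r1.channels = c1
a1.sources.r1.driver = com.mysql.jdbc.Driver
a1.sources.r1.url = jdbc:mysql://localhost:3306/appdb
a1.sources.r1.user = username
a1.sources.r1.password = password
a1.sources.r1.statement = select id, payload from events where id > ?
a1.sources.r1.incrementalField = id
a1.sources.r1.checkpointFile = /var/lib/flume/jdbc-source.ckpt
a1.sources.r1.pollInterval = 60s
a1.sources.r1.fetchSize = 1000
{code}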



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3106) When batchSize of sink greater than transactionCapacity of Memory Channel, Flume can produce endless data

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3106:

Fix Version/s: (was: 1.8.0)
   1.9.0

> When batchSize of sink greater than transactionCapacity of Memory Channel, 
> Flume can produce endless data
> -
>
> Key: FLUME-3106
> URL: https://issues.apache.org/jira/browse/FLUME-3106
> Project: Flume
>  Issue Type: Bug
>  Components: Channel
>Affects Versions: 1.7.0
>Reporter: Yongxi Zhang
> Fix For: 1.9.0
>
> Attachments: FLUME-3106-0.patch
>
>
> Flume can produce endless data when using the following config:
> {code:xml}
> agent.sources = src1
> agent.sinks = sink1
> agent.channels = ch2
> agent.sources.src1.type = spooldir
> agent.sources.src1.channels = ch2
> agent.sources.src1.spoolDir = /home/kafka/flumeSpooldir
> agent.sources.src1.fileHeader = false
> agent.sources.src1.batchSize = 5
> agent.channels.ch2.type=memory
> agent.channels.ch2.capacity=100
> agent.channels.ch2.transactionCapacity=5
> agent.sinks.sink1.type = hdfs
> agent.sinks.sink1.channel = ch2
> agent.sinks.sink1.hdfs.path = hdfs://kafka1:9000/flume/
> agent.sinks.sink1.hdfs.rollInterval=1
> agent.sinks.sink1.hdfs.fileType = DataStream
> agent.sinks.sink1.hdfs.writeFormat = Text
> agent.sinks.sink1.hdfs.batchSize = 10
> {code}
> And there are Exceptions like this:
> {code:xml}
> org.apache.flume.ChannelException: Take list for MemoryTransaction, capacity 
> 5 full, consider committing more frequently, increasing capaci
> ty, or increasing thread count
> at 
> org.apache.flume.channel.MemoryChannel$MemoryTransaction.doTake(MemoryChannel.java:99)
> at 
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> at 
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:362)
> at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
> at java.lang.Thread.run(Thread.java:745)
> 17/06/09 09:48:04 ERROR flume.SinkRunner: Unable to deliver event. Exception 
> follows.
> org.apache.flume.EventDeliveryException: org.apache.flume.ChannelException: 
> Take list for MemoryTransaction, capacity 5 full, consider comm
> itting more frequently, increasing capacity, or increasing thread count
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:451)
> at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> When the takeList of the Memory Channel is full, a ChannelException is 
> thrown. The events in the takeList have already been written by the sink but 
> are rolled back to the queue of the MemoryChannel at the same time, which is 
> not reasonable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2882) Add Generic configuration provider

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2882:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Add Generic configuration provider
> --
>
> Key: FLUME-2882
> URL: https://issues.apache.org/jira/browse/FLUME-2882
> Project: Flume
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Enrique Ruiz Garcia
>  Labels: docs-missing, reviewboard-missing, unit-test-missing
> Fix For: 1.9.0
>
> Attachments: FLUME-2882.patch
>
>
> Add the ability to specify a custom configuration provider for the Flume node 
> (use the new optional '-confprovider' option to specify a class name; the 
> class can be implemented by extending the GenericConfigurationProvider class).
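
Usage would presumably look something like this (only the '-confprovider' option comes from the description; the class name is made up):

{code}
flume-ng agent --name a1 --conf conf --conf-file conf/flume.conf \
  -confprovider com.example.flume.MyConfigurationProvider
{code}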



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2958) Add ignorePattern for TaildirSource

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2958:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Add ignorePattern for TaildirSource
> ---
>
> Key: FLUME-2958
> URL: https://issues.apache.org/jira/browse/FLUME-2958
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: Hu Liu,
>Assignee: Hu Liu,
> Fix For: 1.9.0
>
> Attachments: FLUME-2958-0.patch
>
>
> We have tried the TaildirSource and found that it lacks an ignorePattern 
> option for specifying which files to ignore. I'm glad to work on it if anyone 
> could assign it to me.
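
The proposed option could presumably be used like the Spooling Directory source's ignorePattern, e.g. (the ignorePattern property follows the proposal, the value is illustrative):

{code}
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /var/log/app/.*log
# skip rotated/compressed files
a1.sources.r1.ignorePattern = .*\.gz$
{code}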



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2977) Upgrade RAT to 0.12

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2977:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Upgrade RAT to 0.12
> ---
>
> Key: FLUME-2977
> URL: https://issues.apache.org/jira/browse/FLUME-2977
> Project: Flume
>  Issue Type: Improvement
>  Components: Build
>Reporter: Attila Simon
>Assignee: Attila Simon
>Priority: Minor
> Fix For: 1.9.0
>
> Attachments: FLUME-2977.patch
>
>
> Before RAT check mvn install prints out the following warnings:
> {noformat}
> Warning:  org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser: Property 
> 'http://www.oracle.com/xml/jaxp/properties/entityExpansionLimit' is not 
> recognized.
> Compiler warnings:
>   WARNING:  'org.apache.xerces.jaxp.SAXParserImpl: Property 
> 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.'
> Warning:  org.apache.xerces.parsers.SAXParser: Feature 
> 'http://javax.xml.XMLConstants/feature/secure-processing' is not recognized.
> Warning:  org.apache.xerces.parsers.SAXParser: Property 
> 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.
> Warning:  org.apache.xerces.parsers.SAXParser: Property 
> 'http://www.oracle.com/xml/jaxp/properties/entityExpansionLimit' is not 
> recognized.
> [INFO] Rat check: Summary of files. Unapproved: 0 unknown: 0 generated: 0 
> approved: 9 licence.
> {noformat} 
> It doesn't break the build but seems misleading.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2770) Create a new deserializer for the Spooling Directory source that will be able to parse the header and body text

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2770:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Create a new deserializer for the Spooling Directory source that will be able 
> to parse the header and body text
> ---
>
> Key: FLUME-2770
> URL: https://issues.apache.org/jira/browse/FLUME-2770
> Project: Flume
>  Issue Type: New Feature
>  Components: Sinks+Sources
>Reporter: Piotr Wiecek
>  Labels: SpoolingDirectorySource
> Fix For: 1.9.0
>
>
> Create a new deserializer for the Spooling Directory source that will be able 
> to parse the header and body text.
> Currently we have a serializer that writes the headers and body of the event 
> to the output stream (HeaderAndBodyTextEventSerializer). There is no default 
> way to read this kind of data back (we only have LineDeserializer).
> I created a deserializer which reads the header together with the body up to 
> a suitable message delimiter. I think it would make sense to add this 
> functionality to the next release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2961) Make TaildirSource work with multiline

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2961:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Make TaildirSource work with multiline
> --
>
> Key: FLUME-2961
> URL: https://issues.apache.org/jira/browse/FLUME-2961
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: tinawenqiao
>Assignee: tinawenqiao
> Fix For: 1.9.0
>
> Attachments: FLUME-2961_1.patch, FLUME-2961_2.patch, 
> FLUME-2961_4.patch
>
>
> TaildirSource defaults to LINE; this causes issues with multiline log events 
> such as stack traces and request/responses. The following is an example Java 
> traceback log. We expect to have a log-line start regex key that aggregates 
> all log lines until the next match of the regex is found (a configuration 
> sketch follows the example below).
> 2016-07-16 14:59:43,956 ERROR lifecycleSupervisor-1-7 LifecycleSupervisor.run 
> - Unable to start EventDrivenSourceRunner: { 
> source:cn.yottabyte.flume.source.http.HTTPSource{name:sourceHttp,state:IDLE} 
> } - Exception follows.
> java.lang.IllegalStateException: Running HTTP Server found in source: 
> sourceHttp before I started one. Will not attempt to start.
> at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
> at 
> cn.yottabyte.flume.source.http.HTTPSource.startHttpSourceServer(HTTPSource.java:170)
> at cn.yottabyte.flume.source.http.HTTPSource.start(HTTPSource.java:166)
> at 
> org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
> at 
> org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
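
A configuration sketch of what the requested behaviour could look like (the multiline* property names are hypothetical):

{code}
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /var/log/app/.*log
# aggregate lines until the next line matching the start pattern (hypothetical properties)
a1.sources.r1.multiline = true
a1.sources.r1.multilinePattern = ^\\d{4}-\\d{2}-\\d{2}
{code}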



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2953) Make TaildirSource work with recursive directory

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2953:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Make TaildirSource work with recursive directory
> 
>
> Key: FLUME-2953
> URL: https://issues.apache.org/jira/browse/FLUME-2953
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: tinawenqiao
>  Labels: Recuresive, TaildirSource, Wildcards, reviewboard-missing
> Fix For: 1.9.0
>
> Attachments: FLUME-2953.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In a TaildirSource filegroup, a regular expression can be used for the 
> filename only. Sample usage is: a1.sources.r1.filegroups.f2 = /var/log/test2/.\*log.\*
> If files in many directories need to be tracked, the configuration has to be 
> repeated for each directory. So it's necessary that wildcards are supported 
> in the directory path. Then the user can configure the filegroup like 
> this:
>  a1.sources.r1.filegroups.f2 = /var/log/\*/.\*log.\*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2994) flume-taildir-source: support for windows

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2994:

Fix Version/s: (was: 1.8.0)
   1.9.0

> flume-taildir-source: support for windows
> -
>
> Key: FLUME-2994
> URL: https://issues.apache.org/jira/browse/FLUME-2994
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources, Windows
>Affects Versions: 1.7.0
>Reporter: Jason Kushmaul
>Assignee: Jason Kushmaul
>Priority: Trivial
> Fix For: 1.9.0
>
> Attachments: FLUME-2994-3.patch, taildir-mac.conf, taildir-win8.1.conf
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The current implementation of flume-taildir-source does not support Windows.
> The only reason for this, from what I can see, is a simple call to 
> Files.getAttribute(file.toPath(), "unix:ino");
> I've tested an equivalent for Windows (which of course does not work on 
> non-Windows systems). With an OS switch we should be able to identify a file 
> independently of its file name on either system.
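
A sketch of the kind of OS switch described, using the NIO fileKey as the Windows-side identifier (this is an assumption; the actual patch may identify files differently):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

/** Sketch only: returns a stable identifier for a file that survives renames. */
public final class FileIds {
  private FileIds() { }

  public static String fileId(Path path) throws IOException {
    if (System.getProperty("os.name").toLowerCase().contains("windows")) {
      // fileKey is built from volume and file index on NTFS; may be null on some filesystems
      return String.valueOf(
          Files.readAttributes(path, BasicFileAttributes.class).fileKey());
    }
    // current TaildirSource behaviour: the inode number on unix-like systems
    return String.valueOf(Files.getAttribute(path, "unix:ino"));
  }
}
{code}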



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2912) thrift Sources/Sinks can only authenticate with kerberos principal in format with hostname

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2912:

Fix Version/s: (was: 1.8.0)
   1.9.0

> thrift Sources/Sinks can only authenticate with kerberos principal in  format 
> with hostname
> ---
>
> Key: FLUME-2912
> URL: https://issues.apache.org/jira/browse/FLUME-2912
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: Ping Wang
>Assignee: Johny Rufus
> Fix For: 1.9.0
>
>
> Using Thrift Sources/Sinks in a Kerberos environment, the Flume agents
> only work with a principal in the format "name/_h...@your-realm.com".  
> If using another valid principal in the format "n...@your-realm.com", it will 
> hit an ERROR of "GSS initiate failed".  
> Here's the configuration file:
> g1.sources.source1.type = spooldir
> g1.sources.source1.spoolDir = /test
> g1.sources.source1.fileHeader = false
> g1.sinks.sink1.type = thrift
> g1.sinks.sink1.hostname = localhost
> g1.sinks.sink1.port = 5
> g1.channels.channel1.type = memory
> g1.channels.channel1.capacity = 1000
> g1.channels.channel1.transactionCapacity = 100
> g1.sources.source1.channels = channel1
> g1.sinks.sink1.channel = channel1
> g2.sources = source2
> g2.sinks = sink2
> g2.channels = channel2
> g2.sources.source2.type = thrift
> g2.sources.source2.bind = localhost
> g2.sources.source2.port = 5
> g2.sinks.sink2.type = hdfs
> g2.sinks.sink2.hdfs.path = /tmp
> g2.sinks.sink2.hdfs.filePrefix = thriftData
> g2.sinks.sink2.hdfs.writeFormat = Text
> g2.sinks.sink2.hdfs.fileType = DataStream
> g2.channels.channel2.type = memory
> g2.channels.channel2.capacity = 1000
> g2.channels.channel2.transactionCapacity = 100
> g2.sources.source2.channels = channel2
> g2.sinks.sink2.channel = channel2
> g1.sinks.sink1.kerberos = true
> g1.sinks.sink1.client-principal = flume/hostn...@xxx.com
> g1.sinks.sink1.client-keytab
> = /etc/security/keytabs/flume-1563.server.keytab
> g1.sinks.sink1.server-principal = flume/hostn...@xxx.com
> g2.sources.source2.kerberos = true
> g2.sources.source2.agent-principal = flume/hostn...@xxx.com
> g2.sources.source2.agent-keytab
> = /etc/security/keytabs/flume-1563.server.keytab
> If using another valid principal like "t...@ibm.com" as below, it will hit an error:
> g1.sinks.sink1.kerberos = true
> g1.sinks.sink1.client-principal = t...@ibm.com
> g1.sinks.sink1.client-keytab = /home/test/test.keytab
> g1.sinks.sink1.server-principal = t...@ibm.com
> g2.sources.source2.kerberos = true
> g2.sources.source2.agent-principal = t...@ibm.com
> g2.sources.source2.agent-keytab = /home/test/test.keytab
> Agent g1:
> ERROR server.TThreadPoolServer: Error occurred during processing of
> message.
> java.lang.RuntimeException:
> org.apache.thrift.transport.TTransportException: Peer indicated failure:
> GSS initiate failed
>     at org.apache.thrift.transport.TSaslServerTransport
> $Factory.getTransport(TSaslServerTransport.java:219)
>     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run
> (TThreadPoolServer.java:189)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker
> (ThreadPoolExecutor.java:1142)
> Agent g2:
> ERROR transport.TSaslTransport: SASL negotiation failure
> javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Server not
> found in Kerberos database (7) - UNKNOWN_SERVER)]
>     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge
> (GssKrb5Client.java:211)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2716) File Channel cannot handle capacity Integer.MAX_VALUE

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2716:

Fix Version/s: (was: 1.8.0)
   1.9.0

> File Channel cannot handle capacity Integer.MAX_VALUE
> -
>
> Key: FLUME-2716
> URL: https://issues.apache.org/jira/browse/FLUME-2716
> Project: Flume
>  Issue Type: Bug
>  Components: Channel, File Channel
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Dong Zhao
>  Labels: unit-test-missing
> Fix For: 1.9.0
>
> Attachments: FLUME-2716.patch
>
>
> If capacity is set to Integer.MAX_VALUE (2147483647), the checkpoint file size 
> is wrongly calculated as 8224. The calculation should first cast the int to 
> long and then calculate totalBytes. See the patch for details. Thanks.
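
The arithmetic can be reproduced in isolation. Assuming the checkpoint is sized as (capacity + header slots) * 8 bytes with a 1029-slot header (an assumption, chosen because it reproduces the reported 8224), the int multiplication overflows:

{code}
/** Sketch only: shows the int overflow; the 1029-slot header is an assumption that reproduces 8224. */
public final class CheckpointSizeOverflow {
  private static final int HEADER_SLOTS = 1029;   // assumed header size
  private static final int SLOT_BYTES = 8;        // one long per slot

  public static void main(String[] args) {
    int capacity = Integer.MAX_VALUE;

    // buggy: everything stays int and wraps around
    int wrong = (capacity + HEADER_SLOTS) * SLOT_BYTES;
    // fixed: widen to long before adding and multiplying
    long right = ((long) capacity + HEADER_SLOTS) * SLOT_BYTES;

    System.out.println(wrong);   // 8224
    System.out.println(right);   // 17179877408
  }
}
{code}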



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2980) Automated concurrent Kafka offset migration test

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2980:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Automated concurrent Kafka offset migration test
> 
>
> Key: FLUME-2980
> URL: https://issues.apache.org/jira/browse/FLUME-2980
> Project: Flume
>  Issue Type: Bug
>  Components: Kafka Channel
>Affects Versions: 1.6.0
>Reporter: Mike Percy
>Assignee: Umesh Chaudhary
> Fix For: 1.9.0
>
> Attachments: FLUME-2980-0.patch
>
>
> The Kafka Channel needs an automated offset migration test. See also 
> FLUME-2972



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2989) Kafka Channel metrics missing eventTakeAttemptCount and eventPutAttemptCount

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2989:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Kafka Channel metrics missing eventTakeAttemptCount and eventPutAttemptCount
> 
>
> Key: FLUME-2989
> URL: https://issues.apache.org/jira/browse/FLUME-2989
> Project: Flume
>  Issue Type: Bug
>  Components: Kafka Channel
>Affects Versions: 1.5.0, 1.6.0, 1.5.1
>Reporter: Denes Arvay
>Assignee: Umesh Chaudhary
>Priority: Minor
> Fix For: 1.9.0
>
> Attachments: FLUME-2989-0.patch, FLUME-2989-1.patch
>
>
> {{eventTakeAttemptCount}} and {{eventPutAttemptCount}} counters don't get 
> incremented in Kafka Channel



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2894) Flume components should stop in the correct order (graceful shutdown)

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2894:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Flume components should stop in the correct order (graceful shutdown)
> -
>
> Key: FLUME-2894
> URL: https://issues.apache.org/jira/browse/FLUME-2894
> Project: Flume
>  Issue Type: Bug
>  Components: Channel, Node, Sinks+Sources
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Piotr Wiecek
>Assignee: Laxman
> Fix For: 1.9.0
>
> Attachments: FLUME-2894.patch
>
>
> Flume components should be stopped in the right way:
> * stop the sources (in order not to receive further events),
> * wait until all events within the channels are consumed by the sinks,
> * stop the channels and the sinks.
> Currently, the shutdown hook stops the components in a random manner.
> E.g.: SINK, CHANNEL, SOURCE.
> Components are stored in the HashMap:
> {code:borderStyle=solid}
> Map<LifecycleAware, Supervisoree> supervisedProcesses;
> ...
> supervisedProcesses = new HashMap<LifecycleAware, Supervisoree>();
> ...
> @Override
>   public synchronized void stop() {
>   ...
>   for (final Entry<LifecycleAware, Supervisoree> entry : supervisedProcesses
> .entrySet()) {
>   if (entry.getKey().getLifecycleState().equals(LifecycleState.START)) {
> entry.getValue().status.desiredState = LifecycleState.STOP;
> entry.getKey().stop();
>   }
> }
> 
> {code}
> The problems we can have:
> * not all events will be consumed (the sink may be stopped first)
> * the source will continue to accept messages even though other components are 
> already stopped
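
A minimal sketch of the intended ordering, with simplified stand-in types rather than the real LifecycleSupervisor API:

{code}
import java.util.List;

/** Sketch only: stop sources first, wait (bounded) for channels to drain, then stop sinks and channels. */
public final class OrderedShutdown {
  public interface Component { void stop(); }
  public interface Channel extends Component { boolean isEmpty(); }

  public static void shutdown(List<Component> sources, List<Channel> channels,
                              List<Component> sinks, long drainTimeoutMs)
      throws InterruptedException {
    for (Component source : sources) {          // 1. no new events can enter
      source.stop();
    }
    long deadline = System.currentTimeMillis() + drainTimeoutMs;
    boolean drained = false;
    while (!drained && System.currentTimeMillis() < deadline) {   // 2. bounded drain
      drained = channels.stream().allMatch(Channel::isEmpty);
      if (!drained) {
        Thread.sleep(100);
      }
    }
    for (Component sink : sinks) {              // 3. sinks, then channels
      sink.stop();
    }
    for (Channel channel : channels) {
      channel.stop();
    }
  }
}
{code}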



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2436) Make hadoop-2 the default build profile

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2436:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Make hadoop-2 the default build profile
> ---
>
> Key: FLUME-2436
> URL: https://issues.apache.org/jira/browse/FLUME-2436
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Johny Rufus
>  Labels: build
> Fix For: 1.9.0
>
> Attachments: FLUME-2436.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2854) Parameterize jetty version in pom

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2854:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Parameterize jetty version in pom
> -
>
> Key: FLUME-2854
> URL: https://issues.apache.org/jira/browse/FLUME-2854
> Project: Flume
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3107) When batchSize of sink greater than transactionCapacity of File Channel, Flume can produce endless data

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3107:

Fix Version/s: (was: 1.8.0)
   1.9.0

> When batchSize of sink greater than transactionCapacity of File Channel, 
> Flume can produce endless data
> ---
>
> Key: FLUME-3107
> URL: https://issues.apache.org/jira/browse/FLUME-3107
> Project: Flume
>  Issue Type: Bug
>  Components: File Channel
>Affects Versions: 1.7.0
>Reporter: Yongxi Zhang
> Fix For: 1.9.0
>
> Attachments: FLUME-3107-0.patch
>
>
> This problem is similar to the one in FLUME-3106. Flume can produce endless 
> data when the sink's batchSize is greater than the File Channel's transactionCapacity; 
> you can reproduce it with the following config:
> {code:xml}
> agent.sources = src1
> agent.sinks = sink1
> agent.channels = ch2
> agent.sources.src1.type = spooldir
> agent.sources.src1.channels = ch2
> agent.sources.src1.spoolDir = /home/kafka/flumeSpooldir
> agent.sources.src1.fileHeader = false
> agent.sources.src1.batchSize = 5
> agent.channels.ch2.type=file
> agent.channels.ch2.capacity=100
> agent.channels.ch2.checkpointDir=/home/kafka/flumefilechannel/checkpointDir
> agent.channels.ch2.dataDirs=/home/kafka/flumefilechannel/dataDirs
> agent.channels.ch2.transactionCapacity=5
> agent.sinks.sink1.type = hdfs
> agent.sinks.sink1.channel = ch2
> agent.sinks.sink1.hdfs.path = hdfs://kafka1:9000/flume/
> agent.sinks.sink1.hdfs.rollInterval=1
> agent.sinks.sink1.hdfs.fileType = DataStream
> agent.sinks.sink1.hdfs.writeFormat = Text
> agent.sinks.sink1.hdfs.batchSize = 10
> {code}
> Exceptions like this:
> {code:xml}
> 17/06/09 17:16:18 ERROR flume.SinkRunner: Unable to deliver event. Exception 
> follows.
> org.apache.flume.EventDeliveryException: org.apache.flume.ChannelException: 
> Take list for FileBackedTransaction, capacity 5 full, consider
> committing more frequently, increasing capacity, or increasing thread count. 
> [channel=ch2]
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:451)
> at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.flume.ChannelException: Take list for 
> FileBackedTransaction, capacity 5 full, consider committing more frequently, 
> in
> creasing capacity, or increasing thread count. [channel=ch2]
> at 
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:531)
> at 
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> at 
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:362)
> ... 3 more
> {code}
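
For comparison, a hedged tweak of the config above that avoids the oversized take by keeping the sink batch no larger than the channel transaction (the values are only an example, not a recommendation):

{noformat}
agent.channels.ch2.transactionCapacity = 10
agent.sinks.sink1.hdfs.batchSize = 10
{noformat}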



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2944) Remove guava as dependency

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2944:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Remove guava as dependency
> --
>
> Key: FLUME-2944
> URL: https://issues.apache.org/jira/browse/FLUME-2944
> Project: Flume
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Lior Zeno
> Fix For: 1.9.0
>
>
> Guava is a very popular dependency which often causes version collisions, 
> especially because it lacks backwards compatibility. 
> Adding new dependencies that rely on guava requires us to shade these 
> dependencies. This is hard to maintain and bloats our distribution. 
> Therefore, we should omit guava as a dependency and delegate these problems 
> to our users.
> If a user would like to use X-Source with Y-Sink, where X and Y rely on 
> colliding versions of guava, the user will have to shade one of them.
> This task should be easier, now that we have moved to Java 1.7. If we move to 
> 1.8 in this release, then most of it becomes virtually trivial.
> Flume relies on the following guava imports:
> * com.google.common.base.Preconditions
> * com.google.common.collect.ArrayListMultimap
> * com.google.common.collect.ListMultimap
> * com.google.common.collect.Lists
> * com.google.common.collect.Maps
> * com.google.common.annotations.VisibleForTesting
> * com.google.common.util.concurrent.ThreadFactoryBuilder
> * com.google.common.base.Charsets
> * com.google.common.base.Strings
> * com.google.common.base.Throwables
> * com.google.common.eventbus.EventBus
> * com.google.common.eventbus.Subscribe
> * com.google.common.primitives.UnsignedBytes
> * com.google.common.cache.CacheBuilder
> * com.google.common.cache.CacheLoader
> * com.google.common.cache.LoadingCache
> * com.google.common.util.concurrent.UncheckedExecutionException
> * com.google.common.collect.HashMultimap
> * com.google.common.collect.SetMultimap
> * com.google.common.collect.ImmutableMap
> * com.google.common.base.Joiner
> * com.google.common.io.Files
> * com.google.common.io.Resources
> * com.google.common.collect.ImmutableSortedSet
> * com.google.common.base.Splitter
> * com.google.common.collect.Iterables
> * com.google.common.base.Optional
> * com.google.common.io.ByteStreams
> * com.google.common.collect.HashBasedTable
> * com.google.common.primitives.Longs
> * com.google.common.collect.Sets
> * com.google.common.collect.ImmutableListMultimap
> * com.google.common.collect.ListMultimap
> * com.google.common.collect.Table
> * com.google.common.base.Function
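
As a rough illustration (assuming the move to Java 8 mentioned in the description), several of the guava classes listed above have straightforward JDK replacements; a minimal sketch, not part of the actual work item:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.Optional;

class GuavaFreeExamples {
  static String describe(String name, List<String> tags) {
    // com.google.common.base.Preconditions.checkNotNull -> Objects.requireNonNull
    Objects.requireNonNull(name, "name must not be null");
    // com.google.common.base.Joiner -> String.join
    String joined = String.join(",", tags);
    // com.google.common.base.Charsets -> java.nio.charset.StandardCharsets
    byte[] bytes = joined.getBytes(StandardCharsets.UTF_8);
    // com.google.common.base.Optional -> java.util.Optional
    Optional<String> maybe = Optional.of(name);
    return maybe.get() + ":" + bytes.length;
  }

  public static void main(String[] args) {
    System.out.println(describe("flume", Arrays.asList("agent", "sink")));
  }
}
{code}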



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2461) memoryChannel bytesRemaining counting error

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2461:

Fix Version/s: (was: 1.8.0)
   1.9.0

> memoryChannel bytesRemaining counting error
> ---
>
> Key: FLUME-2461
> URL: https://issues.apache.org/jira/browse/FLUME-2461
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.5.0.1
>Reporter: yangwei
>Priority: Minor
>  Labels: patch, reviewboard-missing, unit-test-missing
> Fix For: 1.9.0
>
> Attachments: flume-2461-0.patch
>
>
> In the doRollback function, putByteCounter permits are released on bytesRemaining. 
> This is wrong in the cases below:
> In the doCommit function:
> 1)
> if(!bytesRemaining.tryAcquire(putByteCounter, keepAlive,
>   TimeUnit.SECONDS)) {
>   throw new ChannelException("Cannot commit transaction. Heap space " 
> +
> "limit of " + byteCapacity + "reached. Please increase heap 
> space" +
> " allocated to the channel as the sinks may not be keeping up " +
> "with the sources");
> }
> 2)
> if(!queueRemaining.tryAcquire(-remainingChange, keepAlive, 
> TimeUnit.SECONDS)) {
>   bytesRemaining.release(putByteCounter);
>   throw new ChannelFullException("Space for commit to queue couldn't 
> be acquired." +
>   " Sinks are likely not keeping up with sources, or the buffer 
> size is too tight");
> }
> When they throw ChannelException, bytesRemaining should not release any 
> permits.
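
A generic sketch of the accounting principle involved (this is not the attached patch): release a semaphore on rollback only if the matching acquire actually succeeded.

{code}
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

class GuardedPermits {
  private final Semaphore bytesRemaining = new Semaphore(1024);
  private boolean bytesAcquired = false;

  boolean commit(int putByteCounter) throws InterruptedException {
    // remember whether the permits were really taken
    bytesAcquired = bytesRemaining.tryAcquire(putByteCounter, 3, TimeUnit.SECONDS);
    return bytesAcquired;   // the caller treats false as a failed commit
  }

  void rollback(int putByteCounter) {
    if (bytesAcquired) {                      // only undo what was actually acquired
      bytesRemaining.release(putByteCounter);
      bytesAcquired = false;
    }
  }
}
{code}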



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2464) Remove hadoop-2 profile

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2464:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Remove hadoop-2 profile
> ---
>
> Key: FLUME-2464
> URL: https://issues.apache.org/jira/browse/FLUME-2464
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
>Assignee: Johny Rufus
> Fix For: 1.9.0
>
>
> This profile is quite painful to maintain since there is no hbase-0.94.2 
> artifact against hadoop-2. Let's get rid of this since hbase-98 profile takes 
> care of hadoop 2 builds with hbase-98



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2973) Deadlock in hdfs sink

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2973:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Deadlock in hdfs sink
> -
>
> Key: FLUME-2973
> URL: https://issues.apache.org/jira/browse/FLUME-2973
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: Denes Arvay
>Assignee: Denes Arvay
>Priority: Critical
>  Labels: hdfssink
> Fix For: 1.9.0
>
> Attachments: FLUME-2973-1.patch, FLUME-2973.patch
>
>
> The automatic close of BucketWriters (when the open file count reaches 
> {{hdfs.maxOpenFiles}}) and the file rolling thread can end up in a deadlock.
> When creating a new {{BucketWriter}} in {{HDFSEventSink}} it locks 
> {{HDFSEventSink.sfWritersLock}} and the {{close()}} called in 
> {{HDFSEventSink.sfWritersLock.removeEldestEntry}} tries to lock the 
> {{BucketWriter}} instance.
> On the other hand if the file is being rolled in 
> {{BucketWriter.close(boolean)}} it locks the {{BucketWriter}} instance first 
> and in the close callback it tries to lock the {{sfWritersLock}}.
> The chance of this deadlock is higher when the {{hdfs.maxOpenFiles}} 
> value is low (1).
> Script to reproduce: 
> https://gist.github.com/adenes/96503a6e737f9604ab3ee9397a5809ff
> (put to 
> {{flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs}})
> Deadlock usually occurs before ~30 iterations.
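
An illustrative sketch of the usual cure for this kind of lock-order inversion: acquire the two locks in the same global order on both paths. The names are borrowed from the description; this is not the attached patch.

{code}
class LockOrdering {
  private final Object sfWritersLock = new Object();
  private final Object bucketWriterLock = new Object();

  void closeFromSink() {
    synchronized (sfWritersLock) {        // always sfWritersLock first ...
      synchronized (bucketWriterLock) {   // ... then the writer lock
        // evict and close the bucket writer
      }
    }
  }

  void closeFromRollTimer() {
    synchronized (sfWritersLock) {        // same order on the rolling path
      synchronized (bucketWriterLock) {
        // roll the file and update the writers map
      }
    }
  }
}
{code}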



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2976) Exception when JMS source tries to connect to a Weblogic server without authentication

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2976:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Exception when JMS source tries to connect to a Weblogic server without 
> authentication
> --
>
> Key: FLUME-2976
> URL: https://issues.apache.org/jira/browse/FLUME-2976
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: Denes Arvay
>Assignee: Denes Arvay
>  Labels: jms
> Fix For: 1.9.0
>
> Attachments: FLUME-2976-2.patch, FLUME-2976.patch
>
>
> If no {{userName}} and {{passwordFile}} are set for the JMS source, it sets the 
> password to {{Optional("")}} (see: 
> https://github.com/apache/flume/blob/trunk/flume-ng-sources/flume-jms-source/src/main/java/org/apache/flume/source/jms/JMSSource.java#L127)
> This leads to an exception in the weblogic jndi context implementation when 
> trying to connect to a weblogic jms server.
> {noformat}
> java.lang.IllegalArgumentException: The 'java.naming.security.principal' 
> property has not been specified
>   at weblogic.jndi.Environment.getSecurityUser(Environment.java:562)
>   at 
> weblogic.jndi.WLInitialContextFactoryDelegate.pushSubject(WLInitialContextFactoryDelegate.java:665)
>   at 
> weblogic.jndi.WLInitialContextFactoryDelegate.newContext(WLInitialContextFactoryDelegate.java:485)
>   at 
> weblogic.jndi.WLInitialContextFactoryDelegate.getInitialContext(WLInitialContextFactoryDelegate.java:373)
>   at weblogic.jndi.Environment.getContext(Environment.java:319)
>   at weblogic.jndi.Environment.getContext(Environment.java:288)
>   at 
> weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
>   at 
> javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
>   at 
> javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:307)
>   at javax.naming.InitialContext.init(InitialContext.java:242)
>   at javax.naming.InitialContext.(InitialContext.java:216)
>   at 
> org.apache.flume.source.jms.InitialContextFactory.create(InitialContextFactory.java:28)
>   at org.apache.flume.source.jms.JMSSource.doConfigure(JMSSource.java:223)
> {noformat}
> Changing the above mentioned line to {{Optional.absent()}} fixes the issue.
> [~brocknoland]: Is there any specific reason for setting the password to 
> {{Optional("")}} when there is no {{passwordFile}} set?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2807) Add a simple split interceptor

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2807:

Fix Version/s: (was: 1.8.0)
   1.9.0

> Add a simple split interceptor 
> ---
>
> Key: FLUME-2807
> URL: https://issues.apache.org/jira/browse/FLUME-2807
> Project: Flume
>  Issue Type: Improvement
>  Components: Sinks+Sources
>Affects Versions: 1.6.0, 1.7.0
>Reporter: seekerak
>  Labels: features, patch, reviewboard-missing, unit-test-missing
> Fix For: 1.9.0
>
> Attachments: FLUME-2807.patch
>
>
> A simple split interceptor that aims to deal with the following situation:
> the source data looks like this:
> “
> 1,tom,boy,13
> 2,lili,girl,14
> 3,jack,boy,10
> ...
> ”
> and I would like to sink the source data into two different HDFS directories, named 
> boy and girl, like this:
> “
> hdfs:///sink/boy/
> hdfs:///sink/girl/
> ”
> we can use this interceptor to accomplish this goal.
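
For what it's worth, a similar split can already be approximated with existing components; a hedged sketch using the Regex Extractor Interceptor and a header reference in the HDFS path (property names follow the Flume user guide, the component names are illustrative, and the regex assumes the sample format shown above):

{noformat}
agent.sources.src1.interceptors = i1
agent.sources.src1.interceptors.i1.type = regex_extractor
agent.sources.src1.interceptors.i1.regex = ^[^,]*,[^,]*,([^,]*),.*$
agent.sources.src1.interceptors.i1.serializers = s1
agent.sources.src1.interceptors.i1.serializers.s1.name = gender
agent.sinks.sink1.hdfs.path = hdfs:///sink/%{gender}/
{noformat}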



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2880) how to config a custom hbase sink?

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2880:
---
Fix Version/s: notrack

> how to config a custom hbase sink?
> --
>
> Key: FLUME-2880
> URL: https://issues.apache.org/jira/browse/FLUME-2880
> Project: Flume
>  Issue Type: Question
>Reporter: Jack Jiang
> Fix For: notrack
>
>
> My Flume version is 1.6.0 and I want to sink logs to HBase. Before doing so I 
> need to process the logs, so I defined a class named 
> AsyncHbaseLogEventSerializer; it implements the 
> AsyncHbaseEventSerializer interface and I override its methods. Here is the sink 
> part of my configuration file:
> server.sinks.hdfs_sink.type = asynchbase
> server.sinks.hdfs_sink.table = access_log
> server.sinks.hdfs_sink.columnFamily = cb
> server.sinks.hdfs_sink.serializer = 
> com.huateng.flume.sink.hbase.AsyncHbaseLogEventSerializer
> server.sinks.hdfs_sink.serializer.columns = 
> host_name,remote_host,remote_user,event_ts,req,req_status,resp_bytes,ref,agent
> My question is: should I package my own class as a jar and put it into 
> lib?
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2101) Flume 1.4.0 Umbrella JIRA

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2101:
---
Fix Version/s: 1.4.0

> Flume 1.4.0 Umbrella JIRA
> -
>
> Key: FLUME-2101
> URL: https://issues.apache.org/jira/browse/FLUME-2101
> Project: Flume
>  Issue Type: Umbrella
>Reporter: Mike Percy
>Assignee: Mike Percy
> Fix For: 1.4.0
>
>
> Flume 1.4.0 Umbrella JIRA



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2399) Flume 1.5.0.1 release

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-2399:
---
Fix Version/s: 1.5.0.1

> Flume 1.5.0.1 release
> -
>
> Key: FLUME-2399
> URL: https://issues.apache.org/jira/browse/FLUME-2399
> Project: Flume
>  Issue Type: Bug
>Reporter: Hari Shreedharan
> Fix For: 1.5.0.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-1732) Build is failing due to netty problems

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay updated FLUME-1732:
---
Fix Version/s: notrack

> Build is failing due to netty problems
> --
>
> Key: FLUME-1732
> URL: https://issues.apache.org/jira/browse/FLUME-1732
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brock Noland
>Assignee: Mike Percy
> Fix For: notrack
>
> Attachments: FLUME-1732-3.patch, FLUME-1732.patch
>
>
> FLUME-1723 changed how we bring in netty and that seems to have broken the 
> build https://builds.apache.org/job/flume-trunk/330/#showFailuresLink



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLUME-3163) JIRA cleanup for 1.8.0 release

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo reassigned FLUME-3163:
---

Assignee: Ferenc Szabo

> JIRA cleanup for 1.8.0 release
> --
>
> Key: FLUME-3163
> URL: https://issues.apache.org/jira/browse/FLUME-3163
> Project: Flume
>  Issue Type: Sub-task
>Reporter: Ferenc Szabo
>Assignee: Ferenc Szabo
> Fix For: 1.8.0
>
>
> We should resolve all issues targeted at the 1.8.0 version or move them to 1.8.1 
> or 1.9.0.
> Filter for these issues:
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20Flume%20AND%20(Status%20!%3D%20Resolved%20OR%20Resolution%20!%3D%20Fixed)%20AND%20fixVersion%20%3D%201.8.0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (FLUME-3173) Upgrade joda-time

2017-09-12 Thread Denes Arvay (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denes Arvay resolved FLUME-3173.

Resolution: Fixed

Thank you [~Azat] for reporting it, [~mcsanady] for the patch and 
[~marcellhegedus] for the review.

> Upgrade joda-time
> -
>
> Key: FLUME-3173
> URL: https://issues.apache.org/jira/browse/FLUME-3173
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: Azat Nizametdinov
>Assignee: Miklos Csanady
> Fix For: 1.8.0
>
>
> Flume 1.7 depends on joda-time version 2.1, which uses an outdated tz database.  
> For example, the following code
> {code}
> new org.joda.time.DateTime(
> org.joda.time.DateTimeZone.forID("Europe/Moscow")
> ).toString()
> {code}
> returns a time with offset {{+04:00}}, but the Moscow timezone has been UTC+3 since 2014.
> Furthermore, this version of joda-time does not allow specifying a custom tz 
> database folder, in contrast to newer versions.
> It affects {{RegexExtractorInterceptorMillisSerializer}}. Test to reproduce 
> the bug:
> {code}
> public void testMoscowTimezone() throws Exception {
> TimeZone.setDefault(TimeZone.getTimeZone("Europe/Moscow"));
> String pattern = "yyyy-MM-dd HH:mm:ss";
> SimpleDateFormat format = new SimpleDateFormat(pattern);
> String dateStr = "2017-09-10 10:00:00";
> Date expectedDate = format.parse(dateStr);
> RegexExtractorInterceptorMillisSerializer sut = new 
> RegexExtractorInterceptorMillisSerializer();
> Context context = new Context();
> context.put("pattern", pattern);
> sut.configure(context);
> assertEquals(String.valueOf(expectedDate.getTime()), 
> sut.serialize(dateStr));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2333) HTTP source handler doesn't allow for responses

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2333:

Fix Version/s: (was: 1.8.0)
   1.8.1

> HTTP source handler doesn't allow for responses
> ---
>
> Key: FLUME-2333
> URL: https://issues.apache.org/jira/browse/FLUME-2333
> Project: Flume
>  Issue Type: Improvement
>Reporter: Jeremy Karlson
>Assignee: Jeremy Karlson
> Fix For: 1.8.1
>
> Attachments: FLUME-2333-2.diff, FLUME-2333-3.diff, FLUME-2333-4.diff, 
> FLUME-2333-CUMULATIVE.diff, FLUME-2333.diff
>
>
> Existing HTTP source handlers receive events via an HTTPServletRequest.  This 
> works, but because the handler doesn't have access to the 
> HTTPServletResponse, there is no way to return a response.  This makes it 
> unsuitable for any protocol that relies on bidirectional 
> communication.
> My solution: In addition to the existing HTTPSource interface, I've added a 
> BidirectionalHTTPSource interface that is provided the servlet response as a 
> parameter.  I've made some changes in the HTTP source allow for both types to 
> co-exist, and my changes shouldn't affect anyone who is already using the 
> existing interface.
> Also includes minor documentation updates to reflect this.
> Review: https://reviews.apache.org/r/18555/
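
A rough sketch of the interface shape described above; the name and method signature here are guesses for illustration only, the actual change is in the linked review:

{code}
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.flume.Event;

public interface BidirectionalHTTPSourceHandler {
  // The response is passed in so the handler can write an application-level reply.
  List<Event> getEvents(HttpServletRequest request, HttpServletResponse response)
      throws Exception;
}
{code}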



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-402) Add a "Known Issues" section to flume documentation.

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-402:
---
Fix Version/s: (was: 1.8.0)
   1.8.1

> Add a "Known Issues" section to flume documentation.
> 
>
> Key: FLUME-402
> URL: https://issues.apache.org/jira/browse/FLUME-402
> Project: Flume
>  Issue Type: Documentation
>  Components: Docs, Technical Debt
>Affects Versions: 0.9.4
>Reporter: Jonathan Hsieh
> Fix For: 1.8.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLUME-3173) Upgrade joda-time

2017-09-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162927#comment-16162927
 ] 

ASF subversion and git services commented on FLUME-3173:


Commit d434d23dadc411e5d7486447316172c495d70f22 in flume's branch 
refs/heads/trunk from [~mcsanady]
[ https://git-wip-us.apache.org/repos/asf?p=flume.git;h=d434d23 ]

FLUME-3173. Upgrade joda-time to 2.9.9

This closes #169

Reviewers: Marcell Hegedus

(Miklos Csanady via Denes Arvay)


> Upgrade joda-time
> -
>
> Key: FLUME-3173
> URL: https://issues.apache.org/jira/browse/FLUME-3173
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: Azat Nizametdinov
>Assignee: Miklos Csanady
> Fix For: 1.8.0
>
>
> Flume 1.7 depends on joda-time version 2.1, which uses an outdated tz database.  
> For example, the following code
> {code}
> new org.joda.time.DateTime(
> org.joda.time.DateTimeZone.forID("Europe/Moscow")
> ).toString()
> {code}
> returns a time with offset {{+04:00}}, but the Moscow timezone has been UTC+3 since 2014.
> Furthermore, this version of joda-time does not allow specifying a custom tz 
> database folder, in contrast to newer versions.
> It affects {{RegexExtractorInterceptorMillisSerializer}}. Test to reproduce 
> the bug:
> {code}
> public void testMoscowTimezone() throws Exception {
> TimeZone.setDefault(TimeZone.getTimeZone("Europe/Moscow"));
> String pattern = "yyyy-MM-dd HH:mm:ss";
> SimpleDateFormat format = new SimpleDateFormat(pattern);
> String dateStr = "2017-09-10 10:00:00";
> Date expectedDate = format.parse(dateStr);
> RegexExtractorInterceptorMillisSerializer sut = new 
> RegexExtractorInterceptorMillisSerializer();
> Context context = new Context();
> context.put("pattern", pattern);
> sut.configure(context);
> assertEquals(String.valueOf(expectedDate.getTime()), 
> sut.serialize(dateStr));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-1092) We should have a consistent shutdown contract with all Flume components

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-1092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-1092:

Fix Version/s: (was: 1.8.0)
   1.8.1

> We should have a consistent shutdown contract with all Flume components
> ---
>
> Key: FLUME-1092
> URL: https://issues.apache.org/jira/browse/FLUME-1092
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.1.0
>Reporter: Will McQueen
> Fix For: 1.8.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flume pull request #169: FLUME-3173: Upgrade joda-time

2017-09-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flume/pull/169


---


[jira] [Updated] (FLUME-2811) Taildir source doesn't call stop() on graceful shutdown

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2811:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Taildir source doesn't call stop() on graceful shutdown
> ---
>
> Key: FLUME-2811
> URL: https://issues.apache.org/jira/browse/FLUME-2811
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: Jun Seok Hong
>Assignee: Umesh Chaudhary
>Priority: Critical
>  Labels: newbie
> Fix For: 1.8.1
>
>
> Taildir source doesn't call stop() on graceful shutdown.
> Test configuration.
> source - taildir
> channel - PseudoTxnMemoryChannel / flume-kafka-channel
> sink - none
> I found that Flume sometimes doesn't terminate with the Taildir source. 
> I had to kill the process to terminate it.
> The tailFileProcess() function in TaildirSource.java has an infinite loop.
> When the process is interrupted, a ChannelException occurs, but it doesn't 
> break the infinite loop.
> I think that's the reason why Taildir can't call the stop() function.
> {code:title=TaildirSource.java|borderStyle=solid}
>  private void tailFileProcess(TailFile tf, boolean backoffWithoutNL)
>   throws IOException, InterruptedException {
> while (true) {
>   reader.setCurrentFile(tf);
>   List<Event> events = reader.readEvents(batchSize, backoffWithoutNL);
>   if (events.isEmpty()) {
> break;
>   }
>   sourceCounter.addToEventReceivedCount(events.size());
>   sourceCounter.incrementAppendBatchReceivedCount();
>   try {
> getChannelProcessor().processEventBatch(events);
> reader.commit();
>   } catch (ChannelException ex) {
> logger.warn("The channel is full or unexpected failure. " +
>   "The source will try again after " + retryInterval + " ms");
> TimeUnit.MILLISECONDS.sleep(retryInterval);
> retryInterval = retryInterval << 1;
> retryInterval = Math.min(retryInterval, maxRetryInterval);
> continue;
>   }
>   retryInterval = 1000;
>   sourceCounter.addToEventAcceptedCount(events.size());
>   sourceCounter.incrementAppendBatchAcceptedCount();
>   if (events.size() < batchSize) {
> break;
>   }
> }
>   }
> {code}
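
A generic sketch of one possible shape of a fix (not the committed change): make the retry loop observe interruption and an explicit shutdown flag so that {{stop()}} can return.

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

class InterruptibleRetryLoop {
  private final AtomicBoolean shuttingDown = new AtomicBoolean(false);

  void stop() {
    shuttingDown.set(true);
  }

  void process() throws InterruptedException {
    long retryInterval = 1000;
    while (!shuttingDown.get() && !Thread.currentThread().isInterrupted()) {
      boolean delivered = tryDeliverBatch();
      if (delivered) {
        retryInterval = 1000;                         // reset backoff on success
      } else {
        TimeUnit.MILLISECONDS.sleep(retryInterval);   // throws if interrupted
        retryInterval = Math.min(retryInterval << 1, 60000);
      }
    }
  }

  private boolean tryDeliverBatch() {
    return true;  // placeholder for delivering a batch to the channel
  }
}
{code}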



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2825) Avro Files are not readable with the converter of HDP2.3

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2825:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Avro Files are not readable with the converter of HDP2.3
> 
>
> Key: FLUME-2825
> URL: https://issues.apache.org/jira/browse/FLUME-2825
> Project: Flume
>  Issue Type: Blog - New Blog Request
>  Components: Sinks+Sources
>Affects Versions: 1.5.2
> Environment: HDP2.3
>Reporter: Kettler Karl
> Fix For: 1.8.1
>
>
> Avro files are not readable with the converter of HDP2.3.
> What can we do?
> Obj   avro.schema�
> {"type":"record","name":"Doc","doc":"adoc","fields":[{"name":"id","type":"string"},{"name":"user_friends_count","type":["int","null"]},{"name":"user_location","type":["string","null"]},{"name":"user_description","type":["string","null"]},{"name":"user_statuses_count","type":["int","null"]},{"name":"user_followers_count","type":["int","null"]},{"name":"user_name","type":["string","null"]},{"name":"user_screen_name","type":["string","null"]},{"name":"created_at","type":["string","null"]},{"name":"text","type":["string","null"]},{"name":"retweet_count","type":["long","null"]},{"name":"retweeted","type":["boolean","null"]},{"name":"in_reply_to_user_id","type":["long","null"]},{"name":"source","type":["string","null"]},{"name":"in_reply_to_status_id","type":["long","null"]},{"name":"media_url_https","type":["string","null"]},{"name":"expanded_url","type":["string","null"]}]}����}z��]~/y)��$657453578462875648�,With
>  Long Stroke McGee$Inmate ID: 3893978���� 
> NeeshaYerdMe_CMB(2015-10-23T07:08:42Z0@_ThisIsImani thank you!��ޠ �  
>  http://twitter.com/download/iphone; rel="nofollow">Twitter for 
> iPhoneКߟ����}z��]~/y)�
> Kind regards,
> Karl



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2942) AvroEventDeserializer ignores header from spool source

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2942:

Fix Version/s: (was: 1.8.0)
   1.8.1

> AvroEventDeserializer ignores header from spool source
> --
>
> Key: FLUME-2942
> URL: https://issues.apache.org/jira/browse/FLUME-2942
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: Sebastian Alfers
> Fix For: 1.8.1
>
> Attachments: FLUME-2942-0.patch
>
>
> I have a spool file source and use avro for de-/serialization
> In detail, serialized events store the topic of the kafka sink in the header.
> When I load the events from the spool directory, the headers are ignored. 
> Please see: 
> https://github.com/apache/flume/blob/caa64a1a6d4bc97be5993cb468516e9ffe862794/flume-ng-core/src/main/java/org/apache/flume/serialization/AvroEventDeserializer.java#L122
> You can see that it uses the whole event as the body and does not distinguish between 
> the header and the body encoded by Avro.
> Please verify that this is a bug.
> I fixed this by using the record, which stores the header and body separately.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2876) ElasticSearchSink: indexnameBuilderContext.putAll bug fixes

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2876:

Fix Version/s: (was: 1.8.0)
   1.8.1

> ElasticSearchSink: indexnameBuilderContext.putAll bug fixes
> ---
>
> Key: FLUME-2876
> URL: https://issues.apache.org/jira/browse/FLUME-2876
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.5.0, 1.6.0, 1.7.0
>Reporter: yh123555
>  Labels: unit-test-missing
> Fix For: 1.8.1
>
> Attachments: 
> 0001-ElasticSearchSink-indexnameBuilderContext.putAll-bug.patch
>
>
> ElasticSearchSink: indexnameBuilderContext.putAll bug fix.
> In org.apache.flume.sink.elasticsearch.ElasticSearchSink, 
> indexnameBuilderContext.putAll is wrongly mixed up with serializerContext.putAll.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2871) avro sink reset-connection-interval cause EventDeliveryException

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2871:

Fix Version/s: (was: 1.8.0)
   1.8.1

> avro sink reset-connection-interval cause EventDeliveryException
> 
>
> Key: FLUME-2871
> URL: https://issues.apache.org/jira/browse/FLUME-2871
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.6.0
>Reporter: chenchunbin
> Fix For: 1.8.1
>
>
> I found that the avro sink with reset-connection-interval set will throw this exception:
> 29 Jan 2016 14:01:45,257 ERROR 
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.SinkRunner$PollingRunner.run:160)  - Unable to deliver 
> event. Exception follows.
> org.apache.flume.EventDeliveryException: Failed to send events
>   at 
> org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:392)
>   at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>   at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>   at java.lang.Thread.run(Thread.java:724)
> Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { 
> host: localhost, port: 58989 }: Failed to send batch
>   at 
> org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:315)
>   at 
> org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:376)
>   ... 3 more
> Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { 
> host: localhost, port: 58989 }: Handshake timed out after 3ms
>   at 
> org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:359)
>   at 
> org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:303)
>   ... 4 more
> Caused by: java.util.concurrent.TimeoutException
>   at java.util.concurrent.FutureTask.get(FutureTask.java:201)
>   at 
> org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:357)
>   ... 5 more
> and then I replaced netty-3.5.12.Final.jar with netty-3.10.5.Final.jar, after which the avro 
> sink works well.
> #client.conf
> agent1.channels = c1 
> agent1.sources  = s1
> agent1.sinks= k1
> agent1.channels = c1
> agent1.channels.c1.type = memory
> agent1.channels.c1.capacity = 1
> agent1.channels.c1.transactionCapacity = 100
> agent1.sources.s1.type = exec
> agent1.sources.s1.command = tail -F /var/log/secure
> agent1.sources.s1.restart = true
> agent1.sources.s1.channels = c1
> agent1.sinks.k1.channel = c1
> agent1.sinks.k1.type = avro
> agent1.sinks.k1.hostname = 127.0.0.1
> agent1.sinks.k1.port = 58989
> agent1.sinks.k1.batch-size = 100
> agent1.sinks.k1.reset-connection-interval = 120
> agent1.sinks.k1.compression-type = deflate 
> agent1.sinks.k1.compression-level = 6
> agent1.sinks.k1.connect-timeout = 3
> agent1.sinks.k1.request-timeout = 3
> #center.conf
> agent1.channels = c1 
> agent1.sources  = s1
> agent1.sinks= k1
> agent1.channels = c1
> agent1.channels.c1.type = memory
> agent1.channels.c1.capacity = 1
> agent1.channels.c1.transactionCapacity = 100
> agent1.sources.s1.type = avro
> agent1.sources.s1.bind = 0.0.0.0
> agent1.sources.s1.port = 58989
> agent1.sources.s1.threads = 1 #fast failed
> agent1.sources.s1.channels = c1 
> agent1.sources.s1.compression-type = deflate
> agent1.sinks.k1.type =  file_roll
> agent1.sinks.k1.sink.directory = /tmp/center1
> agent1.sinks.k1.channel = c1
> agent1.sinks.k1.sink.rollInterval = 86400



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2943) Integrate checkstyle - second pass

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2943:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Integrate checkstyle - second pass
> --
>
> Key: FLUME-2943
> URL: https://issues.apache.org/jira/browse/FLUME-2943
> Project: Flume
>  Issue Type: Improvement
>Affects Versions: 1.8.0
>Reporter: Lior Zeno
>Priority: Minor
> Fix For: 1.8.1
>
> Attachments: google_checks.xml
>
>
> In the first phase of this task we used weakened style rules. This issue 
> proposes adding the following rules:
> * Naming conventions
> * Java docs (relaxed version)
> * Disallowing static star imports
> * Disallowing abbreviations 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-2689) reloading conf file leads syslogTcpSource not receives any event

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-2689:

Fix Version/s: (was: 1.8.0)
   1.8.1

> reloading conf file leads syslogTcpSource not receives any event
> 
>
> Key: FLUME-2689
> URL: https://issues.apache.org/jira/browse/FLUME-2689
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.5.1
> Environment: configuring syslog sending logs to remote flume agent
>Reporter: yangwei
>Assignee: yangwei
> Fix For: 1.8.1
>
> Attachments: flume-2689-1.patch, flume-2689-3.patch, 
> flume-2689-trunk.patch
>
>
> Reloading the conf file will stop the old syslog source and start a new syslog source. 
> Stopping the syslog tcp source only closes the NioServerSocketChannel, resulting 
> in the client still sending data through the old channel. In that case, the new 
> source never receives data. tcpdump shows the events have been received, but 
> the new source doesn't see them, and ss shows the client connection stays the same as the old 
> one.
> The right way to stop the syslog source is to close both the NioSocketChannel and the 
> NioServerSocketChannel, and shut down the executor.
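
An illustrative sketch of that shutdown pattern with the Netty 3 API (the class and field names are made up for the example; this is not the committed patch): track every accepted child channel in a ChannelGroup, then close the group and release the factory's threads.

{code}
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.SimpleChannelHandler;
import org.jboss.netty.channel.group.ChannelGroup;
import org.jboss.netty.channel.group.DefaultChannelGroup;

class SyslogTcpShutdownSketch {
  private final ChannelGroup allChannels = new DefaultChannelGroup("syslog-tcp");
  private final ChannelFactory channelFactory;

  SyslogTcpShutdownSketch(ChannelFactory channelFactory, Channel serverChannel) {
    this.channelFactory = channelFactory;
    allChannels.add(serverChannel);   // include the listening socket
  }

  /** Handler installed in the pipeline so every accepted connection is tracked. */
  class Tracker extends SimpleChannelHandler {
    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
      allChannels.add(e.getChannel());
      super.channelOpen(ctx, e);
    }
  }

  void stop() {
    // Close the listening socket *and* all accepted sockets, then release
    // the boss/worker executors, instead of closing only the server channel.
    allChannels.close().awaitUninterruptibly();
    channelFactory.releaseExternalResources();
  }
}
{code}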



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3039) TestSyslogUtils fails when system locale is not English

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3039:

Fix Version/s: (was: 1.8.0)
   1.8.1

> TestSyslogUtils fails when system locale is not English
> ---
>
> Key: FLUME-3039
> URL: https://issues.apache.org/jira/browse/FLUME-3039
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: 1.7.0
>Reporter: pingwang
>Priority: Minor
> Fix For: 1.8.1
>
> Attachments: FLUME-3039.patch
>
>
> Running the Flume 1.7.0 unit tests hit the error below:
> ...
> TestHeader9(org.apache.flume.source.TestSyslogUtils)  Time elapsed: 10 sec  
> <<< ERROR!
> java.text.ParseException: Unparseable date: "2016十二月  28 03:12:25"
> at java.text.DateFormat.parse(DateFormat.java:366)
> at 
> org.apache.flume.source.TestSyslogUtils.checkHeader(TestSyslogUtils.java:251)
> at 
> org.apache.flume.source.TestSyslogUtils.checkHeader(TestSyslogUtils.java:266)
> at 
> org.apache.flume.source.TestSyslogUtils.TestHeader9(TestSyslogUtils.java:143)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15
> ...
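
The usual remedy for this class of failure is to pin the locale rather than relying on the platform default; a minimal sketch (the date pattern below is illustrative, not the one used by SyslogUtils):

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

class LocaleSafeParsing {
  static Date parseSyslogDate(String raw) throws ParseException {
    // "MMM" only matches English month abbreviations when the locale is English
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy MMM d HH:mm:ss", Locale.ENGLISH);
    return fmt.parse(raw);
  }

  public static void main(String[] args) throws ParseException {
    System.out.println(parseSyslogDate("2016 Dec 28 03:12:25"));
  }
}
{code}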



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3114) Upgrade commons-httpclient library dependency

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3114:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade commons-httpclient library dependency
> -
>
> Key: FLUME-3114
> URL: https://issues.apache.org/jira/browse/FLUME-3114
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |commons-httpclient|commons-httpclient|3.1,3.0.1|4.5.2|
> Note: This artifact was moved to:
> * New Group   org.apache.httpcomponents
> * New Artifacthttpclient
> Security vulnerability: https://www.cvedetails.com/cve/CVE-2012-5783/
> Please do:
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3121) Upgrade javax.mail:test dependency

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3121:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade javax.mail:test dependency
> --
>
> Key: FLUME-3121
> URL: https://issues.apache.org/jira/browse/FLUME-3121
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |javax.mail|mail|1.4.1|1.5.6|
> Note: This artifact was moved to:
> - New Group   javax.mail
> - New Artifactjavax.mail-api
> Security vulnerability: 
> https://www.cvedetails.com/vulnerability-list/vendor_id-5/product_id-5085/SUN-Javamail.html
>  
> Note: this might be a false alarm. Please double check the CVE. 
> Please do:
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)
> Excerpt from mvn dependency:tree
> {noformat}
> +- org.apache.hive:hive-cli:jar:1.0.0:test
> |  +- org.apache.hive:hive-service:jar:1.0.0:test
> |  |  +- org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:test
> |  |  |  +- javax.mail:mail:jar:1.4.1:test
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3126) Upgrade apache poi library dependencies

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3126:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade apache poi library dependencies
> ---
>
> Key: FLUME-3126
> URL: https://issues.apache.org/jira/browse/FLUME-3126
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |org.apache.poi|poi|3.10-beta2|3.15-beta2|
> |org.apache.poi|poi-ooxml|3.10-beta2|3.15-beta2|
> |org.apache.poi|poi-scratchpad|3.10-beta2|3.15-beta2|
> Security vulnerability: 
> https://www.cvedetails.com/vulnerability-list/vendor_id-45/product_id-22766/Apache-POI.html
> Maven repositories: 
> - https://mvnrepository.com/artifact/org.apache.poi/poi-ooxml
> - https://mvnrepository.com/artifact/org.apache.poi/poi
> - https://mvnrepository.com/artifact/org.apache.poi/poi
> Please do:
> - the CVE might be a false alarm or mistake. Please double-check.
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)
> Excerpt from mvn dependency:tree
> {noformat}
> org.apache.flume.flume-ng-sinks:flume-ng-morphline-solr-sink:jar:1.8.0-SNAPSHOT
> +- org.kitesdk:kite-morphlines-all:pom:1.0.0:compile
> |  +- org.kitesdk:kite-morphlines-solr-cell:jar:1.0.0:compile
> |  |  +- org.apache.tika:tika-xmp:jar:1.5:compile
> |  |  |  +- org.apache.tika:tika-parsers:jar:1.5:compile
> |  |  |  |  +- org.apache.poi:poi:jar:3.10-beta2:compile
> |  |  |  |  +- org.apache.poi:poi-scratchpad:jar:3.10-beta2:compile
> |  |  |  |  +- org.apache.poi:poi-ooxml:jar:3.10-beta2:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3129) Upgrade bouncycastle library dependencies

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3129:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade bouncycastle library dependencies
> -
>
> Key: FLUME-3129
> URL: https://issues.apache.org/jira/browse/FLUME-3129
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |org.bouncycastle|bcprov-jdk15|1.45|1.57|
> |org.bouncycastle|bcmail-jdk15|1.45|1.57|
> Security vulnerability: 
> https://www.cvedetails.com/vulnerability-list/vendor_id-7637/Bouncycastle.html
> Maven repository: 
> https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15
> Please do:
> - the CVE might be a false alarm or mistake. Please double-check.
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)
> Excerpt from mvn dependency:tree
> {noformat}
> org.apache.flume.flume-ng-sinks:flume-ng-morphline-solr-sink:jar:1.8.0-SNAPSHOT
> +- org.kitesdk:kite-morphlines-all:pom:1.0.0:compile
> |  +- org.kitesdk:kite-morphlines-solr-cell:jar:1.0.0:compile
> |  |  +- org.apache.tika:tika-xmp:jar:1.5:compile
> |  |  |  +- org.apache.tika:tika-parsers:jar:1.5:compile
> |  |  |  |  +- org.bouncycastle:bcmail-jdk15:jar:1.45:compile
> |  |  |  |  +- org.bouncycastle:bcprov-jdk15:jar:1.45:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3124) Upgrade apache-mime4j-core library dependency

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3124:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade apache-mime4j-core library dependency
> -
>
> Key: FLUME-3124
> URL: https://issues.apache.org/jira/browse/FLUME-3124
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |org.apache.james|apache-mime4j-core|0.7.2|0.8.1|
> Security vulnerability: 
> https://www.cvedetails.com/vulnerability-list/vendor_id-45/product_id-4526/Apache-James.html
>  
> Maven repository: 
> https://mvnrepository.com/artifact/org.apache.james/apache-mime4j
> Please do:
> - the CVE might be a false alarm or mistake. Please double-check.
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)
> Excerpt from mvn dependency:tree
> {noformat}
> org.apache.flume.flume-ng-sinks:flume-ng-morphline-solr-sink:jar:1.8.0-SNAPSHOT
> +- org.kitesdk:kite-morphlines-all:pom:1.0.0:compile
> |  +- org.kitesdk:kite-morphlines-solr-cell:jar:1.0.0:compile
> |  |  +- org.apache.tika:tika-xmp:jar:1.5:compile
> |  |  |  +- org.apache.tika:tika-parsers:jar:1.5:compile
> |  |  |  |  +- org.apache.james:apache-mime4j-core:jar:0.7.2:compile
> |  |  |  |  +- org.apache.james:apache-mime4j-dom:jar:0.7.2:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3113) Upgrade commons-beanutils library dependency

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3113:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade commons-beanutils library dependency
> 
>
> Key: FLUME-3113
> URL: https://issues.apache.org/jira/browse/FLUME-3113
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |commons-beanutils|commons-beanutils|1.7.0|1.9.3|
> |commons-beanutils|commons-beanutils-core|1.8.0|1.8.3|
> Security vulnerability: https://www.cvedetails.com/cve/CVE-2014-0114/
> Please do:
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3130) Upgrade restlet library dependency

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3130:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade restlet library dependency
> --
>
> Key: FLUME-3130
> URL: https://issues.apache.org/jira/browse/FLUME-3130
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |org.restlet.jee|org.restlet|2.1.1|2.3.10|
> Security vulnerability: 
> http://www.cvedetails.com/vulnerability-list/vendor_id-12911/product_id-26316/Restlet-Restlet.html
> Maven: https://mvnrepository.com/artifact/org.restlet.jee/org.restlet
> Please do:
> - the CVE might be a false alarm or mistake. Please double-check.
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)
> Excerpt from mvn dependency:tree
> {noformat}
> org.apache.flume.flume-ng-sinks:flume-ng-morphline-solr-sink:jar:1.8.0-SNAPSHOT
> +- org.apache.solr:solr-test-framework:jar:4.3.0:test
> |  +- org.apache.solr:solr-core:jar:4.3.0:compile
> |  |  +- org.restlet.jee:org.restlet:jar:2.1.1:compile
> |  |  +- org.restlet.jee:org.restlet.ext.servlet:jar:2.1.1:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLUME-3115) Upgrade netty library dependency

2017-09-12 Thread Ferenc Szabo (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Szabo updated FLUME-3115:

Fix Version/s: (was: 1.8.0)
   1.8.1

> Upgrade netty library dependency
> 
>
> Key: FLUME-3115
> URL: https://issues.apache.org/jira/browse/FLUME-3115
> Project: Flume
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Attila Simon
>Assignee: Ferenc Szabo
>Priority: Critical
>  Labels: dependency
> Fix For: 1.8.1
>
>
> ||Group||Artifact||Version used||Upgrade target||
> |io.netty|netty|3.2.2.Final, 3.9.4.Final|4.1.12.Final|
> Note: This artifact was moved to:
> - New Group   io.netty
> - New Artifactnetty-all
> Security vulnerability: http://www.cvedetails.com/cve/CVE-2014-3488/
> Please do:
> - double-check the newest version. 
> - consider removing a dependency if a better alternative is available.
> - check whether the lib change would introduce a backward incompatibility (in 
> which case please add the label `breaking_change` and the fix version should be 
> the next major)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

