[jira] [Commented] (FLUME-2172) Update protocol buffer from 2.4.1 to 2.5.0

2013-12-19 Thread stanley shi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853741#comment-13853741
 ] 

stanley shi commented on FLUME-2172:


Just wondering: the attached patch is incorrect, so why is this JIRA resolved?
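For background: the VerifyError quoted in this ticket arises because Java classes generated by protoc 2.4.x override getUnknownFields(), which the protobuf-java 2.5.0 runtime declares final, so 2.4.x-generated code fails bytecode verification against the 2.5.0 jar. A hedged sketch of the dependency change such an upgrade implies (the Maven coordinates are the standard ones; the actual FLUME-2172 patch may structure this differently):

```xml
<!-- pom.xml sketch: align the protobuf runtime with the version
     Hadoop/HBase moved to in HADOOP-9845. Illustrative only; the
     real patch may place this in dependencyManagement instead. -->
<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.5.0</version>
</dependency>
```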

> Update protocol buffer from 2.4.1 to 2.5.0
> --
>
> Key: FLUME-2172
> URL: https://issues.apache.org/jira/browse/FLUME-2172
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.4.0
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>  Labels: test
> Attachments: FLUME-2172.2.patch, FLUME-2172.patch
>
>
> Hadoop and HBase have upgraded to version 2.5.0 of protocol buffers.
> See HADOOP-9845.
> Due to backward incompatibility, this introduces runtime errors in the HDFS 
> sink & HBase sink tests:
> {code}
> class 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$DeleteSnapshotResponseProto
>  overrides final method 
> getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
> Stacktrace
> java.lang.VerifyError: class 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$DeleteSnapshotResponseProto
>  overrides final method 
> getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
>   at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.(ClientNamenodeProtocolServerSideTranslatorPB.java:169)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:181)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:485)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:468)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:659)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:644)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1221)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:893)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:784)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:642)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:585)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:456)
>   at 
> org.apache.flume.sink.hdfs.TestHDFSEventSinkOnMiniCluster.simpleHDFSTest(TestHDFSEventSinkOnMiniCluster.java:85)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
>   at 
> org.junit.intern

[jira] [Commented] (FLUME-1227) Introduce some sort of SpillableChannel

2013-12-19 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853707#comment-13853707
 ] 

Roshan Naik commented on FLUME-1227:


Thanks for the feedback, [~brocknoland].
I will incorporate your feedback and update the patch soon.

With regard to adding notes on file channel best practices to the Spillable Channel 
section, I am not keen on that unless it specifically concerns the file channel's 
coupling with the Spillable Channel. In FLUME-2239 I recently made a note about 
multiple data dirs helping file channel performance. Also, the dual checkpoint feature 
is broken on Windows (FLUME-2224). Let me know if you feel otherwise.

> Introduce some sort of SpillableChannel
> ---
>
> Key: FLUME-1227
> URL: https://issues.apache.org/jira/browse/FLUME-1227
> Project: Flume
>  Issue Type: New Feature
>  Components: Channel
>Reporter: Jarek Jarcec Cecho
>Assignee: Roshan Naik
> Attachments: 1227.patch.1, FLUME-1227.v2.patch, FLUME-1227.v5.patch, 
> FLUME-1227.v6.patch, FLUME-1227.v7.patch, FLUME-1227.v8.patch, 
> SpillableMemory Channel Design 2.pdf, SpillableMemory Channel Design.pdf
>
>
> I would like to introduce a new channel that would behave similarly to scribe 
> (https://github.com/facebook/scribe). It would be something between the memory 
> and file channels. Input events would be saved directly to memory (only) 
> and served from there. If memory filled up, events would be spilled to file.
> Let me describe the use case behind this request. We have plenty of frontend 
> servers that are generating events. We want to send all events to just a 
> limited number of machines from which we would send the data to HDFS (a 
> sort of staging layer). The reason for this second layer is our need to decouple 
> event aggregation and frontend code onto separate machines. Using the memory 
> channel is fully sufficient, as we can survive the loss of some portion of the 
> events. However, in order to survive maintenance windows or networking issues, 
> we would end up having to assign a lot of memory to those "staging" 
> machines. The referenced scribe deals with this problem by implementing the 
> following logic: events are kept in memory, much like in our MemoryChannel, 
> but when memory gets full (because of maintenance, networking 
> issues, ...) it spills data to disk, where it sits until 
> everything starts working again.
> I would like to introduce a channel that implements similar logic. Its 
> durability guarantees would be the same as MemoryChannel's: if someone 
> pulled the power cord, this channel would lose data. Based on the 
> discussion in FLUME-1201, I propose making the implementation 
> completely independent of any other channel's internal code.
> Jarcec
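The put/take flow described above can be sketched in a few lines of Java. This is a toy illustration only: the real SpillableMemoryChannel attachments implement transactions and spill to the file channel, whereas here the "disk" is just a second queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model of the spillable idea: events go to a bounded in-memory
// queue first; when it is full, they overflow to a secondary store.
// (A real channel would spill to disk; this class and its names are
// illustrative, not from the attached patches.)
class SpillableSketch {
    private final BlockingQueue<String> memory;
    private final Queue<String> overflow = new ArrayDeque<>(); // stands in for disk

    SpillableSketch(int memoryCapacity) {
        this.memory = new ArrayBlockingQueue<>(memoryCapacity);
    }

    void put(String event) {
        if (!memory.offer(event)) {  // memory full -> spill
            overflow.add(event);
        }
    }

    String take() {
        String e = memory.poll();    // serve from memory first
        return e != null ? e : overflow.poll();
    }

    int spilled() {
        return overflow.size();
    }
}
```

With a memory capacity of 2, putting three events spills exactly one, and takes drain memory before the overflow.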



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (FLUME-2278) Incorrect documentation for write-timeout of File Channel

2013-12-19 Thread Gopinathan A (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853680#comment-13853680
 ] 

Gopinathan A commented on FLUME-2278:
-

+1 for patch

> Incorrect documentation for write-timeout of File Channel
> -
>
> Key: FLUME-2278
> URL: https://issues.apache.org/jira/browse/FLUME-2278
> Project: Flume
>  Issue Type: Documentation
>  Components: Docs
>Affects Versions: v1.5.0
>Reporter: Satoshi Iijima
>Priority: Trivial
>  Labels: documentation
> Fix For: v1.5.0
>
> Attachments: FLUME-2278.patch
>
>
> The write-timeout default is listed as 3 seconds; however, it is actually 10 seconds.
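Until the docs are fixed, setting the property explicitly avoids relying on the documented default. A sketch with placeholder agent/channel names:

```properties
# a1/c1 are placeholder names; write-timeout is in seconds
a1.channels.c1.type = file
a1.channels.c1.write-timeout = 10
```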





Re: Phoenix- Hbase Sink

2013-12-19 Thread Arvind Prabhakar
Apologies for the delay Ravi and Hari - last time I tried to add you the
Wiki was being upgraded and was not ready. I have now added you in and you
should be able to see the edit button.

Regards,
Arvind Prabhakar


On Thu, Dec 19, 2013 at 5:18 PM, Hari Shreedharan  wrote:

> Arvind - Could you please give Ravi edit privileges. I don’t seem to have
> access.
>
>
> Thanks,
> Hari
>
>
> On Thursday, December 19, 2013 at 5:13 PM, Ravi Kiran wrote:
>
> > Hi Hari,
> >
> >   Can you please grant me permissions to update the WIKI to have
> pointers to Phoenix .
> >
> > Regards
> > Ravi
> >
> >
> > On Sun, Dec 15, 2013 at 7:25 AM, Ravi Kiran 
> >  maghamraviki...@gmail.com)> wrote:
> > > Hi Hari,
> > >
> > >   Its maghamravikiran
> > >
> > > Thanks
> > > Ravi.
> > >
> > >
> > >
> > > On Sat, Dec 14, 2013 at 7:03 AM, Hari Shreedharan <
> hshreedha...@cloudera.com (mailto:hshreedha...@cloudera.com)> wrote:
> > > > +dev@
> > > >
> > > > Hi Ravi,
> > > >
> > > > Can you please send your confluence (wiki) login id?
> > > > Thanks,
> > > > Hari
> > > >
> > > >
> > > > On Thursday, December 12, 2013 at 4:24 AM, Ravi Kiran wrote:
> > > >
> > > > > Hi Hari,
> > > > >
> > > > >I don't seem to have permissions to edit the page. Can you
> please grant me permissions.
> > > > >
> > > > > Regards
> > > > > Ravi
> > > > >
> > > > >
> > > > > On Thu, Dec 12, 2013 at 10:40 AM, Hari Shreedharan <
> hshreedha...@cloudera.com (mailto:hshreedha...@cloudera.com)> wrote:
> > > > > > Hi Ravi,
> > > > > >
> > > > > > Thanks for the information. You could post a link to this on the
> wiki here:
> https://cwiki.apache.org/confluence/display/FLUME/Flume+NG+Plugins for
> users to be able to find it.
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Hari
> > > > > >
> > > > > >
> > > > > > On Wednesday, December 11, 2013 at 8:26 PM, Ravi Kiran wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > >The Apache Phoenix project now provides a custom sink for
> streaming Flume events into HBase. These events may be queried through SQL
> using the Phoenix JDBC driver.
> > > > > > > The detailed instructions can be found here (still on
> github until we move to Apache):
> https://github.com/forcedotcom/phoenix/wiki/Apache-Flume-Plugin.
> > > > > > >
> > > > > > >
> > > > > > > Regards
> > > > > > > Ravi
> > > > > >
> > > > >
> > > >
> > >
> >
>
>


[jira] [Commented] (FLUME-2279) HDFSSequenceFile doesn't support custom serializers

2013-12-19 Thread Harpreet Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853578#comment-13853578
 ] 

Harpreet Singh commented on FLUME-2279:
---

No, I'm referring to this field:

serializer | TEXT | Other possible options include avro_event or the 
fully-qualified class name of an implementation of the EventSerializer.Builder 
interface.
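For context, a sketch of how that property is set on an HDFS sink (agent/sink names a1/k1 are placeholders); per this ticket, the serializer takes effect for stream file types but not for SequenceFile:

```properties
# a1/k1 are placeholder names; property keys are the standard HDFS sink ones
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.fileType = DataStream
# TEXT (default), avro_event, or a fully-qualified EventSerializer.Builder class
a1.sinks.k1.serializer = avro_event
```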


> HDFSSequenceFile doesn't support custom serializers
> ---
>
> Key: FLUME-2279
> URL: https://issues.apache.org/jira/browse/FLUME-2279
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Reporter: Harpreet Singh
>
> The HDFSEventSink has a serializer parameter that can be specified in the 
> config. However, if the fileType is set to sequence file (HDFSSequenceFile), 
> the serializer is not used. 





[jira] [Commented] (FLUME-2279) HDFSSequenceFile doesn't support custom serializers

2013-12-19 Thread Jeff Lord (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853571#comment-13853571
 ] 

Jeff Lord commented on FLUME-2279:
--

If you wish to write sequence files to HDFS, then you have the option to use Text or 
Writable (the default):
hdfs.writeFormat = Writable

Is that what you are after?
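For reference, a minimal sketch of the HDFS sink settings involved (agent/sink names a1/k1 are placeholders, not from the thread):

```properties
# a1/k1 are placeholder names
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.fileType = SequenceFile
# Writable is the default; Text is the other option
a1.sinks.k1.hdfs.writeFormat = Writable
```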

> HDFSSequenceFile doesn't support custom serializers
> ---
>
> Key: FLUME-2279
> URL: https://issues.apache.org/jira/browse/FLUME-2279
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Reporter: Harpreet Singh
>
> The HDFSEventSink has a serializer parameter that can be specified in the 
> config. However, if the fileType is set to sequence file (HDFSSequenceFile), 
> the serializer is not used. 





Re: Phoenix- Hbase Sink

2013-12-19 Thread Hari Shreedharan
Arvind - Could you please give Ravi edit privileges. I don’t seem to have 
access.  


Thanks,
Hari


On Thursday, December 19, 2013 at 5:13 PM, Ravi Kiran wrote:

> Hi Hari,
>  
>   Can you please grant me permissions to update the WIKI to have pointers to 
> Phoenix .
>  
> Regards
> Ravi
>  
>  
> On Sun, Dec 15, 2013 at 7:25 AM, Ravi Kiran  (mailto:maghamraviki...@gmail.com)> wrote:
> > Hi Hari,
> >  
> >   Its maghamravikiran
> >  
> > Thanks
> > Ravi.
> >  
> >  
> >  
> > On Sat, Dec 14, 2013 at 7:03 AM, Hari Shreedharan 
> > mailto:hshreedha...@cloudera.com)> wrote:
> > > +dev@  
> > >  
> > > Hi Ravi,
> > >  
> > > Can you please send your confluence (wiki) login id?  
> > > Thanks,
> > > Hari
> > >  
> > >  
> > > On Thursday, December 12, 2013 at 4:24 AM, Ravi Kiran wrote:
> > >  
> > > > Hi Hari,
> > > >  
> > > >I don't seem to have permissions to edit the page. Can you please 
> > > > grant me permissions.
> > > >  
> > > > Regards
> > > > Ravi
> > > >  
> > > >  
> > > > On Thu, Dec 12, 2013 at 10:40 AM, Hari Shreedharan 
> > > > mailto:hshreedha...@cloudera.com)> wrote:
> > > > > Hi Ravi,  
> > > > >  
> > > > > Thanks for the information. You could post a link to this on the wiki 
> > > > > here: 
> > > > > https://cwiki.apache.org/confluence/display/FLUME/Flume+NG+Plugins 
> > > > > for users to be able to find it.  
> > > > >  
> > > > >  
> > > > > Thanks,
> > > > > Hari
> > > > >  
> > > > >  
> > > > > On Wednesday, December 11, 2013 at 8:26 PM, Ravi Kiran wrote:
> > > > >  
> > > > > > Hi all,
> > > > > >  
> > > > > >The Apache Phoenix project now provides a custom sink for 
> > > > > > streaming Flume events into HBase. These events may be queried 
> > > > > > through SQL using the Phoenix JDBC driver.  
> > > > > > The detailed instructions can be found here (still on github 
> > > > > > until we move to Apache):  
> > > > > > https://github.com/forcedotcom/phoenix/wiki/Apache-Flume-Plugin.
> > > > > >  
> > > > > >  
> > > > > > Regards
> > > > > > Ravi
> > > > >  
> > > >  
> > >  
> >  
>  



[jira] [Created] (FLUME-2279) HDFSSequenceFile doesn't support custom serializers

2013-12-19 Thread Harpreet Singh (JIRA)
Harpreet Singh created FLUME-2279:
-

 Summary: HDFSSequenceFile doesn't support custom serializers
 Key: FLUME-2279
 URL: https://issues.apache.org/jira/browse/FLUME-2279
 Project: Flume
  Issue Type: Bug
  Components: Sinks+Sources
Reporter: Harpreet Singh


The HDFSEventSink has a serializer parameter that can be specified in the 
config. However, if the fileType is set to sequence file (HDFSSequenceFile), 
the serializer is not used. 





[jira] [Updated] (FLUME-2278) Incorrect documentation for write-timeout of File Channel

2013-12-19 Thread Satoshi Iijima (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satoshi Iijima updated FLUME-2278:
--

Attachment: FLUME-2278.patch

> Incorrect documentation for write-timeout of File Channel
> -
>
> Key: FLUME-2278
> URL: https://issues.apache.org/jira/browse/FLUME-2278
> Project: Flume
>  Issue Type: Documentation
>  Components: Docs
>Reporter: Satoshi Iijima
>Priority: Trivial
> Attachments: FLUME-2278.patch
>
>
> The write-timeout default is listed as 3 seconds; however, it is actually 10 seconds.





[jira] [Created] (FLUME-2278) Incorrect documentation for write-timeout of File Channel

2013-12-19 Thread Satoshi Iijima (JIRA)
Satoshi Iijima created FLUME-2278:
-

 Summary: Incorrect documentation for write-timeout of File Channel
 Key: FLUME-2278
 URL: https://issues.apache.org/jira/browse/FLUME-2278
 Project: Flume
  Issue Type: Documentation
  Components: Docs
Reporter: Satoshi Iijima
Priority: Trivial


The write-timeout default is listed as 3 seconds; however, it is actually 10 seconds.





[jira] [Commented] (FLUME-1491) Dynamic configuration from Zookeeper watcher

2013-12-19 Thread Ashish Paliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852932#comment-13852932
 ] 

Ashish Paliwal commented on FLUME-1491:
---

Hi Brock,

Thanks for the review :)

For #1, I haven't started on it; I was waiting for the feedback. I shall start 
working on it.
For #2, the file is stored per agent; AFAIK, even with a lot of configuration 
options, the file may not even breach 100KB.

I may not have complete details for #2, so my assumption may be incorrect. My 
day job is not related to Flume, so I don't have production experience. Let me 
know, and I will try to find other options.

> Dynamic configuration from Zookeeper watcher
> 
>
> Key: FLUME-1491
> URL: https://issues.apache.org/jira/browse/FLUME-1491
> Project: Flume
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: v1.2.0
>Reporter: Denny Ye
>Assignee: Ashish Paliwal
>  Labels: Zookeeper
> Attachments: FLUME-1491-2.patch, FLUME-1491-3.patch, 
> FLUME-1491-4.patch, FLUME-1491-5.patch
>
>
> Currently, Flume only supports file-level dynamic configuration. Another 
> frequent usage in practical environments is managing configuration with 
> ZooKeeper and modifying the configuration from a Web UI, with the file 
> stored in ZooKeeper.
> Flume should support this method with a ZooKeeper watcher.





[jira] [Commented] (FLUME-1491) Dynamic configuration from Zookeeper watcher

2013-12-19 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852921#comment-13852921
 ] 

Brock Noland commented on FLUME-1491:
-

Hi Ashish,

Great work!! Thank you very much for contributing to Flume! I have two concerns 
with the current approach:

1) I think the big use case for pulling configuration from ZK would be 
automatic reconfiguration, but this patch doesn't implement that.
2) The patch stores the entire contents of the file in a single znode, which has 
a 1MB size limit by default.

> Dynamic configuration from Zookeeper watcher
> 
>
> Key: FLUME-1491
> URL: https://issues.apache.org/jira/browse/FLUME-1491
> Project: Flume
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: v1.2.0
>Reporter: Denny Ye
>Assignee: Ashish Paliwal
>  Labels: Zookeeper
> Attachments: FLUME-1491-2.patch, FLUME-1491-3.patch, 
> FLUME-1491-4.patch, FLUME-1491-5.patch
>
>
> Currently, Flume only supports file-level dynamic configuration. Another 
> frequent usage in practical environments is managing configuration with 
> ZooKeeper and modifying the configuration from a Web UI, with the file 
> stored in ZooKeeper.
> Flume should support this method with a ZooKeeper watcher.





Re: [DISCUSS] Feature bloat and contrib module

2013-12-19 Thread Bruno Mahé

On 12/18/2013 08:46 AM, Ashish wrote:

I am with you for most discussion, but on two points, we differ a bit

- Easy In - Easy Out would be a bit difficult to achieve. Easy in is easy,
but on what basis do we move a component out? Someone not maintaining it, or
someone not using it? It would be difficult for us to know. There may be an
easy way of doing this; Bigtop deals with these cases more often than I do.



There are multiple ways of doing it.
One way would be to kick a component out if it is broken, as in: the build is 
broken and/or tests are not passing. Or if someone is introducing a sizable 
change, the component would need a big update, and no one is willing to 
do it.
I would not consider a component a target for removal just because some bugs 
are not being actively fixed. Maybe the current version is good enough, and 
not everyone hits such issues.
And if a component is broken, then someone would call for its removal on 
the mailing list and/or JIRA and notify the last few people who touched 
that part. All of this with ample notice (1-2 weeks) so interested 
parties can jump in, in case they are on vacation or not following the 
mailing lists every single day, for instance.


Overall, I would rely more on best judgment rather than process.
In any case I assume the process would have to be adjusted.



- Committers are facilitators agree, but we do need to review the
contributions, else the whole process would break.



Reviewing patches is still somewhat blurry. Some people will vaguely 
look at the code, while others will do complete and thorough 
testing themselves.


So depending on the community wishes and the patch being reviewed, the 
following could happen:


* Committers could raise the quality bar. Since no one is familiar with 
this area, it is fair to ask for higher quality.
For instance, insisting on extensive unit tests, or even going beyond unit 
tests by providing integration tests that run against the real thing. As 
an example, new projects being added to Apache Bigtop need to provide 
deployment recipes and integration tests.
With regard to Apache Flume, Apache Bigtop could probably help with 
integration tests, since it already has everything needed to deploy 
clusters and run feature tests. So integration tests could probably be 
contributed to Apache Bigtop, which would run them for the Apache community.


* Committers could look at the code before committing it, if everything looks 
fine. Remaining issues can be resolved later on.






From the discussion it seems that nobody is inclined towards contrib.
How about giving components a place under the sources/sinks/interceptors
modules which are already present? Most of the time, these would be the only
things coming in. These components would maintain compatibility with
Flume's current version and get released alongside it. An extra burden for the
core devs for sure, but it would save a lot of trouble for users. The final
call will still be with the Flume devs.

@Bruno - Buddy, please do not stop work on the Redis component. This is the
last thing we want. I am not a committer here, but I am definitely willing to
pitch in to review and test it a bit (I would learn Redis as well :) )

cheers
ashish




I haven't stopped working on the Redis component :)
We are happily using Flume along with the Redis component to push a few 
billion events every day (and still ramping up) into Elasticsearch.


Since I have to get stuff done, I cannot wait for the community to 
decide whether the source/sink is OK or not.
So I already converted my Redis component into a Flume plugin, 
and I am using that.
While not rocket science, re-preparing a patch for Apache Flume based on 
my latest updates will still take some time, so I am waiting for the 
conclusion of this discussion before spending time on it.
Either way, the code will be open sourced, whether as part of Apache 
Flume or on GitHub.
In the meantime, I am working on pushing what I have to GitHub and will 
do so as soon as I can.



Thanks,
Bruno


[jira] [Commented] (FLUME-1491) Dynamic configuration from Zookeeper watcher

2013-12-19 Thread Ashish Paliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852743#comment-13852743
 ] 

Ashish Paliwal commented on FLUME-1491:
---

I shall fix it as per the comments.

About auto-reload, I am not sure. The file-based implementation has a mode where 
a change to the config file initiates a reload.
If it's needed, we can add a watch on the node and initiate a reload on the 
change notification. IMHO, we can get this simple implementation in first, and 
then refine it.

Thanks for the review.
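The watch-and-reload flow described in the comment above could look roughly like this. This is an illustrative, hypothetical helper, not code from the attached patches: a real implementation would use org.apache.zookeeper.ZooKeeper.getData() with a Watcher and re-register the watch on each NodeDataChanged event; here only the pure reload-decision logic is shown.

```java
import java.io.StringReader;
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical helper illustrating watch-driven reconfiguration:
// given the raw bytes of the config znode (e.g. from zk.getData()),
// decide whether the agent should reconfigure.
class ZkConfigReloader {
    private final AtomicReference<Properties> current =
            new AtomicReference<>(new Properties());

    // Returns true if the parsed config differs from the last one seen.
    boolean onData(byte[] znodeBytes) {
        try {
            Properties fresh = new Properties();
            fresh.load(new StringReader(
                    new String(znodeBytes, StandardCharsets.UTF_8)));
            Properties old = current.getAndSet(fresh);
            return !old.equals(fresh);  // reload only on an actual change
        } catch (Exception e) {
            return false;  // keep the last good config on parse errors
        }
    }
}
```

The watcher callback would simply feed each notification's data into onData() and trigger the agent's reconfiguration when it returns true; an unchanged or unparseable znode causes no reload.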

> Dynamic configuration from Zookeeper watcher
> 
>
> Key: FLUME-1491
> URL: https://issues.apache.org/jira/browse/FLUME-1491
> Project: Flume
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: v1.2.0
>Reporter: Denny Ye
>Assignee: Ashish Paliwal
>  Labels: Zookeeper
> Attachments: FLUME-1491-2.patch, FLUME-1491-3.patch, 
> FLUME-1491-4.patch, FLUME-1491-5.patch
>
>
> Currently, Flume only supports file-level dynamic configuration. Another 
> frequent usage in practical environments is managing configuration with 
> ZooKeeper and modifying the configuration from a Web UI, with the file 
> stored in ZooKeeper.
> Flume should support this method with a ZooKeeper watcher.


