[GitHub] nifi pull request: NIFI-1594: Add option to bulk using Index or Up...

2016-03-04 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/255#discussion_r55112614
  
--- Diff: nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearch.java ---
@@ -99,6 +99,14 @@
 AttributeExpression.ResultType.STRING, true))
 .build();
 
+public static final PropertyDescriptor INDEX_OP = new PropertyDescriptor.Builder()
+.name("Index Operation")
+.description("The type of the operation used to index (index, update, upsert)")
--- End diff --

The description mentions three modes but only two are allowed. Should they 
be "insert" and "update/upsert", or is "index" the preferred terminology for 
adding a new document?
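
If all three modes are intended, the descriptor could also enumerate them with allowableValues so the validation and the description stay in sync. A rough sketch only; the value names and default below are assumptions, not taken from this PR:

    // Illustrative only: constrain "Index Operation" to the values the description
    // advertises, so the allowed set and the docs cannot drift apart.
    public static final PropertyDescriptor INDEX_OP = new PropertyDescriptor.Builder()
            .name("Index Operation")
            .description("The type of the operation used to index (index, update, upsert)")
            .allowableValues("index", "update", "upsert")
            .defaultValue("index")
            .required(true)
            .build();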


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: heisenbug causing "lost" content claims

2016-03-04 Thread Michael Moser
MergeContent exists in the flow in several places.  I haven't noticed any
UnknownHostExceptions, though my grep of the logs is still running ...

-- Mike


On Fri, Mar 4, 2016 at 5:13 PM, Joe Gresock  wrote:

> Are there any UnknownHostExceptions in the logs prior to entering this
> state?  We had a similar problem where NiFi kept opening sockets to connect
> to an RPG and eventually ran into "too many open files" after several
> days.  We had to add the fqdn of the RPG host to our /etc/hosts file before
> NiFi would resolve it, and the problem went away.

Re: heisenbug causing "lost" content claims

2016-03-04 Thread Joe Gresock
Are there any UnknownHostExceptions in the logs prior to entering this
state?  We had a similar problem where NiFi kept opening sockets to connect
to an RPG and eventually ran into "too many open files" after several
days.  We had to add the fqdn of the RPG host to our /etc/hosts file before
NiFi would resolve it, and the problem went away.
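
A quick way to check whether the JVM can resolve the RPG host at all, independent of NiFi, is a one-off lookup from the same box. A minimal sketch (the hostname is passed as an argument; nothing NiFi-specific is assumed):

    import java.net.InetAddress;

    // Prints the resolved address, or throws UnknownHostException just as NiFi
    // would when it tries to open a socket to the RPG.
    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            System.out.println(InetAddress.getByName(args[0]));
        }
    }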

On Fri, Mar 4, 2016 at 5:04 PM, Joe Witt  wrote:

> Mike,
>
> Does this flow have MergeContent processor on it?
>
> Thanks
> Joe


Re: heisenbug causing "lost" content claims

2016-03-04 Thread Joe Witt
Mike,

Does this flow have MergeContent processor on it?

Thanks
Joe



Re: heisenbug causing "lost" content claims

2016-03-04 Thread Michael Moser
Thanks for the reply, Mark.

NIFI-1577 isn't the cause because I don't think we were using any processor
that does ProcessSession.append().
NIFI-1527 mentions a problem that occurs when NiFi starts, and our NiFi had
been running for several days.
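
(For anyone not familiar with it, ProcessSession.append(), the call that NIFI-1577 revolves around, looks roughly like the fragment below inside a processor's onTrigger(). Purely illustrative; it assumes the usual flowFile and session variables plus the standard java.io, java.nio.charset, and org.apache.nifi.processor.io imports.)

    // Appends bytes to an existing FlowFile's content rather than overwriting it.
    flowFile = session.append(flowFile, new OutputStreamCallback() {
        @Override
        public void process(final OutputStream out) throws IOException {
            out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
        }
    });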

Setting aside the "Too many open files" cause for the moment.  Here's what
we saw when the NiFi JVM encountered Too many open files:

ERROR [Site-to-Site Worker Thread] o.a.nifi.remote.SocketRemoteSiteListener
Unable to communicate with remote instance due to
o.a.nifi.processor.exception.ProcessException:
o.a.nifi.processor.exception.FlowFileAccessException: Failed to import data
from org.apache.nifi.stream.io.MinimumLengthInputStream@1234 for
StandardFlowFileRecord[uuid=foo,claim=,offset=0,name=filename,size=0] due
to org.apache.nifi.processor.exception.FlowFileAccessException: Unable to
create ContentClaim due to java.io.FileNotFoundException:
content_repository/1/1-1 (Too many open files); closing connection

This NiFi instance was using a remote process group Input Port to accept
new files.  After the exception, it appears that a flowfile exists in the
flowfile_repository but the corresponding ContentClaim never gets created in
the content_repository.

-- Mike



On Fri, Mar 4, 2016 at 3:03 PM, Mark Payne  wrote:

> Tony,
>
> The two tickets that come to mind are:
> https://issues.apache.org/jira/browse/NIFI-1577 (Too many open files)
> https://issues.apache.org/jira/browse/NIFI-1527 (ContentNotFound)
>
> Do these sound like they may be what is causing your issues?
>
> Thanks
> -Mark


[GitHub] nifi pull request: NIFI-1594: Add option to bulk using Index or Up...

2016-03-04 Thread joaohf
GitHub user joaohf opened a pull request:

https://github.com/apache/nifi/pull/255

NIFI-1594: Add option to bulk using Index or Update.

Signed-off-by: João Henrique Ferreira de Freitas 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joaohf/nifi NIFI-1594

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/255.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #255


commit 84597d253c61ed4f23c3078607da5f05145187c0
Author: João Henrique Ferreira de Freitas 
Date:   2016-02-24T17:44:34Z

NIFI-1594: Add option to bulk using Index or Update.

Signed-off-by: João Henrique Ferreira de Freitas 






[GitHub] nifi pull request: NIFI-1420 Adding Splunk bundle

2016-03-04 Thread bbende
Github user bbende commented on the pull request:

https://github.com/apache/nifi/pull/233#issuecomment-192466857
  
@JPercivall Pushed up another commit that addresses the additional comments 
from today.

As part of this change I decided to go the route that @trixpan suggested and 
change ListenSplunkForwarder to ListenTCP, and as a result moved it to the 
standard bundle. That opens it up to a lot more use cases, since it wasn't 
really Splunk-specific. I also decided to take out the mime.type attribute, 
since the processor writes raw bytes to FlowFiles and the content may not 
always be text/plain.
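
If a downstream consumer still needs a mime.type, the flow author can set it explicitly (for example with UpdateAttribute), or a custom processor can do it in code. A minimal, illustrative fragment, not part of this PR:

    // Assumes the usual onTrigger() context (session, non-null flowFile) and
    // org.apache.nifi.flowfile.attributes.CoreAttributes on the classpath.
    flowFile = session.putAttribute(flowFile, CoreAttributes.MIME_TYPE.key(), "text/plain");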

Let me know if anything else needs updating or was left out.




Re: [ANNOUNCE] New Apache NiFi Committer Matt Burgess

2016-03-04 Thread Jeremy Dyer
Congratulations, Matt. Looking forward to consuming all your future work.



Re: heisenbug causing "lost" content claims

2016-03-04 Thread Mark Payne
Tony,

The two tickets that come to mind are:
https://issues.apache.org/jira/browse/NIFI-1577 (Too many open files)
https://issues.apache.org/jira/browse/NIFI-1527 (ContentNotFound)

Do these sound like they may be what is causing your issues?

Thanks
-Mark





heisenbug causing "lost" content claims

2016-03-04 Thread Tony Kurc
All,
I wanted to describe an issue on a NiFi instance we've been running 0.4.1 on,
and why diagnosing and reproducing it may be difficult. This is on a Linux
server with a reasonably high load, and the error happens infrequently, but
when it does, it really gums up operations.

At some point we get an IOException for too many open files (with an awfully
high open-files limit in ulimit, so we're not sure why that is happening).
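
One way to confirm whether the NiFi JVM itself is the process creeping toward the descriptor limit is to poll the platform MXBean. A minimal sketch, assuming a HotSpot/OpenJDK JVM on Linux where the platform bean implements com.sun.management.UnixOperatingSystemMXBean:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdCheck {
        public static void main(String[] args) {
            // On Unix-like JVMs the platform bean exposes file descriptor counts.
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                        + " / max fds: " + unix.getMaxFileDescriptorCount());
            } else {
                System.out.println("File descriptor metrics not available on this JVM/OS.");
            }
        }
    }

Watching /proc/<pid>/fd over time gives the same picture from outside the JVM.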

Some time later, when trying to read a flowfile in a processor, we get a
ContentNotFoundException, presumably because the flowfile is pointing to
content that was never written. When this happens, we basically have to remove
the flowfile manually, which can cause operational challenges if no one is
watching at the moment, if the processor that reads it isn't configured to
handle this, or if you're not on 0.5.x, where you can selectively remove
flowfiles from a queue.

Because this happens so infrequently, I'm not sure if others have seen it. I'm
also not sure whether something in the framework may need adjustment when a
content claim goes wrong, but I really didn't expect that a flowfile with no
actual content could be created, which seems to be what happened (rather than
the content being deleted or corrupted).

Has anyone else experienced this, or does anyone know whether something in
0.5.x may have addressed it? Looking through the release notes, nothing jumped
out.

Tony


Re: [ANNOUNCE] New Apache NiFi Committer Matt Burgess

2016-03-04 Thread Ricky Saltzer
Congratulations!




-- 
Ricky Saltzer
http://www.cloudera.com


Re: [ANNOUNCE] New Apache NiFi Committer Matt Burgess

2016-03-04 Thread Edmon Begoli
Congrats, Matt. I have seen Matt's previous open source code contributions,
and I can attest that he is an expert programmer.

Edmon



Re: [ANNOUNCE] New Apache NiFi Committer Matt Burgess

2016-03-04 Thread Joe Witt
Congrats, Matthew. We very much welcome your contributions to the project in
terms of reviews, code, and blogs that help drive awareness of and feedback on
those capabilities.

Thanks
Joe



[ANNOUNCE] New Apache NiFi Committer Matt Burgess

2016-03-04 Thread Tony Kurc
On behalf of the Apache NiFi PMC, I am very pleased to announce that Matt
Burgess has accepted the PMC's invitation to become a committer on the
Apache NiFi project. We greatly appreciate all of Matt's hard work and
generous contributions to the project. We look forward to his continued
involvement in the project.

Matt has taken the lead on some of our longer-standing feature requests, as well
as representing the community well in the blogosphere and twitterverse.
We're delighted to have him on board as a committer now!

Tony


[GitHub] nifi pull request: NIFI-1420 Adding Splunk bundle

2016-03-04 Thread JPercivall
Github user JPercivall commented on the pull request:

https://github.com/apache/nifi/pull/233#issuecomment-192325813
  
I should have made this comment on the first commit, but can the LogGenerator 
be put into a util package on the same path? It would be more readable to have 
only the test classes in the "org.apache.nifi.processors.splunk" package.

