Re: [EXT] New to Nifi - Failed to update database due to a failed batch update

2017-09-27 Thread Koji Kawamura
Hi Aruna,

The XML files in the Gist page are NiFi Templates.
You can import those XML files from the NiFi UI. Please look at this
documentation for details:
https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Import_Template

As to PutDatabaseRecord doing nothing, the '1' at the top right corner of
PutDatabaseRecord indicates that one thread is currently running for this
processor.
It's strange that you don't see anything happening with it for 30 minutes;
the thread may be blocked unexpectedly.

If possible, please take a thread dump with the following command and share
it with us:
$NIFI_HOME/bin/nifi.sh dump
The thread dump is then logged at
$NIFI_HOME/logs/nifi-bootstrap

Also, please share PutDatabaseRecord and its record reader configurations
for further investigation.

Thanks,
Koji


On Thu, Sep 28, 2017 at 1:48 AM, Aruna Sankaralingam <
aruna.sankaralin...@cormac-corp.com> wrote:

> Thank you Koji. Could you please let me know how I can import the xml so
> that I can see them as nifi processors?
>
> I updated my flow as shown below. When I started PutDatabaseRecord, it is
> not doing anything. It’s been more than 30 mins. I don’t see any errors as
> well. How do I find out what is wrong?
>
>
>
>
>
> *From:* Koji Kawamura [mailto:ijokaruma...@gmail.com]
> *Sent:* Tuesday, September 26, 2017 10:22 PM
>
> *To:* users@nifi.apache.org
> *Cc:* karthi keyan
> *Subject:* Re: [EXT] New to Nifi - Failed to update database due to a
> failed batch update
>
>
>
> Hi Aruna,
>
>
>
> To explain details, I've summarized two different approaches to load a CSV
> file into a Table in this Gist page:
>
> https://gist.github.com/ijokarumawak/b37db141b4d04c2da124c1a6d922f81f
>
>
>
> One is using ConvertCSVToAvro and a few additional processors.
>
> I didn't use ReplaceText as I thought altering raw SQL string would be
> error prone.
>
> This approach should work with older versions of NiFi (I see you're using
> NiFi 1.2.0 in your screenshot).
>
>
>
> The other way is to use PutDatabaseRecord.
>
> This is recommended if you're able to upgrade your NiFi installation.
>
>
>
> I hope you find these examples useful.
>
>
>
> Thanks,
>
> Koji
>
>
>
> On Tue, Sep 26, 2017 at 11:23 PM, Aruna Sankaralingam <
> aruna.sankaralin...@cormac-corp.com> wrote:
>
> I am not sure I understand. This is how my CSV looks.
>
>
>
>
>
> -Original Message-
> From: Koji Kawamura [mailto:ijokaruma...@gmail.com]
> Sent: Monday, September 25, 2017 8:19 PM
> To: users@nifi.apache.org
> Cc: karthi keyan
> Subject: Re: [EXT] New to Nifi - Failed to update database due to a failed
> batch update
>
>
>
> Hi Aruna,
>
>
>
> The placeholders in your ReplaceText configuration, such as '${city_name}'
> are NiFi Expression Language. If the incoming FlowFile has such FlowFile
> Attributes, those can be replaced with FlowFile Attribute values. But I
> suspect FlowFile doesn't have those attributes since ReplaceText is
> connected right after FetchS3Object.
>
>
>
> You need to extract values from FlowFile content into FlowFile attribute
> somehow, for example, if the data fetched from S3 is a JSON, use
> EvaluateJsonPath before ReplaceText.
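>
> For example, a quick sketch of that approach (the JSON paths below are just
> assumptions based on your column names): in EvaluateJsonPath, set
> Destination = flowfile-attribute and add one user-defined property per
> value you need, e.g.
>
>   city_name = $.city_name
>   zip_cd = $.zip_cd
>
> Those values then become FlowFile attributes, so the '${city_name}' and
> '${zip_cd}' placeholders in your ReplaceText Replacement Value resolve to
> real values instead of empty strings.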
>
>
>
> BTW, I think you don't need to use FetchS3Object because PutS3Object
> passes the data object to its 'success' relationship. You can connect
> 'success' relationship to downstream flow like:
>
> PutS3Object -> EvaluateJsonPath -> ReplaceText -> PutSQL
>
>
>
> Also if you can upgrade NiFi to 1.3.0, PutDatabaseRecord can make the flow
> simpler and more efficient:
>
> PutS3Object -> PutDatabaseRecord (with an arbitrary RecordReader)
>
>
>
> Thanks,
>
> Koji
>
>
>
>
>
> On Tue, Sep 26, 2017 at 12:47 AM, Aruna Sankaralingam <
> aruna.sankaralin...@cormac-corp.com> wrote:
>
> > I updated the insert statement to be in a single line. Again it
>
> > failed. I checked the flow file.
>
> >
>
> >
>
> >
>
> > INSERT INTO ADR_SUB_NIFI (enrlmt_id, city_name, zip_cd, state_cd)
>
> > VALUES ('', '', '', '')
>
> >
>
> >
>
> >
>
> > What could be the reason for the values to be blank instead of actual
>
> > values from the CSV file?
>
> >
>
> >
>
> >
>
> > From: karthi keyan [mailto:karthi93.san...@gmail.com
> ]
>
> > Sent: Monday, September 25, 2017 7:15 AM
>
> > To: users@nifi.apache.org; Aruna Sankaralingam
>
> >
>
> >
>
> > Subject: Re: [EXT] New to Nifi - Failed to update database due to a
>
> > failed batch update
>
> >
>
> >
>
> >
>
> > Aruna,
>
> >
>
> >
>
> >
>
> > It seems there is a failure in your insert statement. Don't split the
>
> > Replacement Value (query) in the ReplaceText processor into multiple
>
> > lines; try keeping it on a single line.
>
> >
>
> >
>
> >
>
> > -Karthik
>
> >
>
> >
>
> >
>
> > On Mon, Sep 25, 2017 at 4:20 PM, karthi keyan
>
> > 
>
> > wrote:
>
> >
>
> > Aruna,
>
> >
>
> >
>
> >
>
> > You can download the flow file to see whether your query passed
>
> > correctly, and try executing the same query with your datasource.
>
> >
>
> >
>
> >
>
> > -Karthik
>
> >
>
> >
>
> >
>
> > On Mon, Sep 

Re: ComponentLog and JUnit

2017-09-27 Thread Ryan H
I've done some more digging with mvn dependency:tree.

It looks like a jar I'm importing in my pom includes logback-classic,
version 1.0.9, at compile scope.  The exceptions I'm getting are likely due
to it being such an old version of logback-classic.  Notably, the jar I'm
pulling in should probably have scoped logback-classic to test.
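
For anyone else hitting this, the transitive copy can also be excluded on
the consuming side; something along these lines in the pom should do it
(just a sketch, with some.group:offending-jar and x.y.z standing in for
whatever dependency is pulling logback-classic in):

<dependency>
    <groupId>some.group</groupId>
    <artifactId>offending-jar</artifactId>
    <version>x.y.z</version>
    <exclusions>
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>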

Also, I noticed that nifi-mock includes the logback-classic at version
1.2.3 for you, so it's not really even necessary to include it.

I suppose the best development practice here is to always keep
logback-classic at test scope, knowing the NiFi container will have a copy
of it.

Thanks,
Ryan


On Wed, Sep 27, 2017 at 12:50 PM, Ryan H 
wrote:

> *tl;dr:*  When using TestRunner, I can only get logging to log to the
> console/standard out when logback-classic is present in the test-scope.
>
>   Is it safe to say that NiFi, when using the TestRunner, requires
> logback-classic to be present in the test scope in the pom.xml?
>
>   I've got my stuff working now, if there's any documentation on the
> logging that I'm missing that might explain this, or the minimum
> dependencies required type thing, I'd be interested in seeing, or
> contributing some notes if not.
>
> Thanks,
> Ryan
>
> Some trial/errors:
>
> I've tried a couple options here:
> 1) slf4j-simple in pom.xml and logback-test.xml
>   RESULTS: No console output.  Stacktrace #1 below
>
> 2) [logback-*core + *slf4j-api] and logback-test.xml
>   RESULTS: No console output.  Stacktrace #1 below
>
> 3) [logback-*classic*] and logback-test.xml
>   RESULTS: Console output successful.
>
> 4) [logback-*classic*] alone
>   RESULTS: Console output successful.
>
> 5) Nothing in pom.xml for logging
>   RESULTS: No console output.  Stacktrace #2 below
>
> Another note:
> Throughout all of this, except with logback-classic, I've been seeing this
> stacktrace pop-up at this line:
># TestRunner runner = TestRunners.newTestRunner(processor);
>
> *Stacktrace #1*
> Failed to instantiate [ch.qos.logback.classic.LoggerContext]
> Reported exception:
> java.lang.AbstractMethodError: ch.qos.logback.classic.pattern.
> EnsureExceptionHandling.process(Lch/qos/logback/core/
> Context;Lch/qos/logback/core/pattern/Converter;)V
>   at ch.qos.logback.core.pattern.PatternLayoutBase.start(
> PatternLayoutBase.java:86)
>   ...
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>   at org.apache.nifi.util.MockProvenanceReporter.(
> MockProvenanceReporter.java:36)
>   at org.apache.nifi.util.SharedSessionState.(
> SharedSessionState.java:45)
>   at org.apache.nifi.util.StandardProcessorTestRunner.(
> StandardProcessorTestRunner.java:100)
>   at org.apache.nifi.util.TestRunners.newTestRunner(TestRunners.java:24)
>   at org.testing.processors.TestProcessor.testFile(TestProcessor.java:96)
>
>
> *Stacktrace #2*
> Failed to instantiate [ch.qos.logback.classic.LoggerContext]
> Reported exception:
> java.lang.NoSuchMethodError: ch.qos.logback.core.util.loader.
> getResourceOccurrenceCount(Ljava/lang/String;Ljava/lang/
> ClassLoader;)Ljava/util/Set;
>   at ch.qos.logback.classic.util.ContextInitializer.multiplicityWarning(
> ContextInitializer.java:158)
>   ...
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>   at org.apache.nifi.util.MockProvenanceReporter.(
> MockProvenanceReporter.java:36)
>   at org.apache.nifi.util.SharedSessionState.(
> SharedSessionState.java:45)
>   at org.apache.nifi.util.StandardProcessorTestRunner.(
> StandardProcessorTestRunner.java:100)
>   at org.apache.nifi.util.TestRunners.newTestRunner(TestRunners.java:24)
>   at org.testing.processors.TestProcessor.testFile(TestProcessor.java:96)
>
>   Is it safe to say that NiFi, when using the TestRunner, requires
> logback-classic to be present in the test scope?
>
> Thanks,
> Ryan
>
> On Wed, Sep 27, 2017 at 11:08 AM, Bryan Bende  wrote:
>
>> I think another option is to use the simple SLF4J logger
>>
>> <dependency>
>>     <groupId>org.slf4j</groupId>
>>     <artifactId>slf4j-simple</artifactId>
>>     <scope>test</scope>
>> </dependency>
>>
>> Then you probably wouldn't need the logback.xml file since you
>> wouldn't be logging through logback at that point.
>>
>>
>> On Wed, Sep 27, 2017 at 10:40 AM, Ryan H 
>> wrote:
>> > I figured it out... I was missing the logback-test.xml file (now
>> included)
>> > and in my pom.xml:
>> >
>> > I had:
>> > <dependency>
>> >     <groupId>ch.qos.logback</groupId>
>> >     <artifactId>logback-core</artifactId>
>> >     <scope>test</scope>
>> > </dependency>
>> >
>> > Instead I swapped to:
>> >
>> > <dependency>
>> >     <groupId>ch.qos.logback</groupId>
>> >     <artifactId>logback-classic</artifactId>
>> >     <scope>test</scope>
>> > </dependency>
>> >
>> > And the logs are outputting now.
>> >
>> > Ryan
>> >
>> >
>> > On Wed, Sep 27, 2017 at 10:20 AM, Ryan H 
>> > wrote:
>> >>
>> >> Hi Andy,
>> >>It's a custom processor I'm writing.  I was scratching my head
>> >> wondering if that is it, or adding a test dependency for logback has
>> >> something to do with it.
>> >>
>> >>I just copy/pasted this one:
>> >> 

Re: Using MiNiFi as a library from with a Java program

2017-09-27 Thread Andy LoPresto
I understand your desire for a bundled SDK for data routing, but NiFi/MiNiFi 
isn’t designed for that right now. As the code is open source, you are welcome 
to extract pieces you find useful to build your own. Kafka Connect or some 
other libraries are probably more relevant to what you are trying to do now. 
I’m not saying that in the future, MiNiFi libraries won’t be exposed or made 
accessible to other clients in order to expand the reach, but currently that is 
not how the software is designed.


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Sep 27, 2017, at 9:54 AM, p pathiyil  wrote:
> 
> Thanks for that detailed response Andy.
> 
> I was not sure if it is advisable to use MiNiFi without connecting it back to 
> NiFi, so good to get clarity on that.
> 
> To take a stab at providing a reasoning for the 'MiNiFi as a library' use 
> case:
> 
> - I am looking to provide a library that exposes an SDK to clients, through 
> which they will pump in some data. I was hoping to avoid users having to run 
> a separate program (MiNiFi agent) in addition to the application that they 
> will build using the SDK. These two will have to run on client machines.
> 
> - The second has more to do with the ease of deployment for an application 
> that the user will build with an SDK. If all the dependencies come packaged 
> with the SDK, users do not have to worry about deploying 2 applications to 
> their target systems.
> 
> On Wed, Sep 27, 2017 at 11:14 AM, Andy LoPresto wrote:
> I'm not sure I understand this proposal. MiNiFi can be installed completely 
> separately from NiFi (and usually they are not co-located on the same 
> machine). If your requirements are to connect an arbitrary Java program with 
> Kafka to produce data that is published to a Kafka topic, MiNiFi is a 
> potential (but probably not ideal) tool, while its interdependency with NiFi 
> is only relevant if you wanted to push from MiNiFi to NiFi to Kafka.
> 
> Please let me know if I am misunderstanding, but to go from your client to 
> Kafka, I would propose the following flow:
> 
> Your code: persist or stream data somewhere (CSV/XML/JSON file, HTTP 
> endpoint, TCP packet, DB, whatever)
> 
> MiNiFi: processor to read that format/location -> processor(s) to manipulate 
> what is read in as flowfile content to massage it to expected Kafka form -> 
> PublishKafka processor
> 
> This allows you to decouple your application from the Kafka format while 
> using MiNiFi as a "glue layer" that can be updated by changing a single 
> processor should the source or destination change in the future.
> 
> MiNiFi would provide you with the queuing features like ordering, 
> prioritization, and backpressure without requiring you to code that yourself.
> 
> Hopefully this helps and if not, at least clarifies the capabilities and uses 
> of NiFi and MiNiFi.
> 
> Andy LoPresto
> alopre...@apache.org 
> alopresto.apa...@gmail.com 
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> 
> On Sep 26, 2017, at 20:04, p pathiyil wrote:
> 
>> As an alternative approach, is it conceivable to write a Custom Processor 
>> and run that in the MiNiFi 'pipeline' without connecting back to NiFi 
>> (Custom Processor -> Some Connector with socket / buffer input -> 
>> PublishToKafka) ? This wouldn't hide the MiNiFi installation / runtime from 
>> the user, but will keep the 'agent' independent of the NiFi backend.
>> 
>> 
>> 
>> 
>> 
>> On Wed, Sep 27, 2017 at 6:50 AM, p pathiyil wrote:
>> Thanks for the pointer Joe. Let me look at Kafka Connect and see if that can 
>> be used to address this use case.
>> 
>> On Wed, Sep 27, 2017 at 6:35 AM, Joe Witt wrote:
>> Praveen
>> 
>> I think the direction of MiNiFi and where it can head will support
>> this case nicely.  Today though I don't think we offer the simple
>> library model you're looking for.  Fortunately, for Apache Kafka
>> specifically their community has developed Kafka Connect [1] which
>> sounds like it could be just what you need.
>> 
>> Take a look at that and if that gets you where you want to be then
>> great.  If not, or I've misunderstood the ask please let us know.
>> 
>> [1] https://kafka.apache.org/documentation/#connect 
>> 
>> 
>> Thanks
>> Joe
>> 
>> On Tue, Sep 26, 2017 at 8:50 PM, p pathiyil wrote:
>> > Hi Aldrin,
>> >
>> > I am looking at a couple of different flows, but to take the most simple
>> > scenario, I would like to leverage the PublishToKafka_0_10 processor of
>> > MiNiFi alone. The 

Re: Using MiNiFi as a library from with a Java program

2017-09-27 Thread p pathiyil
Thanks for that detailed response Andy.

I was not sure if it is advisable to use MiNiFi without connecting it back
to NiFi, so good to get clarity on that.

To take a stab at providing a reasoning for the 'MiNiFi as a library' use
case:

- I am looking to provide a library that exposes an SDK to clients, through
which they will pump in some data. I was hoping to avoid users having to
run a separate program (MiNiFi agent) in addition to the application that
they will build using the SDK. These two will have to run on client
machines.

- The second has more to do with the ease of deployment for an application
that the user will build with an SDK. If all the dependencies come packaged
with the SDK, users do not have to worry about deploying 2 applications to
their target systems.

On Wed, Sep 27, 2017 at 11:14 AM, Andy LoPresto 
wrote:

> I'm not sure I understand this proposal. MiNiFi can be installed
> completely separately from NiFi (and usually they are not co-located on the
> same machine). If your requirements are to connect an arbitrary Java
> program with Kafka to produce data that is published to a Kafka topic,
> MiNiFi is a potential (but probably not ideal) tool, while its
> interdependency with NiFi is only relevant if you wanted to push from
> MiNiFi to NiFi to Kafka.
>
> Please let me know if I am misunderstanding, but to go from your client to
> Kafka, I would propose the following flow:
>
> Your code: persist or stream data somewhere (CSV/XML/JSON file, HTTP
> endpoint, TCP packet, DB, whatever)
>
> MiNiFi: processor to read that format/location -> processor(s) to
> manipulate what is read in as flowfile content to massage it to expected
> Kafka form -> PublishKafka processor
>
> This allows you to decouple your application from the Kafka format while
> using MiNiFi as a "glue layer" that can be updated by changing a single
> processor should the source or destination change in the future.
>
> MiNiFi would provide you with the queuing features like ordering,
> prioritization, and backpressure without requiring you to code that
> yourself.
>
> Hopefully this helps and if not, at least clarifies the capabilities and
> uses of NiFi and MiNiFi.
>
> Andy LoPresto
> alopre...@apache.org
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Sep 26, 2017, at 20:04, p pathiyil  wrote:
>
> As an alternative approach, is it conceivable to write a Custom Processor
> and run that in the MiNiFi 'pipeline' without connecting back to NiFi
> (Custom Processor -> Some Connector with socket / buffer input ->
> PublishToKafka) ? This wouldn't hide the MiNiFi installation / runtime from
> the user, but will keep the 'agent' independent of the NiFi backend.
>
>
>
>
>
> On Wed, Sep 27, 2017 at 6:50 AM, p pathiyil  wrote:
>
>> Thanks for the pointer Joe. Let me look at Kafka Connect and see if that
>> can be used to address this use case.
>>
>> On Wed, Sep 27, 2017 at 6:35 AM, Joe Witt  wrote:
>>
>>> Praveen
>>>
>>> I think the direction of MiNiFi and where it can head will support
>>> this case nicely.  Today though I don't think we offer the simple
>>> library model you're looking for.  Fortunately, for Apache Kafka
>>> specifically their community has developed Kafka Connect [1] which
>>> sounds like it could be just what you need.
>>>
>>> Take a look at that and if that gets you where you want to be then
>>> great.  If not, or I've misunderstood the ask please let us know.
>>>
>>> [1] https://kafka.apache.org/documentation/#connect
>>>
>>> Thanks
>>> Joe
>>>
>>> On Tue, Sep 26, 2017 at 8:50 PM, p pathiyil  wrote:
>>> > Hi Aldrin,
>>> >
>>> > I am looking at a couple of different flows, but to take the most
>>> simple
>>> > scenario, I would like to leverage the PublishToKafka_0_10 processor of
>>> > MiNiFi alone. The application will obtain some data, send it to a
>>> Processor
>>> > that has a simple upstream connection interface like a socket or in
>>> memory
>>> > buffer (so that conversion to FlowFiles can be taken care of in that
>>> > Processor) and then use PublishToKafka_0_10 to directly publish to a
>>> Kafka
>>> > cluster. Will that be easy enough to do ?
>>> >
>>> > Thanks,
>>> > Praveen.
>>> >
>>> >
>>> >
>>> >
>>> > On Wed, Sep 27, 2017 at 12:24 AM, Aldrin Piri 
>>> wrote:
>>> >>
>>> >> Hello,
>>> >>
>>> >> Neither NiFi nor MiNiFi (Java) is currently packaged as such.  Would
>>> you
>>> >> be able to expand upon your use case of publishing to Kafka and/or
>>> what
>>> >> facets of MiNiFi you are looking to utilize?  There may be other
>>> avenues
>>> >> that get you toward your solution.
>>> >>
>>> >> Thanks,
>>> >> Aldrin
>>> >>
>>> >> On Tue, Sep 26, 2017 at 1:03 PM, p pathiyil 
>>> wrote:
>>> >>>
>>> >>> Hi,
>>> >>>
>>> >>> I am starting to look at MiNiFi for a use case that involves
>>> publishing

Re: ComponentLog and JUnit

2017-09-27 Thread Ryan H
*tl;dr:*  When using TestRunner, I can only get logging to log to the
console/standard out when logback-classic is present in the test-scope.

  Is it safe to say that NiFi, when using the TestRunner, requires
logback-classic to be present in the test scope in the pom.xml?

  I've got my stuff working now.  If there's any documentation on the
logging that I'm missing that might explain this, or on the minimum
dependencies required, I'd be interested in seeing it, or in contributing
some notes if there isn't any.

Thanks,
Ryan

Some trial/errors:

I've tried a couple options here:
1) slf4j-simple in pom.xml and logback-test.xml
  RESULTS: No console output.  Stacktrace #1 below

2) [logback-*core + *slf4j-api] and logback-test.xml
  RESULTS: No console output.  Stacktrace #1 below

3) [logback-*classic*] and logback-test.xml
  RESULTS: Console output successful.

4) [logback-*classic*] alone
  RESULTS: Console output successful.

5) Nothing in pom.xml for logging
  RESULTS: No console output.  Stacktrace #2 below

Another note:
Throughout all of this, except with logback-classic, I've been seeing this
stacktrace pop up at this line:
   # TestRunner runner = TestRunners.newTestRunner(processor);

*Stacktrace #1*
Failed to instantiate [ch.qos.logback.classic.LoggerContext]
Reported exception:
java.lang.AbstractMethodError:
ch.qos.logback.classic.pattern.EnsureExceptionHandling.process(Lch/qos/logback/core/Context;Lch/qos/logback/core/pattern/Converter;)V
  at
ch.qos.logback.core.pattern.PatternLayoutBase.start(PatternLayoutBase.java:86)
  ...
  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
  at
org.apache.nifi.util.MockProvenanceReporter.(MockProvenanceReporter.java:36)
  at
org.apache.nifi.util.SharedSessionState.(SharedSessionState.java:45)
  at
org.apache.nifi.util.StandardProcessorTestRunner.(StandardProcessorTestRunner.java:100)
  at org.apache.nifi.util.TestRunners.newTestRunner(TestRunners.java:24)
  at org.testing.processors.TestProcessor.testFile(TestProcessor.java:96)


*Stacktrace #2*
Failed to instantiate [ch.qos.logback.classic.LoggerContext]
Reported exception:
java.lang.NoSuchMethodError:
ch.qos.logback.core.util.loader.getResourceOccurrenceCount(Ljava/lang/String;Ljava/lang/ClassLoader;)Ljava/util/Set;
  at
ch.qos.logback.classic.util.ContextInitializer.multiplicityWarning(ContextInitializer.java:158)
  ...
  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
  at
org.apache.nifi.util.MockProvenanceReporter.(MockProvenanceReporter.java:36)
  at
org.apache.nifi.util.SharedSessionState.(SharedSessionState.java:45)
  at
org.apache.nifi.util.StandardProcessorTestRunner.(StandardProcessorTestRunner.java:100)
  at org.apache.nifi.util.TestRunners.newTestRunner(TestRunners.java:24)
  at org.testing.processors.TestProcessor.testFile(TestProcessor.java:96)

  Is it safe to say that NiFi, when using the TestRunner, requires
logback-classic to be present in the test scope?

Thanks,
Ryan

On Wed, Sep 27, 2017 at 11:08 AM, Bryan Bende  wrote:

> I think another option is to use the simple SLF4J logger
>
> <dependency>
>     <groupId>org.slf4j</groupId>
>     <artifactId>slf4j-simple</artifactId>
>     <scope>test</scope>
> </dependency>
>
> Then you probably wouldn't need the logback.xml file since you
> wouldn't be logging through logback at that point.
>
>
> On Wed, Sep 27, 2017 at 10:40 AM, Ryan H 
> wrote:
> > I figured it out... I was missing the logback-test.xml file (now
> included)
> > and in my pom.xml:
> >
> > I had:
> > <dependency>
> >     <groupId>ch.qos.logback</groupId>
> >     <artifactId>logback-core</artifactId>
> >     <scope>test</scope>
> > </dependency>
> >
> > Instead I swapped to:
> >
> > <dependency>
> >     <groupId>ch.qos.logback</groupId>
> >     <artifactId>logback-classic</artifactId>
> >     <scope>test</scope>
> > </dependency>
> >
> > And the logs are outputting now.
> >
> > Ryan
> >
> >
> > On Wed, Sep 27, 2017 at 10:20 AM, Ryan H 
> > wrote:
> >>
> >> Hi Andy,
> >>It's a custom processor I'm writing.  I was scratching my head
> >> wondering if that is it, or adding a test dependency for logback has
> >> something to do with it.
> >>
> >>I just copy/pasted this one:
> >> https://github.com/apache/nifi/blob/d838f61291d2582592754a37314911
> b701c6891b/nifi-nar-bundles/nifi-framework-bundle/nifi-
> framework/nifi-web/nifi-web-api/src/test/resources/logback-test.xml
> >> and dropped it into my src/test/resources
> >>
> >>   No such luck seeing logs in my output yet.
> >>
> >> Ryan
> >>
> >> On Tue, Sep 26, 2017 at 5:59 PM, Andy LoPresto 
> >> wrote:
> >>>
> >>> Ryan,
> >>>
> >>> Which module is this running in? Some modules have a logback.xml file
> >>> defined in /src/test/resources/logback.xml which configures the test
> >>> loggers, while others do not. If this is not configured in your
> module, you
> >>> won’t see the error messages on the console. You can copy an existing
> >>> logback.xml file from one of the other modules (be sure to use one
> from the
> >>> *test* directory, not the *main* directory).
> >>>
> >>>
> >>> Andy LoPresto
> >>> alopre...@apache.org
> >>> alopresto.apa...@gmail.com

Re: Processing multiple lines per flowfile with ExtractGrok

2017-09-27 Thread Adam Lamar
Thanks Joe and Bryan. The setup is a little more involved, but I was able to
get ConvertRecord running with a grok reader and a json writer. And I can
confirm that setup splits records as expected by newline. Nice touch to
have multiple records contained in the same flow file! Thanks for the tip
and excuse to finally play with the record oriented processors.

Cheers,
Adam


Re: ComponentLog and JUnit

2017-09-27 Thread Bryan Bende
I think another option is to use the simple SLF4J logger

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <scope>test</scope>
</dependency>

Then you probably wouldn't need the logback.xml file since you
wouldn't be logging through logback at that point.


On Wed, Sep 27, 2017 at 10:40 AM, Ryan H  wrote:
> I figured it out... I was missing the logback-test.xml file (now included)
> and in my pom.xml:
>
> I had:
> <dependency>
>     <groupId>ch.qos.logback</groupId>
>     <artifactId>logback-core</artifactId>
>     <scope>test</scope>
> </dependency>
>
> Instead I swapped to:
>
> <dependency>
>     <groupId>ch.qos.logback</groupId>
>     <artifactId>logback-classic</artifactId>
>     <scope>test</scope>
> </dependency>
>
> And the logs are outputting now.
>
> Ryan
>
>
> On Wed, Sep 27, 2017 at 10:20 AM, Ryan H 
> wrote:
>>
>> Hi Andy,
>>It's a custom processor I'm writing.  I was scratching my head
>> wondering if that is it, or adding a test dependency for logback has
>> something to do with it.
>>
>>I just copy/pasted this one:
>> https://github.com/apache/nifi/blob/d838f61291d2582592754a37314911b701c6891b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/test/resources/logback-test.xml
>> and dropped it into my src/test/resources
>>
>>   No such luck seeing logs in my output yet.
>>
>> Ryan
>>
>> On Tue, Sep 26, 2017 at 5:59 PM, Andy LoPresto 
>> wrote:
>>>
>>> Ryan,
>>>
>>> Which module is this running in? Some modules have a logback.xml file
>>> defined in /src/test/resources/logback.xml which configures the test
>>> loggers, while others do not. If this is not configured in your module, you
>>> won’t see the error messages on the console. You can copy an existing
>>> logback.xml file from one of the other modules (be sure to use one from the
>>> *test* directory, not the *main* directory).
>>>
>>>
>>> Andy LoPresto
>>> alopre...@apache.org
>>> alopresto.apa...@gmail.com
>>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>>>
>>> On Sep 26, 2017, at 2:51 PM, Ryan H  wrote:
>>>
>>> Hi,
>>>I'm curious if there's a way to get the ComponentLog to output to
>>> StandardOut during a Unit Test.
>>>
>>>I see I can access them when I call
>>> testRunner.getLogger().getErrorMessages(), however, I'd really like to see a
>>> stacktrace if it happens without having to iterate through an array to find
>>> it.
>>>
>>>I feel like I'm just missing something obvious here, I'd appreciate
>>> any advice.
>>>
>>> Thanks,
>>> Ryan
>>>
>>>
>>
>


Re: ComponentLog and JUnit

2017-09-27 Thread Ryan H
I figured it out... I was missing the logback-test.xml file (now included)
and in my pom.xml:

I had:
<dependency>
   <groupId>ch.qos.logback</groupId>
   <artifactId>logback-*core*</artifactId>
   <scope>test</scope>
</dependency>

Instead I swapped to:

<dependency>
   <groupId>ch.qos.logback</groupId>
   <artifactId>logback-*classic*</artifactId>
   <scope>test</scope>
</dependency>

And the logs are outputting now.
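
For reference, a minimal logback-test.xml along these lines is enough to get
console output during the tests (a rough sketch, not the exact file from the
NiFi repo; adjust the pattern and level to taste):

<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%-4r [%t] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>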

Ryan


On Wed, Sep 27, 2017 at 10:20 AM, Ryan H 
wrote:

> Hi Andy,
>It's a custom processor I'm writing.  I was scratching my head
> wondering if that is it, or adding a test dependency for logback has
> something to do with it.
>
>I just copy/pasted this one: https://github.com/apache/nifi/blob/
> d838f61291d2582592754a37314911b701c6891b/nifi-nar-bundles/
> nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-
> api/src/test/resources/logback-test.xml and dropped it into my
> src/test/resources
>
>   No such luck seeing logs in my output yet.
>
> Ryan
>
> On Tue, Sep 26, 2017 at 5:59 PM, Andy LoPresto 
> wrote:
>
>> Ryan,
>>
>> Which module is this running in? Some modules have a logback.xml file
>> defined in /src/test/resources/logback.xml which configures the test
>> loggers, while others do not. If this is not configured in your module, you
>> won’t see the error messages on the console. You can copy an existing
>> logback.xml file from one of the other modules (be sure to use one from the
>> *test* directory, not the *main* directory).
>>
>>
>> Andy LoPresto
>> alopre...@apache.org
>> *alopresto.apa...@gmail.com *
>> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>>
>> On Sep 26, 2017, at 2:51 PM, Ryan H  wrote:
>>
>> Hi,
>>I'm curious if there's a way to get the ComponentLog to output to
>> StandardOut during a Unit Test.
>>
>>I see I can access them when I call 
>> testRunner.getLogger().getErrorMessages(),
>> however, I'd really like to see a stacktrace if it happens without having
>> to iterate through an array to find it.
>>
>>I feel like I'm just missing something obvious here, I'd appreciate
>> any advice.
>>
>> Thanks,
>> Ryan
>>
>>
>>
>


Re: ComponentLog and JUnit

2017-09-27 Thread Ryan H
Hi Andy,
   It's a custom processor I'm writing.  I was scratching my head wondering
if that's it, or if adding a test dependency for logback has something to do
with it.

   I just copy/pasted this one:
https://github.com/apache/nifi/blob/d838f61291d2582592754a37314911b701c6891b/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/test/resources/logback-test.xml
and dropped it into my src/test/resources

  No such luck seeing logs in my output yet.

Ryan

On Tue, Sep 26, 2017 at 5:59 PM, Andy LoPresto  wrote:

> Ryan,
>
> Which module is this running in? Some modules have a logback.xml file
> defined in /src/test/resources/logback.xml which configures the test
> loggers, while others do not. If this is not configured in your module, you
> won’t see the error messages on the console. You can copy an existing
> logback.xml file from one of the other modules (be sure to use one from the
> *test* directory, not the *main* directory).
>
>
> Andy LoPresto
> alopre...@apache.org
> *alopresto.apa...@gmail.com *
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> On Sep 26, 2017, at 2:51 PM, Ryan H  wrote:
>
> Hi,
>I'm curious if there's a way to get the ComponentLog to output to
> StandardOut during a Unit Test.
>
>I see I can access them when I call 
> testRunner.getLogger().getErrorMessages(),
> however, I'd really like to see a stacktrace if it happens without having
> to iterate through an array to find it.
>
>I feel like I'm just missing something obvious here, I'd appreciate any
> advice.
>
> Thanks,
> Ryan
>
>
>


Re: Nifi site-to-site: How does it work and how can it scale?

2017-09-27 Thread Ali Nazemian
Thank you very much, Joe.

Do we have any recommendation regarding the maximum throughput a single
input port can achieve, so we know when we need to move to multiple input
ports? Will a single input port be a bottleneck at all? Or will we probably
hit other bottlenecks before reaching that point?

Is there any document/article I can read regarding how site-to-site works
at the low level?

Regards,
Ali

On Wed, Sep 27, 2017 at 12:48 AM, Joe Witt  wrote:

> Ali
>
> 1) There are of course practical limits on how many input ports there
> can be.  Each of them do generate threads to manage those sockets.
> However, many different edge systems can send to a single input port.
> You can also demux the streams of data using flow file attributes so
> there are various ways to tackle that.  It wasn't tested against
> thousands of edge systems sending to a central cluster as the more
> common model in such a case is less of tons of spokes and one central
> hub but rather spokes sending to regional clusters which send to
> central cluster(s).  That said, it is likely it will work quite well.
>
> 2) Site-to-site has load balancing and fail-over built-in to it.  The
> s2s exchange that happens when the connection is established and over
> time is to share information about the cluster, how many nodes are in
> it, and their relative load.  This allows the clients to do weighted
> distribution, detect new or removed nodes, etc..
>
> 3) No, you don't have to use the same version.  This is another huge
> benefit of s2s is that it was built with the recognition that it is
> not possible or even desirable to upgrade all systems at once across a
> large enterprise.  The protocol involves both sides of s2s transfers
> to exchange information about the flowfile/transfer protocol it
> supports.  So old nifi sending to new nifi and new nifi sending to an
> old nifi are able to come to base agreement.  The protocol and
> serialization have been quite stable but still the ability to evolve
> is baked in.
>
> Thanks
>
> On Tue, Sep 26, 2017 at 4:07 AM, Ali Nazemian 
> wrote:
> > Hi all,
> >
> >
> > I am investigating the feasibility of using multiple Nifi clusters across
> > the world to send a live traffic to a central Nifi cluster using the
> > site-to-site. I have some questions regarding this matter:
> >
> > 1- Does it scale well? Is there any performance concerns/limitations I
> need
> > to consider? Can I send live traffic from thousands of Nifi cluster to a
> > single Nifi cluster (let's suppose the central one is a huge cluster of
> > Nifi)? Is there any limitation on the number of input ports for example?
> >
> > 2- How Nifi can handle load-balancing in the site-to-site situation? Is
> that
> > per session or flow or in a batch mode? I want to understand all the
> > situations we may face regarding the network issues between different
> Nifi
> > clusters.
> >
> > 3- Do I need to use the same version of Nifi from the edge Nifis to the
> > central Nifis? Is there any chance that the site-to-site communication
> > changes significantly that we need to upgrade all the Nifi instances we
> will
> > have or it is pretty reliable now?
> >
> >
> > Regards,
> > Ali
>



-- 
A.Nazemian