Re: [VOTE] Release Apache NiFi 0.6.1 (RC2)

2016-04-14 Thread Tony Kurc
+1 (binding) built on Windows 10 with Java 7 without issue. Verification
was successful
On Apr 14, 2016 5:17 PM, "Matt Gilman"  wrote:

+1 (binding)

Build, hashes, signatures, etc all check out. Ran application
standalone/cluster in secure/unsecured mode and everything functioned as
expected.

Matt

On Thu, Apr 14, 2016 at 2:35 PM, Joe Skora  wrote:

> +1 (non-binding)
>
> * signature and hashes verify
> * built fine using JDK1.7.0_80 with contrib-check on CentOS 6.7
> * build artifacts look good
> * deploys and runs as expected
>
>
> On Thu, Apr 14, 2016 at 9:59 AM, Mark Payne  wrote:
>
> > +1 (binding)
> >
> > Downloaded and verified signature and hashes.
> > Built on OSX with contrib-check and had no problems.
> > Verified README, NOTICE, and LICENSE files.
> > All looks good to me.
> >
> > Thanks
> > -Mark
> >
> >
> >
> > > On Apr 14, 2016, at 8:45 AM, Joe Percivall
> >  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > Went through helper to verify build. Also verified the zip using the
> > SHA-512 that Joe linked in the second helper email. Ran a contrib check
> > build on Windows 8 and OSX. Tested a couple templates as well.
> > >
> > > - - - - - -
> > > Joseph Percivall
> > > linkedin.com/in/Percivall
> > > e: joeperciv...@yahoo.com
> > >
> > >
> > >
> > > On Wednesday, April 13, 2016 8:56 PM, Matt Burgess <
> mattyb...@gmail.com>
> > wrote:
> > >
> > >
> > >
> > > +1 (non-binding)
> > >
> > > Ran release verifier, checked artifacts, ran in standalone and 1-node
> > cluster (secure and insecure), tried some flows, everything looked fine.
> > >
> > >
> > >
> > >> On Apr 12, 2016, at 7:47 PM, Joe Witt  wrote:
> > >>
> > >> Hello Apache NiFi Community,
> > >>
> > >> I am pleased to be calling this vote for the source release of Apache
> > >> NiFi 0.6.1.
> > >>
> > >> The source zip, including signatures, digests, etc. can be found at:
> > >> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.1/
> > >>
> > >> The Git tag is nifi-0.6.1-RC2
> > >> The Git commit hash is 1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> > >> *
> >
>
https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> > >> *
> >
>
https://github.com/apache/nifi/commit/1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> > >>
> > >> Checksums of nifi-0.6.1-source-release.zip:
> > >> MD5: 5bb2b80e0384f89e6055ad4b0dd45294
> > >> SHA1: b262664ed077f28623866d2a1090a4034dc3c04a
> > >>
> > >> Release artifacts are signed with the following key:
> > >> https://people.apache.org/keys/committer/joewitt.asc
> > >>
> > >> KEYS file available here:
> > >> https://dist.apache.org/repos/dist/release/nifi/KEYS
> > >>
> > >> 13 issues were closed/resolved for this release:
> > >>
> >
>
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12335496
> > >> Release note highlights can be found here:
> > >>
> >
>
https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version0.6.1
> > >>
> > >> The vote will be open for 72 hours.
> > >> Please download the release candidate and evaluate the necessary
items
> > >> including checking hashes, signatures, build from source, and test.
> Then
> > >> please vote:
> > >>
> > >> [ ] +1 Release this package as nifi-0.6.1
> > >> [ ] +0 no opinion
> > >> [ ] -1 Do not release this package because...
> > >>
> > >> Thanks!
> >
> >
>
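
The hash checks the voters describe above can be reproduced with nothing beyond the JDK. The sketch below computes the posted MD5 and SHA-1 digests of the source zip; the local file path is an assumption (wherever you downloaded the release candidate to):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class VerifyChecksums {

    // Hex-encode a digest byte array.
    static String hex(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Digest arbitrary bytes with the named JCA algorithm (e.g. "MD5", "SHA-1").
    static String digest(byte[] data, String algorithm) throws Exception {
        return hex(MessageDigest.getInstance(algorithm).digest(data));
    }

    public static void main(String[] args) throws Exception {
        // Assumed local path to the downloaded source release zip.
        Path zipPath = Paths.get("nifi-0.6.1-source-release.zip");
        if (!Files.exists(zipPath)) {
            System.out.println("Download " + zipPath + " from dist.apache.org first.");
            return;
        }
        byte[] zip = Files.readAllBytes(zipPath);
        // Compare against the values posted in the vote email:
        System.out.println("MD5:  " + digest(zip, "MD5"));   // expect 5bb2b80e0384f89e6055ad4b0dd45294
        System.out.println("SHA1: " + digest(zip, "SHA-1")); // expect b262664ed077f28623866d2a1090a4034dc3c04a
    }
}
```

The signature check (`gpg --verify` against the KEYS file) still has to happen outside the JVM; this only covers the digest portion of the verification.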


Re: GetKafka blowing up with assertion error in Kafka client code

2016-04-14 Thread Oleg Zhurakousky
Thanks, Chris.

Indeed, let us know if/when/how to reproduce it so we can evaluate it and see if it 
is something we can validate/handle in NiFi before the data is passed to Kafka 
(e.g., via validation).

Cheers
Oleg

> On Apr 14, 2016, at 8:25 PM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> I looked at the Kafka client code and it seemed to me to be a bug in the 
> caller. There is a map passed that maps topics to the number of consumers. In 
> this case it is asserting that the number of consumers is greater than zero. If 
> I can repro the problem I'll try to isolate it in the debugger and provide 
> more details.
> 
> 
> 
> Sent from my Verizon, Samsung Galaxy smartphone
> 
> 
>  Original message 
> From: Oleg Zhurakousky 
> Date: 4/14/16 4:14 PM (GMT-05:00)
> To: dev@nifi.apache.org
> Subject: Re: GetKafka blowing up with assertion error in Kafka client code
> 
> Chris
> That is correct, and for a change I am pretty happy to see this stack trace, as 
> it clearly shows the problem and validates the approach we have.
> So here are more details...
> 
> The root failure is in Kafka (as you can see from the stack trace). All we 
> are doing is encapsulating the interaction with Kafka in a cancelable Future so 
> we can cancel it if and when Kafka deadlocks (which we noticed happens rather 
> often).
> When we execute Future.get() it results in an ExecutionException which carries 
> the original Kafka exception (AssertionError).
> Now I am not sure what that assertion error really means in the context of 
> what you are trying to do, but it's clearly a problem originating in Kafka.
> Could you share your config or whatever other details?
> 
> Cheers
> Oleg
> 
>> On Apr 14, 2016, at 4:00 PM, McDermott, Chris Kevin (MSDU - 
>> STaTS/StorefrontRemote)  wrote:
>> 
>> I’m running based off of 0.7.0-SNAPSHOT.  The GetKafka config is pretty 
>> generic.  Batch size 1, 1 concurrent task.
>> 
>> 
>> 2016-04-14 19:27:23,204 ERROR [Timer-Driven Process Thread-9] 
>> o.apache.nifi.processors.kafka.GetKafka
>> java.lang.IllegalStateException: java.util.concurrent.ExecutionException: 
>> java.lang.AssertionError: assertion failed
>>   at 
>> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:355) 
>> ~[na:na]
>>   at 
>> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>>  ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
>>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>>   at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
>> [na:1.8.0_45]
>>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
>> [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>>  [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>>  [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>  [na:1.8.0_45]
>>   at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>  [na:1.8.0_45]
>>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
>> Caused by: java.util.concurrent.ExecutionException: 
>> java.lang.AssertionError: assertion failed
>>   at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
>> [na:1.8.0_45]
>>   at java.util.concurrent.FutureTask.get(FutureTask.java:206) 
>> [na:1.8.0_45]
>>   at 
>> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:348) 
>> ~[na:na]
>>   ... 12 common frames omitted
>> Caused by: java.lang.AssertionError: assertion failed
>>   at scala.Predef$.assert(Predef.scala:165) ~[na:na]
>>   at 
>> kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:51)
>>  ~[na:na]
>>   at 
>> kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:49)
>>  ~[na:na]
>>   at 
>> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>>  ~[na:na]
>>   at scala.collection.immutable.Map$Map1.foreach(Map.scala:109) ~[na:na]
>>   at 
>> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>>  ~[na:
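
The approach Oleg describes in this thread — wrapping the blocking Kafka interaction in a cancelable Future so a hung client can be abandoned — can be sketched as follows. This is a minimal illustration, not the actual GetKafka code; the task bodies and timeout are placeholders:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CancelableCall {

    // Run a blocking task with a deadline; cancel it if it hangs.
    static String callWithTimeout(Callable<String> task, long timeoutMs) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(task);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // A deadlocked client surfaces as a timeout and can be interrupted
            // instead of blocking the processor thread forever.
            future.cancel(true);
            return "timeout";
        } catch (ExecutionException e) {
            // A failure inside the task (like the AssertionError in the trace
            // above) arrives here, wrapped as the ExecutionException's cause.
            return "failed: " + e.getCause();
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a deadlocked Kafka client call.
        System.out.println(callWithTimeout(() -> { Thread.sleep(10_000); return "streams"; }, 100));
        // Stand-in for a call that fails inside the client library.
        System.out.println(callWithTimeout(() -> { throw new AssertionError("assertion failed"); }, 1_000));
    }
}
```

This is why Chris's error appears as `ExecutionException: java.lang.AssertionError` — the original Kafka failure is carried as the cause of the Future's exception.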

RE: GetKafka blowing up with assertion error in Kafka client code

2016-04-14 Thread McDermott, Chris Kevin (MSDU - STaTS/StorefrontRemote)
I looked at the Kafka client code and it seemed to me to be a bug in the 
caller. There is a map passed that maps topics to the number of consumers. In this 
case it is asserting that the number of consumers is greater than zero. If I can 
repro the problem I'll try to isolate it in the debugger and provide more 
details.



Sent from my Verizon, Samsung Galaxy smartphone



[GitHub] nifi-minifi pull request: MINIFI-15 Created a config file format w...

2016-04-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/8#discussion_r59806518
  
--- Diff: 
minifi-bootstrap/src/main/java/org/apache/nifi/minifi/bootstrap/util/ConfigTransformer.java
 ---
@@ -0,0 +1,571 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.minifi.bootstrap.util;
+
+
+import org.apache.nifi.controller.FlowSerializationException;
+import org.w3c.dom.DOMException;
+import org.w3c.dom.Document;
+import org.w3c.dom.Element;
+import org.yaml.snakeyaml.Yaml;
+
+import javax.xml.parsers.DocumentBuilder;
+import javax.xml.parsers.DocumentBuilderFactory;
+import javax.xml.parsers.ParserConfigurationException;
+import javax.xml.transform.OutputKeys;
+import javax.xml.transform.Transformer;
+import javax.xml.transform.TransformerException;
+import javax.xml.transform.TransformerFactory;
+import javax.xml.transform.TransformerFactoryConfigurationError;
+import javax.xml.transform.dom.DOMSource;
+import javax.xml.transform.stream.StreamResult;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.io.UnsupportedEncodingException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.zip.GZIPOutputStream;
+
+public final class ConfigTransformer {
+// Underlying version NIFI POC will be using
+public static final String NIFI_VERSION = "0.6.0";
+
+public static final String NAME_KEY = "name";
+public static final String COMMENT_KEY = "comment";
+public static final String ALWAYS_SYNC_KEY = "always sync";
+public static final String YIELD_PERIOD_KEY = "yield period";
+public static final String MAX_CONCURRENT_TASKS_KEY = "max concurrent 
tasks";
+public static final String ID_KEY = "id";
+
+public static final String FLOW_CONTROLLER_PROPS_KEY = "Flow 
Controller";
+
+public static final String CORE_PROPS_KEY = "Core Properties";
+public static final String FLOW_CONTROLLER_SHUTDOWN_PERIOD_KEY = "flow 
controller graceful shutdown period";
+public static final String FLOW_SERVICE_WRITE_DELAY_INTERVAL_KEY = 
"flow service write delay interval";
+public static final String ADMINISTRATIVE_YIELD_DURATION_KEY = 
"administrative yield duration";
+public static final String BORED_YIELD_DURATION_KEY = "bored yield 
duration";
+
+public static final String FLOWFILE_REPO_KEY = "FlowFile Repository";
+public static final String PARTITIONS_KEY = "partitions";
+public static final String CHECKPOINT_INTERVAL_KEY = "checkpoint 
interval";
+public static final String THRESHOLD_KEY = "queue swap threshold";
+public static final String SWAP_PROPS_KEY = "Swap";
+public static final String IN_PERIOD_KEY = "in period";
+public static final String IN_THREADS_KEY = "in threads";
+public static final String OUT_PERIOD_KEY = "out period";
+public static final String OUT_THREADS_KEY = "out threads";
+
+
+public static final String CONTENT_REPO_KEY = "Content Repository";
+public static final String CONTENT_CLAIM_MAX_APPENDABLE_SIZE_KEY = 
"content claim max appendable size";
+public static final String CONTENT_CLAIM_MAX_FLOW_FILES_KEY = "content 
claim max flow files";
+
+public static final String COMPONENT_STATUS_REPO_KEY = "Component 
Status Repository";
+public static final String BUFFER_SIZE_KEY = "buffer size";
+public static final String SNAPSHOT_FREQUENCY_KEY = "snapshot 
frequency";
+
+public static final String SECURITY_PROPS_KEY = "Security Properties";
+public static final String KEYSTORE_KEY = "keystore";
+public static final String KEYSTORE_TYPE_KEY = "keyst

[GitHub] nifi-minifi pull request: MINIFI-12 initial commit of http config ...

2016-04-14 Thread JPercivall
GitHub user JPercivall opened a pull request:

https://github.com/apache/nifi-minifi/pull/9

MINIFI-12 initial commit of http config change notifier



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JPercivall/nifi-minifi MINIFI-12

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi/pull/9.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #9


commit eefdacfd2b9f319b754d94f837bc7ca90c963a6e
Author: Joseph Percivall 
Date:   2016-03-31T21:49:40Z

MINIFI-9 initial commit for boostrapping/init process

commit 15184c6832fc5f3760da63468f5bfd8eeb00725a
Author: Joseph Percivall 
Date:   2016-04-14T22:56:39Z

MINIFI-12 initial commit of http config change notifier




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] Release Apache NiFi 0.6.1 (RC2)

2016-04-14 Thread Matt Gilman
+1 (binding)

Build, hashes, signatures, etc all check out. Ran application
standalone/cluster in secure/unsecured mode and everything functioned as
expected.

Matt



[GitHub] nifi pull request: NIFI-361 - Create Processors to mutate JSON dat...

2016-04-14 Thread YolandaMDavis
GitHub user YolandaMDavis opened a pull request:

https://github.com/apache/nifi/pull/354

NIFI-361 - Create Processors to mutate JSON data

This is an initial implementation of the TransformJSON processor using the 
Jolt library. TransformJSON supports Jolt specifications for the following 
transformations:  Chain, Shift, Remove, and Default. Users will be able to add 
the TransformJSON processor, select the transformation they wish to apply and 
enter the specification for the given transformation. 

Details for creating Jolt specifications can be found 
[here](https://github.com/bazaarvoice/jolt)
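
For readers unfamiliar with Jolt, a specification is itself a JSON document. A minimal chain containing one "shift" operation might look like the fragment below (field names are illustrative, adapted from the Jolt project's own examples, and are not taken from this PR):

```json
[
  {
    "operation": "shift",
    "spec": {
      "rating": {
        "primary": {
          "value": "Rating"
        }
      }
    }
  }
]
```

Applied to an input like `{"rating": {"primary": {"value": 3}}}`, this spec walks the input tree and rewrites keys, producing `{"Rating": 3}` — the kind of specification a user would paste into the proposed TransformJSON processor.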

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/YolandaMDavis/nifi NIFI-361

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/354.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #354


commit 68b5d65de0e0b1787a45bea7bcc50fdb2b625655
Author: Yolanda M. Davis 
Date:   2016-04-14T12:19:41Z

NIFI-361 Updates to test for processor including latest master merge

commit 236471961bb577768167d3f16fd99bda3f2f1a54
Author: Yolanda M. Davis 
Date:   2016-04-14T20:18:54Z

NIFI-361 add missing asterisk to documentation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: GetKafka blowing up with assertion error in Kafka client code

2016-04-14 Thread Joe Witt
As with any system-to-system interaction, things can happen.  All
systems, including Kafka, provide facilities that allow the systems writing
to or consuming from them to recover from failure cases.  So let's just
focus on what the config/environment is and do our best to provide
ways to work past these issues.  It doesn't help us or anyone else to
highlight frequent deadlocks, so let's stay focused on what we can
do to help.


Re: GetKafka blowing up with assertion error in Kafka client code

2016-04-14 Thread Oleg Zhurakousky
Chris
That is correct, and for a change I am pretty happy to see this stack trace, as 
it clearly shows the problem and validates the approach we have.
So here are more details...

The root failure is in Kafka (as you can see from the stack trace). All we are 
doing is encapsulating the interaction with Kafka in a cancelable Future so we can 
cancel it if and when Kafka deadlocks (which we noticed happens rather often).
When we execute Future.get() it results in an ExecutionException which carries the 
original Kafka exception (AssertionError).
Now I am not sure what that assertion error really means in the context of what 
you are trying to do, but it's clearly a problem originating in Kafka.
Could you share your config or whatever other details?

Cheers
Oleg

> On Apr 14, 2016, at 4:00 PM, McDermott, Chris Kevin (MSDU - 
> STaTS/StorefrontRemote)  wrote:
> 
> I’m running based off of 0.7.0-SNAPSHOT.  The GetKafka config is pretty 
> generic.  Batch size 1, 1 concurrent task.
> 
> 
> 2016-04-14 19:27:23,204 ERROR [Timer-Driven Process Thread-9] 
> o.apache.nifi.processors.kafka.GetKafka
> java.lang.IllegalStateException: java.util.concurrent.ExecutionException: 
> java.lang.AssertionError: assertion failed
>at 
> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:355) 
> ~[na:na]
>at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>  ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
>  [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
>at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_45]
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> assertion failed
>at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.8.0_45]
>at java.util.concurrent.FutureTask.get(FutureTask.java:206) 
> [na:1.8.0_45]
>at 
> org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:348) 
> ~[na:na]
>... 12 common frames omitted
> Caused by: java.lang.AssertionError: assertion failed
>at scala.Predef$.assert(Predef.scala:165) ~[na:na]
>at 
> kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:51)
>  ~[na:na]
>at 
> kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:49)
>  ~[na:na]
>at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>  ~[na:na]
>at scala.collection.immutable.Map$Map1.foreach(Map.scala:109) ~[na:na]
>at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>  ~[na:na]
>at 
> kafka.consumer.TopicCount$.makeConsumerThreadIdsPerTopic(TopicCount.scala:49) 
> ~[na:na]
>at 
> kafka.consumer.StaticTopicCount.getConsumerThreadIdsPerTopic(TopicCount.scala:113)
>  ~[na:na]
>at 
> kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:226)
>  ~[na:na]
>at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:85)
>  ~[na:na]
>at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:97)
>  ~[na:na]
>at 
> org.apache.nifi.processors.kafka.GetKafka.createConsumers(GetKafka.java:281) 
> ~[na:na]
>at org.apache.nifi.processors.kafka.GetKafka$1.call(GetKafka.java:343) 
> ~[na:na]
>at org.apache.nifi.processors.kafka.GetKafka$1.call(GetKafka.java:340) 
> ~[na:na]
>at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_45]
>

GetKafka blowing up with assertion error in Kafka client code

2016-04-14 Thread McDermott, Chris Kevin (MSDU - STaTS/StorefrontRemote)
I’m running based off of the 0.7.0 snapshot. The GetKafka config is pretty
generic: batch size 1, 1 concurrent task.


2016-04-14 19:27:23,204 ERROR [Timer-Driven Process Thread-9] o.apache.nifi.processors.kafka.GetKafka
java.lang.IllegalStateException: java.util.concurrent.ExecutionException: java.lang.AssertionError: assertion failed
    at org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:355) ~[na:na]
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059) [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123) [nifi-framework-core-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: assertion failed
    at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_45]
    at java.util.concurrent.FutureTask.get(FutureTask.java:206) [na:1.8.0_45]
    at org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:348) ~[na:na]
    ... 12 common frames omitted
Caused by: java.lang.AssertionError: assertion failed
    at scala.Predef$.assert(Predef.scala:165) ~[na:na]
    at kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:51) ~[na:na]
    at kafka.consumer.TopicCount$$anonfun$makeConsumerThreadIdsPerTopic$2.apply(TopicCount.scala:49) ~[na:na]
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) ~[na:na]
    at scala.collection.immutable.Map$Map1.foreach(Map.scala:109) ~[na:na]
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) ~[na:na]
    at kafka.consumer.TopicCount$.makeConsumerThreadIdsPerTopic(TopicCount.scala:49) ~[na:na]
    at kafka.consumer.StaticTopicCount.getConsumerThreadIdsPerTopic(TopicCount.scala:113) ~[na:na]
    at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:226) ~[na:na]
    at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:85) ~[na:na]
    at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:97) ~[na:na]
    at org.apache.nifi.processors.kafka.GetKafka.createConsumers(GetKafka.java:281) ~[na:na]
    at org.apache.nifi.processors.kafka.GetKafka$1.call(GetKafka.java:343) ~[na:na]
    at org.apache.nifi.processors.kafka.GetKafka$1.call(GetKafka.java:340) ~[na:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_45]
    ... 3 common frames omitted


Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)

2016-04-14 Thread McDermott, Chris Kevin (MSDU - STaTS/StorefrontRemote)
I’m seeing this a lot in my logs.  Does anyone have any idea what it is about?

The cluster view in the UI shows all nodes connected.

2016-04-14 19:45:29,310 INFO [Framework Task Thread Thread-2-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2016-04-14 19:45:29,313 WARN [Framework Task Thread Thread-2-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_45]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_45]
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[na:na]
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[na:na]



Re: [VOTE] Release Apache NiFi 0.6.1 (RC2)

2016-04-14 Thread Joe Skora
+1 (non-binding)

* signature and hashes verify
* built fine using JDK1.7.0_80 with contrib-check on Centos 6.7
* build artifacts look good
* deploys and runs as expected


On Thu, Apr 14, 2016 at 9:59 AM, Mark Payne  wrote:

> +1 (binding)
>
> Downloaded and verified signature and hashes.
> Built on OSX with contrib-check and had no problems.
> Verified README, NOTICE, and LICENSE files.
> All looks good to me.
>
> Thanks
> -Mark
>
>
>
> > On Apr 14, 2016, at 8:45 AM, Joe Percivall
>  wrote:
> >
> > +1 (non-binding)
> >
> > Went through helper to verify build. Also verified the zip using the
> SHA-512 that Joe linked in the second helper email. Ran a contrib check
> build on Windows 8 and OSX. Tested a couple templates as well.
> >
> > - - - - - -
> > Joseph Percivall
> > linkedin.com/in/Percivall
> > e: joeperciv...@yahoo.com
> >
> >
> >
> > On Wednesday, April 13, 2016 8:56 PM, Matt Burgess 
> wrote:
> >
> >
> >
> > +1 (non-binding)
> >
> > Ran release verifier, checked artifacts, ran in standalone and 1-node
> cluster (secure and insecure), tried some flows, everything looked fine.
> >
> >
> >
> >> On Apr 12, 2016, at 7:47 PM, Joe Witt  wrote:
> >>
> >> Hello Apache NiFi Community,
> >>
> >> I am pleased to be calling this vote for the source release of Apache
> >> NiFi 0.6.1.
> >>
> >> The source zip, including signatures, digests, etc. can be found at:
> >> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.1/
> >>
> >> The Git tag is nifi-0.6.1-RC2
> >> The Git commit hash is 1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> >> *
> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> >> *
> https://github.com/apache/nifi/commit/1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> >>
> >> Checksums of nifi-0.6.1-source-release.zip:
> >> MD5: 5bb2b80e0384f89e6055ad4b0dd45294
> >> SHA1: b262664ed077f28623866d2a1090a4034dc3c04a
> >>
> >> Release artifacts are signed with the following key:
> >> https://people.apache.org/keys/committer/joewitt.asc
> >>
> >> KEYS file available here:
> >> https://dist.apache.org/repos/dist/release/nifi/KEYS
> >>
> >> 13 issues were closed/resolved for this release:
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12335496
> >> Release note highlights can be found here:
> >>
> https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version0.6.1
> >>
> >> The vote will be open for 72 hours.
> >> Please download the release candidate and evaluate the necessary items
> >> including checking hashes, signatures, build from source, and test. Then
> >> please vote:
> >>
> >> [ ] +1 Release this package as nifi-0.6.1
> >> [ ] +0 no opinion
> >> [ ] -1 Do not release this package because...
> >>
> >> Thanks!
>
>


[GitHub] nifi-minifi pull request: MINIFI-15 Created a config file format w...

2016-04-14 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/8#discussion_r59759228
  
--- Diff: 
minifi-bootstrap/src/main/java/org/apache/nifi/minifi/bootstrap/util/ConfigTransformer.java
 ---
@@ -0,0 +1,571 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.minifi.bootstrap.util;
+
+
+import org.apache.nifi.controller.FlowSerializationException;
+import org.w3c.dom.DOMException;
+import org.w3c.dom.Document;
+import org.w3c.dom.Element;
+import org.yaml.snakeyaml.Yaml;
+
+import javax.xml.parsers.DocumentBuilder;
+import javax.xml.parsers.DocumentBuilderFactory;
+import javax.xml.parsers.ParserConfigurationException;
+import javax.xml.transform.OutputKeys;
+import javax.xml.transform.Transformer;
+import javax.xml.transform.TransformerException;
+import javax.xml.transform.TransformerFactory;
+import javax.xml.transform.TransformerFactoryConfigurationError;
+import javax.xml.transform.dom.DOMSource;
+import javax.xml.transform.stream.StreamResult;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.io.UnsupportedEncodingException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.zip.GZIPOutputStream;
+
+public final class ConfigTransformer {
+// Underlying version NIFI POC will be using
+public static final String NIFI_VERSION = "0.6.0";
+
+public static final String NAME_KEY = "name";
+public static final String COMMENT_KEY = "comment";
+public static final String ALWAYS_SYNC_KEY = "always sync";
+public static final String YIELD_PERIOD_KEY = "yield period";
+public static final String MAX_CONCURRENT_TASKS_KEY = "max concurrent tasks";
+public static final String ID_KEY = "id";
+
+public static final String FLOW_CONTROLLER_PROPS_KEY = "Flow Controller";
+
+public static final String CORE_PROPS_KEY = "Core Properties";
+public static final String FLOW_CONTROLLER_SHUTDOWN_PERIOD_KEY = "flow controller graceful shutdown period";
+public static final String FLOW_SERVICE_WRITE_DELAY_INTERVAL_KEY = "flow service write delay interval";
+public static final String ADMINISTRATIVE_YIELD_DURATION_KEY = "administrative yield duration";
+public static final String BORED_YIELD_DURATION_KEY = "bored yield duration";
+
+public static final String FLOWFILE_REPO_KEY = "FlowFile Repository";
+public static final String PARTITIONS_KEY = "partitions";
+public static final String CHECKPOINT_INTERVAL_KEY = "checkpoint interval";
+public static final String THRESHOLD_KEY = "queue swap threshold";
+public static final String SWAP_PROPS_KEY = "Swap";
+public static final String IN_PERIOD_KEY = "in period";
+public static final String IN_THREADS_KEY = "in threads";
+public static final String OUT_PERIOD_KEY = "out period";
+public static final String OUT_THREADS_KEY = "out threads";
+
+
+public static final String CONTENT_REPO_KEY = "Content Repository";
+public static final String CONTENT_CLAIM_MAX_APPENDABLE_SIZE_KEY = "content claim max appendable size";
+public static final String CONTENT_CLAIM_MAX_FLOW_FILES_KEY = "content claim max flow files";
+
+public static final String COMPONENT_STATUS_REPO_KEY = "Component Status Repository";
+public static final String BUFFER_SIZE_KEY = "buffer size";
+public static final String SNAPSHOT_FREQUENCY_KEY = "snapshot frequency";
+
+public static final String SECURITY_PROPS_KEY = "Security Properties";
+public static final String KEYSTORE_KEY = "keystore";
+public static final String KEYSTORE_TYPE_KEY = "keystore t

Re: Multiple nar/custom processors: advisable directory structure

2016-04-14 Thread Oleg Zhurakousky
Unfortunately I’ll answer the question with a question ;)
Is the additional processor related to the previous one? For example, we have a
single bundle with more than one processor (e.g., Get/PutSomething). If so, you
can create another processor in the same bundle (NAR).
If not, you should start a separate NAR.

Keep in mind that each NAR provides class loader isolation, so another way of
looking at this is: do the two or more processors require different class paths?
Does that help?

Cheers
Oleg

> On Apr 14, 2016, at 11:20 AM, idioma  wrote:
> 
> Hi,
> currently, I have one custom processor + test in a similar folder structure
> in my IDE (IntelliJ):
> 
> -CustomProcessors
>   -nifi-myprocessor-nar
>   -nifi-myprocessor
>  -src
>  -main
>  -java
>  MyProcessor.java
>  -test
>  -MyProcessorTest.java
> 
> I am now in the process to add another processor, what is the best approach?
> Shall I have 2 new folders for the nar and one containing the actual
> processor? I would like to generate a basic structure for the processor (as
> it describes here:
> https://community.hortonworks.com/articles/4318/build-custom-nifi-processor.html).
> Is that advisable when adding another custom processor?
> 
> Thanks,
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-nifi-developer-list.39713.n7.nabble.com/Multiple-nar-custom-processors-advisable-directory-structure-tp9089.html
> Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
> 



[GitHub] nifi pull request: [NIFI-1761] Initial AngularJS application boots...

2016-04-14 Thread scottyaslan
Github user scottyaslan commented on a diff in the pull request:

https://github.com/apache/nifi/pull/331#discussion_r59753289
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/pom.xml
 ---
@@ -628,8 +634,13 @@
 js/d3/**/*,
 js/codemirror/**/*,
 js/jquery/**/*,
+js/**/**/*,
--- End diff --

This was for the new angular libs, I will update it to be more specific: 

js/angular/**/*


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Multiple nar/custom processors: advisable directory structure

2016-04-14 Thread Joe Witt
Hello

You can certainly group your processors together in the same module.
No need to make different module/directories.  You are free to use
whatever package naming structure you need under there.

When deciding whether to include things in the same nar or not
consider their shared dependencies and if there is value in them be
together (they use the same dependencies) or separate (they use
different ones or are likely to diverge).

Thanks
Joe

On Thu, Apr 14, 2016 at 12:27 PM, Russell Bateman
 wrote:
> Welcome Idioma...
>
> 1) You'll want to subsume your new processors under deeper Java packages
> (you probably knew that).
>
> 2) In addition to the Java code, you'll add:
> -src
>   -main
> - resources
>   - META-INF
> - services
>   - org.apache.nifi.processor.Processor
> containing a list of the package paths to each of your new processors.
>
> Does this help?
>
>
> On 04/14/2016 09:20 AM, idioma wrote:
>>
>> Hi,
>> currently, I have one custom processor + test in a similar folder
>> structure
>> in my IDE (IntelliJ):
>>
>> -CustomProcessors
>> -nifi-myprocessor-nar
>> -nifi-myprocessor
>>-src
>>-main
>>-java
>>MyProcessor.java
>>-test
>>-MyProcessorTest.java
>>
>> I am now in the process to add another processor, what is the best
>> approach?
>> Shall I have 2 new folders for the nar and one containing the actual
>> processor? I would like to generate a basic structure for the processor
>> (as
>> it describes here:
>>
>> https://community.hortonworks.com/articles/4318/build-custom-nifi-processor.html).
>> Is that advisable when adding another custom processor?
>>
>> Thanks,
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-nifi-developer-list.39713.n7.nabble.com/Multiple-nar-custom-processors-advisable-directory-structure-tp9089.html
>> Sent from the Apache NiFi Developer List mailing list archive at
>> Nabble.com.
>
>


Re: Multiple nar/custom processors: advisable directory structure

2016-04-14 Thread Russell Bateman

Welcome Idioma...

1) You'll want to subsume your new processors under deeper Java packages 
(you probably knew that).


2) In addition to the Java code, you'll add:
-src
  -main
- resources
  - META-INF
- services
  - org.apache.nifi.processor.Processor
containing a list of the package paths to each of your new processors.

Does this help?

On 04/14/2016 09:20 AM, idioma wrote:

Hi,
currently, I have one custom processor + test in a similar folder structure
in my IDE (IntelliJ):

-CustomProcessors
-nifi-myprocessor-nar
-nifi-myprocessor
   -src
   -main
   -java
   MyProcessor.java
   -test
   -MyProcessorTest.java

I am now in the process to add another processor, what is the best approach?
Shall I have 2 new folders for the nar and one containing the actual
processor? I would like to generate a basic structure for the processor (as
it describes here:
https://community.hortonworks.com/articles/4318/build-custom-nifi-processor.html).
Is that advisable when adding another custom processor?

Thanks,




--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Multiple-nar-custom-processors-advisable-directory-structure-tp9089.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
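The resources layout Russell describes maps onto Java's standard ServiceLoader
mechanism, which NiFi uses to discover processors. A minimal sketch — the module
path and package names below are hypothetical; only the descriptor's location
and file name are fixed:

```shell
# Hypothetical module and package names; only the descriptor path
# (src/main/resources/META-INF/services/org.apache.nifi.processor.Processor)
# is dictated by the ServiceLoader convention NiFi relies on.
mkdir -p nifi-myprocessor/src/main/resources/META-INF/services

# List one fully qualified processor class per line.
cat > nifi-myprocessor/src/main/resources/META-INF/services/org.apache.nifi.processor.Processor <<'EOF'
com.example.processors.MyProcessor
com.example.processors.MyOtherProcessor
EOF
```

Any processor class not listed in this file will compile fine but never show up
in the NiFi UI, which is the usual symptom of forgetting this step.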




Re: [VOTE] Incorporate SHA256 part of release process

2016-04-14 Thread Michael Moser
+1 from me, too.



On Thu, Apr 14, 2016 at 12:12 PM, Pierre Villard <
pierre.villard...@gmail.com> wrote:

> +1
>
> Pierre
>
> 2016-04-14 14:24 GMT+02:00 Joe Percivall :
>
> > +1
> >  - - - - - -
> > Joseph Percivall
> > linkedin.com/in/Percivall
> > e: joeperciv...@yahoo.com
> >
> >
> > On Thursday, April 14, 2016 7:55 AM, Joe Skora 
> > wrote:
> >
> >
> >  +1 for SHA256
> >
> > Whatever process produces the checksums, it would be nice if the checksum
> > files could be made compatible with the "--check" option on the md5sum,
> > sha1sum, and sha256sum commands to simplify validation.
> >
> > That format is "".  With the checksum
> in
> > that format, running "md5sum --check .md5" will checksum
> >  and verify its checksum matches the expectations.  This then
> > outputs either ": OK" or ": FAILED" eliminating the
> > need to eyeball checksums and also making it easier to script the
> > validation if needed.
> >
> >
> >
> > On Wed, Apr 13, 2016 at 11:20 PM, Andy LoPresto <
> > alopresto.apa...@gmail.com>
> > wrote:
> >
> > > Fair enough. OpenSSL is pretty universal, but there are also
> OS-specific
> > > commands to perform the same task.
> > >
> > > Andy LoPresto
> > > alopresto.apa...@gmail.com
> > > PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> > >
> > > > On Apr 13, 2016, at 20:13, Aldrin Piri  wrote:
> > > >
> > > > As far as the wrapper script, I'm in favor of the manual process for
> > the
> > > > SHA256.  The arbitrary shell commands/processes in the Maven build
> feel
> > > too
> > > > brittle across operating systems and this is multiplied in
> conjunction
> > > with
> > > > a maintained follow on script(s).  Overall would prefer just
> incurring
> > > the
> > > > "expense" on the RM to do so manually once these artifacts have been
> > > > generated through the process currently in place.
> > > >
> > > >> On Wed, Apr 13, 2016 at 9:58 PM, Andy LoPresto <
> alopre...@apache.org>
> > > wrote:
> > > >>
> > > >> Tony,
> > > >>
> > > >> That’s definitely a valid concern that I’m sure benefits all release
> > > >> managers to review. The conversation below is regarding the
> checksums
> > > for
> > > >> data integrity only; not the underlying hash used in the GPG
> signature
> > > >> process.
> > > >>
> > > >> Andy LoPresto
> > > >> alopre...@apache.org
> > > >> *alopresto.apa...@gmail.com *
> > > >> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> > > >>
> > > >> On Apr 13, 2016, at 6:50 PM, Tony Kurc  wrote:
> > > >>
> > > >> I was under the impression not using SHA-1 WAS part of our release,
> > > when we
> > > >> were gpg signing (based off of [1]), which I assumed was the
> preferred
> > > form
> > > >> of assuring an artifact was not "bad". However, it looks like it
> isn't
> > > in
> > > >> our checklist to confirm that SHA-1 wasn't used to make the digital
> > > >> signature, and it looks like 0.6.1 is using SHA1.
> > > >>
> > > >>
> > > >> 1. http://www.apache.org/dev/openpgp.html#key-gen-avoid-sha1
> > > >>
> > > >>
> > > >>
> > > >>
> > > >> On Wed, Apr 13, 2016 at 9:13 PM, Aldrin Piri 
> > > wrote:
> > > >>
> > > >> This was mentioned in the vote thread for the RC2 release and wanted
> > to
> > > >> separate it out to keep the release messaging streamlined. As
> > mentioned
> > > by
> > > >> Andy, the MD5 and SHA1 are subject to collisions. From another
> > > viewpoint, I
> > > >> like having this as part of the official release process as I
> > typically
> > > >> generate this myself when updating the associated Homebrew formula
> > with
> > > no
> > > >> real connection to the artifacts created other than me saying so.
> > > >>
> > > >> The drawback is that the Maven plugin that drives the release
> > > >> unfortunately does not support SHA-256.[1] As a result this would
> fall
> > > on
> > > >> the RM to do so but could easily be added to the documentation we
> have
> > > >> until the linked ticket is resolved.
> > > >>
> > > >> This vote will be a lazy consensus and remain open for 72 hours.
> > > >>
> > > >>
> > > >> [1] https://issues.apache.org/jira/browse/MINSTALL-82
> > > >>
> > > >>
> > > >>
> > >
> >
> >
> >
>
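Joe Skora's description of a `--check`-compatible checksum file maps onto the
GNU coreutils checksum tools. A minimal sketch with `sha256sum` — the artifact
name here is a stand-in file created for the demo, not a real release artifact:

```shell
# Stand-in artifact for the demo; a real release would checksum
# e.g. nifi-0.6.1-source-release.zip instead.
printf 'release artifact contents\n' > artifact.zip

# coreutils checksum files use the format "<hash>  <filename>" (two spaces),
# which is exactly what sha256sum emits by default.
sha256sum artifact.zip > artifact.zip.sha256

# Prints "artifact.zip: OK" and exits 0 on a match, "artifact.zip: FAILED"
# and nonzero otherwise, so scripts can branch on the exit status instead
# of eyeballing hashes.
sha256sum --check artifact.zip.sha256
```

`md5sum` and `sha1sum` accept the same `--check` flag, so one pattern covers
all three digests discussed in the thread.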


Multiple nar/custom processors: advisable directory structure

2016-04-14 Thread idioma
Hi,
currently, I have one custom processor + test in a similar folder structure
in my IDE (IntelliJ):

-CustomProcessors
   -nifi-myprocessor-nar
   -nifi-myprocessor
  -src
  -main
  -java
  MyProcessor.java
  -test
  -MyProcessorTest.java

I am now in the process to add another processor, what is the best approach?
Shall I have 2 new folders for the nar and one containing the actual
processor? I would like to generate a basic structure for the processor (as
it describes here:
https://community.hortonworks.com/articles/4318/build-custom-nifi-processor.html).
Is that advisable when adding another custom processor?

Thanks,




--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Multiple-nar-custom-processors-advisable-directory-structure-tp9089.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: [VOTE] Incorporate SHA256 part of release process

2016-04-14 Thread Pierre Villard
+1

Pierre

2016-04-14 14:24 GMT+02:00 Joe Percivall :

> +1
>  - - - - - -
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
>
>
> On Thursday, April 14, 2016 7:55 AM, Joe Skora 
> wrote:
>
>
>  +1 for SHA256
>
> Whatever process produces the checksums, it would be nice if the checksum
> files could be made compatible with the "--check" option on the md5sum,
> sha1sum, and sha256sum commands to simplify validation.
>
> That format is "".  With the checksum in
> that format, running "md5sum --check .md5" will checksum
>  and verify its checksum matches the expectations.  This then
> outputs either ": OK" or ": FAILED" eliminating the
> need to eyeball checksums and also making it easier to script the
> validation if needed.
>
>
>
> On Wed, Apr 13, 2016 at 11:20 PM, Andy LoPresto <
> alopresto.apa...@gmail.com>
> wrote:
>
> > Fair enough. OpenSSL is pretty universal, but there are also OS-specific
> > commands to perform the same task.
> >
> > Andy LoPresto
> > alopresto.apa...@gmail.com
> > PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >
> > > On Apr 13, 2016, at 20:13, Aldrin Piri  wrote:
> > >
> > > As far as the wrapper script, I'm in favor of the manual process for
> the
> > > SHA256.  The arbitrary shell commands/processes in the Maven build feel
> > too
> > > brittle across operating systems and this is multiplied in conjunction
> > with
> > > a maintained follow on script(s).  Overall would prefer just incurring
> > the
> > > "expense" on the RM to do so manually once these artifacts have been
> > > generated through the process currently in place.
> > >
> > >> On Wed, Apr 13, 2016 at 9:58 PM, Andy LoPresto 
> > wrote:
> > >>
> > >> Tony,
> > >>
> > >> That’s definitely a valid concern that I’m sure benefits all release
> > >> managers to review. The conversation below is regarding the checksums
> > for
> > >> data integrity only; not the underlying hash used in the GPG signature
> > >> process.
> > >>
> > >> Andy LoPresto
> > >> alopre...@apache.org
> > >> *alopresto.apa...@gmail.com *
> > >> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> > >>
> > >> On Apr 13, 2016, at 6:50 PM, Tony Kurc  wrote:
> > >>
> > >> I was under the impression not using SHA-1 WAS part of our release,
> > when we
> > >> were gpg signing (based off of [1]), which I assumed was the preferred
> > form
> > >> of assuring an artifact was not "bad". However, it looks like it isn't
> > in
> > >> our checklist to confirm that SHA-1 wasn't used to make the digital
> > >> signature, and it looks like 0.6.1 is using SHA1.
> > >>
> > >>
> > >> 1. http://www.apache.org/dev/openpgp.html#key-gen-avoid-sha1
> > >>
> > >>
> > >>
> > >>
> > >> On Wed, Apr 13, 2016 at 9:13 PM, Aldrin Piri 
> > wrote:
> > >>
> > >> This was mentioned in the vote thread for the RC2 release and wanted
> to
> > >> separate it out to keep the release messaging streamlined. As
> mentioned
> > by
> > >> Andy, the MD5 and SHA1 are subject to collisions. From another
> > viewpoint, I
> > >> like having this as part of the official release process as I
> typically
> > >> generate this myself when updating the associated Homebrew formula
> with
> > no
> > >> real connection to the artifacts created other than me saying so.
> > >>
> > >> The drawback is that the Maven plugin that drives the release
> > >> unfortunately does not support SHA-256.[1] As a result this would fall
> > on
> > >> the RM to do so but could easily be added to the documentation we have
> > >> until the linked ticket is resolved.
> > >>
> > >> This vote will be a lazy consensus and remain open for 72 hours.
> > >>
> > >>
> > >> [1] https://issues.apache.org/jira/browse/MINSTALL-82
> > >>
> > >>
> > >>
> >
>
>
>


[GitHub] nifi pull request: NIFI-1762: Use Lambda Expressions in StandardNi...

2016-04-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/352




[GitHub] nifi pull request: NIFI-1762: Use Lambda Expressions in StandardNi...

2016-04-14 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/352#issuecomment-209976460
  
Looks good! Love being able to write more concise code with Java 8 features. +1




Re: Interested to be a part of this list.

2016-04-14 Thread Joe Witt
Hello

If you go to this link https://nifi.apache.org/mailing_lists.html you can
then click on the 'subscribe' link for the mailing lists you'd like to join
and then send the email.  That registers you.  You are able to register and
unregister yourself.

Hope that helps

Thanks
Joe

On Thu, Apr 14, 2016 at 10:29 AM, Balaji K Hari 
wrote:

>
>
>
>
> Regards,
>
> ___
>
> *Balaji KNV_Hari*
>
> Technical Architect
>
>
>
> This message contains information that may be privileged or confidential
> and is the property of the Capgemini Group. It is intended only for the
> person to whom it is addressed. If you are not the intended recipient, you
> are not authorized to read, print, retain, copy, disseminate, distribute,
> or use this message or any part thereof. If you receive this message in
> error, please notify the sender immediately and delete all copies of this
> message.
>


Interested to be a part of this list.

2016-04-14 Thread Balaji K Hari


Regards,
___
Balaji KNV_Hari
Technical Architect

This message contains information that may be privileged or confidential and is 
the property of the Capgemini Group. It is intended only for the person to whom 
it is addressed. If you are not the intended recipient, you are not authorized 
to read, print, retain, copy, disseminate, distribute, or use this message or 
any part thereof. If you receive this message in error, please notify the 
sender immediately and delete all copies of this message.


[GitHub] nifi pull request: NIFI-1762: Use Lambda Expressions in StandardNi...

2016-04-14 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/352#issuecomment-209967571
  
Reviewing...




[GitHub] nifi pull request: NIFI-361 - Create Processors to mutate JSON dat...

2016-04-14 Thread YolandaMDavis
Github user YolandaMDavis closed the pull request at:

https://github.com/apache/nifi/pull/353




Re: Clarifications/Suggestions on Using NIFI.

2016-04-14 Thread Joe Witt
Hello

You received a near immediate response which you can find here [1].  But
since you are not subscribed to the mailing list you did not receive the
response.  Please subscribe here [2].

[1]
http://mail-archives.apache.org/mod_mbox/nifi-dev/201604.mbox/%3CCALJK9a6h0qv14%3DyqN4mqRDJQYaimwO1V1DviqFntt0Xy2dTYQQ%40mail.gmail.com%3E
[2] https://nifi.apache.org/mailing_lists.html

Thanks
Joe

On Thu, Apr 14, 2016 at 10:03 AM, Balaji K Hari 
wrote:

> Hi Dev Team,
>
>
>
> Good Morning!!
>
>
>
> I hope you can find time in your busy schedule to provide input on the
> questions below.
>
>
>
> It would be really helpful if you could answer these at your earliest
> convenience; based on your answers we will decide whether to try other
> options to achieve the required functionality.
>
>
>
> Thanks a lot for your assistance… Looking forward for your reply.
>
>
>
> Regards,
>
> ___
>
> [image: Email_CBE.gif]*Balaji KNV_Hari*
>
> Technical Architect
>
>
>
> *From:* Balaji K Hari
> *Sent:* Tuesday, April 12, 2016 7:57 PM
> *To:* 'd...@nifi.incubator.apache.org'
> *Subject:* Clarifications/Suggestions on Using NIFI.
> *Importance:* High
>
>
>
> Hi Team,
>
>
>
> Based on our project requirements, I was looking at the different features
> included in Apache NiFi and found that this list would be a good way to
> interact with the development team, who are looking for suggestions and
> input from the user community to improve the product. It is also a great
> medium for NiFi users to get valuable input from the developers on their
> requirements.
>
>
>
> Need your assistance/input on the requirements below and how they can be
> implemented in NiFi.
>
>
>
> -  I have observed that *event-based/trigger-based scheduling* is not
> yet included in the latest NiFi release. Are there any workarounds or
> alternatives to achieve this?
>
> -  Can *Spark/Hive jobs* be scheduled on a time basis and executed
> through NiFi? If yes, how can we do this?
>
> -  Can we pull data from multiple tables in *Oracle/SQL Server/Teradata*
> and write it directly to *S3/HDFS*, or directly to *Redshift or another
> database*? If yes, how can we do this?
>
> -  Can we apply *transformations/manipulations* to the data while moving
> it from RDBMS databases to S3/HDFS? If yes, how can we do this?
>
> -  Can we *validate* the data and detect *duplicate records* before
> putting it into S3/HDFS? *For example, I have moved data from RDBMS
> tables into S3, and as part of the daily loads I need to check whether
> any duplicate records are present in the new load and remove them during
> the data movement itself.* How can we do this?
>
> -  Can you also advise on how we can achieve workflow execution
> dependency? *For example, I have designed one workflow and, depending on
> whether it completes successfully, I need to start either a second
> workflow or a different one.* Can this be achieved in NiFi?
>
>
>
> Any input on the above would be greatly appreciated, as you are the best
> team to help us with solutions and workarounds for NiFi, which we have
> identified as a good, user-friendly product for data ingestion and
> movement.
>
>
>
> Looking forward to your reply with the requested suggestions and
> solutions. Thanks in advance :):)
>
>
>
> Regards,
>
> ___
>
> *Balaji KNV_Hari*
>
> Technical Architect
>
>
>
>



[GitHub] nifi pull request: [NIFI-1761] Initial AngularJS application boots...

2016-04-14 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/331#issuecomment-209961026
  
Ensure breadcrumb placement is correct when bottom banner is visible.




Re: [VOTE] Release Apache NiFi 0.6.1 (RC2)

2016-04-14 Thread Mark Payne
+1 (binding)

Downloaded and verified signature and hashes.
Built on OSX with contrib-check and had no problems.
Verified README, NOTICE, and LICENSE files.
All looks good to me.

Thanks
-Mark



> On Apr 14, 2016, at 8:45 AM, Joe Percivall  
> wrote:
> 
> +1 (non-binding)
> 
> Went through helper to verify build. Also verified the zip using the SHA-512 
> that Joe linked in the second helper email. Ran a contrib check build on 
> Windows 8 and OSX. Tested a couple templates as well.
> 
> - - - - - - 
> Joseph Percivall
> linkedin.com/in/Percivall
> e: joeperciv...@yahoo.com
> 
> 
> 
> On Wednesday, April 13, 2016 8:56 PM, Matt Burgess  
> wrote:
> 
> 
> 
> +1 (non-binding)
> 
> Ran release verifier, checked artifacts, ran in standalone and 1-node cluster 
> (secure and insecure), tried some flows, everything looked fine.
> 
> 
> 
>> On Apr 12, 2016, at 7:47 PM, Joe Witt  wrote:
>> 
>> Hello Apache NiFi Community,
>> 
>> I am pleased to be calling this vote for the source release of Apache
>> NiFi 0.6.1.
>> 
>> The source zip, including signatures, digests, etc. can be found at:
>> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.1/
>> 
>> The Git tag is nifi-0.6.1-RC2
>> The Git commit hash is 1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
>> * 
>> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
>> * 
>> https://github.com/apache/nifi/commit/1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
>> 
>> Checksums of nifi-0.6.1-source-release.zip:
>> MD5: 5bb2b80e0384f89e6055ad4b0dd45294
>> SHA1: b262664ed077f28623866d2a1090a4034dc3c04a
>> 
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/joewitt.asc
>> 
>> KEYS file available here:
>> https://dist.apache.org/repos/dist/release/nifi/KEYS
>> 
>> 13 issues were closed/resolved for this release:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12335496
>> Release note highlights can be found here:
>> https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version0.6.1
>> 
>> The vote will be open for 72 hours.
>> Please download the release candidate and evaluate the necessary items
>> including checking hashes, signatures, build from source, and test. Then
>> please vote:
>> 
>> [ ] +1 Release this package as nifi-0.6.1
>> [ ] +0 no opinion
>> [ ] -1 Do not release this package because...
>> 
>> Thanks!



RE: Clarifications/Suggestions on Using NIFI.

2016-04-14 Thread Balaji K Hari
Hi Dev Team,

Good Morning!!

I hope you can find time in your busy schedule to provide input on the
questions below.

It would be really helpful if you could answer these at your earliest
convenience; based on your answers we will decide whether to try other
options to achieve the required functionality.

Thanks a lot for your assistance... Looking forward to your reply.

Regards,
___
[Email_CBE.gif]Balaji KNV_Hari
Technical Architect

From: Balaji K Hari
Sent: Tuesday, April 12, 2016 7:57 PM
To: 'd...@nifi.incubator.apache.org'
Subject: Clarifications/Suggestions on Using NIFI.
Importance: High

Hi Team,

Based on our project requirements, I was looking at the different features
included in Apache NiFi and found that this list would be a good way to
interact with the development team, who are looking for suggestions and
input from the user community to improve the product. It is also a great
medium for NiFi users to get valuable input from the developers on their
requirements.

Need your assistance/input on the requirements below and how they can be
implemented in NiFi.


- I have observed that event-based/trigger-based scheduling is not yet
included in the latest NiFi release. Are there any workarounds or
alternatives to achieve this?

- Can Spark/Hive jobs be scheduled on a time basis and executed through
NiFi? If yes, how can we do this?

- Can we pull data from multiple tables in Oracle/SQL Server/Teradata and
write it directly to S3/HDFS, or directly to Redshift or another database?
If yes, how can we do this?

- Can we apply transformations/manipulations to the data while moving it
from RDBMS databases to S3/HDFS? If yes, how can we do this?

- Can we validate the data and detect duplicate records before putting it
into S3/HDFS? For example, I have moved data from RDBMS tables into S3,
and as part of the daily loads I need to check whether any duplicate
records are present in the new load and remove them during the data
movement itself. How can we do this?

- Can you also advise on how we can achieve workflow execution dependency?
For example, I have designed one workflow and, depending on whether it
completes successfully, I need to start either a second workflow or a
different one. Can this be achieved in NiFi?

Any input on the above would be greatly appreciated, as you are the best
team to help us with solutions and workarounds for NiFi, which we have
identified as a good, user-friendly product for data ingestion and
movement.

Looking forward to your reply with the requested suggestions and
solutions. Thanks in advance :):)

Regards,
___
Balaji KNV_Hari
Technical Architect



Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread Russell Bateman

Oleg,

Agreed. As I started only a few months ago, I have been using 
AtomicReference and it has been reliable and satisfied all my needs. 
(Just sayin'.)


Best

On 04/14/2016 05:22 AM, Oleg Zhurakousky wrote:

A bit unrelated, but how do you guys feel if we deprecate ObjectHolder so it 
could be gone by 1.0?
AtomicReference has been available since Java 5.

Cheers
Oleg


On Apr 14, 2016, at 5:18 AM, Bryan Bende  wrote:

Hello,

It may be easier to move the load() out of the InputStreamCallback. You
could do something like this...

final ObjectHolder<String> holder = new ObjectHolder<>(null);

session.read(flowFile, new InputStreamCallback() {
    @Override
    public void process(InputStream in) throws IOException {
        // Read the FlowFile content into a String and stash it in the holder
        StringWriter strWriter = new StringWriter();
        IOUtils.copy(in, strWriter, "UTF-8");
        holder.set(strWriter.toString());
    }
});

try {
    load(holder.get());
    session.transfer(flowFile, SUCCESS);
} catch (IOException e) {
    session.transfer(flowFile, FAILURE);
}


-Bryan

On Thu, Apr 14, 2016 at 9:06 AM, idioma  wrote:


Hi,
I have modified my onTrigger in this way:

session.read(flowFile, new InputStreamCallback() {

@Override
public void process(InputStream in) throws IOException {

StringWriter strWriter = new StringWriter();
IOUtils.copy(in, strWriter, "UTF-8");
String contents = strWriter.toString();

try {
load(contents);
} catch (IOException e) {
e.getMessage();
boolean error = true;
throw e;
}
}
});

What I am struggling with is how to send it to a failure or a success
depending on the error being thrown. Any help would be appreciated, thank
you so much.



--
View this message in context:
http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9062.html
Sent from the Apache NiFi Developer List mailing list archive at
Nabble.com.
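Oleg's suggestion above, deprecating ObjectHolder in favor of java.util.concurrent.atomic.AtomicReference, can be sketched outside the NiFi API like this. The StreamCallback interface and read() method below are stand-ins for NiFi's InputStreamCallback and session.read(), used only to keep the example self-contained:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceExample {

    // Stand-in for NiFi's InputStreamCallback so the sketch compiles on its own.
    interface StreamCallback {
        void process(InputStream in) throws IOException;
    }

    // Stand-in for session.read(flowFile, callback).
    static void read(byte[] content, StreamCallback callback) throws IOException {
        callback.process(new ByteArrayInputStream(content));
    }

    // AtomicReference plays the role ObjectHolder played: it carries a value
    // out of the anonymous callback, where captured locals must be final.
    static String readToString(byte[] content) throws IOException {
        final AtomicReference<String> holder = new AtomicReference<>(null);
        read(content, new StreamCallback() {
            @Override
            public void process(InputStream in) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buffer = new byte[4096];
                int len;
                while ((len = in.read(buffer)) != -1) {
                    out.write(buffer, 0, len);
                }
                holder.set(out.toString(StandardCharsets.UTF_8.name()));
            }
        });
        return holder.get();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readToString("hello".getBytes(StandardCharsets.UTF_8)));
    }
}
```

Since AtomicReference#set and #get have the same shape as ObjectHolder's accessors, the migration is largely a drop-in type change.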





[GitHub] nifi pull request: [NIFI-1761] Initial AngularJS application boots...

2016-04-14 Thread mcgilman
Github user mcgilman commented on the pull request:

https://github.com/apache/nifi/pull/331#issuecomment-209945150
  
@scottyaslan A couple things...

- Looks like there is whitespace being inserted at the end of the group name 
in the breadcrumbs.
- The breadcrumb alignment might be off by a pixel or two.
- The bottom of the breadcrumb text is getting clipped.
- Once the breadcrumb text has overflowed and the group changes, the view 
does not appear to reset correctly.
- Looks like the scroll direction changed. Was this intended?




[GitHub] nifi pull request: NIFI-361 - Create Processors to mutate JSON dat...

2016-04-14 Thread YolandaMDavis
GitHub user YolandaMDavis opened a pull request:

https://github.com/apache/nifi/pull/353

NIFI-361 - Create Processors to mutate JSON data

This is an initial implementation of the TransformJSON processor using the 
Jolt library. TransformJSON supports Jolt specifications for the following 
transformations:  Chain, Shift, Remove, and Default. Users will be able to add 
the TransformJSON processor, select the transformation they wish to apply and 
enter the specification for the given transformation. 

Details for creating Jolt specifications can be found 
[here](https://github.com/bazaarvoice/jolt)
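For readers unfamiliar with Jolt, a minimal Shift specification looks like the following. This is an illustrative spec based on the Jolt documentation linked above, not one taken from this PR, and the field names are made up:

```json
[
  {
    "operation": "shift",
    "spec": {
      "id": "user.id",
      "name": "user.name"
    }
  }
]
```

Applied to the input `{"id": 1, "name": "Ann"}`, a spec like this would produce `{"user": {"id": 1, "name": "Ann"}}`: each right-hand-side value names the output path the matched input field is moved to.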

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/YolandaMDavis/nifi NIFI-361

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/353.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #353


commit 3d2c2acae11823dc98cdc06c5066720216185984
Author: Yolanda M. Davis 
Date:   2016-04-14T12:19:41Z

NIFI-361 Initial implementation of TransformJSON using Jolt

commit 3dbe30c7d5799e17c4cb8727f379de4ac36fca65
Author: Yolanda M. Davis 
Date:   2016-04-14T12:20:02Z

NIFI-361 Documentation entry and license update






[GitHub] nifi pull request: [NIFI-1761] Initial AngularJS application boots...

2016-04-14 Thread mcgilman
Github user mcgilman commented on a diff in the pull request:

https://github.com/apache/nifi/pull/331#discussion_r59717798
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/pom.xml
 ---
@@ -628,8 +634,13 @@
 js/d3/**/*,
 js/codemirror/**/*,
 js/jquery/**/*,
+js/**/**/*,
--- End diff --

Why was this line added? It appears to include all the JS resources 
that were aggregated above.




[GitHub] nifi pull request: NIFI-1762: Use Lambda Expressions in StandardNi...

2016-04-14 Thread markap14
GitHub user markap14 opened a pull request:

https://github.com/apache/nifi/pull/352

NIFI-1762: Use Lambda Expressions in StandardNiFiServiceFacade to simplify 
codebase

 Changed Java dependency to 1.8 instead of 1.7 and refactored 
StandardNiFiServiceFacade to make use of lambda expressions to simplify the 
code base. Also had to address a unit test, because changing to Java 8 results 
in calls to assertEquals becoming ambiguous
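The flavor of refactoring described here — replacing anonymous inner classes with lambdas once the build targets Java 8 — can be illustrated with a generic before/after (not the actual StandardNiFiServiceFacade code, which lives in the PR itself):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LambdaExample {

    // Java 7 style: an anonymous inner class for a one-method interface.
    static List<String> sortedJava7(List<String> names) {
        List<String> copy = new ArrayList<>(names);
        Collections.sort(copy, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareTo(b);
            }
        });
        return copy;
    }

    // Java 8 style: the same behavior expressed as a lambda.
    static List<String> sortedJava8(List<String> names) {
        List<String> copy = new ArrayList<>(names);
        copy.sort((a, b) -> a.compareTo(b));
        return copy;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("nifi", "apache", "maven");
        System.out.println(sortedJava7(names)); // [apache, maven, nifi]
        System.out.println(sortedJava8(names)); // [apache, maven, nifi]
    }
}
```

The lambda form removes the Comparator boilerplate without changing behavior, which is the kind of simplification the PR description refers to.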

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/markap14/nifi NIFI-1762

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/352.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #352


commit 31097e82cef5dc0614cffe826e417f39bda4fb69
Author: Mark Payne 
Date:   2016-04-14T13:28:25Z

NIFI-1762: Changed Java dependency to 1.8 instead of 1.7 and refactored 
StandardNiFiServiceFacade to make use of Lambda expressions to simplify code 
base. Also had to address a unit test because changing to Java 8 results in 
calls to assertEquals to become ambiguous






[GitHub] nifi pull request: NIFI-1771 deprecated ObjectHolder

2016-04-14 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/351

NIFI-1771 deprecated ObjectHolder



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1771

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/351.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #351


commit fd5ee2b369067830e56152cb4cc8770e44281dc5
Author: Oleg Zhurakousky 
Date:   2016-04-14T13:19:18Z

NIFI-1771 deprecated ObjectHolder






Re: [VOTE] Release Apache NiFi 0.6.1 (RC2)

2016-04-14 Thread Joe Percivall
+1 (non-binding)

Went through helper to verify build. Also verified the zip using the SHA-512 
that Joe linked in the second helper email. Ran a contrib check build on 
Windows 8 and OSX. Tested a couple templates as well.
 
- - - - - - 
Joseph Percivall
linkedin.com/in/Percivall
e: joeperciv...@yahoo.com



On Wednesday, April 13, 2016 8:56 PM, Matt Burgess  wrote:



+1 (non-binding)

Ran release verifier, checked artifacts, ran in standalone and 1-node cluster 
(secure and insecure), tried some flows, everything looked fine.



> On Apr 12, 2016, at 7:47 PM, Joe Witt  wrote:
> 
> Hello Apache NiFi Community,
> 
> I am pleased to be calling this vote for the source release of Apache
> NiFi 0.6.1.
> 
> The source zip, including signatures, digests, etc. can be found at:
> https://dist.apache.org/repos/dist/dev/nifi/nifi-0.6.1/
> 
> The Git tag is nifi-0.6.1-RC2
> The Git commit hash is 1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> * 
> https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=commit;h=1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> * 
> https://github.com/apache/nifi/commit/1a67b4de2e504bbe1a0cdbd6cccd949f997a5ad5
> 
> Checksums of nifi-0.6.1-source-release.zip:
> MD5: 5bb2b80e0384f89e6055ad4b0dd45294
> SHA1: b262664ed077f28623866d2a1090a4034dc3c04a
> 
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/joewitt.asc
> 
> KEYS file available here:
> https://dist.apache.org/repos/dist/release/nifi/KEYS
> 
> 13 issues were closed/resolved for this release:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12335496
> Release note highlights can be found here:
> https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version0.6.1
> 
> The vote will be open for 72 hours.
> Please download the release candidate and evaluate the necessary items
> including checking hashes, signatures, build from source, and test. Then
> please vote:
> 
> [ ] +1 Release this package as nifi-0.6.1
> [ ] +0 no opinion
> [ ] -1 Do not release this package because...
> 
> Thanks!


Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread Oleg Zhurakousky
Idioma

Keep an eye on this https://issues.apache.org/jira/browse/NIFI-1771 and 
consider using java.util.concurrent.atomic.AtomicReference

Cheers
Oleg

On Apr 14, 2016, at 7:01 AM, idioma <corda.ila...@gmail.com> wrote:

Bryan,
thank you so much, this is absolutely fantastic. I was actually looking for
an easy way to access the content of my load class and I did not know
about ObjectHolder.

Thank you so much



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9066.html
Sent from the Apache NiFi Developer List mailing list archive at 
Nabble.com.




[GitHub] nifi pull request: NIFI-1764 fixed NPE in PutKafka

2016-04-14 Thread olegz
GitHub user olegz opened a pull request:

https://github.com/apache/nifi/pull/350

NIFI-1764 fixed NPE in PutKafka

NIFI-1764 removed obsolete comment for MESSAGE_DELIMITER

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/olegz/nifi NIFI-1764

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/350.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #350


commit a0564c433a076f5ce4378e0518128928aaee5d98
Author: Oleg Zhurakousky 
Date:   2016-04-14T12:30:11Z

NIFI-1764 fixed NPE in PutKafka

NIFI-1764 removed obsolete comment for MESSAGE_DELIMITER






Re: [VOTE] Incorporate SHA256 part of release process

2016-04-14 Thread Joe Percivall
+1 
- - - - - -
Joseph Percivall
linkedin.com/in/Percivall
e: joeperciv...@yahoo.com
 

On Thursday, April 14, 2016 7:55 AM, Joe Skora  wrote:
 

 +1 for SHA256

Whatever process produces the checksums, it would be nice if the checksum
files could be made compatible with the "--check" option on the md5sum,
sha1sum, and sha256sum commands to simplify validation.

That format is "<checksum>  <filename>".  With the checksum in
that format, running "md5sum --check <file>.md5" will checksum
<file> and verify its checksum matches the expectation.  This then
outputs either "<file>: OK" or "<file>: FAILED", eliminating the
need to eyeball checksums and also making it easier to script the
validation if needed.
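For reference, that checksum line format can also be produced programmatically. A minimal Java sketch using the standard MessageDigest API (the file name here is illustrative, and the bytes would normally come from the artifact on disk):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha256Sum {

    // Hex-encode a SHA-256 digest in lowercase, as sha256sum prints it.
    static String sha256Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] content = "example artifact bytes".getBytes(StandardCharsets.UTF_8);
        // "<checksum>  <filename>" (two spaces) is the line format that
        // "sha256sum --check" accepts.
        System.out.println(sha256Hex(content) + "  nifi-0.6.1-source-release.zip");
    }
}
```

Writing that line to nifi-0.6.1-source-release.zip.sha256 would let downloaders verify with a single "sha256sum --check" invocation.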



On Wed, Apr 13, 2016 at 11:20 PM, Andy LoPresto 
wrote:

> Fair enough. OpenSSL is pretty universal, but there are also OS-specific
> commands to perform the same task.
>
> Andy LoPresto
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> > On Apr 13, 2016, at 20:13, Aldrin Piri  wrote:
> >
> > As far as the wrapper script, I'm in favor of the manual process for the
> > SHA256.  The arbitrary shell commands/processes in the Maven build feel
> > too brittle across operating systems and this is multiplied in
> > conjunction with a maintained follow-on script(s).  Overall would prefer
> > just incurring the "expense" on the RM to do so manually once these
> > artifacts have been generated through the process currently in place.
> >
> >> On Wed, Apr 13, 2016 at 9:58 PM, Andy LoPresto 
> wrote:
> >>
> >> Tony,
> >>
> >> That’s definitely a valid concern that I’m sure benefits all release
> >> managers to review. The conversation below is regarding the checksums
> for
> >> data integrity only; not the underlying hash used in the GPG signature
> >> process.
> >>
> >> Andy LoPresto
> >> alopre...@apache.org
> >> alopresto.apa...@gmail.com
> >> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >>
> >> On Apr 13, 2016, at 6:50 PM, Tony Kurc  wrote:
> >>
> >> I was under the impression not using SHA-1 WAS part of our release,
> when we
> >> were gpg signing (based off of [1]), which I assumed was the preferred
> form
> >> of assuring an artifact was not "bad". However, it looks like it isn't
> in
> >> our checklist to confirm that SHA-1 wasn't used to make the digital
> >> signature, and it looks like 0.6.1 is using SHA1.
> >>
> >>
> >> 1. http://www.apache.org/dev/openpgp.html#key-gen-avoid-sha1
> >>
> >>
> >>
> >>
> >> On Wed, Apr 13, 2016 at 9:13 PM, Aldrin Piri 
> wrote:
> >>
> >> This was mentioned in the vote thread for the RC2 release and wanted to
> >> separate it out to keep the release messaging streamlined. As mentioned
> by
> >> Andy, the MD5 and SHA1 are subject to collisions. From another
> viewpoint, I
> >> like having this as part of the official release process as I typically
> >> generate this myself when updating the associated Homebrew formula with
> no
> >> real connection to the artifacts created other than me saying so.
> >>
> >> The drawback is that the Maven plugin that drives the release
> >> unfortunately does not support SHA-256.[1] As a result this would fall
> >> on the RM to do so, but it could easily be added to the documentation
> >> we have until the linked ticket is resolved.
> >>
> >> This vote will be a lazy consensus and remain open for 72 hours.
> >>
> >>
> >> [1] https://issues.apache.org/jira/browse/MINSTALL-82
> >>
> >>
> >>
>

  

Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread idioma
Bryan,
thank you so much, this is absolutely fantastic. I was actually looking for
an easy way to access the content of my load class and I did not know
about ObjectHolder.

Thank you so much



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9066.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: [VOTE] Incorporate SHA256 part of release process

2016-04-14 Thread Joe Skora
+1 for SHA256

Whatever process produces the checksums, it would be nice if the checksum
files could be made compatible with the "--check" option on the md5sum,
sha1sum, and sha256sum commands to simplify validation.

That format is "<checksum>  <filename>".  With the checksum in
that format, running "md5sum --check <filename>.md5" will checksum
<filename> and verify its checksum matches the expectation.  This then
outputs either "<filename>: OK" or "<filename>: FAILED", eliminating the
need to eyeball checksums and also making it easier to script the
validation if needed.
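The coreutils checksum-line format Joe describes is also easy to produce and verify programmatically. Below is a minimal, self-contained Java sketch (not part of the NiFi release tooling; the artifact name and contents are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChecksumCheck {

    // Compute the SHA-256 of a file as lowercase hex, matching the first
    // column of a sha256sum-style checksum line.
    static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Verify one "<checksum>  <filename>" line, the format produced by
    // sha256sum and consumed by "sha256sum --check".
    static boolean verify(String checksumLine, Path dir) throws Exception {
        String[] parts = checksumLine.trim().split("\\s+", 2);
        return parts[0].equalsIgnoreCase(sha256Hex(dir.resolve(parts[1])));
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical artifact name and contents, for illustration only.
        Path dir = Files.createTempDirectory("checksum-demo");
        Path artifact = dir.resolve("nifi-0.6.1-source-release.zip");
        Files.write(artifact, "dummy artifact".getBytes("UTF-8"));

        String line = sha256Hex(artifact) + "  " + artifact.getFileName();
        System.out.println(verify(line, dir)
                ? artifact.getFileName() + ": OK"
                : artifact.getFileName() + ": FAILED");
    }
}
```

Running main prints "nifi-0.6.1-source-release.zip: OK", mirroring the output of "sha256sum --check".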



On Wed, Apr 13, 2016 at 11:20 PM, Andy LoPresto 
wrote:

> Fair enough. OpenSSL is pretty universal, but there are also OS-specific
> commands to perform the same task.
>
> Andy LoPresto
> alopresto.apa...@gmail.com
> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
>
> > On Apr 13, 2016, at 20:13, Aldrin Piri  wrote:
> >
> > As far as the wrapper script goes, I'm in favor of the manual process
> > for the SHA-256.  The arbitrary shell commands/processes in the Maven
> > build feel too brittle across operating systems, and this is multiplied
> > when combined with maintained follow-on scripts.  Overall I would prefer
> > just incurring the "expense" on the RM to do this manually once the
> > artifacts have been generated through the process currently in place.
> >
> >> On Wed, Apr 13, 2016 at 9:58 PM, Andy LoPresto 
> wrote:
> >>
> >> Tony,
> >>
> >> That’s definitely a valid concern that I’m sure all release managers
> >> would benefit from reviewing. The conversation below regards the
> >> checksums for data integrity only, not the underlying hash used in the
> >> GPG signature process.
> >>
> >> Andy LoPresto
> >> alopre...@apache.org
> >> *alopresto.apa...@gmail.com *
> >> PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69
> >>
> >> On Apr 13, 2016, at 6:50 PM, Tony Kurc  wrote:
> >>
> >> I was under the impression that not using SHA-1 WAS part of our
> >> release process when GPG signing (based on [1]), which I assumed was
> >> the preferred way of assuring an artifact was not "bad". However, it
> >> looks like it isn't in our checklist to confirm that SHA-1 wasn't used
> >> to make the digital signature, and it looks like 0.6.1 is using SHA-1.
> >>
> >>
> >> 1. http://www.apache.org/dev/openpgp.html#key-gen-avoid-sha1
> >>
> >>
> >>
> >>
> >> On Wed, Apr 13, 2016 at 9:13 PM, Aldrin Piri 
> wrote:
> >>
> >> This was mentioned in the vote thread for the RC2 release, and I wanted
> >> to separate it out to keep the release messaging streamlined. As
> >> mentioned by Andy, MD5 and SHA-1 are subject to collisions. From
> >> another viewpoint, I like having this as part of the official release
> >> process, as I typically generate this myself when updating the
> >> associated Homebrew formula with no real connection to the artifacts
> >> created other than me saying so.
> >>
> >> The drawback is that the Maven plugin that drives the release
> >> unfortunately does not support SHA-256. [1] As a result this would fall
> >> on the RM, but the step could easily be added to the documentation we
> >> have until the linked ticket is resolved.
> >>
> >> This vote will be a lazy consensus and remain open for 72 hours.
> >>
> >>
> >> [1] https://issues.apache.org/jira/browse/MINSTALL-82
> >>
> >>
> >>
>


Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread Oleg Zhurakousky
A bit unrelated, but how would you guys feel if we deprecated ObjectHolder so
it could be gone by 1.0?
AtomicReference has been available since Java 5.

Cheers
Oleg
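For illustration, the swap Oleg suggests is mechanical: AtomicReference fills the same role ObjectHolder plays in this thread's snippets — a final local that an anonymous inner class can still write to. A minimal, self-contained sketch (the Callback interface here is a stand-in, not NiFi's actual InputStreamCallback):

```java
import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceDemo {

    // Stand-in for a callback interface like NiFi's InputStreamCallback;
    // the real interface takes an InputStream, which isn't needed here.
    interface Callback {
        void process(String input);
    }

    static void read(String input, Callback cb) {
        cb.process(input);
    }

    public static void main(String[] args) {
        // AtomicReference plays exactly the role ObjectHolder does: a final
        // local whose contents an anonymous inner class can still mutate.
        final AtomicReference<String> holder = new AtomicReference<>();
        read("flowfile contents", new Callback() {
            @Override
            public void process(String input) {
                holder.set(input);
            }
        });
        System.out.println(holder.get()); // prints "flowfile contents"
    }
}
```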

> On Apr 14, 2016, at 5:18 AM, Bryan Bende  wrote:
> 
> Hello,
> 
> It may be easier to move the load() out of the InputStreamCallback. You
> could do something like this...
> 
> final ObjectHolder<String> holder = new ObjectHolder<>(null);
> 
> session.read(flowFile, new InputStreamCallback() {
> 
>     @Override
>     public void process(InputStream in) throws IOException {
>         StringWriter strWriter = new StringWriter();
>         IOUtils.copy(in, strWriter, "UTF-8");
>         String contents = strWriter.toString();
>         holder.set(contents);
>     }
> });
> 
> try {
>     load(holder.get());
>     session.transfer(flowFile, SUCCESS);
> } catch (IOException e) {
>     session.transfer(flowFile, FAILURE);
> }
> 
> 
> -Bryan
> 
> On Thu, Apr 14, 2016 at 9:06 AM, idioma  wrote:
> 
>> Hi,
>> I have modified my onTrigger in this way:
>> 
>> session.read(flowFile, new InputStreamCallback() {
>> 
>>@Override
>>public void process(InputStream in) throws IOException {
>> 
>>StringWriter strWriter = new StringWriter();
>>IOUtils.copy(in, strWriter, "UTF-8");
>>String contents = strWriter.toString();
>> 
>>try {
>>load(contents);
>>} catch (IOException e) {
>>e.getMessage();
>>boolean error = true;
>>throw e;
>>}
>>}
>>});
>> 
>> What I am struggling with is how to send it to a failure or a success
>> depending on the error being thrown. Any help would be appreciated, thank
>> you so much.
>> 
>> 
>> 
>> --
>> View this message in context:
>> http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9062.html
>> Sent from the Apache NiFi Developer List mailing list archive at
>> Nabble.com.
>> 



Re: Compression of Data in HDFS

2016-04-14 Thread jamesgreen
Hi Bryan
Thanks for your input, I did get it to work now. Sorry for the delayed
response.

Just to confirm: if it reads from a certain file, compresses it, and writes
the compressed file to the target directory, how does NiFi know that it
has already read from a certain file?
Or does it continue to read from random files?

Thanks

James



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/Compression-of-Data-in-HDFS-tp8821p9061.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread Bryan Bende
Hello,

It may be easier to move the load() out of the InputStreamCallback. You
could do something like this...

final ObjectHolder<String> holder = new ObjectHolder<>(null);

session.read(flowFile, new InputStreamCallback() {

    @Override
    public void process(InputStream in) throws IOException {
        StringWriter strWriter = new StringWriter();
        IOUtils.copy(in, strWriter, "UTF-8");
        String contents = strWriter.toString();
        holder.set(contents);
    }
});

try {
    load(holder.get());
    session.transfer(flowFile, SUCCESS);
} catch (IOException e) {
    session.transfer(flowFile, FAILURE);
}


-Bryan

On Thu, Apr 14, 2016 at 9:06 AM, idioma  wrote:

> Hi,
> I have modified my onTrigger in this way:
>
>  session.read(flowFile, new InputStreamCallback() {
>
> @Override
> public void process(InputStream in) throws IOException {
>
> StringWriter strWriter = new StringWriter();
> IOUtils.copy(in, strWriter, "UTF-8");
> String contents = strWriter.toString();
>
> try {
> load(contents);
> } catch (IOException e) {
> e.getMessage();
> boolean error = true;
> throw e;
> }
> }
> });
>
> What I am struggling with is how to send it to a failure or a success
> depending on the error being thrown. Any help would be appreciated, thank
> you so much.
>
>
>
> --
> View this message in context:
> http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9062.html
> Sent from the Apache NiFi Developer List mailing list archive at
> Nabble.com.
>


Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread idioma
Hi,
I have modified my onTrigger in this way:

 session.read(flowFile, new InputStreamCallback() {

@Override
public void process(InputStream in) throws IOException {

StringWriter strWriter = new StringWriter();
IOUtils.copy(in, strWriter, "UTF-8");
String contents = strWriter.toString();

try {
load(contents);
} catch (IOException e) {
e.getMessage();
boolean error = true;
throw e;
}
}
});

What I am struggling with is how to send it to a failure or a success
depending on the error being thrown. Any help would be appreciated, thank
you so much. 



--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9062.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.


Re: catch commit error in OnTrigger to diversify session behaviour

2016-04-14 Thread idioma
Matt, 
thanks for your reply, but I am not sure I have actually understood what you
mean. In my load method I have the following:

try {
    transaction.commit();
} catch (TitanException ex) {
    System.out.println("This is a failure message");
    transaction.rollback();
}

How are you suggesting I re-throw the exception in onTrigger in order to
transfer to failure, or, when there is no exception, to success?
Can you please provide an example?
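For what it's worth, the usual shape of that suggestion is to have load() roll back and rethrow (wrapped in an IOException) rather than swallow the TitanException, so onTrigger's try/catch can route to SUCCESS or FAILURE. A self-contained sketch — TitanException is replaced with a local stand-in class, and commit/rollback are dummies, since the Titan and NiFi dependencies aren't available here:

```java
import java.io.IOException;

public class RethrowDemo {

    // Local stand-in for Titan's TitanException, so the sketch compiles
    // without the Titan dependency.
    static class TitanException extends RuntimeException {
        TitanException(String msg) { super(msg); }
    }

    // load() rolls back and rethrows as IOException instead of swallowing
    // the error, so the caller (onTrigger) can route to FAILURE.
    static void load(String contents) throws IOException {
        try {
            commit(contents);
        } catch (TitanException ex) {
            rollback();
            throw new IOException("Commit failed", ex);
        }
    }

    // Dummy commit/rollback standing in for the Titan transaction calls.
    static void commit(String contents) {
        if (contents == null) {
            throw new TitanException("nothing to commit");
        }
    }

    static void rollback() {
        System.out.println("This is a failure message; rolled back");
    }

    public static void main(String[] args) {
        // Mirrors onTrigger's routing: success path vs. failure path.
        try {
            load(null);
            System.out.println("route to SUCCESS");
        } catch (IOException e) {
            System.out.println("route to FAILURE: " + e.getMessage());
        }
    }
}
```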

Thanks,

   




--
View this message in context: 
http://apache-nifi-developer-list.39713.n7.nabble.com/catch-commit-error-in-OnTrigger-to-diversify-session-behaviour-tp9027p9060.html
Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.