[jira] [Commented] (NIFI-2751) When a processor pulls a batch of FlowFiles, it keeps pulling from the same inbound connection instead of round-robin'ing

2016-11-14 Thread Koji Kawamura (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663088#comment-15663088
 ] 

Koji Kawamura commented on NIFI-2751:
-

I was trying to review the PR, but couldn't reproduce the ArithmeticException.
The stack trace Mark posted came from BinFiles, which is a superclass of the
MergeContent processor, and since MergeContent requires input connections, I
couldn't set up a flow as Pierre described above.

The only possibility I can think of is a case where MergeContent is scheduled
but, before it polls the incoming FlowFiles, another thread adds more FlowFiles
to a downstream relationship and that relationship becomes full:

{code:title=ProcessContext.pollFromSelfLoopsOnly}
private boolean pollFromSelfLoopsOnly() {
    if (isTriggerWhenAnyDestinationAvailable()) {
        // we can pull from any incoming connection, as long as at least one
        // downstream connection is available for each relationship.
        // I.e., we can poll only from self if no relationships are available
        return !Connectables.anyRelationshipAvailable(connectable);
    } else {
        for (final Connection connection : connectable.getConnections()) {
            // A downstream connection is full. We are only allowed to pull from self-loops.
            if (connection.getFlowFileQueue().isFull()) {  // <-- HERE?
                return true;
            }
        }
    }

    return false;
}
{code}

So I tried running MergeContent with Concurrent Tasks set higher (like 4 or
even 32) and let the 'merged' connection get full, but the ArithmeticException
was not thrown.
The change looks good to me, but since I couldn't reproduce the issue, I can't
give it a +1. I'd like [~markap14] to confirm the fix.

> When a processor pulls a batch of FlowFiles, it keeps pulling from the same 
> inbound connection instead of round-robin'ing
> -
>
> Key: NIFI-2751
> URL: https://issues.apache.org/jira/browse/NIFI-2751
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Pierre Villard
>Priority: Blocker
>  Labels: beginner, easyfix, framework, newbie
> Fix For: 1.1.0
>
>
> When a Processor calls ProcessSession.get(int) or 
> ProcessSession.get(FlowFileFilter), the FlowFiles only come from the first 
> inbound connection, unless that connection is empty or doesn't have enough 
> FlowFiles to complete the get() call. It should instead round-robin between 
> the incoming connections, just as ProcessSession.get() does.
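The round-robin behavior the ticket asks for can be sketched as follows. This is an illustrative Java example under assumed names (`InboundQueues`, `poll`), not NiFi's actual ProcessSession implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of round-robin polling across inbound queues.
// Class and method names are illustrative, not NiFi's actual API.
class InboundQueues<T> {
    private final List<Queue<T>> queues = new ArrayList<>();
    private int nextIndex = 0; // rotates so no single queue is favored

    void add(Queue<T> queue) {
        queues.add(queue);
    }

    List<T> poll(int max) {
        final List<T> results = new ArrayList<>();
        if (queues.isEmpty()) {
            return results;
        }
        int emptyStreak = 0; // stop once every queue came up empty in a row
        while (results.size() < max && emptyStreak < queues.size()) {
            final Queue<T> queue = queues.get(nextIndex);
            nextIndex = (nextIndex + 1) % queues.size(); // advance even on a miss
            final T item = queue.poll();
            if (item == null) {
                emptyStreak++;
            } else {
                emptyStreak = 0;
                results.add(item);
            }
        }
        return results;
    }
}
```

The key point is that the rotation index persists across calls, so a batch `get` never drains one connection while others wait.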



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1215: NIFI-2851: Fixed CheckStyle error.

2016-11-14 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1215
  
Found this as well during my PR. Checked that Koji's fix resolves the 
checkstyle error. Merging. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2851) Improve performance of SplitText

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663166#comment-15663166
 ] 

ASF GitHub Bot commented on NIFI-2851:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1215
  
Found this as well during my PR. Checked that Koji's fix resolves the 
checkstyle error. Merging. 


> Improve performance of SplitText
> 
>
> Key: NIFI-2851
> URL: https://issues.apache.org/jira/browse/NIFI-2851
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Oleg Zhurakousky
> Fix For: 1.1.0
>
>
> SplitText is fairly CPU-intensive and quite slow. A simple flow that splits a 
> 1.4 million line text file into 5k line chunks and then splits those 5k line 
> chunks into 1 line chunks is only capable of pushing through about 10k lines 
> per second. This equates to about 10 MB/sec. JVisualVM shows that the 
> majority of the time is spent in the locateSplitPoint() method. Isolating 
> this code and inspecting how it works, and using some micro-benchmarking, it 
> appears that if we refactor the calls to InputStream.read() to instead read 
> into a byte array, we can improve performance.
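The optimization described above can be illustrated with a small sketch: instead of calling InputStream.read() once per byte, read a chunk into a byte array and scan it in memory. This is a hedged illustration of the general technique, not NiFi's actual locateSplitPoint() code:

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch of the optimization described: scan for line boundaries
// in a byte-array buffer instead of one InputStream.read() call per byte.
// This is not NiFi's actual locateSplitPoint() implementation.
final class LineCounter {
    static long countLinesBuffered(InputStream in) throws IOException {
        final byte[] buf = new byte[8192];
        long lines = 0;
        int n;
        while ((n = in.read(buf)) != -1) { // one bulk read per chunk
            for (int i = 0; i < n; i++) {
                if (buf[i] == '\n') {      // scan the chunk in memory
                    lines++;
                }
            }
        }
        return lines;
    }
}
```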





[GitHub] nifi pull request #1215: NIFI-2851: Fixed CheckStyle error.

2016-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1215




[jira] [Commented] (NIFI-2851) Improve performance of SplitText

2016-11-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663175#comment-15663175
 ] 

ASF subversion and git services commented on NIFI-2851:
---

Commit 13ea909122a39f36b7c63effabca0fe8a43298aa in nifi's branch 
refs/heads/master from [~ijokarumawak]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=13ea909 ]

NIFI-2851: Fixed CheckStyle error.

This closes #1215.

Signed-off-by: Andy LoPresto 







[jira] [Commented] (NIFI-2851) Improve performance of SplitText

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663176#comment-15663176
 ] 

ASF GitHub Bot commented on NIFI-2851:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1215







[GitHub] nifi pull request #1216: NIFI-2654 Enabled encryption coverage for login-ide...

2016-11-14 Thread alopresto
GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/1216

NIFI-2654 Enabled encryption coverage for login-identity-providers.xml.

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.

NIFI-2654 Enabled encryption coverage for login-identity-providers.xml.

Squashed commits:
[5dd22a9] NIFI-2654 Updated administration guide with 
login-identity-providers.xml flags.

Exposed master key retrieval code in NiFiPropertiesLoader.
Added logic to decrypt login identity providers XML configuration.
Updated login-identity-providers.xsd to include encryption scheme 
attribute.
Added unit tests. (+18 squashed commits)
Squashed commits:
[57c815f] NIFI-2654 Resolved issue where empty LIP property elements 
could not be encrypted.
Added unit test and resource.
[27d7309] NIFI-2654 Wired in serialization logic to write logic for LIP.
Added comprehensive unit test for LIP & NFP in same test.
[b450eb2] NIFI-2654 Finalized logic for preserving comments in LIP 
parsing.
[5aa6c9c] NIFI-2654 Added logic for maintaining XML formatting 
(comments and whitespace) for LIP.
Added unit tests (w/o encryption works; w/ does not).
[b53461f] NIFI-2654 Added unit test for full tool invocation migrating 
a login-identity-providers.xml file and updating file and bootstrap.conf with 
key.
[2d9686c] NIFI-2654 Updated tool description and various logging 
statements.
Added unit test for full tool invocation encrypting a 
login-identity-providers.xml file and updating file and bootstrap.conf with key.
[8c67cb2] NIFI-2654 Added logic to encrypt LIP XML content.
Added unit tests.
[8682d19] NIFI-2654 Added logic to handle "empty" (commented) LIP files.
Added unit tests.
[077230e] NIFI-2654 Fixed logic to decrypt multiline and 
multiple-per-line XML elements.
Added unit tests and resources.
[d5bb8da] NIFI-2654 Ignored unit test for unreadable conf directory 
because directory was causing Maven build issues.
Removed test resources.
[7e50506] NIFI-2654 Fixed AESSensitivePropertyProvider bug handling 
cipher text with whitespace.
Added unit test.
[b69a661] NIFI-2654 Fixed AESSensitivePropertyProviderFactoryTest to 
reflect absence of key causes errors.
[6f821b9] NIFI-2654 Added standard password to arbitrary encryption 
test for use in test resources.
[d289ffa] NIFI-2654 Added LIP XML decryption.
Added unit tests.
[a482245] NIFI-2654 Added LIP test resources.
[7204df4] NIFI-2654 Changed logic to only perform properties encryption 
when file path is provided.
[729e1df] NIFI-2654 Removed population of default file locations for 
bootstrap.conf, nifi.properties, and login-identity-providers.xml as not all 
files may be desired.
Added/updated unit tests.
[7dba5ef] NIFI-2654 Started LIP work (arguments & parsing).
Added unit tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-2654-squashed

Alternatively you can review and apply these changes

[jira] [Commented] (NIFI-2654) Encrypted configs should handle login identity provider configs

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663254#comment-15663254
 ] 

ASF GitHub Bot commented on NIFI-2654:
--

GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/1216

NIFI-2654 Enabled encryption coverage for login-identity-providers.xml.


[GitHub] nifi issue #1216: NIFI-2654 Enabled encryption coverage for login-identity-p...

2016-11-14 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1216
  
Ran all tests with both `128-bit` and `256-bit` encryption enabled. I set 
up an LDAP server using Vagrant and set up the `login-identity-providers.xml` 
file with both plain and encrypted configuration values and verified the 
connection was successful. 




[jira] [Commented] (NIFI-2654) Encrypted configs should handle login identity provider configs

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663260#comment-15663260
 ] 

ASF GitHub Bot commented on NIFI-2654:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/1216
  
Ran all tests with both `128-bit` and `256-bit` encryption enabled. I set 
up an LDAP server using Vagrant and set up the `login-identity-providers.xml` 
file with both plain and encrypted configuration values and verified the 
connection was successful. 


> Encrypted configs should handle login identity provider configs
> ---
>
> Key: NIFI-2654
> URL: https://issues.apache.org/jira/browse/NIFI-2654
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration, Tools and Build
>Affects Versions: 1.0.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: config, encryption, ldap, security
> Fix For: 1.1.0
>
>
> The encrypted configuration tool and internal logic to load unprotected 
> values should handle sensitive values contained in the login identity 
> providers (like LDAP Manager Password).
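The general scheme for protecting a sensitive value such as the LDAP Manager Password can be sketched as below. This is a hedged illustration of AES/GCM value protection under assumed conventions (random IV, base64 encoding, a "||" delimiter); it is not the actual NiFi AESSensitivePropertyProvider:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Hedged sketch of protecting a sensitive config value with AES/GCM.
// The IV/delimiter conventions here are assumptions for illustration,
// not NiFi's actual serialization format.
final class PropertyProtector {
    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    PropertyProtector(SecretKey key) {
        this.key = key;
    }

    String protect(String plaintext) throws Exception {
        final byte[] iv = new byte[12]; // 96-bit IV, recommended for GCM
        random.nextBytes(iv);
        final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        final byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(iv) + "||"
                + Base64.getEncoder().encodeToString(ct);
    }

    String unprotect(String protectedValue) throws Exception {
        final String[] parts = protectedValue.split("\\|\\|");
        final byte[] iv = Base64.getDecoder().decode(parts[0]);
        final byte[] ct = Base64.getDecoder().decode(parts[1]);
        final Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ct), StandardCharsets.UTF_8);
    }
}
```

GCM also authenticates the ciphertext, so tampering with the stored value is detected at decrypt time.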





[jira] [Updated] (NIFI-2654) Encrypted configs should handle login identity provider configs

2016-11-14 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-2654:

Status: Patch Available  (was: Open)






[jira] [Commented] (NIFI-1526) Allow components to provide default values for Yield Duration and Run Schedule

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663413#comment-15663413
 ] 

ASF GitHub Bot commented on NIFI-1526:
--

Github user mathiastiberghien commented on the issue:

https://github.com/apache/nifi/pull/1107
  
Hello, it seems that the pull request remains blocked. Should I change 
something?


> Allow components to provide default values for Yield Duration and Run Schedule
> --
>
> Key: NIFI-1526
> URL: https://issues.apache.org/jira/browse/NIFI-1526
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Priority: Minor
>
> It would be nice for developers of processors (and maybe reporting tasks and 
> controller services) to be able to specify a default value for Yield duration 
> and Run Schedule.
> Currently Yield defaults to 1 second and Run Schedule defaults to 0 seconds. 
> There may be cases where these are not the best default values and the 
> developer wants to start off with better defaults, still allowing the user to 
> tune as needed.





[GitHub] nifi issue #1107: origin/NIFI-1526

2016-11-14 Thread mathiastiberghien
Github user mathiastiberghien commented on the issue:

https://github.com/apache/nifi/pull/1107
  
Hello, it seems that the pull request remains blocked. Should I change 
something?





[jira] [Updated] (NIFI-1526) Allow components to provide default values for Yield Duration and Run Schedule

2016-11-14 Thread Joseph Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Witt updated NIFI-1526:
--
Fix Version/s: 1.1.0






[jira] [Commented] (NIFI-1526) Allow components to provide default values for Yield Duration and Run Schedule

2016-11-14 Thread Joseph Witt (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663926#comment-15663926
 ] 

Joseph Witt commented on NIFI-1526:
---

[~mathiastiberghien] thanks for contributing and following up.  I've just 
tagged it to the 1.1.0 release to help put some attention on it.  The quick 
scan I did through the code looks good.  I do think we'll want to update the 
developer guide to describe that this exists and also explain that it is for a 
developer to communicate their recommended typical setting and that users can 
still alter it as necessary.  But that can be in another JIRA and come later in 
my opinion.

There is a merge conflict now.  Could you try resolving that?

[~markap14] any chance you can take a look?  






[GitHub] nifi issue #1202: NIFI-2854: Refactor repositories and swap files to use sch...

2016-11-14 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@joshelser I did actually look into using both Protocol Buffers as well as 
Avro to perform the serialization/deserialization. That really would be 
preferred, as they are both very stable libraries and much more 
"robust"/feature-rich than what we have here. Unfortunately, though, because of 
the way that their readers/writers work, using those would have required some 
pretty intense refactoring of some of the core repository code. This is largely 
because the API that was created for the repository wasn't thought through well 
enough. For example, the RecordWriter has a `writeRecord` method that takes in 
a record to write as well as the OutputStream to write to. The repository itself 
may write to the OutputStream in between records. These libraries wouldn't 
really support that well. So rather than rewrite some of the most critical 
parts of NiFi, I elected to create a much simpler schema-based reader/writer 
approach. It may make sense at some point to revisit this decision, though, and 
refactor the repositories to make them more amenable to this type of thing.
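The constraint described here can be sketched as a minimal interface: the repository owns the OutputStream and may interleave its own bytes between records, so the writer must not assume exclusive ownership of the stream. The names below (`RecordWriter`, `writeRecord`) are illustrative, not NiFi's actual API:

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of a writer contract where the CALLER owns the stream.
// Stream-owning serializers (e.g. Avro/Protobuf file writers) wrap the stream
// themselves and don't accommodate the repository writing between records.
interface RecordWriter<T> {
    void writeRecord(T record, OutputStream out) throws IOException;
}

final class LengthPrefixedStringWriter implements RecordWriter<String> {
    @Override
    public void writeRecord(String record, OutputStream out) throws IOException {
        final DataOutputStream dos = new DataOutputStream(out);
        final byte[] bytes = record.getBytes(StandardCharsets.UTF_8);
        dos.writeInt(bytes.length); // simple schema: length prefix + payload
        dos.write(bytes);
        // deliberately no flush/close of the caller's stream; the repository owns it
    }
}
```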




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663982#comment-15663982
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@joshelser I did actually look into using both Protocol Buffers as well as 
Avro to perform the serialization/deserialization. That really would be 
preferred, as they are both very stable libraries and much more 
"robust"/feature-rich than what we have here. Unfortunately, though, because of 
the way that their readers/writers work, using those would have required some 
pretty intense refactoring of some of the core repository code. This is largely 
because the API that was created for the repository wasn't thought through well 
enough. For example, the RecordWriter has a `writeRecord` method that takes in 
a record to write as well as the OutputStream to write to. The repository itself 
may write to the OutputStream in between records. These libraries wouldn't 
really support that well. So rather than rewrite some of the most critical 
parts of NiFi, I elected to create a much simpler schema-based reader/writer 
approach. It may make sense at some point to review this decision though and 
refactor the repositories to make them more amenable to this type of thing.


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older version, if seen after a rollback, 
> will simply be ignored.
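One way to realize the adherence rules above is a version field in each serialized header: readers accept formats up to the version they know and refuse newer ones they cannot safely interpret. A minimal sketch under that assumption (not NiFi's actual repository code):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hedged sketch of a versioned header check. A reader built for
// CURRENT_VERSION can still read older formats (backward compatible)
// but refuses formats newer than it understands.
final class VersionedHeader {
    static final int CURRENT_VERSION = 2;

    static void write(DataOutputStream out, int version) throws IOException {
        out.writeInt(version);
    }

    static int read(DataInputStream in) throws IOException {
        final int version = in.readInt();
        if (version > CURRENT_VERSION) {
            throw new IOException("Unsupported format version " + version
                    + "; this build understands up to " + CURRENT_VERSION);
        }
        return version; // older versions remain readable
    }
}
```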





[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663986#comment-15663986
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87804314
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FileSystemSwapManager.java
 ---
@@ -210,30 +215,36 @@ public boolean accept(final File dir, final String name) {
     // "--.swap". If we have two dashes, then we can just check if the queue ID is equal
     // to the id of the queue given and if not we can just move on.
     final String[] splits = swapFile.getName().split("-");
-    if (splits.length == 3) {
-        final String queueIdentifier = splits[1];
-        if (!queueIdentifier.equals(flowFileQueue.getIdentifier())) {
-            continue;
+    if (splits.length > 6) {
--- End diff --

Yes - this was broken before. It was never noticed because it was a simple 
performance tweak, but I noticed it as I was stepping through the code.

> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older versions, if seen after a rollback, 
> will simply be ignored.
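As a rough sketch of the append-only, version-gated evolution the flow file repository rule above describes (the class and constant names below are illustrative, not NiFi API; the pattern mirrors the `swapEncodingVersion > 7` guard in FileSystemSwapManager):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch only: a new field may be appended to a record type, but its read
// must be gated on the encoding version so that data written by an older
// version (which never wrote the field) still deserializes, and a rolled-back
// reader falls back to a sensible default.
public class VersionGatedRead {
    static final int VERSION_WITH_MAX_ID = 8; // hypothetical: field added in v8

    public static Long readMaxRecordId(DataInputStream in, int encodingVersion) throws IOException {
        if (encodingVersion >= VERSION_WITH_MAX_ID) {
            return in.readLong(); // newer data carries the field
        }
        return null; // default for data written by an older version
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeLong(42L); // the optional field
        DataInputStream newData = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readMaxRecordId(newData, 8)); // prints 42
        DataInputStream oldData = new DataInputStream(new ByteArrayInputStream(new byte[0]));
        System.out.println(readMaxRecordId(oldData, 7)); // prints null
    }
}
```

Removing a field is the mirror case: the writer may stop emitting it only if readers already treat it as optional with a default.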





[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87804332
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FileSystemSwapManager.java
 ---
@@ -251,353 +262,36 @@ public SwapSummary getSwapSummary(final String swapLocation) throws IOException
             final InputStream bufferedIn = new BufferedInputStream(fis);
             final DataInputStream in = new DataInputStream(bufferedIn)) {
 
-            final int swapEncodingVersion = in.readInt();
-            if (swapEncodingVersion > SWAP_ENCODING_VERSION) {
-                final String errMsg = "Cannot swap FlowFiles in from " + swapFile + " because the encoding version is "
-                    + swapEncodingVersion + ", which is too new (expecting " + SWAP_ENCODING_VERSION + " or less)";
-
-                eventReporter.reportEvent(Severity.ERROR, EVENT_CATEGORY, errMsg);
-                throw new IOException(errMsg);
-            }
-
-            final int numRecords;
-            final long contentSize;
-            Long maxRecordId = null;
-            try {
-                in.readUTF(); // ignore Connection ID
-                numRecords = in.readInt();
-                contentSize = in.readLong();
-
-                if (numRecords == 0) {
-                    return StandardSwapSummary.EMPTY_SUMMARY;
-                }
-
-                if (swapEncodingVersion > 7) {
-                    maxRecordId = in.readLong();
-                }
-            } catch (final EOFException eof) {
-                logger.warn("Found premature End-of-File when reading Swap File {}. EOF occurred before any FlowFiles were encountered", swapLocation);
-                return StandardSwapSummary.EMPTY_SUMMARY;
-            }
-
-            final QueueSize queueSize = new QueueSize(numRecords, contentSize);
-            final SwapContents swapContents = deserializeFlowFiles(in, queueSize, maxRecordId, swapEncodingVersion, true, claimManager, swapLocation);
-            return swapContents.getSummary();
-        }
-    }
-
-    public static int serializeFlowFiles(final List<FlowFileRecord> toSwap, final FlowFileQueue queue, final String swapLocation, final OutputStream destination) throws IOException {
-        if (toSwap == null || toSwap.isEmpty()) {
-            return 0;
-        }
-
-        long contentSize = 0L;
-        for (final FlowFileRecord record : toSwap) {
-            contentSize += record.getSize();
-        }
-
-        // persist record to disk via the swap file
-        final OutputStream bufferedOut = new BufferedOutputStream(destination);
-        final DataOutputStream out = new DataOutputStream(bufferedOut);
-        try {
-            out.writeInt(SWAP_ENCODING_VERSION);
-            out.writeUTF(queue.getIdentifier());
-            out.writeInt(toSwap.size());
-            out.writeLong(contentSize);
-
-            // get the max record id and write that out so that we know it quickly for restoration
-            long maxRecordId = 0L;
-            for (final FlowFileRecord flowFile : toSwap) {
-                if (flowFile.getId() > maxRecordId) {
-                    maxRecordId = flowFile.getId();
-                }
-            }
-
-            out.writeLong(maxRecordId);
-
-            for (final FlowFileRecord flowFile : toSwap) {
-                out.writeLong(flowFile.getId());
-                out.writeLong(flowFile.getEntryDate());
-                out.writeLong(flowFile.getLineageStartDate());
-                out.writeLong(flowFile.getLineageStartIndex());
-                out.writeLong(flowFile.getLastQueueDate());
-                out.writeLong(flowFile.getQueueDateIndex());
-                out.writeLong(flowFile.getSize());
-
-                final ContentClaim claim = flowFile.getContentClaim();
-                if (claim == null) {
-                    out.writeBoolean(false);
-                } else {
-                    out.writeBoolean(true);
-                    final ResourceClaim resourceClaim = claim.getResourceClaim();
-                    out.writeUTF(resourceClaim.getId());
-                    out.writeUTF(resourceClaim.getContainer());
-                    out.writeUTF(resourceClaim.getSection());
-                    out.writeLong(claim.getOffset());
-                    out.writeLong(claim.getLength());
-                    out.writeLong(flowFile.getContentClaimOffset());
-                    out.writeBoolean(resourceClaim.isLossTolerant());
-                }
-
-

[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87804314
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FileSystemSwapManager.java
 ---
@@ -210,30 +215,36 @@ public boolean accept(final File dir, final String name) {
                 // "--.swap". If we have two dashes, then we can just check if the queue ID is equal
                 // to the id of the queue given and if not we can just move on.
                 final String[] splits = swapFile.getName().split("-");
-                if (splits.length == 3) {
-                    final String queueIdentifier = splits[1];
-                    if (!queueIdentifier.equals(flowFileQueue.getIdentifier())) {
-                        continue;
+                if (splits.length > 6) {
--- End diff --

Yes - this was broken before. It was never noticed because it was a simple 
performance tweak, but I noticed it as I was stepping through the code.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663987#comment-15663987
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87804332
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/FileSystemSwapManager.java
 ---
@@ -251,353 +262,36 @@ public SwapSummary getSwapSummary(final String swapLocation) throws IOException
             final InputStream bufferedIn = new BufferedInputStream(fis);
             final DataInputStream in = new DataInputStream(bufferedIn)) {
 
-            final int swapEncodingVersion = in.readInt();
-            if (swapEncodingVersion > SWAP_ENCODING_VERSION) {
-                final String errMsg = "Cannot swap FlowFiles in from " + swapFile + " because the encoding version is "
-                    + swapEncodingVersion + ", which is too new (expecting " + SWAP_ENCODING_VERSION + " or less)";
-
-                eventReporter.reportEvent(Severity.ERROR, EVENT_CATEGORY, errMsg);
-                throw new IOException(errMsg);
-            }
-
-            final int numRecords;
-            final long contentSize;
-            Long maxRecordId = null;
-            try {
-                in.readUTF(); // ignore Connection ID
-                numRecords = in.readInt();
-                contentSize = in.readLong();
-
-                if (numRecords == 0) {
-                    return StandardSwapSummary.EMPTY_SUMMARY;
-                }
-
-                if (swapEncodingVersion > 7) {
-                    maxRecordId = in.readLong();
-                }
-            } catch (final EOFException eof) {
-                logger.warn("Found premature End-of-File when reading Swap File {}. EOF occurred before any FlowFiles were encountered", swapLocation);
-                return StandardSwapSummary.EMPTY_SUMMARY;
-            }
-
-            final QueueSize queueSize = new QueueSize(numRecords, contentSize);
-            final SwapContents swapContents = deserializeFlowFiles(in, queueSize, maxRecordId, swapEncodingVersion, true, claimManager, swapLocation);
-            return swapContents.getSummary();
-        }
-    }
-
-    public static int serializeFlowFiles(final List<FlowFileRecord> toSwap, final FlowFileQueue queue, final String swapLocation, final OutputStream destination) throws IOException {
-        if (toSwap == null || toSwap.isEmpty()) {
-            return 0;
-        }
-
-        long contentSize = 0L;
-        for (final FlowFileRecord record : toSwap) {
-            contentSize += record.getSize();
-        }
-
-        // persist record to disk via the swap file
-        final OutputStream bufferedOut = new BufferedOutputStream(destination);
-        final DataOutputStream out = new DataOutputStream(bufferedOut);
-        try {
-            out.writeInt(SWAP_ENCODING_VERSION);
-            out.writeUTF(queue.getIdentifier());
-            out.writeInt(toSwap.size());
-            out.writeLong(contentSize);
-
-            // get the max record id and write that out so that we know it quickly for restoration
-            long maxRecordId = 0L;
-            for (final FlowFileRecord flowFile : toSwap) {
-                if (flowFile.getId() > maxRecordId) {
-                    maxRecordId = flowFile.getId();
-                }
-            }
-
-            out.writeLong(maxRecordId);
-
-            for (final FlowFileRecord flowFile : toSwap) {
-                out.writeLong(flowFile.getId());
-                out.writeLong(flowFile.getEntryDate());
-                out.writeLong(flowFile.getLineageStartDate());
-                out.writeLong(flowFile.getLineageStartIndex());
-                out.writeLong(flowFile.getLastQueueDate());
-                out.writeLong(flowFile.getQueueDateIndex());
-                out.writeLong(flowFile.getSize());
-
-                final ContentClaim claim = flowFile.getContentClaim();
-                if (claim == null) {
-                    out.writeBoolean(false);
-                } else {
-                    out.writeBoolean(true);
-                    final ResourceClaim resourceClaim = claim.getResourceClaim();
-                    out.writeUTF(resourceClaim.getId());
-                    out.writeUTF(resourceClaim.getContainer());
-                    out.writeUTF(resourceClaim.getSection());
-                    out.writeLong(claim.getOffset());

[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87806305
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/WriteAheadRepositoryRecordSerde.java
 ---
@@ -0,0 +1,517 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.repository;
+
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.controller.repository.claim.StandardContentClaim;
+import org.apache.nifi.flowfile.FlowFile;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.wali.SerDe;
+import org.wali.UpdateType;
+
+public class WriteAheadRepositoryRecordSerde extends RepositoryRecordSerde implements SerDe<RepositoryRecord> {
+    private static final Logger logger = LoggerFactory.getLogger(WriteAheadRepositoryRecordSerde.class);
+
+    private static final int CURRENT_ENCODING_VERSION = 9;
+
+    public static final byte ACTION_CREATE = 0;
+    public static final byte ACTION_UPDATE = 1;
+    public static final byte ACTION_DELETE = 2;
+    public static final byte ACTION_SWAPPED_OUT = 3;
+    public static final byte ACTION_SWAPPED_IN = 4;
+
+    private long recordsRestored = 0L;
+    private final ResourceClaimManager claimManager;
+
+    public WriteAheadRepositoryRecordSerde(final ResourceClaimManager claimManager) {
+        this.claimManager = claimManager;
+    }
+
+    @Override
+    public void serializeEdit(final RepositoryRecord previousRecordState, final RepositoryRecord record, final DataOutputStream out) throws IOException {
+        serializeEdit(previousRecordState, record, out, false);
+    }
+
+    public void serializeEdit(final RepositoryRecord previousRecordState, final RepositoryRecord record, final DataOutputStream out, final boolean forceAttributesWritten) throws IOException {
+        if (record.isMarkedForAbort()) {
+            logger.warn("Repository Record {} is marked to be aborted; it will be persisted in the FlowFileRepository as a DELETE record", record);
+            out.write(ACTION_DELETE);
+            out.writeLong(getRecordIdentifier(record));
+            serializeContentClaim(record.getCurrentClaim(), record.getCurrentClaimOffset(), out);
+            return;
+        }
+
+        final UpdateType updateType = getUpdateType(record);
+
+        if (updateType.equals(UpdateType.DELETE)) {
+            out.write(ACTION_DELETE);
+            out.writeLong(getRecordIdentifier(record));
+            serializeContentClaim(record.getCurrentClaim(), record.getCurrentClaimOffset(), out);
+            return;
+        }
+
+        // If there's a Destination Connection, that's the one that we want to associated with this record.
+        // However, on restart, we will restore the FlowFile and set this connection to its "originalConnection".
+        // If we then serialize the FlowFile again before it's transferred, it's important to allow this to happen,
+        // so we use the originalConnection instead
+        FlowFileQueue associatedQueue = record.getDestination();
+        if (associatedQueue == null) {
+            associatedQueue = record.getOriginalQueue();
+        }
+
+        if (updateType.equals(UpdateType.SWAP_OUT)) {
+            out.write(ACTION_SWAPPED_OUT);
+            out.writeLong(getRecordIdentifier(record));

[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87806443
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/swap/SimpleSwapSerializer.java
 ---
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class SimpleSwapSerializer implements SwapSerializer {
--- End diff --

True. Will do so.




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664011#comment-15664011
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87806305
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/WriteAheadRepositoryRecordSerde.java
 ---
@@ -0,0 +1,517 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.repository;
+
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.controller.repository.claim.StandardContentClaim;
+import org.apache.nifi.flowfile.FlowFile;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.wali.SerDe;
+import org.wali.UpdateType;
+
+public class WriteAheadRepositoryRecordSerde extends RepositoryRecordSerde implements SerDe<RepositoryRecord> {
+    private static final Logger logger = LoggerFactory.getLogger(WriteAheadRepositoryRecordSerde.class);
+
+    private static final int CURRENT_ENCODING_VERSION = 9;
+
+    public static final byte ACTION_CREATE = 0;
+    public static final byte ACTION_UPDATE = 1;
+    public static final byte ACTION_DELETE = 2;
+    public static final byte ACTION_SWAPPED_OUT = 3;
+    public static final byte ACTION_SWAPPED_IN = 4;
+
+    private long recordsRestored = 0L;
+    private final ResourceClaimManager claimManager;
+
+    public WriteAheadRepositoryRecordSerde(final ResourceClaimManager claimManager) {
+        this.claimManager = claimManager;
+    }
+
+    @Override
+    public void serializeEdit(final RepositoryRecord previousRecordState, final RepositoryRecord record, final DataOutputStream out) throws IOException {
+        serializeEdit(previousRecordState, record, out, false);
+    }
+
+    public void serializeEdit(final RepositoryRecord previousRecordState, final RepositoryRecord record, final DataOutputStream out, final boolean forceAttributesWritten) throws IOException {
+        if (record.isMarkedForAbort()) {
+            logger.warn("Repository Record {} is marked to be aborted; it will be persisted in the FlowFileRepository as a DELETE record", record);
+            out.write(ACTION_DELETE);
+            out.writeLong(getRecordIdentifier(record));
+            serializeContentClaim(record.getCurrentClaim(), record.getCurrentClaimOffset(), out);
+            return;
+        }
+
+        final UpdateType updateType = getUpdateType(record);
+
+        if (updateType.equals(UpdateType.DELETE)) {
+            out.write(ACTION_DELETE);
+            out.writeLong(getRecordIdentifier(record));
+            serializeContentClaim(record.getCurrentClaim(), record.getCurrentClaimOffset(), out);
+            return;
+        }
+
+        // If there's a Destination Connection, that's the one that we want to associated with this record.
+        // However, on restart, we will restore the FlowFile and set this connection to its "originalConnection".
+        // If we then serialize the FlowFile again before it's transferred, it's important to allow this to happen,
+        // so we use the originalConnection instead
+        FlowFileQueue associatedQueue = record.getDestination();
+        if (associatedQueue == null) {

[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664017#comment-15664017
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87806443
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/swap/SimpleSwapSerializer.java
 ---
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class SimpleSwapSerializer implements SwapSerializer {
--- End diff --

True. Will do so.


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older versions, if seen after a rollback, 
> will simply be ignored.

[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread devriesb
Github user devriesb commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87807081
  
--- Diff: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/stream/io/BufferedInputStream.java
 ---
@@ -16,19 +16,445 @@
  */
 package org.apache.nifi.stream.io;
 
+import java.io.IOException;
 import java.io.InputStream;
 
 /**
  * This class is a slight modification of the BufferedInputStream in the java.io package. The modification is that this implementation does not provide synchronization on method calls, which means
  * that this class is not suitable for use by multiple threads. However, the absence of these synchronized blocks results in potentially much better performance.
  */
-public class BufferedInputStream extends java.io.BufferedInputStream {
+public class BufferedInputStream extends InputStream {
--- End diff --

I agree with Joe... we've had instances where eclipse (and I assume IntelliJ 
/ other IDEs) suggests the nifi version of BufferedInputStream, resulting in 
bad, unexpected behavior. The name is really unfortunate. Perhaps move the 
functionality to "UnsynchronizedBufferedInputStream", modify nifi's 
BufferedInputStream to be an empty extension, and deprecate it. Then complete 
the rename / removal in a future release (like a major version bump, if the 
breaking change is the concern...).
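A minimal sketch of that rename-and-deprecate path, with invented class bodies (the buffering logic itself is elided; only the deprecation pattern is shown, not the actual PR change):

```java
// Sketch only: move the real logic to an unambiguous name, keep the old
// name as an empty @Deprecated extension so existing imports keep
// compiling, then drop the old name at the next major version bump.

class UnsynchronizedBufferedInputStream extends java.io.FilterInputStream {
    // The real (unsynchronized) buffering implementation would live here;
    // plain delegation stands in for it in this sketch.
    UnsynchronizedBufferedInputStream(java.io.InputStream in) {
        super(in);
    }
}

/**
 * @deprecated use UnsynchronizedBufferedInputStream; this name shadows
 * java.io.BufferedInputStream and IDEs import it by accident.
 */
@Deprecated
class BufferedInputStream extends UnsynchronizedBufferedInputStream {
    BufferedInputStream(java.io.InputStream in) {
        super(in);
    }
}

public class DeprecationSketch {
    public static void main(String[] args) throws java.io.IOException {
        // Old call sites still compile and behave the same, just with a
        // deprecation warning nudging them toward the new name.
        try (BufferedInputStream in = new BufferedInputStream(new java.io.ByteArrayInputStream(new byte[] { 7 }))) {
            System.out.println(in.read()); // prints 7
        }
    }
}
```

The empty extension preserves binary and source compatibility within the major release line, which is the concern raised above.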




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664029#comment-15664029
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user devriesb commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87807081
  
--- Diff: 
nifi-commons/nifi-utils/src/main/java/org/apache/nifi/stream/io/BufferedInputStream.java
 ---
@@ -16,19 +16,445 @@
  */
 package org.apache.nifi.stream.io;
 
+import java.io.IOException;
 import java.io.InputStream;
 
 /**
  * This class is a slight modification of the BufferedInputStream in the java.io package. The modification is that this implementation does not provide synchronization on method calls, which means
  * that this class is not suitable for use by multiple threads. However, the absence of these synchronized blocks results in potentially much better performance.
  */
-public class BufferedInputStream extends java.io.BufferedInputStream {
+public class BufferedInputStream extends InputStream {
--- End diff --

I agree with Joe... we've had instances where eclipse (and I assume IntelliJ 
/ other IDEs) suggests the nifi version of BufferedInputStream, resulting in 
bad, unexpected behavior. The name is really unfortunate. Perhaps move the 
functionality to "UnsynchronizedBufferedInputStream", modify nifi's 
BufferedInputStream to be an empty extension, and deprecate it. Then complete 
the rename / removal in a future release (like a major version bump, if the 
breaking change is the concern...).


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older version, if seen after a rollback, 
> will simply be ignored.
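The compatibility rules above hinge on one property: an older reader must be able to skip fields a newer writer added. A minimal, illustrative sketch of that property (this is not NiFi's actual serialization format; all names here are invented):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionedRecordSketch {
    // "Newer" writer: known fields first, new optional field last, the whole
    // payload bounded by a length prefix.
    static byte[] write(final long id, final long size, final String newField) throws IOException {
        final ByteArrayOutputStream payloadBytes = new ByteArrayOutputStream();
        final DataOutputStream payload = new DataOutputStream(payloadBytes);
        payload.writeLong(id);
        payload.writeLong(size);
        payload.writeUTF(newField); // field an older reader knows nothing about

        final ByteArrayOutputStream recordBytes = new ByteArrayOutputStream();
        final DataOutputStream record = new DataOutputStream(recordBytes);
        record.writeInt(payloadBytes.size()); // length prefix bounds the record
        record.write(payloadBytes.toByteArray());
        return recordBytes.toByteArray();
    }

    // "Older" reader: consumes only the fields it knows, then skips the rest,
    // so a record written by a newer version still deserializes after rollback.
    static long[] readOld(final byte[] bytes) throws IOException {
        final DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        final int payloadLength = in.readInt();
        final long id = in.readLong();
        final long size = in.readLong();
        in.skipBytes(payloadLength - 16); // skip fields added after this version
        return new long[] {id, size};
    }

    public static void main(final String[] args) throws IOException {
        final long[] known = readOld(write(42L, 1024L, "added-in-1.2.0"));
        System.out.println(known[0] + " " + known[1]); // prints "42 1024"
    }
}
```

Because the length prefix bounds each record, unknown trailing fields are simply skipped; removing or reordering the known fields would break this, which is why the rules forbid it.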



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87807209
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/controller/swap/TestSimpleSwapSerializerDeserializer.java
 ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.DataInputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.file.Files;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.SwapContents;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import 
org.apache.nifi.controller.repository.claim.StandardResourceClaimManager;
+import org.apache.nifi.stream.io.NullOutputStream;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+public class TestSimpleSwapSerializerDeserializer {
+@Before
+public void setup() {
+TestFlowFile.resetIdGenerator();
+}
+
+@Test
+public void testRoundTripSerializeDeserialize() throws IOException {
+final ResourceClaimManager resourceClaimManager = new 
StandardResourceClaimManager();
+
+final List<FlowFileRecord> toSwap = new ArrayList<>(1);
+final Map<String, String> attrs = new HashMap<>();
+for (int i = 0; i < 1; i++) {
+attrs.put("i", String.valueOf(i));
+final FlowFileRecord ff = new TestFlowFile(attrs, i, 
resourceClaimManager);
+toSwap.add(ff);
+}
+
+final FlowFileQueue flowFileQueue = 
Mockito.mock(FlowFileQueue.class);
+
Mockito.when(flowFileQueue.getIdentifier()).thenReturn("87bb99fe-412c-49f6-a441-d1b0af4e20b4");
+
+final String swapLocation = "target/testRoundTrip-" + 
UUID.randomUUID().toString() + ".swap";
+final File swapFile = new File(swapLocation);
+
+Files.deleteIfExists(swapFile.toPath());
+try {
+final SimpleSwapSerializer serializer = new 
SimpleSwapSerializer();
+try (final FileOutputStream fos = new 
FileOutputStream(swapFile)) {
+serializer.serializeFlowFiles(toSwap, flowFileQueue, 
swapLocation, fos);
+}
+
+final SimpleSwapDeserializer deserializer = new 
SimpleSwapDeserializer();
+final SwapContents swappedIn;
+try (final FileInputStream fis = new FileInputStream(swapFile);
+final DataInputStream dis = new DataInputStream(fis)) {
+swappedIn = deserializer.deserializeFlowFiles(dis, 
swapLocation, flowFileQueue, resourceClaimManager);
+}
+
+assertEquals(toSwap.size(), swappedIn.getFlowFiles().size());
+for (int i = 0; i < toSwap.size(); i++) {
+final FlowFileRecord pre = toSwap.get(i);
+final FlowFileRecord post = 
swappedIn.getFlowFiles().get(i);
+
+assertEquals(pre.getSize(), post.getSize());
+assertEquals(pre.getAttributes(), post.getAttributes());
+assertEquals(pre.getSize(), post.getSize());
+assertEquals(pre.getId(), post.getId());
+assertEquals(pre.getContentClaim(), 
post.getContentClaim());
+assertEquals(pre.getContentClaimOffset(), 
post.getContentClaimOffset());
+assertEquals(pre.getEntryDate(), post.getEntryDate());

[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664036#comment-15664036
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87807516
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/main/java/org/apache/nifi/provenance/schema/FieldSerializer.java
 ---
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.provenance.schema;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+import org.apache.nifi.provenance.ProvenanceEventRecord;
+
+public interface FieldSerializer {
--- End diff --

Yes - good catch.




[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87808270
  
--- Diff: 
nifi-nar-bundles/nifi-provenance-repository-bundle/nifi-persistent-provenance-repository/src/test/java/org/apache/nifi/provenance/TestPersistentProvenanceRepository.java
 ---
@@ -1914,112 +1916,6 @@ public void 
testFailureToCreateWriterDoesNotPreventSubsequentRollover() throws I
 }
 
 
-@Test
-public void testBehaviorOnOutOfMemory() throws IOException, 
InterruptedException {
--- End diff --

This test was assuming that the Record Writer was the SimpleRecordWriter 
and really made some other assumptions that it shouldn't make. It wasn't a 
well-written unit test, in general, as it was assuming dependencies between 
classes, so was really more of an integration test. I considered refactoring to 
make it work but then decided to simply remove it, since we already have a JIRA 
anyway for fixing how NiFi handles OOME by letting the process die and having 
the bootstrap restart it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---





[jira] [Created] (NIFI-3031) Support Multi-Statement Scripts in the PutHiveQL Processor

2016-11-14 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-3031:
--

 Summary: Support Multi-Statement Scripts in the PutHiveQL Processor
 Key: NIFI-3031
 URL: https://issues.apache.org/jira/browse/NIFI-3031
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Matt Burgess


Trying to use the PutHiveQL processor to execute a HiveQL script that contains 
multiple statements.

E.g.:

USE my_database;

FROM my_database_src.base_table
INSERT OVERWRITE refined_table
SELECT *;

-- or --

use my_database;

create temporary table WORKING as
select a,b,c from RAW;

FROM RAW
INSERT OVERWRITE refined_table
SELECT *;

The current implementation fails even when a single statement ends with a 
semicolon.

Either use a default delimiter like a semicolon to mark the boundaries of each 
statement within the file, or allow users to define their own.

This enables building testable pipelines by sourcing HiveQL from files rather 
than embedding it in the product, and the scripts can be complex.  Each 
statement should run sequentially and be part of the same JDBC session to 
ensure things like "temporary" tables will work.
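A minimal sketch of the delimiter-splitting approach, assuming a plain string delimiter (a real implementation would also need to ignore delimiters inside string literals and comments; the class and method names here are invented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class HiveScriptSplitter {
    // Split a script into individual statements on a configurable delimiter,
    // dropping empty fragments (e.g. from a trailing semicolon).
    static List<String> split(final String script, final String delimiter) {
        final List<String> statements = new ArrayList<>();
        for (final String raw : script.split(Pattern.quote(delimiter))) {
            final String stmt = raw.trim();
            if (!stmt.isEmpty()) {
                statements.add(stmt);
            }
        }
        return statements;
    }

    // In the processor, each statement would then run in order on a single
    // JDBC connection so session state (e.g. temporary tables) carries over:
    //
    //   try (Statement st = connection.createStatement()) {
    //       for (String sql : split(script, ";")) {
    //           st.execute(sql);
    //       }
    //   }

    public static void main(final String[] args) {
        final String script = "use my_database;\n"
                + "create temporary table WORKING as select a,b,c from RAW;\n"
                + "FROM RAW INSERT OVERWRITE refined_table SELECT *;";
        System.out.println(split(script, ";").size()); // prints 3
    }
}
```

Running all statements on one connection, in order, is what makes session-scoped constructs like `USE` and temporary tables behave as the script author expects.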





[GitHub] nifi issue #1207: NIFI-3025: Bump nifi-spark-receiver's jackson version to m...

2016-11-14 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1207
  
+1 Merging




[jira] [Commented] (NIFI-3025) Bump nifi-spark-receiver's jackson version to match Spark 2.0.1

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664063#comment-15664063
 ] 

ASF GitHub Bot commented on NIFI-3025:
--

Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1207
  
+1 Merging


> Bump nifi-spark-receiver's jackson version to match Spark 2.0.1
> ---
>
> Key: NIFI-3025
> URL: https://issues.apache.org/jira/browse/NIFI-3025
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Randy Gelhausen
>Priority: Minor
>
> Apache Spark 2.0.1 uses Jackson 2.6.5.
> When trying to use nifi-spark-receiver with Spark Streaming, including the 
> NiFi artifact conflicts with Spark's included Jackson version.
> Bumping that artifact's Jackson version to 2.6.5 fixes this issue
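A sketch of what the bump might look like in nifi-spark-receiver's pom.xml; the property name and the assumption that the module declares jackson-databind directly are hypothetical, and the actual pom layout may differ:

```xml
<!-- Hypothetical fragment: pin Jackson to the version Spark 2.0.1 ships. -->
<properties>
    <jackson.version>2.6.5</jackson.version>
</properties>
<dependencies>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>${jackson.version}</version>
    </dependency>
</dependencies>
```

Aligning on Spark's own Jackson version avoids the classpath conflict when the receiver runs inside a Spark Streaming job.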





[jira] [Commented] (NIFI-3025) Bump nifi-spark-receiver's jackson version to match Spark 2.0.1

2016-11-14 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664077#comment-15664077
 ] 

ASF subversion and git services commented on NIFI-3025:
---

Commit 8d3177c38a4fed3b7040f03d17f28b1fc76525d8 in nifi's branch 
refs/heads/master from [~randerzander]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=8d3177c ]

NIFI-3025: Bump nifi-spark-receiver's jackson version to match Spark 2.0.1

This closes #1207







[jira] [Updated] (NIFI-3025) Bump nifi-spark-receiver's jackson version to match Spark 2.0.1

2016-11-14 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-3025:
---
Assignee: Randy Gelhausen






[jira] [Updated] (NIFI-3025) Bump nifi-spark-receiver's jackson version to match Spark 2.0.1

2016-11-14 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky updated NIFI-3025:
---
Fix Version/s: 1.1.0






[jira] [Resolved] (NIFI-3025) Bump nifi-spark-receiver's jackson version to match Spark 2.0.1

2016-11-14 Thread Oleg Zhurakousky (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Zhurakousky resolved NIFI-3025.

Resolution: Fixed






[jira] [Commented] (NIFI-3025) Bump nifi-spark-receiver's jackson version to match Spark 2.0.1

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664078#comment-15664078
 ] 

ASF GitHub Bot commented on NIFI-3025:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1207







[GitHub] nifi pull request #1207: NIFI-3025: Bump nifi-spark-receiver's jackson versi...

2016-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/1207




[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87814325
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---
@@ -0,0 +1,668 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.SocketAddress;
+import java.net.SocketTimeoutException;
+import java.net.StandardSocketOptions;
+import java.nio.ByteBuffer;
+import java.nio.CharBuffer;
+import java.nio.channels.AlreadyConnectedException;
+import java.nio.channels.ConnectionPendingException;
+import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
+import java.nio.channels.UnsupportedAddressTypeException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetDecoder;
+import java.nio.charset.StandardCharsets;
+import java.time.Instant;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+@SupportsBatching
+@SideEffectFree
+@Tags({"get", "fetch", "poll", "tcp", "ingest", "source", "input"})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("Connects over TCP to the provided server. When receiving data this will writes either the" +
+        " full receive buffer or messages based on demarcator to the content of a FlowFile. ")
+public class GetTCP extends AbstractProcessor {
+
+    private static final Validator ENDPOINT_VALIDATOR = new Validator() {
+        @Override
+        public ValidationResult validate(final String subject, final String value, final ValidationContext context) {
+            if (null == value || value.isEmpty()) {
+                return new ValidationResult.Builder().subject(subject).input(value).valid(false).explanation(subject + " cannot be empty").build();
+            }
+            //The format should be <host>:<port>{,<host>:<port>}
+            //first split on ,
+            final String[] hostPortPairs = value.split(",");
+            boolean validHostPortPairs = true;
+            String reason = "";
+            String offendingSubject = subject;
+
+            if (0 == hostPortPairs.length) {
+                return new ValidationResult.Builder().subject(subject).
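The quoted validator is truncated by the archive, but its approach (split the property on `,`, then check each `host:port` pair) can be sketched as a standalone class. This is illustrative only; the class name, port-range check, and return type are assumptions, not the PR's actual code:

```java
// Standalone sketch of host:port list validation in the spirit of
// GetTCP's ENDPOINT_VALIDATOR above (names and rules are hypothetical).
public class EndpointCheck {

    // Returns true when value is a non-empty, comma-separated list of
    // host:port pairs with a numeric port in 1..65535.
    public static boolean isValidEndpointList(String value) {
        if (value == null || value.isEmpty()) {
            return false;
        }
        for (String pair : value.split(",")) {
            String[] parts = pair.split(":");
            if (parts.length != 2 || parts[0].isEmpty()) {
                return false;
            }
            try {
                int port = Integer.parseInt(parts[1]);
                if (port < 1 || port > 65535) {
                    return false;
                }
            } catch (NumberFormatException e) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidEndpointList("localhost:8080,example.com:9090")); // true
        System.out.println(isValidEndpointList("localhost"));                       // false
    }
}
```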

[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664103#comment-15664103
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87814325
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---

[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread apsaltis
Github user apsaltis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87815456
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---

[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664113#comment-15664113
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user apsaltis commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87815456
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---

[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87815713
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---

[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664115#comment-15664115
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87815713
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---

[GitHub] nifi issue #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread apsaltis
Github user apsaltis commented on the issue:

https://github.com/apache/nifi/pull/1177
  
@olegz -- the commit that addressed the "transient" issues is in the latest 
commit. 




[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664122#comment-15664122
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user apsaltis commented on the issue:

https://github.com/apache/nifi/pull/1177
  
@olegz -- the commit that addressed the "transient" issues is in the latest 
commit. 


> Add support for GetTCP processor
> 
>
> Key: NIFI-2615
> URL: https://issues.apache.org/jira/browse/NIFI-2615
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0, 0.6.1
>Reporter: Andrew Psaltis
>Assignee: Andrew Psaltis
>
> This processor will allow NiFi to connect to a host via TCP, thus acting as 
> the client and consume data. This should provide the following properties:
> * Endpoint - this should accept a list of addresses in the format of 
> <host>:<port> - if a user wants to be able to track the ingestion rate per 
> address then you would want to have one address in this list. However, there 
> are times when multiple endpoints represent a logical entity and the 
> aggregate ingestion rate is representative of it. 
> * Failover Endpoint - An endpoint to fall over to if the list of Endpoints is 
> exhausted and a connection cannot be made to them or it is disconnected and 
> cannot reconnect.
> * Receive Buffer Size -- The size of the TCP receive buffer to use. This does 
> not relate to the size of the content in the resulting flow file.
> * Keep Alive -- This enables TCP keep Alive
> * Connection Timeout -- How long to wait when trying to establish a connection
> * Batch Size - The number of messages to put into a Flow File. This will be 
> the max number of messages, as there may be cases where the number of 
> messages received over the wire waiting to be emitted as FF content may be 
> less than the desired batch.
> This processor should also support the following:
> 1. If a connection to an endpoint is broken, it should be logged and 
> reconnections to it should be made. Potentially an exponential backoff 
> strategy will be used. The strategy if there is more than one should be 
> documented and potentially exposed as an Attribute.
> 2. When there are multiple instances of this processor in a flow and NiFi is 
> setup in a cluster, this processor needs to ensure that received messages are 
> not dual processed. For example if this processor is configured to point to 
> the endpoint (172.31.32.212:1) and the data flow is running on more than 
> one node then only one node should be processing data. In essence they should 
> form a group and have similar semantics as a Kafka consumer group does.
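Point 1 above asks for reconnection with a possible exponential backoff. A minimal sketch of such a delay schedule, assuming illustrative base and cap values (none of these names or bounds come from the NiFi code):

```java
// Hypothetical exponential-backoff schedule for reconnect attempts:
// the wait doubles from baseMillis on each attempt and is capped at maxMillis.
public class Backoff {

    // Delay before reconnect attempt n (0-based).
    public static long delayMillis(int attempt, long baseMillis, long maxMillis) {
        long delay = baseMillis << Math.min(attempt, 20); // clamp shift to avoid overflow
        return Math.min(delay, maxMillis);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            System.out.println("attempt " + attempt + " -> wait "
                    + delayMillis(attempt, 250, 10_000) + " ms");
        }
    }
}
```

With a 250 ms base and a 10 s cap, the waits grow 250, 500, 1000, ... ms until the cap, which keeps a flapping endpoint from being hammered while still reconnecting promptly after short outages.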





[jira] [Commented] (NIFI-2949) Improve UI of Remote Process Group Ports window

2016-11-14 Thread Andrew Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664140#comment-15664140
 ] 

Andrew Lim commented on NIFI-2949:
--

[~rmoran] , thanks for those suggestions.  I think those changes will greatly 
help the UX for both scenarios (no ports displayed and ports displayed) in this 
window.

> Improve UI of Remote Process Group Ports window
> ---
>
> Key: NIFI-2949
> URL: https://issues.apache.org/jira/browse/NIFI-2949
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Andrew Lim
>Priority: Minor
> Attachments: RPG_Ports_1.0.png, RPG_port_0.x.png, Screen Shot 
> 2016-11-09 at 3.07.55 PM.png
>
>
> Creating this ticket to capture some issues I am seeing with the RPG Ports 
> dialog window.
> -When there are no ports, there is a lot of empty white space in the window.  
> The "Input ports" and "Output ports" sections appear to be cut-off.  
> Screenshots attached for both 0.x and 1.0.0.
> -It seems unnecessary for the "Input ports" and "Output ports" text to be so 
> small.





[GitHub] nifi issue #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread olegz
Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1177
  
Seems like ```org.apache.nifi.processors.standard.GetTCP``` is missing 
from the _org.apache.nifi.processor.Processor_ file.
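The file referred to is Java's `ServiceLoader` provider-configuration file, which is how NiFi discovers processors in a NAR. Registering the new processor should be a one-line addition (the path shown is the conventional location in the standard-processors bundle; assumed, not quoted from the PR):

```
# src/main/resources/META-INF/services/org.apache.nifi.processor.Processor
# one fully-qualified processor class name per line
org.apache.nifi.processors.standard.GetTCP
```

Without this entry the processor compiles fine but never appears in the NiFi UI's Add Processor dialog.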




[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664144#comment-15664144
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user olegz commented on the issue:

https://github.com/apache/nifi/pull/1177
  
Seems like ```org.apache.nifi.processors.standard.GetTCP``` is missing 
from the _org.apache.nifi.processor.Processor_ file.


> Add support for GetTCP processor
> 
>
> Key: NIFI-2615
> URL: https://issues.apache.org/jira/browse/NIFI-2615
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.0.0, 0.7.0, 0.6.1
>Reporter: Andrew Psaltis
>Assignee: Andrew Psaltis
>
> This processor will allow NiFi to connect to a host via TCP, thus acting as 
> the client and consume data. This should provide the following properties:
> * Endpoint - this should accept a list of addresses in the format of 
> host:port. If a user wants to be able to track the ingestion rate per 
> address, then you would want to have one address in this list. However, there 
> are times when multiple endpoints represent a logical entity and the 
> aggregate ingestion rate is representative of it. 
> * Failover Endpoint - An endpoint to fail over to if the list of Endpoints is 
> exhausted and a connection cannot be made to them, or if a connection is 
> dropped and cannot be reestablished.
> * Receive Buffer Size -- The size of the TCP receive buffer to use. This does 
> not relate to the size of content in the resulting flow file.
> * Keep Alive -- This enables TCP keep Alive
> * Connection Timeout -- How long to wait when trying to establish a connection
> * Batch Size - The number of messages to put into a Flow File. This is the 
> maximum number of messages, as there may be cases where the number of 
> messages received over the wire and waiting to be emitted as Flow File 
> content is less than the desired batch size.
> This processor should also support the following:
> 1. If a connection to an endpoint is broken, it should be logged and 
> reconnection attempts should be made, potentially using an exponential 
> backoff strategy. If more than one strategy is available, the choice should 
> be documented and potentially exposed as an attribute.
> 2. When there are multiple instances of this processor in a flow and NiFi is 
> set up in a cluster, this processor needs to ensure that received messages are 
> not processed twice. For example, if this processor is configured to point to 
> the endpoint (172.31.32.212:1) and the data flow is running on more than 
> one node, then only one node should be processing data. In essence, the 
> instances should form a group, with semantics similar to a Kafka consumer group.
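The Batch Size semantics described above (a maximum, with a smaller batch emitted when fewer messages are waiting) can be sketched as follows. This is a hypothetical illustration of the batching logic only; `drainBatch` and the in-memory queue are stand-ins for the processor's internal receive buffer, not actual NiFi API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class GetTcpBatchSketch {

    // Drain up to batchSize messages from the receive buffer; if fewer are
    // waiting, emit a smaller batch rather than blocking for more.
    static List<String> drainBatch(Queue<String> received, int batchSize) {
        final List<String> batch = new ArrayList<>(batchSize);
        String msg;
        while (batch.size() < batchSize && (msg = received.poll()) != null) {
            batch.add(msg);
        }
        return batch;
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>(List.of("a", "b", "c"));
        System.out.println(drainBatch(q, 2)); // prints [a, b]
        System.out.println(drainBatch(q, 2)); // prints [c] -- fewer than batch size
    }
}
```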





[GitHub] nifi pull request #1217: NIFI-3031 - Multi-Statement Script support for PutH...

2016-11-14 Thread dstreev
GitHub user dstreev opened a pull request:

https://github.com/apache/nifi/pull/1217

NIFI-3031 - Multi-Statement Script support for PutHiveQL Processor

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dstreev/nifi-1 NIFI-3031

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1217.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1217


commit 22528f8f58e9afe8a0dfba6c8c43fdf09fe3b324
Author: David W. Streever 
Date:   2016-11-04T15:03:17Z

PutHiveQL and SelectHiveQL Processor enhancements. Added support for 
multiple statements in a script.  Options for delimiters, quotes, escaping, 
include header and alternate header.

Add support in SelectHiveQL to get script content from the Flow File to 
bring consistency with patterns used for PutHiveQL and support extra query 
management.

commit 482401349d8113f16ccb3bd21cd66a43be9f6e4d
Author: David W. Streever 
Date:   2016-11-05T01:07:44Z

Changed behavior of using Flowfile to match ExecuteSQL.  Handle query 
delimiter when embedded.  Added test case for embedded delimiter

commit 6b3df26b0c83a9a5611728bd4e2b0dd0061d58ff
Author: David W. Streever 
Date:   2016-11-05T01:33:20Z

Formatting and License Header

commit 0c7bb50d3156c1b709bf2359d9865a8f922b99ce
Author: David W. Streever 
Date:   2016-11-04T15:03:17Z

PutHiveQL and SelectHiveQL Processor enhancements. Added support for 
multiple statements in a script.  Options for delimiters, quotes, escaping, 
include header and alternate header.

Add support in SelectHiveQL to get script content from the Flow File to 
bring consistency with patterns used for PutHiveQL and support extra query 
management.

commit 695cadfdb3eec67cc302306fe919572cb7f71afb
Author: David W. Streever 
Date:   2016-11-05T01:07:44Z

Changed behavior of using Flowfile to match ExecuteSQL.  Handle query 
delimiter when embedded.  Added test case for embedded delimiter

commit 6ad1e4536467769055678629f305fd95d1eaf7ff
Author: David W. Streever 
Date:   2016-11-05T01:33:20Z

Formatting and License Header

commit 2e6e0b38d7ac0789f8f61671db205456a13c2189
Author: David W. Streever 
Date:   2016-11-14T14:38:00Z

NIFI-3031

Merge branch 'master' of github.com:apache/nifi into dws_hive_opt

* 'master' of github.com:apache/nifi: (36 commits)
  NIFI-2851: Fixed CheckStyle error.
  NIFI-2851: Added additional unit test to ensure correctness of 
demarcation when demarcator falls between buffered data
  NIFI-2851 initial commit of perf improvements on SplitText
  NIFI-2999: When Cluster Coordinator changes, purge any old heartbeats so 
that we don't disconnect a node due to very old heartbeats
  NIFI-2818 This closes #1059. aligned read method with read/write method
  NIFI-2818 - Minimise fs permission required by  NiFi
  [NIFI-2898] restore ellipsis for processor type, controller servies type, 
and reporting task type descriptions. This closes #1191
  This closes #1209.
  NIFI-3027: Exec

[jira] [Commented] (NIFI-3031) Support Multi-Statement Scripts in the PutHiveQL Processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664170#comment-15664170
 ] 

ASF GitHub Bot commented on NIFI-3031:
--

GitHub user dstreev opened a pull request:

https://github.com/apache/nifi/pull/1217

NIFI-3031 - Multi-Statement Script support for PutHiveQL Processor

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dstreev/nifi-1 NIFI-3031

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1217.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1217


commit 22528f8f58e9afe8a0dfba6c8c43fdf09fe3b324
Author: David W. Streever 
Date:   2016-11-04T15:03:17Z

PutHiveQL and SelectHiveQL Processor enhancements. Added support for 
multiple statements in a script.  Options for delimiters, quotes, escaping, 
include header and alternate header.

Add support in SelectHiveQL to get script content from the Flow File to 
bring consistency with patterns used for PutHiveQL and support extra query 
management.

commit 482401349d8113f16ccb3bd21cd66a43be9f6e4d
Author: David W. Streever 
Date:   2016-11-05T01:07:44Z

Changed behavior of using Flowfile to match ExecuteSQL.  Handle query 
delimiter when embedded.  Added test case for embedded delimiter

commit 6b3df26b0c83a9a5611728bd4e2b0dd0061d58ff
Author: David W. Streever 
Date:   2016-11-05T01:33:20Z

Formatting and License Header

commit 0c7bb50d3156c1b709bf2359d9865a8f922b99ce
Author: David W. Streever 
Date:   2016-11-04T15:03:17Z

PutHiveQL and SelectHiveQL Processor enhancements. Added support for 
multiple statements in a script.  Options for delimiters, quotes, escaping, 
include header and alternate header.

Add support in SelectHiveQL to get script content from the Flow File to 
bring consistency with patterns used for PutHiveQL and support extra query 
management.

commit 695cadfdb3eec67cc302306fe919572cb7f71afb
Author: David W. Streever 
Date:   2016-11-05T01:07:44Z

Changed behavior of using Flowfile to match ExecuteSQL.  Handle query 
delimiter when embedded.  Added test case for embedded delimiter

commit 6ad1e4536467769055678629f305fd95d1eaf7ff
Author: David W. Streever 
Date:   2016-11-05T01:33:20Z

Formatting and License Header

commit 2e6e0b38d7ac0789f8f61671db205456a13c2189
Author: David W. Streever 
Date:   2016-11-14T14:38:00Z

NIFI-3031

Merge branch 'master' of github.com:apache/nifi into dws_hive_opt

* 'master' of github.com:apache/nifi: (36 commits)
  NIFI-2851: Fixed CheckStyle error.
  NIFI-2851: Added additional unit test to ensure correctness of 
demarcation when demarcator falls between buffered data
  NIFI-2851 initial commit of perf improvements on SplitText
  NIFI-2999: When Cluster Coordinator changes, purge any old heartbeats so 
that we don't disconnect a node due to very old heartbeats
  NIFI-2818 This closes #1059. aligned read method with read/wr

[jira] [Commented] (NIFI-2951) "Enable/Disable transmission" selections in the remote process group right-click menu should be mutually exclusive

2016-11-14 Thread Andrew Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664151#comment-15664151
 ] 

Andrew Lim commented on NIFI-2951:
--

[~mcgilman], thanks for clarifying those selections.  I wasn't aware of that 
available functionality.  Your suggestions in both the short and long term make 
a lot of sense, especially the richer context menu that I think will be a great 
addition to the UI beyond enable/disable transmission.

> "Enable/Disable transmission" selections in the remote process group 
> right-click menu should be mutually exclusive
> --
>
> Key: NIFI-2951
> URL: https://issues.apache.org/jira/browse/NIFI-2951
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.0.0, 0.7.0
>Reporter: Andrew Lim
>Priority: Trivial
>  Labels: UI
>
> In the right-click menu for an RPG, the UI should only show "Enable 
> transmission" when the RPG is disabled and should only show "Disable 
> transmission" when the RPG is enabled.  Showing both is confusing and 
> unnecessary.





[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664174#comment-15664174
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87820804
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/swap/SimpleSwapDeserializer.java
 ---
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import java.io.DataInputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.queue.QueueSize;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.IncompleteSwapFileException;
+import org.apache.nifi.controller.repository.StandardFlowFileRecord;
+import org.apache.nifi.controller.repository.SwapContents;
+import org.apache.nifi.controller.repository.SwapSummary;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.controller.repository.claim.StandardContentClaim;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class SimpleSwapDeserializer implements SwapDeserializer {
+public static final int SWAP_ENCODING_VERSION = 10;
+private static final Logger logger = 
LoggerFactory.getLogger(SimpleSwapDeserializer.class);
+
+@Override
+public SwapSummary getSwapSummary(final DataInputStream in, final 
String swapLocation, final ResourceClaimManager claimManager) throws 
IOException {
+final int swapEncodingVersion = in.readInt();
+if (swapEncodingVersion > SWAP_ENCODING_VERSION) {
--- End diff --

Ah ok, misunderstood things and thought 10 was the new format.
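The guard in the diff above rejects swap files written with a newer encoding version than the reader understands, while accepting anything at or below its own version. A minimal self-contained sketch of that pattern (the constant value and the message text here are illustrative, not the actual NiFi code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SwapVersionGateSketch {

    static final int SUPPORTED_VERSION = 10;

    // Refuse data written by a newer serializer; older versions are accepted
    // because each reader knows how to handle every version up to its own.
    static int readVersion(DataInputStream in) throws IOException {
        final int version = in.readInt();
        if (version > SUPPORTED_VERSION) {
            throw new IOException("Cannot read swap file: encoding version " + version
                    + " is newer than supported version " + SUPPORTED_VERSION);
        }
        return version;
    }

    // Helper to build a header with a given encoding version, for testing.
    static byte[] header(int version) throws IOException {
        final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeInt(version);
        return bos.toByteArray();
    }
}
```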


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered 

[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87820804
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/swap/SimpleSwapDeserializer.java
 ---
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import java.io.DataInputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.queue.QueueSize;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.IncompleteSwapFileException;
+import org.apache.nifi.controller.repository.StandardFlowFileRecord;
+import org.apache.nifi.controller.repository.SwapContents;
+import org.apache.nifi.controller.repository.SwapSummary;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.controller.repository.claim.StandardContentClaim;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class SimpleSwapDeserializer implements SwapDeserializer {
+public static final int SWAP_ENCODING_VERSION = 10;
+private static final Logger logger = 
LoggerFactory.getLogger(SimpleSwapDeserializer.class);
+
+@Override
+public SwapSummary getSwapSummary(final DataInputStream in, final 
String swapLocation, final ResourceClaimManager claimManager) throws 
IOException {
+final int swapEncodingVersion = in.readInt();
+if (swapEncodingVersion > SWAP_ENCODING_VERSION) {
--- End diff --

Ah ok, misunderstood things and thought 10 was the new format.




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664217#comment-15664217
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87824046
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/controller/swap/TestFlowFile.java
 ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.controller.repository.claim.StandardContentClaim;
+import org.apache.nifi.flowfile.FlowFile;
+
+public class TestFlowFile implements FlowFileRecord {
--- End diff --

On second thought, after looking at the scope of the class, MockFlowFile 
does work better.


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not
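The "fields may be added but never removed or made required" rule above is what makes rollback workable: a reader on an older version simply stops at the end of the records it knows, while a newer reader substitutes a sensible default when a trailing field is absent. A hypothetical sketch of reading such an optional trailing field (the record layout here is invented for illustration, not NiFi's actual serialization):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class OptionalTrailingFieldSketch {

    // A newer writer appends an extra UTF field after the required ones; a
    // newer reader tolerates its absence (data written by an older version)
    // by substituting a default instead of failing.
    static String readOptionalTrailing(DataInputStream in, String defaultValue) throws IOException {
        try {
            return in.readUTF();
        } catch (EOFException eof) {
            return defaultValue; // field absent: record came from an older writer
        }
    }

    // Build a record with one required field and an optional trailing field.
    static byte[] record(String requiredField, String optionalField) throws IOException {
        final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        final DataOutputStream out = new DataOutputStream(bos);
        out.writeUTF(requiredField);
        if (optionalField != null) {
            out.writeUTF(optionalField);
        }
        return bos.toByteArray();
    }
}
```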

[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87824046
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/controller/swap/TestFlowFile.java
 ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.claim.ContentClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaim;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import org.apache.nifi.controller.repository.claim.StandardContentClaim;
+import org.apache.nifi.flowfile.FlowFile;
+
+public class TestFlowFile implements FlowFileRecord {
--- End diff --

On second thought, after looking at the scope of the class, MockFlowFile 
does work better.




[GitHub] nifi pull request #1202: NIFI-2854: Refactor repositories and swap files to ...

2016-11-14 Thread markap14
Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87825804
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/controller/swap/TestSimpleSwapSerializerDeserializer.java
 ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.DataInputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.file.Files;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.SwapContents;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import 
org.apache.nifi.controller.repository.claim.StandardResourceClaimManager;
+import org.apache.nifi.stream.io.NullOutputStream;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+public class TestSimpleSwapSerializerDeserializer {
+@Before
+public void setup() {
+TestFlowFile.resetIdGenerator();
+}
+
+@Test
+public void testRoundTripSerializeDeserialize() throws IOException {
+final ResourceClaimManager resourceClaimManager = new 
StandardResourceClaimManager();
+
+final List<FlowFileRecord> toSwap = new ArrayList<>(1);
+final Map<String, String> attrs = new HashMap<>();
+for (int i = 0; i < 1; i++) {
+attrs.put("i", String.valueOf(i));
+final FlowFileRecord ff = new TestFlowFile(attrs, i, 
resourceClaimManager);
+toSwap.add(ff);
+}
+
+final FlowFileQueue flowFileQueue = 
Mockito.mock(FlowFileQueue.class);
+
Mockito.when(flowFileQueue.getIdentifier()).thenReturn("87bb99fe-412c-49f6-a441-d1b0af4e20b4");
+
+final String swapLocation = "target/testRoundTrip-" + 
UUID.randomUUID().toString() + ".swap";
--- End diff --

Technically, it should. For the scope of this unit test I don't think it 
matters, but I will update it just to be consistent with how it is expected to 
work normally. Thanks for calling that out.




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664231#comment-15664231
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1202#discussion_r87825804
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/test/java/org/apache/nifi/controller/swap/TestSimpleSwapSerializerDeserializer.java
 ---
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.controller.swap;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.DataInputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.file.Files;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.nifi.controller.queue.FlowFileQueue;
+import org.apache.nifi.controller.repository.FlowFileRecord;
+import org.apache.nifi.controller.repository.SwapContents;
+import org.apache.nifi.controller.repository.claim.ResourceClaimManager;
+import 
org.apache.nifi.controller.repository.claim.StandardResourceClaimManager;
+import org.apache.nifi.stream.io.NullOutputStream;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+public class TestSimpleSwapSerializerDeserializer {
+@Before
+public void setup() {
+TestFlowFile.resetIdGenerator();
+}
+
+@Test
+public void testRoundTripSerializeDeserialize() throws IOException {
+final ResourceClaimManager resourceClaimManager = new 
StandardResourceClaimManager();
+
+final List&lt;FlowFileRecord&gt; toSwap = new ArrayList&lt;&gt;(1);
+final Map&lt;String, String&gt; attrs = new HashMap&lt;&gt;();
+for (int i = 0; i < 1; i++) {
+attrs.put("i", String.valueOf(i));
+final FlowFileRecord ff = new TestFlowFile(attrs, i, 
resourceClaimManager);
+toSwap.add(ff);
+}
+
+final FlowFileQueue flowFileQueue = 
Mockito.mock(FlowFileQueue.class);
+
Mockito.when(flowFileQueue.getIdentifier()).thenReturn("87bb99fe-412c-49f6-a441-d1b0af4e20b4");
+
+final String swapLocation = "target/testRoundTrip-" + 
UUID.randomUUID().toString() + ".swap";
--- End diff --

Technically, it should. For the scope of this unit test I don't think it 
matters, but I will update it just to be consistent with how it is expected to 
work normally. Thanks for calling that out.
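
The test quoted above follows a standard round-trip pattern: serialize records to a uniquely named file, deserialize them, and assert the result matches the input. A minimal, self-contained sketch of that pattern (plain `java.io` streams, not the NiFi swap serializer API; the class name `RoundTripSketch` is hypothetical) might look like:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

public class RoundTripSketch {
    // Serialize the attribute map: entry count, then each key/value pair.
    static void serialize(Map<String, String> attrs, Path target) throws IOException {
        try (DataOutputStream out = new DataOutputStream(Files.newOutputStream(target))) {
            out.writeInt(attrs.size());
            for (Map.Entry<String, String> e : attrs.entrySet()) {
                out.writeUTF(e.getKey());
                out.writeUTF(e.getValue());
            }
        }
    }

    // Deserialize back into a map, preserving insertion order.
    static Map<String, String> deserialize(Path source) throws IOException {
        try (DataInputStream in = new DataInputStream(Files.newInputStream(source))) {
            final int count = in.readInt();
            final Map<String, String> attrs = new LinkedHashMap<>(count);
            for (int i = 0; i < count; i++) {
                attrs.put(in.readUTF(), in.readUTF());
            }
            return attrs;
        }
    }

    public static void main(String[] args) throws IOException {
        // Unique, collision-free location, analogous to the UUID-based swap path in the test.
        Path swap = Files.createTempDirectory("swap-test")
                .resolve("roundTrip-" + UUID.randomUUID() + ".swap");
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("i", "0");
        serialize(attrs, swap);
        Map<String, String> restored = deserialize(swap);
        if (!attrs.equals(restored)) {
            throw new AssertionError("round trip changed the attributes: " + restored);
        }
    }
}
```

Using a fresh UUID per run, as the test does, keeps repeated executions from colliding on the same target file.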


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility th
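
A common way to meet the compatibility guarantees the ticket describes is to prefix each persisted file with a serialization version header, so readers can dispatch on the version or fail cleanly. A hedged sketch of that idea (this is not NiFi's actual on-disk format; `VersionedFormatSketch` and the magic number are hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class VersionedFormatSketch {
    static final int MAGIC = 0x53574150;   // hypothetical file-type marker
    static final int CURRENT_VERSION = 2;

    static byte[] write(String payload) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(MAGIC);
            out.writeInt(CURRENT_VERSION); // version header lets readers decide how to proceed
            out.writeUTF(payload);
        }
        return bytes.toByteArray();
    }

    static String read(byte[] data) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            if (in.readInt() != MAGIC) {
                throw new IOException("not a recognized file");
            }
            int version = in.readInt();
            switch (version) {
                case 1: // an older layout would be upgraded/deserialized here
                case 2:
                    return in.readUTF();
                default:
                    // newer than this reader understands: fail cleanly rather than misread
                    throw new IOException("unsupported version " + version);
            }
        }
    }
}
```

With this shape, older releases rolling back over newer data reject it with a clear error instead of corrupting it, which is the rollback behavior the ticket aims to formalize.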

[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664249#comment-15664249
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87827521
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---
@@ -0,0 +1,668 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.SocketAddress;
+import java.net.SocketTimeoutException;
+import java.net.StandardSocketOptions;
+import java.nio.ByteBuffer;
+import java.nio.CharBuffer;
+import java.nio.channels.AlreadyConnectedException;
+import java.nio.channels.ConnectionPendingException;
+import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
+import java.nio.channels.UnsupportedAddressTypeException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetDecoder;
+import java.nio.charset.StandardCharsets;
+import java.time.Instant;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+@SupportsBatching
+@SideEffectFree
+@Tags({"get", "fetch", "poll", "tcp", "ingest", "source", "input"})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("Connects over TCP to the provided server. When 
receiving data, this will write either the" +
+" full receive buffer or messages based on a demarcator to the 
content of a FlowFile.")
+public class GetTCP extends AbstractProcessor {
+
+private static final Validator ENDPOINT_VALIDATOR = new Validator() {
+@Override
+public ValidationResult validate(final String subject, final 
String value, final ValidationContext context) {
+if (null == value || value.isEmpty()) {
+return new 
ValidationResult.Builder().subject(subject).input(value).valid(false).explanation(subject
 + " cannot be empty").build();
+}
+//The format should be &lt;host&gt;:&lt;port&gt;{,&lt;host&gt;:&lt;port&gt;}
+//first split on ,
+final String[] hostPortPairs = value.split(",");
+bo
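
The ENDPOINT_VALIDATOR in the quoted diff checks a comma-separated list of host:port pairs, splitting first on "," and then validating each pair. A standalone sketch of that parsing logic (plain Java, not the NiFi `Validator` API; the class name `EndpointValidatorSketch` is hypothetical) could look like:

```java
public class EndpointValidatorSketch {
    // Returns an error message for the first invalid pair, or null when the whole value is valid.
    static String validate(String value) {
        if (value == null || value.isEmpty()) {
            return "value cannot be empty";
        }
        // Expected format: <host>:<port>{,<host>:<port>} -- first split on ","
        final String[] pairs = value.split(",");
        for (String pair : pairs) {
            final String[] hostPort = pair.split(":");
            if (hostPort.length != 2 || hostPort[0].isEmpty()) {
                return "'" + pair + "' is not a valid <host>:<port> pair";
            }
            try {
                final int port = Integer.parseInt(hostPort[1]);
                if (port < 1 || port > 65535) {
                    return "'" + hostPort[1] + "' is not a valid port number";
                }
            } catch (NumberFormatException e) {
                return "'" + hostPort[1] + "' is not a valid port number";
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validate("localhost:9000,example.com:9001")); // prints "null" (valid)
        System.out.println(validate("localhost")); // prints an error for the malformed pair
    }
}
```

Returning the first failure's explanation mirrors how a `ValidationResult.Builder` would carry the offending subject and reason back to the framework.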

[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87827521
  

[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87829391
  

[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664266#comment-15664266
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87829391
  

[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87830655
  

[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664275#comment-15664275
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87830655
  

[GitHub] nifi issue #1202: NIFI-2854: Refactor repositories and swap files to use sch...

2016-11-14 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/nifi/pull/1202
  
Thanks for the thoughtful explanation, @markap14! It's very apparent that 
you have put the thought into this one. Sorry for doubting :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664295#comment-15664295
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user joshelser commented on the issue:

https://github.com/apache/nifi/pull/1202
  
Thanks for the thoughtful explanation, @markap14! It's very apparent that 
you have put the thought into this one. Sorry for doubting :)


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to an older version, if seen after a rollback, 
> will simply be ignored.
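The flow file and provenance rules above boil down to one serialization property: an older reader must be able to skip record fields it does not know about. A minimal, hypothetical sketch of that property in plain Java — this is not NiFi's actual repository format; the field ids, lengths, and class name here are invented for illustration:

```java
import java.io.*;
import java.util.*;

// Illustrative only: a length-prefixed field encoding in which every field
// carries an id and a byte length, so a reader that does not recognize an id
// can skip exactly that many bytes and continue -- the behavior the rules
// above require after a rollback.
public class SkippableRecordSketch {

    // Writer (a "newer" version) emits: field count, then id/length/payload triples.
    static byte[] write(Map<Integer, byte[]> fields) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(fields.size());
        for (Map.Entry<Integer, byte[]> e : fields.entrySet()) {
            out.writeInt(e.getKey());
            out.writeInt(e.getValue().length);
            out.write(e.getValue());
        }
        return bos.toByteArray();
    }

    // Reader (an "older" version) keeps the fields it knows and skips the rest.
    static Map<Integer, byte[]> read(byte[] data, Set<Integer> knownIds) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        Map<Integer, byte[]> result = new HashMap<>();
        int count = in.readInt();
        for (int i = 0; i < count; i++) {
            int id = in.readInt();
            int len = in.readInt();
            byte[] payload = new byte[len];
            in.readFully(payload);
            if (knownIds.contains(id)) {
                result.put(id, payload);   // known field: keep it
            }                              // unknown field: silently ignored
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Map<Integer, byte[]> newer = new LinkedHashMap<>();
        newer.put(1, "uuid".getBytes());
        newer.put(99, "new-in-1.2.0".getBytes()); // a field an old reader never saw
        Map<Integer, byte[]> seen = read(write(newer), new HashSet<>(Arrays.asList(1)));
        System.out.println(seen.keySet()); // only field 1 survives; 99 is skipped
    }
}
```

Note the corollary in the rules: this scheme only stays safe if removed fields were never required, which is why the description insists a required field must remain required.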



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi issue #1104: [NIFI-2844] Update CSS styles for Cluster Summary Dialog i...

2016-11-14 Thread scottyaslan
Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/1104
  
@moranr @mcgilman I was able to update the styles to match the operate 
palette so please take a look.




[jira] [Commented] (NIFI-2844) The icon for the component is cut-off in the Cluster Summary window.

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664309#comment-15664309
 ] 

ASF GitHub Bot commented on NIFI-2844:
--

Github user scottyaslan commented on the issue:

https://github.com/apache/nifi/pull/1104
  
@moranr @mcgilman I was able to update the styles to match the operate 
palette so please take a look.


> The icon for the component is cut-off in the Cluster Summary 
> window.
> 
>
> Key: NIFI-2844
> URL: https://issues.apache.org/jira/browse/NIFI-2844
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.0.0
>Reporter: Andrew Lim
>Assignee: Scott Aslan
>Priority: Minor
> Attachments: NIFI-2844_connection.png, NIFI-2844_inputPort.png, 
> NIFI-2844_outputPort.png, NIFI-2844_processor.png, selected-component-info.png
>
>
> In a clustered environment, select "Summary" from the Global Menu.
> As an example, in the Processors tab, select "View processor details" icon 
> (the 3 cube cluster icon) for one of the processors.  The icon shown for the 
> processor is cut-off at the top.
> This also occurs for the following components:
> -Input Ports
> -Output Ports
> -Connections
> Screenshots attached.





[GitHub] nifi pull request #1177: NIFI-2615 Adding a GetTCP processor

2016-11-14 Thread olegz
Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87834336
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---
@@ -0,0 +1,668 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.SocketAddress;
+import java.net.SocketTimeoutException;
+import java.net.StandardSocketOptions;
+import java.nio.ByteBuffer;
+import java.nio.CharBuffer;
+import java.nio.channels.AlreadyConnectedException;
+import java.nio.channels.ConnectionPendingException;
+import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
+import java.nio.channels.UnsupportedAddressTypeException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetDecoder;
+import java.nio.charset.StandardCharsets;
+import java.time.Instant;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+@SupportsBatching
+@SideEffectFree
+@Tags({"get", "fetch", "poll", "tcp", "ingest", "source", "input"})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("Connects over TCP to the provided server. When receiving data this processor will write either the" +
+        " full receive buffer or messages based on a demarcator to the content of a FlowFile.")
+public class GetTCP extends AbstractProcessor {
+
+    private static final Validator ENDPOINT_VALIDATOR = new Validator() {
+        @Override
+        public ValidationResult validate(final String subject, final String value, final ValidationContext context) {
+            if (null == value || value.isEmpty()) {
+                return new ValidationResult.Builder().subject(subject).input(value).valid(false).explanation(subject + " cannot be empty").build();
+            }
+            // The format should be <host>:<port>{,<host>:<port>}
+            // first split on ,
+            final String[] hostPortPairs = value.split(",");
+            boolean validHostPortPairs = true;
+            String reason = "";
+            String offendingSubject = subject;
+
+            if (0 == hostPortPairs.length) {
+                return new ValidationResult.Builder().subject(subject).
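The diff is truncated above, but the comments outline the intended check: split the value on `,`, then validate each `host:port` pair. A self-contained sketch of how such a validator might finish that logic — plain Java rather than NiFi's `Validator`/`ValidationResult` API, and the completion is an assumption, not the actual PR code:

```java
// Hypothetical stand-in for the endpoint validator: returns null when the
// "<host>:<port>{,<host>:<port>}" value is valid, otherwise an explanation.
public class EndpointValidatorSketch {

    static String validate(String value) {
        if (value == null || value.isEmpty()) {
            return "value cannot be empty";
        }
        // First split on ',' to get the individual host:port pairs.
        for (String pair : value.split(",")) {
            String[] parts = pair.split(":");
            if (parts.length != 2 || parts[0].trim().isEmpty()) {
                return "'" + pair + "' is not a valid <host>:<port> pair";
            }
            try {
                int port = Integer.parseInt(parts[1].trim());
                if (port < 1 || port > 65535) {
                    return "port in '" + pair + "' must be between 1 and 65535";
                }
            } catch (NumberFormatException e) {
                return "port in '" + pair + "' is not a number";
            }
        }
        return null; // every pair parsed cleanly
    }

    public static void main(String[] args) {
        System.out.println(validate("localhost:9999,otherhost:8888")); // null (valid)
        System.out.println(validate("localhost"));                     // invalid pair
    }
}
```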

[jira] [Commented] (NIFI-2615) Add support for GetTCP processor

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664313#comment-15664313
 ] 

ASF GitHub Bot commented on NIFI-2615:
--

Github user olegz commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1177#discussion_r87834336
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetTCP.java
 ---
@@ -0,0 +1,668 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.Validator;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.SocketAddress;
+import java.net.SocketTimeoutException;
+import java.net.StandardSocketOptions;
+import java.nio.ByteBuffer;
+import java.nio.CharBuffer;
+import java.nio.channels.AlreadyConnectedException;
+import java.nio.channels.ConnectionPendingException;
+import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
+import java.nio.channels.UnsupportedAddressTypeException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetDecoder;
+import java.nio.charset.StandardCharsets;
+import java.time.Instant;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+@SupportsBatching
+@SideEffectFree
+@Tags({"get", "fetch", "poll", "tcp", "ingest", "source", "input"})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("Connects over TCP to the provided server. When receiving data this processor will write either the" +
+        " full receive buffer or messages based on a demarcator to the content of a FlowFile.")
+public class GetTCP extends AbstractProcessor {
+
+    private static final Validator ENDPOINT_VALIDATOR = new Validator() {
+        @Override
+        public ValidationResult validate(final String subject, final String value, final ValidationContext context) {
+            if (null == value || value.isEmpty()) {
+                return new ValidationResult.Builder().subject(subject).input(value).valid(false).explanation(subject + " cannot be empty").build();
+            }
+            // The format should be <host>:<port>{,<host>:<port>}
+            // first split on ,
+            final String[] hostPortPairs = value.split(",");
+            bo
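The `@CapabilityDescription` above says received data is emitted either as the full receive buffer or as messages split on a demarcator. A hypothetical sketch of that demarcator behavior — class and method names are invented here, and this is not the PR's implementation:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: bytes arriving from the socket are split into messages
// on a demarcator byte, and an incomplete trailing message is held back
// until the next receive completes it.
public class DemarcatorSketch {
    private final byte demarcator;
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    DemarcatorSketch(byte demarcator) {
        this.demarcator = demarcator;
    }

    /** Feed one receive buffer; returns the complete messages it yielded. */
    List<String> feed(byte[] received) {
        List<String> messages = new ArrayList<>();
        for (byte b : received) {
            if (b == demarcator) {
                messages.add(new String(pending.toByteArray(), StandardCharsets.UTF_8));
                pending.reset();          // message complete: start the next one
            } else {
                pending.write(b);         // still inside the current message
            }
        }
        return messages;                  // leftover bytes stay in 'pending'
    }

    public static void main(String[] args) {
        DemarcatorSketch s = new DemarcatorSketch((byte) '\n');
        System.out.println(s.feed("one\ntwo\npar".getBytes(StandardCharsets.UTF_8))); // [one, two]
        System.out.println(s.feed("tial\n".getBytes(StandardCharsets.UTF_8)));        // [partial]
    }
}
```

With no demarcator configured, the processor description implies the equivalent of treating the entire receive buffer as one message.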

[GitHub] nifi issue #1202: NIFI-2854: Refactor repositories and swap files to use sch...

2016-11-14 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@joshelser no worries - I am glad that someone at least doubted that 
decision :)




[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664316#comment-15664316
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@joshelser no worries - I am glad that someone at least doubted that 
decision :)


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to an older version, if seen after a rollback, 
> will simply be ignored.





[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87824638
  
--- Diff: minifi-assembly/src/main/assembly/dependencies.xml ---
@@ -64,15 +79,11 @@
 true
 
minifi-bootstrap
-slf4j-api
-logback-classic
-nifi-api
+minifi-api
+minifi-commons-schema
+minifi-utils
+snakeyaml
 nifi-utils
-jetty-server
--- End diff --

These "Jetty" deps are for the Rest Change Ingestor. I'm pretty sure it'll 
fail without them (haven't tested yet, reviewing code first).




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87825052
  
--- Diff: minifi-assembly/src/main/assembly/dependencies.xml ---
@@ -36,23 +36,38 @@
minifi-bootstrap
 minifi-resources
 
+
org.apache.nifi:nifi-framework-core:1.0.0
+zookeeper
 spring-aop
 spring-context
-spring-security-core
 spring-beans
+spring-expression
 swagger-annotations
 slf4j-log4j12
 aspectjweaver
 h2
 netty
 jaxb-impl
-httpclient
 mail
 log4j
+lucene-analyzers-common
 lucene-queryparser
 commons-net
+spring-context
+spring-security-core
 
 
+
+runtime
+false
+lib
--- End diff --

Why is a second dependency set for "lib" added? The only differences I see 
are that it's explicitly including and  "useTransitiveFiltering" is not set.




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87827996
  
--- Diff: 
minifi-nar-bundles/minifi-framework-bundle/minifi-framework-nar/pom.xml ---
@@ -34,51 +34,70 @@ limitations under the License.
 true
 
 
-
+
+org.apache.nifi.minifi
+minifi-framework-core
+provided
+
+
+org.apache.nifi.minifi
+minifi-api
+provided
+
+
+
 
 org.apache.nifi
 nifi-api
-provided
+compile
 
 
 org.apache.nifi
 nifi-runtime
-provided
+compile
 
 
 org.apache.nifi
 nifi-nar-utils
-provided
+compile
 
 
 org.apache.nifi
 nifi-properties
-provided
+compile
 
 
-org.apache.nifi.minifi
-minifi-framework-core
-provided
+org.apache.nifi
+nifi-security
+compile
+1.0.0
 
 
-org.apache.nifi
-nifi-administration
-provided
+org.eclipse.jetty
+jetty-server
 
 
-org.apache.nifi.minifi
-minifi-api
-provided
+org.eclipse.jetty
+jetty-servlet
 
 
-org.apache.nifi
-nifi-framework-core
-provided
+org.eclipse.jetty
+jetty-webapp
--- End diff --

Why are Jetty deps being added to the MiNiFi-framework-nar? I don't see any 
other changes to the module that would warrant the additions.




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87825672
  
--- Diff: minifi-assembly/pom.xml ---
@@ -177,23 +182,57 @@ limitations under the License.
 
 
 
+org.bouncycastle
+bcprov-jdk15on
+1.54
+compile
+
+
 org.eclipse.jetty
-jetty-servlet
+jetty-util
+9.3.9.v20160517
 compile
 
 
+org.apache.commons
+commons-lang3
+
+
+org.apache.httpcomponents
+httpclient
+
+
+com.google.guava
+guava
+
+
+org.apache.nifi.minifi
+minifi-framework-core
+
+
+
+
+
+
+
+
+
+
+
+
--- End diff --

Lots of commented out items that could be deleted.




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87834364
  
--- Diff: minifi-nar-bundles/minifi-standard-nar/pom.xml ---
@@ -52,5 +52,60 @@ limitations under the License.
 nifi-standard-reporting-tasks
 ${org.apache.nifi.version}
 
+
--- End diff --

If these are the deps and their versions that will be provided in lib then 
the version information should be set in the root pom. Also maybe even the 
scope but given the hiccups we've had on transitive "scoping" I'm hesitant.




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87826189
  
--- Diff: minifi-assembly/pom.xml ---
@@ -177,23 +182,57 @@ limitations under the License.
 
 
 
+org.bouncycastle
+bcprov-jdk15on
+1.54
+compile
+
+
 org.eclipse.jetty
-jetty-servlet
+jetty-util
+9.3.9.v20160517
 compile
 
 
+org.apache.commons
+commons-lang3
+
+
+org.apache.httpcomponents
+httpclient
+
+
+com.google.guava
+guava
+
+
+org.apache.nifi.minifi
+minifi-framework-core
--- End diff --

Slight nit-pick: this section is marked as "Provided in NiFi so must include here too". I believe this should be with the other MiNiFi modules.




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87834675
  
--- Diff: minifi-nar-bundles/minifi-standard-services-api-nar/pom.xml ---
@@ -0,0 +1,82 @@
+http://maven.apache.org/POM/4.0.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+
+4.0.0
+
+org.apache.nifi
+nifi-standard-services
+1.0.0
+
+org.apache.nifi.minifi
+minifi-standard-services-api-nar
+0.1.0-SNAPSHOT
+nar
+
+true
+true
+
+
+
+org.apache.nifi
+nifi-ssl-context-service-api
+compile
+
+
+org.apache.nifi
+
nifi-distributed-cache-client-service-api
+compile
+
+
+org.apache.nifi
+nifi-load-distribution-service-api
+compile
+
+
+org.apache.nifi
+nifi-http-context-map-api
+compile
+
+
+org.apache.nifi
+nifi-dbcp-service-api
+compile
+
+
+org.apache.nifi
+nifi-hbase-client-service-api
+compile
+
+
--- End diff --

Same comment about setting version/scope in root pom.




[GitHub] nifi issue #1216: NIFI-2654 Enabled encryption coverage for login-identity-p...

2016-11-14 Thread YolandaMDavis
Github user YolandaMDavis commented on the issue:

https://github.com/apache/nifi/pull/1216
  
@alopresto will review




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87823501
  
--- Diff: pom.xml ---
@@ -526,6 +566,1191 @@ limitations under the License.
 okhttp
 3.4.1
 
+
+ch.qos.logback
+logback-classic
+1.1.3
+
+
+ch.qos.logback
+jcl-over-slf4j
+1.1.3
+provided
+
+
+org.slf4j
+slf4j-api
+
+
+
+
+org.slf4j
+jcl-over-slf4j
+${org.slf4j.version}
+provided
+
+
+org.slf4j
+log4j-over-slf4j
+${org.slf4j.version}
+provided
+
+
+org.slf4j
+jul-to-slf4j
+${org.slf4j.version}
+provided
+
+
+org.slf4j
+slf4j-api
+${org.slf4j.version}
+provided
+
+
+junit
+junit
+4.12
+
+
+org.mockito
+mockito-core
+1.10.19
+
+
+org.mockito
+mockito-all
+1.10.19
+test
+
+
+org.slf4j
+slf4j-simple
+${org.slf4j.version}
+
+
+org.apache.commons
+commons-compress
+1.11
+
+
+org.apache.commons
+commons-lang3
+3.4
+
+
+org.antlr
+antlr-runtime
+3.5.2
+
+
+org.mongodb
+mongo-java-driver
+3.2.2
+
+
+com.wordnik
+swagger-annotations
+1.5.3-M1
+
+
+org.apache.ignite
+ignite-core
+1.6.0
+
+
+org.apache.ignite
+ignite-spring
+1.6.0
+
+
+org.apache.ignite
+ignite-log4j2
+1.6.0
+
+
+commons-cli
+commons-cli
+1.3.1
+
+
+commons-codec
+commons-codec
+1.10
+
+
+commons-net
+commons-net
+3.3
+
+
+commons-io
+commons-io
+2.5
+
+
+org.bouncycastle
+bcprov-jdk15on
+1.54
+provided
+
+
+org.bouncycastle
+bcpg-jdk15on
+1.54
+
+
+org.bouncycastle
+bcpkix-jdk15on
+1.54
+provided
+
+
+com.jcraft
+jsch
+0.1.52
+
+
+org.apache.httpcomponents
+httpclient
+4.4.1
+
+
+javax.mail
+mail
+1.4.7
+
+
+com.github.jponge
+lzma-java
+1.3
+
+
+org.tukaani
+xz
+1.5
+
+
+net.sf.saxon
+Saxon-HE
+9.6.0-5
+
+
+stax
+stax-api
+1.0.1
+
+
+

[GitHub] nifi issue #1202: NIFI-2854: Refactor repositories and swap files to use sch...

2016-11-14 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@JPercivall I have pushed a new commit that I believe should address your 
feedback. Thanks!




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87826363
  
--- Diff: minifi-assembly/pom.xml ---
@@ -177,23 +182,57 @@ limitations under the License.
 
 
 
+org.bouncycastle
+bcprov-jdk15on
+1.54
+compile
+
+
 org.eclipse.jetty
-jetty-servlet
+jetty-util
+9.3.9.v20160517
 compile
 
 
+org.apache.commons
+commons-lang3
+
+
+org.apache.httpcomponents
+httpclient
+
+
+com.google.guava
+guava
--- End diff --

Is this really provided in NiFi? I didn't see it in the Assembly pom.




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87825541
  
--- Diff: minifi-assembly/pom.xml ---
@@ -150,16 +145,26 @@ limitations under the License.
 
 
 org.apache.nifi
-nifi-framework-core-api
+nifi-ssl-context-service-api
 
 
 org.apache.nifi
-nifi-standard-services-api-nar
-nar
+nifi-framework-api
 
 
 org.apache.nifi
-nifi-ssl-context-service-nar
+nifi-framework-core-api
+
+
+org.apache.nifi.minifi
+minifi-standard-services-api-nar
+0.1.0-SNAPSHOT
--- End diff --

This should probably let the dependency management section from the root 
pom say which version to use. (Same with "minifi-ssl-context-service-nar" below)




[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87824120
  
--- Diff: pom.xml ---
@@ -526,6 +566,1191 @@ limitations under the License.
             <artifactId>okhttp</artifactId>
             <version>3.4.1</version>
         </dependency>
+        <dependency>
+            <groupId>ch.qos.logback</groupId>
+            <artifactId>logback-classic</artifactId>
+            <version>1.1.3</version>
+        </dependency>
+        <dependency>
+            <groupId>ch.qos.logback</groupId>
+            <artifactId>jcl-over-slf4j</artifactId>
+            <version>1.1.3</version>
+            <scope>provided</scope>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.slf4j</groupId>
+                    <artifactId>slf4j-api</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>jcl-over-slf4j</artifactId>
+            <version>${org.slf4j.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>log4j-over-slf4j</artifactId>
+            <version>${org.slf4j.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>jul-to-slf4j</artifactId>
+            <version>${org.slf4j.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+            <version>${org.slf4j.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <version>4.12</version>
+        </dependency>
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-core</artifactId>
+            <version>1.10.19</version>
+        </dependency>
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-all</artifactId>
+            <version>1.10.19</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <version>${org.slf4j.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-compress</artifactId>
+            <version>1.11</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-lang3</artifactId>
+            <version>3.4</version>
+        </dependency>
+        <dependency>
+            <groupId>org.antlr</groupId>
+            <artifactId>antlr-runtime</artifactId>
+            <version>3.5.2</version>
+        </dependency>
+        <dependency>
+            <groupId>org.mongodb</groupId>
+            <artifactId>mongo-java-driver</artifactId>
+            <version>3.2.2</version>
+        </dependency>
+        <dependency>
+            <groupId>com.wordnik</groupId>
+            <artifactId>swagger-annotations</artifactId>
+            <version>1.5.3-M1</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-core</artifactId>
+            <version>1.6.0</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-spring</artifactId>
+            <version>1.6.0</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.ignite</groupId>
+            <artifactId>ignite-log4j2</artifactId>
+            <version>1.6.0</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-cli</groupId>
+            <artifactId>commons-cli</artifactId>
+            <version>1.3.1</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-codec</groupId>
+            <artifactId>commons-codec</artifactId>
+            <version>1.10</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-net</groupId>
+            <artifactId>commons-net</artifactId>
+            <version>3.3</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-io</groupId>
+            <artifactId>commons-io</artifactId>
+            <version>2.5</version>
+        </dependency>
+        <dependency>
+            <groupId>org.bouncycastle</groupId>
+            <artifactId>bcprov-jdk15on</artifactId>
+            <version>1.54</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.bouncycastle</groupId>
+            <artifactId>bcpg-jdk15on</artifactId>
+            <version>1.54</version>
+        </dependency>
+        <dependency>
+            <groupId>org.bouncycastle</groupId>
+            <artifactId>bcpkix-jdk15on</artifactId>
+            <version>1.54</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>com.jcraft</groupId>
+            <artifactId>jsch</artifactId>
+            <version>0.1.52</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.httpcomponents</groupId>
+            <artifactId>httpclient</artifactId>
+            <version>4.4.1</version>
+        </dependency>
+        <dependency>
+            <groupId>javax.mail</groupId>
+            <artifactId>mail</artifactId>
+            <version>1.4.7</version>
+        </dependency>
+        <dependency>
+            <groupId>com.github.jponge</groupId>
+            <artifactId>lzma-java</artifactId>
+            <version>1.3</version>
+        </dependency>
+        <dependency>
+            <groupId>org.tukaani</groupId>
+            <artifactId>xz</artifactId>
+            <version>1.5</version>
+        </dependency>
+        <dependency>
+            <groupId>net.sf.saxon</groupId>
+            <artifactId>Saxon-HE</artifactId>
+            <version>9.6.0-5</version>
+        </dependency>
+        <dependency>
+            <groupId>stax</groupId>
+            <artifactId>stax-api</artifactId>
+            <version>1.0.1</version>
+        </dependency>

[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87834646
  
--- Diff: pom.xml ---
@@ -503,6 +517,32 @@ limitations under the License.
             <artifactId>commons-io</artifactId>
             <version>2.5</version>
         </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-site-to-site-reporting-nar</artifactId>
+            <version>${org.apache.nifi.version}</version>
+            <type>nar</type>
+        </dependency>
--- End diff --

Same comment about setting version/scope in root pom.
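
One detail worth noting when moving a nar version/scope into the root pom: Maven matches managed entries on groupId:artifactId:type(:classifier), so the `dependencyManagement` entry must also declare `<type>nar</type>`, or it will not apply to a nar-typed dependency. A hedged sketch using this diff's artifact (the placement, not the PR's actual change):

```xml
<!-- root pom.xml dependencyManagement (sketch) -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-site-to-site-reporting-nar</artifactId>
    <version>${org.apache.nifi.version}</version>
    <type>nar</type>
</dependency>
```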


---


[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87827505
  
--- Diff: minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/pom.xml ---
@@ -27,6 +27,48 @@ limitations under the License.
 
 
 
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-server</artifactId>
+            <version>9.3.9.v20160517</version>
--- End diff --

Can't these use the version declared in the root level pom?

Also there are extra spaces in these deps that could be fixed.


---


[jira] [Commented] (NIFI-2654) Encrypted configs should handle login identity provider configs

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664321#comment-15664321
 ] 

ASF GitHub Bot commented on NIFI-2654:
--

Github user YolandaMDavis commented on the issue:

https://github.com/apache/nifi/pull/1216
  
@alopresto will review


> Encrypted configs should handle login identity provider configs
> ---
>
> Key: NIFI-2654
> URL: https://issues.apache.org/jira/browse/NIFI-2654
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Configuration, Tools and Build
>Affects Versions: 1.0.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>  Labels: config, encryption, ldap, security
> Fix For: 1.1.0
>
>
> The encrypted configuration tool and internal logic to load unprotected 
> values should handle sensitive values contained in the login identity 
> providers (like LDAP Manager Password).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87826764
  
--- Diff: minifi-docs/src/main/markdown/System_Admin_Guide.md ---
@@ -229,14 +229,13 @@ Option | Description
 -- | ---
 health | The connections's queued bytes and queued FlowFile count.
 bulletins | A list of all the current bulletins (if there are any).
-authorizationIssues | A list of all the current authorization issues (if there are any).
--- End diff --

Thanks for fixing the docs too!


---


[jira] [Commented] (NIFI-2854) Enable repositories to support upgrades and rollback in well defined scenarios

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664338#comment-15664338
 ] 

ASF GitHub Bot commented on NIFI-2854:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1202
  
@JPercivall I have pushed a new commit that I believe should address your 
feedback. Thanks!


> Enable repositories to support upgrades and rollback in well defined scenarios
> --
>
> Key: NIFI-2854
> URL: https://issues.apache.org/jira/browse/NIFI-2854
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Mark Payne
>Assignee: Mark Payne
> Fix For: 1.1.0
>
>
> The flowfile, swapfile, provenance, and content repositories play a very 
> important role in NiFi's ability to be safely upgraded and rolled back.  We 
> need to have well documented behaviors, designs, and version adherence so 
> that users can safely rely on these mechanisms.
> Once this is formalized and in place we should update our versioning guidance 
> to reflect this as well.
> The following would be true from NiFi 1.2.0 onward
> * No changes to how the repositories are persisted to disk can be made which 
> will break forward/backward compatibility and specifically this means that 
> things like the way each is serialized to disk cannot change.
> * If changes are made which impact forward or backward compatibility they 
> should be reserved for major releases only and should include a utility to 
> help users with pre-existing data convert from some older format to the newer 
> format.  It may not be feasible to have rollback on major releases.
> * The content repository should not be changed within a major release cycle 
> in any way that will harm forward or backward compatibility.
> * The flow file repository can change in that new fields can be added to 
> existing write ahead log record types but no fields can be removed nor can 
> any new types be added.  Once a field is considered required it must remain 
> required.  Changes may only be made across minor version changes - not 
> incremental.
> * Swap File storage should follow very similar rules to the flow file 
> repository.  Adding a schema to the swap file header may allow some variation 
> there but the variation should only be hints to optimize how they're 
> processed and not change their behavior otherwise. Changes are only permitted 
> during minor version releases.
> * Provenance repository changes are only permitted during minor version 
> releases.  These changes may include adding or removing fields from existing 
> event types.  If a field is considered required it must always be considered 
> required.  If a field is removed then it must not be a required field and 
> there must be a sensible default an older version could use if that value is 
> not found in new data once rolled back.  New event types may be added.  
> Fields or event types not known to older version, if seen after a rollback, 
> will simply be ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] nifi-minifi pull request #43: MINIFI-115 Upgrade to NiFi 1.0 API

2016-11-14 Thread JPercivall
Github user JPercivall commented on a diff in the pull request:

https://github.com/apache/nifi-minifi/pull/43#discussion_r87832510
  
--- Diff: minifi-nar-bundles/minifi-provenance-reporting-bundle/pom.xml ---
@@ -25,10 +25,17 @@
     <packaging>pom</packaging>
 
     <modules>
-        <module>minifi-provenance-reporting-task</module>
         <module>minifi-provenance-reporting-nar</module>
     </modules>
 
+    <dependencyManagement>
+        <dependencies>
+            <dependency>
+                <groupId>org.apache.nifi</groupId>
+                <artifactId>nifi-site-to-site-reporting-task</artifactId>
+                <version>1.0.0</version>
+            </dependency>
+        </dependencies>
+    </dependencyManagement>
 
--- End diff --

I believe this dependency management section can be removed (or at least 
remove minifi-provenance-reporting-task since it no longer exists).


---


[GitHub] nifi-minifi-cpp pull request #23: MINIFI-131: Provenance Support

2016-11-14 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/23#discussion_r87503452
  
--- Diff: README.md ---
@@ -69,6 +69,7 @@ Perspectives of the role of MiNiFi should be from the 
perspective of the agent a
 * libboost and boost-devel
   * 1.48.0 or greater
 * libxml2 and libxml2-devel
+* leveldb 
--- End diff --

We need to capture this in the LICENSE file


---


[GitHub] nifi-minifi-cpp pull request #23: MINIFI-131: Provenance Support

2016-11-14 Thread apiri
Github user apiri commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/23#discussion_r87836931
  
--- Diff: libminifi/include/Provenance.h ---
@@ -0,0 +1,902 @@
+/**
+ * @file Provenance.h
+ * Flow file record class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __PROVENANCE_H__
+#define __PROVENANCE_H__
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "leveldb/db.h"
+
+#include "TimeUtil.h"
+#include "Logger.h"
+#include "Configure.h"
+#include "Property.h"
+#include "ResourceClaim.h"
+#include "Relationship.h"
+#include "Connection.h"
+#include "FlowFileRecord.h"
+
+// Provenance Event Record Serialization Seg Size
+#define PROVENANCE_EVENT_RECORD_SEG_SIZE 2048
+
+class ProvenanceRepository;
+
+//! Provenance Event Record
+class ProvenanceEventRecord
+{
+public:
+   enum ProvenanceEventType {
+
+   /**
+    * A CREATE event is used when a FlowFile is generated from data that was
+    * not received from a remote system or external process
+    */
+   CREATE,
+
+   /**
+    * Indicates a provenance event for receiving data from an external process. This Event Type
+    * is expected to be the first event for a FlowFile. As such, a Processor that receives data
+    * from an external source and uses that data to replace the content of an existing FlowFile
+    * should use the {@link #FETCH} event type, rather than the RECEIVE event type.
+    */
+   RECEIVE,
+
+   /**
+    * Indicates that the contents of a FlowFile were overwritten using the contents of some
+    * external resource. This is similar to the {@link #RECEIVE} event but varies in that
+    * RECEIVE events are intended to be used as the event that introduces the FlowFile into
+    * the system, whereas FETCH is used to indicate that the contents of an existing FlowFile
+    * were overwritten.
+    */
+   FETCH,
+
+   /**
+    * Indicates a provenance event for sending data to an external process
+    */
+   SEND,
+
+   /**
+    * Indicates that the contents of a FlowFile were downloaded by a user or external entity.
+    */
+   DOWNLOAD,
+
+   /**
+    * Indicates a provenance event for the conclusion of an object's life for
+    * some reason other than object expiration
+    */
+   DROP,
+
+   /**
+    * Indicates a provenance event for the conclusion of an object's life due
+    * to the fact that the object could not be processed in a timely manner
+    */
+   EXPIRE,
+
+   /**
+    * FORK is used to indicate that one or more FlowFile was derived from a
+    * parent FlowFile.
+    */
+   FORK,
+
+   /**
+    * JOIN is used to indicate that a single FlowFile is derived from joining
+    * together multiple parent FlowFiles.
+    */
+   JOIN,
+
+   /**
+    * CLONE is used to indicate that a FlowFile is an exact duplicate of its
+    * parent FlowFile.
+    */
+   CLONE,
+
+   /**
+    * CONTENT_MODIFIED is used to indicate that a FlowFile's content was
+    * modified in some way. When using this Event Type, it is advisable to
+    * provide details about how the content is modified.
+    */
+   CONTENT_MODIFIED,
+
+   /**
+    * ATTRIBUTES_MODIFIED is used to indicate that a FlowFile's attributes were
+    * modified in some way. This event is not needed when another event is

[GitHub] nifi-minifi-cpp issue #23: MINIFI-131: Provenance Support

2016-11-14 Thread apiri
Github user apiri commented on the issue:

https://github.com/apache/nifi-minifi-cpp/pull/23
  
@benqiu2016 also, we should update .travis.yml with the leveldb dependency.

I verified this corrects the CI build on a branch at: 
https://github.com/apiri/nifi-minifi-cpp/blob/travis-leveldb/.travis.yml#L38


---


[GitHub] nifi issue #1107: origin/NIFI-1526

2016-11-14 Thread markap14
Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1107
  
@mathiastiberghien my apologies. I didn't see where @pvillard31 had 
mentioned me. I'll review now & if all is good will merge to master this 
afternoon. Thanks!


---


[jira] [Commented] (NIFI-1526) Allow components to provide default values for Yield Duration and Run Schedule

2016-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664464#comment-15664464
 ] 

ASF GitHub Bot commented on NIFI-1526:
--

Github user markap14 commented on the issue:

https://github.com/apache/nifi/pull/1107
  
@mathiastiberghien my apologies. I didn't see where @pvillard31 had 
mentioned me. I'll review now & if all is good will merge to master this 
afternoon. Thanks!


> Allow components to provide default values for Yield Duration and Run Schedule
> --
>
> Key: NIFI-1526
> URL: https://issues.apache.org/jira/browse/NIFI-1526
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Priority: Minor
> Fix For: 1.1.0
>
>
> It would be nice for developers of processors (and maybe reporting tasks and 
> controller services) to be able to specify a default value for Yield duration 
> and Run Schedule.
> Currently Yield defaults to 1 second and Run Schedule defaults to 0 seconds. 
> There may be cases where these are not the best default values and the 
> developer wants to start off with better defaults, still allowing the user to 
> tune as needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

