[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36270771
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/s3/TestDeleteS3Object.java ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+@Ignore("For local testing only - interacts with S3 so the credentials file must be configured and all necessary buckets created")
+public class TestDeleteS3Object {
+
+private final String CREDENTIALS_FILE = System.getProperty("user.home") + "/aws-credentials.properties";
+
+// When you want to test this, you should create a bucket on Amazon S3 as follows.
+private final String TEST_REGION = "us-east-1";
+private final String TEST_BUCKET = "test-bucket-----1234567890123";
+
+@Before
--- End diff --

Do you have any ideas on how to prepare data for the test?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36270440
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Delete"})
+@CapabilityDescription("Deletes FlowFiles on an Amazon S3 Bucket")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
+value = "The value of a User-Defined Metadata field to add to the S3 Object",
+description = "Allows user-defined metadata to be added to the S3 object as key/value pairs",
+supportsExpressionLanguage = true)
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
+@WritesAttributes({
+@WritesAttribute(attribute = "s3.bucket", description = "The name of the S3 bucket"),
+@WritesAttribute(attribute = "s3.version", description = "The version of the S3 Object"),
+@WritesAttribute(attribute = "path", description = "The path of the file"),
+})
+public class DeleteS3Object extends AbstractS3Processor {
+
+public static final PropertyDescriptor VERSION_ID = new PropertyDescriptor.Builder()
+.name("Version")
+.description("The Version of the Object to delete")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(true)
+.required(false)
+.build();
+
+public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
+Arrays.asList(KEY, BUCKET, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, REGION, TIMEOUT,
+FULL_CONTROL_USER_LIST, READ_USER_LIST, WRITE_USER_LIST, READ_ACL_LIST, WRITE_ACL_LIST, OWNER));
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return properties;
+}
+
+@Override
+protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(true)
+.dynamic(true)
+.build();
+}
+
+public void onTrigger(final ProcessContext context, final ProcessSession session) {
+FlowFile flowFile = session.get();
+if (flowFile == null) {
+return;
+}
+
+final long startNanos = System.nanoTime();
+
+final String bucket = context.getProperty(BUCKET).evaluateAttributeExpressions(flowFile).getValue();
+final String key = context.getProperty(KEY).evaluateAttributeExpressions(flowFile).getValue();
+final String versionId = context.getProperty(VERSION_ID).evaluateAttributeExpressions(flowFile).getValue();
+
+   

[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36270059
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java ---

[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36269663
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java ---

[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36269143
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java ---
+protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
--- End diff --

When I removed the method, I got an `IllegalStateException`.

```
java.lang.AssertionError: java.lang.IllegalStateException: Attempting to 
Evaluate Expressions but PropertyDescriptor[Version] indicates that the 
Expression Language is not supported. If you realize that this is the case and 
do not want this error to occur, it can be disabled by calling 
TestRunner.setValidateExpressionUsage(false)
```
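For context, the exception above comes from the NiFi mock test framework when a test evaluates Expression Language against a property whose descriptor does not declare support for it. A minimal sketch of the workaround named in the message itself (assuming the standard `nifi-mock` `TestRunner` API; `DeleteS3Object` is the processor from the diff above, and this snippet requires the nifi-mock test dependency on the classpath):

```java
import org.apache.nifi.util.TestRunner;
import org.apache.nifi.util.TestRunners;

public class TestDeleteS3ObjectSketch {
    public static void main(String[] args) {
        final TestRunner runner = TestRunners.newTestRunner(new DeleteS3Object());
        // Disable the expression-usage validation that raises the
        // IllegalStateException quoted above, as the message suggests.
        runner.setValidateExpressionUsage(false);
    }
}
```

The cleaner fix, of course, is to keep `getSupportedDynamicPropertyDescriptor` (or mark the property with `expressionLanguageSupported(true)`) so validation passes without being disabled.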




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36267548
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java ---
+@WritesAttributes({
--- End diff --

I got it.




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36267522
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java ---
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
--- End diff --

I got it.




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread yu-iskw
Github user yu-iskw commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36267498
  
--- Diff: nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java ---
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
--- End diff --

Alright.




Re: heads-up: releases missing from archive.apache

2015-08-04 Thread Joe Witt
Ok - well thanks for catching that and flagging it.

On Tue, Aug 4, 2015 at 10:22 PM, Sean Busbey  wrote:
> AFAIK going from the active dist area to archive is all automated. I think
> making the directory on the initial transition just got missed.
>
> On Tue, Aug 4, 2015 at 9:17 PM, Joe Witt  wrote:
>
>> Thanks Sean.  Any sort of release process snafu on our end or just a
>> wiring issue on the infra side they will need to sort out?
>>
>> On Tue, Aug 4, 2015 at 9:49 PM, Sean Busbey  wrote:
>> > FYI, nifi releases aren't showing up on archive.apache. I filed a ticket
>> > with INFRA already:
>> >
>> > https://issues.apache.org/jira/browse/INFRA-10080
>> >
>> > --
>> > Sean
>>
>
>
>
> --
> Sean


Re: heads-up: releases missing from archive.apache

2015-08-04 Thread Sean Busbey
AFAIK going from the active dist area to archive is all automated. I think
making the directory on the initial transition just got missed.

On Tue, Aug 4, 2015 at 9:17 PM, Joe Witt  wrote:

> Thanks Sean.  Any sort of release process snafu on our end or just a
> wiring issue on the infra side they will need to sort out?
>
> On Tue, Aug 4, 2015 at 9:49 PM, Sean Busbey  wrote:
> > FYI, nifi releases aren't showing up on archive.apache. I filed a ticket
> > with INFRA already:
> >
> > https://issues.apache.org/jira/browse/INFRA-10080
> >
> > --
> > Sean
>



-- 
Sean


Re: heads-up: releases missing from archive.apache

2015-08-04 Thread Joe Witt
Thanks Sean.  Any sort of release process snafu on our end or just a
wiring issue on the infra side they will need to sort out?

On Tue, Aug 4, 2015 at 9:49 PM, Sean Busbey  wrote:
> FYI, nifi releases aren't showing up on archive.apache. I filed a ticket
> with INFRA already:
>
> https://issues.apache.org/jira/browse/INFRA-10080
>
> --
> Sean


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Joe Witt
https://issues.apache.org/jira/browse/NIFI-812

On Tue, Aug 4, 2015 at 12:52 PM, Joe Witt  wrote:
> Again with adam.  Will make a jira and knock that out.  Will also pursue
> what bryan mentioned though that will take longer
>
> On Aug 4, 2015 12:44 PM, "Adam Taft"  wrote:
>>
>> One option I think we kicked around at some point was to capture the
>> response body as a flowfile attribute in the original flowfile.  For
>> reasonably sized response bodies, this would work OK.  It would be a nice
>> way to handle your situation, because then the response becomes an
>> attribute of the request.
>>
>> This would obviously take a code change, but adding a property to the
>> effect of "Capture response body as flowfile attribute" might be a nice
>> feature.
>>
>>
>> On Tue, Aug 4, 2015 at 11:57 AM, steveM 
>> wrote:
>>
>> > My use case is I pull the doc id from the file, call a web service with
>> > that
>> > id. The service responds with json that I would then parse to determine
>> > where to route the document next. Sometimes the document might be new,
>> > sometimes an update is allowed, sometimes duplicates need to be put
>> > somewhere else.
>> > I was hoping I was missing something that allowed you to handle a
>> > response
>> > and just add an attribute to the original file (or something similar to
>> > handle this case).
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> >
>> > http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2343.html
>> > Sent from the Apache NiFi (incubating) Developer List mailing list
>> > archive
>> > at Nabble.com.
>> >


heads-up: releases missing from archive.apache

2015-08-04 Thread Sean Busbey
FYI, nifi releases aren't showing up on archive.apache. I filed a ticket
with INFRA already:

https://issues.apache.org/jira/browse/INFRA-10080

-- 
Sean


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Joe Witt
Agree with Adam.  Will make a JIRA and knock that out.  Will also pursue
what Bryan mentioned, though that will take longer.
On Aug 4, 2015 12:44 PM, "Adam Taft"  wrote:

> One option I think we kicked around at some point was to capture the
> response body as a flowfile attribute in the original flowfile.  For
> reasonably sized response bodies, this would work OK.  It would be a nice
> way to handle your situation, because then the response becomes an
> attribute of the request.
>
> This would obviously take a code change, but adding a property to the
> effect of "Capture response body as flowfile attribute" might be a nice
> feature.
>
>
> On Tue, Aug 4, 2015 at 11:57 AM, steveM 
> wrote:
>
> > My use case is I pull the doc id from the file, call a web service with
> > that
> > id. The service responds with json that I would then parse to determine
> > where to route the document next. Sometimes the document might be new,
> > sometimes an update is allowed, sometimes duplicates need to be put
> > somewhere else.
> > I was hoping I was missing something that allowed you to handle a
> response
> > and just add an attribute to the original file (or something similar to
> > handle this case).
> >
> >
> >
> > --
> > View this message in context:
> >
> http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2343.html
> > Sent from the Apache NiFi (incubating) Developer List mailing list
> archive
> > at Nabble.com.
> >
>


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Adam Taft
One option I think we kicked around at some point was to capture the
response body as a flowfile attribute in the original flowfile.  For
reasonably sized response bodies, this would work OK.  It would be a nice
way to handle your situation, because then the response becomes an
attribute of the request.

This would obviously take a code change, but adding a property to the
effect of "Capture response body as flowfile attribute" might be a nice
feature.
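For illustration only, the suggested behavior could be sketched as follows. This is a stand-alone toy, not the actual InvokeHTTP implementation, and the property and attribute names ("Capture response body as flowfile attribute", "invokehttp.response.body") are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: if a "Capture response body as flowfile attribute" property were
// enabled, the processor could copy the response body onto the original
// FlowFile's attribute map instead of routing a separate response FlowFile.
// All names here are illustrative assumptions.
public class ResponseCaptureSketch {

    static Map<String, String> captureResponse(Map<String, String> attrs,
                                               String body, boolean capture) {
        Map<String, String> out = new HashMap<>(attrs);
        if (capture) {
            // store the (reasonably small) response body as an attribute
            out.put("invokehttp.response.body", body);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("filename", "doc-1.json");
        Map<String, String> updated =
                captureResponse(attrs, "{\"status\":\"duplicate\"}", true);
        // the original attributes are preserved and the body is added
        System.out.println(updated.get("invokehttp.response.body"));
    }
}
```

Downstream, a RouteOnAttribute processor could then route the original FlowFile on that attribute's contents.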


On Tue, Aug 4, 2015 at 11:57 AM, steveM  wrote:

> My use case is I pull the doc id from the file, call a web service with
> that
> id. The service responds with json that I would then parse to determine
> where to route the document next. Sometimes the document might be new,
> sometimes an update is allowed, sometimes duplicates need to be put
> somewhere else.
> I was hoping I was missing something that allowed you to handle a response
> and just add an attribute to the original file (or something similar to
> handle this case).
>
>
>
>


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread steveM
My use case: I pull the doc id from the file and call a web service with that
id. The service responds with JSON that I would then parse to determine
where to route the document next. Sometimes the document might be new,
sometimes an update is allowed, sometimes duplicates need to be put
somewhere else.
I was hoping I was missing something that allowed you to handle a response
and just add an attribute to the original file (or something similar to
handle this case).





[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread rahst12
Github user rahst12 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36206754
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Delete"})
+@CapabilityDescription("Deletes FlowFiles on an Amazon S3 Bucket")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
+        value = "The value of a User-Defined Metadata field to add to the S3 Object",
+        description = "Allows user-defined metadata to be added to the S3 object as key/value pairs",
+        supportsExpressionLanguage = true)
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
+@WritesAttributes({
+        @WritesAttribute(attribute = "s3.bucket", description = "The name of the S3 bucket"),
+        @WritesAttribute(attribute = "s3.version", description = "The version of the S3 Object"),
+        @WritesAttribute(attribute = "path", description = "The path of the file"),
+})
+public class DeleteS3Object extends AbstractS3Processor {
+
+    public static final PropertyDescriptor VERSION_ID = new PropertyDescriptor.Builder()
+            .name("Version")
+            .description("The Version of the Object to delete")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .expressionLanguageSupported(true)
+            .required(false)
+            .build();
+
+    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
+            Arrays.asList(KEY, BUCKET, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, REGION, TIMEOUT,
+                    FULL_CONTROL_USER_LIST, READ_USER_LIST, WRITE_USER_LIST, READ_ACL_LIST, WRITE_ACL_LIST, OWNER));
+
+    @Override
+    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+        return properties;
+    }
+
+    @Override
+    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+        return new PropertyDescriptor.Builder()
+                .name(propertyDescriptorName)
+                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+                .expressionLanguageSupported(true)
+                .dynamic(true)
+                .build();
+    }
+
+    public void onTrigger(final ProcessContext context, final ProcessSession session) {
+        FlowFile flowFile = session.get();
+        if (flowFile == null) {
+            return;
+        }
+
+        final long startNanos = System.nanoTime();
+
+        final String bucket = context.getProperty(BUCKET).evaluateAttributeExpressions(flowFile).getValue();
+        final String key = context.getProperty(KEY).evaluateAttributeExpressions(flowFile).getValue();
+        final String versionId = context.getProperty(VERSION_ID).evaluateAttributeExpressions(flowFile).getValue();
+
+   

[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36206528
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/s3/TestDeleteS3Object.java
 ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+@Ignore("For local testing only - interacts with S3 so the credentials file must be configured and all necessary buckets created")
+public class TestDeleteS3Object {
+
+    private final String CREDENTIALS_FILE = System.getProperty("user.home") + "/aws-credentials.properties";
+
+    // When you want to test this, you should create a bucket on Amazon S3 as follows.
+    private final String TEST_REGION = "us-east-1";
+    private final String TEST_BUCKET = "test-bucket-----1234567890123";
+
+    @Before
+    public void setUp() throws IOException, URISyntaxException {
+        final TestRunner runner = TestRunners.newTestRunner(new PutS3Object());
+        runner.setProperty(PutS3Object.REGION, "ap-northeast-1");
+        runner.setProperty(PutS3Object.CREDENTAILS_FILE, CREDENTIALS_FILE);
+        runner.setProperty(PutS3Object.BUCKET, TEST_BUCKET);
+
+        final Map<String, String> attrs = new HashMap<>();
+        attrs.put("filename", "hello.txt");
+        URL file = this.getClass().getClassLoader().getResource("hello.txt");
+        runner.enqueue(Paths.get(file.toURI()), attrs);
+        runner.run(1);
+    }
+
+    @Test
+    public void testSimpleDelete() throws IOException {
+        DeleteS3Object deleter = new DeleteS3Object();
+        final TestRunner runner = TestRunners.newTestRunner(deleter);
+        runner.setProperty(DeleteS3Object.CREDENTAILS_FILE, CREDENTIALS_FILE);
+        runner.setProperty(DeleteS3Object.REGION, TEST_REGION);
+        runner.setProperty(DeleteS3Object.BUCKET, TEST_BUCKET);
--- End diff --

Doesn't the KEY need to be set, so you can specify which object to delete?
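As context for the question: a required property that is never set is exactly what processor validation should flag. A minimal stand-alone sketch of that kind of check (illustrative only, not NiFi's actual validator; property names are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of required-property validation: any required name missing from the
// configured map is reported, mirroring why an unset KEY would leave the
// processor invalid (it would not know which object to delete).
public class RequiredPropsSketch {

    static List<String> missingRequired(Map<String, String> configured,
                                        List<String> required) {
        List<String> missing = new ArrayList<>();
        for (String name : required) {
            String value = configured.get(name);
            if (value == null || value.isEmpty()) {
                missing.add(name);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        Map<String, String> configured = new HashMap<>();
        configured.put("Bucket", "test-bucket");
        configured.put("Region", "us-east-1");
        // "Object Key" was never set, so it is reported as missing
        List<String> missing = missingRequired(configured,
                List.of("Bucket", "Region", "Object Key"));
        System.out.println(missing);
    }
}
```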


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36206235
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/test/java/org/apache/nifi/processors/aws/s3/TestDeleteS3Object.java
 ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.net.URISyntaxException;
+import java.net.URL;
+import java.nio.file.Paths;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+@Ignore("For local testing only - interacts with S3 so the credentials file must be configured and all necessary buckets created")
+public class TestDeleteS3Object {
+
+    private final String CREDENTIALS_FILE = System.getProperty("user.home") + "/aws-credentials.properties";
+
+    // When you want to test this, you should create a bucket on Amazon S3 as follows.
+    private final String TEST_REGION = "us-east-1";
+    private final String TEST_BUCKET = "test-bucket-----1234567890123";
+
+    @Before
--- End diff --

I'm not sure you need this setUp() method, because you are doing the same 
sort of thing again in your testSimpleDelete() method




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36205772
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Delete"})
+@CapabilityDescription("Deletes FlowFiles on an Amazon S3 Bucket")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
+        value = "The value of a User-Defined Metadata field to add to the S3 Object",
+        description = "Allows user-defined metadata to be added to the S3 object as key/value pairs",
+        supportsExpressionLanguage = true)
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
+@WritesAttributes({
+        @WritesAttribute(attribute = "s3.bucket", description = "The name of the S3 bucket"),
+        @WritesAttribute(attribute = "s3.version", description = "The version of the S3 Object"),
+        @WritesAttribute(attribute = "path", description = "The path of the file"),
+})
+public class DeleteS3Object extends AbstractS3Processor {
+
+    public static final PropertyDescriptor VERSION_ID = new PropertyDescriptor.Builder()
+            .name("Version")
+            .description("The Version of the Object to delete")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .expressionLanguageSupported(true)
+            .required(false)
+            .build();
+
+    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
+            Arrays.asList(KEY, BUCKET, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, REGION, TIMEOUT,
--- End diff --

Shouldn't this list contain the VERSION_ID PropertyDescriptor?
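To illustrate why this matters: a NiFi processor only exposes the descriptors returned from its supported list, so a descriptor left out of that list is effectively invisible to users even though the constant exists. A stand-alone sketch of that behavior (names are illustrative, not the real descriptor objects):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch: the "supported" list is the processor's public surface. A
// "Version" entry omitted from the list cannot be configured at all,
// which is the gap the review comment points out.
public class SupportedListSketch {

    static final String VERSION = "Version";

    static List<String> supported(boolean includeVersion) {
        if (includeVersion) {
            return Collections.unmodifiableList(
                    Arrays.asList("Object Key", "Bucket", VERSION));
        }
        return Collections.unmodifiableList(
                Arrays.asList("Object Key", "Bucket"));
    }

    public static void main(String[] args) {
        // without VERSION in the list, the property cannot be configured
        System.out.println(supported(false).contains(VERSION));
        System.out.println(supported(true).contains(VERSION));
    }
}
```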




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36205680
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Delete"})
+@CapabilityDescription("Deletes FlowFiles on an Amazon S3 Bucket")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
+        value = "The value of a User-Defined Metadata field to add to the S3 Object",
+        description = "Allows user-defined metadata to be added to the S3 object as key/value pairs",
+        supportsExpressionLanguage = true)
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
+@WritesAttributes({
+        @WritesAttribute(attribute = "s3.bucket", description = "The name of the S3 bucket"),
+        @WritesAttribute(attribute = "s3.version", description = "The version of the S3 Object"),
+        @WritesAttribute(attribute = "path", description = "The path of the file"),
+})
+public class DeleteS3Object extends AbstractS3Processor {
+
+    public static final PropertyDescriptor VERSION_ID = new PropertyDescriptor.Builder()
+            .name("Version")
+            .description("The Version of the Object to delete")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .expressionLanguageSupported(true)
+            .required(false)
--- End diff --

Why wouldn't this field be required?
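One plausible answer is that S3 supports both a plain delete (no version) and a versioned delete, so the property is naturally optional. A stand-alone sketch of that branching (this mirrors the AWS SDK's DeleteObjectRequest/DeleteVersionRequest split, but the method below is an illustration, not NiFi or SDK code):

```java
// Sketch: an empty or null Version falls back to a plain delete of the
// current object; a non-empty Version targets one specific object version.
public class DeleteModeSketch {

    static String deleteRequestFor(String bucket, String key, String versionId) {
        if (versionId == null || versionId.isEmpty()) {
            // no version supplied: delete the current object
            return "DeleteObjectRequest(" + bucket + ", " + key + ")";
        }
        // version supplied: delete exactly that version
        return "DeleteVersionRequest(" + bucket + ", " + key + ", " + versionId + ")";
    }

    public static void main(String[] args) {
        System.out.println(deleteRequestFor("my-bucket", "hello.txt", null));
        System.out.println(deleteRequestFor("my-bucket", "hello.txt", "v123"));
    }
}
```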




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36204940
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Delete"})
+@CapabilityDescription("Deletes FlowFiles on an Amazon S3 Bucket")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
+        value = "The value of a User-Defined Metadata field to add to the S3 Object",
+        description = "Allows user-defined metadata to be added to the S3 object as key/value pairs",
+        supportsExpressionLanguage = true)
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
+@WritesAttributes({
+        @WritesAttribute(attribute = "s3.bucket", description = "The name of the S3 bucket"),
+        @WritesAttribute(attribute = "s3.version", description = "The version of the S3 Object"),
+        @WritesAttribute(attribute = "path", description = "The path of the file"),
+})
+public class DeleteS3Object extends AbstractS3Processor {
+
+    public static final PropertyDescriptor VERSION_ID = new PropertyDescriptor.Builder()
+            .name("Version")
+            .description("The Version of the Object to delete")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .expressionLanguageSupported(true)
+            .required(false)
+            .build();
+
+    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
+            Arrays.asList(KEY, BUCKET, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, REGION, TIMEOUT,
+                    FULL_CONTROL_USER_LIST, READ_USER_LIST, WRITE_USER_LIST, READ_ACL_LIST, WRITE_ACL_LIST, OWNER));
+
+    @Override
+    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+        return properties;
+    }
+
+    @Override
+    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+        return new PropertyDescriptor.Builder()
+                .name(propertyDescriptorName)
+                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+                .expressionLanguageSupported(true)
+                .dynamic(true)
+                .build();
+    }
+
+    public void onTrigger(final ProcessContext context, final ProcessSession session) {
+        FlowFile flowFile = session.get();
+        if (flowFile == null) {
+            return;
+        }
+
+        final long startNanos = System.nanoTime();
+
+        final String bucket = context.getProperty(BUCKET).evaluateAttributeExpressions(flowFile).getValue();
+        final String key = context.getProperty(KEY).evaluateAttributeExpressions(flowFile).getValue();
+        final String versionId = context.getProperty(VERSION_ID).evaluateAttributeExpressions(flowFile).getValue();
+
+  

[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36204625
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Delete"})
+@CapabilityDescription("Deletes FlowFiles on an Amazon S3 Bucket")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add to the S3 Object",
+        value = "The value of a User-Defined Metadata field to add to the S3 Object",
+        description = "Allows user-defined metadata to be added to the S3 object as key/value pairs",
+        supportsExpressionLanguage = true)
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's filename as the filename for the S3 object")
+@WritesAttributes({
+        @WritesAttribute(attribute = "s3.bucket", description = "The name of the S3 bucket"),
+        @WritesAttribute(attribute = "s3.version", description = "The version of the S3 Object"),
+        @WritesAttribute(attribute = "path", description = "The path of the file"),
+})
+public class DeleteS3Object extends AbstractS3Processor {
+
+    public static final PropertyDescriptor VERSION_ID = new PropertyDescriptor.Builder()
+            .name("Version")
+            .description("The Version of the Object to delete")
+            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+            .expressionLanguageSupported(true)
+            .required(false)
+            .build();
+
+    public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
+            Arrays.asList(KEY, BUCKET, ACCESS_KEY, SECRET_KEY, CREDENTAILS_FILE, REGION, TIMEOUT,
+                    FULL_CONTROL_USER_LIST, READ_USER_LIST, WRITE_USER_LIST, READ_ACL_LIST, WRITE_ACL_LIST, OWNER));
+
+    @Override
+    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+        return properties;
+    }
+
+    @Override
+    protected PropertyDescriptor getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+        return new PropertyDescriptor.Builder()
+                .name(propertyDescriptorName)
+                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+                .expressionLanguageSupported(true)
+                .dynamic(true)
+                .build();
+    }
+
+    public void onTrigger(final ProcessContext context, final ProcessSession session) {
+        FlowFile flowFile = session.get();
+        if (flowFile == null) {
+            return;
+        }
+
+        final long startNanos = System.nanoTime();
+
+        final String bucket = context.getProperty(BUCKET).evaluateAttributeExpressions(flowFile).getValue();
+        final String key = context.getProperty(KEY).evaluateAttributeExpressions(flowFile).getValue();
+        final String versionId = context.getProperty(VERSION_ID).evaluateAttributeExpressions(flowFile).getValue();
+
+  

[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36204722
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.aws.s3;
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.model.*;
+import org.apache.nifi.annotation.behavior.*;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+
+@SupportsBatching
+@SeeAlso({PutS3Object.class})
+@Tags({"Amazon", "S3", "AWS", "Archive", "Delete"})
+@CapabilityDescription("Deletes FlowFiles on an Amazon S3 Bucket")
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add 
to the S3 Object",
+value = "The value of a User-Defined Metadata field to add to the 
S3 Object",
+description = "Allows user-defined metadata to be added to the S3 
object as key/value pairs",
+supportsExpressionLanguage = true)
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's 
filename as the filename for the S3 object")
+@WritesAttributes({
+@WritesAttribute(attribute = "s3.bucket", description = "The name 
of the S3 bucket"),
+@WritesAttribute(attribute = "s3.version", description = "The 
version of the S3 Object"),
+@WritesAttribute(attribute = "path", description = "The path of 
the file"),
+})
+public class DeleteS3Object extends AbstractS3Processor {
+
+public static final PropertyDescriptor VERSION_ID = new 
PropertyDescriptor.Builder()
+.name("Version")
+.description("The Version of the Object to delete")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(true)
+.required(false)
+.build();
+
+public static final List<PropertyDescriptor> properties = Collections.unmodifiableList(
+Arrays.asList(KEY, BUCKET, ACCESS_KEY, SECRET_KEY, 
CREDENTAILS_FILE, REGION, TIMEOUT,
+FULL_CONTROL_USER_LIST, READ_USER_LIST, 
WRITE_USER_LIST, READ_ACL_LIST, WRITE_ACL_LIST, OWNER));
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return properties;
+}
+
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
+return new PropertyDescriptor.Builder()
+.name(propertyDescriptorName)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.expressionLanguageSupported(true)
+.dynamic(true)
+.build();
+}
+
+public void onTrigger(final ProcessContext context, final 
ProcessSession session) {
+FlowFile flowFile = session.get();
+if (flowFile == null) {
+return;
+}
+
+final long startNanos = System.nanoTime();
+
+final String bucket = 
context.getProperty(BUCKET).evaluateAttributeExpressions(flowFile).getValue();
+final String key = 
context.getProperty(KEY).evaluateAttributeExpressions(flowFile).getValue();
+final String versionId = 
context.getProperty(VERSION_ID).evaluateAttributeExpressions(flowFile).getValue();
+
+  
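[Editor's note: the quoted diff ends before the delete call itself. A plausible continuation, sketched against a minimal stand-in for the AmazonS3 client, would branch on whether a version id was supplied. The `S3Deleter` interface and recorder harness below are illustrative only, not part of the PR:]

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteSketch {

    // Stand-in for the two AmazonS3 delete calls the processor would use.
    interface S3Deleter {
        void deleteObject(String bucket, String key);
        void deleteVersion(String bucket, String key, String versionId);
    }

    // Mirrors the likely branch in onTrigger: a blank Version property means
    // a plain delete; otherwise delete that specific object version.
    static void delete(S3Deleter s3, String bucket, String key, String versionId) {
        if (versionId == null || versionId.isEmpty()) {
            s3.deleteObject(bucket, key);
        } else {
            s3.deleteVersion(bucket, key, versionId);
        }
    }

    public static void main(String[] args) {
        final List<String> calls = new ArrayList<>();
        // Recording stub so the branch can be observed without touching S3.
        S3Deleter recorder = new S3Deleter() {
            public void deleteObject(String b, String k) { calls.add("object:" + b + "/" + k); }
            public void deleteVersion(String b, String k, String v) { calls.add("version:" + b + "/" + k + "@" + v); }
        };
        delete(recorder, "test-bucket", "delete-me", null);
        delete(recorder, "test-bucket", "delete-me", "abc123");
        System.out.println(String.join(",", calls));
    }
}
```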

RE: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Mark Payne
Steve,

I certainly agree with both Joe & Adam here. I do think that this could in fact 
be done by using
the MergeContent processor. However, it would get pretty confusing and would be 
inefficient
and a bit 'unclean', as you would really be using some processors to achieve
a goal very different from what they were designed for.

That being said, a member of the community, Joe Gresock, submitted a ticket a 
while back [1] that
I think would be a great solution for this use case.

Unfortunately, it's not a processor that is in the baseline right now, but it's 
being worked on. We should
be able to get it into the build pretty soon.

I hope this helps!
-Mark


[1] HoldFile Processor: https://issues.apache.org/jira/browse/NIFI-190


> Date: Tue, 4 Aug 2015 10:45:44 -0400
> Subject: Re: Route Original Flow File Base on InvokeHTTP Response
> From: joe.w...@gmail.com
> To: dev@nifi.apache.org
> CC: d...@nifi.incubator.apache.org
>
> Yep - I'm with Adam's interpretation here.
>
> Steve: For your case can you elaborate on what you'd want to do with
> the content of the web response in deciding how to handle the orig? I
> do think a custom processor would be necessary but if perhaps we can
> add something simple/consistent with the purpose of InvokeHTTP it can
> be avoided. Just need to understand the use case a bit better
>
> Thanks
> Joe
>
> On Tue, Aug 4, 2015 at 10:36 AM, Adam Taft  wrote:
>> Right. Fundamentally, with InvokeHTTP you have two flowfiles involved.
>> The original flowfile which represents the HTTP request, and a newly
>> created "response" flowfile which captures the message body from the
>> response.
>>
>> After InvokeHTTP does its thing, you are effectively left with two
>> flowfiles. They have a common "transaction id" associated to both
>> flowfiles, so it might be possible to use this with, for example,
>> MergeContent and associate the two files back together.
>>
>> If MergeContent can't be made to work, it might require a new custom
>> processor to take the two flowfiles coming out from InvokeHTTP and evaluate
>> them as a single unit.
>>
>> Adam
>>
>>
>> On Tue, Aug 4, 2015 at 10:27 AM, Joe Witt  wrote:
>>
>>> Aldrin, Bryan
>>>
>>> I think the point is to route the orig flowfile based on analyzing the
>>> content of the web response. The setup would be more involved
>>> On Aug 4, 2015 8:52 AM, "Aldrin Piri"  wrote:
>>>
 Steve,

 Building on the template Bryan provided you can route a successful
>>> response
 to perform further inspection. Depending on content any of RouteOnContent
 or EvaluateJsonPath/EvaluateXPath in conjunction with RouteOnAttribute
 could be viable options.

 Let us know if this helps with the use case you are tackling or if there
 are any other questions.

 --
 Sent from my mobile device.
 On Tue, Aug 4, 2015 at 07:32 steveM  wrote:

> Thanks for the quick response. However, my use case requires that I
>>> parse
> the
> actual response (not just make sure the response code is 200) to make a
> decision on routing.
>
>
>
> --
> View this message in context:
>

>>> http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2324.html
> Sent from the Apache NiFi (incubating) Developer List mailing list
 archive
> at Nabble.com.
>

>>>
  

[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36204173
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+@Override
+protected PropertyDescriptor 
getSupportedDynamicPropertyDescriptor(final String propertyDescriptorName) {
--- End diff --

I don't see dynamic properties being read by this processor, I would 
suggest you remove this method override


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36203978
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+@ReadsAttribute(attribute = "filename", description = "Uses the FlowFile's 
filename as the filename for the S3 object")
--- End diff --

I don't see the 'filename' FlowFile attribute being read by this processor, 
I would suggest removing this annotation.




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36204088
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+@WritesAttributes({
--- End diff --

I don't see the 's3.bucket', 's3.version' or 'path' FlowFile attributes 
being written by this processor, I would suggest removing these annotations.




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on a diff in the pull request:

https://github.com/apache/nifi/pull/72#discussion_r36203920
  
--- Diff: 
nifi/nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/DeleteS3Object.java
 ---
@@ -0,0 +1,125 @@
+@DynamicProperty(name = "The name of a User-Defined Metadata field to add 
to the S3 Object",
--- End diff --

I don't see dynamic properties used by this processor, I would suggest 
removing this annotation.




[GitHub] nifi pull request: [NIFI-774] Create a DeleteS3Object Processor

2015-08-04 Thread danbress
Github user danbress commented on the pull request:

https://github.com/apache/nifi/pull/72#issuecomment-127654425
  
Yu,
Thank you for the contribution!  I took a look at your processor and 
noticed a few things I didn't understand.  Perhaps you can elaborate on them.

Dan




Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Joe Witt
Yep - I'm with Adam's interpretation here.

Steve: For your case can you elaborate on what you'd want to do with
the content of the web response in deciding how to handle the orig?  I
do think a custom processor would be necessary but if perhaps we can
add something simple/consistent with the purpose of InvokeHTTP it can
be avoided.  Just need to understand the use case a bit better

Thanks
Joe

On Tue, Aug 4, 2015 at 10:36 AM, Adam Taft  wrote:
> Right.  Fundamentally, with InvokeHTTP you have two flowfiles involved.
> The original flowfile which represents the HTTP request, and a newly
> created "response" flowfile which captures the message body from the
> response.
>
> After InvokeHTTP does its thing, you are effectively left with two
> flowfiles.  They have a common "transaction id" associated to both
> flowfiles, so it might be possible to use this with, for example,
> MergeContent and associate the two files back together.
>
> If MergeContent can't be made to work, it might require a new custom
> processor to take the two flowfiles coming out from InvokeHTTP and evaluate
> them as a single unit.
>
> Adam
>
>
> On Tue, Aug 4, 2015 at 10:27 AM, Joe Witt  wrote:
>
>> Aldrin, Bryan
>>
>> I think the point is to route the orig flowfile based on analyzing the
>> content of the web response.  The setup would be more involved
>> On Aug 4, 2015 8:52 AM, "Aldrin Piri"  wrote:
>>
>> > Steve,
>> >
>> > Building on the template Bryan provided you can route a successful
>> response
>> > to perform further inspection. Depending on content any of RouteOnContent
>> > or EvaluateJsonPath/EvaluateXPath in conjunction with RouteOnAttribute
>> > could be viable options.
>> >
>> > Let us know if this helps with the use case you are tackling or if there
>> > are any other questions.
>> >
>> > --
>> > Sent from my mobile device.
>> > On Tue, Aug 4, 2015 at 07:32 steveM  wrote:
>> >
>> > > Thanks for the quick response. However, my use case requires that I
>> parse
>> > > the
>> > > actual response (not just make sure the response code is 200) to make a
>> > > decision on routing.
>> > >
>> > >
>> > >
>> > > --
>> > > View this message in context:
>> > >
>> >
>> http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2324.html
>> > > Sent from the Apache NiFi (incubating) Developer List mailing list
>> > archive
>> > > at Nabble.com.
>> > >
>> >
>>


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Adam Taft
Right.  Fundamentally, with InvokeHTTP you have two flowfiles involved.
The original flowfile which represents the HTTP request, and a newly
created "response" flowfile which captures the message body from the
response.

After InvokeHTTP does its thing, you are effectively left with two
flowfiles.  They have a common "transaction id" associated to both
flowfiles, so it might be possible to use this with, for example,
MergeContent and associate the two files back together.

If MergeContent can't be made to work, it might require a new custom
processor to take the two flowfiles coming out from InvokeHTTP and evaluate
them as a single unit.

Adam


On Tue, Aug 4, 2015 at 10:27 AM, Joe Witt  wrote:

> Aldrin, Bryan
>
> I think the point is to route the orig flowfile based on analyzing the
> content of the web response.  The setup would be more involved
> On Aug 4, 2015 8:52 AM, "Aldrin Piri"  wrote:
>
> > Steve,
> >
> > Building on the template Bryan provided you can route a successful
> response
> > to perform further inspection. Depending on content any of RouteOnContent
> > or EvaluateJsonPath/EvaluateXPath in conjunction with RouteOnAttribute
> > could be viable options.
> >
> > Let us know if this helps with the use case you are tackling or if there
> > are any other questions.
> >
> > --
> > Sent from my mobile device.
> > On Tue, Aug 4, 2015 at 07:32 steveM  wrote:
> >
> > > Thanks for the quick response. However, my use case requires that I
> parse
> > > the
> > > actual response (not just make sure the response code is 200) to make a
> > > decision on routing.
> > >
> > >
> > >
> > > --
> > > View this message in context:
> > >
> >
> http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2324.html
> > > Sent from the Apache NiFi (incubating) Developer List mailing list
> > archive
> > > at Nabble.com.
> > >
> >
>
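[Editor's note: Adam's suggestion above, correlating the original and response FlowFiles through their shared transaction id attribute, can be sketched in plain Java. The attribute name `invokehttp.tx.id` is what InvokeHTTP stamps on both FlowFiles; the map-based FlowFile stand-in and pairing logic are illustrative only:]

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TxPairing {

    // A FlowFile reduced to its attribute map for this sketch.
    static Map<String, String> flowFile(String txId, String role) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("invokehttp.tx.id", txId); // attribute InvokeHTTP writes to both FlowFiles
        attrs.put("role", role);             // "request" (original) or "response" (new)
        return attrs;
    }

    // Group FlowFiles by shared transaction id, as a custom processor
    // downstream of InvokeHTTP might before evaluating a pair as one unit.
    static Map<String, List<Map<String, String>>> pairByTxId(List<Map<String, String>> flowFiles) {
        Map<String, List<Map<String, String>>> pairs = new LinkedHashMap<>();
        for (Map<String, String> ff : flowFiles) {
            pairs.computeIfAbsent(ff.get("invokehttp.tx.id"), k -> new ArrayList<>()).add(ff);
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<Map<String, String>> ffs = Arrays.asList(
                flowFile("tx-1", "request"), flowFile("tx-2", "request"),
                flowFile("tx-1", "response"), flowFile("tx-2", "response"));
        Map<String, List<Map<String, String>>> pairs = pairByTxId(ffs);
        // Each transaction id now holds its original plus its response.
        System.out.println(pairs.get("tx-1").size());
    }
}
```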


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Joe Witt
Aldrin, Bryan

I think the point is to route the orig flowfile based on analyzing the
content of the web response.  The setup would be more involved
On Aug 4, 2015 8:52 AM, "Aldrin Piri"  wrote:

> Steve,
>
> Building on the template Bryan provided you can route a successful response
> to perform further inspection. Depending on content any of RouteOnContent
> or EvaluateJsonPath/EvaluateXPath in conjunction with RouteOnAttribute
> could be viable options.
>
> Let us know if this helps with the use case you are tackling or if there
> are any other questions.
>
> --
> Sent from my mobile device.
> On Tue, Aug 4, 2015 at 07:32 steveM  wrote:
>
> > Thanks for the quick response. However, my use case requires that I parse
> > the
> > actual response (not just make sure the response code is 200) to make a
> > decision on routing.
> >
> >
> >
> > --
> > View this message in context:
> >
> http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2324.html
> > Sent from the Apache NiFi (incubating) Developer List mailing list
> archive
> > at Nabble.com.
> >
>


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread Aldrin Piri
Steve,

Building on the template Bryan provided you can route a successful response
to perform further inspection. Depending on content any of RouteOnContent
or EvaluateJsonPath/EvaluateXPath in conjunction with RouteOnAttribute
could be viable options.

Let us know if this helps with the use case you are tackling or if there
are any other questions.

--
Sent from my mobile device.
On Tue, Aug 4, 2015 at 07:32 steveM  wrote:

> Thanks for the quick response. However, my use case requires that I parse
> the
> actual response (not just make sure the response code is 200) to make a
> decision on routing.
>
>
>
> --
> View this message in context:
> http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2324.html
> Sent from the Apache NiFi (incubating) Developer List mailing list archive
> at Nabble.com.
>
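[Editor's note: Aldrin's pattern, extracting a value from the response body and then routing on the resulting attribute, reduces to a decision like the one below. The `status` field and route names are hypothetical, and the crude regex stands in for EvaluateJsonPath; real flows would use a proper JsonPath expression:]

```java
public class RouteSketch {

    // Mimics EvaluateJsonPath writing an attribute from the response body,
    // then RouteOnAttribute choosing a relationship from that attribute.
    static String route(String responseBody) {
        String status = responseBody.replaceAll(".*\"status\"\\s*:\\s*\"([^\"]+)\".*", "$1");
        return status.equals("ok") ? "success" : "needs-review";
    }

    public static void main(String[] args) {
        System.out.println(route("{\"status\":\"ok\"}"));        // routed to success
        System.out.println(route("{\"status\":\"degraded\"}"));  // routed to needs-review
    }
}
```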


Re: Route Original Flow File Base on InvokeHTTP Response

2015-08-04 Thread steveM
Thanks for the quick response. However, my use case requires that I parse the
actual response (not just make sure the response code is 200) to make a
decision on routing. 



--
View this message in context: 
http://apache-nifi-incubating-developer-list.39713.n7.nabble.com/Route-Original-Flow-File-Base-on-InvokeHTTP-Response-tp2317p2324.html
Sent from the Apache NiFi (incubating) Developer List mailing list archive at 
Nabble.com.