Re: Storing output of shellscript in s3 bucket

2020-04-11 Thread Andy LoPresto
If that script cannot write directly to STDOUT in this scenario (which the docs 
do not make clear), there are multiple ways to achieve what you need. 

1. Instead of invoking the command directly in the ExecuteStreamCommand
processor, invoke a custom script, e.g. "example_zk_script.sh
zk-source-data.json", where example_zk_script.sh contains:

```
#!/bin/sh
# Send the exported data to ZooKeeper from the file named by the first argument,
# then print that file to STDOUT so NiFi captures it as the flowfile content.
zk-migrator.sh -s -z destinationHostname:destinationClientPort/destinationRootPath/components \
  -f /path/to/export/"$1"
cat /path/to/export/"$1"
```

This script will execute the command you were originally calling (which sends 
data _to_ ZooKeeper that is read _from_ the file location you provided) and 
then output the file contents to STDOUT, which will be captured as the flowfile 
content. 
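For reference, the wrapper could be wired in roughly like this (the script location here is illustrative, not prescribed):

```
ExecuteStreamCommand
  Command Path:      /opt/scripts/example_zk_script.sh   # illustrative path
  Command Arguments: zk-source-data.json
```

The "output stream" relationship then carries the script's STDOUT as the flowfile content.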

2. Read directly from the file using a FetchFile processor. The path to the
file is known in the context of NiFi, either as a static value in the
ExecuteStreamCommand property descriptor or as an attribute or variable, so
another processor can read from that file path directly. The FetchFile
processor replaces a flowfile's content with the content of a file on disk, so
pass the result of the ExecuteStreamCommand to FetchFile with the path carried
in the filename attribute (set it with an UpdateAttribute processor if
necessary), as sketched below.
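A rough sketch of that flow (the attribute values are placeholders; the "File to Fetch" expression shown is, as far as I recall, the processor's default):

```
ExecuteStreamCommand --(original)--> UpdateAttribute --> FetchFile --> PutS3Object

UpdateAttribute (dynamic properties; tell FetchFile where to read):
  absolute.path = /path/to/export          # placeholder directory
  filename      = zk-source-data.json

FetchFile:
  File to Fetch = ${absolute.path}/${filename}
```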

I would recommend the first option as a cleaner and more robust solution. 


Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
He/Him
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69



Re: Storing output of shellscript in s3 bucket

2020-04-10 Thread Joe Witt
If the zk-migrator can be configured to write to stdout instead of a file,
then yes.



Re: Storing output of shellscript in s3 bucket

2020-04-10 Thread sanjeet rath
Hi,

Thanks for your quick reply. Yes, I am using ExecuteStreamCommand to execute
the script below:

zk-migrator.sh -s -z destinationHostname:destinationClientPort/destinationRootPath/components \
  -f /path/to/export/zk-source-data.json

Can the zk-source-data.json file be written as the output flow file of the
above processor? If yes, please let me know how.

Many thanks
sanjeet



Re: Storing output of shellscript in s3 bucket

2020-04-10 Thread Bryan Bende
Hello,

Assuming you are using the ExecuteStreamCommand processor, the output
of the command is written to the flow file content. So if your command
writes the JSON to stdout, it should end up in the flow file.
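As a trivial illustration (the path is just a placeholder), if the configured command were

```
# whatever this prints on STDOUT becomes the flow file content
cat /path/to/export/zk-source-data.json
```

then the JSON would land in the flow file routed to the "output stream" relationship.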

Thanks,

Bryan




Storing output of shellscript in s3 bucket

2020-04-10 Thread sanjeet rath
Hi,

I have a scenario where I have to execute a shell script, the output of the
script is a JSON file, and I want to put that file in an S3 bucket.

I am able to do it by building two flows.

One flow uses ExecuteStreamCommand and stores the JSON file in a folder on the
file system.

Another flow then gets the file from the file system and uses PutS3Object to
put it in the S3 bucket. But splitting the write and the read into separate
flows is a bit risky, and there is no way to make them dependent, since
GetFile is a source processor and cannot have an incoming connection.

Is it possible not to store the JSON file (the output of the shell script) on
the file system, and instead move it to the S3 bucket as a flow file?
That is, one flow that executes the shell script and stores the output in the
S3 bucket.
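In other words, something like this in a single flow (just a sketch of what I
am after; the first processor is whatever triggers the run, e.g.
GenerateFlowFile on a schedule):

```
GenerateFlowFile -> ExecuteStreamCommand -> PutS3Object
```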

Regards,
Sanjeet