Attributes To JSON question

2020-06-03 Thread DAVID SMITH
Hi
I am using the AttributesToJSON processor to create a JSON flowfile which I
then send to Elastic. I have noticed that some of my attributes which are
numbers, such as file size, always come out of the AttributesToJSON processor
as string values (i.e. with double quotes around them); therefore, when I send
my JSON string to Elastic it indexes all the fields as strings and won't let
me do a cumulative count of all file sizes. I have tried casting my file size
with toNumber just before the attributes go into the AttributesToJSON
processor, but it makes no difference. Do you know how I can correct this?
Many thanks, Dave
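For anyone hitting the same thing: flowfile attributes are plain strings internally, so (as far as I can tell) AttributesToJSON quotes every value regardless of any upstream toNumber cast. A hedged sketch of one workaround, coercing numeric-looking values back to numbers in a post-processing step (the function name and sample attributes are mine, not from NiFi):

```python
import json

def coerce_numeric_strings(doc):
    """Return a copy of a flat attribute map with numeric-looking
    string values converted to int or float."""
    out = {}
    for key, value in doc.items():
        if isinstance(value, str):
            try:
                out[key] = int(value)
                continue
            except ValueError:
                pass
            try:
                out[key] = float(value)
                continue
            except ValueError:
                pass
        out[key] = value
    return out

# Example: the kind of output AttributesToJSON produces (all strings).
flowfile_json = '{"filename": "data.bin", "fileSize": "20480"}'
print(json.dumps(coerce_numeric_strings(json.loads(flowfile_json))))
```

An alternative that avoids touching the JSON at all would be an explicit Elasticsearch index mapping that declares the file-size field as a numeric type.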


Merging the unique attributes of 2 flowfiles

2020-06-01 Thread DAVID SMITH
Hi
I have a group of log files coming in via an HTTP listener, up to 30 logs per
transaction, of which I only need the values in 2 of those log files per
transaction. After some RouteOnContent processors I end up with the two log
flowfiles I want.
 In my current flow I am using a MergeContent processor to try to merge the
two required flowfiles on a common ident attribute value which I extracted
from each log file earlier. I have also extracted some other attributes from
the flowfiles at this point, and as everything I am interested in is in these
attributes, I don't mind what happens to the content of the flowfiles. When I
step through the flow all is fine and works as I expect; however, when I run
it at pace, with log files coming in for multiple transactions at the same
time, the merge fails on most occasions.

My MergeContent settings are:
Merge Strategy: Bin Packing Algorithm
Merge Format: Binary Concatenation
Attribute Strategy: Keep All Unique Attributes
Correlation Attribute Name: ${import.ident}
Metadata Strategy: Ignore Metadata
Minimum Number of Entries: 2
Maximum Number of Entries: 2
Max Bin Age: 1 minute
All the other properties are at default.
Have I not set something correctly or is there a simpler way of merging the 
attributes from two flowfiles onto one flowfile?
Many thanks, Dave
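For reference, my reading of the 'Keep All Unique Attributes' strategy, sketched in plain Python (the attribute names other than import.ident are invented for the example): an attribute survives the merge unless two bundled flowfiles carry the same name with different values.

```python
def keep_all_unique_attributes(attr_maps):
    """Sketch of MergeContent's 'Keep All Unique Attributes' strategy:
    keep every attribute except those whose values conflict between
    the merged flowfiles."""
    merged, conflicting = {}, set()
    for attrs in attr_maps:
        for name, value in attrs.items():
            if name in merged and merged[name] != value:
                conflicting.add(name)
            else:
                merged[name] = value
    for name in conflicting:
        del merged[name]
    return merged

# Two log flowfiles from the same transaction share import.ident,
# so it survives; the distinct attributes are simply combined.
log_a = {"import.ident": "TX-1", "source.host": "hostA"}
log_b = {"import.ident": "TX-1", "event.count": "42"}
print(keep_all_unique_attributes([log_a, log_b]))
```

If that semantics is what you want, the merge itself should carry all the attributes of interest onto the single output flowfile.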


Counting the number of occurrences of a phrase

2020-05-25 Thread DAVID SMITH
Good morning devs
I have a flowfile which contains a space separated log file, within this file 
is a section such as:-
FileSelectors:[{fileselector:},{fileselector:},{fileselector:}]
I would like to create an attribute which is the count of the number of
occurrences of the word fileselector, and ideally another attribute which is
the filenames in one list, although that one is just a nice-to-have. I have
made the whole phrase an attribute and then tried various things in the
expression language, but nothing seems to work. Can anyone help please?
Many thanks, Dave
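In case it helps anyone searching the archive: once you step outside pure Expression Language (e.g. into an ExecuteScript step), the count itself is a one-line regex job. A minimal sketch using the sample phrase from the question:

```python
import re

log_fragment = "FileSelectors:[{fileselector:},{fileselector:},{fileselector:}]"

# Count occurrences of the lowercase word; the expression language has
# no global-match count, so a small script is one way to get it.
count = len(re.findall(r"fileselector", log_fragment))
print(count)
```

The resulting number could then be written onto the flowfile as an attribute by the script.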


Re: Record Readers and Writers

2020-04-21 Thread DAVID SMITH
Hi Matt
Thanks for your reply, I will certainly take on board everything you and Andy
advise; I will look at the classes you mentioned and also read the links
provided.
I ran the TestXMLReader as a JUnit test in Eclipse; a sample of the console
output is:
20:15:01.675 [pool-1-thread-1] DEBUG 
org.apache.nifi.schema.access.AvroSchemaTextStrategy - For {path=target, 
filename=253762304418.mockFlowFile, xml.stream.is.array=true, 
uuid=34fb0980-8fc3-4c41-b4f5-3078d26b6f67} found schema text {
  "namespace": "nifi",
  "name": "test",
  "type": "record",
  "fields": [
    { "name": "ID", "type": "string" },
    { "name": "NAME", "type": "string" },
    { "name": "AGE", "type": "int" },
    { "name": "COUNTRY", "type": "string" }
  ]
}


Anyway, thanks again, I have something to go on now.
Dave
   On Tuesday, 21 April 2020, 17:47:21 BST, Andy LoPresto 
 wrote:  
 
 Hi Dave,

The underlying internal “record format” is not JSON. Avro [1] is used to 
describe schemas across all record formats, but the internal data storage is 
NiFi specific. You may be interested in these articles by Mark Payne and Bryan 
Bende [2][3][4] and the potential use of the ScriptedReader [5] or 
ScriptedRecordSetWriter [6] to prototype your needed conversions. 

[1] https://avro.apache.org/ <https://avro.apache.org/>
[2] https://blogs.apache.org/nifi/entry/record-oriented-data-with-nifi 
<https://blogs.apache.org/nifi/entry/record-oriented-data-with-nifi>
[3] https://blogs.apache.org/nifi/entry/real-time-sql-on-event 
<https://blogs.apache.org/nifi/entry/real-time-sql-on-event>
[4] 
https://bryanbende.com/development/2017/06/20/apache-nifi-records-and-schema-registries
 
<https://bryanbende.com/development/2017/06/20/apache-nifi-records-and-schema-registries>
[5] 
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-scripting-nar/1.11.4/org.apache.nifi.record.script.ScriptedReader/index.html
[6] 
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-scripting-nar/1.11.4/org.apache.nifi.record.script.ScriptedRecordSetWriter/index.html

Andy LoPresto
alopre...@apache.org
alopresto.apa...@gmail.com
He/Him
PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4  BACE 3C6E F65B 2F7D EF69

> On Apr 21, 2020, at 6:01 AM, DAVID SMITH  
> wrote:
> 
> Hi
> I want to use the ConvertRecord Processor with it's underlying Record Readers 
> and Writers to convert files from XML or JSON to a bespoke format and 
> probably vice versa.I have looked at the Readers/Writers currently provided 
> and decided that I can use the XML/JSON ones provided but I will need to 
> write something for the bespoke format. So I started looking at the current 
> source code for the Readers/Writers to see how they work and what I would 
> need to do. When running the unit tests on the XMLReader I notice on the 
> console that the output is in JSON format.My question is, is JSON the common 
> format that all records are converted to and from? 
> Also is there any specific documentation on writing Reader/Writers, I have 
> only found the developers guide?
> Many thanksDave  
> 
  

Record Readers and Writers

2020-04-21 Thread DAVID SMITH
Hi
I want to use the ConvertRecord processor with its underlying Record Readers
and Writers to convert files from XML or JSON to a bespoke format, and
probably vice versa. I have looked at the Readers/Writers currently provided
and decided that I can use the XML/JSON ones, but I will need to write
something for the bespoke format. So I started looking at the current source
code for the Readers/Writers to see how they work and what I would need to
do. When running the unit tests on the XMLReader I noticed on the console
that the output is in JSON format. My question is: is JSON the common format
that all records are converted to and from?
Also, is there any specific documentation on writing Readers/Writers? I have
only found the developer's guide.
Many thanks, Dave



Re: Advanced UI button on Update Attributes

2019-01-09 Thread DAVID SMITH
Koji

Thanks for the reply, I found it about 30 minutes before I saw your answer,
and this now works: I have the Advanced UI on my test processor. I am now
amending the UI to try to get it to work/look and feel slightly differently;
however, I now have a couple of questions:

1) I can change the labels for Rules, Conditions and Actions to the ones I
want, but I cannot find where to change the header 'Expression' in the
Conditions panel, or 'Attribute' & 'Value' in the Actions panel. I have done
a grep through the code and cannot find out how these are populated. I have
amended the following couple of lines in the application.js file but it makes
no difference; can anyone help?

  // initialize the actions grid
  var actionsColumns = [
      {id: "start", name: "Start", field: "start", sortable: true, cssClass: 'pointer',
       editor: ua.getCustomLongTextEditor, validator: ua.requiredFieldValidator},
      {id: "stop", name: "Stop", field: "stop", sortable: true, cssClass: 'pointer',
       editor: ua.getCustomLongTextEditor, validator: ua.requiredFieldValidator}
  ];

2) When I bring up the Add Actions dialog, I have changed the labels to Start
and Stop; now I want to change the type of input box used for the value (now
Stop) to the same type that is used for the attribute (now Start). I guess it
is the way the DIV is set up in worksheet.jsp, but any help would be
appreciated.




On Mon, 7/1/19, Koji Kawamura  wrote:

 Subject: Re: Advanced UI button on Update Attributes
 To: "dev" 
 Date: Monday, 7 January, 2019, 3:00
 
 Probably you've already found a solution, but just in case, did you
 update the nifi-processor-configuration file, too?
 nifi-update-attribute-ui/src/main/webapp/META-INF/nifi-processor-configuration
 
 Thanks,
 Koji
 
 On Thu, Jan 3, 2019 at 7:29
 PM DAVID SMITH
 
 wrote:
 >
 > Hi
 > I am looking to create a bespoke processor
 based very loosely on the Update Attribute processor set, I
 have taken a copy of the update attribute bundle and changed
 the name of the bundle and its components to
 nifi-test-build-processor, nifi-test-build-model etc, I have
 also amended the pom files to reference these new names.
 However, when I build the new test processor set, it builds
 and loads in NiFi 1.8.0 but the Advanced button has
 disappeared in the UpdateAttribute processor configuration.
 Can anyone tell me what I have missed, I have done this once
 before but I can't remember what the answer is?
 > Many thanksDave


Advanced UI button on Update Attributes

2019-01-03 Thread DAVID SMITH
Hi
I am looking to create a bespoke processor based very loosely on the Update
Attribute processor set. I have taken a copy of the update attribute bundle,
changed the name of the bundle and its components to nifi-test-build-processor,
nifi-test-build-model etc, and amended the pom files to reference these new
names. However, when I build the new test processor set, it builds and loads
in NiFi 1.8.0 but the Advanced button has disappeared in the UpdateAttribute
processor configuration. Can anyone tell me what I have missed? I have done
this once before but I can't remember what the answer is.
Many thanks, Dave

Re: Help with loading a file into a cache

2018-11-30 Thread DAVID SMITH
Hi 

As requested here is an example file with some redacted data:

ZA105:{"Aircraft Type":"Sea King", "Lifed Items":{ "port engine 
ser#":"RR-P1234", "starboard engine ser#":"RR-S1234","gearboxes ser#":[ 
"WHM1234", "WHI1234", "WHT1234" ] }}
ZA106:{"Aircraft Type":"Sea King", "Lifed Items":{ "port engine 
ser#":"RR-P2345", "starboard engine ser#":"RR-S2345","gearboxes ser#":[ 
"WHM2345", "WHI2345", "WHT2345" ] }}
ZA107:{"Aircraft Type":"Merlin", "Lifed Items":{ "port engine ser#":"RR-P3456", 
"starboard engine ser#":"RR-S3456","centre engine ser#":"RR-C3456","gearboxes 
ser#":[ "WHM3456", "WHI3456", "WHT3456" ] }}
ZA108:{"Aircraft Type":"Merlin", "Lifed Items":{ "port engine ser#":"RR-P4567", 
"starboard engine ser#":"RR-S4567","centre engine ser#":"RR-C4567","gearboxes 
ser#":[ "WHM4567", "WHI4567", "WHT4567" ] }}
ZA109:{"Aircraft Type":"Wessex", "Lifed Items":{ "port engine":"RR-P9876", 
"starboard engine":"RR-S9876","gearboxes":[ "WHM9876", "WHI9876", "WHT9876" ] }}
ZA104:{"Aircraft Type":"Wessex", "Lifed Items":{ "port engine":"RR-P8765", 
"starboard engine":"RR-S8765","gearboxes":[ "WHM8765", "WHI8765", "WHT8765" ] }}
ZA103:{"Aircraft Type":"Wessex", "Lifed Items":{ "port engine":"RR-P7654", 
"starboard engine":"RR-S7654","gearboxes":[ "WHM7654", "WHI7654", "WHT7654" ] }}



What I would like is the aircraft tail number, e.g. ZA104, to be the key of
the cache item, and everything after the colon (the aircraft type and
replaceable items' serial numbers) to be the cached item value. The cached
item value can stay as a JSON string.


Many thanks

Dave
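A hedged sketch of the parsing step being described, splitting each record on the first colon so the tail number becomes the key and the JSON remainder stays the value. It assumes one record per line of the file; the function name is mine:

```python
def load_cache_entries(text):
    """Split each non-empty line on the first colon: the aircraft tail
    number becomes the key, the JSON remainder stays a raw string."""
    cache = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        # partition() splits on the FIRST colon only, so colons inside
        # the JSON value are untouched.
        key, _, value = line.partition(":")
        cache[key] = value
    return cache

sample = 'ZA105:{"Aircraft Type":"Sea King"}\nZA106:{"Aircraft Type":"Sea King"}'
entries = load_cache_entries(sample)
print(entries["ZA105"])
```

In a flow, the same logic could sit in an ExecuteScript step feeding PutDistributedMapCache, rather than splitting the 5GB file into one flowfile per line.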

On Fri, 30/11/18, Mike Thomsen  wrote:

 Subject: Re: Help with loading a file into a cache
 To: dev@nifi.apache.org
 Date: Friday, 30 November, 2018, 15:26
 
 Dave,
 
 Can you post a redacted example with dummy
 data?
 
 Thanks,
 
 Mike
 
 On
 Fri, Nov 30, 2018 at 7:08 AM DAVID SMITH
 
 wrote:
 
 > Hi Devs
 > I am running a NiFi 1.8 cluster, each node
 has 128Gb of Ram. I need to
 > load the
 contents of a file of which is around 5Gb in size  into
 a
 > Key/Value cache.
 >
 The file I want to load is produced by another company so
 the format it
 > comes in is not
 negotiable. The file contains thousands of lines in the
 > following format:-
 >
 :{: , 
 name>:}:{:  value>, :}
 >
 :{: , 
 name>:}
 >
 > I want the index value to become the Key
 and everything  beyond the colon
 > to
 become the value.
 > What would be the
 most efficient way of reading the file, and parsing it
 > to load into a cache, I thought of reading
 in the file, using a split
 > content on
 CR/LF and then splitting on the first colon.I have noticed
 in
 > 1.8 there are some CSV and JSON
 Readers (controller services), would these
 > be a better way of doing this, but the
 problem I can see is that the file
 >
 isn't quite a CSV and it isn't quite a JSON Array
 file.
 > Many thanksDave
 


Help with loading a file into a cache

2018-11-30 Thread DAVID SMITH
Hi Devs
I am running a NiFi 1.8 cluster; each node has 128GB of RAM. I need to load
the contents of a file which is around 5GB in size into a Key/Value cache.
The file I want to load is produced by another company, so the format it
comes in is not negotiable. The file contains thousands of lines in the
following format:
:{: , :}:{: , 
:}
:{: , :}

I want the index value to become the Key and everything beyond the colon to
become the value.
What would be the most efficient way of reading the file and parsing it to
load into a cache? I thought of reading in the file, using a split content on
CR/LF and then splitting on the first colon. I have noticed in 1.8 there are
some CSV and JSON Readers (controller services); would these be a better way
of doing this? The problem I can see is that the file isn't quite a CSV and
it isn't quite a JSON array file.
Many thanks, Dave

Re: Accumulo processors

2018-05-27 Thread DAVID SMITH
OK, thanks, I will keep that in mind when I (or whoever on my team writes
these) get to a point where we would like a review before trying to submit
back to open source.
Dave
 

On Sunday, 27 May 2018, 21:51, Mike Thomsen <mikerthom...@gmail.com> wrote:
 

 If you do a PR, ping @joshelser. He works on HBase and Accumulo and
mentioned to me that he might be up for a code review on the Accumulo
processors.

On Sun, May 27, 2018 at 4:38 PM DAVID SMITH
<davidrsm...@btinternet.com.invalid> wrote:

> Mike
> Thanks for the suggestion I will certainly have a look at the HBase code,
> I'm not really familiar with Accumulo, I am currently reading the docs on
> the apache web site.
> Dave
>
>
>    On Sunday, 27 May 2018, 21:25, Mike Thomsen <mikerthom...@gmail.com>
> wrote:
>
>
>  Dave,
>
> I don't know how far along Mark's code is, but you might find some useful
> code you can borrow from the HBase commit(s) that added visibility label
> support. For example, there is code for handling visibility labels w/
> PutHBaseRecord that might be reusable if you want to create a put record
> processor for Accumulo. It won't help you with the Accumulo APIs, but it
> could provide useful strategies for how to identify and assign labels from
> user input.
>
> On Sun, May 27, 2018 at 3:14 PM DAVID SMITH
> <davidrsm...@btinternet.com.invalid> wrote:
>
> > Mark
> > Thanks for the link, I have downloaded the code from Github, it will be a
> > good basis to start with.
> > Many thanksDave
> >
> >
> >    On Saturday, 26 May 2018, 20:22, Mark Payne <marka...@hotmail.com>
> > wrote:
> >
> >
> >  Hi Dave,
> >
> > I do have a branch in Github with the work that I had done:
> > https://github.com/apache/nifi/tree/NIFI-818
> > To be perfectly honest, though, I have absolutely no idea what state the
> > code is in, if it's been tested, etc.
> > But you're welcome to take it and run with it, if you'd like.
> >
> > Thanks
> > -Mark
> >
> >
> > On May 26, 2018, at 12:05 PM, davidrsmith <davidrsm...@btinternet.com
> > .INVALID<mailto:davidrsm...@btinternet.com.INVALID>> wrote:
> >
> > Hi
> >
> > A team at work has a need to interface with accumulo, has anyone tried
> > this, I know a while ago Mark Payne raised nifi jira ticket 818 but as
> far
> > as I am aware this was never completed.
> > I would be grateful if anyone can help or point me in the direction of
> > Mark's code that will give us a start.
> >
> > Many thanks
> > Dave
> >
> >
> >
> >
> > Sent from Samsung tablet
> >
> >
> >
>
>
>


   

Re: Accumulo processors

2018-05-27 Thread DAVID SMITH
Mike
Thanks for the suggestion, I will certainly have a look at the HBase code;
I'm not really familiar with Accumulo, and I am currently reading the docs on
the Apache web site.
Dave
 

On Sunday, 27 May 2018, 21:25, Mike Thomsen <mikerthom...@gmail.com> wrote:
 

 Dave,

I don't know how far along Mark's code is, but you might find some useful
code you can borrow from the HBase commit(s) that added visibility label
support. For example, there is code for handling visibility labels w/
PutHBaseRecord that might be reusable if you want to create a put record
processor for Accumulo. It won't help you with the Accumulo APIs, but it
could provide useful strategies for how to identify and assign labels from
user input.

On Sun, May 27, 2018 at 3:14 PM DAVID SMITH
<davidrsm...@btinternet.com.invalid> wrote:

> Mark
> Thanks for the link, I have downloaded the code from Github, it will be a
> good basis to start with.
> Many thanksDave
>
>
>    On Saturday, 26 May 2018, 20:22, Mark Payne <marka...@hotmail.com>
> wrote:
>
>
>  Hi Dave,
>
> I do have a branch in Github with the work that I had done:
> https://github.com/apache/nifi/tree/NIFI-818
> To be perfectly honest, though, I have absolutely no idea what state the
> code is in, if it's been tested, etc.
> But you're welcome to take it and run with it, if you'd like.
>
> Thanks
> -Mark
>
>
> On May 26, 2018, at 12:05 PM, davidrsmith <davidrsm...@btinternet.com
> .INVALID<mailto:davidrsm...@btinternet.com.INVALID>> wrote:
>
> Hi
>
> A team at work has a need to interface with accumulo, has anyone tried
> this, I know a while ago Mark Payne raised nifi jira ticket 818 but as far
> as I am aware this was never completed.
> I would be grateful if anyone can help or point me in the direction of
> Mark's code that will give us a start.
>
> Many thanks
> Dave
>
>
>
>
> Sent from Samsung tablet
>
>
>


   

Re: Accumulo processors

2018-05-27 Thread DAVID SMITH
Mark
Thanks for the link, I have downloaded the code from GitHub; it will be a
good basis to start with.
Many thanks, Dave
 

On Saturday, 26 May 2018, 20:22, Mark Payne  wrote:
 

 Hi Dave,

I do have a branch in Github with the work that I had done: 
https://github.com/apache/nifi/tree/NIFI-818
To be perfectly honest, though, I have absolutely no idea what state the code 
is in, if it's been tested, etc.
But you're welcome to take it and run with it, if you'd like.

Thanks
-Mark


On May 26, 2018, at 12:05 PM, davidrsmith 
> 
wrote:

Hi

A team at work has a need to interface with accumulo, has anyone tried this, I 
know a while ago Mark Payne raised nifi jira ticket 818 but as far as I am 
aware this was never completed.
I would be grateful if anyone can help or point me in the direction of Mark's 
code that will give us a start.

Many thanks
Dave




Sent from Samsung tablet


   

Help & advice needed on application.js in Processors

2018-01-30 Thread DAVID SMITH
Hi
I am trying to create a processor that is partially based on the
UpdateAttribute processor 0.7.3. I have cloned the UpdateAttribute source,
renamed the processor, and started by trying to amend the Advanced
configuration UI. I have found that I can change labels for fields in the
Advanced configuration UI in WEB-INF/jsp/worksheet.jsp, and these are
reflected in the Advanced UI after a reload.
However, any changes that I make in webapp/js/application.js, such as
changing the label on the Rule Filter button, never get picked up and
displayed in the UI. I have unpacked the war file and the application.js
looks exactly as I have edited it.
When I build the new processor nar I am using mvn clean install; I am seeing
no errors, and when NiFi loads there are no errors or warnings. Is there a
developers' guide for creating this type of UI in processors, or can someone
tell me why my changes are not being picked up?
Many thanks, Dave


Re: Setting up an SSL context in Minifi 0.3.0

2018-01-21 Thread DAVID SMITH
Aldrin
I have created MINIFI-429; as my system is segregated, I have typed in the
error message I am seeing and the pertinent parts of the config.yml file. If
any more information is required please let me know.
Many thanks, Dave

On Thursday, 18 January 2018, 16:32, Aldrin Piri <aldrinp...@gmail.com> 
wrote:
 

 Hi David,

Had a couple of fixes [1][2] to remedy similar issues but would be
interested in seeing your configuration to see if we are missing
something.  Would you mind please opening up a JIRA (
https://issues.apache.org/jira/projects/MINIFI/issues) with as much of
template, config.yml, and logs as possible for your instance exhibiting
problems.  Please be mindful to strip out any sensitive information.

[1] https://issues.apache.org/jira/browse/MINIFI-408
[2] https://issues.apache.org/jira/browse/MINIFI-403

Thanks,
Aldrin

On Thu, Jan 18, 2018 at 10:56 AM, DAVID SMITH <davidrsm...@btinternet.com>
wrote:

> Hi
> I put up a post a little while ago, because I was having trouble getting
> an SSL context working in Minifi 0.2.0Aldrin advised to wait until 0.3.0
> came out as some dev work had been done in this area.
> I have trying it again today and still cannot get an SSL context to work
> in a ListenHTTP processor, I get an illegalStateException in the ListenHTTP
> processor and it never starts. It says the SSL Context service validated
> against  is invalid because Invalid Controller service is not a valid
> controller service or does not reference the correct type of controller
> service.
> The NifI (1.3) that I set up the flow on before templating it is on the
> same VM and using the same keystore, the flow works without any problem in
> NiFi. Iused the Minifi 0.3.0 toolkit to convert the template into the yml
> file and I have populated the controller service passwords as required.
> Strangely I also have a POSTHTTP in the same flow using the same SSL
> context and it loads and starts without any problems.
> Can anyone help?
> Many thanksDave


   

Setting up an SSL context in Minifi 0.3.0

2018-01-18 Thread DAVID SMITH
Hi
I put up a post a little while ago because I was having trouble getting an
SSL context working in MiNiFi 0.2.0. Aldrin advised waiting until 0.3.0 came
out as some dev work had been done in this area.
I have tried it again today and still cannot get an SSL context to work in a
ListenHTTP processor; I get an IllegalStateException in the ListenHTTP
processor and it never starts. It says the SSL Context service validated
against  is invalid because Invalid Controller service is not a valid
controller service or does not reference the correct type of controller service.
The NiFi (1.3) that I set up the flow on before templating it is on the same
VM and using the same keystore, and the flow works without any problem in
NiFi. I used the MiNiFi 0.3.0 toolkit to convert the template into the yml
file and I have populated the controller service passwords as required.
Strangely, I also have a PostHTTP in the same flow using the same SSL context,
and it loads and starts without any problems.
Can anyone help?
Many thanks, Dave

Re: Syslog processing from cisco switches to Splunk

2017-10-19 Thread DAVID SMITH
Hi
An example message is:
<190>2155664: Oct 18 11:54:58: %SEC-6-IPACCESSLOGP: list inbound-to-zzz denied 
tcp 192.168.0.1(12345) -> 192.168.10.1(443), 1 packet
Many thanks, Dave
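One plausible reason ListenSyslog rejects these: Cisco IOS messages carry a sequence number and a bare timestamp rather than a strict RFC 3164 header, so a compliant parser may well call them invalid. A sketch (the pattern is mine, and untested against NiFi itself) that pulls the example message apart:

```python
import re

# Hedged pattern for the Cisco IOS syslog shape shown above:
# <pri>seq: timestamp: %FACILITY-SEVERITY-MNEMONIC: message body
CISCO_PATTERN = re.compile(
    r"<(?P<pri>\d+)>(?P<seq>\d+): (?P<stamp>[^%]+): %(?P<mnemonic>[\w-]+): (?P<body>.*)"
)

msg = ("<190>2155664: Oct 18 11:54:58: %SEC-6-IPACCESSLOGP: "
       "list inbound-to-zzz denied tcp 192.168.0.1(12345) -> "
       "192.168.10.1(443), 1 packet")
m = CISCO_PATTERN.match(msg)
print(m.group("pri"), m.group("mnemonic"))
```

If a parse like this succeeds where ListenSyslog fails, that points at the non-standard header rather than the transport as the culprit.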

On Thursday, 19 October 2017, 14:37, Bryan Bende <bbe...@gmail.com> wrote:
 

 If you can provide an example message we can try to see why
ListenSyslog says it is invalid.

I'm not sure that will solve the issue, but would give you something
else to try.

On Thu, Oct 19, 2017 at 8:38 AM, Andrew Psaltis
<psaltis.and...@gmail.com> wrote:
> Dave,
> To clarify you are using the PutUDP processor, not the PutSplunk processor?
>
> On Thu, Oct 19, 2017 at 7:31 AM, DAVID SMITH <davidrsm...@btinternet.com>
> wrote:
>
>> Hi
>> We are trying to do something which on the face of it seems fairly simple
>> but will not work.We have a cisco switch which is producing syslogs,
>> normally we use zoneranger to send them to Splunk and the records are
>> shown.However we want to do a bit of content routing, so we are using NiFi
>> 0.7.3 with a ListenUDP on port 514 and we can see the records coming in to
>> NiFi. Without doing anything to the records we use a putUDP to send records
>> to the Splunk server, NiFi says they have sent successfully but they never
>> show in Splunk.We have used a listenUDP on another NiFi and the records
>> transfer and look exactly the same as they were sent.We have also used
>> listenSyslog and putSyslog, but the listenSyslog says the records are
>> invalid.
>> Has anyone ever to do this, and can you give us any guidance on what we
>> may be missing?
>> Many thanksDave
>
>
>
>
> --
> Thanks,
> Andrew


   

Syslog processing from cisco switches to Splunk

2017-10-19 Thread DAVID SMITH
Hi
We are trying to do something which on the face of it seems fairly simple but
will not work. We have a Cisco switch which is producing syslogs; normally we
use ZoneRanger to send them to Splunk and the records are shown. However, we
want to do a bit of content routing, so we are using NiFi 0.7.3 with a
ListenUDP on port 514, and we can see the records coming in to NiFi. Without
doing anything to the records we use a PutUDP to send them to the Splunk
server; NiFi says they have been sent successfully but they never show in
Splunk. We have used a ListenUDP on another NiFi and the records transfer and
look exactly the same as they were sent. We have also used ListenSyslog and
PutSyslog, but the ListenSyslog says the records are invalid.
Has anyone ever tried to do this, and can you give us any guidance on what we
may be missing?
Many thanks, Dave

Help with Flowfile Content to Attribute

2017-06-06 Thread DAVID SMITH
Hi
I have a scenario where a flowfile comes in and I want to email the contents
to a recipient. I am currently using the PutSmtp processor, which works fine
except that I have to email the flowfile as an attachment, so when the
recipient gets the email they have to open an attachment. It would be nice to
allow them to read it as a normal email body. The PutSmtp processor has a
property descriptor which will allow a flowfile attribute to be used as the
body of the email.
So what I would like to do is: if the flowfile is less than a certain size
(for argument's sake 2.5K), make the flowfile content into an attribute to
use in that property descriptor; otherwise, I am happy to send it as an
attachment.
Does anyone have any ideas on how, after I have routed on file size, I can
make my flowfile content into an attribute, preferably using standard
processors? I am using NiFi 0.7.1.
Many thanks, Dave
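A sketch of the routing decision being asked about, in plain Python (the threshold value, attribute names, and function are all hypothetical). In NiFi terms, I believe the small-file branch is usually handled by ExtractText capturing the whole content into an attribute with a pattern like (?s)(^.*$):

```python
SIZE_THRESHOLD = 2560  # ~2.5K, the cut-off suggested in the question

def content_to_attribute(content, attributes, threshold=SIZE_THRESHOLD):
    """If the content is small enough, copy it into an attribute so a
    downstream mail processor can use it as the message body; otherwise
    mark it for delivery as an attachment."""
    if len(content.encode("utf-8")) <= threshold:
        attributes["email.body"] = content  # hypothetical attribute name
        attributes["send.as"] = "body"
    else:
        attributes["send.as"] = "attachment"
    return attributes

attrs = content_to_attribute("Short status report.", {})
print(attrs["send.as"])
```

Note that attributes live in heap memory, which is why the size check before promoting content to an attribute matters.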

Deconstructing SMTP /Mime emails

2017-03-31 Thread DAVID SMITH
Hi
I have a scenario where I use ListenSMTP to listen for emails, and then I use
ExtractEmailHeaders and ExtractEmailAttachments to deconstruct these emails
into their constituent parts; however, I don't seem to be able to get the
body text of the email as either an attribute or a flowfile. Can anyone help?
Is there a way of doing this with a standard processor, or do I need to write
a bespoke processor?
Many thanks, Dave


Help with back porting Email bundle from NiFi 1.1.1 to NiFi 0.7.2

2017-02-25 Thread DAVID SMITH
Hi
At work we are in need of a set of email/SMTP processors; our system
currently uses NiFi 0.7.2 on CentOS 7 and Java 8.
We can't upgrade the NiFi version at the present time. I found a set of
processors which will do the job we require in NiFi 1.1.1; I have tried
running the 1.1.1 nar bundle in 0.7.2 but I get errors stating that
ComponentLog is not found. I have taken the source code, changed the parent
nar in the poms to 0.7.1, and then changed ComponentLog to ProcessorLog in
the classes. When I then try to rebuild the code I get the following error:
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project nifi-email-processors: Compilation failure
[ERROR] bootstrap class path not set in conjunction with -source 1.7
[ERROR] 
/home/dave/workspace-luna/nifi-email-bundle/nifi-email-processors/src/main/java/org/apache/nifi/processors/email/ListenSMTP.java:[241,86]
 error: lambda expressions are not supported in -source 1.7
I have Java 8 set as my default within Eclipse; is there something in a
higher-level pom that is being acted on here?
Can anyone help with this please?
Many thanks, Dave


Help with ExtractText processor and Mime format files

2017-01-14 Thread DAVID SMITH
Hi
I have received some MIME format files and I want to extract certain parts of
them to use as attributes, such as the 'To' and 'From' fields.
I have tried using the ExtractText processor but I can never get the regex to
give me a match.
My questions are:
1) Am I using the best processor to decode MIME format messages?
2) If the ExtractText processor is the best way of decoding a MIME format
message, can someone give me an example of a regex which would give me either
the sender or the recipient?

The following is an example of a file I have received:
Date: Fri, 13 Jan 2017 10:26:12 -0400 (EDT)
From: David Smith <d...@home.example.com>
To: John Doe <j...@example.com>
Subject: =?iso-8859-1?Q?Test Email?=
Message-ID: <pine.msc.4.21.0001112233.8672-2345...@home.example.com>
MIME-Version: 1.0
Content-Type: MULTIPART/MIXED; BOUNDARY="-123456789-987654321-958746372=:6982"

  This message is in MIME format.  The first part should be plain text. 

---123456789-987654321-958746372=:6982
Content-Type: TEXT/PLAIN; charset=US-ASCII


Many thanks, Dave
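A guess at why the regexes never matched: without multiline mode, ^ only anchors at the very start of the file. A sketch using headers from the sample message above (ExtractText has an 'Enable Multiline Mode' property which, if I remember rightly, corresponds to the (?m) flag used here):

```python
import re

mime_text = """Date: Fri, 13 Jan 2017 10:26:12 -0400 (EDT)
From: David Smith <d...@home.example.com>
To: John Doe <j...@example.com>
Subject: Test Email
"""

# (?m) makes ^ match at the start of every line, so each header can be
# captured independently; the same patterns should work as ExtractText
# property values (one capture group per attribute).
sender = re.search(r"(?m)^From:\s*(.+)$", mime_text).group(1)
recipient = re.search(r"(?m)^To:\s*(.+)$", mime_text).group(1)
print(sender)
print(recipient)
```

This only handles the plain header block; decoding encoded-word subjects or multipart bodies would need a proper MIME parser rather than regexes.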



NiFi Elastic Clustering

2016-12-08 Thread DAVID SMITH
Hi
We are looking to set up some NiFi clusters; however, we would like them to
scale elastically without having to restart the NCM as we add or remove nodes.
From NiFi 0.7.1 onwards, is this possible, or is this a feature that is being
looked at, or should we be looking at our own solution?
Many thanks, Dave

ExecuteScript and flowfiles

2016-07-18 Thread DAVID SMITH
Hi

I have a question from a colleague who is using NiFi 0.5.1; he has some files
coming in which he needs to break up into 3 constituent parts.
He has a Python script which should do this, but what he wants to know is:
can he either

1) send all three parts down one relationship as three separate flowfiles
from his Python script, or
2) create three relationships from within his Python script, to send a
flowfile down each?

Does anyone have any examples of doing either option, or can advise how to do
this?

Many thanks
Dave

Sent from Yahoo! Mail on Android
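A sketch of option 1's splitting logic in plain Python (the delimiter is invented for the example). In ExecuteScript each returned part would become its own flowfile via session.create()/session.write(), all transferred to the single success relationship; as far as I know, adding extra relationships (option 2) needs InvokeScriptedProcessor rather than ExecuteScript, whose relationships are fixed.

```python
def split_into_parts(payload, delimiter="\n---\n"):
    """Split one incoming payload into its constituent parts; the
    caller (e.g. an ExecuteScript body) would wrap each part in its
    own flowfile and route them all down one relationship."""
    return payload.split(delimiter)

parts = split_into_parts("header\n---\nbody\n---\nfooter")
print(len(parts), parts[0])
```

The delimiter and payload here are placeholders; the real script would use whatever marks the three sections in the incoming files.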



Re: Problems with links on the Contributors Guide

2016-02-23 Thread DAVID SMITH
Pierre
I have loaded Formatter file as you suggested, and that works I have tried both 
links for the checkstyle config, one gives me a 404 error the other takes me to 
the NiFi pom file and highlights a area of code which is a maven checkstyle 
plugin, I tried saving it as an xml file and importing it but again I got  
'Document is invalid - No grammer found' error.
Dave 

On Tuesday, 23 February 2016, 15:12, Pierre Villard 
<pierre.villard...@gmail.com> wrote:
 

 Hi David,

Regarding the formatter file, it is supposed to be loaded in Preferences /
Java  / Code style / Formatter.

For the checkstyle configuration, if I remember correctly there are two
links pointing to the configuration file but only one is valid. Did you
manage to add the checkstyle conf?

Pierre

2016-02-23 16:02 GMT+01:00 DAVID SMITH <davidrsm...@btinternet.com>:

> Hi
> As I am looking to submit code I looked on the Contributors Guide at the
> Code Style section. I am using Eclipse Mars and I loaded the Checkstyle
> plugin. I downloaded the eclipse-formatter.xml from the link, but it will
> not load into the Check Configuration panel in Eclipse; I get the following
> error: Unable to parse configuration stream - Document is invalid: no
> grammar found.:12:10
> Has anyone seen this error before and managed to overcome it?
> Also under the Eclipse Users heading the 'checkstyle rules' hyperlink
> always returns error 404.
> Thanks
> Dave


  

Problems with links on the Contributors Guide

2016-02-23 Thread DAVID SMITH
Hi
As I am looking to submit code I looked on the Contributors Guide at the Code 
Style section. I am using Eclipse Mars and I loaded the Checkstyle plugin. I 
downloaded the eclipse-formatter.xml from the link, but it will not load into 
the Check Configuration panel in Eclipse; I get the following error: Unable to 
parse configuration stream - Document is invalid: no grammar found.:12:10
Has anyone seen this error before and managed to overcome it?
Also under the Eclipse Users heading the 'checkstyle rules' hyperlink always 
returns error 404.
Thanks
Dave

Extending the NiFi API

2016-02-23 Thread DAVID SMITH
Hi
At work we run many instances of NiFi and other associated data applications. 
With more instances of NiFi being planned, it is becoming impracticable to use 
the UIs to add and delete users or amend their roles. We have developed an 
in-house tool which controls these associated applications via their APIs. 
Could the NiFi API be extended to allow users to be added/removed, or their 
roles to be updated, thereby allowing us to centrally manage many instances at 
once?
Dave

Re: [GitHub] nifi pull request: NIFI-865 added initial support for AMQP publish...

2016-02-07 Thread DAVID SMITH
Thanks for the prompt reply.  

My processor set isn't QPID specific; as you say, it is the protocol, not the 
client provider, that dictates how the connection is made to AMQP. I used the 
Apache Qpid JMS client for much the same reasons as you chose RabbitMQ: it was 
the client I had come across before, and it has also been mandated at work that 
all new AMQP brokers will use the 0.10 protocol or later. Since RabbitMQ only 
works with protocols up to 0.9.1, I discounted it. I also found that AMQP 
protocol 0.10 uses a URL and a 'destination', and these two items hold all the 
information required to make the connection to the broker, whether that be 
with username/password or SSL. However, there are some legacy RabbitMQ brokers 
at work and I was asked to write a set of processors to work with these 
brokers. Initially I tried to do it all in one bundle, but it started to get a 
bit complex so I broke them out into separate bundles. I have looked at the 
source for your processor set and noticed that we have classes with the same 
names and classpaths, so I will have to modify mine slightly so that they 
don't clash. Again, I would value any comments when you get a chance to look 
through my code.
Many thanks
Dave
David
Great points, and in a way this confirms what I knew all along. AMQP 0.9 and 
1.0 might as well be given different names, since one is not really an 
evolution of the other but rather two different specs. So what I think we 
would need to do is work on the same set of processors for AMQP 1.0 as a 
separate bundle. You can name them the same way, since really soon the concept 
of bundle versioning will be coming to NiFi, so we won't be dependent on names 
as a unique identifier.
So if you can work on that, please do so and let's make it happen. 
Cheers
Oleg
Oleg
That sounds really good to me. I will try and get on with the tests asap; if 
you don't mind, I will have a look at how you have done yours. The Apache Qpid 
client library also includes an internal 'broker' which I found I can 
instantiate in my JUnit tests, and hopefully this may be a good way of testing 
the processors.
Dave 

On Saturday, 6 February 2016, 23:04, Oleg Zhurakousky 
<ozhurakou...@hortonworks.com> wrote:
 

 I hate spell checker. The last sentence in previous response should read 
compliant instead of compliment.

Sent from my iPhone

> On Feb 6, 2016, at 17:46, Oleg Zhurakousky <ozhurakou...@hortonworks.com> 
> wrote:
> 
> David
> 
> Thank you so much for reaching out.
> The reason why I am using RabbitMQ client library is because I am familiar 
> with it, but as you are aware AMQP is a protocol-based specification, so it 
> doesn't matter which client library is used as long as they are compliant with 
> the protocol version, and the current implementation is based on AMQP 0.9.1. 
> Also, you are mentioning QPID client libraries. Do you have an opinion which 
> one is better since as I mentioned I just went with the one I know?. As far 
> as QPID JMS, are you saying that you are using QPID JMS layer to make AMQP 
> look like JMS? If so in my experience layering JMS over AMQP while possible 
> brings a lot of limitations of JMS. In any event would be nice to hear your 
> thoughts on that.
> As far as your processor implementation. Sorry I didn’t have a chance to look 
> at them at the time of writing this response (will look later on), but do you 
> look at them as QPID specific (i.e., QPID vs RabbitMQ)? And if so what is in 
> them that is specific to QPID? The reason why I am asking (and you can see it 
> from discussion on JIRA) is that with this effort we are aiming for 
> processors that are compliment with specific protocol regardless of the 
> broker implementation used, so it must be neutral. 
> 
> Thanks for reaching out once again.
> Cheers
> Oleg
> 
>> On Feb 6, 2016, at 5:19 PM, DAVID SMITH <davidrsm...@btinternet.com> wrote:
>> 
>> Hi Guys
>> 
>> 
>> As you may remember I have developed some processors that publish/subscribe 
>> to AMQP brokers, but I was having problems writing Junit tests for these 
>> processors. I was interested to see that you have been working on NiFi Pull 
>> Request 865. I have looked at your code for these processors, we are both 
>> using different property descriptors to allow messages to be published and 
>> pulled. I also noticed that you are using RabbitMQ libraries to connect to 
>> the broker, whereas I connect to the AMQP broker using the QPID JMS 
>> libraries. I can still see a use for my processors and I would still be 
>> interested getting my processors uplo

Re: [GitHub] nifi pull request: NIFI-865 added initial support for AMQP publish...

2016-02-06 Thread DAVID SMITH
Hi Guys


As you may remember I have developed some processors that publish/subscribe to 
AMQP brokers, but I was having problems writing Junit tests for these 
processors. I was interested to see that you have been working on NiFi Pull 
Request 865. I have looked at your code for these processors, we are both using 
different property descriptors to allow messages to be published and pulled. I 
also noticed that you are using RabbitMQ libraries to connect to the broker, 
whereas I connect to the AMQP broker using the QPID JMS libraries. I can still 
see a use for my processors and I would still be interested in getting my 
processors uploaded to run alongside yours in a future release of NiFi. I have 
tidied up my code and pushed it back to GitHub:
https://github.com/helicopterman22/nifi_amqp_processors.git
I would appreciate your feedback.
Dave

 

On Wednesday, 3 February 2016, 2:05, asfgit  wrote:
 

 Github user asfgit closed the pull request at:

    https://github.com/apache/nifi/pull/200


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


  

Testing processor logging message in Junit tests

2016-01-09 Thread DAVID SMITH
Hi
Does anyone know how to test for logging messages generated by my processors in 
my JUnit tests for that processor? I am trying to determine, whilst testing my 
processor, whether I have entered an exception path where I would normally just 
log a warning or an error but not necessarily end up on the failure 
relationship.
Is there a good example of this anywhere within the current codebase?
Many thanks
Dave
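One stdlib way to approach this (nifi-mock's TestRunner keeps a mock component logger that records messages, if memory serves, so check what runner.getLogger() exposes in your version first) is to attach a capturing handler and assert on what was logged. A self-contained java.util.logging sketch of the idea, with all names invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogCaptureDemo {
    // Run some component logic while recording every WARNING-or-worse message,
    // so a test can assert the warning path was taken even though the flowfile
    // never reached a failure relationship.
    static List<String> runAndCaptureWarnings() {
        final List<String> warnings = new ArrayList<>();
        Logger logger = Logger.getLogger("my-processor");
        logger.setUseParentHandlers(false);
        logger.addHandler(new Handler() {
            @Override public void publish(LogRecord record) {
                if (record.getLevel().intValue() >= Level.WARNING.intValue()) {
                    warnings.add(record.getMessage());
                }
            }
            @Override public void flush() { }
            @Override public void close() { }
        });
        // Stand-in for the processor hitting its exception path:
        logger.warning("Failed to parse record; continuing without it");
        return warnings;
    }

    public static void main(String[] args) {
        System.out.println(runAndCaptureWarnings());
    }
}
```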

Re: XML Questions

2015-12-12 Thread DAVID SMITH
Joe

Thanks for the reply, that is pretty much what I thought about the flowfile 
attributes. I'll have to have a think about that one to see if I can work 
out a way around it.
With the merging, we will get multiple XML files which will be related by an 
attribute; I want to combine them into one XML file which I can validate 
before sending it on.
I will have a look at MergeContent to combine the XML files into one 
flowfile and then use an XSLT transformer to get rid of unwanted bits and put 
the whole lot into an XML format that I can validate.

Dave

XML Questions

2015-12-11 Thread DAVID SMITH
Hi

I have two questions which I think may require bespoke processors.

1)  I have extracted a set of values from XML tags and made them into flowfile 
attributes; how can I add these flowfile attributes to one or more flowfiles?

2)  I have multiple XML files which I need to merge into one XML file; can I do 
this with the current processor set?

Thanks
Dave
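For question 2, MergeContent can usually get most of the way there before any XSLT clean-up: with Binary Concatenation it simply joins the fragments, and (if memory serves) its Header, Footer, and Demarcator properties let you wrap the result in a root element so it parses as one document. A plain-string sketch of the resulting concatenation, with the <records> wrapper and the property mapping as assumptions to verify against your version:

```java
import java.util.List;

public class MergeDemo {
    // Plain-string model of MergeContent's Binary Concatenation output:
    // header + fragments joined by demarcator + footer.
    static String merge(List<String> fragments) {
        String header = "<records>\n";   // assumed "Header" property
        String demarcator = "\n";        // assumed "Demarcator" property
        String footer = "\n</records>";  // assumed "Footer" property
        return header + String.join(demarcator, fragments) + footer;
    }

    public static void main(String[] args) {
        System.out.println(merge(List.of("<a>1</a>", "<a>2</a>")));
    }
}
```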



[DISCUSS] Job Code entry feature for graph changes

2015-09-17 Thread DAVID SMITH
Hi 
I was doing a demonstration of NiFi to a new audience at work yesterday, the 
audience were most impressed and could see many uses, however two major 
questions arose. Audit and Accounting is a major thing where I work, and we 
would like to track who made changes and why.  
I think I have seen one of these questions as a discussion item on the forum 
but I can't remember the outcome.
The questions are:
1)     Is there going to be an undo feature in future releases?
2)     When a graph is made operational and a change needs to be made, would it 
be possible to have some sort of system that requires a work code or reason 
phrase to be entered before the updated graph is saved? Would this be feasible, 
and could a reporting task be created to publish the change information (if 
any), perhaps on a monthly basis?

Dave 

Re: Advanced UI on a Processor

2015-09-13 Thread DAVID SMITH
Matt
Thanks for that, it worked a treat; now I can work out how to make my own 
Advanced UI.
Dave 


 On Saturday, 12 September 2015, 22:20, Matt Gilman 
<matt.c.gil...@gmail.com> wrote:
   

 Dave,

Under META-INF in the war you should see a file that indicates which processor 
type that custom UI is for. Make sure that has your processors type specified. 
Let me know if that helps.

Matt

Sent from my iPhone

> On Sep 12, 2015, at 3:59 PM, DAVID SMITH <davidrsm...@btinternet.com> wrote:
> 
> Hi 
> I have to create a processor that dedupes some values held in a flowfile 
> attribute against a set of 'criteria' that may vary over time. I then need to 
> update/create a new attribute based on the algorithm and then route on this 
> attribute. I have a prototype of this processor working, but I currently input 
> the criteria as properties, and because of the fluidity of the criteria this 
> is impracticable to maintain. I thought I could use an Advanced tab such as 
> that used in the UpdateAttribute processor; this would allow me to input as 
> many criteria as I wish, hold them in some sort of set, and then use them 
> within my processor algorithm without having to rebuild the processor. So I 
> would be looking at changing 'rule' to criteria, dispensing with the 
> 'conditions' area, and changing the actions area columns to Start and Stop.
> To try this out I have copied the UpdateAttribute sub-project; when I rename 
> the processor within the project and build it, the 'Advanced' button is no 
> longer visible in the processor properties tab. How do I ensure the 
> 'Advanced' button appears on my processor and the Advanced pane appears?
> Many thanks
> Dave
> 

  

Re: Update Attribute Advanced rules not persisting

2015-08-24 Thread DAVID SMITH
Matt
Thanks for replying. I checked the flow.xml.gz and, as you suggested, the rules 
were indeed stored in the annotationData element. I tried restarting NiFi and 
it wouldn't start correctly, stating that it couldn't find bootstrap files. I 
therefore decided to do a re-install; this new installation booted fine, and 
when I retried the test I was running with UpdateAttribute it again worked as 
it should.
Dave 


 On Sunday, 23 August 2015, 23:04, Matt Gilman matt.c.gil...@gmail.com 
wrote:
   

 David,
We are not aware of any issues using UpdateAttribute. The advanced rules are 
actually saved as part of your flow.xml.gz. Could you check the contents of the 
flow.xml.gz for that particular processor (look for it by ID) and verify if the 
annotationData element is populated? Or could you possibly provide steps to 
reproduce the issue you're seeing? Thanks!

Matt

On Sun, Aug 23, 2015 at 4:20 PM, DAVID SMITH davidrsm...@btinternet.com wrote:

Hi
I have an instance of nifi-0.2.1 running on CentOS 6.6. I have used an 
UpdateAttribute processor to set some rules; I can save the rules and the 
processor runs as expected. However, if I go back into the processor 
configuration, the rules have not persisted. Do you know about this?
Dave



  

Re: Writing to a flowfile

2015-08-16 Thread DAVID SMITH
Mark
Thanks very much for all of your help; that works really well. I have also 
taken on board your other comments and implemented them on my home version. I 
will use it all at work tomorrow.
As you may have seen in a post I made in July, I have taken the Put/Get JMS 
processors and made a modified version for use with an AMQP broker. They 
appear to work well, and my boss (John Thorp) would like me to contribute them 
back to org.apache.nifi.
Before I can do that I need to write some JUnit tests, but I have no idea how 
to mock an AMQP broker/queue. To contribute the code for consideration, do I 
need to create my own branch in the development code, insert my code and then 
push it back up? Currently my code is on GitHub (link in July posts).
Thanks again for your help
Dave 


 On Saturday, 15 August 2015, 22:39, Mark Payne marka...@hotmail.com 
wrote:
   

 Dave,

Not a problem.

The FlowFile object itself is immutable. If you want to modify the FlowFile, 
you do so by asking 
the session to give you a new version of the FlowFile with some update. For 
instance, by adding 
an attribute or changing the content of the FlowFile.

So any call to session.putAttribute or session.write returns a new FlowFile. If 
you update
the line that calls putAttribute so that it stores the returned FlowFile into 
your 'parsed' variable,
you should be good to go.

So you would do:

FlowFile parsed = session.create(original);
parsed = session.putAttribute(parsed, CoreAttributes.FILENAME.key(), 
context.getProperty(PARSED_FILENAME).getValue());

Otherwise, you end up trying to modify the same version twice (once when you 
call session.putAttribute and 
again when you call session.write). This is what the message is complaining 
about.

Just looking through the code, a few other comments that I would offer:

* the static boolean error = false; is likely to cause problems. All 
instances of your processor would get the same 'error' variable. 
I would recommend you use an org.apache.nifi.util.BooleanHolder object (defined 
in the nifi-utils module) and define 
it within your onTrigger method, rather than using a member variable.

* Experience has shown that with any log message, you should log the FlowFIle 
that you are referring to. You can
also parameterize your log messages. For example:

logger.error("Failed to parse {}; routing to failure", new Object[] {original});

rather than

logger.error("parsing to failure");

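To illustrate the shared-state problem with a minimal, self-contained sketch (class and method names invented): because the field is static, every instance of the processor reads and writes the same flag, so an error seen by one instance leaks into all the others. A holder created locally inside onTrigger avoids this, because each invocation gets its own.

```java
public class StaticFieldDemo {
    // The problematic pattern: one flag shared by every instance.
    static class SharedFlagProcessor {
        static boolean error = false;
        void onTrigger(boolean failed) {
            if (failed) {
                error = true; // flips the flag for ALL instances
            }
        }
    }

    public static void main(String[] args) {
        SharedFlagProcessor a = new SharedFlagProcessor();
        SharedFlagProcessor b = new SharedFlagProcessor();
        a.onTrigger(true);
        // 'b' never failed, yet it observes the error set by 'a':
        System.out.println("flag seen by b: " + SharedFlagProcessor.error);
    }
}
```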

I hope this helps! Let us know if you're still having problems!

Thanks
-Mark


 Date: Sat, 15 Aug 2015 19:41:00 + 
 From: davidrsm...@btinternet.com 
 To: dev@nifi.apache.org 
 Subject: Re: Writing to a flowfile 
 
 Mark 
 
 Thanks for your help. I have used the snippet of code you sent and it 
 works although I am fairly sure I haven't implemented it correctly, I 
 have had to put all of my code in the OnTrigger method, instead of in 
 the the callback. 
 I also need to change the filename attribute of the parsed flowfile, I 
 have inserted the following line: 
 
 session.putAttribute(parsed, CoreAttributes.FILENAME.key(), 
 context.getProperty(PARSED_FILENAME).getValue()); 
 
 But it gives me the following error: 
 2015-08-15 21:28:55,628 ERROR [Timer-Driven Process Thread-5] 
 o.a.nifi.processors.standard.ParseMyData 
 ParseMyData[id=63ef3e50-cf02-4a2f-b8f7-1415b39e521b] 
 ParseMyData[id=63ef3e50-cf02-4a2f-b8f7-1415b39e521b] failed to process 
 due to org.apache.nifi.processor.exception.FlowFileHandlingException: 
 StandardFlowFileRecord[uuid=6912ef5c-4bbe-414f-9a97-39c584b7a284,claim=,offset=0,name=testFile2.txt,size=0]
  
 is not the most recent version of this FlowFile within this session 
 (StandardProcessSession[id=21562]); rolling back session: 
 org.apache.nifi.processor.exception.FlowFileHandlingException: 
 StandardFlowFileRecord[uuid=6912ef5c-4bbe-414f-9a97-39c584b7a284,claim=,offset=0,name=testFile2.txt,size=0]
  
 is not the most recent version of this FlowFile within this session 
 (StandardProcessSession[id=21562]) 
 
 
 I have attached my processor class, I would be grateful if you could 
 give it a quick look and tell me what I have done wrong. 
 
 Many thanks 
 Dave 
 
 
 
 On Saturday, 15 August 2015, 13:16, Mark Payne marka...@hotmail.com wrote: 
 
 
 David, 
 
 In this case, since you want to keep the original intact, you will need 
 to create a 'child' flowfile to write to. 
 You do this with ProcessSession.create(FlowFile) 
 
 So you will have code that looks something like this: 
 
 final FlowFile original = session.get(); 
 if (original == null) { 
 return; 
 } 
 
 // create a new 'child' FlowFile. The framework will automatically handle 
 // the provenance information so that 'parsed' is forked from 'original'. 
 FlowFile parsed = session.create(original); 
 
 // Get an OutputStream for the 'parsed' FlowFile 
 parsed = session.write(parsed, new OutputStreamCallback() { 
 public void process(OutputStream parsedOut) { 
 
 // Get an InputStream 

Re: Instantiating a Controller Service in a Junit test

2015-08-10 Thread DAVID SMITH
Mark
Thanks for the information, it works a treat.
Dave
 


 On Monday, 10 August 2015, 1:12, Mark Payne marka...@hotmail.com wrote:
   

 David,

Yes, you'll also need to set the controller service in your processor. Sorry, I 
forgot to mention that.

So after the call to runner.enableControllerService(), and before the call to 
runner.run(), you would do:

runner.setProperty(CacheTester.CACHE_SERVICE, "my-cache");

This way, the CacheTester processor knows to reference that controller service. 
So your method will look like:

@Test
public void checkCache() throws InitializationException, IOException {
    final TestRunner runner = TestRunners.newTestRunner(CacheTester.class);
    final StandardCacheService cacheService = new StandardCacheService();
    runner.addControllerService("my-cache", cacheService);
    runner.setProperty(cacheService, StandardCacheService.DATAFILE,
        "/data/TEST_FILE");
    runner.setProperty(CacheTester.CACHE_SERVICE, "my-cache");
    runner.enableControllerService(cacheService);
    runner.run();
}

Thanks
-Mark


 Date: Sun, 9 Aug 2015 21:06:18 +
 From: davidrsm...@btinternet.com
 To: dev@nifi.apache.org
 Subject: Re: Instantiating a Controller Service in a Junit test

 Mark
 Thanks for the reply, I have changed my test as you suggested, see below:
 @Test
 public void checkCache() throws InitializationException, IOException {
     final TestRunner runner = TestRunners.newTestRunner(CacheTester.class);
     final StandardCacheService cacheService = new StandardCacheService();
     runner.addControllerService("my-cache", cacheService);
     runner.setProperty(cacheService, StandardCacheService.DATAFILE,
         "/data/TEST_FILE");
     runner.enableControllerService(cacheService);
     runner.run();
 }


 When I run my test I now get a null pointer exception in my CacheTester 
 class. It appears the cache in my CacheTester class doesn't exist; when I 
 comment out all the calls to the cache methods, the test passes.
 If I understand the code above correctly, I don't believe I have set the 
 PropertyDescriptor in my CacheTester processor class, which is shown below. 
 Am I correct?
 public static final PropertyDescriptor CACHE_SERVICE = new PropertyDescriptor.Builder()
     .name("Cache Service")
     .description("The Controller Service to use in order to obtain a Cache Service")
     .required(false)
     .identifiesControllerService(CacheServiceAPI.class)
     .build();


 BTW, the 'former' I mentioned in my original post was referring to the 
 descriptions I had given about how to instantiate the Controller Service.
 Many thanks
 Dave


 On Sunday, 9 August 2015, 21:05, Mark Payne marka...@hotmail.com wrote:


 Hi David,

 You should be able to just import your StandardCacheService in your unit test.

 You can then instantiate the controller service and use 
 TestRunner.addControllerService, as you're doing here.
 At that point, to set the properties, you can use TestRunner.setProperty. For 
 example:

 final StandardCacheService cacheService = new StandardCacheService();
 runner.addControllerService("my-cache", cacheService);
 runner.setProperty(cacheService, StandardCacheService.DATAFILE, "/data/file");
 runner.enableControllerService(cacheService);

 There is no need to actually create the Logger and call initialize, as that 
 is handled for you when you call TestRunner.addControllerService.

 In your message, can you explain a bit further what you meant by
 If the former is correct how do I set the PropertyDescriptor as when I did 
 try this option the StandardCacheService.DATAFILE PropertyDescriptor was 
 never visible?

 It's important that you not mark the PropertyDescriptor as private, or else 
 you won't be able to access it, and you'll also want to ensure that
 it is returned by your getSupportedPropertyDescriptors() method. If I am 
 misunderstanding the comment, please advise.

 Let me know if this clears things up for you, or if you need any more details.

 If anything doesn't make sense, just give a shout - we're always happy to 
 help! :)

 Thanks
 -Mark


 
 Date: Sun, 9 Aug 2015 14:40:53 +
 From: davidrsm...@btinternet.com
 To: dev@nifi.apache.org
 Subject: Instantiating a Controller Service in a Junit test

 Hi
  I have written a simple Cache Controller Service. This Controller Service 
  has a property which, if populated, allows the cache to be populated when it 
  is initialized. I have also written a simple processor that allows me to 
  utilize the Controller Service and checks some of the preloaded values and 
  also some of the cache methods.
  I now want to write some JUnit tests for my processor, and I want to 
  instantiate my Cache Controller Service. I have looked at other JUnit test 
  classes in the nifi-0.2.1 source release for some guidance on how to do 
  this, looking particularly at the test classes for the DetectDuplicate 
  processor.
  I have imported the Controller Service API and based on what I saw in the