This sounds pretty cool, and I definitely think you're both on a good
track design-wise.

On Fri, Mar 18, 2016 at 2:04 PM, Uwe Geercken <uwe.geerc...@web.de> wrote:
> Adam,
>
> thanks again.
>
> I didn't know about the Contributor Guide - I was always looking at the docs
> inside NiFi, and those only reference the Developer Guide.
>
> I will try to make a good processor for Velocity first. The next step would
> then be to also include FreeMarker. I will try to keep that in mind during
> design and coding. I don't know anything about Markdown or AsciiDoc, so I
> will have to have a look first.
>
> Regards,
>
> Uwe
>
>
>
>> Sent: Friday, March 18, 2016 at 6:58 PM
>> From: "Adam Taft" <a...@adamtaft.com>
>> To: dev@nifi.apache.org
>> Subject: Re: Processor: User friendly vs system friendly design
>>
>> Uwe,
>>
>> The Developer Guide[1] and Contributor Guide[2] are pretty solid.  The
>> Developer Guide has a section dealing with reading & writing flowfile
>> attributes.  Please check these out, and then if you have any specific
>> questions, please feel free to reply.
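>>
>> To answer your question about accessing attributes from code: inside a
>> processor's onTrigger() it looks roughly like the sketch below.  This is
>> untested and trimmed down (property descriptors, getRelationships(), etc.
>> omitted, and the class name is just a placeholder), so treat it as a
>> starting point rather than a working processor.
>>
>>     import java.util.Map;
>>
>>     import org.apache.nifi.flowfile.FlowFile;
>>     import org.apache.nifi.processor.AbstractProcessor;
>>     import org.apache.nifi.processor.ProcessContext;
>>     import org.apache.nifi.processor.ProcessSession;
>>     import org.apache.nifi.processor.Relationship;
>>     import org.apache.nifi.processor.exception.ProcessException;
>>
>>     public class TemplateExampleProcessor extends AbstractProcessor {
>>
>>         // minimal relationship; a real processor would also return this
>>         // from getRelationships()
>>         static final Relationship REL_SUCCESS = new Relationship.Builder()
>>                 .name("success")
>>                 .build();
>>
>>         @Override
>>         public void onTrigger(ProcessContext context, ProcessSession session)
>>                 throws ProcessException {
>>             FlowFile flowFile = session.get();
>>             if (flowFile == null) {
>>                 return;
>>             }
>>
>>             // reading: a single attribute, or the whole attribute map
>>             String filename = flowFile.getAttribute("filename");
>>             Map<String, String> attributes = flowFile.getAttributes();
>>
>>             // writing: putAttribute returns an updated FlowFile reference
>>             flowFile = session.putAttribute(flowFile, "template.engine", "velocity");
>>
>>             session.transfer(flowFile, REL_SUCCESS);
>>         }
>>     }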
>>
>> For inclusion in NiFi directly, you'd want to create a NiFi Jira ticket
>> describing the new feature, and then fork the NiFi project on GitHub and
>> send a pull request referencing the ticket.  However, if you just want some
>> feedback on suitability and consideration for inclusion, using your own
>> personal GitHub project and sending a link would be fine.
>>
>> Having a template conversion processor would be a nice addition.  Making it
>> generic to support Velocity, FreeMarker, and others might be really nice.
>> Extra bonus points for Markdown or AsciiDoc transforms as well (but those
>> might be too separate a use case).
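>>
>> If you do go the generic route, one option (purely a sketch on my part,
>> the names below are made up and not an existing API) is to hide the
>> engines behind a small interface and pick the implementation from a
>> processor property:
>>
>>     import java.util.Map;
>>
>>     // Hypothetical abstraction: one implementation per engine
>>     // (Velocity, FreeMarker, ...), selected by a processor property.
>>     public interface TemplateRenderer {
>>
>>         // Render templateText using the given variables
>>         // (e.g. the flowfile attributes).
>>         String render(String templateText, Map<String, String> variables);
>>     }
>>
>> The Velocity and FreeMarker implementations would then sit behind that,
>> and Markdown/AsciiDoc could be bolted on later (or kept as separate
>> processors).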
>>
>> Hope this helps.
>>
>> Adam
>>
>> [1]  http://nifi.apache.org/developer-guide.html
>>
>> [2]  https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide
>>
>>
>>
>>
>> On Fri, Mar 18, 2016 at 1:36 PM, Uwe Geercken <uwe.geerc...@web.de> wrote:
>>
>> > Adam,
>> >
>> > Interesting, and I agree. That sounds very good.
>> >
>> > Can you give me a short tip on how to access attributes from code?
>> >
>> > Once I have something usable, or something for testing, where would I
>> > publish it? Just on my GitHub site? Or is there a place for sharing?
>> >
>> > Greetings
>> >
>> > Uwe
>> >
>> >
>> >
>> > Sent: Friday, March 18, 2016 at 6:03 PM
>> > From: "Adam Taft" <a...@adamtaft.com>
>> > To: dev@nifi.apache.org
>> > Subject: Re: Processor: User friendly vs system friendly design
>> > I'm probably on the far end of favoring composability and processor reuse.
>> > In this case, I would even go one step further and suggest that you're
>> > talking about three separate operations:
>> >
>> > 1. Split a multi-line CSV input file into individual single-line flowfiles.
>> > 2. Read columns from a single CSV line into flowfile attributes.
>> > 3. Pass flowfile attributes into the Velocity transform processor.
>> >
>> > The point here: have you considered driving your Velocity template
>> > transform using flowfile attributes as opposed to CSV? Flowfile attributes
>> > are NiFi's lowest common data representation, and many, many processors
>> > create attributes, which would enable your Velocity processor to be used
>> > with more than just CSV input.
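>> >
>> > To make that concrete, here's a rough, untested sketch of the core merge
>> > step with Velocity (class and method names are mine, not an existing
>> > NiFi API); every flowfile attribute becomes a template variable:
>> >
>> >     import java.io.StringWriter;
>> >     import java.util.Map;
>> >
>> >     import org.apache.velocity.VelocityContext;
>> >     import org.apache.velocity.app.VelocityEngine;
>> >
>> >     public class AttributeTemplateMerge {
>> >
>> >         // Merge a Velocity template with a flowfile's attribute map.
>> >         // Each attribute becomes a variable, e.g. ${filename}.
>> >         // Note: keys containing dots may need renaming, since Velocity
>> >         // treats "." as property navigation.
>> >         public static String merge(String templateText,
>> >                                    Map<String, String> attributes) {
>> >             VelocityEngine engine = new VelocityEngine();
>> >             engine.init();
>> >
>> >             VelocityContext velocityContext = new VelocityContext();
>> >             for (Map.Entry<String, String> entry : attributes.entrySet()) {
>> >                 velocityContext.put(entry.getKey(), entry.getValue());
>> >             }
>> >
>> >             StringWriter writer = new StringWriter();
>> >             engine.evaluate(velocityContext, writer, "attribute-template",
>> >                     templateText);
>> >             return writer.toString();
>> >         }
>> >     }
>> >
>> > In the processor itself, the rendered string would then be written to the
>> > flowfile content (session.write), and the template text could come from a
>> > processor property or a file.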
>> >
>> > Adam
>> >
>> >
>> >
>> > On Fri, Mar 18, 2016 at 11:06 AM, Uwe Geercken <uwe.geerc...@web.de>
>> > wrote:
>> >
>> > >
>> > > Hello,
>> > >
>> > > This is my first mailing here. I am a Java developer, using Apache
>> > > Velocity, Drill, Tomcat, Ant, Pentaho ETL, MongoDB, MySQL and more, and
>> > > I am very much a data guy.
>> > >
>> > > I have used NiFi for a while now and started coding my first processor
>> > > yesterday. I am basically doing it to widen my knowledge and learn
>> > > something new.
>> > >
>> > > I started with the idea of combining Apache Velocity - a template
>> > > engine - with NiFi. So in comes a CSV file, it gets merged with a
>> > > template containing formatting information and some placeholders (and
>> > > maybe some limited logic), and out comes a new set of data, formatted
>> > > differently. That separates the processing logic from the formatting.
>> > > One could create HTML, XML, JSON or other text-based formats from it.
>> > > Easy to use and very efficient.
>> > >
>> > > Now my question is: should I implement the logic so that the processor
>> > > handles a whole CSV file, which usually has multiple lines? That would
>> > > be good for the user, as he or she only has to deal with one processor
>> > > doing the work, but the logic would be more specialized.
>> > >
>> > > The other way around, I could code the processor to handle one row of
>> > > the CSV file, and the user would have to come up with a flow that
>> > > divides the CSV file into multiple flowfiles before my processor can be
>> > > used. That is not so specialized, but it requires more preparation work
>> > > from the user.
>> > >
>> > > I tend to go the second way, also because there is already a processor
>> > > that will split a file into multiple flowfiles. But I wanted to hear
>> > > your opinion on what is the best way to go. Do you have a
>> > > recommendation for me? (Maybe the answer is to do both?!)
>> > >
>> > > Thanks for sharing your thoughts.
>> > >
>> > > Uwe
>> > >
>> >
>>
