Thanks for the suggestion.
Hi Venu,
I believe it would only be possible if the PDF were base64 encoded
before DB insertion, and then in NiFi it would need to be base64 decoded.
Seems possible, but it adds a bit of computing overhead to the flow.
Best regards,
Endre
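For the encode side, a minimal sketch in Kotlin (the file name and column types are illustrative, not from this thread):

import java.io.File
import java.util.Base64

// Base64-encode the PDF bytes before the DB insert so the binary
// survives being stored in a text-typed column.
fun main() {
    val pdfBytes = File("invoice.pdf").readBytes()
    val encoded = Base64.getEncoder().encodeToString(pdfBytes)
    // Store `encoded` in a VARCHAR/CLOB column; on the NiFi side, a
    // Base64EncodeContent processor set to Decode mode restores the
    // original bytes.
    println("encoded length: ${encoded.length}")
}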
Something to keep in mind is that the master branch of the nifi-maven
plugin has recently been updated to include additional logic for
generating and including metadata in the resulting NAR.
This is a stepping stone to adding support for hosting NAR extension
bundles in NiFi Registry.
Franklin,
What kind of support do you mean? Maybe an "official" NiFi Gradle
plugin to build NARs, like the NAR MOJO plugin for Maven? What are the
concerns with using the third-party library (I heard it works fine)? I
also wrote my own inline script [1] (rather than a plugin) that you could
use in your build file.
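For anyone curious, a minimal sketch of such an inline task in Gradle's Kotlin DSL (this is not the script from [1]; it assumes the java plugin is applied and follows the standard NAR layout: the project jar and its runtime dependencies under META-INF/bundled-dependencies/, plus Nar-* manifest entries):

// build.gradle.kts
tasks.register<Jar>("nar") {
    archiveExtension.set("nar")

    // NiFi expects the bundled jars here when it unpacks the NAR.
    into("META-INF/bundled-dependencies") {
        from(tasks.jar)                       // the processor jar itself
        from(configurations.runtimeClasspath) // its runtime dependencies
    }

    manifest {
        attributes(
            mapOf(
                "Nar-Id" to project.name,
                "Nar-Group" to project.group.toString(),
                "Nar-Version" to project.version.toString()
            )
        )
        // A NAR that extends a service API would also need a
        // Nar-Dependency-Id entry pointing at the parent NAR.
    }
}

Running `gradle nar` drops the archive in build/libs/ alongside the regular jar.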
There is substantial work being led by Jeff Storck to migrate NiFi to Java 11.
You can see discussions about that on the mailing list [1] and in one of the
PRs [2].
[1]
https://lists.apache.org/thread.html/8af1752087480ac83bc01ea89d616316a125848b334624a3e2fce4d3@%3Cdev.nifi.apache.org%3E
Could you create a Jira for this new feature? I tried to see if there was
one already, and I couldn't find one.
On April 18, 2019 at 13:02:13, Franklin George (
franklin.geo...@protegrity.com.invalid) wrote:
Hi,
Can you please add support for building NAR files with Gradle too?
Currently, the only solution available is a third-party Gradle plugin,
https://github.com/sponiro/gradle-nar-plugin, for packaging custom
processors into NAR files.
Regards,
Franklin George
Thanks Peter.
We are trying to avoid using nvarchar going forward. However, for some
schemas it is implemented that way by consultants. If you can give me
the steps to debug, that would be helpful. Is it possible to get this
working by using some converters as part of the flow? I appreciate your help.
One other thing that seems to catch me every time I upgrade an old instance:
you will need to go in and allow users to read provenance data again. Somewhere
along the way (1.6?), provenance reading moved into a separate policy, and it
does not get assigned to anyone after an upgrade.
Hi,
I have just one data point on the version question, but I would suggest moving
to 1.9 if you're just starting out, and especially if you're using the
record-based processors with potentially dynamic/changing schemas.
The automatic schema inference described in this blog post [1] makes things
much easier (or possible at all).