Thanks a lot, Joe, for answering my query!
Sudhindra.
On 7/30/18, 3:47 PM, "Joe Witt" wrote:
Sudhindra
The current ListFile processor scans through the configured directory
including any subdirectories and looks for files. It does this by
generating a listing, comparing it to what it has seen already
(largely based on modification time), and then sending out the resulting listings.
These can be sent to a
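The listing-and-compare behavior Joe describes can be sketched as follows. This is a minimal Python illustration of the technique, not NiFi's actual implementation; the processor's persisted state is reduced here to a single mod-time watermark:

```python
import os

def list_new_files(directory, last_seen_mtime):
    """Recursively list files under `directory` whose modification time is
    newer than the last listing, and return the new watermark."""
    new_files = []
    newest = last_seen_mtime
    for root, _dirs, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            mtime = os.path.getmtime(path)
            if mtime > last_seen_mtime:  # not seen in a previous listing
                new_files.append(path)
                newest = max(newest, mtime)
    return new_files, newest
```

Each run would feed `newest` back in as `last_seen_mtime`, much as ListFile stores its listing state between executions.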
Thanks Bryan - That makes sense
-Tim
> On Jul 30, 2018, at 2:45 PM, Bryan Bende wrote:
>
> Tim,
>
> In the case where it is your own custom service api and service impl,
> and you know you are never going to have another implementation of the
> API, then it doesn't really matter and having the
Hi,
We just came across NiFi as a possible option for backing up our data lake
periodically into S3. We have our pipelines that dump batches of data at some
granularity. For example, our one-minute dumps are of the form “201807210617”,
“201807210618”, “201807210619” etc. We are looking for a
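As a side note on those dump names: strings like “201807210617” parse directly as minute-granularity timestamps, so selecting the dumps that fall inside a given backup window is straightforward. A hypothetical Python sketch; the function name and window semantics are mine, not part of any NiFi processor:

```python
from datetime import datetime

def dumps_in_window(dump_names, start, end):
    """Return dump directory names (format yyyyMMddHHmm) whose
    timestamp falls inside the half-open window [start, end)."""
    fmt = "%Y%m%d%H%M"
    return [name for name in dump_names
            if start <= datetime.strptime(name, fmt) < end]
```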
Hey Mike,
As long as it's a controller service PropertyDescriptor that uses
dynamicallyModifiesClasspath, check out the JMSConnectionFactoryProvider in
the nifi-jms-bundle.
-- Mike
On Sat, Jul 28, 2018 at 8:52 AM Mike Thomsen wrote:
> Is there a good example somewhere that shows how to use
>
Tim,
In the case where it is your own custom service api and service impl,
and you know you are never going to have another implementation of the
API, then it doesn't really matter and having them all in one NAR will
work.
The issue is that by bundling the implementation with the API, now
someone
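The separation Bryan describes can be pictured in miniature: the API is one artifact that multiple implementations depend on, without depending on each other. An illustrative Python sketch of the idea only; the Java/NAR classloader-isolation mechanics are not modeled here, and the class names are made up:

```python
from abc import ABC, abstractmethod

# "API module": only the contract lives here, analogous to the
# service-API NAR that every implementation NAR depends on.
class LookupService(ABC):
    @abstractmethod
    def lookup(self, key: str) -> str: ...

# Two "implementation modules" can now share the same API
# without bundling it or depending on each other.
class InMemoryLookupService(LookupService):
    def __init__(self, table):
        self._table = dict(table)

    def lookup(self, key):
        return self._table[key]

class UpperCaseLookupService(LookupService):
    def lookup(self, key):
        return key.upper()
```

If the API class were bundled inside one implementation instead, a second implementation would have to depend on the first just to see the interface, which is the problem the separate API NAR avoids.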
We’ve got a handful of custom NiFi controller services. The documentation
describes how to properly separate the API of a custom service from its
implementation using different NAR files. When we don’t do that, we see a
message that says the following:
org.apache.nifi.nar.ExtensionManager Contr
Mike,
For your Maven command in #1, those hive.* properties are for the Hive
1 NAR (e.g., PutHiveStreaming); you can override the hive3.* versions
of them if you want to set a particular version (they default to
Apache Hive 3.0.0).
For #2, I didn't see anything in the new Hive Streaming API that
Hi Matt,
#1
Thank you very much!!!
PutHive3Streaming works after compilation :)
BTW: compilation with options
mvn -T 2.0C clean install -Phortonworks -Dhive.version=3.1.0.3.0.0.0-1634
-Dhive.hadoop.version=3.1.0.3.0.0.0-1634 -Dhadoop.version=3.1.0.3.0.0.0-1634
-Pinclude-hive3 -DskipTests -e
thr
Yep, could do that too; the argument could just be something like:
java.arg.tmp=-Djava.io.tmpdir=/path/to/tmpdir
Regards,
Matt
On Fri, Jul 27, 2018 at 8:00 AM Otto Fowler wrote:
>
> Why not have the java tmp dir be configurable and make sure this doesn’t
> happen for any other possible nar?
> It
Mike,
That error usually indicates a Thrift version mismatch, which in this
case is pretty much expected since PutHiveStreaming uses version 1.2.1
and HDP 3.0 uses 3.0.0+. As of NiFi 1.7.0 you can add the
"-Pinclude-hive3" profile in your Maven build and it will add a full
set of Hive 3-compatible
Hi,
Of course.
I can even set up a separate NiFi server with only the PutHiveStreaming
processor, to narrow the logs down to just the important ones.
Is that OK with you?
Would you like me to switch on any additional debugging or anything?
Regards,
Mike
Fr
Hi Mike,
By any chance, could you share the full stack trace from nifi-app.log?
Thanks,
Pierre
2018-07-30 11:19 GMT+02:00 Michal Tomaszewski :
> Hello,
> Is PutHiveStreaming processor working with Hive 3.0?
> We installed Hortonworks HDP 3.0, compiled newest NiFi 1.8 snapshot and
> got errors:
Hello,
Is PutHiveStreaming processor working with Hive 3.0?
We installed Hortonworks HDP 3.0, compiled the newest NiFi 1.8 snapshot, and
got errors:
Failed connecting to Hive endpoint table: hivetest3 at thrift://servername.
NiFi has access to all the site.xml files. Configuration is exactly the same as with