Ravi,

There is a pull request on GitHub based on Sumo's branch (from your #1
question) to add a Maven profile for building the nifi-hadoop-libraries-nar
with MapR JARs rather than Apache Hadoop JARs:

https://github.com/apache/nifi/pull/475
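For anyone following along, the approach in that PR amounts to a Maven profile that swaps the Hadoop dependencies for MapR's builds. A rough sketch only; the profile id, property name, and repository details here are illustrative assumptions, so see the PR itself for the real definition:

```xml
<!-- Illustrative sketch only: a profile pointing the Hadoop dependencies
     at MapR's artifacts and repository. Names/versions are assumptions. -->
<profile>
  <id>mapr</id>
  <properties>
    <hadoop.version>2.7.0-mapr-1602</hadoop.version>
  </properties>
  <repositories>
    <repository>
      <id>mapr-releases</id>
      <url>http://repository.mapr.com/maven/</url>
    </repository>
  </repositories>
</profile>
```

A profile like that would be activated with something along the lines of `mvn clean install -Pmapr`.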

Regards,
Matt

On Tue, Jun 14, 2016 at 1:36 PM, Ravi Papisetti (rpapiset) <
rpapi...@cisco.com> wrote:

> Hi,
>
> I have configured things the way mentioned in the e-mail below; still no luck :-(.
>
> I have two questions:
> 1.
> https://github.com/xmlking/mapr-nifi-hadoop-libraries-bundle/blob/master/mapr-client.md
> has a step to configure fs.defaultFS. Should I configure the cluster
> resource manager here? Or should I get the maprfs URL from the IT team
> that supports the cluster?
> 2. Matt: What is the MapR PR? Can you please elaborate?
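> (For reference, the fs.defaultFS step from question 1 configures something
> like the following in core-site.xml; this is a sketch from my reading of
> the doc, not a verified config, with maprfs:/// resolving the default
> cluster from the MapR client configuration:)
>
> ```xml
> <property>
>   <name>fs.defaultFS</name>
>   <value>maprfs:///</value>
> </property>
> ```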
>
> Appreciate all your responses.
>
>
> Thanks,
>
> Ravi Papisetti
>
> Technical Leader
>
> Services Technology Incubation Center
> <http://wwwin.cisco.com/CustAdv/ts/cstg/stic/>
>
> rpapi...@cisco.com
>
> Phone: +1 512 340 3377
>
>
>
> From: Matt Burgess <mattyb...@gmail.com>
> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
> Date: Monday, June 13, 2016 at 8:26 PM
>
> To: "users@nifi.apache.org" <users@nifi.apache.org>
> Subject: Re: Writing files to MapR File system using putHDFS
>
> Sumo,
>
> I'll try the MapR PR with your additional settings below. If they work,
> they'll need to be added to the doc (or, ideally, to the profile if possible).
> That's what I suspected had been missing, but I haven't had a chance to try
> it yet; will do that shortly :)
>
> Thanks,
> Matt
>
> On Jun 13, 2016, at 9:17 PM, Sumanth Chinthagunta <xmlk...@gmail.com>
> wrote:
>
>
> I had been using a custom-built nifi-hadoop-libraries-nar-0.6.1.nar that
> worked with MapR 4.02.
> Make sure you add java.security.auth.login.config and follow the MapR
> client setup on the NiFi server (
> https://github.com/xmlking/mapr-nifi-hadoop-libraries-bundle/blob/master/mapr-client.md)
>
>
> In $NIFI_HOME/conf/bootstrap.conf:
>
> java.arg.15=-Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf
> I just built the *NAR* with MapR 2.7.0-mapr-1602 libs. I haven't tested it
> with MapR 5.1, but you can try it and let us know.
>
> Hadoop bundle for NiFi v0.6.1 and MapR 2.7.0-mapr-1602
> https://github.com/xmlking/mapr-nifi-hadoop-libraries-bundle/releases
>
> -Sumo
>
>
> On Jun 13, 2016, at 5:57 PM, Bryan Bende <bbe...@gmail.com> wrote:
>
> I'm not sure if this would make a difference, but typically the
> configuration resources would be the full paths to core-site.xml and
> hdfs-site.xml. Wondering if using those instead of yarn-site.xml changes
> anything.
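> For example, something like this in the PutHDFS "Hadoop Configuration
> Resources" property (paths here are illustrative; adjust them to wherever
> your MapR client keeps its Hadoop conf directory):
>
> ```
> /opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/core-site.xml,/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
> ```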
>
> On Monday, June 13, 2016, Ravi Papisetti (rpapiset) <rpapi...@cisco.com>
> wrote:
>
>> Yes, Aldrin. I tried ListHDFS; it gets a similar error complaining that
>> the directory doesn't exist.
>>
>> NiFi – 0.6.1
>> MapR – 5.1
>>
>> NiFi is a local standalone instance. The target cluster has token-based
>> authentication enabled. I am able to execute "hadoop fs -ls <path>" from
>> the CLI on the node where NiFi is installed.
>>
>> Thanks,
>> Ravi Papisetti
>>
>> From: Aldrin Piri <aldrinp...@gmail.com>
>> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
>> Date: Monday, June 13, 2016 at 6:24 PM
>> To: "users@nifi.apache.org" <users@nifi.apache.org>
>> Subject: Re: Writing files to MapR File system using putHDFS
>>
>> Hi Ravi,
>>
>> Could you provide some additional details in terms of both your NiFi
>> environment and the MapR destination?
>>
>> Is your NiFi a single instance or clustered?  In the case of the latter,
>> is security established for your ZooKeeper ensemble?
>>
>> Is your target cluster Kerberized?  What version are you running?  Have
>> you attempted to use the List/GetHDFS processors?  Do they also have errors
>> in reading?
>>
>> Thanks!
>> --aldrin
>>
>> On Mon, Jun 13, 2016 at 5:19 PM, Ravi Papisetti (rpapiset) <
>> rpapi...@cisco.com> wrote:
>>
>>> Thanks Conrad for your reply.
>>>
>>> Yes, I have configured PutHDFS with "Remote Owner" and "Remote Group"
>>> set to the same values as on HDFS. Also, the NiFi service is started
>>> under the same user.
>>>
>>>
>>>
>>> Thanks,
>>> Ravi Papisetti
>>>
>>> From: Conrad Crampton <conrad.cramp...@secdata.com>
>>> Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
>>> Date: Monday, June 13, 2016 at 4:01 PM
>>> To: "users@nifi.apache.org" <users@nifi.apache.org>
>>> Subject: Re: Writing files to MapR File system using putHDFS
>>>
>>> Hi,
>>>
>>> Sounds like a permissions problem. Have you set the Remote Owner and
>>> Remote Groups settings in the processor appropriately for the HDFS
>>> permissions?
>>>
>>> Conrad
>>>
>>>
>>>
>>> *From: *"Ravi Papisetti (rpapiset)" <rpapi...@cisco.com>
>>> *Reply-To: *"users@nifi.apache.org" <users@nifi.apache.org>
>>> *Date: *Monday, 13 June 2016 at 21:25
>>> *To: *"users@nifi.apache.org" <users@nifi.apache.org>, "
>>> d...@nifi.apache.org" <d...@nifi.apache.org>
>>> *Subject: *Writing files to MapR File system using putHDFS
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> We just started exploring Apache NiFi for data onboarding into a MapR
>>> distribution. We have configured PutHDFS with the yarn-site.xml from the
>>> local MapR client (where the cluster information is provided), configured
>>> the "Directory" property with the MapR FS directory to write files to, and
>>> configured NiFi to run as a user that has permission to write to MapR FS.
>>> In spite of that, we are getting the error below while writing a file to
>>> the given file system path. I suspect NiFi is either not talking to the
>>> cluster or is talking as the wrong user. I would appreciate it if someone
>>> could guide me in troubleshooting this issue, or suggest solutions if we
>>> are doing something wrong.
>>>
>>> The NiFi workflow is very simple: GetFile is configured to read from the
>>> local file system, connected to a PutHDFS with the yarn-site.xml and
>>> directory information configured.
>>>
>>> 2016-06-13 15:14:36,305 INFO [Timer-Driven Process Thread-2]
>>> o.apache.nifi.processors.hadoop.PutHDFS
>>> PutHDFS[id=07abcfaa-fa8d-496b-81f0-b1b770672719] Kerberos relogin
>>> successful or ticket still valid
>>> 2016-06-13 15:14:36,324 ERROR [Timer-Driven Process Thread-2]
>>> o.apache.nifi.processors.hadoop.PutHDFS
>>> PutHDFS[id=07abcfaa-fa8d-496b-81f0-b1b770672719] Failed to write to HDFS
>>> due to java.io.IOException: /app/DataAnalyticsFramework/catalog/nifi could
>>> not be created: java.io.IOException:
>>> /app/DataAnalyticsFramework/catalog/nifi could not be created
>>> 2016-06-13 15:14:36,330 ERROR [Timer-Driven Process Thread-2]
>>> o.apache.nifi.processors.hadoop.PutHDFS
>>> java.io.IOException: /app/DataAnalyticsFramework/catalog/nifi could not
>>> be created
>>>         at
>>> org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:238)
>>> ~[nifi-hdfs-processors-0.6.1.jar:0.6.1]
>>>         at
>>> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>>> [nifi-api-0.6.1.jar:0.6.1]
>>>         at
>>> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1059)
>>> [nifi-framework-core-0.6.1.jar:0.6.1]
>>>         at
>>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136)
>>> [nifi-framework-core-0.6.1.jar:0.6.1]
>>>         at
>>> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
>>> [nifi-framework-core-0.6.1.jar:0.6.1]
>>>         at
>>> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:123)
>>> [nifi-framework-core-0.6.1.jar:0.6.1]
>>>         at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>> [na:1.7.0_101]
>>>         at
>>> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>>> [na:1.7.0_101]
>>>         at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>> [na:1.7.0_101]
>>>         at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>> [na:1.7.0_101]
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> [na:1.7.0_101]
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> [na:1.7.0_101]
>>>         at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101]
>>>
>>> Appreciate any help.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Ravi Papisetti
>>>
>>>
>>>
>>>
>>>
>>
>>
>
> --
> Sent from Gmail Mobile
>
>
>
