NVIDIA hardware is GPU-based math hardware: the chips in high-end 3D graphics
cards. CUDA is the name of NVIDIA's development environment. Your Hadoop job
wants to run code on an NVIDIA card.
I'm guessing 'jcuda' is a Java wrapper around the CUDA tools. What is your code
written in?
Adarsh Sharma wrote:
> But I still don't know why it fails in the Map-Reduce job.
> [hadoop@ws37-mah-lin hadoop-0.20.2]$ bin/hadoop jar wordcount1.jar
> org.myorg.WordCount /user/hadoop/gutenberg /user/hadoop/output1
> 11/02/28 15:01:45 INFO input.FileInputFormat: Total input paths to
> process : 3
>
Harsh J wrote:
You're facing a permissions issue with a device, not a Hadoop-related
issue. Find a way to let users access the required devices
(/dev/nvidiactl is what's reported in your ST, for starters).
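A possible fix along those lines (a sketch only, assuming the Hadoop task JVMs run as the 'hadoop' user and that the driver creates the usual /dev/nvidiactl and /dev/nvidia0 nodes on your CentOS box):

```shell
# Quick, non-persistent test: open the device nodes to all users.
sudo chmod 0666 /dev/nvidiactl /dev/nvidia0

# Persistent alternative: a udev rule so the nodes are created
# world-readable/writable on every boot (the rule file name is arbitrary).
echo 'KERNEL=="nvidia*", MODE="0666"' | sudo tee /etc/udev/rules.d/99-nvidia-perms.rules
```

If the device nodes are group-owned (for example by a 'video' group), adding the hadoop user to that group with `usermod -aG video hadoop` is a less permissive option than world-writable nodes.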
On Mon, Feb 28, 2011 at 12:05 PM, Adarsh Sharma
wrote:
Thanks Harsh, I am not much of a Linux expert, but I know a little bit.
Do I have to change something at the application level for *nvidiactl*, or
are there simple commands to make Hadoop use /dev/nvidiactl ( driver file
libcuddpp.so )?
Best Regards, Adarsh
Greetings to all,
Today I came across a strange problem with non-root users in Linux (CentOS).
I am able to compile & run a Java program properly through the commands below:
[root@cuda1 hadoop-0.20.2]# javac EnumDevices.java
[root@cuda1 hadoop-0.20.2]# java EnumDevices
Total number of devices
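For reference, an EnumDevices-style program built on the JCuda driver bindings looks roughly like the sketch below (a hypothetical reconstruction, assuming the jcuda jar and native libraries are installed; it needs an NVIDIA GPU and driver to actually run). The `cuInit` call is what opens /dev/nvidiactl, so it is the point where a non-root user without device permissions will see the failure.

```java
// Hypothetical sketch of a JCuda device-enumeration program.
import jcuda.driver.CUdevice;
import jcuda.driver.JCudaDriver;

public class EnumDevices {
    public static void main(String[] args) {
        // cuInit opens /dev/nvidiactl and /dev/nvidia* under the hood;
        // this is the call that fails with a permissions error for
        // non-root users who cannot read/write those device nodes.
        JCudaDriver.cuInit(0);

        int[] count = new int[1];
        JCudaDriver.cuDeviceGetCount(count);
        System.out.println("Total number of devices: " + count[0]);

        for (int i = 0; i < count[0]; i++) {
            CUdevice device = new CUdevice();
            JCudaDriver.cuDeviceGet(device, i);
            byte[] name = new byte[256];
            JCudaDriver.cuDeviceGetName(name, name.length, device);
            System.out.println("Device " + i + ": " + new String(name).trim());
        }
    }
}
```

Running this as root succeeds because root can always open the device nodes; the same class launched from a Hadoop task running as a non-root user hits the permissions problem described above.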