Sandeep,
I don't understand your situation completely, but why not just use
bin/hadoop dfs -copyFromLocal?
-John
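For the Java side of Sandeep's question below, here is a minimal sketch of the copy loop, using plain java.io so it compiles without Hadoop on the classpath. The HDFS-specific calls (FileSystem.get, fs.create) are left in comments because they require hadoop-common, and every path shown is hypothetical:

```java
import java.io.*;

public class CopyToHdfsSketch {
    // Copies src to dst with a plain byte-stream loop. With Hadoop on the
    // classpath you would open the destination via the HDFS client instead:
    //   FileSystem fs = FileSystem.get(new Configuration());
    //   OutputStream out = fs.create(new Path("/user/sandeep/data.txt"));
    // (path is hypothetical)
    public static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static void copyFile(File src, File dst) throws IOException {
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            copy(in, out);
        }
    }
}
```

The same loop works unchanged once the OutputStream comes from the HDFS client, which is the whole point of the stream abstraction.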
On Wed, Jul 18, 2012 at 11:33 AM, Sandeep Reddy P <
sandeepreddy.3...@gmail.com> wrote:
> Hi,
> I'm trying to load data into hdfs from local linux file system using java
> code fr
Akash,
I forgot to mention that you will need to make sure the connector for your
Oracle DB is available to Hadoop. There are many ways to do this, but what
works for me is copying the connector JAR to /lib on every node in
my cluster.
I've written a similar program to do what you are asking about but using
Akash,
You can write a simple Java program that queries your Oracle DB and uses
whatever file-output class from java.io you like to write the
data to a file.
Compile the program and package it into a jar file.
Then run the program with 'hadoop jar'
in your Hadoop cluster.
The r
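A hedged sketch of the program John describes: the JDBC URL, credentials, and query below are placeholders (and the Oracle JDBC driver must be on the classpath at run time), but the row-to-CSV helper is plain Java:

```java
import java.io.*;
import java.sql.*;

public class OracleExportSketch {
    // Formats one result row as a CSV line, quoting fields that contain
    // commas or quotes. Pure helper, independent of any database.
    public static String toCsvRow(Object[] values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(',');
            String s = values[i] == null ? "" : values[i].toString();
            if (s.contains(",") || s.contains("\"")) {
                s = "\"" + s.replace("\"", "\"\"") + "\"";
            }
            sb.append(s);
        }
        return sb.toString();
    }

    // Streams a query's rows to a file. jdbcUrl, user, password, and
    // query are all placeholders supplied by the caller.
    public static void export(String jdbcUrl, String user, String password,
                              String query, File out) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(query);
             PrintWriter pw = new PrintWriter(new FileWriter(out))) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                Object[] row = new Object[cols];
                for (int c = 0; c < cols; c++) row[c] = rs.getObject(c + 1);
                pw.println(toCsvRow(row));
            }
        }
    }
}
```

Package this as a jar and launch it with 'hadoop jar' as described above, so it runs with the cluster's classpath and configuration available.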
: For reading the input split.
>
>
> Thanks
> Devaraj
>
>
> From: John Hancock [jhancock1...@gmail.com]
> Sent: Thursday, May 17, 2012 3:40 PM
> To: common-user@hadoop.apache.org
> Subject: custom FileInputFormat class
>
> All,
>
> Can anyo
All,
Can anyone on the list point me in the right direction as to how to write
my own FileInputFormat class?
Perhaps this is not even the way I should go, but my goal is to write a
MapReduce job that gets its input from a binary file of integers and longs.
-John
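Whatever InputFormat ends up wrapping it, the parsing step a custom RecordReader would perform can be sketched with DataInputStream alone (plain java.io, so it compiles without Hadoop; in a real RecordReader you would read from the split's FSDataInputStream instead, and the fixed (int, long) record layout here is only an assumed example):

```java
import java.io.*;
import java.util.*;

public class BinaryPairReader {
    // One record: a 4-byte int followed by an 8-byte long, both
    // big-endian (DataInput's byte order). This layout is an assumed
    // example, not necessarily the poster's actual file format.
    public static List<long[]> readAll(InputStream raw) throws IOException {
        List<long[]> records = new ArrayList<>();
        DataInputStream in = new DataInputStream(raw);
        while (true) {
            int key;
            try {
                key = in.readInt();
            } catch (EOFException eof) {
                break; // clean end of stream
            }
            long value = in.readLong();
            records.add(new long[] { key, value });
        }
        return records;
    }
}
```

Because records are fixed-width, split boundaries can be aligned to a multiple of the record size, which is the main thing a custom FileInputFormat for this data has to get right.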
Alex,
Give it parameters 1 1 and it will tell you pi is about 4.
I think what really helps get better precision is making the second
parameter larger, since that is the number of samples.
-John
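The sampling idea behind the example can be sketched in plain Java (a simple Random-based Monte Carlo, not Hadoop's actual quasi-random implementation): throw random points at the unit square, count the fraction that land inside the quarter circle, and multiply by 4; the estimate tightens as the sample count grows, which is why the second parameter matters.

```java
import java.util.Random;

public class PiEstimate {
    // Monte Carlo estimate of pi: the fraction of random points in the
    // unit square falling inside the quarter circle, times 4. More
    // samples give better precision.
    public static double estimate(long samples, long seed) {
        Random rnd = new Random(seed);
        long inside = 0;
        for (long i = 0; i < samples; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) inside++;
        }
        return 4.0 * inside / samples;
    }
}
```

With a single sample the estimate can only be 0 or 4, which matches John's "parameters 1 1" observation.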
On Tue, May 8, 2012 at 8:35 PM, Alex Paransky wrote:
> So, I installed Hadoop on my imac via port in