Hi Vikas,
You can download example programs from the Facebook group linked below:
http://www.facebook.com/groups/416125741763625/
It contains some PPTs as well.
Regards,
Saravanan Nagarajan
On Wed, Jul 25, 2012 at 10:17 AM, minumichael wrote:
>
> Hi Vikas,
>
> You could also try out various example
Hi,
Check whether you have given the correct input file path. You could also try
other file types, or remove the .txt extension.
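A quick way to rule out a bad path is to check it with the FileSystem API
before submitting the job; a minimal sketch, where the class name is only a
placeholder and the path comes from the command line:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckInputPath {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]); // the path your job fails on
        FileSystem fs = input.getFileSystem(conf);
        if (fs.exists(input)) {
            System.out.println(input + " exists, size = "
                    + fs.getFileStatus(input).getLen() + " bytes");
        } else {
            System.out.println(input + " does not exist on " + fs.getUri());
        }
    }
}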
Hi Vikas,
You could also try out various examples, like finding the maximum temperature
from a given dataset:
006701199091950051507004...999N9+1+999...
004301199091950051512004...999N9+00221+999...
004301199091950051518004...999N9-00111+999...
00
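For reference, a minimal mapper sketch for that example, assuming full-length
NCDC records (the sample lines above are elided with "...", so the offsets
below refer to complete records: year at column 15, signed temperature at 87,
quality code at 92):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final int MISSING = 9999;

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String year = line.substring(15, 19);
        int airTemperature;
        if (line.charAt(87) == '+') {
            // parseInt doesn't accept a leading plus sign
            airTemperature = Integer.parseInt(line.substring(88, 92));
        } else {
            airTemperature = Integer.parseInt(line.substring(87, 92));
        }
        String quality = line.substring(92, 93);
        if (airTemperature != MISSING && quality.matches("[01459]")) {
            context.write(new Text(year), new IntWritable(airTemperature));
        }
    }
}

The matching reducer simply keeps the maximum of the values it receives for
each year.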
Hi Habeeb,
If the namenode is in safe mode, there is a command-line option to leave
safe mode. Could you try the following command:
hadoop dfsadmin -safemode leave
Thanks,
Jithin
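The namenode normally leaves safe mode on its own once enough blocks have
been reported, so forcing it out is best reserved for a namenode that is
genuinely stuck. The same check can also be done programmatically; a minimal
sketch, assuming the Hadoop 1.x HDFS API (the SafeModeAction enum lives
elsewhere in later releases):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class LeaveSafeMode {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // SAFEMODE_GET reports the current state without changing it
            boolean inSafeMode =
                    dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
            if (inSafeMode) {
                dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
            }
        }
    }
}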
Habeeb Raza wrote:
>
> Hi,
>
> I have set up a Hadoop cluster with three nodes. When I start the cluster,
> all daemons are running on the Master and the Slaves as well. But when
> monitoring from the UI (port 50030), it shows only one live node (i.e., the
> Master node). While checking the logs, I do s
Hi,
I wrote a simple program to gather some statistics about bigrams in some
data. I print the statistics to a custom file:
Path file = new Path(context.getConfiguration().get("mapred.output.dir")
        + "/bigram.txt");
FSDataOutputStream out =
        file.getFileSystem(context.getConfiguration()).create(file);
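One thing to watch with that snippet: every task attempt that runs it writes
to the same bigram.txt, so concurrent tasks can collide. A fuller sketch that
does the write once per task, in the reducer's cleanup(), with the task ID in
the file name (the statistic collected here is only illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class BigramStatsReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    private long totalBigrams = 0;

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        totalBigrams += sum;
        context.write(key, new IntWritable(sum));
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();
        // Suffix the file with the task ID so reducers don't clobber each other
        Path file = new Path(conf.get("mapred.output.dir")
                + "/bigram-" + context.getTaskAttemptID().getTaskID() + ".txt");
        FSDataOutputStream out = file.getFileSystem(conf).create(file);
        try {
            out.writeBytes("total bigrams: " + totalBigrams + "\n");
        } finally {
            out.close();
        }
    }
}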
Hi,
Do we not have any info on this? Joins must be such a common scenario for
most of the people on this list.
Thanks.
On 07/22/2012 10:22 PM, Abhinav M Kulkarni wrote:
Hi,
I was planning to use the DataJoin jar (located in
$HADOOP_INSTALL/contrib/datajoin) for a reduce-side join (version 1.0
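Until someone with DataJoin experience answers, a reduce-side join is easy
enough to hand-roll in plain MapReduce. A minimal sketch (not the DataJoin
API): the mapper tags each record with the name of its source file, and the
reducer buffers one side per key and emits the cross product. The file name
"left.csv" and the comma-separated record layout are assumptions for
illustration:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Mapper: key on the join field, tag the value with its source file name.
public class JoinMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String tag = ((FileSplit) context.getInputSplit()).getPath().getName();
        String[] fields = value.toString().split(",", 2);
        if (fields.length < 2) {
            return; // skip malformed records
        }
        context.write(new Text(fields[0]), new Text(tag + "\t" + fields[1]));
    }
}

// Reducer: split the tagged records back into two sides and join them.
class JoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        List<String> left = new ArrayList<String>();
        List<String> right = new ArrayList<String>();
        for (Text v : values) {
            String[] parts = v.toString().split("\t", 2);
            // "left.csv" is the assumed file name of one join side
            (parts[0].equals("left.csv") ? left : right).add(parts[1]);
        }
        for (String l : left) {
            for (String r : right) {
                context.write(key, new Text(l + "," + r));
            }
        }
    }
}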
Hi,
I found out that copying the files from the native folder to
/usr/local/hadoop/lib solved the problem, but the main issue then is
why Hadoop is not able to pick up the native libraries based on the
configured environment variables; i.e., why is java.library.path not
set from LD_LIBRARY_PATH?
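A quick way to confirm what the JVM actually sees is to print both values
from Java; a trivial diagnostic sketch:

public class LibPathCheck {
    public static void main(String[] args) {
        // What the JVM was started with; Hadoop's launch scripts set this
        // flag themselves rather than copying it from your login shell
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
        // What the shell exported; the JVM never folds this into
        // java.library.path on its own
        System.out.println("LD_LIBRARY_PATH   = "
                + System.getenv("LD_LIBRARY_PATH"));
    }
}

As far as I can tell, in the 1.x line the bin/hadoop script builds
java.library.path itself from the lib/native platform directory, which would
explain why an exported LD_LIBRARY_PATH alone never reaches the JVM.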
My suspicion is that fs.close() closes the FileSystem in the cache,
regardless of whether it is used by other processes as well at that
point (as opposed to a system where the cache keeps a count of users and
only closes it when the last user asks for a close). Can anyone confirm?
Although in p
In all my experience you let FileSystem instances close themselves.
On Tue, Jul 24, 2012 at 10:34 AM, Koert Kuipers wrote:
> Since FileSystem is a Closeable, I would expect code using it to be like
> this:
>
> FileSystem fs = path.getFileSystem(conf);
> try {
> // do something with fs, such as
Since FileSystem is a Closeable, I would expect code using it to be like
this:
FileSystem fs = path.getFileSystem(conf);
try {
    // do something with fs, such as read from the path
} finally {
    fs.close();
}
However I have repeatedly gotten into trouble with this approach. In one
situation it
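One way out, assuming your Hadoop version has FileSystem.newInstance (the
1.x line does), is to work with an uncached instance that is safe to close:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UncachedFsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]);
        // newInstance() bypasses the shared cache, so close() only
        // tears down this instance
        FileSystem fs = FileSystem.newInstance(path.toUri(), conf);
        try {
            System.out.println(path + " exists: " + fs.exists(path));
        } finally {
            fs.close();
        }
    }
}

Alternatively, if your version supports it, setting fs.hdfs.impl.disable.cache
to true turns the cache off for hdfs:// URIs entirely, at the cost of a fresh
instance per get().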
Hi Oleg,
From the job tracker page, you can get to the failed tasks and see
which file split was processed by each task. The split information
is available under the status column for each task.
The file split information is not available in the job history.
Regards,
Bejoy KS
On Tue, Jul 24, 20
Hadoop will not use or hold on to memory unless it's needed.
Push load onto the cluster and the stats will grow automatically.
On Tue, Jul 24, 2012 at 2:52 PM, Kamil Rogoń
wrote:
> Hello,
>
> Reading best practices on the Internet for selecting Hadoop hardware, I
> noticed they always call for a lot
Hello,
Reading best practices on the Internet for selecting Hadoop hardware,
I noticed they always call for a lot of RAM. In my Hadoop environment I
have 16GB of memory on every machine, but I am worried about how little
of it is used:
$ free -m
total used free shar
Hi, I got the following exception while running a Hadoop job:
java.io.EOFException: Unexpected end of input stream
    at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:99)
    at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:87)
    at org.apa
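That exception usually means one of the compressed input files is truncated.
A diagnostic sketch that drains each suspect file through its codec to find
the broken one, assuming the codec can be resolved from the file extension:

import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class FindTruncatedFile {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);
        for (String arg : args) {
            Path path = new Path(arg);
            CompressionCodec codec = factory.getCodec(path); // by extension
            if (codec == null) {
                System.out.println("SKIP (no codec): " + path);
                continue;
            }
            FileSystem fs = path.getFileSystem(conf);
            InputStream in = codec.createInputStream(fs.open(path));
            byte[] buf = new byte[64 * 1024];
            try {
                while (in.read(buf) != -1) { } // just drain the stream
                System.out.println("OK:     " + path);
            } catch (IOException e) {
                // An EOFException here means the compressed data is cut short
                System.out.println("BROKEN: " + path + " (" + e + ")");
            } finally {
                in.close();
            }
        }
    }
}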