Madhu,
Ditch the '*' in the classpath element that has the configuration
directory. The directory ought to be on the classpath, not the files
AFAIK.
Try it and let us know if it then picks up the proper config (right now
it's using local mode).
On Wed, Jul 27, 2011 at 10:25 AM, madhu phatak
Hi Folks,
I have a bunch of binary files which I've stored in a SequenceFile.
The name of the file is the key, the data is the value, and I've stored
them sorted by key. (I'm not tied to using a SequenceFile for this.)
The current test data is only 50MB, but the real data will be 500MB -
1GB.
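For packing many small binary files into one SequenceFile keyed by file name, a minimal sketch could look like the following (class name and paths are mine, and it assumes a stock Hadoop Configuration; this is an illustration, not the poster's actual code):

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Pack small binary files into a single SequenceFile, keyed by
// file name, appending in sorted order (the writer preserves
// append order; it does not sort for you).
public class SeqFilePacker {
    public static void pack(String[] sortedNames, Path out) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, out, Text.class, BytesWritable.class);
        try {
            for (String name : sortedNames) {
                File f = new File(name);
                byte[] data = new byte[(int) f.length()];
                DataInputStream in = new DataInputStream(new FileInputStream(f));
                try {
                    in.readFully(data);
                } finally {
                    in.close();
                }
                writer.append(new Text(name), new BytesWritable(data));
            }
        } finally {
            writer.close();
        }
    }
}
```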
My
On 27/07/11 05:55, madhu phatak wrote:
Hi
I am submitting the job as follows
java -cp
Nectar-analytics-0.0.1-SNAPSHOT.jar:/home/hadoop/hadoop-for-nectar/hadoop-0.21.0/conf/*:$HADOOP_COMMON_HOME/lib/*:$HADOOP_COMMON_HOME/*
com.zinnia.nectar.regression.hadoop.primitive.jobs.SigmaJob
Thank you. I will have a look at it.
On Wed, Jul 27, 2011 at 3:28 PM, Steve Loughran ste...@apache.org wrote:
On 27/07/11 05:55, madhu phatak wrote:
Hi
I am submitting the job as follows
java -cp
Nectar-analytics-0.0.1-SNAPSHOT.jar:/home/hadoop/
It's a problem of multiple versions of the same jar.
On Thu, Jul 21, 2011 at 5:15 PM, Steve Loughran ste...@apache.org wrote:
On 20/07/11 07:16, Juwei Shi wrote:
Hi,
We hit a problem loading the logging class when starting the NameNode. It
seems that Hadoop cannot find commons-logging-*.jar
1. Any reason not to use a sequence file for this? Perhaps a mapfile?
Since I've sorted it, I don't need random access, but I do need
to be aware of the keys, as I need to be sure that all of the
relevant keys get sent to a given mapper.
MapFile *may* be better here (see my answer for 2)
Roger,
Or you can take a look at Hadoop's MultipleOutputs class.
Thanks.
Alejandro
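A hedged sketch of the MultipleOutputs route in the new (mapreduce) API — the named output "alt" and the Text/Text key-value types are my assumptions, not anything from the thread:

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Reducer that routes records to a named output "alt".
// In the driver, the named output must be registered once:
//   MultipleOutputs.addNamedOutput(job, "alt",
//       TextOutputFormat.class, Text.class, Text.class);
public class SplitReducer extends Reducer<Text, Text, Text, Text> {
    private MultipleOutputs<Text, Text> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<Text, Text>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text v : values) {
            mos.write("alt", key, v);   // goes to the "alt" named output
        }
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        mos.close();   // flush and close all named-output writers
    }
}
```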
On Tue, Jul 26, 2011 at 11:30 PM, Luca Pireddu pire...@crs4.it wrote:
On July 26, 2011 06:11:33 PM Roger Chen wrote:
Hi all,
I am attempting to implement MultipleOutputFormat to write data to
multiple
See (inline at ***)
Cheers,
A Df
From: Harsh J ha...@cloudera.com
To: common-user@hadoop.apache.org; A Df abbey_dragonfor...@yahoo.com
Sent: Tuesday, 26 July 2011, 21:29
Subject: Re: Cygwin not working with Hadoop and Eclipse Plugin
A Df,
On Wed, Jul 27, 2011
Hi All:
I have Hadoop 0.20.2 and I am using Cygwin on Windows 7. I modified the
files as shown below for the Hadoop configuration.
conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9100</value>
  </property>
</configuration>
Hi A Df,
Did you format the NameNode first?
Can you check the NN logs whether NN is started or not?
Regards,
Uma
Good afternoon,
while writing a MapReduce job, I need to get the value of some configuration
settings.
For instance, I need to get the value of dfs.write.packet.size inside the
reducer, so I write, using the context of the reducer:
Configuration
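The snippet above is cut off in the archive; a minimal sketch of reading a setting from the reducer's Context could look like this (the fallback value 65536 is only an assumed default, and the reducer types are my choice):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Read a site/job setting from inside a reducer via its Context.
public class PacketAwareReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
    private int packetSize;

    @Override
    protected void setup(Context context) {
        Configuration conf = context.getConfiguration();
        // 65536 is just a local fallback, not necessarily the
        // cluster's configured value.
        packetSize = conf.getInt("dfs.write.packet.size", 65536);
    }
}
```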
See inline at **. More questions and many Thanks :D
From: Uma Maheswara Rao G 72686 mahesw...@huawei.com
To: common-user@hadoop.apache.org; A Df abbey_dragonfor...@yahoo.com
Cc: common-user@hadoop.apache.org common-user@hadoop.apache.org
Sent: Wednesday, 27
3. Another idea might be to create separate seq files for chunks of
records and make them non-splittable, ensuring that each goes to a
single mapper. Assuming I can get away with this, do you see any
pros/cons with that approach?
Separate sequence files would require the least amount of custom code.
All
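If going the non-splittable route from item 3, the usual trick is to subclass the input format and override isSplitable() (a sketch; the class name is mine):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

// Force each sequence file to be processed whole by a single mapper
// by declaring every file non-splittable.
public class WholeFileSequenceInputFormat<K, V>
        extends SequenceFileInputFormat<K, V> {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }
}
```

Set it on the job with `job.setInputFormatClass(WholeFileSequenceInputFormat.class);`.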
When starting Hadoop on OS X I am getting this error. Is there a fix for it?
java[22373:1c03] Unable to load realm info from SCDynamicStore
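That SCDynamicStore message is a Kerberos-related JVM warning on OS X. A commonly reported workaround (it has worked for many setups, though I can't promise it applies to yours) is to pass empty krb5 realm/KDC properties via HADOOP_OPTS, e.g. in conf/hadoop-env.sh:

```shell
# hadoop-env.sh -- workaround for the OS X
# "Unable to load realm info from SCDynamicStore" warning.
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
```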
Hi Bobby,
I just want to ask if there is a way of using a reducer, or something like
concatenation, to glue my outputs from the mapper and output
them as a single file and segment of the predicted RNA 2D structure?
FYI: I have used a reducer NONE before:
HADOOP_HOME$ bin/hadoop jar
You could either use a custom RecordReader or you could override the
run() method on your Mapper class to do the merging before calling the
map() method.
-Joey
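A sketch of the run()-override idea above — the grouping rule (merge consecutive records with the same key) and the class name are hypothetical; the loop mirrors the structure of the default Mapper.run():

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Override run() to see consecutive records before map() is called,
// merging each run of identical keys into one value.
public class MergingMapper
        extends Mapper<Text, BytesWritable, Text, BytesWritable> {

    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        List<BytesWritable> buffer = new ArrayList<BytesWritable>();
        Text groupKey = null;
        while (context.nextKeyValue()) {
            Text key = context.getCurrentKey();
            if (groupKey != null && !groupKey.equals(key)) {
                map(groupKey, merge(buffer), context);  // flush previous group
                buffer.clear();
            }
            groupKey = new Text(key);
            // Copy the value: Hadoop reuses Writable instances between records.
            BytesWritable v = context.getCurrentValue();
            byte[] copy = new byte[v.getLength()];
            System.arraycopy(v.getBytes(), 0, copy, 0, v.getLength());
            buffer.add(new BytesWritable(copy));
        }
        if (groupKey != null) {
            map(groupKey, merge(buffer), context);      // flush final group
        }
        cleanup(context);
    }

    // Concatenate buffered values into one BytesWritable (illustrative only).
    static BytesWritable merge(List<BytesWritable> parts) {
        int total = 0;
        for (BytesWritable p : parts) total += p.getLength();
        byte[] out = new byte[total];
        int off = 0;
        for (BytesWritable p : parts) {
            System.arraycopy(p.getBytes(), 0, out, off, p.getLength());
            off += p.getLength();
        }
        return new BytesWritable(out);
    }
}
```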
On Wed, Jul 27, 2011 at 11:09 AM, Tom Melendez t...@supertom.com wrote:
3. Another idea might be create separate seq files for chunk
Hi Vighnesh,
Also, Cloudera has a decent screencast that walks you through building in
eclipse:
http://www.cloudera.com/blog/2009/04/configuring-eclipse-for-hadoop-development-a-screencast/
http://wiki.apache.org/hadoop/EclipseEnvironment
-Eric
-Original Message-
From: Uma
Just trying to understand what happens if there are 3 nodes with
replication set to 3 and one node fails. Do the writes fail too?
If there is a link I can look at, that would be great. I tried searching
but didn't find a definitive answer.
Thanks,
Mohit
Hello
I don't know if this question has been answered already. I am trying to
understand the overlap between FILE_BYTES_READ and HDFS_BYTES_READ. Which
components contribute to each counter? For example, when I see
FILE_BYTES_READ for a specific task (Map or Reduce), is it
Thank you, Harsh. I am able to run the jobs by ditching the '*'.
On Wed, Jul 27, 2011 at 11:41 AM, Harsh J ha...@cloudera.com wrote:
Hi All,
How can I determine whether a file is being written to (by any thread) in HDFS? I
have a continuous process on the master node, which is tracking a particular
folder in HDFS for files to process. On the slave nodes, I am creating files
in the same folder using the following code :
At the
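The code in the question above was truncated by the archive. As far as I know, the 0.20-era client API offers no direct "is this file still open for write?" call, so a common convention is for writers to create each file under a temporary name and rename it into place on close; the tracking process then only picks up final names. A sketch under that assumption (class and method names are mine):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// "Write to a temp name, rename when done" convention: the tracker
// ignores *.tmp files, and rename is atomic within an HDFS directory.
public class SafePublish {
    public static void publish(FileSystem fs, Path dir, String name, byte[] data)
            throws IOException {
        Path tmp = new Path(dir, name + ".tmp");
        Path fin = new Path(dir, name);
        FSDataOutputStream out = fs.create(tmp);
        try {
            out.write(data);
        } finally {
            out.close();
        }
        if (!fs.rename(tmp, fin)) {
            throw new IOException("rename failed: " + tmp + " -> " + fin);
        }
    }
}
```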