Hi Shi
Try out the following; it could get things working.
Use DistributedCache.getCacheFiles() instead of
DistributedCache.getLocalCacheFiles()
public void setup(JobConf job)
{
    URI[] cacheFiles = DistributedCache.getCacheFiles(job);
    .
    .
    .
}
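To flesh out the snippet above, here is a minimal sketch of a mapper whose setup() reads the cache file URIs via getCacheFiles(). The class name and the decision to just print the URIs are illustrative, not from the original mail; getCacheFiles() takes the job Configuration and returns the HDFS URIs registered for the job.

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative mapper: lists the distributed-cache file URIs during setup().
public class CacheMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void setup(Context context) throws IOException {
        // getCacheFiles() returns the (HDFS-side) URIs added to the cache,
        // unlike getLocalCacheFiles(), which returns localized paths.
        URI[] cacheFiles = DistributedCache.getCacheFiles(context.getConfiguration());
        if (cacheFiles != null) {
            for (URI uri : cacheFiles) {
                System.out.println("cached: " + uri);
            }
        }
    }
}
```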
If that also doesn't seem to work and if you have
In my company, we intend to set up a Hadoop cluster to run analytics
applications. This cluster would have about 120 data nodes with dual-socket
servers and a gigabit interconnect. We are also exploring a solution with 60
quad-socket servers. How do the quad-socket and dual-socket
Hi,
I am getting the below error since my last Java update; I am using Mac OS
10.7.2 and CDH3u0.
I tried with OpenJDK 1.7 and Sun JDK 1.6.0_29. I used to run Oozie with
Hadoop till the last update.
Any help is appreciated. I can send the details of core-site.xml as well.
Thanks,
-Idris
A workaround is available in https://issues.apache.org/jira/browse/HADOOP-7489;
try adding those options in hadoop-env.sh.
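For reference, the workaround discussed in HADOOP-7489 for the Mac OS X "Unable to load realm info from SCDynamicStore" error is usually applied by passing dummy Kerberos system properties to the JVM via HADOOP_OPTS; a sketch of the hadoop-env.sh addition (the empty realm/KDC values are the commonly cited form, not taken from this thread):

```shell
# hadoop-env.sh: give the JVM placeholder Kerberos settings so it does not
# try to read realm info from SCDynamicStore on Mac OS X.
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
```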
Regards,
Uma
From: Idris Ali [psychid...@gmail.com]
Sent: Friday, December 16, 2011 8:16 PM
To: common-user@hadoop.apache.org;
Thanks Uma,
I have been using that parameter to avoid the SCDynamicStore error, and things
were fine till the 1.6.0_26 Sun JDK update.
But with the new update nothing seems to work.
-Idris
On Fri, Dec 16, 2011 at 8:24 PM, Uma Maheswara Rao G
mahesw...@huawei.com wrote:
Some workaround available in
Pierre,
As discussed in other recent threads, it depends.
The most sensible thing for Hadoop nodes is to find a sweet spot for
price/performance.
In general that will mean keeping a balance between compute power, disks,
and network bandwidth, and factoring in racks, space, operating costs, etc.
How
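To make that balance concrete, here is a back-of-the-envelope comparison of the two layouts from the question. The NIC speed (one 1 Gb/s link) and disk count per server (4) are my assumptions for illustration, not figures from the thread:

```java
// Rough comparison of 120 dual-socket nodes vs 60 quad-socket nodes.
public class ClusterBalance {
    public static void main(String[] args) {
        int dualNodes = 120, dualSocketsPer = 2;
        int quadNodes = 60,  quadSocketsPer = 4;

        // Assumption: 1 Gb/s NIC and 4 disks per server in both layouts.
        int nicGbps = 1, disksPerServer = 4;

        System.out.println("dual: sockets=" + dualNodes * dualSocketsPer
                + " aggregate-net=" + dualNodes * nicGbps + " Gb/s"
                + " disks=" + dualNodes * disksPerServer);
        System.out.println("quad: sockets=" + quadNodes * quadSocketsPer
                + " aggregate-net=" + quadNodes * nicGbps + " Gb/s"
                + " disks=" + quadNodes * disksPerServer);
        // Same total sockets (240), but the 60-node layout has half the
        // aggregate network bandwidth and half the spindles per socket.
    }
}
```

The point of the arithmetic: with fewer, denser servers the compute stays constant while per-socket I/O and network capacity drop, which is exactly the balance question raised above.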
Yes, you can use utility methods from IOUtils.
ex:
FileOutputStream fo = new FileOutputStream(file);
IOUtils.copyBytes(fs.open(fileName), fo, 1024, true);
Here fs is the DFS FileSystem instance.
The other option is to make use of the FileSystem APIs.
EX:
FileSystem fs = new DistributedFileSystem();
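Putting the IOUtils approach into a complete driver, here is a minimal sketch. Rather than constructing DistributedFileSystem directly, it asks FileSystem.get() to resolve the filesystem from the Configuration (which picks up HDFS when fs.default.name points at the namenode); the paths are illustrative, not from the thread:

```java
import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Copies one HDFS file to the local filesystem using IOUtils.copyBytes.
public class HdfsCopyOut {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Resolves to HDFS when fs.default.name points at the namenode.
        FileSystem fs = FileSystem.get(conf);
        Path src = new Path("/user/example/input.txt"); // illustrative path
        OutputStream out = new FileOutputStream("local-copy.txt");
        // copyBytes(in, out, bufferSize, close=true) closes both streams.
        IOUtils.copyBytes(fs.open(src), out, 1024, true);
    }
}
```

fs.copyToLocalFile(src, dst) is an equivalent one-call alternative for whole-file copies.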
Hey all,
I've run into a problem where I need to change the user that I'm running the
HDFS commands as.
I've got clients uploading data from Windows boxes as a specific user. In HDFS,
the owner shows up as domain\user. Now I need to get the data from a Linux box
which is tied to AD with the
Following my previous question, I put the complete code as
follows; I wonder if there is any method to get this working on
0.20.X using the new API.
The command I executed was:
bin/hadoop jar myjar.jar FileTest -files textFile.txt /input/
/output/
The complete code:
public class FileTest extends
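For the -files option to take effect on 0.20.x, the driver has to run through ToolRunner, since -files is interpreted by GenericOptionsParser rather than by the job itself. A minimal sketch of that pattern with the new API (this is the general shape, not the poster's actual code; the job setup details are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Driver for "hadoop jar myjar.jar FileTest -files textFile.txt <in> <out>".
// ToolRunner applies GenericOptionsParser, which handles -files.
public class FileTest extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf(), "file-test");
        job.setJarByClass(FileTest.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new FileTest(), args));
    }
}
```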
You might be suffering from HADOOP-7822; I'd suggest you verify your pid
files and fix the problem by hand if it is the same issue.
-Rahul
On Fri, Dec 16, 2011 at 2:40 PM, Joey Krabacher jkrabac...@gmail.com wrote:
Turns out my tasktrackers (on the datanodes) are not starting properly
so I
Our CDH2 production grid just crashed with some sort of master node failure.
When I went in there, JobTracker was missing and NameNode was up.
Trying to ls on HDFS met with no connection.
We decided to go for a restart. This is in the namenode log right now:
2011-12-17 01:37:35,568 INFO
pid files are there, I checked for running processes with the same
ID's and they all checked out.
--Joey
On Fri, Dec 16, 2011 at 5:40 PM, Rahul Jain rja...@gmail.com wrote:
You might be suffering from HADOOP-7822; I'd suggest you verify your pid
files and fix the problem by hand if it is the
None of the worker nodes' datanode logs show anything after the
initial startup announcement:
STARTUP_MSG: host = prod1-worker075/10.2.19.75
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.1+169.56
STARTUP_MSG: build = -r 8e662cb065be1c4bc61c55e6bff161e09c1d36f3;
compiled by