Are you using a 64-bit JDK? Which version?
regards,
Alex
On Fri, Oct 21, 2011 at 10:00 AM, Peng, Wei wrote:
> I am using the default heap size, which is 1000MB. The jobtracker hung
> when only I was running one job. Now I could not even restart the
> jobtracker.
> Can you teach me how to turn on GC logging in hadoop?
(or disable the trash facility).
Also you can use "# hadoop dfs -expunge" to empty the trash immediately.
And, at least, check the limits on the server (/etc/security/limits.conf).
- alex
On Fri, Oct 21, 2011 at 12:09 PM, Peng, Wei wrote:
> Alex, thank you a lot for helping me. I will figure out how to change
> the conf file.
I use 2000 (HADOOP_HEAPSIZE, in MB) in our environment, but it depends on
the memory on your servers.
regards,
Alex
On Fri, Oct 21, 2011 at 10:58 AM, Peng, Wei wrote:
> Yes, the heap size is the default 1000m (/bin/java -Xmx1000m).
> So if I can change the heap size to be bigger, I should be able to solve
> this problem.
On Fri, Oct 21, 2011 at 10:31 AM, Peng, Wei wrote:
> Thank you for your quick reply!!
>
> I cannot change the hadoop conf files because they are owned by a person
> who has left the company, though I have root access. My Java version is
> java version "1.5.0_07"
> Java(TM) 2 Runtime Environment, Standard Edition
Alex
On Fri, Oct 21, 2011 at 10:00 AM, Peng, Wei wrote:
> I am using the default heap size, which is 1000MB. The jobtracker hung
> when only I was running one job. Now I could not even restart the
> jobtracker.
> Can you teach me how to turn on GC logging in hadoop?
>
> Thanks
/technicalArticles/Programming/GCPortal/
- Alex
On Fri, Oct 21, 2011 at 9:47 AM, Peng, Wei wrote:
> Hi,
>
> When I was running a job on hadoop with 75% of mappers finished, the
> jobtracker hung so that I could not access
> jobtrackerserver:7845/jobtracker.jsp, and hadoop job -status hung as well.
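(Not spelled out in this thread, but GC logging is usually enabled by adding HotSpot flags to the daemon's JVM options in conf/hadoop-env.sh and restarting the daemon, e.g. export HADOOP_JOBTRACKER_OPTS="-verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/hadoop/jobtracker-gc.log", where the log path is only an example.)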
Hi,
When I was running a job on hadoop with 75% of mappers finished, the
jobtracker hung so that I could not access
jobtrackerserver:7845/jobtracker.jsp, and hadoop job -status hung as
well.
Then I stopped the jobtracker and restarted it. However, the jobtracker
could not be started. I received error messages.
I just solved the problem by releasing more space on the related HD
partitions.
Thank you all for your help!
Wei
-----Original Message-----
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Tuesday, September 20, 2011 9:35 PM
To: common-user@hadoop.apache.org
Subject: RE: Re: java.io.IOException: Incorrect data format
Sent: Tuesday, September 20, 2011 9:30 PM
To: common-user@hadoop.apache.org
Subject: Re: RE: java.io.IOException: Incorrect data format
Are you able to create the directory manually on the DataNode machine?
# mkdir -p /state/partition2/hadoop/dfs/tmp
Regards,
Uma
----- Original Message -----
From: "Peng, Wei"
Date:
Subject: RE: java.io.IOException: Incorrect data format
Are you able to create the directory manually on the DataNode machine?
# mkdir -p /state/partition2/hadoop/dfs/tmp
Regards,
Uma
----- Original Message -----
From: "Peng, Wei"
Date: Wednesday, September 21, 2011 9:44 am
Subject: RE: java.io.IOException: Incorrect data format
https://issues.apache.org/jira/browse/HDFS-1594
Regards,
Uma
----- Original Message -----
From: "Peng, Wei"
Date: Wednesday, September 21, 2011 9:01 am
Subject: java.io.IOException: Incorrect data format
To: common-user@hadoop.apache.org
> I was not able to restart my name server because the name server ran out of space.
I was not able to restart my name server because the name server ran
out of space. Then I adjusted dfs.datanode.du.reserved to 0, and used
tune2fs -m to lower the reserved block percentage and free some space,
but I still could not restart the name node.
I got the following error:
java.io.IOException: Incorrect data format. logVer
Hi,
I was copying some files from an old cluster to a new cluster.
I was not watching it when the copying failed (I think over 85% of the
data had been transferred).
The name node crashed, and I cannot restart it.
I got the following error when I try to restart the namenode
hadoop-daemon.
need to figure out why the job is failing.
Matt
-----Original Message-----
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Tuesday, September 20, 2011 10:44 AM
To: common-user@hadoop.apache.org
Subject: RE: how to set the number of mappers with 0 reducers?
The input is 9010 files (each 50
each split and have 1 file per map.
How much data/percentage of input are you assuming will be output from
each of these maps?
Matt
-----Original Message-----
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Tuesday, September 20, 2011 10:22 AM
To: common-user@hadoop.apache.org
Subject: RE: how to set the number of mappers with 0 reducers?
single file.
Soumya
On Tue, Sep 20, 2011 at 2:04 PM, Harsh J wrote:
> Hello Wei!
>
> On Tue, Sep 20, 2011 at 1:25 PM, Peng, Wei wrote:
> (snip)
> > However, the output from the mappers results in many small files (size
> > is ~50k; the block size, however, is 64M), so it wastes a lot of space.
Hi,
I have a hadoop job running on over 50k files, each of which is about
500M.
I need to extract some tiny information from each file, and no reducer is
needed.
However, the output from the mappers results in many small files (size
is ~50k; the block size, however, is 64M), so it wastes a lot of space.
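(A sketch, not from the thread: one way to avoid one tiny output file per mapper is to keep the extraction logic in the mapper but add a handful of pass-through reduce tasks, so the small records are shuffled into a few large output files. Assumes the new org.apache.hadoop.mapreduce API; ExtractMapper is a placeholder for the real extraction mapper.)

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ExtractAndCoalesce {

    // Placeholder for the real per-file extraction logic.
    public static class ExtractMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text("extracted"), line);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "extract-and-coalesce");
        job.setJarByClass(ExtractAndCoalesce.class);
        job.setMapperClass(ExtractMapper.class);
        // No setReducerClass: the default Reducer passes records through
        // unchanged, so 4 reduce tasks give 4 output files instead of ~50k.
        job.setNumReduceTasks(4);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The price is a shuffle, but only over the tiny extracted records, not over the large input files.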
Configuration changes aren't applied until the hadoop daemons are
restarted. Sounds like someone enabled permissions previously, but they
didn't take hold until you rebooted your cluster.
cheers,
-James
On Mon, Apr 25, 2011 at 1:19 AM, Peng, Wei wrote:
> I forgot to mention that the hadoop was running
From: James [mailto:ja...@tynt.com]
Sent: Sunday, April 24, 2011 5:36 AM
To: common-user@hadoop.apache.org
Subject: Re: HDFS permission denied
Check where the hadoop tmp setting is pointing to.
James
Sent from my mobile. Please excuse the typos.
On 2011-04-24, at 12:41 AM, "Peng, Wei" wrote:
Sent: Sunday, April 24, 2011 5:36 AM
To: common-user@hadoop.apache.org
Subject: Re: HDFS permission denied
Check where the hadoop tmp setting is pointing to.
James
Sent from my mobile. Please excuse the typos.
On 2011-04-24, at 12:41 AM, "Peng, Wei" wrote:
> Hi,
>
> I need help, badly.
Hi,
I need help, badly.
I got an HDFS permission error when starting to run a hadoop job:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=wp, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
I have the right permission to read and write files in my own hadoop
directory.
Thanks for the quick response.
Partitioning graphs into subgraphs and later combining the results is
too complicated to do. I prefer a simple method.
Currently, I do not want to divide the breadth-first search from a
single source. I just want to run 100 breadth-first searches from 100
source nodes.
Can someone tell me whether we can run multiple threads in hadoop?
Thanks
Wei
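(The thread doesn't answer this here, but Hadoop does ship a multithreaded map runner. A minimal sketch of wiring it up, assuming the new org.apache.hadoop.mapreduce API; BfsMapper is a stand-in for the real mapper, which must be thread-safe since several copies of map() run concurrently in one task:)

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class MultithreadedDriver {

    // Stand-in for the real (thread-safe) BFS mapper.
    public static class BfsMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text("node"), value);  // real BFS logic goes here
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "multithreaded-bfs");
        job.setJarByClass(MultithreadedDriver.class);
        // MultithreadedMapper is the task's mapper; it spawns N threads,
        // each running its own instance of the delegate BfsMapper.
        job.setMapperClass(MultithreadedMapper.class);
        MultithreadedMapper.setMapperClass(job, BfsMapper.class);
        MultithreadedMapper.setNumberOfThreads(job, 8);
        // ... input/output paths and types as usual ...
    }
}

This parallelizes within a map task; it mostly helps when the mapper is CPU- or latency-bound rather than IO-bound.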
-----Original Message-----
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Tuesday, December 21, 2010 9:07 PM
To: common-user@hadoop.apache.org
Subject: RE: breadth-first search
I was just trying to run BFS from 100 source nodes.
Sent: Tuesday, December 21, 2010 1:10 PM
To: common-user@hadoop.apache.org
Subject: Re: breadth-first search
Absolutely true. Nobody should pretend otherwise.
On Tue, Dec 21, 2010 at 10:04 AM, Peng, Wei wrote:
> Hadoop is useful when the data is huge and cannot fit into memory, but it
is less important.
2010/12/21 Peng, Wei
> The graph that my BFS algorithm is running on only needs 4 levels to reach
> all nodes. The reason I say "many iterations" is that there are 100 source
> nodes, so 400 iterations in total. The algorithm should be right, and I
> cannot
not released yet.
But I had tested the BFS using Hama and HBase.
Sent from my iPhone
On 2010. 12. 21., at 11:30 AM, "Peng, Wei" wrote:
> Yoon,
>
> Can I use HAMA now, or is it still in development?
>
> Thanks
>
> Wei
>
> -----Original Message-----
> From: Edward
To: common-user@hadoop.apache.org
Subject: Re: breadth-first search
On Mon, Dec 20, 2010 at 8:16 PM, Peng, Wei wrote:
> ... My question is really about what is the efficient way for graph
> computation, matrix computation, and algorithms that need many iterations
> to converge (with intermediate
http://horicky.blogspot.com/2010/02/nosql-graphdb.html
One MR algorithm is based on Dijkstra and the other is based on BFS. I
think
the first one is more efficient than the second one.
Rgds,
Ricky
-----Original Message-----
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Monday, December 20, 2010 5:50 PM
To: common-user@hadoop.apache.org
http://people.apache.org/~edwardyoon/papers/Apache_HAMA_BSP.pdf
On Tue, Dec 21, 2010 at 10:49 AM, Peng, Wei wrote:
>
> I implemented an algorithm to run hadoop on a 25GB graph data set to
> calculate its average separation length.
> The input format is V1(tab)V2 (where V2 is the friend of V1).
> My purpose is to first randomly select some seed nodes, and then for
> each node, calculate the shortest paths from this node to all other nodes.
I implemented an algorithm to run hadoop on a 25GB graph data set to
calculate its average separation length.
The input format is V1(tab)V2 (where V2 is the friend of V1).
My purpose is to first randomly select some seed nodes, and then for
each node, calculate the shortest paths from this node to all other nodes.
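(A sketch of the standard MapReduce BFS round this thread circles around; it is not the poster's code. Assumed record format: nodeId <TAB> distance <TAB> comma-separated neighbors, with distance -1 meaning "not reached yet". A driver would rerun this job, feeding each round's output into the next, until no distance changes; for 100 seed nodes, the value would carry 100 distances, or the job runs once per seed.)

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class BfsRound {

    public static class FrontierMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable off, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] parts = line.toString().split("\t");
            String node = parts[0];
            int dist = Integer.parseInt(parts[1]);
            String adj = parts.length > 2 ? parts[2] : "";
            // Re-emit the node's own record so the reducer keeps its adjacency list.
            ctx.write(new Text(node), new Text(dist + "\t" + adj));
            // Expand the frontier: every neighbor of a reached node is
            // reachable in dist + 1 hops.
            if (dist >= 0 && !adj.isEmpty()) {
                for (String nbr : adj.split(",")) {
                    ctx.write(new Text(nbr), new Text((dist + 1) + "\t"));
                }
            }
        }
    }

    public static class MinDistReducer
            extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text node, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            int best = -1;    // shortest distance seen for this node
            String adj = "";  // its adjacency list, if any record carried one
            for (Text v : values) {
                String[] parts = v.toString().split("\t", -1);
                int d = Integer.parseInt(parts[0]);
                if (d >= 0 && (best < 0 || d < best)) best = d;
                if (parts.length > 1 && !parts[1].isEmpty()) adj = parts[1];
            }
            ctx.write(node, new Text(best + "\t" + adj));
        }
    }
}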
Praveen,
I just had a quick solution (might be stupid).
In the first job, you can easily create the adjacency list plus the reversed
friendships from your input file. (You can use a special character, e.g. "|",
to distinguish these two types of output; see the sketch after the sample
input below.) The input is
1 2
1 3
2 4
2
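(A sketch of that first job's mapper, not from the thread; it assumes edge lines "u<TAB>v" as in the sample above and tags reversed edges with "|":)

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class EdgeMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable off, Text line, Context ctx)
            throws IOException, InterruptedException {
        String[] uv = line.toString().trim().split("\\s+");
        if (uv.length != 2) return;                         // skip malformed lines
        ctx.write(new Text(uv[0]), new Text(uv[1]));        // forward: v is u's friend
        ctx.write(new Text(uv[1]), new Text("|" + uv[0]));  // reversed, tagged with "|"
    }
}

The reducer for node n would then collect the untagged values into n's adjacency list and the "|"-tagged values into the reversed-friendship list.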
> ... reason for that?
>
> Thank you,
> Maha
>
> On Dec 17, 2010, at 2:59 PM, Peng, Wei wrote:
>
> > You can put your local file to the distributed file system by hadoop fs -put
> > localfile DFSfile.
> > Then access it by
> > Configuration conf = new Configuration();
You can put your local file to the distributed file system by hadoop fs -put
localfile DFSfile.
Then access it by
// imports: org.apache.hadoop.fs.FileSystem, FSDataInputStream, Path; java.net.URI
Configuration conf = new Configuration();
try {
        FileSystem fs = FileSystem.get(URI.create(DFSfile), conf);
        FSDataInputStream in = fs.open(new Path(DFSfile));
        // ... read from 'in' ...
        in.close();
} catch (IOException e) {
        e.printStackTrace();
}
sorting and shuffling.
Is there anything that I can call to exit mapreduce and return to the main
method?
Thanks
Wei
-----Original Message-----
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Fri 12/17/2010 2:14 PM
To: common-user@hadoop.apache.org
Subject: RE: Please help with hadoop
My question is how to exit hadoop reducer.
Can I just write "return;" in reducer?
Wei
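(Not answered in the thread, but yes: reduce() is invoked once per key group, so a bare return just skips the rest of that one group and the framework moves on to the next key; it does not abort the job. Control only comes back to main() when job.waitForCompletion() returns. A sketch, where shouldSkip() is a hypothetical predicate:)

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SkippingReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    // Hypothetical predicate for key groups we want to ignore.
    private boolean shouldSkip(Text key) {
        return key.toString().startsWith("#");
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
            throws IOException, InterruptedException {
        if (shouldSkip(key)) {
            return;  // skips this key group only; the job keeps running
        }
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        ctx.write(key, new IntWritable(sum));
    }
}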
-----Original Message-----
From: Aman [mailto:aman_d...@hotmail.com]
Sent: Fri 12/17/2010 1:24 PM
To: hadoop-u...@lucene.apache.org
Subject: RE: Please help with hadoop configuration parameter set and get
Set
> job.setMapOutputValueClass(IntWritable.class);
>
> job.waitForCompletion(true);
> }
> }
> }
>
> Please excuse me if there are missing braces. There might be more
> efficient ways to set up the jobs and the file system. I didn't have
> much time, so I ended up with something that worked for me at the time.
> Let me know if you have more questions.
...p/tutorial/module4.html) running until a solution was found.
Kind regards,
Arindam Khaled
On Dec 17, 2010, at 12:58 AM, Peng, Wei wrote:
> Hi,
>
> I am new to hadoop.
>
> Today I was struggling with a hadoop problem for several hours.
Hi,
I am new to hadoop.
Today I was struggling with a hadoop problem for several hours.
I initialize a parameter by setting the job configuration in main.
E.g. Configuration con = new Configuration();
con.set("test", "1");
Job job = new Job(con);
Then in the mapper class, I want to get this parameter back.
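(A sketch of the retrieval side, not from the thread: with the new API the value set in main() travels with the job and can be read from the task's Configuration via the context. TestParamMapper is hypothetical; note the value must be set on the Configuration before the Job is constructed, as above, because Job copies the conf.)

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TestParamMapper extends Mapper<LongWritable, Text, Text, Text> {

    private String testValue;

    @Override
    protected void setup(Context ctx) {
        // Reads the value stored in main() with con.set("test", "1").
        testValue = ctx.getConfiguration().get("test", "0");  // "0" is a fallback default
    }

    @Override
    protected void map(LongWritable key, Text line, Context ctx)
            throws IOException, InterruptedException {
        ctx.write(new Text(testValue), line);
    }
}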