Is this related?
http://stackoverflow.com/questions/1124771/how-to-solve-java-io-ioexception-error-12-cannot-allocate-memory-calling-run
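For reference, JVM reuse for tasks is controlled by mapred.job.reuse.jvm.num.tasks;
a minimal sketch with the old API (the value here is illustrative):

import org.apache.hadoop.mapred.JobConf;

// -1 reuses a JVM for an unlimited number of tasks from the same job
// on a given node; the default of 1 starts a fresh JVM per task.
JobConf conf = new JobConf();
conf.setNumTasksToExecutePerJvm(-1);  // mapred.job.reuse.jvm.num.tasks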
On Wed, Aug 1, 2012 at 1:33 PM, Keith Wiley wrote:
> I know there is a lot of discussion about JVM reuse in Hadoop, but that
> usually refers to mappers runni
t imagine what would happen if you had C/C++,
where you would be buried in segfaults.
I would say that you can use C/C++ to implement MapReduce if you are using
multicore/GPUs as your underlying platform, where you know the hardware
intimately and are free from network I/O latency.
-Dhruv Kumar
1) Check with jps to see if all services are functioning.
2) Have you tried appending dfshealth.jsp to the end of the URL, as the 404
suggests?
Try using this:
http://localhost:50070/dfshealth.jsp
On Thu, Jul 7, 2011 at 7:13 AM, Adarsh Sharma wrote:
> Dear all,
>
> Today I am stuck with the stra
Writable to see how it generates type
> codes.
>
> -Joey
> On Jul 4, 2011 2:55 PM, "Dhruv Kumar" wrote:
> > I'm having some difficulty with using ArrayWritable in the following test
> > code:
> >
> > ArrayWritable array = new ArrayWritable(IntWritable
I'm having some difficulty with using ArrayWritable in the following test
code:
ArrayWritable array = new ArrayWritable(IntWritable.class);
IntWritable[] ints = new IntWritable[4];
for (int i = 0; i < 4; i++) {
    ints[i] = new IntWritable(i);
}
array.set(ints);
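For what it's worth, reading the values back needs a cast, since get()
returns Writable[]; and using ArrayWritable as a job's output value type
requires a small subclass with a no-argument constructor (the subclass
name below is hypothetical):

Writable[] values = array.get();
int first = ((IntWritable) values[0]).get();

// Needed only when the array is used as an output value type, so the
// framework can instantiate it during deserialization.
public static class IntArrayWritable extends ArrayWritable {
    public IntArrayWritable() {
        super(IntWritable.class);
    }
}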
It is a permission issue. Are you sure that the account "hadoop" has read
and write access to /usr/local/* directories?
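If not, something along the lines of "sudo chown -R hadoop /usr/local/hadoop"
(the path is a guess at your layout) would hand the directory over to that
account.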
The installation of Hadoop has always been effortless for me. Just follow
the step-by-step instructions given at:
http://hadoop.apache.org/common/docs/stable/single_node_setup.ht
Can you pre-process the data to adhere to a uniform serialization scheme
first?
Dir 1: to to
Dir 2: to
or
Dir 1: to
Dir 2: to to
Next, do a reduce-side join.
To the best of my knowledge, Hadoop does not allow multiple value types
on the reduce side.
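A minimal sketch of the tagging idea, assuming both inputs can be
normalized to Text first (the class name and tags are hypothetical):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Tags each record with its source so the reducer can tell the two
// directories apart after the shuffle; a twin mapper for Dir 2 would
// emit the tag "B" instead.
public class Dir1TaggingMapper extends Mapper<Object, Text, Text, Text> {
    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumption: each input line is "joinKey<TAB>payload".
        String[] parts = value.toString().split("\t", 2);
        if (parts.length == 2) {
            context.write(new Text(parts[0]), new Text("A\t" + parts[1]));
        }
    }
}

MultipleInputs.addInputPath can then bind each directory to its own
mapper class.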
On Tue, Jun 28, 2011 at 5:53
On Tue, Jun 21, 2011 at 4:32 PM, Harsh J wrote:
> ((IntWritable) entry.getKey()).get(); and similar.
>
Perfect! Thanks Harsh.
>
> On Wed, Jun 22, 2011 at 2:00 AM, Dhruv Kumar wrote:
> > The exact problem I'm facing is the following:
> >
> > entry.g
r(s) to another type (Vector) which can
be consumed by some legacy code for actual processing.
>
> alberto.
>
> On 21 June 2011 17:14, Dhruv Kumar wrote:
>
> > I want to extract the key-value pairs from a MapWritable, cast them into
> > Integer (key) and Double (val
I want to extract the key-value pairs from a MapWritable, cast them into
Integer (key) and Double (value) types, and add them to another collection.
I'm attempting the following but this code is incorrect.
// initialDistributionStripe is a MapWritable
// initialProbabilities is of type Vector whi
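Harsh's cast-and-get() suggestion above resolves this; a sketch of the
loop, assuming the stripe holds IntWritable keys and DoubleWritable
values, with a plain Map standing in for the Vector consumed by the
legacy code:

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

// MapWritable implements Map<Writable, Writable>, so its entries can
// be iterated directly; each key and value is cast and then unwrapped.
Map<Integer, Double> initialProbabilities = new HashMap<Integer, Double>();
for (Map.Entry<Writable, Writable> entry : initialDistributionStripe.entrySet()) {
    int k = ((IntWritable) entry.getKey()).get();
    double v = ((DoubleWritable) entry.getValue()).get();
    initialProbabilities.put(k, v);
}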
Can you be more specific? Tom White's book has a whole section devoted to
it.
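For a single file, one option is to rewrite it through a CompressionCodec;
a minimal sketch (the paths are hypothetical, and for many files a
MapReduce pass with compressed output is the usual route):

import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

// Streams an existing HDFS file through gzip into a new .gz copy.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
InputStream in = fs.open(new Path("/data/input.txt"));
OutputStream out = codec.createOutputStream(
        fs.create(new Path("/data/input.txt.gz")));
IOUtils.copyBytes(in, out, conf, true);  // true closes both streams

At the job level, FileOutputFormat.setCompressOutput(job, true) together
with setOutputCompressorClass(job, GzipCodec.class) produces compressed
output directly.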
On Fri, Jun 10, 2011 at 7:24 PM, Madhu Ramanna wrote:
> Hello,
>
> What is the most efficient way to compress several files already in Hadoop?
>
>