value is a custom object; I can't get the values themselves into the file, only
a reference. What and where do I have to make changes or additions so that the
print-into-file function handles the custom Writable object?
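A minimal sketch of what such a value class needs. Hadoop's `TextOutputFormat` writes `value.toString()`, so without a `toString()` override you see only the default object reference. The class name `MyPair` and its fields are hypothetical; with Hadoop on the classpath it would declare `implements org.apache.hadoop.io.Writable`, whose two required methods are shown here against the plain `java.io` interfaces so the sketch stands alone.

```java
import java.io.*;

// Hypothetical custom value type. With Hadoop on the classpath it would
// declare "implements org.apache.hadoop.io.Writable"; the two methods
// below are exactly the ones that interface requires.
public class MyPair {
    private int id;
    private String label;

    public MyPair() {}  // Writable types need a no-arg constructor
    public MyPair(int id, String label) { this.id = id; this.label = label; }

    public void write(DataOutput out) throws IOException {
        out.writeInt(id);
        out.writeUTF(label);
    }

    public void readFields(DataInput in) throws IOException {
        id = in.readInt();
        label = in.readUTF();
    }

    // TextOutputFormat prints value.toString(); without this override the
    // output file shows the default reference, e.g. "MyPair@1b6d3586".
    @Override
    public String toString() {
        return id + "\t" + label;
    }
}
```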
Thanks & regards,
--
- Deepak Diwakar,
…'t
perform its job.
Anyway, thanks for your help! It helped me sort out some things.
Cheers,
Deepak
On Thu, Feb 12, 2009 at 5:32 PM, He Chen wrote:
> I think you should confirm your balancer is still running. Did you change
> the threshold of the HDFS balancer? Maybe it is too large?
>
> The b
stops when one node runs out of disk?
Any further inputs are appreciated!
Cheers,
Deepak
TellyTopia Inc.
Thanks friend.
2009/1/19 Miles Osborne
> that is a timing / space report
>
> Miles
>
> 2009/1/19 Deepak Diwakar :
> > Hi friends,
> >
> > could somebody tell me what the following quoted message means?
> >
> > " 3154.42user 76.0
it because of the
heap size of the program?
I am running a Hadoop task in standalone mode on almost 250 GB of compressed
data.
This error message comes after the task finishes.
Thanks in advance,
--
- Deepak Diwakar,
> Actually I got the same problem, and temporarily I've solved it by including
> the JDBC dependencies inside the main jar.
>
> Another solution I've found is that you can place all the jar dependencies
> inside the hadoop/lib directory.
>
>
> Hope it helps.
>
>
> -- Gerard
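The two options quoted above can be sketched as shell steps. This is a sketch under assumptions: `$HADOOP_HOME`, `myjob.jar`, and the connector jar name are placeholders, and option 2 relies on Hadoop adding jars found under a job jar's `lib/` directory to the task classpath.

```shell
# Option 1: drop the driver jar into Hadoop's lib directory
# (picked up when the daemons are restarted)
cp mysql-connector-java-bin.jar $HADOOP_HOME/lib/

# Option 2: bundle the driver inside the job jar under a lib/ subdirectory;
# Hadoop adds jars in a job jar's lib/ folder to the task classpath
jar uf myjob.jar lib/mysql-connector-java-bin.jar
```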
Hi all,
I am sure someone must have tried a MySQL connection using Hadoop, but I am
running into a problem.
Basically, I am not sure how to include the JDBC connector jar on the classpath
in the Hadoop run command, or whether there is another way to incorporate the
JDBC connector jar into the main jar
there.
Then in the main task, we just include "import
org.apache.hadoop.io.{X}Writable;". But this is not working for me. Basically,
at compile time the compiler doesn't find my custom Writable class,
which I have placed in the mentioned folder.
Please help me in this endeavor.
Thanks
deepak
Regards,
--
- Deepak Diwakar,
into the file hadoop-site.xml and set the value field to a
different value for each Hadoop directory. Then there
would not be any conflict when keeping the intermediate files for the
different map tasks.
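A sketch of what that change could look like, assuming the conflicting setting is `hadoop.tmp.dir` (the directory names here are illustrative):

```xml
<!-- hadoop-site.xml for one instance: give each Hadoop instance its own
     hadoop.tmp.dir so intermediate map outputs do not collide -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-instance-a</value>
</property>
```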
Thanks
Deepak,
2008/8/29 Deepak Diwakar <[EMAIL PROTECTED]>
rpretation.
Any feasible solution for the standalone mode is appreciated.
Thanks
Deepak
2008/8/28 lohit <[EMAIL PROTECTED]>
> Hi Deepak,
> Can you explain what process and what files they are trying to read? If you
> are talking about map/reduce tasks reading files on DFS, then,
extra space.
Please suggest a suitable solution to this.
Thanks & Regards,
Deepak
Hadoop usually takes either a single file or a folder as an input parameter.
But is it possible to modify it so that it can take a list of files (not a
folder) as the input parameter?
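One way, assuming the old `org.apache.hadoop.mapred` API: `FileInputFormat.addInputPath(JobConf, Path)` can be called once per file, and `setInputPaths` also accepts a comma-separated list of paths. A self-contained sketch of building such a list (file names are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

// Build the comma-separated path list that the old-API
// FileInputFormat.setInputPaths(JobConf, String) accepts; with Hadoop on
// the classpath you could instead call
// FileInputFormat.addInputPath(conf, new Path(f)) once per file.
public class InputList {
    public static String join(List<String> files) {
        return String.join(",", files);
    }

    public static void main(String[] args) {
        List<String> files =
            Arrays.asList("/data/a.txt", "/data/b.txt", "/data/c.txt");
        System.out.println(join(files));  // one string, three input files
    }
}
```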
--
- Deepak Diwakar,
e up, anything in the logs?
> http://wiki.apache.org/hadoop/Help
>
> Arun
>
>
>
--
- Deepak Diwakar,
Associate Software Eng.,
Pubmatic, pune
Contact: +919960930405
how to make
full utilization of a single server with multicore processors? Is there a
pseudo-distributed mode in Hadoop? What changes are required in the config
files? Please let me know in detail. Does it have anything to do with
hadoop-site.xml and mapred-default.xml?
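A sketch of the relevant settings, assuming the 0.x-era property names: per-node task slots are controlled by the `mapred.tasktracker.*.tasks.maximum` properties, and raising them lets one TaskTracker use all cores. The values here are illustrative for a 4-core machine.

```xml
<!-- hadoop-site.xml: raise per-node task slots to use all cores -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
```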
Thanks in advance.
--
- Deepak
are alive.
Thanks,
Deepak