Any ideas?
regards,
--
Ahmad Humayun
Research Associate
Computer Science Dpt., LUMS
http://suraj.lums.edu.pk/~ahmadh
+92 321 4457315
> basis like passing it as config like any other configs of hadoop.
>> Thanks,
>> Lohit
>>
>>
>>
>>
>> ----- Original Message -----
>> From: Ahmad Humayun <[EMAIL PROTECTED]>
>> To: core-dev@hadoop.apache.org
>> Sent: Tuesday, September
>> type of computation (algorithm) and also the cluster setup it was run on,
>> plus the input data size. We are looking for computations that have taken
>> a large amount of time.
>>
>>
>
> http://developer.yahoo.com/blogs/hadoop/2008/02/yahoo-worlds-l
More than glad that you people are finding it helpful :)
On Tue, Jul 1, 2008 at 1:30 AM, Sangmin Lee <[EMAIL PROTECTED]> wrote:
> thank you for sharing your valuable doc.
>
> -sangmin
>
> On Tue, Jun 24, 2008 at 7:02 AM, Ahmad Humayun <[EMAIL PROTECTED]>
> wrote:
> > point me to the startpoint?
> >
> > I appreciate your help in advance.
> >
> > Cheers,
> > Sangmin
> >
> >
>
>
> --
> oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo 00 oo
> 00 oo 00 oo
> "If you want your children to be intelligent, read them fairy tales. If you
> want them to be more intelligent, read them more fairy tales." (Albert
> Einstein)
>
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
http://suraj.lums.edu.pk/~ahmadh
+92 321 4457315
494)
Any ideas how to solve this? Do I need to open some port on my network ...
even though I am running hadoop on a single machine?
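For anyone searching the archives later: here is a minimal sketch (plain Java; assumes the Hadoop jars and your hadoop-site.xml are on the classpath, and ConfCheck is just an illustrative name) that prints the address the client will actually dial. For a single-machine, pseudo-distributed setup this is typically hdfs://localhost:9000, i.e. loopback only, so nothing should need to be opened to the outside world:

import org.apache.hadoop.conf.Configuration;

public class ConfCheck {
    public static void main(String[] args) {
        // Configuration() reads hadoop-default.xml and hadoop-site.xml
        // from the classpath.
        Configuration conf = new Configuration();
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
    }
}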
thanks
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
http://suraj.lums.edu.pk/~ahmadh
+92 321 4457315
anyways
Thanks for the support guys :)
Regards,
On Tue, May 6, 2008 at 11:04 PM, Arun C Murthy <[EMAIL PROTECTED]> wrote:
>
> On May 5, 2008, at 2:03 PM, Ahmad Humayun wrote:
>
> Thanks Christophe :) Hadoop is running fine now :)
> >
> > does anyone know how to reduce
I set the value to -Xms512m and it now works with 512 MB assigned to my VM :)
Is there a recommended heap size I should use with Hadoop? Is 512 MB too
little?
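For what it's worth, here is a quick way to check what a JVM actually got (plain Java, no Hadoop needed; HeapCheck is just an illustrative name):

public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the most heap this JVM will attempt to use
        // (the -Xmx ceiling); -Xms only sets the starting size.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap for this JVM: " + maxMb + " MB");
    }
}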
Regards,
On Tue, May 6, 2008 at 6:17 PM, Ahmad Humayun <[EMAIL PROTECTED]> wrote:
> 32 bit JVM
>
>
> On Tue, May 6, 2008
32 bit JVM
On Tue, May 6, 2008 at 3:20 PM, Steve Loughran <[EMAIL PROTECTED]> wrote:
> Ahmad Humayun wrote:
>
> > Just tried with 512 MB and 1 GB, and guess what, it started
> > (finally!!) working at a GB.
> >
> > Is there a way to lower this requirement?
mailing list all of you are
doing a great job :)
Regards,
On Tue, May 6, 2008 at 1:01 AM, Ahmad Humayun <[EMAIL PROTECTED]> wrote:
> Just tried with 512 MB and 1 GB, and guess what, it started
> (finally!!) working at a GB.
>
> Is there a way to lower this requirement?
juice that way :(
Regards,
On Tue, May 6, 2008 at 12:49 AM, Ahmad Humayun <[EMAIL PROTECTED]>
wrote:
> Well my VM is allocated 256 MB. I'll just increase it and report back
>
> Plus I have just tried HelloWorld programs, and since they hardly have
> any memory usage, t
other Java
> applications in your JVM?
>
> Christophe
>
> On Mon, May 5, 2008 at 9:33 PM, Ahmad Humayun <[EMAIL PROTECTED]>
> wrote:
>
> > Hi there,
> >
> > Has anybody tried running Hadoop on VMware (6.0)? I have installed
> > openSUSE 10.
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Any ideas? Is it a problem with VMware? Or maybe my Java environment setup?
Or am I simply doing something wrong in setting up Hadoop?
Thanks again!
Regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
One small question everyone: does Hadoop need the JDK, or can it even run on
a plain JRE?
thanks
Regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
Error occurred during initialization of VM
Any ideas?
Regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
Figured it out. The dir path to the JVM wasn't right (look at the first
two paths).
Thanks anyways :)
On Thu, Apr 17, 2008 at 9:50 PM, Ahmad Humayun <[EMAIL PROTECTED]>
wrote:
> Hello there,
>
> I'm trying to get the swig (python) wrapper for libhdfs working using
>
'make' somehow ends up
deleting libhdfs.so.1. And I also don't get the file _pyhdfs.so.
It would be great if someone could help me diagnose my problem.
Regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
I need to use HDFS with Python. I have looked at Saptarshi's guide (
http://www.stat.purdue.edu/~sguha/code.html#hadoopy) but it mentions that
the method doesn't support writes. I need one that does. Any ideas?
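In the meantime, whatever binding one ends up using has to funnel through Hadoop's FileSystem API, so here is a minimal Java sketch of the write path a Python wrapper would need to expose (the path and contents are made up; assumes a running HDFS plus the Hadoop jars and hadoop-site.xml on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // picks up hadoop-site.xml
        FileSystem fs = FileSystem.get(conf);      // connects to fs.default.name
        // Create (or overwrite) a file and write a few bytes into it.
        FSDataOutputStream out = fs.create(new Path("/tmp/write-test.txt"));
        out.writeBytes("hello hdfs\n");
        out.close();
        fs.close();
    }
}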
regards
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LU
<[EMAIL PROTECTED]>
wrote:
> Does libhdfs require Java installed?
> Can I write a C++ application that is using HDFS without requiring
> Java installation?
>
> Thanks for your help,
> Cagdas
>
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
the
intermediate data to the DFS for safekeeping?
thanks for all the help :)
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
these cases?
>
> Thanks,
> Edward.
>
> On 3/13/08, Sanjay Radia <[EMAIL PROTECTED]> wrote:
> > Ahmad Humayun wrote:
> > > So does that mean nodes can possibly read files that have been
> "deleted"
> > >
> > If the name node entry has been deleted, n
s of
> > that file, as soon as that file gets deleted?
> >
>
> The replicas are scheduled to be deleted by the namenode. But there may be
> some delay before they are actually deleted on the datanodes.
>
> Hairong
>
>
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
Thanks Amar :)
On Wed, Mar 12, 2008 at 5:48 PM, Amar Kamat <[EMAIL PROTECTED]> wrote:
> See HADOOP-2919. It explains the current technique. This will be a good
> starting point.
> Amar
> On Wed, 12 Mar 2008, Ahmad Humayun wrote:
>
> > Can somebody explain the process
deleted?
Thank you for all the help :)
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
.
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
Thanks a lot Amar. As usual, you have cleared a lot of the haze in my head :)
regards,
On Mon, Mar 3, 2008 at 9:32 PM, Amar Kamat <[EMAIL PROTECTED]> wrote:
> On Mon, 3 Mar 2008, Ahmad Humayun wrote:
>
> > Hello everyone,
> >
> > I have a question about the inter
hash
function is in map?
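In case it helps: by default Hadoop partitions map output by hashing the key; below is a minimal sketch that mirrors what the stock HashPartitioner does (MyPartitioner is just an illustrative name, old mapred API assumed):

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class MyPartitioner<K, V> implements Partitioner<K, V> {
    public void configure(JobConf job) { }

    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask the sign bit so the result of the modulo is never negative,
        // then spread keys evenly across the reduce tasks.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

You can swap in your own by setting it on the JobConf with
conf.setPartitionerClass(MyPartitioner.class).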
thanks again for the great support on this mailing list.
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
t change/modify the contents of a file once it is written.
>
> Thanks,
> dhruba
>
> -----Original Message-----
> From: Ahmad Humayun [mailto:[EMAIL PROTECTED]
> Sent: Saturday, March 01, 2008 1:27 AM
> To: core-dev@hadoop.apache.org
> Subject: Re: hdfsLock
>
> We
Hello there,
What's the difference between the TaskInProgress class and
TaskTracker.TaskInProgress (the inner class)?
thanks for bearing with my stupid question :)
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
Hi there,
Can somebody point me to papers related to MapReduce / Hadoop, like Sinfonia
and the MapReduce paper itself?
Thanks for the help :)
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
stream. FileLocks have
> nothing to do with that. They were meant to be something like 'flock()'
> system call.
>
> Raghu.
>
> Ahmad Humayun wrote:
> > Do you know any reason why? Is it because only one thread can write to
> > a specific file in the hdfs at a
Raghu Angadi <[EMAIL PROTECTED]>
wrote:
>
> File locking is not supported in HDFS. Not sure if it ever was supported
> properly. This interface was deprecated last year.
>
> Raghu.
>
> Ahmad Humayun wrote:
> > Hello everyone,
> >
> > Does anybody have a
Hello everyone,
Does anybody have an idea why hdfsLock and hdfsReleaseLock have been taken
out of libhdfs? How do I lock a file now using libhdfs? Can somebody point me
to the changelog or something?
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
thanks Arun :)
On Thu, Feb 28, 2008 at 5:04 AM, Arun C Murthy <[EMAIL PROTECTED]> wrote:
>
> On Feb 27, 2008, at 11:27 AM, Ahmad Humayun wrote:
>
> > Thanks Arun for the comment :) That actually explains how the
> > libhdfs code
> > is minute.
> >
> >
original hadoop
code is lying while using libhdfs.so
thanks,
regards,
Ahmad H.
On Thu, Feb 28, 2008 at 12:04 AM, Arun C Murthy <[EMAIL PROTECTED]> wrote:
> Ahmad,
>
> On Feb 27, 2008, at 10:44 AM, Ahmad Humayun wrote:
>
> > Hello everyone,
> >
> > Apparently n
Hello everyone,
Apparently no one knew the answer to my question :( Am I looking at things
the wrong way, or has nobody compared the libhdfs code to hdfs? Or am I
completely wrong, and there is no difference between libhdfs and hdfs at all?
regards,
On Mon, Feb 25, 2008 at 11:07 PM, Ahmad Humayun <[EM
mean a peer to peer system? Although that
> would be very fault tolerant, wouldn't there be consistency and
> performance issues?
> If I understand correctly, the rationale behind current centralized
> architecture is that it keeps the system simple. Would it be useful to
thanks :)...sorry for bugging you over and over.
regards,
On Mon, Feb 25, 2008 at 11:07 PM, Arun C Murthy <[EMAIL PROTECTED]> wrote:
> Ahmad,
>
> On Feb 25, 2008, at 9:57 AM, Ahmad Humayun wrote:
>
> > So I'm guessing that ant uses the build.xml file :)
> >
>
with the libhdfs.
thanks,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
So I'm guessing that ant uses the build.xml file :)
thanks again Arun.
regards,
Ahmad H.
On Mon, Feb 25, 2008 at 10:46 PM, Arun C Murthy <[EMAIL PROTECTED]> wrote:
>
> On Feb 24, 2008, at 11:23 PM, Ahmad Humayun wrote:
>
> > Thanks Arun :), I'll try that, cau
Thanks Arun :), I'll try that, because I was just using make before.
So, in short, Hadoop is not set up to be compiled with make?
regards,
On Mon, Feb 25, 2008 at 11:07 AM, Arun C Murthy <[EMAIL PROTECTED]> wrote:
> Ahmad,
>
> On Feb 24, 2008, at 3:52 AM, Ahmad Humayun wrot
but please, it will be great if someone can help me here.
regards,
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315
s. Do you have any idea where the scheduling module is in the
> source files?
>
> 2008/2/13, Ahmad Humayun <[EMAIL PROTECTED]>:
> >
> > It's not a separate module; as far as I know, it's just part of the whole
> > Hadoop implementation. If you would like to us
ask others too :)
regards,
On Feb 13, 2008 10:18 PM, Zhu Huijun <[EMAIL PROTECTED]> wrote:
> Thank you, Ahmad Humayun. I am not asking about the idea of MapReduce. What
> I am asking about is the scheduling scheme in Hadoop. I am wondering whether
> the scheduling module is a part of any library
> of Hadoop, or is it a standalone library? Are there any publications
> specifically on scheduling in Hadoop? Could you please share some details
> about scheduling or suggest some literature on Hadoop?
>
> Thanks!
>
> Best wishes,
>
> Huijun Zhu
>
--
Ahmad Humayun
Research Assistant
Computer Science Dpt., LUMS
+92 321 4457315