- Original Message -
From: "M. C. Srivas"
Date: Sunday, November 6, 2011 3:13 am
Subject: Re: FileSystem contract of listStatus
To: common-dev@hadoop.apache.org
> On Thu, Nov 3, 2011 at 4:27 AM, Uma Maheswara Rao G 72686 <
> mahesw...@huawei.com> wrote:
>
>
Yes, I remember this issue being filed by Harsh recently.
GlobStatus sorts the results before returning them. Maybe we can fix listStatus
in the same way.
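Just to illustrate (this is not the actual fix), a caller can already get globStatus-like ordering out of listStatus by sorting on the path; a rough sketch, with made-up class and method names:

import java.io.IOException;
import java.util.Arrays;
import java.util.Comparator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SortedListing {
  // Sort the listStatus() result by path, which matches the ordering
  // globStatus() returns after it sorts its matches.
  public static FileStatus[] listStatusSorted(FileSystem fs, Path dir)
      throws IOException {
    FileStatus[] statuses = fs.listStatus(dir);
    Arrays.sort(statuses, new Comparator<FileStatus>() {
      public int compare(FileStatus a, FileStatus b) {
        return a.getPath().compareTo(b.getPath());
      }
    });
    return statuses;
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    for (FileStatus s : listStatusSorted(fs, new Path("/user"))) {
      System.out.println(s.getPath());
    }
  }
}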
Regards,
Uma
- Original Message -
From: Harsh J
Date: Thursday, November 3, 2011 7:52 am
Subject: Re: FileSystem contract of listStatus
To
- Original Message -
From: gschen
Date: Monday, October 24, 2011 7:54 am
Subject: Does hadoop use epoll to manage the network read/write/errors events?
To: "common-dev@hadoop.apache.org"
> Hi guys,
>
> I am new to Hadoop, and I have a question:
>
> does Hadoop use epoll to manage the network read/write/error events?
- Original Message -
From: Bharath Ravi
Date: Wednesday, October 19, 2011 8:16 am
Subject: Re: Load balancing requests in HDFS
To: common-dev@hadoop.apache.org
> Thanks a lot Steve!
>
> ReplicationTargetChooser seems to address load balancing for initially
> placing/laying out data,
> bu
> So is there a client program to call this?
>
> Can one write their own simple client to call this method from all
> disks on the cluster?
>
> How about a map reduce job to collect from all disks on the cluster?
>
> On 10/15/11 4:51 AM, "Uma Maheswara Rao G 7268
Hi,
If you have some free time, can you please review HDFS-1447?
HDFS-1447 (Make getGenerationStampFromFile() more efficient, so it doesn't
reprocess full directory listing for every block)
Regards,
Uma
/**
 * Return the disk usage of the filesystem, including total capacity,
 * used space, and remaining space.
 */
public DiskStatus getDiskStatus() throws IOException {
  return dfs.getDiskStatus();
}
DistributedFileSystem exposes the above API on the Java side.
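For example, a small client sketch (assuming fs.defaultFS points at an HDFS cluster; the cast below fails for any other FileSystem implementation, and the class name is only for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem.DiskStatus;

public class DiskStatusExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    if (fs instanceof DistributedFileSystem) {
      DiskStatus ds = ((DistributedFileSystem) fs).getDiskStatus();
      // All three values are reported in bytes.
      System.out.println("capacity  = " + ds.getCapacity());
      System.out.println("dfs used  = " + ds.getDfsUsed());
      System.out.println("remaining = " + ds.getRemaining());
    }
  }
}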
Regards,
Uma
- Original Message -
+1 for option 4.
Let the user start the required services from it.
Regards,
Uma
- Original Message -
From: giridharan kesavan
Date: Wednesday, October 12, 2011 11:24 pm
Subject: Re: 0.23 & trunk tars, will we be publishing 1 tar per component or a
single tar? What about source tar?
To: hdfs-
I did not quite get your proposed strategy and implementation.
Note that you can already set the replication level per file. If you set a lower
replication factor, you will obviously gain performance and space, but the risk
will be higher in this case. I think we can manage your requirements using that
replication setting.
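For example, per-file replication can be lowered either from the shell ("hadoop fs -setrep 2 <path>") or from the Java API; a rough sketch, where the path and the factor 2 are only placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Lower the replication factor of an existing file to 2.
    boolean scheduled = fs.setReplication(
        new Path("/user/data/part-00000"), (short) 2);
    System.out.println("replication change scheduled: " + scheduled);
  }
}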
Hello Ruby,
It just logs the call trace of the Configuration object invocations.
It will not throw an exception.
Regards,
Uma
- Original Message -
From: Ruby Stevenson
Date: Thursday, September 29, 2011 6:15 am
Subject: The configuration loading behavior
To: common-dev@hadoop.apache.org
> A
Some more info: I attached a screenshot showing exactly where you need to
edit after opening the link below provided by Steve.
Regards,
Uma
- Original Message -
From: Steve Loughran
Date: Tuesday, September 20, 2011 10:21 pm
Subject: Re: Change Jira email address
To: common-dev@hadoop.
+1, that is a nice point.
Thanks,
Uma
- Original Message -
From: Vinod Kumar Vavilapalli
Date: Friday, September 9, 2011 3:09 pm
Subject: JIRA attachments order
To: common-dev@hadoop.apache.org
> Can someone with JIRA admin privileges see if the default sorting
> order for
> attachments
Hi John,
Most likely the problem is with your Java installation. This problem can occur
if your java link refers to java-gcj.
Please check some related links:
http://jeffchannell.com/Flex-3/gc-warning.html
Regards,
Uma
- Original Message -
From: john smith
Date: Sunday, September 4, 2011 10:22 pm
Subject:
Hi,
The community has started working on NameNode high availability.
You can refer to: https://issues.apache.org/jira/browse/HDFS-1623
Regards,
Uma
- Original Message -
From: George Kousiouris
Date: Thursday, August 25, 2011 3:15 pm
Subject: Any related paper on how to resolve hadoop SPOF issue?
Hi Prasanna,
First of all, thanks for your interest in contributing to Hadoop.
You can refer to the 'How To Contribute' page on the Hadoop wiki:
http://wiki.apache.org/hadoop/HowToContribute
and follow the guidelines mentioned there.
Regards,
Uma
- Original Message -
From:
Hi,
AFAIK, Hive uses Derby for storing its metadata.
I am not an expert in Hive, but I can give some info.
Hive stores the table and HDFS path mapping information in Derby.
I think you can configure other databases as well, but by default Hive uses Derby.
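As a rough sketch of pointing the metastore at another database (the property names are Hive's standard metastore JDBC settings; the MySQL host, schema, and credentials below are placeholders), hive-site.xml would contain something like:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://dbhost:3306/hivemetastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>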
Regards,
Uma
- Original Message -
Thanks Tom!
- Original Message -
From: Thomas Graves
Date: Thursday, July 28, 2011 7:57 pm
Subject: Re: yahoo.net build machines
To: "common-dev@hadoop.apache.org" , Todd Lipcon
> Its being looked into.
>
> Tom
>
>
> On 7/28/11 12:14 AM, "Todd Lipcon" wrote:
>
> > Hi all,
> >
> >
Hi Zinab,
1) First, check whether all the DNs are actually running (see the sketch below
for how to check this from the Java API).
The NN takes some time (the heartbeat expiry period) to detect a DN shutdown,
so the UI may still show such nodes as live during that window.
2) When the NN chooses DNs, it checks whether each node is good or not.
Here it checks multiple conditions
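As mentioned under point 1, you can cross-check what the NN currently believes from the Java API; a sketch (assumes fs.defaultFS points at HDFS, and note that getDataNodeStats() reflects the NN's view, so a just-stopped DN can still appear until the heartbeat expiry elapses):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeReport {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // Same information that "hadoop dfsadmin -report" prints.
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
      System.out.println(dn.getName() + " remaining=" + dn.getRemaining());
    }
  }
}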