p: permission
>
>
> -- Adam
>
> On May 7, 2012, at 10:55 PM, Austin Chungath wrote:
>
> > Thanks Adam,
> >
> > That was very helpful. Your second point solved my problems :-)
> > The hdfs port number was wrong.
> > I didn
tead of hftp ... for
> > more you can refer
> >
> https://groups.google.com/a/cloudera.org/group/cdh-user/browse_thread/thread/d0d99ad9f1554edd
> >
> >
> >
> > if it failed there should be some error
> > On Mon, May 7, 2012 at 4:44 PM, Austin Chungath
>
ava:79)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Any idea why this error is coming?
I am copying one file from 0.20.205 (/docs/index.html) to cdh3u3
(/user/hadoop)
Thanks & Regards,
Austin
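Since the copy above is between an 0.20.205 cluster and a CDH3u3 cluster, the usual shape of that command is a distcp run from the destination side, reading the source over hftp (read-only and version-independent, so it bridges the RPC mismatch between releases). The hostnames and ports below are placeholders, not values from this thread:

```
# Run from the CDH3u3 (destination) cluster.
# "old-nn" / "new-nn" and the ports are assumed examples:
# 50070 is the usual NameNode HTTP port hftp reads from,
# 8020 the usual NameNode RPC port.
hadoop distcp \
  hftp://old-nn:50070/docs/index.html \
  hdfs://new-nn:8020/user/hadoop/
```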
On Mon, May 7, 2012 at 3:57 PM, Austin Chungath wrote:
> Thanks,
>
> So I decide
ur existing cluster...
> >>> Remove your old log files, temp files on HDFS anything you don't need.
> >>> This should give you some more space.
> >>> Start copying some of the directories/files to the new cluster.
> >>> As you gain space, dec
and Cloudera... You really want to accept my
> upcoming proposal talk... ;-)
>
>
> Sent from a remote device. Please excuse any typos...
>
> Mike Segel
>
> On May 3, 2012, at 5:25 AM, Austin Chungath wrote:
>
> > Yes. This was first posted on the cloudera mailing list.
>
> On Thu, May 3, 2012 at 2:51 AM, Austin Chungath
> wrote:
>
> > There is only one cluster. I am not copying between clusters.
> >
> > Say I have a cluster running apache 0.20.205 with 10 TB storage capacity
> > and has about 8 TB of data.
>
>
> On Thu, May 3, 2012 at 12:51 PM, Austin Chungath
> wrote:
>
> > Thanks for the suggestions,
> > My concerns are that I can't actually copyToLocal from the dfs because
> the
> > data is huge.
> >
> > Say if my hadoop was 0.20 and I am upgrading t
d a copyFromLocal
>
> On Thu, May 3, 2012 at 11:41 AM, Austin Chungath
> wrote:
>
> > Hi,
> > I am migrating from Apache hadoop 0.20.205 to CDH3u3.
> > I don't want to lose the data that is in the HDFS of Apache hadoop
> > 0.20.205.
> > How do I mi
Hi,
I am migrating from Apache hadoop 0.20.205 to CDH3u3.
I don't want to lose the data that is in the HDFS of Apache hadoop
0.20.205.
How do I migrate to CDH3u3 but keep the data that I have on 0.20.205.
What is the best practice/ techniques to do this?
Thanks & Regards,
Austin
run:
> hadoop fs -mkdir /user/<username>
> hadoop fs -chown <username>:<groupname> /user/<username>
>
> 3. For default file/dir permissions to be 700, tweak the dfs.umaskmode
> property.
>
> Much of this is also documented at the permissions guide:
> http://hadoop.apache.org/common/docs/r0.20.2/hdfs_permission
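A sketch combining the OS-account step from this thread with the HDFS steps quoted above, for the hadoop1 user in group hadoop (run the HDFS commands as the HDFS superuser; names are illustrative):

```
# Create the OS account, then its HDFS home directory, owned by the new user.
sudo adduser --ingroup hadoop hadoop1
hadoop fs -mkdir /user/hadoop1
hadoop fs -chown hadoop1:hadoop /user/hadoop1
```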
I have a 2-node cluster running hadoop 0.20.205. There is only one user,
username: hadoop, of group: hadoop.
What is the easiest way to add one more user say hadoop1 with DFS
permissions set as true?
I did the following to create a user in the master node.
sudo adduser --ingroup hadoop hadoop1
My
Hi,
I was looking at the following link and reading about hdfs permissions (my
hadoop version is 0.20.205)
http://hadoop.apache.org/common/docs/r0.20.205.0/hdfs_permissions_guide.html
I find this dfs.umask property particularly useful, but I can't find this
property in my hdfs-default.xml doc
I wa
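For what it's worth, that property masks default permissions the same way a POSIX umask does: the created mode is the requested mode with the umask bits cleared. A quick check of the arithmetic (022 and 077 are just example umask values, not settings from this thread):

```shell
# Created mode = requested mode & ~umask (same rule as a POSIX umask).
printf '%o\n' $(( 0777 & ~0022 ))   # directories under umask 022 -> 755
printf '%o\n' $(( 0777 & ~0077 ))   # directories under umask 077 -> 700
```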
15 PM, Austin Chungath wrote:
> I tried the patch MAPREDUCE-2457 but it didn't work for my hadoop 0.20.205.
> Are you sure this patch will work for 0.20.205?
> According to the description it says that the patch works for 0.21 and
> 0.22 and it says that 0.20 supports group.name withou
fix presented in https://issues.apache.org/jira/browse/MAPREDUCE-2457
> to have group.name support.
>
> On Thu, Mar 1, 2012 at 6:42 PM, Austin Chungath
> wrote:
> > I am running fair scheduler on hadoop 0.20.205.0
> >
> > http://hadoop.apache.org/common/docs/r0.20.205.0/fa
job to a specific pool from your
> allocation.xml file you can define it as follows:
>
> Configuration conf3 = conf;
> conf3.set("pool.name", "pool3"); // conf.set(property name, value)
>
> Let me know if it works..
>
>
> On 29 February 2012 14:18, Austin Chungath wrote:
>
> > How can I set the fair scheduler such that all jobs submitted from
I am running fair scheduler on hadoop 0.20.205.0
http://hadoop.apache.org/common/docs/r0.20.205.0/fair_scheduler.html
The above page talks about the following property
*mapred.fairscheduler.poolnameproperty*
which I can set to *group.name*
The default is user.name and when a user submits a jo
How can I set the fair scheduler such that all jobs submitted from a
particular user group go to a pool with the group name?
I have setup fair scheduler and I have two users: A and B (belonging to the
user group hadoop)
When these users submit hadoop jobs, the jobs from A go to a pool named A
an
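The property discussed in this thread goes in mapred-site.xml. A sketch of pooling jobs by the submitter's Unix group (placement and value as I understand them; note the thread's caveat that group.name support on 0.20.205 may require the MAPREDUCE-2457 patch):

```xml
<!-- mapred-site.xml: pool jobs by the submitting user's group
     instead of the default user.name. -->
<property>
  <name>mapred.fairscheduler.poolnameproperty</name>
  <value>group.name</value>
</property>
```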
mizing+How+Lines+are+Split+into+Key%2FValue+Pairs
>
> Read this link, your options are wrong below.
>
>
>
> On Tue, Feb 28, 2012 at 1:13 PM, Austin Chungath
> wrote:
>
> > When I am using more than one reducer in hadoop streaming where I am
> using
> > my
When I am using more than one reducer in hadoop streaming where I am using
my custom separator rather than the tab, it looks like the hadoop shuffling
process is not happening as it should.
This is the reducer output when I am using '\t' to separate my key value
pair that is output from the mapper
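The usual cause of the symptom described above is that streaming assumes a tab between key and value by default; with a custom separator the framework treats the whole line as the key, so partitioning across multiple reducers looks wrong. The stream.* properties below are the standard knobs for this; the separator, paths, and script names are placeholders:

```
# Tell streaming how map output splits into key/value before the shuffle.
# Here '.' is the assumed custom separator and the first field is the key.
# Generic -D options must come before the streaming-specific options.
hadoop jar "$HADOOP_HOME"/contrib/streaming/hadoop-streaming-*.jar \
  -D stream.map.output.field.separator=. \
  -D stream.num.map.output.key.fields=1 \
  -input /user/hadoop/in \
  -output /user/hadoop/out \
  -mapper my_mapper.py \
  -reducer my_reducer.py
```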