Re: How to change default ports of datanodes in a cluster

2013-08-31 Thread Harsh J
default port for datanode is 50075 i am able to change namenode default port > by changing > > dfs.namenode.http-address.ns1 & dfs.namenode.http-address.ns2 in my > hdfs-site.xml of my 2 namenodes > > how to change default port address of my multiple datanodes > > -- Harsh J

Re: How to change default ports of datanodes in a cluster

2013-08-31 Thread Harsh J
ple ip address under tag > dfs.datanode.http.address as i have 4 data nodes > > > On Sat, Aug 31, 2013 at 9:44 PM, Harsh J wrote: >> >> Looking at the hdfs-default.xml should help with such questions: >> >> http://hadoop.apache.org/docs/current/hadoop-project-dist
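
The DataNode listen addresses are per-daemon settings in each DataNode's own hdfs-site.xml, alongside the NameNode keys mentioned above. A minimal sketch of the relevant keys, assuming Hadoop 2.x names; the new port numbers are made up, and the Configuration API stands in here only to spell out the key/value pairs that would otherwise go into the XML file:

    import org.apache.hadoop.conf.Configuration;

    public class DataNodePorts {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Each line corresponds to one <property> entry in hdfs-site.xml on a DataNode.
        conf.set("dfs.datanode.http.address", "0.0.0.0:50175"); // web UI, default 0.0.0.0:50075
        conf.set("dfs.datanode.address", "0.0.0.0:50110");      // data transfer, default 0.0.0.0:50010
        conf.set("dfs.datanode.ipc.address", "0.0.0.0:50120");  // IPC, default 0.0.0.0:50020
        System.out.println(conf.get("dfs.datanode.http.address"));
      }
    }

Unlike the per-nameservice NameNode keys, these DataNode keys carry no suffix; each DataNode simply reads its own local hdfs-site.xml.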

Re: custom writablecomparable with complex fields

2013-08-31 Thread Harsh J
t; what I am not sure about is how to write > > readFields and write methods for this object. Any help would be appreciated. > > Thanks > Adeel -- Harsh J

Re: custom writablecomparable with complex fields

2013-09-01 Thread Harsh J
Aug 31, 2013 4:52 PM, "Harsh J" wrote: >> >> The idea behind write(…) and readFields(…) is simply that of ordering. >> You need to write your custom objects (i.e. a representation of them) >> in order, and read them back in the same order. >> >> An example w
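
As a rough illustration of that ordering rule, a hypothetical composite key with a String and an int field (not the original poster's class) could serialize both fields in write() and read them back in exactly the same order in readFields():

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    public class PairKey implements WritableComparable<PairKey> {
      private String name;
      private int count;

      @Override
      public void write(DataOutput out) throws IOException {
        out.writeUTF(name);   // first field written first...
        out.writeInt(count);  // ...second field second
      }

      @Override
      public void readFields(DataInput in) throws IOException {
        name = in.readUTF();  // read back in the same order they were written
        count = in.readInt();
      }

      @Override
      public int compareTo(PairKey other) {
        int c = name.compareTo(other.name);
        return (c != 0) ? c : Integer.compare(count, other.count);
      }
    }

Nested complex fields follow the same pattern: delegate to each field's own write()/readFields() in a fixed order.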

Re: forbid use of hadoop.job.ugi to impersonate user without Kerberos

2013-09-03 Thread Harsh J
onsidering using hadoop for a share cluster. As start to setup quickly > the system I'm was planning to differ Kerberos use. > Is there other way than using Kerberos to forbid a user to use > hadoop.job.ugi parameter to impersonate another user? > > thanks > Mt -- Harsh J

Re: yarn-site.xml and aux-services

2013-09-03 Thread Harsh J
are some details missing, like how the lifetime of the temporary files > is controlled to extend beyond the mapper-like task lifetime but still be > cleaned up on AM exit, and how the reducer-like tasks are informed of which > nodes have data. > > John > > > -Original Message

Re: Hadoop Yarn

2013-09-03 Thread Harsh J
ommunication is strictly prohibited. If you have received this > communication in error, please contact the sender immediately and delete it > from your system. Thank You. -- Harsh J

Re: jaspersoft ireport -reg

2013-09-04 Thread Harsh J
ake report on > it . > > Thanks & Regards > -- > N Venkata Rami Reddy > -- Harsh J

Re: manage bandwidth resource in YARN

2013-09-04 Thread Harsh J
der-review patch? > > I searched but returned nothing, > > Can anyone give some hints? > > Best, > > Nan > -- Harsh J

Re: How to update the timestamp of a file in HDFS

2013-09-04 Thread Harsh J
; Hi, > > Can you please help on to update the date & timestamp of a file in HDFS. > > regards, > Rams -- Harsh J

Re: How to update the timestamp of a file in HDFS

2013-09-05 Thread Harsh J
e > > > On Thu, Sep 5, 2013 at 12:14 PM, murali adireddy > wrote: >> >> Hi , >> >> Try this "touchz" hadoop command. >> >> hadoop fs -touchz filename >> >> >> Thanks and Regards, >> Adi Reddy Murali >> >>
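
Note that "hadoop fs -touchz" creates a zero-length file (and errors out if a non-empty file already exists at that path), so for changing the timestamp of an existing file the FileSystem API's setTimes() is the more direct route. A small sketch; the path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TouchExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/me/data.txt");  // placeholder path
        // setTimes(path, modificationTime, accessTime); -1 leaves that field unchanged
        fs.setTimes(file, System.currentTimeMillis(), -1);
      }
    }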

Re: Disc not equally utilized in hdfs data nodes

2013-09-05 Thread Harsh J
he > data directories. > > We having 4x1 TB drives, but huge data storing in single disc only at all > the nodes. How to balance for utilize all the drives. > > This causes the hdfs storage size becomes high very soon even though we have > available space. > > Thanks, > Viswa.J -- Harsh J

Re: yarn-site.xml and aux-services

2013-09-05 Thread Harsh J
quest like that? > > Thanks > John > > -Original Message- > From: Harsh J [mailto:ha...@cloudera.com] > Sent: Wednesday, September 04, 2013 12:05 AM > To: > Subject: Re: yarn-site.xml and aux-services > >> Thanks for the clarification. I would find it very

Re: SNN not writing data fs.checkpoint.dir location

2013-09-05 Thread Harsh J
: >> > Hi, >> > >> > I have configured fs.checkpoint.dir in hdfs-site.xml, but still it was >> > writing in /tmp location. Please give me some solution for checkpointing >> > on >> > respective location. >> > >> > -- >> > *Regards* >> > * >> > * >> > *Munna* >> > > > > > > -- > Regards > > Munna -- Harsh J

Re: Disc not equally utilized in hdfs data nodes

2013-09-05 Thread Harsh J
ices. > > Thanks, > V > > On Sep 5, 2013 10:53 PM, "Harsh J" wrote: >> >> Please share your hdfs-site.xml. HDFS needs to be configured to use >> all 4 disk mounts - it does not auto-discover and use all drives >> today. >> >> On Thu, Sep 5
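
A sketch of what that configuration usually ends up looking like: dfs.datanode.data.dir (dfs.data.dir on 1.x) takes a comma-separated list with one directory per mount, set in each DataNode's hdfs-site.xml. The mount paths below are made up, and the Configuration API again just stands in for the XML entry:

    import org.apache.hadoop.conf.Configuration;

    public class DataDirsExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // One directory per physical drive; the DataNode spreads new block
        // writes across all of them (round-robin by default).
        conf.set("dfs.datanode.data.dir",
            "/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn,/data/4/dfs/dn");
        System.out.println(conf.get("dfs.datanode.data.dir"));
      }
    }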

Re: How to support the (HDFS) FileSystem API of various Hadoop Distributions?

2013-09-05 Thread Harsh J
it enough to pick different vanilla Versions (for IPC 5, 7)? > > Best Regards, > Christian. > -- Harsh J

Re: How to support the (HDFS) FileSystem API of various Hadoop Distributions?

2013-09-05 Thread Harsh J
Oh and btw, nice utility! :) On Fri, Sep 6, 2013 at 7:50 AM, Harsh J wrote: > Hello, > > There are a few additions to the FileSystem that may bite you across > versions, but if you pick an old stable version such as Apache Hadoop > 0.20.2, and stick to only its offered APIs, it wo

Re: How to speed up Hadoop?

2013-09-05 Thread Harsh J
oking for ways to configure Hadoop inorder to speed up data > processing. Assuming all my nodes are highly fault tolerant, will making > data replication factor 1 speed up the processing? Are there some way to > disable failure monitoring done by Hadoop? > > Thank you for your time. > > -Sundeep -- Harsh J

Re: How to support the (HDFS) FileSystem API of various Hadoop Distributions?

2013-09-07 Thread Harsh J
ad the correct implementation for > that. > > But how to check the IPC version? > > Best Regards, > Christian. > > > P.S.: Thanks, that motivates me to continue :) > > > 2013/9/6 Harsh J >> >> Oh and btw, nice utility! :) >> >> On Fri,

Re: Hadoop is hardcoded the port into 8020

2013-09-07 Thread Harsh J
I changed the port into > 8020 hadoop is working fine. So I would like to know whether the port 8020 is > hard coded inside the Hadoop code base > > Thanks > Raju Nair > -- Harsh J

Re: Hadoop on IPv6

2013-09-09 Thread Harsh J
the individual or entity >> to which it is addressed and may contain information that is confidential, >> privileged and exempt from disclosure under applicable law. If the reader of >> this message is not the intended recipient, you are hereby notified that any >> printing, copying, dissemination, distribution, disclosure or forwarding of >> this communication is strictly prohibited. If you have received this >> communication in error, please contact the sender immediately and delete it >> from your system. Thank You. > > -- Harsh J

Re: Questions about BlockPlacementPolicy

2013-09-09 Thread Harsh J
it > possible to set different block have different replication number? Like > different files have different copies? I make some try but it seems cannot > work > > > Thank you very much > > > > > > Yifan -- Harsh J
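
On the per-file replication part of the question: replication in HDFS is a per-file attribute, so different files can indeed carry different replica counts. A minimal sketch; the path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Raise just this one file to 5 replicas; other files keep their own factor.
        fs.setReplication(new Path("/data/important.dat"), (short) 5);
      }
    }

The shell equivalent is hadoop fs -setrep 5 /data/important.dat.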

Re: modify hdfs block size

2013-09-09 Thread Harsh J
k size is 32MB, I know the default is 64MB > thanks > > -- > > In the Hadoop world, I am just a novice, explore the entire Hadoop > ecosystem, I hope one day I can contribute their own code > > YanBit > yankunhad...@gmail.com > -- Harsh J
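
Block size is likewise chosen per file at create time rather than being fixed cluster-wide; a sketch showing both the client-side default key and a per-file override (dfs.blocksize is the 2.x key name, dfs.block.size on 1.x; the path is a placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setLong("dfs.blocksize", 32L * 1024 * 1024);  // default for files this client creates
        FileSystem fs = FileSystem.get(conf);
        // Or override per file: create(path, overwrite, bufferSize, replication, blockSize)
        FSDataOutputStream out = fs.create(new Path("/tmp/small-blocks.dat"),
            true, 4096, (short) 3, 32L * 1024 * 1024);
        out.writeUTF("hello");
        out.close();
      }
    }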

Re: whether dfs.domain.socket.path is supported in Hadoop 1.1?

2013-09-10 Thread Harsh J
nd the configuration parameter of this "dfs.domain.socket.path" and I > wonder whether this parameter setting only gets used by Hadoop 2.* release? > > Could you provide explanation on this about the Hadoop version support? > > Regards, > > Jun > > -- Harsh J

Re: hadoop web UI security

2013-09-11 Thread Harsh J
> > > > hadoop.http.authentication.signature.secret.file > /home/hadoop/hadoop-0.23.3/conf/security/username > > > > hadoop.http.authentication.cookie.domain > > > > > > hadoop.http.authentication.simple.anonymous.allowed > false > -- Harsh J

Re: Status of mapred*

2013-09-11 Thread Harsh J
> Jay Vyas > http://jayunit100.blogspot.com -- Harsh J

Re: Real Multiple Outputs for Hadoop -- is this implementation correct?

2013-09-13 Thread Harsh J
ockito > > https://github.com/paulhoule/infovore/wiki/Unit-Testing-Hadoop-Mappers-and-Reducers > > Anyway, I'd appreciate anybody looking at this code and trying to poke > holes in it. It runs OK on my tiny dev cluster in 1.0.4, 1.1.2 and in AMZN > EMR but I am wondering if I missed something. > > -- Harsh J

Re: fsck -move is copying not moving

2013-09-13 Thread Harsh J
/my/bad/filepath. > > Is that expected? I thought fsck should Move not Copy the corrupt files. > > ... I then ran fsck /my/bad/filepath -delete and it deleted the bad file so > it's all fine, but that seems unnecessary. > > I'm on CDH3u5. > > Thanks -- Harsh J

Re: Yarn log directory perms

2013-09-13 Thread Harsh J
rmissions for user log dir. >* $logdir/$user/$appId */ > private static final short LOGDIR_PERM = (short)0710; > > > Any reasons for not having this be a configurable property? > > Thanks, > Prashant -- Harsh J

Re: FATAL org.apache.hadoop.mapred.JettyBugMonitor question

2013-09-14 Thread Harsh J
is detected, the TaskTracker will shut itself down. This > feature can be disabled by setting > mapred.tasktracker.jetty.cpu.check.enabled to false. > > How do you solve the problem usually ? Is there a simple method to deal with > the problem ? > Thanks. -- Harsh J

Re: HDFS performance with an without replication

2013-09-15 Thread Harsh J
sue. > > > > What is the difference in write performance using replication=1 vs 3? For > reading I’d expect the performance to be roughly requivalent. > > > > john -- Harsh J

Re: Yarn and HDFS federation

2013-09-18 Thread Harsh J
nodes. In this case > how this Yarn works? > Where will be the Resource Manger if i have multiple name nodes? (Normally > in Master Node I guess) > Can we execute yarn applications in hadoop cluster without having this hdfs > federation? > > Please clarify my doubt. > > > > Thanks, > Manickam p -- Harsh J

Re: Issue: Max block location exceeded for split error when running hive

2013-09-18 Thread Harsh J
ing to dig up additional documentation on this since the default > seems to be 10, not sure how that limit was set. > Additionally what is the recommended value and what factors does it depend > on? > > We are running YARN, the actual query is Hive on CDH 4.3, with Hive version > 0.10 > > Any pointers in this direction will be helpful. > > Regards, > md -- Harsh J

Re: Issue: Max block location exceeded for split error when running hive

2013-09-19 Thread Harsh J
involved it fails on a larger split size. > > > On Wed, Sep 18, 2013 at 6:34 PM, Harsh J wrote: >> >> Do your input files carry a replication factor of 10+? That could be >> one cause behind this. >> >> On Thu, Sep 19, 2013 at 6:20 AM, Murtaza Doctor >&

Re: Task status query

2013-09-19 Thread Harsh J
complete > > 4 tasks are running and they are (4%, 10%, 50%, 70%) complete > > But, given that YARN tasks are simply executables, how can the AM even get > at this information? Can the AM get access to stdout/stderr? > > Thanks > > John > > -- Harsh J

Re: Task status query

2013-09-20 Thread Harsh J
John Lilley wrote: > Thanks Harsh. Is this protocol something that is available to all AMs/tasks? > Or is it up to each AM/task pair to develop their own protocol? > john > > -Original Message- > From: Harsh J [mailto:ha...@cloudera.com] > Sent: Thursday, Septe

Re: Yarn and Hdfs federation

2013-09-24 Thread Harsh J
des and 3 data nodes. Now > i want to have separate RM for yarn. Can i install like that? Then will my > name node > work like NM? How it will work? > > Can i install hive in any one of the name node in federated cluster? > > Pls help me to understand this. > > > Thanks, > Manickam P -- Harsh J

Re: Lz4Codec

2013-09-24 Thread Harsh J
p 1.2.1. > Where is org.apache.hadoop.io.compress.Lz4Codec? How can I use it? > > Regards > > Tomas -- Harsh J

Re: 2 Map tasks running for a small input file

2013-09-26 Thread Harsh J
0 2 >> SLOTS_MILLIS_REDUCES 0 0 9,199 >> *** >> My question why r there 2 launched map tasks when i have only a small >> file. >> Per my understanding it is only 1 block. >> and should be only 1 split. >> Then for each line a map computation should occur >> but it shows 2 map tasks. >> Please let me know. >> Thanks >> Sai >> > > -- Harsh J

Re: Unable to create a directory in federated hdfs.

2013-09-26 Thread Harsh J
.99.68:9001 > > > dfs.namenode.http-address.ns1 > 10.108.99.68:50070 > > > dfs.namenode.secondary.http-address.ns1 > 10.108.99.69:50090 > > > dfs.namenode.rpc-address.ns2 > 10.108.99.69:9001 > > > dfs.namenode.http-address.ns2 > 10.108.99.69:50070 > > > dfs.namenode.secondary.http-address.ns2 > 10.108.99.68:50090 > > > Thanks, > Manickam P > -- Harsh J

Re: Is there any way to partially process HDFS edits?

2013-09-26 Thread Harsh J
>> again. >> >> Is there any way that the edits file can be partially processed to avoid >> having to re-process the same edits over and over until I can allocate >> enough memory for it to be done in one shot? >> >> How long should it take (hours? days?) to process an edits file of that >> size? >> >> Any help is appreciated! >> >> --Tom >> >> > -- Harsh J

Re: Is there any way to partially process HDFS edits?

2013-09-26 Thread Harsh J
;m >>> trying to figure out if I'm hitting some strange bug. >>> >>> When the edits were originally made (over the course of 6 weeks), the >>> namenode only had 512MB and was able to contain the filesystem completely in >>> memory. I don't under

Re: Calling the JobTracker from Reducer throws InvalidCredentials GSSException

2013-09-28 Thread Harsh J
> > The call to getCounter() API throws GSSException (No valid credentials > provided - Failed to find any kerberos tgt). > > I launched this job using hadoop jar command. > > Any help would be much appreciated. > > Thanks > Manish > -- Harsh J

Re: Calling the JobTracker from Reducer throws InvalidCredentials GSSException

2013-09-29 Thread Harsh J
ested this > logic in a single node cloudera VM which did not have kerberos installed. > > Thanks > Manish > > > On Sat, Sep 28, 2013 at 11:06 AM, Harsh J wrote: >> >> You'll need to reuse the security tokens of the current job to >> communicate witho

Re: Accessing only particular folder using hadoop streaming

2013-10-02 Thread Harsh J
Lshard2---d1_1 > | |_d2_2 > Lshard3---d1_1 > | |_d2_3 > Lshard4---d1_1 >|_d2_4 > > > Now, I want to search something in d1 (and excluding all the d2's) in it. > So how do i do that in python? > Thanks > -- Harsh J

Re: using environment variables in XML conf files

2013-10-09 Thread Harsh J
adoop.tmp.dir in > conf/core-site.xml: > > > > hadoop.tmp.dir > /home/aim/tmp/hadoop-${user.name} > > > > Is it possible to replace the "/home/aim" with a substitutable version > $HOME? On a whim I tried ${env.home} but that didn't work... > > -- > andy > > -- Harsh J
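
One thing that does work, assuming the goal is just the submitter's home directory: Configuration expands Java system properties inside values, and the JVM exposes the home directory as user.home, so ${user.home} behaves much like the ${user.name} already used in that snippet. A sketch:

    import org.apache.hadoop.conf.Configuration;

    public class VarExpansionExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Same idea as the core-site.xml entry; ${user.home} and ${user.name}
        // are JVM system properties, expanded when the value is read.
        conf.set("hadoop.tmp.dir", "${user.home}/tmp/hadoop-${user.name}");
        System.out.println(conf.get("hadoop.tmp.dir"));  // e.g. /home/aim/tmp/hadoop-aim
      }
    }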

Re: is jdk required to run hadoop or jre alone is sufficient

2013-10-16 Thread Harsh J
or > JDK is required ? > > we are planning to install latest stable version of hadoop > > Thanks, > > Oc.tsdb -- Harsh J

Re: is jdk required to run hadoop or jre alone is sufficient

2013-10-17 Thread Harsh J
No, you do not need JDK to just run Apache Hadoop. On Thu, Oct 17, 2013 at 2:16 PM, oc tsdb wrote: > In general to run just hadoop without using any other tools like sqoop, do > we need jdk ? > > > On Wed, Oct 16, 2013 at 1:08 PM, Harsh J wrote: >> >> You will nee

Re: CDH4.4 and HBASE-8912 issue

2013-10-17 Thread Harsh J
Suggested solution was to update hbase to version 0.94.13, which is absent > in cloudera distribution. > > Is it possible to run pure hbase over cloudera hadoop? > > Or how can i find if this bug is present in previous versions of cdh? > > -- > Best regards, > > Boris Emelyanov. -- Harsh J

Re: simple word count program remains un assigned...

2013-10-19 Thread Harsh J
From source with checksum ac7e170aa709b3ace13dc5f775487180 >> > This command was run using >> > /usr/lib/hadoop/hadoop-common-2.0.0-cdh4.4.0.jar >> > >> > >> > and the outcome of jps (from root) >> > >> > - >> > [root@localhost ~]# jps >> > 2202 TaskTracker >> > 4161 Bootstrap >> > 3134 DataNode >> > 3520 Application >> > 3262 NameNode >> > 1879 ThriftServer >> > 1740 Main >> > 3603 RunJar >> > 1606 HMaster >> > 2078 JobTracker >> > 16277 Jps >> > 3624 RunJar >> > 4053 RunJar >> > 4189 Sqoop >> > 3582 Bootstrap >> > 3024 JobHistoryServer >> > 3379 SecondaryNameNode >> > 4732 ResourceManager >> > >> > -- >> > -- >> > Thanks & Regards >> > Gunjan Mishra >> > 732-200-5839(H) >> > 917-216-9739(C) >> > > > > -- > Thanks & Regards > Gunjan Mishra > 732-200-5839(H) > 917-216-9739(C) -- Harsh J

Re: temporary file locations for YARN applications

2013-10-20 Thread Harsh J
ess to > these locations? Is this something that must be configured outside of YARN? > > Thanks, > > John > > > > -- Harsh J

Re: temporary file locations for YARN applications

2013-10-20 Thread Harsh J
do. > > Thanks > John > > > -Original Message- > From: Harsh J [mailto:ha...@cloudera.com] > Sent: Sunday, October 20, 2013 10:49 AM > To: > Subject: Re: temporary file locations for YARN applications > > Every container gets its own local work directory (Yo

Re: temporary file locations for YARN applications

2013-10-21 Thread Harsh J
or: LocalDirAllocator(String > contextCfgItemName) and a note mentioning that an example of this item is > "mapred.local.dir". Is that the correct usage, or is there something > YARN-generic? > > Cheers, > john > > -Original Message- > From: Harsh J [

Re: if i configed NN HA,should i still need start backup node?

2013-10-29 Thread Harsh J
No, you do not require the BackupNode in a HA NN configuration. On Tue, Oct 29, 2013 at 9:31 AM, ch huang wrote: > ATT -- Harsh J

Re: Combining MultithreadedMapper threadpool size & map.tasks.maximum

2013-10-29 Thread Harsh J
The setup and cleanup threads are run only once per task attempt. On Mon, Oct 28, 2013 at 11:02 PM, Shashank Gupta wrote: > A "little" late to the party but I have a concluding question :- > > What about Setup and Cleanup functions? Will each thread invoke those > functions too? > -- Harsh J

Re: Why is my output directory owned by yarn?

2013-10-31 Thread Harsh J
29 14:58:53,062 WARN > org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: > USER=UnknownUser IP=10.128.0.17 OPERATION=Stop Container Request > TARGET=ContainerManagerImpl RESULT=FAILURE DESCRIPTION=Trying to stop unknown > container! APPID=application_1383020136544_0005 > CONTAINERID=container_1383020136544_0005_01_01 > > > > > Thanks, > John > -- Harsh J

Re: can flume use hadoop HA?

2013-10-31 Thread Harsh J
Any and all HDFS clients can use HA and flume is no different. On Wed, Oct 30, 2013 at 7:31 AM, ch huang wrote: > ATT -- Harsh J

Re: Why is my output directory owned by yarn?

2013-11-01 Thread Harsh J
hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1383247324024_0005_01_01.pid > exec setsid /bin/bash > "/tmp/hadoop-yarn/cache/yarn/nm-local-dir/usercache/jdoe/appcache/application_1383247324024_0005/container_1383247324024_0005_01_01/launch_container.sh"

Re: Path exception when running from inside IDE.

2013-11-02 Thread Harsh J
Your job configuration isn't picking up or passing the right default filesystem (fs.default.name or fs.defaultFS) before submitting the job. As a result, the non-configured default of local filesystem is getting picked up for paths you intended to look for on HDFS. On Friday, November 1, 2013, Oma
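
A sketch of what fixing that typically looks like when submitting from an IDE, assuming the Hadoop 2 mapreduce API; the host, port and paths are placeholders, and putting the cluster's *-site.xml files on the classpath achieves the same thing without hard-coding:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SubmitFromIde {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Without this (or the cluster config files on the classpath), the paths
        // below resolve against the local filesystem instead of HDFS.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        Job job = Job.getInstance(conf, "example");
        job.setJarByClass(SubmitFromIde.class);
        FileInputFormat.addInputPath(job, new Path("/user/me/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/me/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }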

Re: access to hadoop cluster to post tasks remotely

2013-11-06 Thread Harsh J
; > > > The same jar sent from inside the cluster runs fine. > > > > The network where cluster lives is protected by firewall with only NameNode > and JobTracker ports opened externally. > > iptables on all nodes are off. > > > > I have no ideas about reasons of these messages in the log. To the moment I > were sure that entry point to hadoop cluster contains just NameNode and > JobTracker ports. > > Both are open. > > > > Please help! > > > > > > -- Harsh J

Re: EclipsePlugin source in branch-2.x.x

2013-11-06 Thread Harsh J
che.org/hadoop/EclipsePlugIn > > Javi Roman -- Harsh J

Re: Why SSH

2013-11-10 Thread Harsh J
odes what protocol hadoop > uses? SSH or http or https -- Harsh J

Re: Hadoop on multiple user mode?

2013-11-11 Thread Harsh J
submit job as a guest user, > the map process is executed as admin user. I print user home in my main code > as well as inside map process. Is there a way span map process a job > submitting user? -- Harsh J

Re: Hadoop on multiple user mode?

2013-11-12 Thread Harsh J
Hi, In MR2 over YARN, you'll need to configure the LinuxContainerExecutor instead. On Tue, Nov 12, 2013 at 2:11 PM, rab ra wrote: > Thanks for the response. However I could not find LinuxTaskController in > hadoop 2.2.0. > On 12 Nov 2013 03:10, "Harsh J" wrote: >

Re: New data on unfinalized hdfs upgrade

2013-11-15 Thread Harsh J
erted back to 1.0.4 safely? > > > > -- > View this message in context: > http://hadoop-common.472056.n3.nabble.com/New-data-on-unfinalized-hdfs-upgrade-tp4029496.html > Sent from the Users mailing list archive at Nabble.com. -- Harsh J

Re: How can I see the history log of non-mapreduce job in yarn

2013-11-26 Thread Harsh J
nly > help me to see the history log of mapreduce jobs. I still could not see the > logs of non-mapreduce job. How can I see the history log of non-mapreduce > job ? -- Harsh J

Re: How can I remote debug application master

2013-11-26 Thread Harsh J
e to remote debug the application master ? Thanks -- Harsh J

Re: Start and Stop Namenode

2013-11-27 Thread Harsh J
ID 3132 is DataNode, is this correct as I expected something > like "3132 NameNode"? does it mean that the following two commands are doing > the same thing in 2.2.0? > > ./sbin/hadoop-daemon.sh --script hdfs start namenode > ./sbin/hadoop-daemon.sh --script hdfs start datanode > > > regards > -- Harsh J

Re: passing configuration parameter to comparator

2013-12-02 Thread Harsh J
o access job config from anywhere? > > > > Thanks! > > Sergey. -- Harsh J
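
One common way to do this (offered here as a sketch, not necessarily what the rest of the thread settled on): have the comparator implement Configurable. Comparators are created through ReflectionUtils, which hands the job configuration to setConf() on Configurable objects. The property name below is made up:

    import org.apache.hadoop.conf.Configurable;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.io.WritableComparator;

    public class ConfigurableComparator extends WritableComparator implements Configurable {
      private Configuration conf;
      private boolean reverse;  // toy parameter pulled from the job config

      public ConfigurableComparator() {
        super(Text.class, true);
      }

      @Override
      public void setConf(Configuration conf) {
        this.conf = conf;
        this.reverse = conf.getBoolean("example.sort.reverse", false);  // made-up key
      }

      @Override
      public Configuration getConf() {
        return conf;
      }

      @SuppressWarnings("rawtypes")
      @Override
      public int compare(WritableComparable a, WritableComparable b) {
        int c = super.compare(a, b);
        return reverse ? -c : c;
      }
    }

It would then be registered with job.setSortComparatorClass(ConfigurableComparator.class), with example.sort.reverse set in the job configuration before submission.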

Re: about hadoop-2.2.0 "mapred.child.java.opts"

2013-12-03 Thread Harsh J
. It is advised that any unauthorized use of confidential > information of Winbond is strictly prohibited; and any information in this > email irrelevant to the official business of Winbond shall be deemed as > neither given nor endorsed by Winbond. -- Harsh J

Re: about hadoop-2.2.0 "mapred.child.java.opts"

2013-12-04 Thread Harsh J
Actually, it's the other way around (thanks Sandy for catching this error in my post). The presence of mapreduce.map|reduce.java.opts overrides mapred.child.java.opts, not the other way round as I had stated earlier (below). On Wed, Dec 4, 2013 at 1:28 PM, Harsh J wrote: > Yes but the
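
In key/value terms, the precedence described above looks like this (MR2 key names; the heap sizes are only illustrative):

    import org.apache.hadoop.conf.Configuration;

    public class JavaOptsExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Old, single knob covering both map and reduce task JVMs:
        conf.set("mapred.child.java.opts", "-Xmx512m");
        // Newer per-task-type knobs; when present, these win for their task type:
        conf.set("mapreduce.map.java.opts", "-Xmx1024m");
        conf.set("mapreduce.reduce.java.opts", "-Xmx2048m");
        // A map task would therefore launch with -Xmx1024m, not -Xmx512m.
        System.out.println(conf.get("mapreduce.map.java.opts"));
      }
    }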

Re: Check compression codec of an HDFS file

2013-12-05 Thread Harsh J
ith? > > We use both Gzip and Snappy compression so I want a way to determine how a > specific file is compressed. > > The closest I found is the getCodec but that relies on the file name suffix > ... which don't exist since Reducers typically don't add a suffix to the > filenames they create. > > Thanks -- Harsh J
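
One practical workaround when the suffix is missing: gzip output always begins with the magic bytes 0x1F 0x8B, while Hadoop's raw Snappy output carries no such marker, so sniffing the first bytes can at least confirm or rule out gzip. A sketch; the path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CodecSniffer {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataInputStream in = fs.open(new Path("/user/me/part-r-00000"));  // placeholder
        int b0 = in.read();
        int b1 = in.read();
        in.close();
        if (b0 == 0x1F && b1 == 0x8B) {
          System.out.println("Looks like gzip");
        } else {
          System.out.println("Not gzip (possibly Snappy, plain text, or something else)");
        }
      }
    }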

Re: Help

2013-12-08 Thread Harsh J
Wrong list, dial 911 (or whatever your country uses)? On Mon, Dec 9, 2013 at 9:25 AM, busybody wrote: > Help -- Harsh J

Re: issue about corrupt block test

2013-12-10 Thread Harsh J
=3 [192.168.10.222:50010, 192.168.10.223:50010, > 192.168.10.221:50010] > 1. BP-50684181-192.168.10.220-1383638483950:blk_2504407693800874616_106252 > len=32891136 repl=3 [192.168.10.222:50010, 192.168.10.221:50010, > 192.168.10.224:50010] -- Harsh J

Re: issue about corrupt block test

2013-12-11 Thread Harsh J
828059_106796.meta > blk_580162309124277323 > > > On Wed, Dec 11, 2013 at 3:16 PM, Harsh J wrote: >> >> Block files are not stored in a flat directory (to avoid FS limits of >> max files under a dir). Instead of looking for them right under >> finalized, issue a &

Re: issue about file in DN datadir

2013-12-11 Thread Harsh J
93828059 > blk_-5451264646515882190_106793.meta > blk_4621466474283145207 blk_516060569193828059_106796.meta > blk_580162309124277323 -- Harsh J

Re: Encrypting files in Hadoop - Using the io.compression.codecs

2012-08-07 Thread Harsh J
files. > I've downloaded the complete code & create the jar file,Change the propertise > in core-site.xml as the site says. > But when I add a new file,nothing has happened & encryption isn't working. > What can I do for encryption hdfs files ? Does anyone know how I should use > this class ? > > Tnx -- Harsh J

Re: Encrypting files in Hadoop - Using the io.compression.codecs

2012-08-07 Thread Harsh J
> On Tue, Aug 7, 2012 at 12:32 PM, Harsh J wrote: >> >> Farrokh, >> >> The codec org.apache.hadoop.io.compress.crypto.CyptoCodec needs to be >> used. What you've done so far is merely add it to be loaded by Hadoop >> at runtime, but you will need to use it
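
The distinction being drawn there, between registering a codec in io.compression.codecs and actually using it, roughly comes down to code like the following. GzipCodec stands in for the third-party crypto codec, and the output path is a placeholder; for MapReduce output one would instead point FileOutputFormat.setOutputCompressorClass at the codec class:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CodecWriteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Substitute the crypto codec's class name here once it is on the classpath.
        CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
            conf.getClassByName("org.apache.hadoop.io.compress.GzipCodec"), conf);
        CompressionOutputStream out =
            codec.createOutputStream(fs.create(new Path("/tmp/out.gz")));
        out.write("hello, compressed world".getBytes("UTF-8"));
        out.close();
      }
    }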

Re: [ANNOUNCE] - New user@ mailing list for hadoop users in-lieu of (common,hdfs,mapreduce)-user@

2012-08-07 Thread Harsh J
uce)-user@ have been migrated over. > > I'm in the process of changing the website to reflect this (HADOOP-8652). > > Henceforth, please use the new mailing list for all user-related > discussions. > > thanks, > Arun > -- Harsh J

Re: unsubscribe

2012-08-08 Thread Harsh J
Visit: http://jvdrums.sourceforge.net/ > LinkedIn: http://www.linkedin.com/in/egolan74 > Skype: egolan74 > > P Save a tree. Please don't print this e-mail unless it's really necessary > -- Harsh J

Re: UNSUBSCRIBE IS A SEPARATE LIST

2012-08-08 Thread Harsh J
sed and may contain confidential and/or privileged >> material. Any review, retransmission, dissemination or other use of, or >> >> taking of any action in reliance upon, this information by persons or >> entities other than the intended recipient is prohibited. If you >> received >> this message in error, please contact the sender and delete the >> material >> from any computer. >> >> -- > > > > > -- > Bertrand Dechoux -- Harsh J

Re: potential bug when read directly from local block

2012-08-09 Thread Harsh J
nd it may the bug of hdfs. Is there anyone > can confirm that? > > > I make a simple test case and the output is attached. > > > I test it on hadoop-dist-2.0.0-alpha > > > > > Best Regards > > > -- > Zhanwei Wang > -- Harsh J

Re: hftp in Hadoop 0.20.2

2012-08-09 Thread Harsh J
; I tried to telnet this port, but got "connection refused" error. Seems the > hftp service is not actually running. Could someone tell me how to enable > the hftp service in the 0.20.2 hadoop cluster so that I can run distcp? > > Thanks in advance, > > John -- Harsh J

Re: Recommended way to check if namenode already formatted

2012-08-09 Thread Harsh J
es. $ hadoop fs -ls / | wc -l 0 On Fri, Aug 10, 2012 at 4:34 AM, Stephen Boesch wrote: > > Hi, what's your take? I was thinking to check if a certain always-present > file exists via a hadoop dfs -ls . Other suggestions welcome. > > thanks > > stephenb -- Harsh J

Re: hftp in Hadoop 0.20.2

2012-08-09 Thread Harsh J
> > > On Thu, Aug 9, 2012 at 6:00 PM, Harsh J wrote: >> >> Hi Jian, >> >> HFTP is always-on by default. Can you check and make sure that the >> firewall isn't the cause of the connection refused on port 50070 on >> the NN and ports 50075 on the

Re: namenode instantiation error

2012-08-10 Thread Harsh J
at >> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372) >> at >> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.(FSNamesystem.java:335) >> at >> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271) >> at >> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:467) >> at >> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330) >> at >> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339) >> >> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG: >> / >> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1 >> / >> >> > -- Harsh J

Re: hftp in Hadoop 0.20.2

2012-08-10 Thread Harsh J
ion refused > telnet: Unable to connect to remote host: Connection refused > > [hadoop@pnjhadoopnn01 ~]$ telnet localhost 50070 > Trying 127.0.0.1... > telnet: connect to address 127.0.0.1: Connection refused > telnet: Unable to connect to remote host: Connection refused > > Th

Re: Compiling a mapreduce job under the new Apache Hadoop YARN

2012-08-11 Thread Harsh J
specific way or using different jars for compiling a YARN > job? Or maybe for some reason you need to build the source yourself in the > new releases to get the core.jar? I have searched quite a lot but can't find > a relevant answer. > > Thank you very much! -- Harsh J

Re: hftp in Hadoop 0.20.2

2012-08-11 Thread Harsh J
Aug 10, 2012 at 5:08 PM, Joey Echeverria >> > wrote: >> >> >> >> Can you post your NN logs? It looks like the NN is not actually >> >> started or is listening on another port for HTTP. >> >> >> >> -Joey >> >> >> >> On Fri, Aug 10, 2012 at 2:38 PM, Jian F

Re: Hadoop hardware failure recovery

2012-08-12 Thread Harsh J
ure but what happens if >>>> an >>>> entire drive crashes (physically) for whatever reason? How does Hadoop >>>> recover, if it can, from this situation? What else should I know before >>>> setting up my cluster this way? Thanks in advance. >>>> >>>> >>> >> >> >> >> -- >> Thanks & Regards, >> Anil Gupta -- Harsh J

Re: Hadoop hardware failure recovery

2012-08-12 Thread Harsh J
ashes (physically) for whatever reason? How does Hadoop > recover, if it can, from this situation? What else should I know before > setting up my cluster this way? Thanks in advance. > > -- Harsh J

Re: CDH4 yarn pig installation ambiguity

2012-08-12 Thread Harsh J
file or syntax error,what i need to do for these statement > to make grunt and pig work?? > -- Harsh J

Re: disable pipeline replication

2012-08-12 Thread Harsh J
> high latency in our tests ? > > thanks > > Cyril SCETBON > -- Harsh J

Re: MAPREDUCE-3661

2012-08-12 Thread Harsh J
; employ? I can create alias scripts to return expected results if that is an > option. > > https://issues.apache.org/jira/browse/MAPREDUCE-3661 > -- Harsh J

Re: fs.local.block.size vs file.blocksize

2012-08-12 Thread Harsh J
age. > 4. Is there any way to run with say a 512MB blocksize for the persistent > data and the default 64MB blocksize for the shuffled data? See (2). > Thanks! Do let us know if you have further questions. > ellis -- Harsh J

Re: fs.local.block.size vs file.blocksize

2012-08-12 Thread Harsh J
JIRA is up). On Sun, Aug 12, 2012 at 11:10 PM, Ellis H. Wilson III wrote: > Many thanks to Eli and Harsh for their responses! Comments in-line: > > > On 08/12/2012 09:48 AM, Harsh J wrote: >> >> Hi Ellis, >> >> Note that when in Hadoop-land, a "block s

Re: hadoop-fuse

2012-08-13 Thread Harsh J
on that is confidential, > proprietary, privileged or otherwise protected by law. The message is > intended solely for the named addressee. If received in error, please > destroy and notify the sender. Any use of this email is prohibited when > received in error. Impetus does not represent, warrant and/or guarantee, > that the integrity of this communication has been maintained nor that the > communication is free of errors, virus, interception or interference. -- Harsh J

Re: hadoop-fuse

2012-08-13 Thread Harsh J
Compiled by buildd on Thu Sep 22 06:29:01 UTC 2011 > From source with checksum 3127e3d410455d2bacbff7673bf3284c > > Do let me know if I am missing on something. > > Thanks > Rishabh > > -Original Message- > From: Harsh J [mailto:ha...@cloudera.com] > Sent: Monda

Re: Hadoop in Pseudo-Distributed

2012-08-13 Thread Harsh J
a[74092:1203] Unable to load realm info from >> SCDynamicStore >> >> >> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stdout >> ---> >> >> I have changed my hadoop-env.sh acoording to Mathew Buckett in >> https://issues.apache.org/jira/browse/HADOOP-7489 >> >> Also this error of Unable to load realm info from SCDynamicStore does not >> show up when I do 'hadoop namenode -format' or 'start-all.sh' >> >> I am also attaching a zipped copy of my logs >> >> >> Cheers, >> >> Subho. >> >> > -- Harsh J

Re: CDH4 eclipse(juno) yarn plugin

2012-08-13 Thread Harsh J
which is being discussed here... > > http://stackoverflow.com/questions/11166125/build-a-hadoop-ecplise-library-from-cdh4-jar-files > > and if anyone can provide all the jars build that will be great. > > > -- Harsh J
