Re: How can I find out which nodemanagers are unhealthy and which nodemanagers are lost?

2018-10-16 Thread Harsh J
> > What are the possible values for the state in LiveNodeManagers bean? Will > LOST, ACTIVE, REBOOTED and DECOMMISSIONED show up in the state field? > > ____ > From: Harsh J > Sent: 2018-10-15 12:46:49 > To: ims...@outlook.com > Cc: > Subject: Re

Re: How can I find out which nodemanagers are unhealthy and which nodemanagers are lost?

2018-10-14 Thread Harsh J
nt number of unhealthy NodeManagers > NumRebootedNMs Current number of rebooted NodeManagers > > > How can I find out which nodemanagers are unhealthy and which are lost? Better > if it could be achieved by calling the JMX REST API or a hadoop command. > >

Re: ZKFC ActiveBreadCrumb Value

2018-09-15 Thread Harsh J
I gave the zkCli command as an example. I'm using this Go Lib > (github.com/samuel/go-zookeeper/zk) and I get the same result. > > On Fri, 14 Sep 2018 at 02:22 Harsh J wrote: >> >> The value you are looking at directly in ZooKeeper is in a >> serialized/encoded form.

Re: ZKFC ActiveBreadCrumb Value

2018-09-13 Thread Harsh J
-nn3$active-nn3.example.com �>(�> > > How can I effectively write a generic code deployed on different HDFS > clusters to effectively find out which is the active NN from querying ZK? > > Or am I doing something wrong? Is the behavior above expected? -- Harsh J

Re: [EXTERNAL] Yarn : Limiting users to list only his applications

2018-03-19 Thread Harsh J
You are likely looking for the feature provided by YARN-7157. This will work if you have YARN ACLs enabled. On Tue, Mar 20, 2018 at 3:37 AM Benoy Antony wrote: > Thanks Christopher. > > > On Mon, Mar 19, 2018 at 2:23 PM, Christopher Weller < > christopher.wel...@gm.com>

Re: How to print values in console while running MapReduce application

2017-10-08 Thread Harsh J
Consider running your job in the local mode (set config ' mapreduce.framework.name' to 'local'). Otherwise, rely on the log viewer from the (Job) History Server to check the console prints in each task (under the stdout or stderr sections). On Thu, 5 Oct 2017 at 05:15 Tanvir Rahman
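
The local-mode switch mentioned above can be sketched as a client-side config override (a minimal fragment; per job, the same property can also be passed with `-D` at submission):

```xml
<!-- mapred-site.xml (client side): run the whole job in a single local
     JVM so stdout/stderr from map/reduce code prints to the console. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>local</value>
</property>
```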

Re: Is Hadoop validating the checksum when reading only a part of a file?

2017-09-19 Thread Harsh J
Yes, checksum match is checked for every form of read (unless explicitly disabled). By default, a checksum is generated and stored for every 512 bytes of data (io.bytes.per.checksum), so only the relevant parts are checked vs. the whole file when doing a partial read. On Mon, 18 Sep 2017 at 19:23
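
As an illustrative sketch (plain Python, not Hadoop code) of why only part of the file is verified on a ranged read: one CRC is stored per fixed-size chunk, so a read only needs the chunks overlapping its byte range.

```python
# Sketch: which fixed-size checksum chunks a partial read must verify.
# Hadoop stores one checksum per io.bytes.per.checksum bytes (512 by
# default), so a ranged read validates only the overlapping chunks.

CHUNK = 512  # io.bytes.per.checksum default

def chunks_to_verify(offset, length):
    """Return the chunk indices covering bytes [offset, offset+length)."""
    if length <= 0:
        return []
    first = offset // CHUNK
    last = (offset + length - 1) // CHUNK
    return list(range(first, last + 1))

# A 100-byte read starting at byte 1000 touches chunks 1 and 2 only.
print(chunks_to_verify(1000, 100))  # [1, 2]
```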

Re: Forcing a file to update its length

2017-08-09 Thread Harsh J
eer* > > *From:* Harsh J [mailto:ha...@cloudera.com] > *Sent:* Wednesday, August 9, 2017 3:01 PM > *To:* David Robison <david.robi...@psgglobal.net>; user@hadoop.apache.org > *Subject:* Re: Forcing a file to

Re: Forcing a file to update its length

2017-08-09 Thread Harsh J
I don't think it'd be safe for a reader to force an update of length at the replica locations directly. Only the writer would be perfectly aware of the DNs in use for the replicas and their states, and the precise count of bytes entirely flushed out of the local buffer. Thereby only the writer is

Re: Directed hdfs block reads

2017-07-23 Thread Harsh J
There isn't an API way to hint/select DNs to read from currently - you may need to do manual changes (contribution of such a feature is welcome, please file a JIRA to submit a proposal). You can perhaps hook your control of which replica location for a given block is selected by the reader under

Re: GARBAGE COLLECTOR

2017-06-19 Thread Harsh J
You can certainly configure it this way without any ill effects, but note that MR job tasks are typically short lived and GC isn't really a big issue for most of what it does. On Mon, 19 Jun 2017 at 14:20 Sidharth Kumar wrote: > Hi Team, > > How feasible will it be,

Re: HDFS - How to delete orphaned blocks

2017-03-24 Thread Harsh J
The rate of deletion of DN blocks is throttled via dfs.namenode.invalidate.work.pct.per.iteration (documented at https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml#dfs.namenode.invalidate.work.pct.per.iteration). If your problem is the rate and your usage is
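
The throttle referenced above can be raised via a NameNode-side setting (a sketch; 0.32 is the shipped default per hdfs-default.xml):

```xml
<!-- hdfs-site.xml on the NameNode: fraction of DataNodes sent block
     invalidation (deletion) commands per heartbeat interval.
     Raising it speeds up bulk deletions. -->
<property>
  <name>dfs.namenode.invalidate.work.pct.per.iteration</name>
  <value>0.32</value>
</property>
```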

Re: No edits files in dfs.namenode.edits.dir

2016-05-22 Thread Harsh J
Are you absolutely certain you are looking at the right directory? The NameNode is designed to crash if it cannot persist edits (transactions) durably. The "hdfs getconf" utility checks local command classpaths but your service may be running over a different configuration directory. If you have

Re: How to configure detection of failed NodeManager's sooner?

2016-02-14 Thread Harsh J
You're looking for the property "yarn.nm.liveness-monitor.expiry-interval-ms", whose default is 600000ms (10m). This is to be set on the ResourceManager(s)' yarn-site.xml. (X-Ref:
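
A minimal sketch of the override (the 60-second value below is a hypothetical example, not the default; lower values detect failed NodeManagers sooner at the cost of more false positives):

```xml
<!-- yarn-site.xml on the ResourceManager: how long the RM waits for a
     NodeManager heartbeat before declaring the node LOST.
     Default is 600000 ms (10 minutes). -->
<property>
  <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
  <value>60000</value>
</property>
```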

Re: hdfs : Change supergroup

2016-02-08 Thread Harsh J
Changing the supergroup configuration would not affect the existing group maintained on the file inodes (its persisted since the beginning, not pulled dynamically from config at every restart). You will need to manually fs -chgrp those. On Mon, Feb 8, 2016 at 10:15 PM Francis Dupin

Re: How does this work

2015-12-24 Thread Harsh J
Hue and Beeline access your warehouse data and metadata via the HiveServer2 APIs. The HiveServer2 service runs as the 'hive' user. On Wed, Dec 23, 2015 at 9:42 PM Kumar Jayapal wrote: > Hi, > > My environment has Kerberos and Sentry for authentication and authorisation. > >

Re: Utilizing snapshotdiff for distcp

2015-12-11 Thread Harsh J
You need to pass the -diff option (works only when -update is active). The newer snapshot name can also be "." to indicate the current view. On Sat, Dec 12, 2015 at 12:53 AM Nicolas Seritti wrote: > Hello all, > > It looks like HDFS-8828 implemented a way to utilize the

Re: nodemanager listen on 0.0.0.0

2015-12-08 Thread Harsh J
Hello, Could you file a JIRA for this please? Currently the ShuffleHandler will always bind to wildcard address due to the code being that way (in both branch-2 and trunk presently:

Re: lzo error while running mr job

2015-10-27 Thread Harsh J
Every codec in the io.compression.codecs list of classes will be initialised, regardless of actual further use. Since the Lzo*Codec classes require the native library to initialise, the failure is therefore expected. On Tue, Oct 27, 2015 at 11:42 AM Kiru Pakkirisamy
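
A sketch of the workaround implied above: list only built-in codecs when the native LZO library is absent, since every class in the list is instantiated eagerly (codec class names as in stock Hadoop):

```xml
<!-- core-site.xml: every codec listed here is instantiated up front,
     so Lzo*Codec entries fail on hosts missing the native LZO library.
     Dropping them from the list avoids the startup error. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
```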

Re: lzo error while running mr job

2015-10-27 Thread Harsh J
-------- > *From*:"Harsh J" <ha...@cloudera.com> > *Date*:Mon, Oct 26, 2015 at 11:39 PM > *Subject*:Re: lzo error while running mr job > > Every codec in the io.compression.codecs list of classes will be > initialised, regardless of actual further use.

Re: Concurrency control

2015-10-01 Thread Harsh J
If all your Apps are MR, then what you are looking for is MAPREDUCE-5583 (it can be set per-job). On Thu, Oct 1, 2015 at 3:03 PM Laxman Ch wrote: > Hi Naga, > > Like most of the app-level configurations, admin can configure the > defaults which user may want override at
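
If memory serves, the per-job knobs added by MAPREDUCE-5583 look like the following (property names and the example values should be verified against your Hadoop version; 0 means unlimited):

```xml
<!-- Per-job caps on simultaneously running tasks (MAPREDUCE-5583).
     Settable per job, e.g. via -D flags at submission time. -->
<property>
  <name>mapreduce.job.running.map.limit</name>
  <value>10</value>
</property>
<property>
  <name>mapreduce.job.running.reduce.limit</name>
  <value>5</value>
</property>
```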

Re: Who will Responsible for Handling DFS Write Pipe line Failure

2015-09-07 Thread Harsh J
These 2-part blog posts from Yongjun should help you understand the HDFS file write recovery process better: http://blog.cloudera.com/blog/2015/02/understanding-hdfs-recovery-processes-part-1/ and http://blog.cloudera.com/blog/2015/03/understanding-hdfs-recovery-processes-part-2/ On Mon, Sep 7,

Re: Performance impact of curl command on linux server

2015-09-05 Thread Harsh J
That depends - what resource did it consume, concerning your admins? CPU? Were you uploading using the chunked technique? On Fri, Sep 4, 2015 at 9:06 PM Shashi Vishwakarma wrote: > Hi > > I have been using curl command for ingesting data into HDFS using WebHDFS. >

Re: MultithreadedMapper - Sharing Data Structure

2015-08-24 Thread Harsh J
The MultiThreadedMapper won't solve your problem, as all it does is run parallel maps within the same map task JVM as a non-MT one. Your data structure won't be shared across the different map task JVMs on the host, but just within the map task's own multiple threads running the map() function

Re: MultithreadedMapper - Sharing Data Structure

2015-08-24 Thread Harsh J
reason of sharing the same structure across multiple Map Tasks. Multithreaded Map task does that partially, as within the multiple threads, same copy is used. Depending upon the hardware availability, one can get the same performance. Thanks, On Mon, Aug 24, 2015 at 1:37 PM, Harsh J ha

Re: Sorting the inputSplits

2015-07-30 Thread Harsh J
If you meant 'scheduled' first, perhaps that's doable by following (almost) what Gera says. The framework actually explicitly sorts your InputSplits list by its reported lengths, which would serve as the hack point for inducing a reordering. See

Re: dfs.permissions.superusergroup not working

2015-07-26 Thread Harsh J
not true, please advise me what would have gone wrong Thanks, Venkat *From:* Harsh J [mailto:ha...@cloudera.com] *Sent:* Friday, July 24, 2015 9:26 PM *To:* user@hadoop.apache.org *Subject:* Re: dfs.permissions.superusergroup not working Is there a typo in your email, or did you

Re: dfs.permissions.superusergroup not working

2015-07-24 Thread Harsh J
Is there a typo in your email, or did you set dfs.cluster.administrators instead of intending to set dfs.permissions.superusergroup? Also, are your id outputs from the NameNode machines? Cause by default the group lookups happen local to your NameNode machine. On Sat, Jul 25, 2015 at 1:31 AM
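
For reference, the property the thread is about is a NameNode-side setting (a sketch; the group name "hdfsadmins" is a hypothetical example):

```xml
<!-- hdfs-site.xml on the NameNode: members of this group are HDFS
     superusers. Note that group membership is resolved on the
     NameNode host, not on the client. -->
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>hdfsadmins</value>
</property>
```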

Re: total vcores per node containers in yarn

2015-07-19 Thread Harsh J
hadoop 2.5.0. Whats the logic of default using hardware detection. Say My node has 8 actual core and 32 virtual cores. Its taking 26 as value of vcores available of this node on RM UI. On Sat, Jul 18, 2015 at 7:22 PM, Harsh J ha...@cloudera.com wrote: What version of Apache Hadoop are you

Re: total vcores per node containers in yarn

2015-07-18 Thread Harsh J
What version of Apache Hadoop are you running? Recent changes have made YARN to auto-compute this via hardware detection, by default (rather than the 8 default). On Fri, Jul 17, 2015 at 11:31 PM Shushant Arora shushantaror...@gmail.com wrote: In Yarn there is a setting to specify no of vcores
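
If auto-detection yields an unwanted value, the advertised vcore count can be pinned explicitly (a sketch; the value 8 is an example matching the poster's physical core count):

```xml
<!-- yarn-site.xml on each NodeManager: advertise a fixed number of
     vcores to the RM instead of relying on hardware auto-detection. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
```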

Re: issues about hadoop-0.20.0

2015-07-18 Thread Harsh J
Apache Hadoop 0.20 and 0.21 are both very old and unmaintained releases at this point, and may carry some issues unfixed via further releases. Please consider using a newer release. Is there a specific reason you intend to use 0.21.0, which came out of a branch long since abandoned? On Sat, Jul

Re: tools.DistCp: Invalid arguments

2015-07-10 Thread Harsh J
PM, Harsh J ha...@cloudera.com wrote: Yes, if the length matches and if you haven't specifically asked it to ignore checksums. On Thursday, July 9, 2015, Giri P gpatc...@gmail.com wrote

Re: tools.DistCp: Invalid arguments

2015-07-10 Thread Harsh J
: local host is: hadoop-coc-1/127.0.1.1; destination host is: hadoop-coc-2:50070; usage: distcp OPTIONS [source_path...] target_path Thanks, -- Harsh J

Re: Different outputformats in avro map reduce job

2015-07-08 Thread Harsh J
textformat as well along with these avro files. Is it possible. Thanks, Nishanth -- Harsh J

Re: query uses WITH blocks and throws exception if run as Oozie hive action (hive-0.13.1)

2015-05-17 Thread Harsh J
-05_500_5122268870471366216-1/-ext-10002 -- Harsh J

Re: namenode question

2015-05-12 Thread Harsh J
-- Harsh J

Re: Is there any way to limit the concurrent running mappers per job?

2015-04-22 Thread Harsh J
queue to limit it, but it's not easy to control it from job submitter. Is there any way to limit the concurrent running mappers per job? Any documents or discussions before? BTW, any way to search this mailing list before I post a new question? Thanks very much. -- Harsh J

Re: Hadoop and HttpFs

2015-04-07 Thread Harsh J
cache. And at that point it fails because the client doesn’t have access to the datanodes. Am I right in my understanding of what happens in that case ? Also, anyone meets this issue already? Any solution? Workaround? Thanks a lot in advance, Rémy. -- Harsh J

Re: CPU utilization in map function

2015-04-07 Thread Harsh J
CPU utilization as well as time spent in sending messages in each superstep for a Giraph application. I am not familiar with hadoop code. Can you suggest the functions I should look into to get this information ? Thanks Ravikant -- Harsh J

Re: Can block size for namenode be different from datanode block size?

2015-03-25 Thread Harsh J
subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free, therefore neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility. -- Harsh J

Re: Identifying new files on HDFS

2015-03-25 Thread Harsh J
not copy, forward or otherwise disclose the content of the e-mail. The views expressed in this communication may not necessarily be the view held by WHISHWORKS. -- Harsh J

Re: Swap requirements

2015-03-25 Thread Harsh J
yarn.nodemanager.vmem-pmem-ratio parameter... If data nodes do not require swap then what about the above parameter? What is that used for in yarn? -- Harsh J

Re: namenode recovery

2015-03-25 Thread Harsh J
that was checkpointed by the secondary namenode? Thanks Brian -- Harsh J

Re: Tell which reduce task run which partition

2015-03-24 Thread Harsh J
partition 0, reduce task 1 will read partition 1, etc... Thanks, -- Harsh J

Re: how does datanodes know where to send the block report in HA

2015-01-31 Thread Harsh J
node send report directly to Name Node or will it send to Journal Nodes / ZKFC? Thanks SP -- Harsh J

Re: How to get Hadoop Admin job being fresher in Big Data Field?

2015-01-04 Thread Harsh J
experience like i have as a DBA and give job to a newbie like me? What are the skills/tool knowledge i should have to get a job in Big Data Field ? Thanks Krish -- Harsh J

Re: Question about the QJM HA namenode

2014-12-03 Thread Harsh J
=1000 MILLISECONDS) {code} I have the QJM on l-hbase1.dba.dev.cn0, does it matter? I am a newbie, Any idea will be appreciated!! -- Harsh J

Re: Load csv files into drill tables

2014-10-25 Thread Harsh J
me. Laszlo -- Harsh J

Re: why does hadoop creates /tmp/hadoop-user/hadoop-unjar-xxxx/ dir and unjar my fat jar?

2014-10-25 Thread Harsh J
until on the compute nodes, right? -- Harsh J

Re: hadoop 2.4 using Protobuf - How does downgrade back to 2.3 works ?

2014-10-18 Thread Harsh J
using protobuf. So upgrade from 2.3.0 to 2.4 would work since 2.4 can read old (2.3) binary format and write the new 2.4 protobuf format. After using 2.4, if there is a need to downgrade back to 2.3, how would that work ? Thanks, -- Harsh J

Re: S3 with Hadoop 2.5.0 - Not working

2014-09-10 Thread Harsh J
There is an s3.impl until 1.2.1 release. So does the 2.5.0 release support s3 or do i need to do anything else. cheers, Dhiraj -- Harsh J

Re: Hadoop 2.0.0 stopping itself

2014-09-03 Thread Harsh J
new in Hadoop. Thanks in advance. -- Harsh J

Re: cannot start tasktracker because java.lang.NullPointerException

2014-09-01 Thread Harsh J
It appears you have made changes to the source and recompiled it. The actual release source line 247 of the failing class can be seen at https://github.com/apache/hadoop-common/blob/release-1.2.1/src/mapred/org/apache/hadoop/mapred/TaskTracker.java#L247, which can never end in a NPE. You need to

RE: hadoop/yarn and task parallelization on non-hdfs filesystems

2014-08-15 Thread Harsh J
utilization? Thanks, Calvin -- Harsh J

Re: Don't want to read during namenode is in safemode

2014-08-15 Thread Harsh J
can i fix this problem. Regards, Satyam -- Harsh J

Re: Hadoop 2.2 Built-in Counters

2014-08-14 Thread Harsh J
in resource manager website anymore. I know I can get them from client output. I was wondering if there is other place in name node or data node to get the final counter measures regarding job id? Thanks, Shaw -- Harsh J

Re: Ideal number of mappers and reducers to increase performance

2014-08-07 Thread Harsh J
would be much appreciated. Thank you . (singlenodecuda)conf.zip Regards, Sindhu -- Harsh J

Re: Hadoop 2.4.0 How to change Configured Capacity

2014-08-02 Thread Harsh J
capacity” e.g. 2T or more per node? Name node Configured Capacity: 264223436800 (246.08 GB) Each Datanode Configured Capacity: 52844687360 (49.22 GB) regards Arthur -- Harsh J

Re: Ideal number of mappers and reducers to increase performance

2014-07-31 Thread Harsh J
datanodes running on same machine. Your help is very much appreciated. Regards, sindhu -- Harsh J

Re: where are the old hadoop documentations for v0.22.0 and below ?

2014-07-30 Thread Harsh J
: harsh, those are just javadocs. i'm talking about the full documentations (see original post). On Tue, Jul 29, 2014 at 2:17 PM, Harsh J ha...@cloudera.com wrote: Precompiled docs are available in the archived tarballs of these releases, which you can find on: https://archive.apache.org/dist

Re: Master /slave file configuration for multiple datanodes on same machine

2014-07-30 Thread Harsh J
both in my hadoop directory. How should master and slave files of conf and conf2 look like if i want conf to be master and conf2 to be slave .? Also how should /etc/hosts file look like ? Please help me. I am really stuck Regards, Sindhu -- Harsh J

Re: where are the old hadoop documentations for v0.22.0 and below ?

2014-07-29 Thread Harsh J
, 2.4.1, 0.23.11 -- Harsh J

Re: Performance on singlenode and multinode hadoop

2014-07-29 Thread Harsh J
datanode uses different cores of the ubuntu machine. (Note: i know multiple datanodes on same machine is not that advantageous , but assuming my machine is powerful ..i set it up..) would appreciate any advices on this. Regards, Sindhu -- Harsh J

Re: Cannot compile a basic PutMerge.java program

2014-07-28 Thread Harsh J
) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) Could not find the main class: PutMerge. Program will exit. I get the above error. I tried: $set CLASSPATH=/usr/lib/hadoop/bin/hadoop $java PutMerge I still get the error. On Sunday, July 27, 2014 10:16 PM, Harsh J ha...@cloudera.com wrote: The javac

Re: Question about sqoop command error

2014-07-28 Thread Harsh J
$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) ... 26 more On Sunday, July 27, 2014 10:17 PM, Harsh J ha...@cloudera.com wrote: The jar must be placed under the $SQOOP_HOME/lib/ directory (the libraries location

Re: Cannot compile a basic PutMerge.java program

2014-07-27 Thread Harsh J
(); } catch (IOException e) { e.printStackTrace(); } } } = -- Harsh J

Re: Building custom block placement policy. What is srcPath?

2014-07-24 Thread Harsh J
result in placement policy calls iff there's an under-replication/losses/etc. to block replicas of the original set. Only for such operations would you have a possibility to determine the actual full length of file (as explained above). Thank you, AB -- Harsh J

Re: Building custom block placement policy. What is srcPath?

2014-07-24 Thread Harsh J
the namenode and fsnamesystem code just to see if I can do what I want from there. Any suggestions will be appreciated. Thank you, AB On 07/24/2014 02:12 PM, Harsh J wrote: Hello, (Inline) On Thu, Jul 24, 2014 at 11:11 PM, Arjun Bakshi baksh...@mail.uc.edu wrote: Hi, I want

Re: Re: HDFS input/output error - fuse mount

2014-07-19 Thread Harsh J
the cause of this issue? What is the correct java version to be used for this version of hadoop. I have also tried 1.6.0_31 but no changes were seen. If java isn't my issue, then what is? Best regards, Andrew -- Harsh J

Re: OIV Compatiblity

2014-07-14 Thread Harsh J
as there is no hdfs.sh file there. Thanks. -- Harsh J

Re: OIV Compatiblity

2014-07-14 Thread Harsh J
it in offline mode using the tool in the hadoop 1.2 or higher distributions.I guess the structure of fsimage would be same for both the distributions. On Mon, Jul 14, 2014 at 11:53 PM, Ashish Dobhal dobhalashish...@gmail.com wrote: Harsh thanks On Mon, Jul 14, 2014 at 11:39 PM, Harsh J ha

Re: OIV Compatiblity

2014-07-14 Thread Harsh J
There shouldn't be any - it basically streams over the existing local fsimage file. On Tue, Jul 15, 2014 at 12:21 AM, Ashish Dobhal dobhalashish...@gmail.com wrote: Sir I tried it, it works. Are there any issues in downloading the fsimage using wget. On Tue, Jul 15, 2014 at 12:17 AM, Harsh J

Re: Where is hdfs result ?

2014-06-23 Thread Harsh J
. --- -- Harsh J

Re: Recover HDFS lease after crash

2014-06-16 Thread Harsh J
again. What is the correct way to recover from this? Is there API for recovering the lease and resuming appending faster? DFSClient sets a randomized client name. If it were to send the same client name as before the crash, would it receive a lease on the file faster? Thanks -- Harsh J

Re: should i just assign history server address on NN or i have to assign on each node?

2014-06-04 Thread Harsh J
server on my one of NN(i use NN HA) ,i want to ask if i need set history server address on each node? -- Harsh J

Re: Building Mahout Issue

2014-06-03 Thread Harsh J
Education Services Email: andrew.bote...@emc.com -- Harsh J

Re: listing a 530k files directory

2014-05-30 Thread Harsh J
at least at the first 10 file names, see the size, maybe open one thanks, G. -- Harsh J

Re: Problem with simple-yarn-app

2014-05-30 Thread Harsh J
(RunJar.java:212) Does anyone know what I'm doing wrong? Thanks, Lars -- Harsh J

Re: How to set the max mappers per node on a per-job basis?

2014-05-30 Thread Harsh J
can we request a different number of mappers per node for each job? From what I've read, mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum cannot be overridden from the client. --Jeremy -- Harsh J

Re: Can not find hadoop packages

2014-05-29 Thread Harsh J
.jar) into my classpath but it still does not work. Thanks! Best, Isaiah -- Harsh J

Re: fuse-dfs on hadoop-2.2.0

2014-05-27 Thread Harsh J
I forgot to send this earlier, but here's an answer with added links that may help: http://stackoverflow.com/a/21655102/1660002 On Sat, May 17, 2014 at 9:54 AM, Harsh J ha...@cloudera.com wrote: The issue here is that JNI doesn't like wildcards in the classpath string - it does not evaluate

Re: question on yarn and fairscheduler

2014-05-20 Thread Harsh J
failed 4 times . is it possible that this is due to preempted too many times? or any other issue. At the same job, there are also tasks get killed with note: Attmpt state missing from History : marked as KILLED any help would be appreciated. Thanks. -- Harsh J

Re: hadoop 2.2.0 nodemanager graceful stop

2014-05-19 Thread Harsh J
shall be deemed as neither given nor endorsed by Winbond. -- Regards Shengjun -- Harsh J

Re: about hadoop upgrade

2014-05-19 Thread Harsh J
/property -- Harsh J

Re: fuse-dfs on hadoop-2.2.0

2014-05-17 Thread Harsh J
the configuration? Thanks in advance! Cheng -- Harsh J

Re: Realtime sensor's tcpip data to hadoop

2014-05-11 Thread Harsh J
traffic from sensors are over the limit of one lan port, how to share the loads, is there any component in hadoop to make this done automatically. Any suggestions, thanks. -- Harsh J

Re: HDFS mounting issue using Hadoop-Fuse on Fully Distributed Cluster?

2014-05-03 Thread Harsh J
are subscribed to the Google Groups CDH Users group. To unsubscribe from this group and stop receiving emails from it, send an email to cdh-user+unsubscr...@cloudera.org. For more options, visit https://groups.google.com/a/cloudera.org/d/optout. -- Harsh J

Re: For QJM HA solution, after failover, application must update NameNode IP?

2014-04-30 Thread Harsh J
). However, after failover, the IP of active NameNode is changed to 9.123.22.2 which is the IP of previous standby NameNode. In this case, application must update NameNode IP? Thanks! -- Harsh J

Re: copyFromLocal: unexpected URISyntaxException

2014-04-28 Thread Harsh J
@compute-1-0 ~]$ hadoop fs -copyFromLocal wrfout_d01_2001-01-01_00:00:00 netcdf_data/ copyFromLocal: unexpected URISyntaxException I am using Hadoop 2.2.0. Any suggestions? Patcharee -- Nitin Pawar -- Harsh J

Re: Reply: hdfs write partially

2014-04-28 Thread Harsh J
Huang From: user-return-15182-tdhkx=126@hadoop.apache.org [mailto:user-return-15182-tdhkx=126@hadoop.apache.org] on behalf of Harsh J Sent: 2014-04-28 13:30 To: user@hadoop.apache.org Subject: Re: hdfs write partially Packets are chunks of the input you try to pass to the HDFS writer. What

Re: hdfs write partially

2014-04-27 Thread Harsh J
is 64K and it can't be bigger than 16M. So if write bigger than 16M a time, how to make sure it doesn't write partially ? Does anyone knows how to fix this? Thanks a lot. -- Ken Huang -- Harsh J

Re: configure HBase

2014-04-24 Thread Harsh J
-env.sh, can you explain in detail? Thanks for any inputs. -- Harsh J

Re: Changing default scheduler in hadoop

2014-04-13 Thread Harsh J
the configuration override, and it will always go back to the default FIFO based scheduler, the same whose source has been linked above. I am struggling since 4 months to get help on Apache Hadoop?? Are you unsure about this? -- Harsh J

Re: Number of map task

2014-04-12 Thread Harsh J
think the job will be done quicker if there are more Map tasks? Patcharee -- Harsh J

Re: Hadoop 2.2.0-cdh5.0.0-beta-1 - MapReduce Streaming - Failed to run on a larger jobs

2014-04-10 Thread Harsh J
committed heap usage (bytes)=128240713728 File Input Format Counters Bytes Read=21753888768 14/04/10 10:28:24 ERROR streaming.StreamJob: Job not Successful! Streaming Command Failed! Thanks and Regards, Truong Phan -- Harsh J

Re: InputFormat and InputSplit - Network location name contains /:

2014-04-10 Thread Harsh J
) at java.lang.Thread.run(Thread.java:662) 2014-04-10 17:09:01,986 INFO [AsyncDispatcher event handler] org.apache.hadoop. -- Harsh J

Re: File requests to Namenode

2014-04-09 Thread Harsh J
-- Harsh J

Re: MapReduce for complex key/value pairs?

2014-04-08 Thread Harsh J
, the key would be the ngram but the value would be an integer (the count) _and_ an array of document id's. Is this something that can be done? Any pointers would be appreciated. I am using Java, btw. Thank you, Natalia Connolly -- Harsh J

Re: Why block sizes shown by 'fsck' and '-stat' are inconsistent?

2014-04-05 Thread Harsh J
size of /user/user1/filesize/derby.jar equals to 2.5 MB(2673375 B), however the block size equals to 128 MB(134217728 B). Why block sizes shown by 'fsck' and '-stat' are inconsistent? -- Harsh J
