Self-healing HDFS functionality

2018-02-02 Thread Sidharth Kumar
face in day-to-day life that could be automated as a self-healing feature. So I would like to request all community members to provide a list of issues they face every day that could be taken up as features for self-healing HDFS. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 LinkedIn: www.linkedin.com/in

HBase trace

2018-01-24 Thread Sidharth Kumar
Hi Team, I want to know what read and write request operations are being carried out on HBase. I enabled TRACE in log4j but could not get the info. Could you please help me with how to extract this info from HBase, and which log would give me better info? Warm Regards Sidharth Kumar | Mob: +91 8197
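
A minimal sketch of enabling TRACE logging for the HBase RPC layer through log4j.properties; the logger name, conf path, and log location below are assumptions and depend on the HBase version and install layout:

    # Append a TRACE-level logger to HBase's log4j configuration
    # (logger name is illustrative; verify against your HBase version)
    cat >> /etc/hbase/conf/log4j.properties <<'EOF'
    # Trace client read/write RPCs as they arrive at the RegionServer
    log4j.logger.org.apache.hadoop.hbase.ipc=TRACE
    EOF

    # Restart the RegionServer, then watch its log for the traced operations
    tail -f /var/log/hbase/hbase-*-regionserver-*.log

TRACE output is voluminous, so this is best limited to a short diagnostic window.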

Local read-only users in Ambari

2017-12-01 Thread Sidharth Kumar
server and register the cluster as a remote cluster and set up Hive views, but I still have the same problem. Kindly help to resolve this. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Apache Ambari

2017-09-08 Thread Sidharth Kumar
Hi, Apache Ambari is open source. So, can we set up Apache Ambari to manage an existing Apache Hadoop cluster? Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Spark on YARN error -- please help

2017-08-28 Thread Sidharth Kumar
Hi, I have configured Apache Spark over YARN. I am able to run MapReduce jobs successfully, but spark-shell gives the below error. Kindly help me to resolve this issue. *spark-defaults.conf* spark.master spark://master2:7077 spark.eventLog.enabled true spark.eventLog
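
One thing worth flagging (an assumption on my part, since the actual error is truncated above): the quoted config points spark.master at a standalone master URL even though the job is meant to run on YARN. A sketch of spark-defaults.conf for YARN mode, with illustrative paths:

    # Sketch of conf/spark-defaults.conf for YARN mode (paths are assumptions)
    cat > "$SPARK_HOME/conf/spark-defaults.conf" <<'EOF'
    # On YARN the master is simply "yarn", not a spark:// standalone URL
    spark.master            yarn
    spark.eventLog.enabled  true
    # Event log dir must exist in HDFS and be writable by the submitting user
    spark.eventLog.dir      hdfs:///spark-logs
    EOF

    hdfs dfs -mkdir -p /spark-logs
    spark-shell --master yarn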

Hadoop 3.0

2017-07-09 Thread Sidharth Kumar
Hi, Is there any documentation through which we can know what changes are targeted in Hadoop 3.0? Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Re: reconfiguring storage

2017-07-07 Thread Sidharth Kumar
Hi, Just want to add to Daemeon's point: if the misconfiguration happened on a couple of nodes, it's better to fix them one at a time, or else take a backup of your data first. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367 LinkedIn: www.linkedin.com/in/sidharthkuma

Re: Kafka or Flume

2017-07-02 Thread Sidharth Kumar
Thank you very much for your help. What about a flow like NiFi --> Kafka --> Storm for real-time processing, and then storing into HBase? Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 02-Jul-2017 12:40 PM,

Re: Kafka or Flume

2017-07-01 Thread Sidharth Kumar
data stored in Hadoop. So can you suggest a flow in a little more detail? Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 01-Jul-2017 9:46 PM, "Gagan Brahmi" wrote: I'd say the data flow should be

Re: Kafka or Flume

2017-07-01 Thread Sidharth Kumar
> connector) to put data into HDFS, then >> write a tool to read data from the topic, validate it, and store it in another topic. >> >> We are using a combination of these steps to process over 10 million >> events/second. >> >> I hope it helps.. >> >> Thanks >> M

RE: Kafka or Flume

2017-06-29 Thread Sidharth Kumar
Thanks! What about Kafka with Flume? I would also like to mention that the daily data intake is in the millions, and we can't afford to lose even a single piece of data, which makes high availability a necessity. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | Lin

Kafka or Flume

2017-06-29 Thread Sidharth Kumar
historical data which is stored in Hadoop. So, my question is: which ingestion tool will be best for this, Kafka or Flume? Any suggestions would be a great help to me. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792

RE: Lots of warning messages and exception in namenode logs

2017-06-29 Thread Sidharth Kumar
Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 29-Jun-2017 3:45 PM, "omprakash" wrote: > Hi Ravi, > > I have 5 nodes in the Hadoop cluster and all have the same configuration. After > setting *dfs.re

Re: GARBAGE COLLECTOR

2017-06-19 Thread Sidharth Kumar
n, 19 Jun 2017 at 14:20 Sidharth Kumar wrote: > >> Hi Team, >> >> How feasible would it be if I configure the CMS garbage collector for Hadoop >> daemons and G1 for MapReduce jobs which run for hours? >> >> Thanks for your help ...! >>

GARBAGE COLLECTOR

2017-06-19 Thread Sidharth Kumar
Hi Team, How feasible would it be if I configure the CMS garbage collector for Hadoop daemons and G1 for MapReduce jobs which run for hours? Thanks for your help ...! -- Regards Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn <https://www.linkedin.com/in/sidharthkumar2792/>
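
For reference, collector choice is just a set of JVM flags, so mixing CMS for daemons with G1 for task JVMs is mechanically straightforward. A minimal sketch, assuming a Hadoop 2.x hadoop-env.sh and a job submitted through ToolRunner (heap sizes are illustrative, not tuning advice):

    # hadoop-env.sh: CMS for the long-lived daemon JVMs
    export HADOOP_NAMENODE_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-XX:+UseConcMarkSweepGC $HADOOP_DATANODE_OPTS"

    # Per job: G1 for the MapReduce task JVMs (requires the job to accept
    # generic options, e.g. via ToolRunner)
    hadoop jar my-job.jar MyJob \
      -Dmapreduce.map.java.opts="-Xmx2g -XX:+UseG1GC" \
      -Dmapreduce.reduce.java.opts="-Xmx4g -XX:+UseG1GC" \
      input/ output/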

Re: How to monitor YARN application memory per container?

2017-06-13 Thread Sidharth Kumar
Hi, I guess you can get it from http://<host>:<port>/jmx or /metrics Regards Sidharth LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 13-Jun-2017 6:26 PM, "Shmuel Blitz" wrote: > (This question has also been published on StackOverflow) > > I am looking for
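
A sketch of pulling NodeManager metrics from that JMX servlet with curl; the hostname is a placeholder, 8042 is the stock yarn.nodemanager.webapp.address port, and the bean name may differ by version:

    # Dump everything the NodeManager exposes over JMX
    curl -s 'http://nm-host.example.com:8042/jmx'

    # Narrow to one bean with the ?qry= parameter (bean name illustrative)
    curl -s 'http://nm-host.example.com:8042/jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics'

Per-container memory specifically may need the NodeManager or ResourceManager REST APIs rather than raw JMX, depending on the version.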

Re: When i run wordcount of Hadoop in Win10, i got wrong info

2017-06-12 Thread Sidharth Kumar
Check the /tmp directory permissions and owners. Sidharth On 13-Jun-2017 3:20 AM, "Deng Yong" wrote: > D:\hdp\sbin>yarn jar d:/hdp/share/hadoop/mapreduce/ > hadoop-mapreduce-examples-2.7.3.jar wordcount /aa.txt /out > > 17/06/10 15:27:32 INFO client.RMProxy: Connecting to ResourceManager at / > 0.0
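
To expand on that a little: the usual culprit is a staging/temp directory that is not world-writable with the sticky bit. The Unix-flavored sketch below shows the general check; on a Windows install (as in the quoted run) the local-filesystem side differs, but the HDFS side is the same:

    # Local temp dir: should show drwxrwxrwt (mode 1777)
    ls -ld /tmp
    chmod 1777 /tmp

    # HDFS staging side: must be writable by the submitting user
    hdfs dfs -ls -d /tmp
    hdfs dfs -chmod 1777 /tmp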

How to Contribute as hadoop admin

2017-05-31 Thread Sidharth Kumar
Hi, I have been working as a Hadoop admin for 2 years. I subscribed to this group 3 months ago, but since then I have never been able to figure out something that a Hadoop admin can contribute. It would be great if someone could help me out to contribute to Hadoop 3.0 development. Thanks for help in adv

Re: Why HDFS doesn't have a current working directory

2017-05-26 Thread Sidharth Kumar
o something like hdfs cat foo instead of > hdfs cat /user/me/foo). HDFS does have this to a limited extent - if your > path is not absolute, it is relative from your home directory (or root if > there is no home directory for your user). > > Thanks, > Hariharan >
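
A small demonstration of that relative-path behavior (username and file names are made up; note the full command form is hdfs dfs -cat):

    # Relative paths resolve against /user/<current-user> in HDFS
    hdfs dfs -mkdir -p /user/me
    hdfs dfs -put localfile.txt foo   # lands in /user/me/foo
    hdfs dfs -cat foo                 # same as: hdfs dfs -cat /user/me/foo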

Why HDFS doesn't have a current working directory

2017-05-26 Thread Sidharth Kumar
Hi, Can you kindly explain to me why HDFS doesn't have a current-directory concept? Why is Hadoop not implemented to use pwd? Why can't commands like cd and pwd be implemented in HDFS? Regards Sidharth Mob: +91 819799 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Re: access error while trying to run distcp from source cluster

2017-05-25 Thread Sidharth Kumar
Hi, It may be because the user doesn't have write permission on the destination cluster path. For example: $ su - abcde $ hadoop distcp /data/sample1 hdfs://destclstnn:8020/data/ So, in the above case, user abcde should have write permission on the destination path hdfs://destclstnn:8020/data/ Regards
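
A sketch of checking and granting that permission from the destination side; the user and paths come from the example above, and the chown/chmod choice is just one option (HDFS ACLs are another):

    # On the destination cluster: inspect current ownership and permissions
    hdfs dfs -ls -d hdfs://destclstnn:8020/data/

    # Give the copying user write access
    hdfs dfs -chown abcde:hadoop hdfs://destclstnn:8020/data/
    hdfs dfs -chmod 775 hdfs://destclstnn:8020/data/

    # Then rerun the copy as that user
    su - abcde -c 'hadoop distcp /data/sample1 hdfs://destclstnn:8020/data/'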

Re: Block pool error in datanode

2017-05-23 Thread Sidharth Kumar
So I guess this is due to a change in the block pool ID. If you have an older fsimage backup, start the namenode using that fsimage, or delete the current directory of the datanodes' HDFS storage and re-format the namenode once again. Regards Sidharth Mob: +91 819799 LinkedIn: www.linkedin.com/in/sidharthkumar2792
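
A sketch of the second, destructive option; the data directory below is an assumption for dfs.datanode.data.dir, and this wipes all HDFS data, so it is only reasonable on a cluster whose contents are disposable:

    # DANGER: destroys all HDFS data. On every datanode, remove the stale
    # block-pool state (path = whatever dfs.datanode.data.dir points at)
    rm -rf /data/hdfs/dn/current

    # On the namenode, re-format and bring HDFS back up
    hdfs namenode -format
    start-dfs.sh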

RE: HDFS default block size

2017-05-22 Thread Sidharth Kumar
Thank you for your help. On 22-May-2017 5:23 PM, "surendra lilhore" wrote: > Hi Sidharth, > > It is 128 MB. > > You can refer to this link: https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
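
For reference, the effective value is easy to confirm on a live cluster; these are standard Hadoop 2.x commands, and 134217728 bytes is 128 MB:

    # Print the configured default block size in bytes (e.g. 134217728)
    hdfs getconf -confKey dfs.blocksize

    # Or inspect the actual blocks of an existing file
    hdfs fsck /path/to/file -files -blocks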

HDFS default block size

2017-05-22 Thread Sidharth Kumar
Hi, Can you kindly tell me what the default block size is in Apache Hadoop 2.7.3? Is it 64 MB or 128 MB? Thanks Sidharth

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-17 Thread Sidharth Kumar
" wrote: Apologies for the delayed reply, was away due to some personal issues. I tried the telnet command as well, but no luck. I get the response 'Name or service not known'. Thanks Bhushan Pathak On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar wrote:

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-02 Thread Sidharth Kumar
Can you check if the ports are open by running the telnet command? Run the below command from the source machine to the destination machine and check if this helps: $ telnet <host> <port> Ex: $ telnet 192.168.1.60 9000 Let's Hadooping! Bests Sidharth Mob: +91 819799 LinkedIn: www.linkedin.com/in/sidharthkumar2792 O

Re: Noob question about Hadoop job that writes output to HBase

2017-04-22 Thread Sidharth Kumar
I guess even the aggregated log will have the error, which can be collected by using: yarn logs -applicationId <application ID> > <file>.log Sidharth Mob: +91 819799 LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 22-Apr-2017 5:40 PM, "Ravi Prakash" wrote: > Hi Evelina! > > You've posted the logs for the MapReduce A

Re: HDFS read and write operation

2017-04-20 Thread Sidharth Kumar
Hi, Could anyone kindly help me clear up my doubts below? Thanks On 19-Apr-2017 8:08 PM, "Sidharth Kumar" wrote: Hi, please help me to understand: 1) If we read the anatomy of an HDFS read in the Hadoop Definitive Guide, it says the data queue is consumed by the streamer. So, can you just tell me

Re: Hadoop namespace format user and permissions

2017-04-20 Thread Sidharth Kumar
Hi James, Please create a user hadoop or hdfs and change the ownership of the directory to hdfs:hadoop. HDFS runs as the hdfs user. This should probably resolve your issue. If you need, I can share a document which I made for pseudo-distributed-mode installation to help my mates. Please let me know if the issue still pe
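
A sketch of that suggestion as commands; the group/user names follow the convention above, and the data directory is a placeholder since the original thread's path isn't shown:

    # Create the service group and user (run as root)
    groupadd hadoop
    useradd -g hadoop hdfs

    # Hand the Hadoop storage directory over to hdfs:hadoop
    # (/path/to/hadoop/data stands in for the dfs.namenode.name.dir location)
    chown -R hdfs:hadoop /path/to/hadoop/data

    # Format and start HDFS as the hdfs user
    su - hdfs -c 'hdfs namenode -format'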

HDFS read and write operation

2017-04-19 Thread Sidharth Kumar
Hi, please help me to understand: 1) If we read the anatomy of an HDFS read in the Hadoop Definitive Guide, it says the data queue is consumed by the streamer. So, can you just tell me: will there be only one streamer in a cluster which consumes packets from the data queue and creates a pipeline for each packet to sto

Re: Disk full errors in local-dirs, what data is stored in yarn.nodemanager.local-dirs?

2017-04-12 Thread Sidharth Kumar
-- Regards Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn <https://www.linkedin.com/in/sidharthkumar2792/>

Re: Anatomy of read in HDFS

2017-04-10 Thread Sidharth Kumar
r 10, 2017 at 11:46 AM, Sidharth Kumar <sidharthkumar2...@gmail.com> wrote: > >> Thanks Philippe, >> >> I am looking for an answer restricted only to HDFS, because we can do read >> and write operations from the CLI using commands like "*hadoop fs >> -

Re: Anatomy of read in HDFS

2017-04-10 Thread Sidharth Kumar
. There is no Java framework per se for splitting up a file >> (technically not so, but let's simplify, outside of your own custom code). >> >> *...* >> >> Daemeon C.M. Reiydelle, USA (+1) 415.501.0198, London >> (+44) (0) 20 8144

Re: Anatomy of read in HDFS

2017-04-09 Thread Sidharth Kumar
's just a single-threaded process and will read the data sequentially. On Friday, April 7, 2017, Sidharth Kumar wrote: > Thanks for your response. But I didn't understand yet; if you don't mind, can > you tell me what you mean by "*With Hadoop, the idea is to parallelize > th

Re: Anatomy of read in HDFS

2017-04-07 Thread Sidharth Kumar
like MapReduce. Regards, Philippe On Thu, Apr 6, 2017 at 9:55 PM, Sidharth Kumar wrote: > Hi Genies, > > I have a small doubt: is the HDFS read operation a parallel or a sequential > process? From my understanding it should be parallel, but if I read > "hadoop definitive

Anatomy of read in HDFS

2017-04-06 Thread Sidharth Kumar
Hi Genies, I have a small doubt: is the HDFS read operation a parallel or a sequential process? From my understanding it should be parallel, but if I read the "Hadoop: The Definitive Guide" (4th edition) anatomy of a read, it says "*Data is streamed from the datanode back to the client, which calls read() repeate

Customize Sqoop default property

2017-04-06 Thread Sidharth Kumar
Hi, I am importing data from an RDBMS to Hadoop using Sqoop, but my RDBMS data is multi-valued and contains the "," special character. While importing data into Hadoop, Sqoop by default separates the columns with the "," character. Is there any property through which we can customize thi
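
For what it's worth, Sqoop 1 exposes output-formatting flags for exactly this; a sketch, where the JDBC URL, credentials, table, and target dir are all placeholders:

    # Use a delimiter that can't appear in the data, and quote fields that
    # contain special characters
    sqoop import \
      --connect jdbc:mysql://db-host.example.com/salesdb \
      --username etl_user -P \
      --table customers \
      --fields-terminated-by '\t' \
      --optionally-enclosed-by '"' \
      --target-dir /data/customers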

Request for Hadoop mailing list subscription and 3.0.0 issues

2017-03-27 Thread Sidharth Kumar
configurations, while the same set of configurations worked fine for Hadoop 2.7.2 and other stable versions. Thanks for your help in advance -- Regards Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn <https://www.linkedin.com/in/sidharthkumar2792/>