RE: Disk maintenance

2017-07-17 Thread Adaryl Wakefield
Sorry for the slow response. I have to do this in my off hours. Here is the 
output.

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G   31G   20G  61% /
devtmpfs                  16G     0   16G   0% /dev
tmpfs                     16G  8.0K   16G   1% /dev/shm
tmpfs                     16G   18M   16G   1% /run
tmpfs                     16G     0   16G   0% /sys/fs/cgroup
/dev/sda1                494M  173M  321M  36% /boot
/dev/mapper/centos-home  866G   48M  866G   1% /home
tmpfs                    3.1G     0  3.1G   0% /run/user/1000
tmpfs                    3.1G     0  3.1G   0% /run/user/1006
tmpfs                    3.1G     0  3.1G   0% /run/user/1003
tmpfs                    3.1G     0  3.1G   0% /run/user/1004
tmpfs                    3.1G     0  3.1G   0% /run/user/1016
tmpfs                    3.1G     0  3.1G   0% /run/user/1020
tmpfs                    3.1G     0  3.1G   0% /run/user/1015
tmpfs                    3.1G     0  3.1G   0% /run/user/1021
tmpfs                    3.1G     0  3.1G   0% /run/user/1012
tmpfs                    3.1G     0  3.1G   0% /run/user/1018
tmpfs                    3.1G     0  3.1G   0% /run/user/1002
tmpfs                    3.1G     0  3.1G   0% /run/user/1009

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData


From: Philippe Kernévez [mailto:pkerne...@octo.com]
Sent: Friday, July 14, 2017 3:08 AM
To: Adaryl Wakefield 
Cc: user@hadoop.apache.org
Subject: Re: Disk maintenance

Hi,

Could you run the command 'sudo df -kh'?

Regards,
Philippe

On Fri, Jul 14, 2017 at 6:40 AM, Adaryl Wakefield wrote:
So I ran the first command and did find some offenders:
5.9G    /var/log/ambari-infra-solr
5.9G    /var/log/hadoop

While those are big numbers, they are sitting on a 1TB disk. This is the actual 
message I’m getting:
Capacity Used: [60.52%, 32.5 GB], Capacity Total: [53.7 GB], path=/usr/hdp

I discovered that HDFS isn't actually taking up the whole disk, which I didn't 
know. I figured out how to expand that, but before I do, I want to know what is 
eating my space. I ran your command again with a modification:
sudo du -h --max-depth=1 /usr/hdp

That output is shown here:
395M    /usr/hdp/share
4.8G    /usr/hdp/2.5.0.0-1245
4.0K    /usr/hdp/current
5.2G    /usr/hdp

None of that adds up to 32.5 GB.
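
A possible explanation, offered here as a hedged aside rather than something stated in 
the thread: Ambari's alert reports usage of the filesystem that path=/usr/hdp sits on, 
not of the directory itself, and 32.5 GB used out of a 53.7 GB total matches the 31G 
used out of 50G shown for /dev/mapper/centos-root in the df output above (the gap is 
just GiB vs. GB). Two quick checks along those lines:

df -h /usr/hdp                                   # shows which mount backs /usr/hdp and how full it is
sudo du -xm --max-depth=1 / | sort -rn | head    # largest top-level consumers on that same filesystem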

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData


From: Shane Kumpf [mailto:shane.kumpf.apa...@gmail.com]
Sent: Wednesday, July 12, 2017 7:17 AM
To: Adaryl Wakefield
Cc: user@hadoop.apache.org
Subject: Re: Disk maintenance

Hello Bob,

It's difficult to say based on the information provided, but I would suspect 
namenode and datanode logs to be the culprit. What does "sudo du -h 
--max-depth=1 /var/log" return?

If it is not logs, is there a specific filesystem/directory that you see 
filling up or alerting, e.g. /, /var, /data, etc.? If you are unsure, you can 
start at / and track down where the space is going via "sudo du -xm 
--max-depth=1 / | sort -rn", then walk the filesystem hierarchy from the 
directory listed as using the most space (change / in the previous command to 
that directory, and repeat until you locate the files responsible).
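
For instance, a minimal sketch of that walk-down (the /var and /var/log steps are 
purely illustrative and assume /var turns out to be the largest consumer):

sudo du -xm --max-depth=1 / | sort -rn | head
# suppose /var tops the list; repeat one level deeper
sudo du -xm --max-depth=1 /var | sort -rn | head
sudo du -xm --max-depth=1 /var/log | sort -rn | head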

-Shane

On Tue, Jul 11, 2017 at 9:22 PM, Adaryl Wakefield wrote:
I'm running a test cluster that normally has no data in it. Despite that, I've 
been getting warnings about disk space usage. Something is growing on disk and 
I'm not sure what. Are there scripts that I should be running to clean out logs 
or something? What is really interesting is that this is only affecting the 
name node and one data node. The other data node isn't having a space issue.

I'm running Hortonworks Data Platform 2.5 with HDFS 2.7.3 on CentOS 7. I 
thought it might be a Linux issue, but the problem is clearly confined to the 
parts of the disk taken up by HDFS.
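
For context, the kind of log cleanup being asked about might look like the hedged 
sketch below; the path and the 14-day retention are assumptions, so review anything 
like this before running it:

sudo find /var/log/hadoop -type f -name '*.log.*' -mtime +14 -print
# append -delete only after reviewing the list printed above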

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData





--
Philippe Kernévez

Directeur technique (Suisse),

Re: confirm subscribe to annou...@apache.org

2017-07-17 Thread iain wright
What is the value of ha.zookeeper.quorum in your configs?

The error message looks like there might be a trailing whitespace or other
character after 2181, preventing the string from being converted to an int.
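
For example, the offending value might look like the hypothetical snippet below, and 
piping the property through cat -A makes line endings and tabs visible (the 
core-site.xml path is an assumption about a typical install):

#   <property>
#     <name>ha.zookeeper.quorum</name>
#     <value>zk1:2181 ,zk2:2181,zk3:2181</value>    <- note the stray space after the first 2181
#   </property>
grep -n -A1 'ha.zookeeper.quorum' /etc/hadoop/conf/core-site.xml | cat -A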

-- 
Iain Wright


On Mon, Jul 17, 2017 at 4:42 AM, Donald Nelson 
wrote:

> Hello everyone,
>
> I am having a problem with Apache Hadoop 2.8 HA. I am able to do a manual
> failover, but I can't format my ZK node, and I receive the following errors:
>
>  sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@22ff4249
> 17/07/17 13:40:52 FATAL ha.ZKFailoverController: The failover controller encounters runtime error: java.lang.RuntimeException: java.lang.NumberFormatException: For input string: "2181 "
> 17/07/17 13:40:52 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
> java.lang.NumberFormatException: For input string: "2181 "
>         at java.lang.NumberFormatException.forInputString(Unknown Source)
>         at java.lang.Integer.parseInt(Unknown Source)
>         at java.lang.Integer.parseInt(Unknown Source)
>         at org.apache.zookeeper.client.ConnectStringParser.<init>(ConnectStringParser.java:72)
>         at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:443)
>         at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
>         at org.apache.hadoop.ha.ActiveStandbyElector.createZooKeeper(ActiveStandbyElector.java:706)
>         at org.apache.hadoop.ha.ActiveStandbyElector.connectToZooKeeper(ActiveStandbyElector.java:685)
>         at org.apache.hadoop.ha.ActiveStandbyElector.createConnection(ActiveStandbyElector.java:844)
>         at org.apache.hadoop.ha.ActiveStandbyElector.<init>(ActiveStandbyElector.java:271)
>         at org.apache.hadoop.ha.ActiveStandbyElector.<init>(ActiveStandbyElector.java:214)
>         at org.apache.hadoop.ha.ZKFailoverController.initZK(ZKFailoverController.java:354)
>         at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:195)
>         at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)
>         at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:175)
>         at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:171)
>         at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:455)
>         at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:171)
>         at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:193)
> 17/07/17 13:40:52 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down DFSZKFailoverController a
>
>
> I have one leader and two followers, with ZooKeeper running on port 2181.
> Any help would be appreciated.
>
> Regards,
>
> Donald Nelson
>
>
>
>
> On 07/17/2017 01:36 PM, announce-h...@apache.org wrote:
>
>> Hi! This is the ezmlm program. I'm managing the
>> annou...@apache.org mailing list.
>>
>> To confirm that you would like
>>
>> donald.nel...@uniscon.de
>>
>> added to the announce mailing list, please send
>> a short reply to this address:
>>
>> announce-sc.1500291382.enmadeoohcaimoamdnld-donald.nelson=uniscon...@apache.org
>>
>> Usually, this happens when you just hit the "reply" button.
>> If this does not work, simply copy the address and paste it into
>> the "To:" field of a new message.
>>
>> This confirmation serves two purposes. First, it verifies that I am able
>> to get mail through to you. Second, it protects you in case someone
>> forges a subscription request in your name.
>>
>> Please note that ALL Apache dev- and user- mailing lists are publicly
>> archived.  Do familiarize yourself with Apache's public archive policy at
>>
>>  http://www.apache.org/foundation/public-archives.html
>>
>> prior to subscribing and posting messages to annou...@apache.org.
>> If you're not sure whether or not the policy applies to this mailing list,
>> assume it does unless the list name contains the word "private" in it.
>>
>> Some mail programs are broken and cannot handle long addresses. If you
>> cannot reply to this request, instead send a message to
>>  and put the
>> entire address listed above into the "Subject:" line.
>>
>>
>> --- Administrative commands for the announce list ---
>>
>> I can handle administrative requests automatically. Please
>> do not send them to the list address! Instead, send
>> your message to the 

Re: Regarding Simulation of Hadoop

2017-07-17 Thread Ravi Prakash
Hi Vinod!

You can look at static code analysis tools; I'm sure there are ones
specific to security. I'd suggest you set up a Kerberized Hadoop
cluster first.
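
As a starting point, a hedged sketch of the core-site.xml settings that switch a
cluster to Kerberos authentication (the property names are the standard Hadoop ones;
the file location is an assumption about a typical install):

#   <property><name>hadoop.security.authentication</name><value>kerberos</value></property>
#   <property><name>hadoop.security.authorization</name><value>true</value></property>
grep -n -A1 'hadoop.security.auth' /etc/hadoop/conf/core-site.xml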

HTH
Ravi

On Sat, Jul 15, 2017 at 2:08 AM, vinod Saraswat  wrote:

> Dear Sir/Madam,
>
>
>
> I am Vinod Sharma (Research Scholar). My research is based on Hadoop
> security, and I want to perform a simulation of Hadoop. This simulation would
> check the current security of Hadoop. Please tell me whether this is possible
> and how I can perform it.
>
>
>
> *Thanks and Regards*
> *Vinod Sharma(Saraswat)*
> *Research Scholar*
> *Department of Computer Science,*
> *Career Point University of Kota, Rajasthan*
>


Re: confirm subscribe to annou...@apache.org

2017-07-17 Thread Donald Nelson

Hello everyone,

I am having a problem with Apache Hadoop 2.8 HA. I am able to do a manual 
failover, but I can't format my ZK node, and I receive the following errors:


 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@22ff4249
17/07/17 13:40:52 FATAL ha.ZKFailoverController: The failover controller encounters runtime error: java.lang.RuntimeException: java.lang.NumberFormatException: For input string: "2181 "
17/07/17 13:40:52 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
java.lang.NumberFormatException: For input string: "2181 "
        at java.lang.NumberFormatException.forInputString(Unknown Source)
        at java.lang.Integer.parseInt(Unknown Source)
        at java.lang.Integer.parseInt(Unknown Source)
        at org.apache.zookeeper.client.ConnectStringParser.<init>(ConnectStringParser.java:72)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:443)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
        at org.apache.hadoop.ha.ActiveStandbyElector.createZooKeeper(ActiveStandbyElector.java:706)
        at org.apache.hadoop.ha.ActiveStandbyElector.connectToZooKeeper(ActiveStandbyElector.java:685)
        at org.apache.hadoop.ha.ActiveStandbyElector.createConnection(ActiveStandbyElector.java:844)
        at org.apache.hadoop.ha.ActiveStandbyElector.<init>(ActiveStandbyElector.java:271)
        at org.apache.hadoop.ha.ActiveStandbyElector.<init>(ActiveStandbyElector.java:214)
        at org.apache.hadoop.ha.ZKFailoverController.initZK(ZKFailoverController.java:354)
        at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:195)
        at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)
        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:175)
        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:171)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:455)
        at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:171)
        at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:193)

17/07/17 13:40:52 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down DFSZKFailoverController a


I have one leader and two followers, with ZooKeeper running on port 2181. 
Any help would be appreciated.
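
For reference, a hedged sketch of the sequence once ha.zookeeper.quorum has been 
cleaned of stray whitespace (hdfs zkfc -formatZK is the standard command for the step 
that was failing; the config path is an assumption):

# 1. edit /etc/hadoop/conf/core-site.xml so every quorum entry is exactly host:2181 with no trailing space
# 2. re-run the HA formatting step:
hdfs zkfc -formatZK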


Regards,

Donald Nelson





RE: Unsubscribe

2017-07-17 Thread Brahma Reddy Battula
It doesn't work like that. Kindly drop a mail to 
"user-unsubscr...@hadoop.apache.org".



--Brahma Reddy Battula

From: Shawn Du [mailto:shawndow...@gmail.com]
Sent: 17 July 2017 13:30
To: user@hadoop.apache.org
Subject: Unsubscribe

Unsubscribe

Thanks
Shawn