Re: Unsubscribe

2015-12-02 Thread Sath
Namikaze 

I know it's a mailing list. I am not expecting you to do my dirty work. 
First of all, who are you? I didn't address you. 

   You have no right to use such language. I can sue you for this. 

   I get loads of unsubscribe emails every day. I could have reacted, but 
being professional means having empathy and understanding. I unsubscribed 
earlier as the Apache rules suggest, but it didn't take effect. 

   So I waited four months to send the request again; given the loads of 
unsubscribe emails, I thought that was probably the way to do it. 

 If this is your responsibility, then you should have changed the system 
for the better, not been abusive with foul language. 

That shows your character and unprofessional attitude. It turns out you made 
the same mistake by using foul language on the mailing list. What about that? 

You owe me an apology!

With all respect 
Sath 


Sent from my iPhone

> On Dec 2, 2015, at 1:21 AM, Namikaze Minato  wrote:
> 
> Hello. This is not a forum but a mailing list.
> 
> And I am saying this because you somehow expect people on the mailing list 
> to do your dirty work of unsubscribing you, when you can do it yourself?
> 
> I have already politely addressed dozens of other users in the same situation 
> as you; had you bothered reading my emails, you would have known how to 
> unsubscribe instead of polluting everyone's mailbox.
> 
> I could write to you once again how to do it, but I won't give you that 
> pleasure; look in your own mailbox for that piece of information.
> 
>> On 2 December 2015 at 04:48, Sath  wrote:
>> Namikaze 
>> 
>> It's not professional to use foul language in a forum like this. 
>> 
>> If you have an issue with my unsubscribe, address it politely. 
>> 
>> This shows what kind of people are heading this forum. 
>> 
>> With all due respect 
>> 
>> Sent from my iPhone
>> 
>>> On Dec 1, 2015, at 4:05 PM, Namikaze Minato  wrote:
>>> 
>>> ARE YOU FUCKING KIDDING ME
>>> 
>>>> On 2 December 2015 at 00:47, Sath  wrote:
>>>> Unsubscribe 
>>>> 
>>>> Sent from my iPhone
> 


Re: Unsubscribe

2015-12-01 Thread Sath
Namikaze 

It's not professional to use foul language in a forum like this. 

If you have an issue with my unsubscribe, address it politely. 

This shows what kind of people are heading this forum. 

With all due respect 

Sent from my iPhone

> On Dec 1, 2015, at 4:05 PM, Namikaze Minato  wrote:
> 
> ARE YOU FUCKING KIDDING ME
> 
>> On 2 December 2015 at 00:47, Sath  wrote:
>> Unsubscribe 
>> 
>> Sent from my iPhone


Unsubscribe

2015-12-01 Thread Sath
Unsubscribe 

Sent from my iPhone

> On Dec 1, 2015, at 3:27 PM, Aladdin Elston  wrote:
> 
> 
> Thank you, Aladdin
> 
> 
> From: Robert Schmidtke
> Date: Tuesday, December 1, 2015 at 7:21 AM
> To: "user@hadoop.apache.org"
> Subject: Re: HDFS TestDFSIO performance with 1 mapper/file of 16G per node
> 
> Hi Andreas,
> 
> thanks for your reply. I'm glad I'm not the only one running into this issue. 
> I'm currently in the process of profiling HDFS carefully and hopefully I'll 
> come across something. It would seem that the OS caches all writes to disk, 
> and there may be problems with HDFS being on-heap mostly (wild guess).
> 
> I am running on bare metal here, with Yarn being the only layer in between.
> 
> Robert
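Robert's "wild guess" about the OS caching writes can be checked directly on the DataNodes while the benchmark runs. This is a generic Linux sketch, not something from the thread: if the Dirty/Writeback figures balloon during the write phase, the kernel is absorbing the writes into the page cache rather than pushing them to disk synchronously.

```shell
# Watch how much dirty (not yet flushed to disk) data the page cache
# is holding while TestDFSIO writes:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Between runs, flush outstanding writes and drop clean caches so the
# read phase has to touch the disks again (requires root; only do this
# on a test cluster):
#   sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
```

Sampling the first command in a loop during the write phase makes the caching effect visible without any profiling tooling.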
> 
> 
>> On Tue, Dec 1, 2015 at 11:07 AM, Andreas Fritzler 
>>  wrote:
>> Hi Robert,
>> 
>> I'm experiencing the same effect regarding RAM consumption when running 
>> TestDFSIO with huge files. I couldn't really find out yet why this is 
>> happening, though. 
>> 
>> One question regarding your setup: Are you running your cluster on bare 
>> metal or virtualized? 
>> 
>> Regards,
>> Andreas
>> 
>>> On Fri, Nov 27, 2015 at 5:07 PM, Robert Schmidtke  
>>> wrote:
>>> Hi everyone,
>>> 
>>> I cannot seem to wrap my head around the TestDFSIO benchmark results. I 
>>> have a cluster of 8 nodes, the first one runs the NameNode and the 
>>> ResourceManager, the others each run a DataNode and a NodeManager. Each 
>>> node has 64G of RAM. I am using a block size for HDFS of 128M, and a 
>>> replication factor of 3. I'm running the benchmark with 7 map tasks, each 
>>> processing one file of 16G. The results are as follows:
>>> 
>>> For the write phase:
>>> 15/11/27 14:50:32 INFO mapreduce.Job: Counters: 49
>>> File System Counters
>>> FILE: Number of bytes read=626
>>> FILE: Number of bytes written=929305
>>> FILE: Number of read operations=0
>>> FILE: Number of large read operations=0
>>> FILE: Number of write operations=0
>>> HDFS: Number of bytes read=1673
>>> HDFS: Number of bytes written=120259084369
>>> HDFS: Number of read operations=31
>>> HDFS: Number of large read operations=0
>>> HDFS: Number of write operations=9
>>> Job Counters
>>> Launched map tasks=7
>>> Launched reduce tasks=1
>>> Data-local map tasks=7
>>> Total time spent by all maps in occupied slots (ms)=3816135
>>> Total time spent by all reduces in occupied slots (ms)=277826
>>> Total time spent by all map tasks (ms)=3816135
>>> Total time spent by all reduce tasks (ms)=277826
>>> Total vcore-seconds taken by all map tasks=3816135
>>> Total vcore-seconds taken by all reduce tasks=277826
>>> Total megabyte-seconds taken by all map tasks=3907722240
>>> Total megabyte-seconds taken by all reduce tasks=284493824
>>> Map-Reduce Framework
>>> Map input records=7
>>> Map output records=35
>>> Map output bytes=550
>>> Map output materialized bytes=662
>>> Input split bytes=889
>>> Combine input records=0
>>> Combine output records=0
>>> Reduce input groups=5
>>> Reduce shuffle bytes=662
>>> Reduce input records=35
>>> Reduce output records=5
>>> Spilled Records=70
>>> Shuffled Maps =7
>>> Failed Shuffles=0
>>> Merged Map outputs=7
>>> GC time elapsed (ms)=73040
>>> CPU time spent (ms)=754470
>>> Physical memory (bytes) snapshot=2070585344
>>> Virtual memory (bytes) snapshot=7448678400
>>> Total committed heap usage (bytes)=1346895872
>>> Shuffle Errors
>>> BAD_ID=0
>>> CONNECTION=0
>>> IO_ERROR=0
>>> WRONG_LENGTH=0
>>> WRONG_MAP=0
>>> WRONG_REDUCE=0
>>> File Input Format Counters
>>> Bytes Read=784
>>> File Output Format Counters
>>> Bytes Written=81
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: - TestDFSIO - : write
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: Date & time: Fri Nov 27 14:50:32 CET 2015
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: Number of files: 7
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: Total MBytes processed: 114688.0
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: Throughput mb/sec: 30.260630125485257
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: Average IO rate mb/sec: 31.812650680541992
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: IO rate std deviation: 6.862839589024802
>>> 15/11/27 14:50:32 INFO fs.TestDFSIO: Test exec time sec: 705.559
>>> 
>>> For the read phase:
>>> 15/11/27 14:56:46 INFO mapreduce.Job: Counters: 51
>>> File System Counters
>>> FILE: Number of bytes read=637
>>> FILE: Number of bytes written=929311
>>> FILE: Number of read operations=0
>>> FILE: Number of large read operations=0
>>> FILE: Number of write operations=0
>>> HDFS: Number of bytes read=120259085961
>>> HDFS: Number of bytes written=84
>>> HDFS: Number of read operations=38
>>> HDFS: Number of large read operations=0
>>> HDFS: Number of write operations=2
>>> Job Counters
>>> Killed map tasks=4
>>> Launched map tasks=11
>>> Launched reduce tasks=1
>>> Data-local map tasks=9
>>> Rack-local map tasks=2
>>> Total time spent by all maps in occupied slots (ms)=2096182
>>> Total time spent by all reduces in occupie