MapReduce Streaming on Solaris

2014-06-25 Thread Rich Haase
Hi all,

I have a 20-node cluster that is running on Solaris x86 (OpenIndiana).  I'm
not really familiar with OpenIndiana, having moved from Solaris to Linux
many years ago, but it's the OS of choice for the systems administrator at
my company.

Each worker has 24 x 700 GB drives, 24 cores and 96 GB of memory.

All Hadoop daemons are running with 2 GB of memory except for the jobtracker,
which has 4 GB.  I am only running 10 map and 8 reduce slots per cluster
node to leave plenty of memory free for streaming jobs.

Yesterday I was running some performance tests after noticing that all of
my team's streaming jobs were running appallingly slowly.  My test dataset is
a set of key/value pairs with a heavy skew towards one key.

My test program is word count implemented in each of the test languages.

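For reference, the Java version was a stock word count along these lines (a
sketch only; the actual test code isn't reproduced here, so treat the mapper
and reducer below as an approximation of what was run):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Emit (token, 1) for every whitespace-delimited token in the line.
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Sum the counts for each token; also usable as a combiner.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      total.set(sum);
      context.write(key, total);
    }
  }
}

The driver is just the stock word count driver (set the mapper, combiner,
reducer, and input/output paths), so it is omitted here.
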
Here were the results of my test:

Language               OS       Dataset                                      Size    Runtime
/usr/bin/python        Solaris  Counts of unique ids for a day in raw text   41 GB   0:46:08
/opt/local/bin/python  Solaris  Counts of unique ids for a day in raw text   41 GB   0:39:02
/usr/local/bin/python  Solaris  Counts of unique ids for a day in raw text   41 GB   0:35:31
ruby                   Solaris  Counts of unique ids for a day in raw text   41 GB   0:37:31
perl                   Solaris  Counts of unique ids for a day in raw text   41 GB   0:17:38
pig                    Solaris  Counts of unique ids for a day in raw text   41 GB   0:08:35
java                   Solaris  Counts of unique ids for a day in raw text   41 GB   0:03:44
python                 OS X     Counts of unique ids for a day in raw text   4.9 GB  0:09:07
ruby                   OS X     Counts of unique ids for a day in raw text   4.9 GB  0:08:21
perl                   OS X     Counts of unique ids for a day in raw text   4.9 GB  0:06:41
java                   OS X     Counts of unique ids for a day in raw text   4.9 GB  0:04:03

As you can see, the runtime deltas between streaming and Java M/R were far
worse than is generally quoted in the community, and definitely worse than I
have experienced on similar clusters running on Linux.  The performance of
the pseudo-distributed cluster on my MacBook is much closer to my
expectations for streaming.

My question to all of you is: has anyone run into performance problems with
streaming on Solaris?  If so, is there a remedy, other than running Hadoop on
Linux?

Cheers,

Rich


-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: ListWritable In Hadoop

2014-07-09 Thread Rich Haase
No, but Hadoop 2.2 has ArrayWritable.

https://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/io/ArrayWritable.html

Would you provide more information about the use case you are trying to
support?  There may be an alternative that will meet your needs.
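
For example, a common pattern (a minimal sketch, not tied to any particular
use case) is to subclass ArrayWritable so the element type is fixed, which is
needed whenever the array has to be deserialized as a map output or reduce
value:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.Text;

// A list-like Writable holding Text elements.  The no-arg constructor is what
// lets Hadoop re-create instances via readFields().
public class TextArrayWritable extends ArrayWritable {
  public TextArrayWritable() {
    super(Text.class);
  }

  public TextArrayWritable(Text[] values) {
    super(Text.class, values);
  }
}

A mapper or reducer can then emit, e.g.,
new TextArrayWritable(new Text[] { new Text("a"), new Text("b") }).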


On Thu, Jul 10, 2014 at 12:31 AM, unmesha sreeveni 
wrote:

> hi
>  Do we have a ListWritable in hadoop ?
>
> --
> *Thanks & Regards *
>
>
> *Unmesha Sreeveni U.B*
> *Hadoop, Bigdata Developer*
>
>
>


-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Upgrading from 1.1.2 to 2.2.0

2014-07-17 Thread Rich Haase
Has anyone upgraded directly from 1.1.2 to 2.2.0?  If so, is there anything
I should be concerned about?

Thanks,

Rich

-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: MR JOB

2014-07-18 Thread Rich Haase
File copy operations do not run as MapReduce jobs.  All hadoop fs commands
run as operations against HDFS and do not use MapReduce.


On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal 
wrote:

> Does the normal operations of hadoop such as uploading and downloading a
> file into the HDFS run as a MR job.
> If so why cant I see the job being run on my task tracker and job tracker.
> Thank you.
>



-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: MR JOB

2014-07-18 Thread Rich Haase
HDFS handles the splitting of files into multiple blocks.  It's a file
system operation that is transparent to the user.
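
To make that concrete, here is a minimal sketch of a programmatic put using
the Java FileSystem API (the paths are placeholders).  The client streams the
bytes to HDFS, and the DFS client and NameNode take care of chopping the
stream into blocks and placing replicas; no JobTracker or MapReduce task is
involved:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutExample {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml/hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Equivalent to `hadoop fs -put`: stream a local file into HDFS.
    fs.copyFromLocalFile(new Path("/local/path/data.csv"),
                         new Path("/user/example/data.csv"));
    fs.close();
  }
}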


On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal 
wrote:

> Rich Haase Thanks,
> But if the copy ops do not occur as a MR job then how does the splitting
> of a file into several blocks takes place.
>
>
> On Fri, Jul 18, 2014 at 10:24 PM, Rich Haase  wrote:
>
>> File copy operations do not run as map reduce jobs.  All hadoop fs
>> commands are run as operations against HDFS and do not use the MapReduce.
>>
>>
>> On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal <
>> dobhalashish...@gmail.com> wrote:
>>
>>> Does the normal operations of hadoop such as uploading and downloading a
>>> file into the HDFS run as a MR job.
>>> If so why cant I see the job being run on my task tracker and job
>>> tracker.
>>> Thank you.
>>>
>>
>>
>>
>> --
>> *Kernighan's Law*
>> "Debugging is twice as hard as writing the code in the first place.
>> Therefore, if you write the code as cleverly as possible, you are, by
>> definition, not smart enough to debug it."
>>
>
>


-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: unsubscribe

2014-07-18 Thread Rich Haase
To unsubscribe from this list send an email to user-unsubscribe@hadoop.apache.org.


On Fri, Jul 18, 2014 at 10:54 AM, Zilong Tan  wrote:

>
>
>


-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: No such file or directory

2014-07-29 Thread Rich Haase
Try the same commands, but set the config path.

e.g.

$ hadoop --config /path/to/hdfs/config/dir dfs ...


On Tue, Jul 29, 2014 at 12:16 PM, Bhupathi, Ramakrishna 
wrote:

>   Folks,
>
>
>
> Can you help me with this ? I am not sure why I am getting this  “No such
> file or directory” for every hdfs command I use. The Hadoop deamons are
> running.
>
>
>
> The /home/hduser/mydata directory has been formatted by the NameNode.
>
>
>
> hduser@hadoop-master:~/mydata$ hdfs dfs -ls ./hdfs/
>
> 14/07/29 18:12:48 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes
>
> where applicable
>
> ls: `./hdfs/': No such file or directory
>
> hduser@hadoop-master:~/mydata$ pwd
>
> /home/hduser/mydata
>
> hduser@hadoop-master:~/mydata$ hdfs dfs -ls /home/hduser/mydata
>
> 14/07/29 18:13:19 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes
>
> where applicable
>
> ls: `/home/hduser/mydata': No such file or directory
>
> hduser@hadoop-master:~/mydata$
>
>
>
> --RamaK
>



-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: Datanode can not start with error "Error creating plugin: org.apache.hadoop.metrics2.sink.FileSink"

2014-09-04 Thread Rich Haase
The reason you can't launch your datanode is:

*2014-09-04 10:20:01,677 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain*
*java.net.BindException: Port in use: 0.0.0.0:50075 *

It appears that you already have a datanode instance listening on port
50075, or you have some other process listening on that port.

The error you mentioned in the subject of your email is a warning message
and is caused by a file system permission issue:

*Caused by: java.io.FileNotFoundException: datanode-metrics.out (Permission
denied)*



On Wed, Sep 3, 2014 at 9:09 PM, ch huang  wrote:

> hi,maillist:
>
>i have a 10-worknode hadoop cluster using CDH 4.4.0 , one of my
> datanode ,one of it's disk is full
>
> , when i restart this datanode ,i get error
>
>
> STARTUP_MSG:   java = 1.7.0_45
> /
> 2014-09-04 10:20:00,576 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal
> handlers for [TERM, HUP, INT]
> 2014-09-04 10:20:01,457 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2014-09-04 10:20:01,465 WARN
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error creating sink
> 'file'
> org.apache.hadoop.metrics2.impl.MetricsConfigException: Error creating
> plugin: org.apache.hadoop.metrics2.sink.FileSink
> at
> org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:203)
> at
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:478)
> at
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:450)
> at
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:429)
> at
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:180)
> at
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:156)
> at
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
> at
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1792)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1728)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1925)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Error creating
> datanode-metrics.out
> at org.apache.hadoop.metrics2.sink.FileSink.init(FileSink.java:53)
> at
> org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
> ... 12 more
> Caused by: java.io.FileNotFoundException: datanode-metrics.out (Permission
> denied)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.(FileOutputStream.java:221)
> at java.io.FileWriter.(FileWriter.java:107)
> at org.apache.hadoop.metrics2.sink.FileSink.init(FileSink.java:48)
> ... 13 more
> 2014-09-04 10:20:01,488 INFO
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
> 2014-09-04 10:20:01,546 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 5 second(s).
> 2014-09-04 10:20:01,546 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
> started
> 2014-09-04 10:20:01,547 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is ch15
> 2014-09-04 10:20:01,569 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at
> /0.0.0.0:50010
> 2014-09-04 10:20:01,572 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 10485760 bytes/s
> 2014-09-04 10:20:01,607 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2014-09-04 10:20:01,657 INFO org.apache.hadoop.http.HttpServer: Added
> global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2014-09-04 10:20:01,660 INFO org.apache.hadoop.http.HttpServer: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context datanode
> 2014-09-04 10:20:01,660 INFO org.apache.hadoop.http.HttpServer: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context static
> 2014-09-04 10:20:01,660 INFO org.apache.hadoop.http.HttpServer: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
> context logs
> 2014-09-04 10:20:01,664 INFO
> org.apache.hadoop.hd

Re: Map job not finishing

2014-09-05 Thread Rich Haase
How many tasktrackers do you have set up for your single-node cluster?
Oozie runs each action as a Java program on an arbitrary cluster node, so
running a workflow requires a minimum of two tasktrackers.


On Fri, Sep 5, 2014 at 7:33 AM, Charles Robertson <
charles.robert...@gmail.com> wrote:

> Hi all,
>
> I'm using oozie to run a hive script, but the map job is not completing.
> The tracking page shows its progress as 100%, and there's no warnings or
> errors in the logs, it's just sitting there with a state of 'RUNNING'.
>
> As best I can make out from the logs, the last statement in the hive
> script has been successfully parsed and it tries to start the command,
> saying "launching job 1 of 3". That job is sitting there in the "ACCEPTED"
> state, but doing nothing.
>
> This is on a single-node cluster running Hortonworks Data Platform 2.1.
> Can anyone suggest what might be the cause, or where else to look for
> diagnostic information?
>
> Thanks,
> Charles
>



-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: Map job not finishing

2014-09-06 Thread Rich Haase
You're welcome.  Glad I could help.
On Sep 6, 2014 9:56 AM, "Charles Robertson" 
wrote:

> Hi Rich,
>
> Default setup, so presumably one. I opted to add a node rather than change
> the number of task trackers and it now runs successfully.
>
> Thank you!
> Charles
>
>
> On 5 September 2014 16:44, Rich Haase  wrote:
>
>> How many tasktrackers do you have setup for your single node cluster?
>>  Oozie runs each action as a java program on an arbitrary cluster node, so
>> running a workflow requires a minimum of two tasktrackers.
>>
>>
>> On Fri, Sep 5, 2014 at 7:33 AM, Charles Robertson <
>> charles.robert...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I'm using oozie to run a hive script, but the map job is not completing.
>>> The tracking page shows its progress as 100%, and there's no warnings or
>>> errors in the logs, it's just sitting there with a state of 'RUNNING'.
>>>
>>> As best I can make out from the logs, the last statement in the hive
>>> script has been successfully parsed and it tries to start the command,
>>> saying "launching job 1 of 3". That job is sitting there in the "ACCEPTED"
>>> state, but doing nothing.
>>>
>>> This is on a single-node cluster running Hortonworks Data Platform 2.1.
>>> Can anyone suggest what might be the cause, or where else to look for
>>> diagnostic information?
>>>
>>> Thanks,
>>> Charles
>>>
>>
>>
>>
>> --
>> *Kernighan's Law*
>> "Debugging is twice as hard as writing the code in the first place.
>> Therefore, if you write the code as cleverly as possible, you are, by
>> definition, not smart enough to debug it."
>>
>
>


Re: Regular expressions in fs paths?

2014-09-10 Thread Rich Haase
HDFS doesn't support the full range of glob matching you will find in Linux.
 If you want to exclude all files from a directory listing that meet certain
criteria, try doing your listing and using grep -v to exclude the matching
records.
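
If you would rather do the filtering in a Java client than pipe through grep,
the FileSystem API also accepts a PathFilter with its glob calls.  A minimal
sketch (the path pattern and the ".tmp" suffix are just placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class FilteredListing {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Drop anything whose name ends in ".tmp" from the glob results.
    PathFilter noTmpFiles = new PathFilter() {
      @Override
      public boolean accept(Path path) {
        return !path.getName().endsWith(".tmp");
      }
    };

    FileStatus[] matches = fs.globStatus(new Path("/data/2014/*"), noTmpFiles);
    if (matches != null) {
      for (FileStatus status : matches) {
        System.out.println(status.getPath());
      }
    }
  }
}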


Re: Hadoop Smoke Test: TERASORT

2014-09-10 Thread Rich Haase
You can set the number of reducers used in any hadoop job from the command
line by using -Dmapred.reduce.tasks=XX.

e.g.  hadoop jar hadoop-mapreduce-examples.jar terasort
-Dmapred.reduce.tasks=10  /terasort-input /terasort-output
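
The -D options are picked up by GenericOptionsParser, which the example jobs
get by running their drivers through ToolRunner.  If you want the same
behaviour in your own jobs, a minimal driver sketch (the class name and job
setup are placeholders) looks like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // getConf() already contains anything passed as -Dkey=value on the
    // command line, e.g. -Dmapred.reduce.tasks=10.
    Job job = Job.getInstance(getConf(), "my job");
    job.setJarByClass(MyDriver.class);
    // ... mapper/reducer/input/output setup elided ...
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner invokes GenericOptionsParser, which strips the -D options
    // out of args and folds them into the Configuration before run() is called.
    System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
  }
}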


Re: Writing output from streaming task without dealing with key/value

2014-09-10 Thread Rich Haase
In Python, or any streaming program, just set the output value to the empty
string and you will get something like "key"\t"".

On Wed, Sep 10, 2014 at 12:03 PM, Susheel Kumar Gadalay  wrote:

> If you don't want key in the final output, you can set like this in Java.
>
> job.setOutputKeyClass(NullWritable.class);
>
> It will just print the value in the output file.
>
> I don't how to do it in python.
>
> On 9/10/14, Dmitry Sivachenko  wrote:
> > Hello!
> >
> > Imagine the following common task: I want to process big text file
> > line-by-line using streaming interface.
> > Run unix grep command for instance.  Or some other line-by-line
> processing,
> > e.g. line.upper().
> > I copy file to HDFS.
> >
> > Then I run a map task on this file which reads one line, modifies it some
> > way and then writes it to the output.
> >
> > TextInputFormat suites well for reading: it's key is the offset in bytes
> > (meaningless in my case) and the value is the line itself, so I can
> iterate
> > over line like this (in python):
> > for line in sys.stdin:
> >   print(line.upper())
> >
> > The problem arises with TextOutputFormat:  It tries to split the
> resulting
> > line on mapreduce.output.textoutputformat.separator which results in
> extra
> > separator in output if this character is missing in the line, for
> instance
> > (extra TAB at the end if we stick to defaults).
> >
> > Is there any way to write the result of streaming task without any
> internal
> > processing so it appears exactly as the script produces it?
> >
> > If it is impossible with Hadoop, which works with key/value pairs, may be
> > there are other frameworks which work on top of HDFS which allow to do
> > this?
> >
> > Thanks in advance!
>



-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: Writing output from streaming task without dealing with key/value

2014-09-10 Thread Rich Haase
You can write a custom output format, or you can write your MapReduce job
in Java and use a NullWritable as Susheel recommended.

grep (and every other *nix text processing command I can think of) would not
be tripped up by a trailing tab character.  It's also quite easy to strip
away that tab character if you don't want it during the post-processing
steps you perform with *nix commands.
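
For the Java route, here is a minimal sketch of a map-only job along the lines
of your line.upper() example (class names and paths are placeholders).
Emitting NullWritable as the value makes TextOutputFormat write the line by
itself, with no separator appended:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class UpperCase {

  // Writes the transformed line as the key and NullWritable as the value, so
  // TextOutputFormat emits just the line -- no key/value separator, no tab.
  public static class UpperMapper
      extends Mapper<LongWritable, Text, Text, NullWritable> {
    private final Text line = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      line.set(value.toString().toUpperCase());
      context.write(line, NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "uppercase");
    job.setJarByClass(UpperCase.class);
    job.setMapperClass(UpperMapper.class);
    job.setNumReduceTasks(0);   // map-only: output goes straight to HDFS
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}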

On Wed, Sep 10, 2014 at 12:12 PM, Dmitry Sivachenko 
wrote:

>
> On 10 Sep 2014, at 22:05, Rich Haase  wrote:
>
> > In python, or any streaming program just set the output value to the
> empty string and you will get something like "key"\t"".
> >
>
>
> I see, but I want to use many existing programs (like UNIX grep), and I
> don't want to have and extra "\t" in the output.
>
> Is there any way to achieve this?  Or may be it is possible to write
> custom XxxOutputFormat to workaround that issue?
>
> (something opposite to TextInputFormat: it passes input line without any
> modification to script's stdin, there should be a way to write stdout to
> file "as is").
>
>
> Thanks!
>
>
> > On Wed, Sep 10, 2014 at 12:03 PM, Susheel Kumar Gadalay <
> skgada...@gmail.com> wrote:
> > If you don't want key in the final output, you can set like this in Java.
> >
> > job.setOutputKeyClass(NullWritable.class);
> >
> > It will just print the value in the output file.
> >
> > I don't how to do it in python.
> >
> > On 9/10/14, Dmitry Sivachenko  wrote:
> > > Hello!
> > >
> > > Imagine the following common task: I want to process big text file
> > > line-by-line using streaming interface.
> > > Run unix grep command for instance.  Or some other line-by-line
> processing,
> > > e.g. line.upper().
> > > I copy file to HDFS.
> > >
> > > Then I run a map task on this file which reads one line, modifies it
> some
> > > way and then writes it to the output.
> > >
> > > TextInputFormat suites well for reading: it's key is the offset in
> bytes
> > > (meaningless in my case) and the value is the line itself, so I can
> iterate
> > > over line like this (in python):
> > > for line in sys.stdin:
> > >   print(line.upper())
> > >
> > > The problem arises with TextOutputFormat:  It tries to split the
> resulting
> > > line on mapreduce.output.textoutputformat.separator which results in
> extra
> > > separator in output if this character is missing in the line, for
> instance
> > > (extra TAB at the end if we stick to defaults).
> > >
> > > Is there any way to write the result of streaming task without any
> internal
> > > processing so it appears exactly as the script produces it?
> > >
> > > If it is impossible with Hadoop, which works with key/value pairs, may
> be
> > > there are other frameworks which work on top of HDFS which allow to do
> > > this?
> > >
> > > Thanks in advance!
> >
> >
> >
> > --
> > Kernighan's Law
> > "Debugging is twice as hard as writing the code in the first place.
> Therefore, if you write the code as cleverly as possible, you are, by
> definition, not smart enough to debug it."
>
>


-- 
*Kernighan's Law*
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."


Re: UNSUBSCRIBE

2014-12-02 Thread Rich Haase
Email user-unsubscr...@hadoop.apache.org to unsubscribe.

Rich Haase | Sr. Software Engineer | Pandora
m 303.887.1146 | rha...@pandora.com

From: Naveen teja J N V <naveen.teja.j@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Tuesday, December 2, 2014 at 1:52 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: UNSUBSCRIBE

UNSUBSCRIBE
Thank you...!!


Re: to all this unsubscribe sender

2014-12-05 Thread Rich Haase
+1

Sent from my iPhone

On Dec 5, 2014, at 08:29, Ted Yu <yuzhih...@gmail.com> wrote:

+1

On Fri, Dec 5, 2014 at 7:22 AM, Aleks Laz <al-userhad...@none.at> wrote:

+1

On 05-12-2014 16:12, mark charts wrote:

I concur. Good idea.


On Friday, December 5, 2014 10:10 AM, Amjad Syed <amjad...@gmail.com> wrote:


My take on this is simple. The owner/maintainer / administrator of the list can 
implement a filter where if the subject of the email has word "unsubscribe" . 
That email gets blocked and sender gets automated reply stating that if you 
want to unsubscribe use the unsubscribe  list.
On 5 Dec 2014 18:05, "Niels Basjes" <ni...@basjes.nl> wrote:
Yes, I agree. We should accept people as they are.
So perhaps we should increase the hurdle to subscribe in the first place?
Something like adding a question like "What do you do if you want to 
unsubscribe from a mailing list?"

That way the people who are lazy or need hand holding are simply unable to 
subscribe, thus avoiding these people who send these "unsubscribe me" messages 
from ever entering the list.


On Fri, Dec 5, 2014 at 2:49 PM, mark charts <mcha...@yahoo.com> wrote:
Trained as an electrical engineer I learned again and again from my 
instructors: man is inherently lazy. Maybe that's the case. Also, some people 
have complicated lives and need hand holding. It is best to accept people as 
they are.


On Friday, December 5, 2014 8:40 AM, Chana <chana.ole...@gmail.com> wrote:


Actually, it's "capisce".

And, obviously, we are all here because we are learning. But, now that 
instructions have been posted again - I would expect that people would not 
expect to have their
hands held to do something so simple as "unsubscribe". They are supposedly 
technologists. I would think they would be able to do whatever research is 
necessary to
unsubscribe without flooding the list with requests. It's a fairly common 
operation for anyone who has been on the web for a while.



On Fri, Dec 5, 2014 at 7:35 AM, mark charts <mcha...@yahoo.com> wrote:
Not rude at all. We were not born with any knowledge. We learn as we go. We 
learn by asking. Capiche?


Mark Charts


On Friday, December 5, 2014 8:13 AM, Chana <chana.ole...@gmail.com> wrote:


THANK YOU for posting this Very rude indeed to not learn for oneself - as 
opposed to expecting to be "spoon fed" information like a small child.

On Fri, Dec 5, 2014 at 7:00 AM, Aleks Laz <al-userhad...@none.at> wrote:
Dear wished unsubscriber

Is it really that hard to start brain vX.X?

You or anyone which have used your email account, have subscribed to this list, 
as described here.

http://hadoop.apache.org/mailing_lists.html

You or this Person must unsubscribe at the same way as the subscription was, as 
described here.

http://hadoop.apache.org/mailing_lists.html

In the case that YOU don't know how a mailinglist works, please take a look 
here.

http://en.wikipedia.org/wiki/Electronic_mailing_list

Even the month is young there are a lot of

unsubscribe

in the archive.

http://mail-archives.apache.org/mod_mbox/hadoop-user/201412.mbox/thread

I'm new to this list but from my point of view it is very disrespectful to the 
list members and developers that YOU don't invest a little bit of time by your 
self to search how you can unsubscribe from a list on which YOU have subscribed 
or anyone which have used your email account.

cheers Aleks







--
Best regards / Met vriendelijke groeten,

Niels Basjes





Re: What happens to data nodes when name node has failed for long time?

2014-12-12 Thread Rich Haase
The remaining cluster services will continue to run.  That way, when the 
namenode (or any other failed process) is restored, the cluster will resume 
healthy operation.  This is part of Hadoop's ability to handle network 
partition events.

Rich Haase | Sr. Software Engineer | Pandora
m 303.887.1146 | rha...@pandora.com

From: Chandrashekhar Kotekar <shekhar.kote...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Friday, December 12, 2014 at 3:57 AM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: What happens to data nodes when name node has failed for long time?

Hi,

What happens if name node has crashed for more than one hour but secondary name 
node, all the data nodes, job tracker, task trackers are running fine? Do those 
daemon services also automatically shutdown after some time? Or those services 
keep running hoping for namenode to come back?

Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Re: Copying files to hadoop.

2014-12-17 Thread Rich Haase
Anil,

You have two main options:

  1.  Install the Hadoop software on OS X and add the configuration files 
appropriate for your sandbox, then use hdfs dfs -put <local file> <hdfs path>.
  2.  Set up your sandbox VM to share a directory between OS X and Linux.  All 
virtual machines that I know of support sharing a file system between the VM 
and host.  This is probably the easiest solution since it will allow you to see 
the files you have on OS X in your Linux VM and then use the 
hdfs/hadoop/yarn commands on Linux (which you already have configured).

Cheers,

Rich

Rich Haase | Sr. Software Engineer | Pandora
m 303.887.1146 | rha...@pandora.com

From: Anil Jagtap <anil.jag...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday, December 17, 2014 at 3:58 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Re: Copying files to hadoop.

Yes i can do that but I have connected from my mac os terminal to linux using 
ssh.
Now when I run LS command it shows me list of files & folders from Linux and 
not from Mac OS.
I have files which I need to put onto Hadoop directly from Mac OS.
So something like below.

From Mac OS Terminal:

[root@sandbox ~]#hadoop fs -put  

Hope my requirement is clear.

Rgds, Anil




On Thu, Dec 18, 2014 at 9:39 AM, johny casanova <pcgamer2...@outlook.com> wrote:
Hi Anil,

you can use the  hadoop fs put "file" or directory and that should add it to 
your hdfs


Date: Thu, 18 Dec 2014 09:29:34 +1100
Subject: Copying files to hadoop.
From: anil.jag...@gmail.com
To: user@hadoop.apache.org

Dear All,

I'm pretty new to Hadoop technology and Linux environment hence struggling even 
to find solutions for the basic stuff.

For now, Hortonworks Sandbox is working fine for me and i managed to connect to 
it thru SSH.

Now i have some csv files in my mac os folders which i want to copy onto 
Hadoop. As per my knowledge i can copy those files first to Linux and then put 
to Hadoop. But is there a way in which just in one command it will copy to 
Hadoop directly from mac os folder?

Appreciate your advices.

Thank you guys...

Rgds, Anil



Re: Copying files to hadoop.

2014-12-17 Thread Rich Haase
Anil,

Happy to help!

Cheers,
Rich

Rich Haase | Sr. Software Engineer | Pandora
m 303.887.1146 | rha...@pandora.com

From: Anil Jagtap <anil.jag...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday, December 17, 2014 at 4:35 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Re: Copying files to hadoop.

Hi Rich,

Yes infact i was too thinking the same but then somehow slipped of my mind. I 
guess the second option would be really great so i don't even need to build the 
complex and length commands. The shared folder will be anyways appear as local 
in vm.

Thanks a lot Rich.

Rgds, Anil


On Thu, Dec 18, 2014 at 10:03 AM, Rich Haase <rha...@pandora.com> wrote:
Anil,

You have two main options:

  1.  Install the Hadoop software on OS X and add the configuration files 
appropriate for your sandbox, then use hdfs dfs -put <local file> <hdfs path>.
  2.  Set up your sandbox VM to share a directory between OS X and Linux.  All 
virtual machines that I know of support sharing a file system between the VM 
and host.  This is probably the easiest solution since it will allow you to see 
the files you have on OS X in your Linux VM and then use the 
hdfs/hadoop/yarn commands on Linux (which you already have configured).

Cheers,

Rich

Rich Haase | Sr. Software Engineer | Pandora
m 303.887.1146 | rha...@pandora.com

From: Anil Jagtap <anil.jag...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday, December 17, 2014 at 3:58 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Re: Copying files to hadoop.

Yes i can do that but I have connected from my mac os terminal to linux using 
ssh.
Now when I run LS command it shows me list of files & folders from Linux and 
not from Mac OS.
I have files which I need to put onto Hadoop directly from Mac OS.
So something like below.

From Mac OS Terminal:

[root@sandbox ~]#hadoop fs -put  

Hope my requirement is clear.

Rgds, Anil




On Thu, Dec 18, 2014 at 9:39 AM, johny casanova <pcgamer2...@outlook.com> wrote:
Hi Anil,

you can use the  hadoop fs put "file" or directory and that should add it to 
your hdfs


Date: Thu, 18 Dec 2014 09:29:34 +1100
Subject: Copying files to hadoop.
From: anil.jag...@gmail.com
To: user@hadoop.apache.org

Dear All,

I'm pretty new to Hadoop technology and Linux environment hence struggling even 
to find solutions for the basic stuff.

For now, Hortonworks Sandbox is working fine for me and i managed to connect to 
it thru SSH.

Now i have some csv files in my mac os folders which i want to copy onto 
Hadoop. As per my knowledge i can copy those files first to Linux and then put 
to Hadoop. But is there a way in which just in one command it will copy to 
Hadoop directly from mac os folder?

Appreciate your advices.

Thank you guys...

Rgds, Anil



Re: Enable symlinks

2015-01-22 Thread Rich Haase
Hadoop does not currently support symlinks.  Hence the “Symlinks not supported” 
exception message.

You can follow progress on making symlinks production ready via this JIRA: 
https://issues.apache.org/jira/browse/HADOOP-10019

Cheers,

Rich

From: Tang <shawndow...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday, January 21, 2015 at 10:04 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Enable symlinks

org.apache.hadoop.ipc.RemoteException: Symlinks not supported



Re: Cloudera Manager Installation is failing

2015-03-02 Thread Rich Haase
Try posting this question on the Cloudera forum. http://community.cloudera.com/

On Mar 2, 2015, at 3:21 PM, Krish Donald <gotomyp...@gmail.com> wrote:

Hi,

I am trying to install Cloudera manager but it is failing and below is the log 
file:
I have uninstalled postgres and tried again but still the same error.

[root@nncloudera cloudera-manager-installer]# more 5.start-embedded-db.log
mktemp: failed to create file via template `/tmp/': Permission denied
/usr/share/cmf/bin/initialize_embedded_db.sh: line 393: $PASSWORD_TMP_FILE: 
ambiguous redirect
The files belonging to this database system will be owned by user 
"cloudera-scm".
This user must also own the server process.
The database cluster will be initialized with locale en_US.UTF8.
The default text search configuration will be set to "english".
fixing permissions on existing directory /var/lib/cloudera-scm-server-db/data 
... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 32MB
creating configuration files ... ok
creating template1 database in /var/lib/cloudera-scm-server-db/data/base/1 ... 
ok
initializing pg_authid ... ok
initdb: could not open file "" for reading: No such file or directory
initdb: removing contents of data directory 
"/var/lib/cloudera-scm-server-db/data"
Could not initialize database server.
  This usually means that your PostgreSQL installation failed or isn't working 
properly.
  PostgreSQL is installed using the set of repositories found on this machine. 
Please
  ensure that PostgreSQL can be installed. Please also uninstall any other 
instances of
  PostgreSQL and then try again., giving up


Please suggest.

Thanks
Krish



Re: Cloudera monitoring Services not starting

2015-03-05 Thread Rich Haase
Please ask Cloudera-related questions on Cloudera’s forums: community.cloudera.com

On Mar 5, 2015, at 11:56 AM, Krish Donald  wrote:

> Hi,
> 
> I have setup a 4 node cliuster , 1 namenode and 3 datanode using cloudera 
> manager 5.2 .
> But it is not starting Cloudra Monitorinf service and for Hosts health it is 
> showing unknown.
> 
> How can I disable Monitoring service completely and work with only cluster 
> different feature.
> 
> Thanks
> Krish



Re:

2015-05-07 Thread Rich Haase
I’m not a Sqoop user, but it looks like you have an error in your SQL.  -> 
Caused by: java.sql.SQLSyntaxErrorException: ORA-00907: missing right 
parenthesis

On May 7, 2015, at 11:34 PM, Kumar Jayapal <kjayapa...@gmail.com> wrote:

ORA-00907

Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com






Re:

2015-05-07 Thread Rich Haase
If Sqoop is generating the SQL for your import then you may have hit a bug in 
the way the SQL for Oracle is being generated.  I’d recommend emailing the 
Sqoop user mailing list: u...@sqoop.apache.org<mailto:u...@sqoop.apache.org>.

On May 7, 2015, at 11:45 PM, Rich Haase <rha...@pandora.com> wrote:

I’m not a Sqoop user, but it looks like you have an error in your SQL.  -> 
Caused by: java.sql.SQLSyntaxErrorException: ORA-00907: missing right 
parenthesis

On May 7, 2015, at 11:34 PM, Kumar Jayapal <kjayapa...@gmail.com> wrote:

ORA-00907

Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com





Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com






Re: Jr. to Mid Level Big Data jobs in Bay Area

2015-05-18 Thread Rich Haase
Hi Adam,

Questions about employment and career advice aren't appropriate for this, or 
any, Apache mailing list.  However, there are a number of forums on LinkedIn 
where this question will be much better received.

The Hadoop mailing lists get bombarded with questions from all over the 
ecosystem, many of them not related to Hadoop at all. The community is very 
protective (as you can see) of these forums, as they are often the best possible 
place to get and share expertise on the development and operation of Hadoop.

Thanks,

Rich



On May 18, 2015, at 5:19 AM, mark charts <mcha...@yahoo.com> wrote:

I agree.

I think its OK. But be advised there are jerks even in "sheep's clothing." 
Humans are good at complaining for no reason at all. Simply to hears their own 
voice, I suppose.



On Sunday, May 17, 2015 9:15 PM, Juan Suero <juan.su...@gmail.com> wrote:


Hes a human asking for human advice.. its ok methinks.
we should live in a more tolerant world.
Thanks.

On Sun, May 17, 2015 at 8:10 PM, Stephen Boesch <java...@gmail.com> wrote:
Hi,  This is not a job board. Thanks.

2015-05-17 16:00 GMT-07:00 Adam Pritchard <apritchard...@gmail.com>:

Hi everyone,

I was wondering if any of you know any openings looking to hire a big data dev 
in the Palo Alto area.

Main thing I am looking for is to be on a team that will embrace having a Jr to 
Mid level big data developer, where I can grow my skill set and contribute.


My skills are:

3 years Java
1.5 years Hadoop
1.5 years Hbase
1 year map reduce
1 year Apache Storm
<1 year Apache Spark (did a Spark Streaming project in Scala)

5 years PHP
3 years iOS development
4 years Amazon ec2 experience


Currently I am working in San Francisco as a big data developer, but the team 
I'm on is content leaving me work that I already knew how to do when I came to 
the team (web services) and I want to work with big data technologies at least 
70% of the time.


I am not a senior big data dev, but I am motivated to be and am just looking 
for an opportunity where I can work all day or most of the day with big data 
technologies, and contribute and learn from the project at hand.


Thanks if anyone can share any information,


Adam







Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com






Re: Move blocks between Nodes

2015-07-01 Thread Rich Haase
Hi Bing,

I would recommend that you add your 50 new nodes to the cluster and then 
decommission the 50 nodes you want to get rid of.  You can do the decommission 
in one operation (albeit a lengthy one) by adding the nodes you want to 
decommission to your HDFS exclude file and running `hdfs dfsadmin 
-refreshNodes`.  The decommission process will ensure that the data from your 
old nodes is redistributed across the rest of your cluster.

Cheers,

Rich

On Jul 1, 2015, at 3:30 AM, Bing Jiang <jiangbinglo...@gmail.com> wrote:

hi, guys.

I want to move all the blocks from 50 datanodes to another 50 new datanodes. 
There is a very easy idea that we can add the new 50 nodes to hadoop cluster 
firstly, then decommission the other 50 nodes one by one. But, I believe it is 
not an efficient way to reach the goal.
So I plan to get the help of  the idea of Hdfs balancer, which limits the 
movements  happen to the 100 nodes. But I need to disable write_op for those 
nodes to be decommissioned.

Is there a way to make DataNode in safe mode (read only) mode?

Of course, regarding to blocks movement between nodes,  Any thoughts will be 
appreciated.



--
Bing Jiang


Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com






Re: println in MapReduce job

2015-09-24 Thread Rich Haase
To unsubscribe from this list send an email to 
user-unsubscr...@hadoop.apache.org.

https://hadoop.apache.org/mailing_lists.html

On Sep 24, 2015, at 9:40 AM, sukesh kumar <s724...@gmail.com> wrote:

unsubscribe

On Thu, Sep 24, 2015 at 8:49 PM, xeonmailinglist <xeonmailingl...@gmail.com> wrote:

Hi,

  1.
I have this example of MapReduce [1], and I want to print info in the stdout 
and in a log file. It seems that the logs isn’t print anything. How can I make 
my class print these words?
  2.
I also have set in the yarn-site.xml to retain log. Although the logs are 
retained in the /app-logs dir, the userlogs dir is deleted at the end of the 
job execution. How can I make MapReduce to not delete files in the userlogs dir?

I am using Yarn.

Thanks,

[1] Wordcount example with just the map part.


import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyWordCount {
    public static class MyMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        Log log = LogFactory.getLog(MyWordCount.class);
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            System.out.println("HERRE");
            log.info("HERE");
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }

        public void run(Context context) throws IOException, InterruptedException {
            setup(context);
            try {
                while (context.nextKeyValue()) {
                    System.out.println("Key: " + context.getCurrentKey()
                            + " Value: " + context.getCurrentValue());
                    map(context.getCurrentKey(), context.getCurrentValue(), context);
                }
            } finally {
                cleanup(context);
            }
        }

        public void cleanup(Context context) {}
    }
}


[2] yarn-site.xml


<configuration>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>90</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
  </property>
</configuration>


​



--
Thanks & Best Regards
Sukesh Kumar



Re: unsubscribe

2015-10-13 Thread Rich Haase
Please see https://hadoop.apache.org/mailing_lists.html for unsubscribe 
instructions.

On Oct 13, 2015, at 3:55 AM, shanthi k <kshanthi...@gmail.com> wrote:


Who is thiz?

On 13 Oct 2015 15:18, "MANISH SINGLA" <coolmanishh...@gmail.com> wrote:


--
Regards
Manish Singla



Re: Unsubscribe

2015-12-02 Thread Rich Haase
Sath,

Please read the attached link for instructions to properly unsubscribe from the 
list.

https://hadoop.apache.org/mailing_lists.html

Sent from my iPhone

On Dec 1, 2015, at 7:48 PM, Sath <neiodavi...@gmail.com> wrote:

Namikaze

Its not professional to use fowl language in a forum like this.

If you have an issue with my unsubscribe address it politely..

This shows what kind of people are heading this forum.

With all due respect

Sent from my iPhone

On Dec 1, 2015, at 4:05 PM, Namikaze Minato <lloydsen...@gmail.com> wrote:

ARE YOU FUCKING KIDDING ME

On 2 December 2015 at 00:47, Sath <neiodavi...@gmail.com> wrote:
Unsubscribe

Sent from my iPhone



Re: 2 items - RE: Unsubscribe & A better ListSrv for beginners.

2015-12-02 Thread Rich Haase
Hi Carl,

I would recommend picking up a copy of the Tom White book Hadoop: The 
Definitive Guide, 
http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/1449311520.  It is 
very reasonably priced and has stayed well updated.  This will give you the 
introduction you need to start asking pertinent questions where you have 
confusion.


Cheers,

Rich


On Dec 2, 2015, at 1:01 PM, Boudreau, Carl <carl_boudr...@optum.com> wrote:

Dear Sys Admin,
Can the footer on the ListSrv be edited to include unsubscribe steps?

To the rest of the group,
I have been listen quietly on the side in an attempt to learn Hadoop prior 
to attempting the install and configuration process.  Unfortunately, most of 
the terms I am reading are going over my head (Microsoft/C# background, with a 
beginner Red Hat installer and user)  My Manager had asked I start looking into 
it, thus why I have joined this list.  But this group might be too high level 
for my needs.  Is there a better ListSrv for me to join?

If there is anyone here that has been where I have and got to the place I need 
to be; please feel free to connect with me.

Regards Carl

P.S. Sath,
  I am sorry you were sent the response; it was unwarranted.  Would you 
mind taking the high road and dropping it?  :)


From: Sath [mailto:neiodavi...@gmail.com]
Sent: Wednesday, December 02, 2015 2:45 PM
To: Namikaze Minato
Cc: user@hadoop.apache.org
Subject: Re: Unsubscribe

Namikaze

I know its a mailing list. I am not expecting you to do my dirty work. 
First of all who are you. I didn't address you.

   You have no right to use such language. I can sue you for this.

   I get every day loads of emails for unsubscribe. I could have reacted but 
being professional is to have an empathy for understanding. I unsubscribed 
earlier as suggested by the apache rules but it didn't unsubscribe.

   So i waited 4 months to send request again, as per loads of unsubscribe 
emails i thought probably its the way.

 If you are associated with this responsibility then you should have changed 
the system for better, not being abusive with fowl language.

That shows your character and unprofessional attitude. It turn out to be as you 
did the same mistake by using fowl language in email list. Whats about that.

You owe an apology!!!

With all respect
Sath

Sent from my iPhone

On Dec 2, 2015, at 1:21 AM, Namikaze Minato <lloydsen...@gmail.com> wrote:
Hello. This is not a forum but a mailing list.

And I am saying this because you somehow expect people in the mailing list will 
do your dirty work of unsubscribing you when you can do it yourself???

I have already addressed politely dozens of other users in the same situation 
as you, had you bothered reading my emails, you would have known how to 
unsubscribe instead of polluting everyone's mailbox.

I could write to you once again how to do it but I won't give you this pleasure 
and let you look into your own mailbox for that piece of information.

On 2 December 2015 at 04:48, Sath <neiodavi...@gmail.com> wrote:
Namikaze

Its not professional to use fowl language in a forum like this.

If you have an issue with my unsubscribe address it politely..

This shows what kind of people are heading this forum.

With all due respect

Sent from my iPhone

On Dec 1, 2015, at 4:05 PM, Namikaze Minato <lloydsen...@gmail.com> wrote:
ARE YOU FUCKING KIDDING ME

On 2 December 2015 at 00:47, Sath <neiodavi...@gmail.com> wrote:
Unsubscribe

Sent from my iPhone



This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.

Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com






Re: 2 items - RE: Unsubscribe & A better ListSrv for beginners.

2015-12-02 Thread Rich Haase
Here’s a link to the most recent edition: 
http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/1491901632/ref=dp_ob_title_bk

I didn’t realize the 4th edition was out. :)

On Dec 2, 2015, at 1:13 PM, Rich Haase <rha...@pandora.com> wrote:

Hi Carl,

I would recommend picking up a copy of the Tom White book Hadoop: The 
Definitive Guide, 
http://www.amazon.com/Hadoop-Definitive-Guide-Tom-White/dp/1449311520.  It is 
very reasonably priced and has stayed well updated.  This will give you the 
introduction you need to start asking pertinent questions where you have 
confusion.


Cheers,

Rich


On Dec 2, 2015, at 1:01 PM, Boudreau, Carl <carl_boudr...@optum.com> wrote:

Dear Sys Admin,
Can the footer on the ListSrv be edited to include unsubscribe steps?

To the rest of the group,
I have been listen quietly on the side in an attempt to learn Hadoop prior 
to attempting the install and configuration process.  Unfortunately, most of 
the terms I am reading are going over my head (Microsoft/C# background, with a 
beginner Red Hat installer and user)  My Manager had asked I start looking into 
it, thus why I have joined this list.  But this group might be too high level 
for my needs.  Is there a better ListSrv for me to join?

If there is anyone here that has been where I have and got to the place I need 
to be; please feel free to connect with me.

Regards Carl

P.S. Sath,
  I am sorry you were sent the response; it was unwarranted.  Would you 
mind taking the high road and dropping it?  ☺


From: Sath [mailto:neiodavi...@gmail.com]
Sent: Wednesday, December 02, 2015 2:45 PM
To: Namikaze Minato
Cc: user@hadoop.apache.org
Subject: Re: Unsubscribe

Namikaze

I know its a mailing list. I am not expecting you to do my dirty work. 
First of all who are you. I didn't address you.

   You have no right to use such language. I can sue you for this.

   I get every day loads of emails for unsubscribe. I could have reacted but 
being professional is to have an empathy for understanding. I unsubscribed 
earlier as suggested by the apache rules but it didn't unsubscribe.

   So i waited 4 months to send request again, as per loads of unsubscribe 
emails i thought probably its the way.

 If you are associated with this responsibility then you should have changed 
the system for better, not being abusive with fowl language.

That shows your character and unprofessional attitude. It turn out to be as you 
did the same mistake by using fowl language in email list. Whats about that.

You owe an apology!!!

With all respect
Sath

Sent from my iPhone

On Dec 2, 2015, at 1:21 AM, Namikaze Minato <lloydsen...@gmail.com> wrote:
Hello. This is not a forum but a mailing list.

And I am saying this because you somehow expect people in the mailing list will 
do your dirty work of unsubscribing you when you can do it yourself???

I have already addressed politely dozens of other users in the same situation 
as you, had you bothered reading my emails, you would have known how to 
unsubscribe instead of polluting everyone's mailbox.

I could write to you once again how to do it but I won't give you this pleasure 
and let you look into your own mailbox for that piece of information.

On 2 December 2015 at 04:48, Sath <neiodavi...@gmail.com> wrote:
Namikaze

Its not professional to use fowl language in a forum like this.

If you have an issue with my unsubscribe address it politely..

This shows what kind of people are heading this forum.

With all due respect

Sent from my iPhone

On Dec 1, 2015, at 4:05 PM, Namikaze Minato <lloydsen...@gmail.com> wrote:
ARE YOU FUCKING KIDDING ME

On 2 December 2015 at 00:47, Sath <neiodavi...@gmail.com> wrote:
Unsubscribe

Sent from my iPhone



This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.

Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com





Rich Haase | Sr. Software Engineer | Pandora
m (303) 887-1146 | rha...@pandora.com