Re: Regular expressions in fs paths?

2014-09-10 Thread Mahesh Khandewal
I want to unsubscribe from this mailing list

On Wed, Sep 10, 2014 at 4:42 PM, Charles Robertson 
charles.robert...@gmail.com wrote:

 Hi all,

 Is it possible to use regular expressions in fs commands? Specifically, I
 want to use the copy (-cp) and move (-mv) commands on all files in a
 directory that match a pattern (the pattern being all files that do not end
 in '.tmp').

 Can this be done?

 Thanks,
 Charles
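
A note on the question above: the FsShell commands accept glob patterns (*, ?, [...], {a,b}) in source paths rather than full regular expressions, so selecting files by pattern is easy, but excluding a suffix such as '.tmp' is usually done by listing the directory and filtering in the shell. A minimal sketch, with hypothetical /data/in and /data/out paths:

  # Globs work directly, e.g. copying only CSV files:
  hadoop fs -cp '/data/in/*.csv' /data/out/

  # Excluding a suffix: list, drop the 'Found N items' header, filter, copy.
  hadoop fs -ls /data/in | grep -v '^Found' | awk '{print $NF}' | \
    grep -v '\.tmp$' | while read f; do
      hadoop fs -cp "$f" /data/out/
    done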



Re: Unsubscribe

2014-09-03 Thread Mahesh Khandewal
unsubscribe


On Tue, Sep 2, 2014 at 1:09 AM, Sankar sankarram.p...@gmail.com wrote:




 Sent from my iPhone



Want to unsubscribe myself

2014-09-03 Thread Mahesh Khandewal



Re: What happens when .....?

2014-08-28 Thread Mahesh Khandewal
unsubscribe


On Thu, Aug 28, 2014 at 6:42 PM, Eric Payne eric.payne1...@yahoo.com
wrote:

 Or, maybe have a look at Apache Falcon:
 Falcon - Apache Falcon - Data management and processing platform
 http://falcon.incubator.apache.org/








From: Stanley Shi s...@pivotal.io
 To: user@hadoop.apache.org
 Sent: Thursday, August 28, 2014 1:15 AM
 Subject: Re: What happens when .....?

 Normally an MR job is used for batch processing, so I don't think this is a
 good use case for MR.
 Since you need to run the program periodically, you cannot submit a single
 MapReduce job to cover it.
 A possible way is to create a cron job that scans the folder size and submits
 an MR job when necessary.
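
A minimal sketch of that cron-driven approach (the directory, threshold, and job jar below are hypothetical, and the du output format can differ between releases; older 1.x shells use -dus instead of -du -s):

  #!/bin/bash
  # Run from cron: submit an MR job only when the HDFS directory grows
  # past a threshold.
  DIR=/data/incoming                        # hypothetical input directory
  THRESHOLD=$((10 * 1024 * 1024 * 1024))    # 10 GB, hypothetical

  # The summary size in bytes is the first field in recent fs shell output;
  # check the format on your release.
  SIZE=$(hadoop fs -du -s "$DIR" | awk '{print $1}')

  if [ "$SIZE" -ge "$THRESHOLD" ]; then
    # Only the files present at submission time become input splits.
    hadoop jar /usr/local/hadoop/myjob.jar com.example.MyJob "$DIR" /data/out
  fi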



 On Wed, Aug 27, 2014 at 7:38 PM, Kandoi, Nikhil nikhil.kan...@emc.com
 wrote:

 Hi All,

 I have a system where files arrive in HDFS at regular intervals, and I
 perform an operation every time the directory size goes above a particular
 point.
 My question is: when I submit a MapReduce job, would it only work on the
 files present at that point?

 Regards,
 Nikhil Kandoi






 --
 Regards,
 Stanley Shi






Does anyone have a .jar file of the ResourceAwareScheduler (i.e., the AdaptiveScheduler)?

2014-05-16 Thread Mahesh Khandewal
If someone has it, please mail it to me.
Thanks in advance.


Re: how to build .jar file for fair scheduler in hadoop 1.2.1

2014-05-03 Thread Mahesh Khandewal
Thanks a lot, sir. I will try.


On Sat, May 3, 2014 at 9:38 AM, Ted Yu yuzhih...@gmail.com wrote:

 Go to src/contrib/fairscheduler folder, run:

 ant jar

 You would find the jar file under build/contrib/fairscheduler/ of your
 workspace.

 Cheers
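
A minimal sketch of the full sequence, assuming the Hadoop 1.2.1 release tree sits under /usr/local/hadoop (the jar name can vary by release, and the contrib build may need the core classes compiled first, e.g. by running ant once from the top-level directory):

  cd /usr/local/hadoop/src/contrib/fairscheduler     # contrib module with the Fair Scheduler source
  ant jar                                            # build using the module's build.xml
  ls /usr/local/hadoop/build/contrib/fairscheduler/  # hadoop-fairscheduler-*.jar appears here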


 On Fri, May 2, 2014 at 8:43 PM, Mahesh Khandewal 
 mahesh.k@gmail.com wrote:

  http://stackoverflow.com/questions/23439931/how-to-create-a-jar-file-for-fairscheduler-in-hadoop-1-2-1

 I have Hadoop 1.2.1 installed on my single-node system. The path to
 Hadoop is /usr/local/hadoop. Now, how do I create a .jar file of the
 fair scheduler? From which directory do I need to create the .jar file, and
 how do I call the ant command? I am using ant version 1.7.





Re: how to build .jar file for fair scheduler in hadoop 1.2.1

2014-05-03 Thread Mahesh Khandewal
Sir, is it the same way to create a jar file for the Adaptive Scheduler, i.e.,
the Resource Aware Scheduler?


On Sat, May 3, 2014 at 9:38 AM, Ted Yu yuzhih...@gmail.com wrote:

 Go to src/contrib/fairscheduler folder, run:

 ant jar

 You would find the jar file under build/contrib/fairscheduler/ of your
 workspace.

 Cheers


 On Fri, May 2, 2014 at 8:43 PM, Mahesh Khandewal 
 mahesh.k@gmail.com wrote:

  http://stackoverflow.com/questions/23439931/how-to-create-a-jar-file-for-fairscheduler-in-hadoop-1-2-1

 I have Hadoop 1.2.1 installed on my single-node system. The path to
 Hadoop is /usr/local/hadoop. Now, how do I create a .jar file of the
 fair scheduler? From which directory do I need to create the .jar file, and
 how do I call the ant command? I am using ant version 1.7.





how to build .jar file for fair scheduler in hadoop 1.2.1

2014-05-02 Thread Mahesh Khandewal
http://stackoverflow.com/questions/23439931/how-to-create-a-jar-file-for-fairscheduler-in-hadoop-1-2-1

I have Hadoop 1.2.1 installed on my single-node system. The path to Hadoop
is /usr/local/hadoop. Now, how do I create a .jar file of the fair scheduler?
From which directory do I need to create the .jar file, and how do I call the
ant command? I am using ant version 1.7.


Fwd: Changing default scheduler in hadoop

2014-04-14 Thread Mahesh Khandewal
-- Forwarded message --
From: Mahesh Khandewal mahesh.k@gmail.com
Date: Mon, 14 Apr 2014 08:42:16 +0530
Subject: Re: Changing default scheduler in hadoop
To: user@hadoop.apache.org
Cc: Ekta Agrawal ektacloudst...@gmail.com,
common-u...@hadoop.apache.org common-u...@hadoop.apache.org,
hdfs-u...@hadoop.apache.org hdfs-u...@hadoop.apache.org

Hi, I have a patch file for the Resource Aware Scheduler:
MAPREDUCE-1380_1.1.patch.txt
https://docs.google.com/file/d/0B11rCKdcyN82d1FuZENwZm12MTg/edit?usp=drive_web
The patch contains both Java code and XML changes. How do I compile it
and create a jar file from this patch?
I want to replace the default FIFO scheduler and run this resource-aware
scheduler.
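
A minimal sketch of the usual way such a patch is applied and rebuilt, assuming a Hadoop 1.1.x source tree in ~/hadoop-1.1.2 and that the patch was generated against the source root (the -p level and the local path of the downloaded patch are guesses to adjust):

  cd ~/hadoop-1.1.2
  patch -p0 < ~/Downloads/MAPREDUCE-1380_1.1.patch.txt   # apply the scheduler patch
  ant jar                                                # rebuild; patched classes land under build/
  # If the scheduler is packaged as a contrib module, also run 'ant jar'
  # inside its src/contrib/<module> directory, as with the fair scheduler.

The resulting jar then goes on the JobTracker classpath (for example under lib/), and the scheduler class is selected via mapred.jobtracker.taskScheduler as described in the reply below.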


On Sun, Apr 13, 2014 at 10:54 PM, Harsh J ha...@cloudera.com wrote:

 Hi,

 On Sun, Apr 13, 2014 at 10:47 AM, Mahesh Khandewal
 mahesh.k@gmail.com wrote:
  Sir, I am using Hadoop 1.1.2.
  I don't know where the code of the default scheduler resides.

 Doing a simple 'find' in the source checkout for name pattern
 'Scheduler' should reveal pretty relevant hits. We do name our Java
 classes seriously :)


 https://github.com/apache/hadoop-common/blob/release-1.1.2/src/mapred/org/apache/hadoop/mapred/JobQueueTaskScheduler.java

  I want to change the default scheduler to the Fair Scheduler; how can I do this?

 You can override the 'mapred.jobtracker.taskScheduler' property in
 mapred-site.xml to specify a custom scheduler (or a
 supplied one, such as Fair
 [http://hadoop.apache.org/docs/r1.1.2/fair_scheduler.html] or Capacity
 [http://hadoop.apache.org/docs/r1.1.2/capacity_scheduler.html]
 Schedulers).

  And if I want to go back to the default scheduler, how can I do that?

 Remove the configuration override, and it will always go back to the
 default FIFO based scheduler, the same whose source has been linked
 above.

  I have been struggling for 4 months to get help with Apache Hadoop.

 Are you unsure about this?

 --
 Harsh J
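
For reference, a minimal sketch of the mapred-site.xml override Harsh describes, using the Fair Scheduler class that ships with the 1.x contrib module; removing the property again falls back to the default FIFO scheduler:

  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
  </property>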



Re: HDFS Installation

2014-04-13 Thread Mahesh Khandewal
Ekta, it may be an SSH problem. First check SSH.


On Sun, Apr 13, 2014 at 8:46 PM, Ekta Agrawal ektacloudst...@gmail.com wrote:

 I already used the same guide to install hadoop.

 If HDFS does not require anything except Hadoop single node
 installation then the installation part is complete.

 I tried running bin/hadoop dfs -mkdir /foodir
 bin/hadoop dfsadmin -safemode enter

 these commands are giving following exception:

 14/04/07 00:23:09 INFO ipc.Client: Retrying connect to server:localhost/
 127.0.0.1:54310. Already tried 9 time(s).
 Bad connection to FS. command aborted. exception: Call to localhost/
 127.0.0.1:54310 failed on connection exception:
 java.net.ConnectException: Connection refused

 Can somebody help me understand why this is happening?
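
A minimal sketch of the usual checks for this symptom ('Connection refused' on port 54310 generally means no NameNode is listening there, rather than a missing HDFS installation); the paths assume the single-node layout mentioned in this thread:

  jps                                                               # is a NameNode process listed at all?
  grep -A1 fs.default.name /usr/local/hadoop/conf/core-site.xml    # does the port match 54310?
  tail -n 50 /usr/local/hadoop/logs/hadoop-*-namenode-*.log        # why did the NameNode exit?
  ssh localhost                                                     # password-less SSH is needed by start-all.sh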





 On Sun, Apr 13, 2014 at 10:33 AM, Mahesh Khandewal mahesh.k@gmail.com
  wrote:

 I think HDFS comes with the Hadoop installation itself.
 You just need to run a script like
 bin/start-dfs.sh from $HADOOP_HOME.


 On Sun, Apr 13, 2014 at 10:27 AM, Ekta Agrawal 
 ektacloudst...@gmail.com wrote:

 Can anybody suggest a good tutorial for installing HDFS and working with it?

 I installed Hadoop on Ubuntu as a single node. I can see those services
 running.

 But how do I install and work with HDFS? Please give some guidance.






Re: HDFS Installation

2014-04-13 Thread Mahesh Khandewal
Hi Ekta,
When I had an SSH connection problem, I updated and then upgraded Ubuntu
(there are two commands for this; check Google). After that, try the SSH
commands again. Also, on YouTube there is a video for installing a single-node
setup along with a Pig setup; in that video you will get somewhat more
accurate single-node setup instructions. Moreover, what I found is that
watching videos and then working on the installation works better than
reading and installing.


On Sun, Apr 13, 2014 at 11:06 PM, Ekta Agrawal ektacloudst...@gmail.com wrote:

 Hi,

 I started with the ssh localhost command.
 Is anything else needed to check SSH?

 Then I stopped all the services that were running with stop-all.sh
 and started them again with start-all.sh.

 I have copied the way it executed on the terminal for some commands.

 I don't know why, after start-all.sh, it says it is starting the namenode and
 does not show any failure, but when I check with jps it does not list the
 namenode.

 I tried opening the namenode in the browser; it also does not open.


 

 This is the way it executed on the terminal:

 hduser@ubuntu:~$ ssh localhost
 hduser@localhost's password:
 Welcome to Ubuntu 12.04.2 LTS

  * Documentation:  https://help.ubuntu.com/

 459 packages can be updated.
 209 updates are security updates.

 Last login: Sun Feb  2 00:28:46 2014 from localhost




 hduser@ubuntu:~$ /usr/local/hadoop/bin/hadoop namenode -format
 14/04/07 01:44:20 INFO namenode.NameNode: STARTUP_MSG:
 /
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = ubuntu/127.0.0.1
 STARTUP_MSG:   args = [-format]
 STARTUP_MSG:   version = 1.0.3
 STARTUP_MSG:   build =
 https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
 /
 Re-format filesystem in /app/hadoop/tmp/dfs/name ? (Y or N) y
 Format aborted in /app/hadoop/tmp/dfs/name
 14/04/07 01:44:27 INFO namenode.NameNode: SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.0.1
 /


 hduser@ubuntu:~$ /usr/local/hadoop/bin/start-all.sh
 starting namenode, logging to /usr/local/hadoop/libexec/../
 logs/hadoop-hduser-namenode-ubuntu.out
 ehduser@localhost's password:
 hduser@localhost's password: localhost: Permission denied, please try
 again.
 localhost: starting datanode, logging to /usr/local/hadoop/libexec/../
 logs/hadoop-hduser-datanode-ubuntu.out
 hduser@
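
For what it's worth, a minimal sketch of the usual fixes for the two symptoms visible in this transcript: the format prompt only accepts an uppercase 'Y' (a lowercase 'y' aborts, leaving no name directory for the NameNode to start from), and start-all.sh needs password-less SSH to localhost:

  # Set up password-less SSH for the hduser account (default key paths).
  ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  ssh localhost                                  # should now log in without a password

  # Re-run the format, answering the prompt with a capital Y, then start and verify.
  /usr/local/hadoop/bin/hadoop namenode -format
  /usr/local/hadoop/bin/start-all.sh
  jps                                            # NameNode, DataNode, JobTracker, etc. should appear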





 On Sun, Apr 13, 2014 at 9:14 PM, Mahesh Khandewal 
 mahesh.k@gmail.com wrote:

 Ekta, it may be an SSH problem. First check SSH.


 On Sun, Apr 13, 2014 at 8:46 PM, Ekta Agrawal 
 ektacloudst...@gmail.com wrote:

 I already used the same guide to install hadoop.

 If HDFS does not require anything except Hadoop single node
 installation then the installation part is complete.

 I tried running bin/hadoop dfs -mkdir /foodir
 bin/hadoop dfsadmin -safemode enter

 these commands are giving following exception:

 14/04/07 00:23:09 INFO ipc.Client: Retrying connect to server:localhost/
 127.0.0.1:54310. Already tried 9 time(s).
 Bad connection to FS. command aborted. exception: Call to localhost/
 127.0.0.1:54310 failed on connection exception:
 java.net.ConnectException: Connection refused

 Can somebody help me understand why this is happening?





 On Sun, Apr 13, 2014 at 10:33 AM, Mahesh Khandewal 
 mahesh.k@gmail.com wrote:

 I think HDFS comes with the Hadoop installation itself.
 You just need to run a script like
 bin/start-dfs.sh from $HADOOP_HOME.


 On Sun, Apr 13, 2014 at 10:27 AM, Ekta Agrawal 
 ektacloudst...@gmail.com wrote:

 Can anybody suggest a good tutorial for installing HDFS and working with
 it?

 I installed Hadoop on Ubuntu as a single node. I can see those services
 running.

 But how do I install and work with HDFS? Please give some guidance.








Re: Changing default scheduler in hadoop

2014-04-13 Thread Mahesh Khandewal
Hi, I have a patch file for the Resource Aware Scheduler:
MAPREDUCE-1380_1.1.patch.txt
https://docs.google.com/file/d/0B11rCKdcyN82d1FuZENwZm12MTg/edit?usp=drive_web
The patch contains both Java code and XML changes. How do I compile it
and create a jar file from this patch?
I want to replace the default FIFO scheduler and run this resource-aware
scheduler.


On Sun, Apr 13, 2014 at 10:54 PM, Harsh J ha...@cloudera.com wrote:

 Hi,

 On Sun, Apr 13, 2014 at 10:47 AM, Mahesh Khandewal
 mahesh.k@gmail.com wrote:
  Sir, I am using Hadoop 1.1.2.
  I don't know where the code of the default scheduler resides.

 Doing a simple 'find' in the source checkout for name pattern
 'Scheduler' should reveal pretty relevant hits. We do name our Java
 classes seriously :)


 https://github.com/apache/hadoop-common/blob/release-1.1.2/src/mapred/org/apache/hadoop/mapred/JobQueueTaskScheduler.java

  I want to change the default scheduler to the Fair Scheduler; how can I do this?

 You can override the 'mapred.jobtracker.taskScheduler' property in
 mapred-site.xml to specify a custom scheduler (or a
 supplied one, such as Fair
 [http://hadoop.apache.org/docs/r1.1.2/fair_scheduler.html] or Capacity
 [http://hadoop.apache.org/docs/r1.1.2/capacity_scheduler.html]
 Schedulers).

  And if I want to go back to the default scheduler, how can I do that?

 Remove the configuration override, and it will always go back to the
 default FIFO based scheduler, the same whose source has been linked
 above.

  I have been struggling for 4 months to get help with Apache Hadoop.

 Are you unsure about this?

 --
 Harsh J



Re: HDFS Installation

2014-04-12 Thread Mahesh Khandewal
I think HDFS comes with the Hadoop installation itself.
You just need to run a script like
bin/start-dfs.sh from $HADOOP_HOME.


On Sun, Apr 13, 2014 at 10:27 AM, Ekta Agrawal ektacloudst...@gmail.com wrote:

 Can anybody suggest a good tutorial for installing HDFS and working with it?

 I installed Hadoop on Ubuntu as a single node. I can see those services
 running.

 But how do I install and work with HDFS? Please give some guidance.
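
Once the single-node cluster is up, a minimal sketch of basic HDFS usage (the directory and file names below are hypothetical); HDFS itself ships with Hadoop, so nothing separate needs to be installed:

  hadoop fs -mkdir /user/hduser/input                        # create a directory in HDFS
  hadoop fs -put localfile.txt /user/hduser/input/           # copy a local file into HDFS
  hadoop fs -ls /user/hduser/input                           # list it
  hadoop fs -cat /user/hduser/input/localfile.txt            # print its contents
  hadoop fs -get /user/hduser/input/localfile.txt ./copy.txt # copy it back to the local filesystem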



Changing default scheduler in hadoop

2014-04-12 Thread Mahesh Khandewal
Sir, I am using Hadoop 1.1.2.
I don't know where the code of the default scheduler resides.
I want to change the default scheduler to the Fair Scheduler; how can I do this?
And if I want to go back to the default scheduler, how can I do that?
I have been struggling for 4 months to get help with Apache Hadoop.
I am new to this mailing list.