>
>
> http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application
>
> What's the version of Hadoop on Linux?
>
> Cheers
>
> On Sun, Oct 11, 2015 at 11:36 AM, hadoop hive <hadooph...@gmail.c
Thanks Daniel.
On Mon, Oct 12, 2015 at 12:52 AM, Daniel Schulz <
danielschulz2...@hotmail.com> wrote:
> Hi Vikas,
>
> Thanks for reaching out to us.
>
> For local development, MRUnit is a game changer to me: you may speed up
> your development significantly thanks to better debugging features.
>
Try running fsck
On Wed, Jun 24, 2015 at 2:54 PM, Ja Sam ptrstp...@gmail.com wrote:
I had a running Hadoop cluster (version 2.2.0.2.0.6.0-76 from
Hortonworks). Yesterday a lot of things happened and at some point in time
we decided to reboot all datanodes one by one. Unfortunately the operator
You can use node.js for this.
On Tue, Jun 23, 2015 at 8:15 PM, Divya Gehlot divya.htco...@gmail.com
wrote:
Can you please elaborate more?
On 20 Jun 2015 2:46 pm, SF Hadoop sfhad...@gmail.com wrote:
Really depends on your requirements for the format of the data.
The easiest way I can
You can write a small shell script to do that :)
On Sun, Dec 28, 2014 at 3:49 AM, Anil Jagtap anil.jag...@gmail.com wrote:
Dear All,
Just wanted to know if there is a way to copy multiple files using hadoop
fs -put.
Instead of specifying individual names, I'd provide wildcards and have the respective
files
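A small shell sketch of the approach suggested above. The paths and glob are illustrative; note that the local shell already expands globs, so `hadoop fs -put /local/part-*.csv /dest` also works directly when the expanded list is not too long. `PUT` can be overridden (e.g. `PUT=echo`) for a dry run:

```shell
# put_matches DEST FILE... — run "hadoop fs -put" once per matching local file.
# Destination and glob below are illustrative examples, not from the thread.
PUT=${PUT:-hadoop fs -put}
put_matches() {
  local dest=$1; shift
  for f in "$@"; do
    $PUT "$f" "$dest"    # copies one local file into the HDFS destination
  done
}
# usage: put_matches /user/anil/input /local/data/part-*.csv
```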
Mark the property final and bounce the NameNode.
On Nov 20, 2014 3:42 PM, Tomás Fernández Pena tf.p...@usc.es wrote:
Hello everyone,
I've just installed Hadoop 2.5.1 from source code, and I have problems
changing the default block size. In my hdfs-site.xml file I've set the
property
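For reference, a block-size override in hdfs-site.xml for Hadoop 2.5.x would typically look like the fragment below; the value shown is only an example (`dfs.blocksize` superseded the older `dfs.block.size` key):

```xml
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value> <!-- 128 MB, the 2.x default; substitute your own value -->
</property>
```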
Hey,
1. Stop the datanode
2. Copy the blocks from one disk to another, on the same path
3. Run fsck so that the block metadata can be updated
You can find steps at www.bigdataboard.in
Thanks
Vikas srivastava
On Nov 7, 2014 7:56 AM, cho ju il tjst...@kgrid.co.kr wrote:
My Hadoop Cluster Version
Hadoop 1.1.2,
Hi Experts,
I am having an issue with Oozie with Kerberos implemented on the cluster. I am using
HDP 2.1.2 with Kerberos and Oozie 4. I am able to run Oozie successfully, but the
Oozie web console is not showing.
Does anyone have an Oozie setup with Kerberos on HDP 2.1.2?
Any help would be appreciated.
Thanks
Vikas
Hi folks,
Is there any way to make Hadoop re-read a config file like core-site.xml without
restarting the datanode,
e.g. like we do for the slaves or exclude file by running
hadoop dfsadmin -refreshNodes
Thanks
Vikas Srivastava
Move it to some tmp directory and delete the parent directory.
On Aug 20, 2014 4:23 PM, praveenesh kumar praveen...@gmail.com wrote:
Hi team
I am in weird situation where I have following HDFS sample folders
/data/folder/
/data/folder*
/data/folder_day
/data/folder_day/monday
/data/folder/1
How much memory does it have, and how many map and reduce slots have you set, with how
much heap size?
On Aug 11, 2014 11:17 AM, Sindhu Hosamane sindh...@gmail.com wrote:
Hello,
I have set up multiple datanodes on a single machine following the
instructions in
Remove the entry from dfs.exclude if there is any
On Aug 4, 2014 3:28 AM, S.L simpleliving...@gmail.com wrote:
Hi All,
I am trying to set up an Apache Hadoop 2.3.0 cluster. I have a master and
three slave nodes , the slave nodes are listed in the
$HADOOP_HOME/etc/hadoop/slaves file and I can
Can you check the ulimit for your user? That might be causing this.
On Aug 2, 2014 8:54 PM, Ana Gillan ana.gil...@gmail.com wrote:
Hi everyone,
I am having an issue with MapReduce jobs running through Hive being killed
after 600s timeouts and with very simple jobs taking over 3 hours (or
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 800
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
(DFSOutputStream.java:464)
Thanks a lot for your attention!
From: hadoop hive hadooph...@gmail.com
Reply-To: user@hadoop.apache.org
Date: Saturday, 2 August 2014 17:36
To: user@hadoop.apache.org
Subject: Re:
org.apache.hadoop.ipc.RemoteException
It should pick up whatever value you provide in your API, as long as
hdfs-site.xml doesn't have the replication parameter marked final.
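The "final" marker referred to above looks like this in hdfs-site.xml; when present, client-side overrides of the same key are ignored (value shown is an example):

```xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <final>true</final> <!-- clients cannot override a final property -->
</property>
```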
On Jul 29, 2014 1:26 PM, Satyam Singh satyam.si...@ericsson.com wrote:
Hello,
I have given dfs.replication=2 in hdfs-site.xml as:
property
You are looking for a local folder via HDFS; you need to create it on HDFS first, then
you can find it:
hadoop fs -mkdir /home/hduser/mydata
Then try ls
On Jul 29, 2014 11:47 PM, Bhupathi, Ramakrishna rama.kr...@hp.com wrote:
Folks,
Can you help me with this ? I am not sure why I am getting this
If you have 2 DNs live initially and replication set to 2, that is perfectly fine,
but you killed one DN... there is no place to put the second replica of new
files, or of old files, which causes the issue writing blocks.
On Jul 28, 2014 10:15 PM, Satyam Singh satyam.si...@ericsson.com wrote:
@vikas i
You need to add each disk to the dfs.datanode.data.dir parameter.
On Jul 28, 2014 5:14 AM, arthur.hk.c...@gmail.com
arthur.hk.c...@gmail.com wrote:
Hi,
I have installed Hadoop 2.4.0 with 5 nodes, each node physically has 4T
hard disk, when checking the configured capacity, I found it is about
Did you allow RPC and TCP communication in the security group which you
have added to your hosts?
Please also check your exclude file, and the third point is to increase your DN
heap size and restart it.
Thanks
On Jul 27, 2014 1:01 AM, Ed Sweeney ed.swee...@falkonry.com wrote:
All,
New AWS cluster
Hi folks,
I am a bit confused about the directories present inside the datanode data
directories, like
in_use.lock
detach
storage
current
Thanks
Use hadoop dfsadmin -safemode leave
Then you can delete
On Jul 2, 2014 6:37 PM, Chris Mawata chris.maw...@gmail.com wrote:
The NameNode is in safe mode so it is read only.
On Jul 2, 2014 2:28 AM, EdwardKing zhan...@neusoft.com wrote:
I want to remove a Hadoop directory, so I use hadoop fs
Try running fsck, it will also validate the block placement as well as
replication.
On Jun 27, 2014 6:49 AM, Kilaru, Sambaiah sambaiah_kil...@intuit.com
wrote:
My topology script is working fine for data I am writing to HDFS. My
question is how to make the
existing data topology-compliant?
You can keep the same on both, but in your case it won't be possible if you are
running two datanodes on one machine.
On Tue, May 27, 2014 at 3:31 PM, sindhu hosamane sindh...@gmail.com wrote:
Hello friends ,
I set up 2 datanodes on a single machine as mentioned in the
thread
RE:
Hi Folks,
Does anyone have an idea regarding starting and stopping services on all the nodes
in parallel? I am facing an issue: I have a big cluster, like 1000 nodes, and
I want to start and stop services; if I do it sequentially it will take
around 30-50 mins. What can I do to make it parallel? does
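One common way to parallelize this is sketched below, assuming passwordless SSH to the worker nodes. The slaves-file path and daemon command are illustrative, and `SSH` can be overridden (e.g. `SSH=echo`) for a dry run:

```shell
# for_all_slaves SLAVES_FILE CMD — run CMD on every listed host, up to 50 at a time.
SSH=${SSH:-ssh}
for_all_slaves() {
  # -a reads hostnames from the file, -P 50 runs up to 50 ssh sessions in parallel
  xargs -a "$1" -P 50 -I{} $SSH {} "$2"
}
# usage: for_all_slaves /etc/hadoop/conf/slaves "hadoop-daemon.sh stop datanode"
```

Dedicated tools like pdsh would do the same job with nicer output grouping.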
Hi Indrashish,
Can you please check whether your DN is accessible by the NN, and the other
thing is whether the NN IP is given in the DN's hdfs-site.xml, because if the DN is up and
running, the issue is that the DN is not able to attach to the NN to get registered.
You can add the DN to the include file as well.
thanks
Vikas
Hi Folks,
I want to use HBase for my data storage on top of HDFS. Please help me
find the best version I should use, like CDH4.
My data size would be around 500 GB - 5 TB.
My operations would be write-intensive.
Thanks
Hi Folks,
I am getting an error while starting Oozie:
ERROR: Oozie could not be started
REASON: org.apache.oozie.service.ServiceException: E0103: Could not load
service classes, Cannot create PoolableConnectionFactory (null, message
from server: Host 'Abc-new.corp.apple.com' is not allowed to
Here it looks like you are not using mapreduce.framework.name as yarn;
please resend it, we are unable to see the configuration.
On Wed, Jul 10, 2013 at 1:33 AM, Francis.Hu francis...@reachjunction.comwrote:
Hi,All
I have a hadoop-2.0.5-alpha cluster with 3 data nodes. I have
Check your hadoop-env.sh file and set the HADOOP_HOME path correctly.
On Wed, Mar 13, 2013 at 7:51 PM, Cyril Bogus cyrilbo...@gmail.com wrote:
Hi Nitin,
As part of my configuration I have set all the environment variables AND
added HADOOP_PREFIX. But the problem still persists, so I will just keep
Hi Folks,
I'm using Hortonworks HDP. Please let me know: if I need to change a
configuration in mapred-site.xml, do I need to do it manually, or do I need
a Puppet push?
If I need to push it by Puppet, then only on the application servers, or on all
the nodes as well as the Namenode, so that it will be in
-- Forwarded message --
From: hadoop hive hadooph...@gmail.com
Date: Fri, Mar 16, 2012 at 2:04 PM
Subject: Hive with JDBC
To: u...@hive.apache.org
Hi folks,
I'm facing a problem: when I fire a query through Java code, it
returns around half a million records, which makes
Hey folks,
I'm using Hadoop 0.20.2 + r911707. Please tell me the installation steps and how
to use Snappy for compression and decompression.
Regards
Vikas Srivastava
+Installation#SnappyInstallation-UsingSnappyforMapReduceCompression
best,
Alex
--
Alexander Lorenz
http://mapredit.blogspot.com
On Feb 27, 2012, at 7:16 AM, hadoop hive wrote:
Hey folks,
i m using hadoop 0.20.2 + r911707 , please tell me the installation and
how
to use snappy
for.
For storing snappy compressed files in HDFS you should use Pig or Flume.
--
Alexander Lorenz
http://mapredit.blogspot.com
On Feb 27, 2012, at 7:28 AM, hadoop hive wrote:
Thanks Alex,
I'm using Apache Hadoop; steps I followed:
1. untar snappy
2. entry in mapred-site
Hi Folks,
Right now I have replication factor 2, but I want to make it three
for some tables. How can I do that for specific tables, so that whenever
data is loaded into those tables it is automatically replicated
onto three nodes?
Or do I need to change replication for all the tables?
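At the HDFS level, replication can be raised per path rather than cluster-wide, so a table's warehouse directory can be targeted on its own. A sketch follows; the path and factor are illustrative, and `HDFS_CMD` can be overridden (e.g. `HDFS_CMD=echo`) for a dry run:

```shell
# set_table_rep PATH — recursively (-R) set replication factor 3 on one table
# directory and wait (-w) until the target replication is actually reached.
HDFS_CMD=${HDFS_CMD:-hadoop}
set_table_rep() {
  $HDFS_CMD fs -setrep -R -w 3 "$1"
}
# usage: set_table_rep /user/hive/warehouse/important_table
```

Note this only applies to existing files; newly loaded files still inherit the client's dfs.replication, so the command would need to be re-run (or scheduled) after loads.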
and
Did you check SSH to localhost? It should be
passwordless SSH to localhost:
the public key appended to authorized_keys.
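The passwordless-SSH setup above can be sketched as follows, assuming an RSA key with no passphrase; `SSHDIR` defaults to ~/.ssh and is overridable:

```shell
# setup_local_ssh — create a passphrase-less key (if missing) and authorize it
SSHDIR=${SSHDIR:-$HOME/.ssh}
setup_local_ssh() {
  mkdir -p "$SSHDIR" && chmod 700 "$SSHDIR"
  # generate a key only if one is not already present
  [ -f "$SSHDIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$SSHDIR/id_rsa"
  cat "$SSHDIR/id_rsa.pub" >> "$SSHDIR/authorized_keys"
  chmod 600 "$SSHDIR/authorized_keys"
}
# usage: setup_local_ssh && ssh localhost true   # should not prompt for a password
```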
On Thu, Feb 9, 2012 at 1:06 AM, Robin Mueller-Bady
robin.mueller-b...@oracle.com wrote:
Dear Guruprasad,
it would be very helpful to provide details from your
hey Luca,
you can use
conf.set("mapred.textoutputformat.separator", "");
hope it works fine
regards
Vikas Srivastava
On Thu, Feb 9, 2012 at 3:57 PM, Luca Pireddu pire...@crs4.it wrote:
Hello list,
I'm trying to specify from the command line an empty string as the
key-value separator for
hey Lac,
it looks like you don't have the DBS table in the metastore (Derby or MySQL);
actually you have to reinstall Hive, or build Hive again through Ant.
Check your metastore (whether that DBS table exists or not).
Thanks and regards,
Vikas Srivastava
On Fri, Feb 10, 2012 at 8:33 AM, Lac Trung
Hi Folks,
I added a node to the cluster and restarted the cluster, but it is taking much
time for all the servers to come live in the JobTracker UI; it is only showing the
newly added server in the cluster.
Is there any specific reason for this, or anything?
Thanks
Vikas Srivastava
is also adding
by their ip's
Regards
Vikas Srivastava
On Wed, Feb 8, 2012 at 11:28 AM, Harsh J ha...@cloudera.com wrote:
Hi,
Can you provide your tasktracker startup log as a pastebin.com link?
Also your JT log grepped for Adding a new node?
On Wed, Feb 8, 2012 at 11:13 AM, hadoop hive hadooph
hey folks,
I'm getting this error while running MapReduce; it comes up in the reduce
phase:
2012-02-03 16:41:19,780 WARN org.apache.hadoop.mapred.ReduceTask:
attempt_201201271626_5282_r_00_0 copy failed:
attempt_201201271626_5282_m_07_2 from hadoopdata3
2012-02-03 16:41:19,954 WARN
On Fri, Feb 3, 2012 at 4:56 PM, hadoop hive hadooph...@gmail.com wrote:
hey folks,
i m getting this error while running mapreduce and these comes up in
reduce phase..
2012-02-03 16:41:19,780 WARN org.apache.hadoop.mapred.ReduceTask:
attempt_201201271626_5282_r_00_0 copy failed
hey folks,
I'm getting an error when starting my datanode. Does anyone have an idea what
this error is about?
2012-02-03 11:57:02,947 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
10.0.3.31:50010, storageID=DS-1677953808-10.0.3.31-50010-1318330317888,
infoPort=50075,
Hey,
Can anyone help me with this? I have increased the reduce slowstart to 0.25,
but it still hangs after the copy phase.
Tell me what else I can change to make it work fine.
regards
Vikas Srivastava
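For reference, the slowstart knob discussed above is set in mapred-site.xml under the pre-YARN key (matching the 0.25 in the message; on YARN the equivalent key is mapreduce.job.reduce.slowstart.completedmaps):

```xml
<property>
  <name>mapred.reduce.slowstart.completed.maps</name>
  <value>0.25</value> <!-- reducers start once 25% of maps have completed -->
</property>
```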
On Wed, Jan 25, 2012 at 7:45 PM, praveenesh kumar praveen...@gmail.comwrote:
Yeah , I am doing
hey Hema,
I'm not sure, but the problem is with your HDFS name
hdfs://vm-acd2-4c51:54310/ ;
change your host name and it'll run fine. Especially remove '-' from the
hostname.
regards
Vikas Srivastava
On Tue, Jan 31, 2012 at 4:07 AM, Subramanian, Hema
hema.subraman...@citi.com wrote:
I am facing
then a couple of days.
On Friday, January 27, 2012, hadoop hive hadooph...@gmail.com wrote:
Hey Harsh,
but after some time they become available one by one in the JobTracker URL.
Any idea why they add up so slowly?
regards
Vikas
On Fri, Jan 27, 2012 at 5:05 PM, Harsh J ha...@cloudera.com
hey, there must be some problem with the key or value; the reducer didn't find the
expected value.
On Fri, Jan 27, 2012 at 1:23 AM, Rajesh Sai T tsairaj...@gmail.com wrote:
Hi,
I'm new to Hadoop. I'm trying to write custom data types as Writable
types, so that the Map class will produce my
Hey folks,
I'm facing a problem with the JobTracker URL. Actually, I added a node to the
cluster and after some time I restarted the cluster; then I found that my
JobTracker is showing the recently added node in *nodes* but the rest of the
nodes are not available, not even in *blacklist*.
Does anyone have any
no communication errors in their logs? Did you
perhaps bring up a firewall accidentally, that was not present before?
On Fri, Jan 27, 2012 at 4:47 PM, hadoop hive hadooph...@gmail.com wrote:
Hey folks,
i m facing a problem, with job Tracker URL, actually i added a node to
the
cluster
I faced the same issue, but after some time, when I balanced the cluster, the
jobs started running fine.
On Wed, Jan 25, 2012 at 3:34 PM, praveenesh kumar praveen...@gmail.comwrote:
Hey,
Can anyone explain to me what the reduce copy phase in the reducer section is?
The (K, List(V)) is passed to the
This problem arose after adding a node, so I started the balancer to make
it balanced.
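Balancing here refers to the HDFS balancer, which moves blocks around until every datanode's utilization is within a threshold of the cluster average. A sketch (the threshold is illustrative; `HADOOP_BIN` can be overridden, e.g. `HADOOP_BIN=echo`, for a dry run):

```shell
# run_balancer [THRESHOLD] — rebalance until each node is within THRESHOLD%
# of the cluster's average disk utilization (default 10).
HADOOP_BIN=${HADOOP_BIN:-hadoop}
run_balancer() {
  $HADOOP_BIN balancer -threshold "${1:-10}"
}
# usage: run_balancer 5
```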
On Wed, Jan 25, 2012 at 4:38 PM, praveenesh kumar praveen...@gmail.comwrote:
@hadoophive
Can you explain more by balance the cluster ?
Thanks,
Praveenesh
On Wed, Jan 25, 2012 at 4:29 PM, hadoop hive
Your JobTracker is not running.
On Wed, Jan 11, 2012 at 7:08 PM, praveenesh kumar praveen...@gmail.comwrote:
The JobTracker web UI suddenly stopped showing. It was working fine before.
What could be the issue? Can anyone guide me on how I can recover my web UI?
Thanks,
Praveenesh