ove in a loop,
I get,
java.lang.Exception: Shell Process Exception: Python HdfsError raised
> Traceback (most recent call last):
> File "Hdfsfile.py", line 49, in process
> writer.write(data)
> File "/home/ram/lib/python2.7/site-packages/hdfs/ext/avro/__init__.
Hi,
I want to submit a mapreduce job using rest api,
and get the status of the job every n interval.
Is there a way to do it?
Thanks
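A sketch of one way to do this (not from this thread): the YARN ResourceManager exposes a REST API, so you can submit via POST /ws/v1/cluster/apps and then poll GET /ws/v1/cluster/apps/{appid} on your interval. The host and application id below are hypothetical placeholders.

```python
import json
import time
import urllib.request

def app_status_url(rm_address, app_id):
    """Build the ResourceManager REST URL for one application's status."""
    return "http://%s/ws/v1/cluster/apps/%s" % (rm_address, app_id)

def poll_app_state(rm_address, app_id, interval_secs=30):
    """Poll the application state every interval until it is terminal."""
    terminal = {"FINISHED", "FAILED", "KILLED"}
    while True:
        with urllib.request.urlopen(app_status_url(rm_address, app_id)) as resp:
            state = json.load(resp)["app"]["state"]
        if state in terminal:
            return state
        time.sleep(interval_secs)

# Hypothetical ResourceManager host and application id:
print(app_status_url("rm-host:8088", "application_1400000000000_0001"))
```

Note that polling needs a reachable ResourceManager; the URL builder alone is shown running here.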
Hi,
I don't have much data, but it took around 40 minutes to decommission.
How long will it take to decommission a datanode?
Is there any way to optimize the process?
Thanks.
Hi,
Is there a java api to get decommission status for a particular data node?
Thanks.
unsubscribe
Anand,
Try the Oracle JDK instead of OpenJDK.
Regards,
Ramkumar Bashyam
On Wed, Apr 1, 2015 at 1:25 PM, Anand Murali anand_vi...@yahoo.com wrote:
Tried export in hadoop-env.sh. Does not work either
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)-
Hi Jonathan,
For the audit log, look at the log4j.properties file. By default, the
log4j.properties file has the log threshold set to WARN. By setting this
level to INFO, audit logging can be turned on. The following snippet shows
the log4j.properties configuration when HDFS and MapReduce audit logs
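As a sketch, the relevant log4j.properties lines typically look like the following (logger names taken from stock Hadoop distributions; verify them against your version):

```properties
# HDFS audit logging (NameNode)
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO
# MapReduce audit logging (JobTracker)
log4j.logger.org.apache.hadoop.mapred.AuditLogger=INFO
```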
Check http://hadoop.apache.org/mailing_lists.html#User
Regards,
Ramkumar Bashyam
On Sun, Feb 22, 2015 at 1:48 PM, Mainak Bandyopadhyay
mainak.bandyopadh...@gmail.com wrote:
unsubscribe.
Check http://hadoop.apache.org/mailing_lists.html#User
Regards,
Ramkumar Bashyam
On Mon, Feb 23, 2015 at 12:29 AM, Umesh Reddy ur2...@yahoo.com wrote:
unsubscribe
Check http://hadoop.apache.org/mailing_lists.html#User
Regards,
Ramkumar Bashyam
On Wed, Jan 7, 2015 at 7:01 PM, Kiran Prasad Gorigay
kiranprasa...@imimobile.com wrote:
unsubscribe
Email to user-unsubscr...@hadoop.apache.org to unsubscribe.
Regards,
Ramkumar Bashyam
On Wed, Dec 3, 2014 at 4:43 PM, chandu banavaram chandu.banava...@gmail.com
wrote:
please unsubscribe me
Check the status, and start the listener if it isn't running:
$ lsnrctl status
$ lsnrctl start
Regards
Ravi Magham
On Sat, Aug 31, 2013 at 2:05 PM, Krishnan Narayanan
krishnan.sm...@gmail.com wrote:
Hi Ram,
I get the same error. If you find an answer, please forward it to me; I will
do the same.
Thanks
The adapter could not
establish the connection.
Is there any workaround for this?
The query is:
sqoop import --connect
jdbc:oracle:thin:@//ramesh.ops.cloudwick.com/cloud --username ramesh
--password password --table cloud.test -m 1
The output is as follows:
[root@ramesh ram]# sqoop import --connect jdbc:oracle:thin
Hi,
Can anyone suggest the following:
how exactly does Impala work? What happens when you submit a query? How is
the data transferred to different nodes?
From,
Ramesh.
Hi,
I have installed Nagios and Hadoop 2.0.0. I want to integrate Hadoop
services, hosts, and parameters like total HDFS storage, available
HDFS storage, and the number of datanodes up and running, to get alerts.
Has anyone worked on this?
Thanks,
Ramesh.
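One common approach (a sketch, not from this thread) is a Nagios check script that parses the output of `hadoop dfsadmin -report`. The sample report text and thresholds below are made up for illustration; the real output format varies by Hadoop version, so check yours.

```python
import re

def parse_live_datanodes(report_text):
    """Extract the live-datanode count from dfsadmin -report style output."""
    m = re.search(r"Datanodes available:\s*(\d+)", report_text)
    return int(m.group(1)) if m else None

def nagios_status(live, warn_below, crit_below):
    """Map a live-node count to a Nagios exit code: 0=OK, 1=WARN, 2=CRIT."""
    if live is None or live < crit_below:
        return 2
    if live < warn_below:
        return 1
    return 0

# Made-up sample of the report header:
sample = ("Configured Capacity: 1000000 (976.56 KB)\n"
          "Datanodes available: 3 (3 total, 0 dead)\n")
print(parse_live_datanodes(sample), nagios_status(parse_live_datanodes(sample), 3, 2))
# → 3 0
```

A real check would run the dfsadmin command via subprocess and exit with the returned code so Nagios can alert on it.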
Hi,
Please replace 0.0.0.0 with your FTP host IP address and try it.
Hi,
From,
Ramesh.
On Mon, Jul 15, 2013 at 3:22 PM, Hao Ren h@claravista.fr wrote:
Thank you, Ram
I have configured core-site.xml as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href
Hi Jay,
What hadoop command are you using?
Hi,
From,
Ramesh.
On Fri, Jul 12, 2013 at 7:54 AM, Devaraj k devara...@huawei.com wrote:
Hi Jay,
Here client is trying to create a staging directory in local file
system, which actually should create in HDFS.
Hi,
Please configure the following in core-site.xml and try:
Use hadoop fs -ls file:/// to display local file system files.
Use hadoop fs -ls ftp://your-ftp-location to display FTP files; if
it lists the files, go ahead with distcp.
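A hedged core-site.xml sketch for Hadoop's built-in FTP filesystem (the fs.ftp.* property names come from FTPFileSystem; the host, user, and password values below are placeholders):

```xml
<property>
  <name>fs.ftp.host</name>
  <value>ftp.example.com</value>
</property>
<property>
  <name>fs.ftp.user.ftp.example.com</name>
  <value>ftpuser</value>
</property>
<property>
  <name>fs.ftp.password.ftp.example.com</name>
  <value>secret</value>
</property>
```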
Thanks
Devaraj k
*From:* Ramya S [mailto:ram...@suntecgroup.com ram...@suntecgroup.com]
*Sent:* 12 July 2013 14:46
*To:* user@hadoop.apache.org
*Subject:* Tasktracker in namenode failure
Hi,
Why does only the tasktracker on the namenode fail during a job?
Hi,
Go through the links.
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Managing-Clusters/cmmc_CM_architecture.html
Hi,
Please check that all the directories/files configured in
mapred-site.xml exist on the local system, and that permissions on those
files/directories are set with mapred as the user and hadoop as the group.
Hi,
From,
P.Ramesh Babu,
+91-7893442722.
On Wed, Jul 10, 2013 at 9:36 PM, Leonid Fedotov
Hi,
I am using Cloudera Manager 4.1.2, which does not include Hive as a
service, so I installed Hive and configured MySQL as the metastore. Using
Cloudera Manager I installed Hue. In Hue, Beeswax (the Hive UI) uses a
Derby database by default; I want to configure its metastore the same as what Hive is
Hi,
Can anyone give the procedure for running the Distributed Shell
example in Hadoop YARN, so that I can try to understand how the application
master really works?
Hi,
I receive the following error while starting datanode in secure
mode of hadoop 0.23
2011-12-14 14:35:48,468 INFO http.HttpServer
(HttpServer.java:addGlobalFilter(476)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer$
2011-12-14 14:35:48,471 WARN
)) -
Stopped Krb5AndCertsSslSocketConnector@0.0.0.0:1005
Using 0.0.0.0 as the Kerberos address can't work.
- Alex
On Tue, Dec 13, 2011 at 10:27 AM, sri ram rsriram...@gmail.com wrote:
Hi,
I receive the following error while starting datanode in secure
mode of hadoop 0.23
2011-12-14 14:35:48,468
Try:
telnet KERBEROS_IP 1005
As far as I know, Kerberos uses port 88 by default, AFS token 746, kx509
9878.
- Alex
On Tue, Dec 13, 2011 at 11:39 AM, sri ram rsriram...@gmail.com wrote:
Thanks for the reply,
I have also tried with the IPs of the individual
systems, but I get
the same
DataNode at master.example.com/147.128.152.179
On Tue, Dec 13, 2011 at 4:23 PM, sri ram rsriram...@gmail.com wrote:
Telnet to the master IP says "unable to connect to remote host".
This is my property for the datanode in secure mode:
<property>
  <name>dfs.datanode.https.address</name>
  <value>master:1005</value>
</property>
misconfigured, you have to setup a working environment.
- Alex
On Tue, Dec 13, 2011 at 12:02 PM, sri ram rsriram...@gmail.com wrote:
The following is the content of mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication
Hi,
I tried installing Hadoop 0.23 in secure mode. I am stuck
with the error:
java.lang.RuntimeException: Cannot start secure cluster without privileged
resources.
at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1487)
at
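The "Cannot start secure cluster without privileged resources" error above usually means the secure datanode is not binding privileged ports (below 1024) and being started via jsvc as root. A hedged hdfs-site.xml sketch (the port numbers are the conventional choices for secure mode, not values from this thread):

```xml
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
```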
Hi,
I am trying to form a hadoop cluster of 0.23 version in secure
mode.
While starting nodemanager i get the following error
2011-12-12 15:37:26,874 INFO ipc.HadoopYarnRPC
(HadoopYarnProtoRPC.java:getProxy(48)) - Creating a HadoopYarnProtoRpc
proxy for protocol interface
...@master.example.com configured in
your kerberos setup. I wonder how much traffic example.com gets on a
daily basis.
--Bobby Evans
On 12/12/11 4:15 AM, sri ram rsriram...@gmail.com wrote:
Hi,
I am trying to form a hadoop cluster of 0.23 version in secure
mode.
While starting
Hi,
I am trying to install Hadoop 0.23 and form a small cluster with 3
machines.
Whenever I try to start the nodemanager and resourcemanager, the
nodemanager fails to start with the following error log, and it fails
on both master and slaves.
2011-11-25 13:40:15,244
Hi,
I am using a standalone Linux machine. The namenode and datanode are running,
but when I try to access the UI in my browser it shows an "unable to
connect" error. I know it's a basic question; please help me. I have given
below the configuration I am using.
*Core-site.xml*
property
How can I remove multiple datanodes dynamically from the master node without
stopping it?
--
View this message in context:
http://old.nabble.com/Stopping-datanodes-dynamically-tp30804859p30804859.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
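The usual mechanism (a sketch, not from this thread) is an exclude file plus a refresh: list the datanodes to retire in a file referenced by dfs.hosts.exclude in hdfs-site.xml, then run `hadoop dfsadmin -refreshNodes`; the listed nodes are decommissioned without a cluster restart. The file path below is a placeholder.

```xml
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```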
Technical expert in EDW
Please send me your resume with contact information.
Thanks,
Ram Prakash
E-Solutionsin, Inc
ram.prak...@e-solutionsinc.com
www.e-solutionsinc.com
page to test your setup
Many usage examples can be found in the tests or you can ask me, I'll be
glad to help.
Yoram
On Fri, May 14, 2010 at 12:32 AM, Ram Kulbak ram.kul...@gmail.com wrote:
Hi Renato,
IHBASE is currently broken. I expect to have it fixed tomorrow or the day
after.
When
Hi Renato,
IHBASE is currently broken. I expect to have it fixed tomorrow or the day
after.
When it's fixed, I'll publish a release under
http://github.com/ykulbak/ihbase and add a wiki page explaining how to get
started. I'll also send a note to the mailing list.
Please feel free to contact me
Hi Lekhnath,
The IntSets are package protected so that their callers will always use the
IntSet interface, thus preventing manipulation of the IntSet after it was
built and hiding implementation details. It seems to me that having an index
which can spill to disk may be a handy feature, perhaps
Hi Shen,
The first thing you need to verify is that you can switch to the
IdxRegion implementation without problems. I've just checked that the
following steps work on the PerformanceEvaluation tables. I would
suggest you backup your hbase production instance before attempting
this (or create and
Posting a reply to a question I got off list:
Ram:
How do I specify index in HColumnDescriptor that is passed to modifyColumn()
?
Thanks
You will need to use an IdxColumnDescriptor:
Here's a code example for creating a table with a byte array index:
HTableDescriptor tableDescriptor
I think that the scanning logic was fixed in 0.20.3 (memstore is now cloned).
It's actually GETs that are still not atomic, try running
TestHRegion.testWritesWhileGetting while increasing numQualifiers to
1000.
Regards,
Yoram
On Wed, Jan 27, 2010 at 8:48 AM, Ryan Rawson ryano...@gmail.com wrote:
Hi Paul,
I've encountered the same problem. I think it's fixed as part of
https://issues.apache.org/jira/browse/HBASE-2037
Regards,
Yoram
On Wed, Dec 16, 2009 at 10:45 AM, Paul Ambrose pambr...@mac.com wrote:
I ran into some problems with FilterList and SingleColumnValueFilter.
I created a
Hi,
I've noticed that HBase 0.20.0-alpha comes with a non-official ZooKeeper jar
(zookeeper-r785019-hbase-1329.jar).
Can I deploy HBase 0.20.0-alpha with ZooKeeper 3.1.1?
Thanks,
Ram
Hi,
I can't find the classes TransactionalTable, IndexedTable, or any of the
indexed or transactional functionality in HBase 0.20.0-alpha; is this a
mistake?
Thanks,
Ram
by the Configuration class (line 1045), making sure
it loads the DocumentBuilderFactory bundled with the JVM and not a 'random'
classpath-dependent factory..
Hope this helps,
Ram
On Wed, Jun 24, 2009 at 6:42 PM, murali krishna muralikpb...@yahoo.comwrote:
Hi,
Recently migrated to hadoop-0.20.0 and I am
Anyone ?
Any help to understand this package is appreciated.
Thanks,
T
On Thu, May 28, 2009 at 3:18 PM, Tenaali Ram tenaali...@gmail.com wrote:
Hi,
I am trying to understand the code of index package to build a distributed
Lucene index. I have some very basic questions and would really
Thanks Jun!
On Fri, May 29, 2009 at 2:49 PM, Jun Rao jun...@almaden.ibm.com wrote:
Reply inlined below.
Jun
IBM Almaden Research Center
K55/B1, 650 Harry Road, San Jose, CA 95120-6099
jun...@almaden.ibm.com
Tenaali Ram tenaali...@gmail.com wrote on 05/28/2009 03:18:53 PM:
Hi,
I
Hi,
I am trying to understand the code of index package to build a distributed
Lucene index. I have some very basic questions and would really appreciate
if someone can help me understand this code-
1) If I already have Lucene index (divided into shards), should I upload
these indexes into HDFS
Hi,
I want to sort my records (consisting of string, int, float) using Hadoop.
One way I have found is to set the number of reducers to 1, but this would mean
all the records go to a single reducer, which is not optimal. Can anyone point
me to a better way to do sorting with Hadoop?
Thanks,
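The usual Hadoop answer here is a total-order partitioner: sample the keys, pick split points, route each key range to its own reducer, and concatenate the sorted reducer outputs. A minimal pure-Python simulation of that idea (not Hadoop code; the sample data is made up):

```python
import bisect

def pick_split_points(sample_keys, num_reducers):
    """Choose num_reducers-1 boundary keys from a sorted sample."""
    s = sorted(sample_keys)
    step = max(1, len(s) // num_reducers)
    return [s[i * step] for i in range(1, num_reducers)]

def partition(key, splits):
    """Reducer index for a key: count of split points <= key."""
    return bisect.bisect_right(splits, key)

def total_order_sort(records, num_reducers, key=lambda r: r):
    """Simulate range partitioning + per-reducer sort; output is globally sorted."""
    splits = pick_split_points([key(r) for r in records], num_reducers)
    buckets = [[] for _ in range(num_reducers)]
    for r in records:
        buckets[partition(key(r), splits)].append(r)
    out = []
    for b in buckets:          # each "reducer" sorts its own bucket
        out.extend(sorted(b, key=key))
    return out                 # concatenation of buckets is globally sorted

print(total_order_sort([5, 1, 9, 3, 7, 2, 8], 3))
# → [1, 2, 3, 5, 7, 8, 9]
```

In real Hadoop this is what TotalOrderPartitioner with InputSampler does: each reducer's output file is sorted, and the files concatenate into a total order.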
Hi,
I am new to Hadoop. What I have understood so far is that Hadoop is used to
process huge data sets using the map-reduce paradigm.
I am working on a problem where I need to perform a large number of
computations; most of them can be done independently of each other (so
I think each mapper can handle one