Please post on Ambari mailing list.
> On Apr 5, 2017, at 6:36 AM, "che...@showingdata.cn"
> wrote:
>
> Hi,
> Is there any documentation or procedure available for having Ambari manage
> an existing Hadoop cluster?
>
> Thanks in advance,
> Chen Yunfei
>
This question should be for Hive mailing list.
The exception message is quite clear: you're casting to ArrayList[]
On Wed, Feb 22, 2017 at 5:41 AM, merp queen wrote:
> Object values[] = (Object[])str.getValue(); // Target values
>
> I changed above line to below and
Can you phrase your post in English ?
2017-01-01 4:22 GMT-08:00 Philippe Kernévez :
> 1) Now that Knox is in place, I'd like to use it.
> In particular from an HDFS client.
> I can do (this works):
> a) HDFS over RPC against my active name node: "hdfs dfs -ls
There is nothing to worry about on your side.
I received such email too.
> On Sep 3, 2016, at 5:57 PM, Jonathan Aquilina wrote:
>
> Can someone tell me if the below is something to worry about as I hardly post
> to the list and I know when I have posted to the list
For 1) you don't have to introduce external storage.
You can define case classes for the known formats.
FYI
On Thu, Jul 7, 2016 at 4:40 PM, venito camelas
wrote:
> I'm pretty new to this and I have a use case I'm not sure how to
> implement, I'll try to explain it and
Viswanathan :
Please post the question on user@hbase mailing list.
On Mon, Apr 4, 2016 at 1:23 AM, Viswanathan J
wrote:
> Hi Rajat,
>
> Thanks for the update.
>
> HBase export will impact the cluster performance right?
>
> On Fri, Apr 1, 2016 at 5:24 PM, Rajat Dua
Here is isEmpty():
public boolean isEmpty() {
  return this.cells == null || this.cells.length == 0;
}
And getRow():
public byte [] getRow() {
  if (this.row == null) {
    this.row = (this.cells == null || this.cells.length == 0) ?
        null :
So row key would be null.
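The behavior above can be sketched in plain Java. This is a simplified stand-in for HBase's Result class, not the real implementation; each "cell" is reduced to its row-key bytes so the null-row behavior is easy to see:

```java
// Simplified stand-in for HBase's Result (not the real class): each "cell"
// is reduced to its row-key bytes so the null-row behavior is easy to see.
public class ResultSketch {
    private final byte[][] cells; // null or empty when the scan returned nothing
    private byte[] row;

    public ResultSketch(byte[][] cells) {
        this.cells = cells;
    }

    public boolean isEmpty() {
        return this.cells == null || this.cells.length == 0;
    }

    public byte[] getRow() {
        if (this.row == null) {
            // Mirrors the quoted getRow(): an empty Result has no row key.
            this.row = isEmpty() ? null : this.cells[0];
        }
        return this.row;
    }
}
```

Checking isEmpty() before calling getRow() avoids a NullPointerException downstream.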
FYI
You can include the following command line parameter when you build from
1.1.2 source:
-Dhadoop-two.version=2.7.1
FYI
On Thu, Feb 25, 2016 at 8:37 AM, Micha wrote:
> Hi,
>
> I ran into classpath problems while trying to use MiniDFSCluster:
>
> My code uses the hadoop
Have you looked at the examples ?
examples/src/main/java/org/apache/spark/examples/streaming/JavaTwitterHashTagJoinSentiments.java
examples/src/main/scala/org/apache/spark/examples/streaming/TwitterAlgebirdCMS.scala
Siva:
How did you build your project ?
You can use mvn dependency:tree to find out the version
of spark-streaming-twitter
or take a look at ./external/twitter/pom.xml
ls ~/.m2/repository/org/apache/spark/spark-streaming-twitter_2.10/
1.4.0-SNAPSHOT/ 1.5.0-SNAPSHOT/ 1.5.2/
The question is better suited for sqoop mailing list.
Meanwhile, you can search for past threads / JIRAs on this subject.
e.g.
http://search-hadoop.com/m/C63T91ojE6B14UByZ=sqoop+1+4+4+hive+import+error
> On Feb 2, 2016, at 5:28 AM, Arun Pandian wrote:
>
> How to
See this:
http://search-hadoop.com/m/uOzYtnmaGh23gOoB1=IMPORTANT+HOW+TO+UNSUBSCRIBE
> On Dec 27, 2015, at 2:32 AM, Carmen Manzulli wrote:
>
>
For #1, please see:
https://issues.apache.org/jira/browse/INFRA-10725
Unfortunately, as of yesterday this footer didn't work.
FYI
On Wed, Dec 2, 2015 at 1:01 PM, Boudreau, Carl
wrote:
> Dear Sys Admin,
>
> Can the footer on the ListSrv be edited to include
The INFRA JIRA was closed 2 days ago.
But the following post from today still doesn't carry footer:
http://search-hadoop.com/m/uOzYthBKLf2YvP0O1
FYI
On Thu, Nov 5, 2015 at 7:33 PM, Arpit Agarwal
wrote:
> Created https://issues.apache.org/jira/browse/INFRA-10725
>
>
>
See /usr/hdp/current/hadoop-hdfs-client/bin/hdfs which calls hdfs.distro
At the top of hdfs.distro, you would see the usage:
function print_usage(){
  echo "Usage: hdfs [--config confdir] COMMAND"
  echo "       where COMMAND is one of:"
  echo "  dfs                  run a filesystem command on
Abhishek:
Consider asking on user@hive for Metastore related questions.
On Mon, Oct 19, 2015 at 4:28 AM, Kiyoshi Mizumaru <
kiyoshi.mizum...@gmail.com> wrote:
> You need to deploy an odd number of ZooKeeper servers.
> The minimum is three, which is sufficient for small and mid-size clusters.
>
> Regards,
After a brief search, I found HADOOP-12194
Can you clarify what your goal is ?
Cheers
On Thu, Oct 15, 2015 at 2:42 PM, Haopeng Liu wrote:
> Hi All,
>
> I'm building hadoop system from the source code via maven. Does it support
> incremental build? Thank you!
>
> Best,
>
l be used in Hadoop-2.8.0
> version.
>
> I work on hadoop-0.23.1. How can I build old version hadoop incrementally?
>
> Regards
>
> On 10/15/15 4:44 PM, Ted Yu wrote:
>
> After a brief search, I found HADOOP-12194
>
> Can you clarify what your goal is ?
>
> Cheers
See ResourceManager REST API to submit new applications :
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application
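As a rough sketch of what a submission looks like: the helper below builds the endpoint and a minimal JSON body for the Submit Application call. Host, port, app name, and the application-id (normally obtained first via POST /ws/v1/cluster/apps/new-application) are placeholders, not values from this thread.

```java
// Hypothetical sketch: endpoint and minimal JSON body for the Cluster
// Applications API (Submit Application). Host, port, and application-id
// are placeholders; field names follow the Submit Application docs.
public class RmSubmitSketch {
    static String submitUrl(String rmHost, int rmPort) {
        return "http://" + rmHost + ":" + rmPort + "/ws/v1/cluster/apps";
    }

    static String minimalBody(String appId, String appName, String amCommand) {
        return "{\n"
            + "  \"application-id\": \"" + appId + "\",\n"
            + "  \"application-name\": \"" + appName + "\",\n"
            + "  \"am-container-spec\": {\n"
            + "    \"commands\": { \"command\": \"" + amCommand + "\" }\n"
            + "  },\n"
            + "  \"application-type\": \"YARN\"\n"
            + "}";
    }
}
```

The body would then be POSTed to the URL with Content-Type: application/json.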
What's the version of hadoop on Linux ?
Cheers
On Sun, Oct 11, 2015 at 11:36 AM, hadoop hive
Take a look at search-hadoop.com - you will easily find the projects which
are gaining momentum.
Do you use hadoop or any open source project(s) at work ?
If so, those projects would be good starting point.
On Sun, Oct 11, 2015 at 5:17 PM, snehil wakchaure
wrote:
> Hello,
>
Looks like htrace jar was missing from the classpath.
jar tvf htrace-core-3.1.0-incubating.jar | grep Trace
1187 Thu Jan 15 11:36:52 UTC 2015 org/apache/htrace/HTraceConfiguration$MapConf.class
3195 Thu Jan 15 11:36:52 UTC 2015 org/apache/htrace/HTraceConfiguration.class
5247 Thu Jan 15
>
> -Håvard
>
> On Sun, Sep 27, 2015 at 4:06 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> > H
> Decommission Status : Normal
>
> Configured Capacity: 2876708585472 (2.62 TB)
>
> DFS Used: 343284232192 (319.71 GB)
>
> Non DFS Used: 885193736192 (824.40 GB)
>
> DFS Remaining: 1648230617088 (1.50 TB)
>
> DFS Used%: 11.93%
>
> DFS Remaining%: 57.30%
>
> Last conta
Is the single node system secure ?
Have you checked hdfs healthiness ?
To which release of hbase were you importing ?
Thanks
> On Sep 27, 2015, at 3:06 AM, Håvard Wahl Kongsgård
> wrote:
>
> Hi, I am trying to import an old backup to a new smaller system (just
>
Which release of hadoop are you installing ?
Have you tried supplying -X switch to see debug logs ?
Cheers
On Sat, Sep 26, 2015 at 5:46 AM, Onder SEZGIN wrote:
> Hi,
>
> I am trying to build hadoop on windows.
> and i am getting the error below.
>
> Is there anyone who
wrote:
> It was the latest release.
> -X gives the same output because i had already supplied -e option while
> running mvn package command.
>
> On Saturday, September 26, 2015, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> Which release of hadoop are you installing ?
>>
>> Have
This question seems better suited for user@hive mailing list.
Cheers
On Fri, Sep 25, 2015 at 9:14 AM, Gangavarapu, Venkata <
venkata.gangavar...@bcbsa.com> wrote:
> Hi,
>
>
>
> I am trying to connect to Hive via Hiveserver2 using beeline.
>
>
>
> !connect
See http://hadoop.apache.org/mailing_lists.html#User
> On Sep 9, 2015, at 2:53 AM, YIMEN GAEL wrote:
>
>
gt; ee7d0634-89a3-4ada-a8ad-7848214397be, blockid:
>>>>>>>> BP-439084760-10.32.0.180-1387281790961:blk_1075349331_1612273,
>>>>>>>> duration: 276448307
>>>>>>>> 2015-09-03 12:03:56,494 INFO
>>>>>>>> org.apache.hadoop.hd
Just realized this discussion should be done on user@hbase mailing list.
Akmal:
If you still have questions, please start a thread there.
Thanks
On Fri, Sep 4, 2015 at 8:41 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> Can you do the following ?
>
> run hbck command to see if an
gt; server being checked: test-rs5,60020,1441261824016
> What else could be wrong?
>
> Thanks.
>
>
> On 03 Sep 2015, at 16:20, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Did the exception lead to hbase:meta assignment failure ?
>
> Please check region server log on 10.10.
at
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1076)
> at
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:727)
>
> Thanks
>
> > On 03 Sep 2015, at 15:50, Ted Yu <yuzhih...@gmail.com> wrote
Can you log into the master node and check master log ?
Pastebin snippet of master log if you need more help.
Cheers
> On Sep 3, 2015, at 6:24 AM, Akmal Abbasov wrote:
>
> Hi,
> I’m having problems accessing the HBase master web UI. When I try to access
> it, it
lk_1075277914_1540222, duration:
> 7881815
>
> p.s. we had to change the ip addresses of the cluster nodes, is it
> relevant?
>
> Thanks.
>
> On 02 Sep 2015, at 18:20, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Please provide some more information:
>
>
Please provide some more information:
release of hbase / hadoop you're using
were region servers doing compaction ?
have you checked region server logs ?
Thanks
On Wed, Sep 2, 2015 at 9:11 AM, Akmal Abbasov
wrote:
> Hi,
> I’m having strange behaviour in hbase
> behaviour started weeks before it.
>
> yes, 10.10.8.55 is region server and 10.10.8.54 is a hbase master.
> any thoughts?
>
> Thanks
>
> On 02 Sep 2015, at 18:45, Ted Yu <yuzhih...@gmail.com> wrote:
>
> bq. change the ip addresses of the cluster nodes
>
>
Please provide a bit more information:
release of hadoop you're using
snippet of the log showing the error
Normally from Web UI, you can retrieve the complete list of parameters
Cheers
On Sat, Aug 29, 2015 at 8:19 AM, José Luis Larroque larroques...@gmail.com
wrote:
Hi guys, i was removing
I looked at
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
but didn't find such switch.
FYI
On Wed, Aug 19, 2015 at 12:20 PM, Varun Sharma varun13...@gmail.com wrote:
Hi,
I am running a Distcp programmatically from Hadoop cluster to another -
using
Please take a look at HDFS-6133 which aims to help with hbase data locality.
It was integrated to hadoop 2.7.0 release.
FYI
On Thu, Jul 30, 2015 at 3:06 AM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
I am running HBase snapshot exporting, but I stopped it, and still the
capacity used is
From log below, hbase-rs4 was writing to the datanode.
Can you take a look at region server log and see if there is some clue ?
Thanks
On Jul 28, 2015, at 9:41 AM, Akmal Abbasov akmal.abba...@icloud.com wrote:
Hi, I’m observing strange behaviour in HDFS/HBase cluster.
The disk space of
You can use the following command to see options for gzip:
gzip -h
For snappy, see:
https://github.com/kubo/snzip
https://code.google.com/p/snappy/issues/detail?id=34
FYI
On Wed, Jul 29, 2015 at 3:34 PM, SP sajid...@gmail.com wrote:
Hi All,
I am working on comparing different compression
Which hadoop release are you using ?
Is the problem described in following thread similar to what you saw ?
http://search-hadoop.com/m/uOzYtS8XJosnjj2
Cheers
On Wed, Jul 29, 2015 at 6:49 AM, Arshad Ali Sayed
arshad.ali.sayed...@gmail.com wrote:
I have a scenario where I need to read file
Putting general@ to bcc.
If you plan to ask about Apache Hadoop setup issue, consider using user@
If you are installing distro from some vendor, please use vendor-specific
mailing list.
Cheers
2015-07-26 13:13 GMT-07:00 Serkan Taş serkan@likyateknoloji.com:
Hi,
Which mail list best
DFSClient#create() allows you to pass favored nodes.
Does that serve your needs ?
On Jul 19, 2015, at 7:30 PM, Shiyao Ma i...@introo.me wrote:
Hi,
I'd like to put my data selectively on some datanodes.
Currently I can do that by shutting down un-needed datanodes. But this is a
the
.tar.gz file.
On 18 Jul 2015 4:48 pm, Vikram Bajaj vikrambajaj220...@gmail.com
wrote:
Okay! Thanks :)
On 18 Jul 2015 4:39 pm, Ted Yu yuzhih...@gmail.com wrote:
http://apache.arvixe.com/hadoop/common/stable1/
The .tar.gz has source code.
On Jul 18, 2015, at 3:20 AM, Vikram Bajaj
http://apache.arvixe.com/hadoop/common/stable1/
The .tar.gz has source code.
On Jul 18, 2015, at 3:20 AM, Vikram Bajaj vikrambajaj220...@gmail.com wrote:
Hey,
I'm new to Hadoop, so please correct me if I'm wrong and bear with me :)
I want to build Hadoop 1.2.1 on my Ubuntu VM.
I'm
bq. IOException: Die Verbindung wurde vom Kommunikationspartner
zurückgesetzt
Looks like the above means 'The connection was reset by the communication
partner'
Which hadoop release do you use ?
Can you pastebin more of the datanode log ?
Thanks
On Fri, Jul 17, 2015 at 9:11 AM, marius
Looks like you may get more help from Hive mailing list:
http://hive.apache.org/mailing_lists.html
FYI
On Thu, Jul 16, 2015 at 4:15 PM, Kumar Jayapal kjayapa...@gmail.com wrote:
Hi,
How can we convert files stored in snappy compressed parquet format in
Hive to avro format.
is it possible
Take a look at http://hadoop.apache.org/mailing_lists.html
Cheers
On Tue, Jul 7, 2015 at 1:09 AM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
Can you check /var/log/messages to see if there is some clue ?
Which hadoop release are you using ?
Can you provide the command line for the resource manager ?
Thanks
On Wed, Jul 1, 2015 at 9:38 AM, xeonmailinglist-gmail
xeonmailingl...@gmail.com wrote:
I am running the hadoop MRv2 in a
Can you describe the hotspot problem in a bit more detail ?
Which scheduler are you using ?
Cheers
On Mon, Jun 15, 2015 at 5:42 PM, codercooler codercoo...@163.com wrote:
hey guys,
how is yarn assign its containers? is that a completely random behavior? I
use hadoop 2.7.0 and I got a
Can you post the complete stack trace for 'Failed to get FileSystem instance'
?
What's the permission for /apps/hbase/staging ?
Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a
lot of bug fixes since 0.98.4.
Please consider upgrading hbase
Cheers
On Fri, Jun 26, 2015 at
Do you have hbase running in your cluster ?
I ask this because bringing HBase as a new component into your deployment
incurs operational overhead which you may not be familiar with.
Cheers
On Sun, Jun 7, 2015 at 2:53 PM, Kiet Tran ktt...@gmail.com wrote:
Hi,
I have a roughly 5 GB file where
What OS are you using ?
Can you try netstat with this command ?
netstat -tulpn | grep 50070
On a system where DataNode is running as process 11272, I issued two
commands:
[root@c12 ~]# netstat -tulpn | grep 11272
tcp        0      0 0.0.0.0:50010        0.0.0.0:*        LISTEN      11272/java
Have you checked the link
http://192.168.56.101:9046/proxy/application_1432817967879_0003 ?
You should get some clue from the logs of the 2 attempts.
On Thu, May 28, 2015 at 6:42 AM, xeonmailinglist-gmail
xeonmailingl...@gmail.com
The system should have free inodes available.
See the following article:
https://wiki.gentoo.org/wiki/Knowledge_Base:No_space_left_on_device_while_there_is_plenty_of_space_available
On Wed, May 27, 2015 at 5:09 AM, Alexander Alten-Lorenz wget.n...@gmail.com
wrote:
FSError:
bq. All datanodes 112.123.123.123:50010 are bad. Aborting...
How many datanodes do you have ?
Can you check the datanode / namenode logs ?
Cheers
On Tue, May 26, 2015 at 5:00 PM, S.L simpleliving...@gmail.com wrote:
Hi All,
I am on Apache Yarn 2.3.0 and lately I have been seeing this exceptions
The fix is in the upcoming 2.7.1 release.
See this thread: http://search-hadoop.com/m/uOzYt0soQDrSOkY
On Mon, May 18, 2015 at 3:46 PM, Caesar Samsi caesarsa...@mac.com wrote:
Hello,
*DFSClient#getServerDefaults returns null within 1 hour of system start*
bq. java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but
interface was expected
Looks like the jline jar on classpath is incompatible with the one Hive was
built with.
BTW Hive user mailing list is better place to ask this question.
Cheers
On Thu, May 14, 2015 at 12:02 AM,
Is hbase-site.xml on the classpath ?
BTW please use hbase mailing list for hbase specific questions.
Cheers
On Wed, May 13, 2015 at 11:50 AM, Ibrar Ahmed ibrar.ah...@gmail.com wrote:
Hi,
I am creating a table using hive and getting this error.
[127.0.0.1:1] hive CREATE TABLE
Looks like a question for pig mailing list:
http://pig.apache.org/mailing_lists.html#Users
Cheers
On May 12, 2015, at 4:14 AM, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
I am running pig 0.14.0 on hadoop 2.6 pseudo mode. I would like to know,
where I can set job output path,
Have you asked this on sqoop mailing list ?
http://sqoop.apache.org/mail-lists.html
On Wed, May 6, 2015 at 1:56 PM, Kumar Jayapal kjayapa...@gmail.com wrote:
Hello All,
Can I use split-by option with multiple keys to import the data ?
please help and suggest me any link.
sqoop import
Send email to user-unsubscr...@hadoop.apache.org
On Wed, May 6, 2015 at 12:49 PM, Pastrana, Rodrigo (RIS-BCT)
rodrigo.pastr...@lexisnexis.com wrote:
unsubscribe
--
The information contained in this
e-mail message is
Send email to user-unsubscr...@hadoop.apache.org
On Tue, Apr 28, 2015 at 7:30 AM, Ram pramesh...@gmail.com wrote:
unsubscribe
Have you looked at http://nuage.cs.washington.edu/repository.php ?
Cheers
On Sat, Apr 25, 2015 at 2:43 AM, Lixiang Ao aolixi...@gmail.com wrote:
Hi all,
I'm looking for some real-world Mapreduce traces (jobhistory) to analyze
the characteristics, but I couldn't find any except for SWIM
Can you use 2.7.0 ?
http://search-hadoop.com/m/LgpTk2Kk956/Vinod+hadoop+2.7.0subj=+ANNOUNCE+Apache+Hadoop+2+7+0+Release
Cheers
On Apr 23, 2015, at 3:21 AM, Казаков Сергей Сергеевич skaza...@skbkontur.ru
wrote:
Hi!
We see some serious issues in HDFS of 2.6.0, which were, according to
Please send email to user-unsubscr...@hadoop.apache.org
On Thu, Apr 23, 2015 at 6:39 AM, Nandakumar Vadivelu
nandakumar.vadiv...@ericsson.com wrote:
unsubscribe
What release of hadoop are you using ?
Maybe try regex such as:
[Ugi]*Metrics[System]*
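A quick sanity check of that pattern against a couple of metric record names (the names below are illustrative, and full-string matching is assumed here for simplicity):

```java
import java.util.regex.Pattern;

// Demonstrates which record names the suggested pattern accepts.
// "UgiMetrics" and "MetricsSystem" both match; "JvmMetrics" does not,
// since 'J' and 'v' are outside the [Ugi] character class.
public class MetricsFilterDemo {
    static final Pattern FILTER = Pattern.compile("[Ugi]*Metrics[System]*");

    static boolean matches(String recordName) {
        return FILTER.matcher(recordName).matches();
    }
}
```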
Cheers
On Tue, Apr 21, 2015 at 9:43 AM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
Hi, I am now working on hadoop cluster monitoring, and currently playing
with hadoop-metrics2.properties file. I would
Dropping hive mailing list since you mentioned mapreduce in your email.
Can you give us a bit more detail on what you're trying to do ?
Cheers
On Sat, Apr 18, 2015 at 3:08 PM, shanthi k kshanthi...@gmail.com wrote:
I need mapreduce program in java for this input and output plz help
bq. add columns to the HBase table(from Hive)
Since desired approach is to add column through Hive, please consider Hive
mailing list as well.
Cheers
On Thu, Apr 16, 2015 at 10:45 AM, Chris Nauroth cnaur...@hortonworks.com
wrote:
Hello Manoj,
I recommend restarting this thread over at
Can you provide a bit more information please (such as hadoop release) ?
Using the following command you would get more clue on the cause (plug in
your app Id and container Id):
yarn logs -applicationId application_1386639398517_0007 -containerId
container_1386639398517_0007_01_19
Cheers
Can you tell us how you set up your project?
Do you use maven to build it ?
Please pastebin snippet of your code and the error you got.
Cheers
On Apr 12, 2015, at 4:16 AM, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
I am new to hadoop2.6.0. I managed to set up and install
See http://hadoop.apache.org/mailing_lists.html#User please
On Fri, Apr 3, 2015 at 9:22 AM, hi.ja...@gmail.com hi.ja...@gmail.com
wrote:
Unsubscribe
The error message is very clear: a class which extends Partitioner is
expected.
Maybe you meant to specify MyHashPartitioner ?
Cheers
On Wed, Apr 1, 2015 at 7:54 AM, xeonmailinglist-gmail
xeonmailingl...@gmail.com wrote:
Hi,
I have created a Mapper class[3] that filters out key values
Himawan:
Please see the following constants
in
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:
public static final String DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY =
    "dfs.namenode.heartbeat.recheck-interval";
public static final int
Please send email to user-unsubscr...@hadoop.apache.org
On Tue, Mar 24, 2015 at 4:56 PM, Cnewtonne cnewto...@gmail.com wrote:
Send email to user-unsubscr...@hadoop.apache.org
Cheers
On Mar 22, 2015, at 1:10 AM, harish.tange...@gmail.com
harish.tange...@gmail.com wrote:
Unsubscribe
Sent from Windows Mail
Please see http://slider.incubator.apache.org/mailing_lists.html
Let's continue discussion on the Slider dev mailing list.
On Fri, Mar 13, 2015 at 11:13 AM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi,
I am not aware of any Slider specific group, so I am posting it here.
Which hadoop release are you using ?
In branch-2, I see this IOE in BlockManager :
if (targets.length < minReplication) {
  throw new IOException("File " + src + " could only be replicated to "
      + targets.length + " nodes instead of minReplication (="
      + minReplication + ").
Here are some related discussions and JIRA:
http://search-hadoop.com/m/LgpTk2gxrGx
http://search-hadoop.com/m/LgpTk2YLArE
https://issues.apache.org/jira/browse/MAPREDUCE-6190
Cheers
On Sun, Mar 1, 2015 at 8:41 PM, Krish Donald gotomyp...@gmail.com wrote:
Hi,
Wanted to understand, How to
Krishna:
Please take a look at:
http://wiki.apache.org/hadoop/BindException
Cheers
On Thu, Feb 26, 2015 at 10:30 PM, hadoop.supp...@visolve.com wrote:
Hello Krishna,
Exception seems to be IP specific. It might have occurred due to the
unavailability of an IP address in the system to assign. Double
Looks like this question should be directed to oozie mailing list:
http://oozie.apache.org/mail-lists.html
Cheers
On Sat, Feb 28, 2015 at 12:20 PM, hitarth trivedi t.hita...@gmail.com
wrote:
Hi,
I downloaded the latest release oozie-4.0.1 . When I try to build it
locally using
Looks like this is related:
https://issues.apache.org/jira/browse/YARN-964
On Fri, Feb 27, 2015 at 4:29 AM, Nur Kholis Majid
nur.kholis.ma...@gmail.com wrote:
Hi All,
> I have many jobs failing because the AM tries to rerun the job at a very
> short interval (only 6 seconds). How can I add the interval
Please take a look at:
https://issues.apache.org/jira/browse/AMBARI-249
Ambari mailing list seems to be better place to ask this question.
Cheers
On Thu, Feb 26, 2015 at 7:21 AM, Steve Edison sediso...@gmail.com wrote:
Team,
I am using Ambari to install a cluster which now needs to be
Please take a look at:
http://www.tldp.org/LDP/sag/html/basic-ntp-config.html
On Thu, Feb 26, 2015 at 10:19 AM, tesm...@gmail.com tesm...@gmail.com
wrote:
Thanks Jan. I did the following:
1) Manually set the timezone of all the nodes using sudo
dpkg-reconfigure tzdata
2) Re-booted the
.
But is Slider not expected to start HBase by default without the above steps?
I remember to have read somewhere in HDP 2.2 manuals that Slider is the
default deployment mechanism for HBase.
Kishore
On Tue, Feb 24, 2015 at 8:02 AM, Ted Yu yuzhih...@gmail.com wrote:
Ambari 1.7.0 allows deploying hbase
Ambari 1.7.0 allows deploying hbase through Slider using Views.
Click on Views on the left hand side.
Expand Slider and click on 'Create Instance'
In the dialog, fill in Ambari Server Cluster REST API URL. e.g.
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1
After creating the View, select
As of Hadoop 2.6, the default block size is 128 MB (look for dfs.blocksize):
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
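As a small illustration of what the 128 MB default means for block counts (plain arithmetic, no Hadoop APIs involved):

```java
// Illustrates the 2.x default dfs.blocksize of 128 MB: a file occupies
// ceil(size / blockSize) blocks, and the last block may be partial.
public class BlockCountDemo {
    static final long DEFAULT_BLOCK_SIZE = 128L * 1024 * 1024; // 134217728 bytes

    static long numBlocks(long fileSizeBytes) {
        if (fileSizeBytes == 0) {
            return 0;
        }
        // Ceiling division without floating point.
        return (fileSizeBytes + DEFAULT_BLOCK_SIZE - 1) / DEFAULT_BLOCK_SIZE;
    }
}
```

So a 300 MB file would span 3 blocks: two full 128 MB blocks and one 44 MB partial block.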
Cheers
On Sun, Feb 22, 2015 at 11:11 AM, Krish Donald gotomyp...@gmail.com wrote:
Hi,
I have read somewhere that default block size
Rishabh:
You can start with:
http://wiki.apache.org/hadoop/HowToContribute
There're several components: common, hdfs, YARN, mapreduce, ...
Which ones are you interested in ?
Cheers
On Sat, Feb 21, 2015 at 12:18 AM, Bhupendra Gupta bhupendra1...@gmail.com
wrote:
I have been learning and trying
Please take a look at https://issues.apache.org/jira/browse/MAPREDUCE-5874
Cheers
On Feb 20, 2015, at 3:11 AM, xeonmailinglist xeonmailingl...@gmail.com
wrote:
Hi,
Is there a way to submit a job using the YARN REST API?
Thanks,
Take a look at http://hadoop.apache.org/mailing_lists.html#User
Cheers
On Thu, Feb 19, 2015 at 6:26 AM, Menno de Bruin menno.de.br...@gmail.com
wrote:
Dear Sir/Madame,
Please take me of the mail-list for now.
Thank you,
Menno
Looks like you may get good answer from Ambari mailing list.
http://ambari.apache.org/mail-lists.html
On Thu, Feb 12, 2015 at 9:24 PM, Adaryl Bob Wakefield, MBA
adaryl.wakefi...@hotmail.com wrote:
I’m trying to set up a Hadoop cluster but Ambari is giving me issues.
At the screen where it
The exception came from DomainSocket so using netstat wouldn't reveal the
conflict.
What's the output from:
ls -l /var/run/hdfs-sockets/datanode
Which hadoop release are you using ?
Cheers
On Tue, Feb 10, 2015 at 10:12 AM, Rajesh Thallam rajesh.thal...@gmail.com
wrote:
I have been repeatedly
For hadoop 2.6, you need to install protobuf 2.5.0 first.
You can use the following command to build (at root of workspace):
mvn clean verify -Pdist -Pnative -Dtar
Cheers
On Fri, Feb 6, 2015 at 8:38 AM, xeonmailinglist xeonmailingl...@gmail.com
wrote:
I want to compile MapReduce of the
Have you considered using Apache Phoenix ?
That way all your data is stored in one place.
See http://phoenix.apache.org/
Cheers
On Tue, Feb 3, 2015 at 6:44 PM, 임정택 kabh...@gmail.com wrote:
Hello all.
We periodically scan HBase tables to aggregate statistic information,
and store it to
Oozie handle this workflow?
On 2015년 2월 5일 (목) at 오전 5:03 Ted Yu yuzhih...@gmail.com wrote:
Have you considered using Apache Phoenix ?
That way all your data is stored in one place.
See http://phoenix.apache.org/
Cheers
On Tue, Feb 3, 2015 at 6:44 PM, 임정택 kabh...@gmail.com wrote:
Hello
See hadoop.apache.org/mailing_lists.html#User
On Wed, Feb 4, 2015 at 2:27 PM, Himanshu Vijay himansh...@gmail.com wrote:
Unsubscribe
LCE refers to Linux Container Executor
Please take a look at yarn-default.xml
Cheers
On Jan 28, 2015, at 12:49 AM, 임정택 kabh...@gmail.com wrote:
Hi!
At first, it was my mistake. :( All memory is in use.
Also I found each Container's information says that TotalMemoryNeeded 2048 /