Hi,
I have downloaded libhdfspp from the following link, and compiled it.
https://github.com/apache/hadoop
I found that some functions like hdfsWrite, hdfsHSync are not defined in
this library. Also, when I try to replace the old libhdfs.so with
this new library I see some exc
Hi,
I would like to use the compression support available in the native
libraries. I have tried Googling but couldn't figure out how to use the LZ4
compression algorithm to compress the data I am writing to HDFS files from
my C++ code.
Is it possible to get the data written in compre
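For reference: on the Java side Hadoop exposes LZ4 through
org.apache.hadoop.io.compress.Lz4Codec (it needs the native hadoop library
loaded); from C++ you would pair libhdfs/libhdfspp with the lz4 library
yourself. A minimal Java sketch, with an illustrative output path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Lz4Codec;
import org.apache.hadoop.util.ReflectionUtils;

public class Lz4WriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Lz4Codec codec = ReflectionUtils.newInstance(Lz4Codec.class, conf);
    // everything written through this stream lands in HDFS LZ4-compressed
    CompressionOutputStream out =
        codec.createOutputStream(fs.create(new Path("/tmp/data.lz4")));
    out.write("payload".getBytes("UTF-8"));
    out.close();
  }
}

The same pattern works for any other CompressionCodec implementation.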
Hi,
Can you give me a step-by-step process to upgrade Hadoop from 2.5.1
to 2.7.3 (running HBase 1.2.5)?
We have a cluster setup of
3 JournalNodes
1 active and 1 backup NameNode
15 DataNodes
Please let me know how to *upgrade without downtime*
--
*REGARDS*
*KRISHNA V
unsubscribe
Krishna Venkatrama(KK)
9049052189
IT Shared Services
Sr. Information Architect
Hadoop Data Platform Architect
I am unsubscribing from this.
Please add kkvsh...@yahoo.com to the user group.
Krishna Venkatrama(KK)
9049052189
IT Shared Services
Sr. Information Architect
Hadoop Data Platform Architect
Me? I never asked to unsubscribe.
From: Eric Payne
Sent: Wednesday, May 3, 2017 9:33:50 AM
To: Krishna; user@hadoop.apache.org
Subject: Re: unsubscribe
Hi Krishna
Please follow the instructions at
https://www.apache.org
--
Thanks & Regards
Ramakrishna S
should manually check the state of all disks in your cluster)
Check /var/log/messages to understand under what circumstances your SSDs
failed.
Krishna
On Tue, Dec 27, 2016 at 8:54 PM, Chathuri Wimalasena
wrote:
> Hi,
>
> We have a hadoop cluster which has 3 login nodes and 10 data node
Hi,
How to handle WebHDFS URL in case of Namenode automatic failover in HA HDFS
Cluster?
*HDFS CLI:*
HDFS URI: hdfs://<host>:<port>/
When working with the HDFS CLI, replacing the '<host>:<port>' with the
'DFS.NAMESERVICES' value (from hdfs-site.xml) in the HDFS URI fetches
the same result as with '<host>:<port>'.
By using the 'D
Hi All,
I am new to Hadoop and I have a requirement that I don't know is feasible
or not. I want to run Hadoop in a non-cluster environment, meaning on
commodity hardware. I have one desktop machine with a higher CPU and memory
configuration, and I have close to 20 laptops a
Hi,
We are seeing this SocketTimeout exception while a number of concurrent
applications (probably, 50 of them) are trying to read HDFS data through
WebHDFS interface. Are there any parameters we can tune so it doesn't
happen?
An exception occurred: java.net.SocketTimeoutException: Read timed ou
Abhishek,
Are you looking to load your data into Hadoop? If yes, IBM DataStage
has a stage called BDFS that loads/writes your data into Hadoop.
Thanks,
Kishore
On Tue, Sep 8, 2015 at 1:29 AM, <23singhabhis...@gmail.com> wrote:
> Hi guys,
>
> I am looking for pointers on migrating existin
What is the file format?
Thanks,
Krishna
On 26 Jun 2015 7:33 pm, "Ravikant Dindokar" wrote:
> Hi Hadoop User,
>
> I am processing 23 GB file on 21 nodes.
>
> I have tried both options :
>
> mapreduce.job.reduces=50
>
> &
>
> mapred.tasktrac
this data processed? This would require more slots.
(Generally one slot handles 500 MB to 1 GB.)
Best,
Krishna
On Fri, May 29, 2015 at 12:42 PM, Bhagaban Khatai
wrote:
> Thanks Ashish for your help.
>
> We dont have any clear picture and we are approching few clients on this
> and
REMOVE
On Mon, May 18, 2015 at 6:54 AM, xeonmailinglist-gmail <
xeonmailingl...@gmail.com> wrote:
> Hi,
>
> I am trying to submit a remote job in Yarn MapReduce, but I can’t because
> I get the error [1]. I don’t have more exceptions in the other logs.
>
> My Mapreduce runtime have 1 *ResourceMa
The default HDFS block size of 64 MB means it is the maximum size of a
block of data written to HDFS. So, if you write a 4 MB file, it will still
occupy only one block of 4 MB, not more than that. If your file is larger
than 64 MB, it gets split into multiple blocks.
If you set the HDFS block s
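The block size can also be overridden per file at create time; a minimal
Java sketch, with illustrative path, buffer, and replication values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // 128 MB block size for this one file, regardless of the cluster default
    FSDataOutputStream out = fs.create(new Path("/tmp/big.dat"),
        true, 4096, (short) 3, 128L * 1024 * 1024);
    out.writeBytes("...");
    out.close();
  }
}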
o be on a call to delete a file in HDFS. I had a
search through the HDFS code base but couldn't see an obvious way to set a
timeout, and couldn't see it being set.
Krishna
On 28 February 2015 at 15:20, Ted Yu <yuzhih...@gmail.com> wrote:
Krishna:
Please take a look at:
h
on a call to delete a file in
HDFS. I had a search through the HDFS code base but couldn't see an obvious
way to set a timeout, and couldn't see it being set.
Krishna
On 28 February 2015 at 15:20, Ted Yu wrote:
> Krishna:
> Please take a look at:
> http://wiki.apache.org/hadoo
Hi,
I am not aware of any Slider-specific group, so I am posting it here.
We are using Apache Slider 0.60 and implemented the management operations
start, status, stop, etc. in a Python script. Everything else is working,
but the stop function is not getting invoked when the container is stopped
Hi,
we occasionally run into a BindException that causes long-running jobs to
fail.
The stacktrace is below.
Any ideas what this could be caused by?
Cheers,
Krishna
Stacktrace:
379969 [Thread-980] ERROR org.apache.hadoop.hive.ql.exec.Task - Job
Submission failed with exception
to install HBase over Slider on
>> HDP 2.2 (
>> docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/YARN_RM_v22/running_applications_on_slider/index.html#Item1.1.7
>> )
>>
>> What I couldn't find is how you migrate from "standalone" to Slider-based
>> f
Hi,
We just installed HDP 2.2 through Ambari. We were under the impression that
in HDP 2.2, the default deployment mechanism for HBase/Accumulo is through
Slider (i.e., they are enabled by default for YARN). However, that does not
seem to be the case. Can we choose to install HBase through Slider
Thanks Wangda, I think I have reduced this when I was trying to reduce the
container allocation time.
-Kishore
On Tue, Aug 19, 2014 at 7:39 AM, Wangda Tan wrote:
> Hi Krishna,
>
> 4) What's the "yarn.resourcemanager.nodemanagers.heartbeat-interval-ms" in
> your conf
you can set yarn.scheduler.minimum-allocation-vcores=0
>
> Hope this helps,
> Wangda Tan
>
>
>
> On Thu, Aug 7, 2014 at 7:13 PM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
>> Hi,
>> I am calling getAvailableResources() on AMRMClientAsync
nagers.heartbeat-interval-ms" in
your configuration?
50
On Tue, Aug 12, 2014 at 12:44 PM, Wangda Tan wrote:
> Hi Krishna,
> To get more understanding about the problem, could you please share
> following information:
> 1) Number of nodes and running app in the cluster
> 2)
Hi,
My YARN resource manager is consuming 100% CPU when I am running an
application that runs for about 10 hours, requesting as many as 27000
containers. The CPU consumption was very low at the start of my
application, and it gradually rose to over 100%. Is this a known issue
or are
Hi,
I am calling getAvailableResources() on AMRMClientAsync and getting a
negative value for the number of virtual cores, as below. Is there something wrong?
.
I have set the vcores in my yarn-site.xml like this, and just ran an
application that requires two containers other than the Application
Master's
Hi,
Is there a way to check, from the command line, what the log directory
for container logs is in my currently running instance of YARN? I mean
using the yarn command or hadoop command or so.
Thanks,
Kishore
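I don't know of a dedicated yarn subcommand for this, but the value comes
from yarn.nodemanager.log-dirs; a minimal Java sketch that prints it from
the client-side configuration (assuming yarn-site.xml is on the classpath):

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PrintNmLogDirs {
  public static void main(String[] args) {
    // picks up yarn-site.xml / yarn-default.xml from the classpath
    YarnConfiguration conf = new YarnConfiguration();
    // container logs live under yarn.nodemanager.log-dirs on each NM host
    System.out.println(conf.get(YarnConfiguration.NM_LOG_DIRS));
  }
}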
> On Jun 9, 2014, at 12:08 AM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
> > Hi,
> >
> > Can we give the same value for priority when requesting multiple
> containers from the Application Master? Basically, I need all of those
> containers at
You should use FileStatus to decide what files you want to include in the
InputPath, and use the FileSystem class to delete or process the intermediate /
final paths. Moving each job in your iteration logic into different methods
would help keep things simple.
From: unmesha sreeveni <u
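A minimal sketch of that pattern, with hypothetical paths:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class IterationCleanupSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // inspect candidate inputs before wiring them into the next job
    for (FileStatus st : fs.listStatus(new Path("/data/iter-1"))) {
      if (st.isFile() && st.getLen() > 0) {
        System.out.println("would add input: " + st.getPath());
      }
    }
    // drop an intermediate directory once no later job needs it
    fs.delete(new Path("/data/iter-0"), true); // true = recursive
  }
}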
Hi,
Can we give the same value for priority when requesting multiple
containers from the Application Master? Basically, I need all of those
containers at the same time, and I am requesting them at the same time. So,
I am thinking if we can do that?
Thanks,
Kishore
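For what it's worth, the API does accept the same Priority across requests;
a minimal sketch with illustrative container sizing:

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

public class SamePrioritySketch {
  // ask for n identical containers in one go, all at priority 0
  static void requestContainers(AMRMClientAsync<ContainerRequest> client, int n) {
    Priority priority = Priority.newInstance(0);
    Resource capability = Resource.newInstance(1024, 1); // illustrative sizing
    for (int i = 0; i < n; i++) {
      client.addContainerRequest(
          new ContainerRequest(capability, null, null, priority));
    }
  }
}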
] Apache Hadoop Project Dist POM SUCCESS
>>>> [0.094s]
>>>> [INFO] Apache Hadoop Assemblies .. SUCCESS
>>>> [0.085s]
>>>> [INFO] Apache Hadoop Maven Plugins ....... SUCCESS
>>>>
Try adding the path to libprotoc.so to the LD_LIBRARY_PATH variable and
retry.
On May 21, 2014 9:00 AM, "sam liu" wrote:
> Hi Experts,
>
> I can use Cygwin to build hadoop-1.1.1, however failed on hadoop-2.2, as I
> always encounter this issue:
>
>
> *[INFO] --- hadoop-maven-plugins:
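Concretely, something like this before re-running the build (the path is an
assumption; point it wherever your libprotoc.so actually lives):

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH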
Hi,
Is there a way to get the name of the host where the AM is launched? I
have seen that there is a method getHost() in the ApplicationReport that we
get in YarnClient, but it is giving null. Is there a way to make it work?
or is there any other way to get the host name?
2014-05-09 04:36:05 YC
Yes!
On May 14, 2014 10:34 PM, "Karim Awara" wrote:
> Hi,
>
> Can I open multiple files on hdfs and write data to them in parallel and
> then close them at the end?
>
> --
> Best Regards,
> Karim Ahmed Awara
>
> --
> This message and its contents, including attachments
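A minimal sketch of that pattern (illustrative paths; HDFS permits a single
writer per file, but any number of files concurrently):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelWritersSketch {
  public static void main(String[] args) throws Exception {
    final FileSystem fs = FileSystem.get(new Configuration());
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (int i = 0; i < 4; i++) {
      final Path p = new Path("/tmp/out-" + i); // illustrative paths
      pool.submit(() -> {
        // each task writes its own file; one writer per file is fine
        try (FSDataOutputStream out = fs.create(p)) {
          out.writeBytes("data for " + p);
        }
        return null;
      });
    }
    pool.shutdown();
  }
}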
Hi,
Is there a way to specify a host name on which we want to run our
application master. Can we do this when it is being launched from the
YarnClient?
Thanks,
Kishore
't just use HBase on our
cluster, so this would seem to be a bad idea right?
Cheers,
Krishna
Regards
>
> Rohith Sharma K S
>
>
>
> *From:* Krishna Kishore Bonagiri [mailto:write2kish...@gmail.com]
> *Sent:* 08 April 2014 20:01
> *To:* user@hadoop.apache.org
> *Subject:* Re: Cleanup activity on YARN containers
>
>
>
> Hi Rohith,
>
>
>
>T
Hi,
Does this JIRA issue mean that we can't currently reuse a container for
running/launching two different processes one after another?
https://issues.apache.org/jira/browse/YARN-373
If that is true, are there any plans for making that possible?
Thanks,
Kishore
timed out. In such cases, you need to increase the value of
> "mapreduce.task.timeout" based on your cleanup time.
>
>
>
>
>
> 2. For Yarn Application, completed container's list are sent to
> ApplicationMaster in heartbeat. Here you can do clean up act
Hi,
Is there any callback-style facility in which I can write some code to
be executed on my container at the end of my application, or at the end of
that particular container's execution?
I want to do some cleanup activities at the end of my application, and the
clean up is not related to the
Hi,
This is regarding a single-node cluster setup.
If I have a value of 0.0.0.0:8050 for yarn.nodemanager.address in the
configuration file yarn-site.xml/yarn-default.xml, is it a mandatory
requirement that "ssh 0.0.0.0" should work on my machine for being able to
start YARN? Or will I be ab
ing on
> 3 different(!) reduce tasks. The reduce task 48 will probably have been
> resubmitted to another node.
>
>
> 2014-03-27 10:22 GMT+01:00 Krishna Rao :
>
> Hi,
>>
>> we have a daily Hive script that usually takes a few hours to run. The
>> other day I n
node and
succeeding. So, shouldn't this task have been kicked off on another node
after the first failure? Is there anything I could be missing in terms of
configuration that should be set?
We're using CDH4.4.0.
Cheers,
Krishna
king this question before. Check if your OS' OOM killer
> is killing it.
>
> +Vinod
>
> On Mar 4, 2014, at 6:53 AM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
> Hi,
> I am running an application on a 2-node cluster, which tries to acquire
>
g it.
>
> +Vinod
>
> On Mar 4, 2014, at 6:53 AM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
> Hi,
> I am running an application on a 2-node cluster, which tries to acquire
> all the containers that are available on one of those nodes and
Hi,
I am running an application on a 2-node cluster, which tries to acquire
all the containers that are available on one of those nodes and remaining
containers from the other node in the cluster. When I run this application
continuously in a loop, one of the NM or RM is getting killed at a rando
Hi,
How can I get debug log messages from the RM and other daemons?
For example, currently I can see messages from LOG.info() only, i.e.
something like this:
LOG.info(event.getContainerId() + " Container Transitioned from " +
oldState + " to " + getState());
How can I get those from
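For reference, daemon logging goes through log4j, so one way to surface
LOG.debug() output is to raise the logger level in etc/hadoop/log4j.properties
on the daemon's host and restart it. The package below targets the RM and is
an assumption you would adjust per daemon:

log4j.logger.org.apache.hadoop.yarn.server.resourcemanager=DEBUG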
Is it possible to list just all the failed jobs in either the JobTracker
JobHistory or otherwise?
Cheers,
Krishna
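For what it's worth, the job CLI can list every job together with its state,
which you can then filter for failures (MR1 syntax shown; on YARN the
equivalent is `mapred job -list all`):

hadoop job -list all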
Hi Anand,
Which version of Hadoop are you using? It works from 2.2.0
Try like this, and it should work. I am using this feature on 2.2.0
String[] hosts = new String[1];
hosts[0] = node_name;
ContainerRequest request = new ContainerRequest(capability, hosts,
null, p, false)
max-retries
> > in the RM's YarnConfiguration or yarn-site.xml.
> >
> > On Fri, Feb 7, 2014 at 5:24 PM, Krishna Kishore Bonagiri
> > wrote:
> >> Hi,
> >>
> >>I am having some failure test cases where my Application Master is
> >
Hi,
I am having some failure test cases where my Application Master is
supposed to fail. But when it fails, it is started again as attempt _02.
Is there a way for me to avoid the second instance of the Application
Master getting started? Is it re-started automatically by the RM after the
first one fa
n use yarn node.
>
> 2) what are the cluster resources total or available at the time of
> running the command
> Not quite sure, you can search possible options in the yarn command menu.
> And you can always see the resources usage via web UI though.
>
>
> On Mon, Dec
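For what it's worth, the yarn CLI covers 1) and 3) directly, and -status
gives per-node resource totals:

yarn node -list
yarn node -list -all
yarn node -status <node-id>

The first lists running NodeManagers (count and names), -all includes nodes
in every state, and -status prints memory/vcore capacity and usage for one
node.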
Hi,
I am seeing the exit status of released container(released through a call
to releaseAssignedContainer()) to be -100. Can my code assume that -100
will always be given as exit status for a released container by YARN?
Thanks,
Kishore
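For reference, -100 is the value of the ContainerExitStatus.ABORTED
constant, so comparing against the named constant is safer than hard-coding
the number; a minimal sketch:

import java.util.List;
import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

public class ExitStatusSketch {
  // ABORTED marks containers released/preempted by the RM,
  // not a real process exit code
  static void onCompleted(List<ContainerStatus> statuses) {
    for (ContainerStatus s : statuses) {
      if (s.getExitStatus() == ContainerExitStatus.ABORTED) {
        System.out.println(s.getContainerId() + " was released/aborted");
      }
    }
  }
}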
Hi,
Are there any command line tools for things like,
1) checking how many nodes are in my cluster
2) what are the cluster resources total or available at the time of running
the command
3) what are the names of nodes in my cluster
etc..
Thanks,
Kishore
> example.
>
> Thanks,
> +Vinod
>
> On Dec 17, 2013, at 1:26 AM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
> Hi Jeff,
>
> I have run the resource manager in the foreground without nohup and here
> are the messages when it was killed
On Mon, Dec 16, 2013 at 11:10 PM, Jeff Stuckman wrote:
> What if you open the daemons in a "screen" session rather than running
> them in the background -- for example, run "yarn resourcemanager". Then you
> can see exactly when they terminate, and hopefully why.
>
Hi Vinay,
In the out files I could see nothing other than the output of ulimit -all
. Do I need to enable any other kind of logging to get more information?
Thanks,
Kishore
On Mon, Dec 16, 2013 at 5:41 PM, Vinayakumar B wrote:
> Hi Krishna,
>
>
>
> Please check the out fi
I asked if everything is on a
> single node. If you are running linux, linux OOM killer may be shooting
> things down. When it happens, you will see something like "'killed process"
> in system's syslog.
>
> Thanks,
> +Vinod
>
> On Dec 13, 2013, at 4:52 A
causing
them to die? If so, how can we resolve this kind of issue?
Thanks,
Kishore
On Fri, Dec 13, 2013 at 10:16 AM, Krishna Kishore Bonagiri <
write2kish...@gmail.com> wrote:
> No, I am running on 2 node cluster.
>
>
> On Fri, Dec 13, 2013 at 1:52 AM, Vinod Kumar Va
No, I am running on 2 node cluster.
On Fri, Dec 13, 2013 at 1:52 AM, Vinod Kumar Vavilapalli <
vino...@hortonworks.com> wrote:
> Is all of this on a single node?
>
> Thanks,
> +Vinod
>
> On Dec 12, 2013, at 3:26 AM, Krishna Kishore Bonagiri <
> write2kish...@gmai
Hi,
I am running a small application on YARN (2.2.0) in a loop of 500 times,
and while doing so one of the daemons, node manager, resource manager, or
data node is getting killed (I mean disappearing) at a random point. I see
no information in the corresponding log files. How can I know why is it
nsource/hadoop-2.0.5-alpha/share/hadoop/common/hadoop-common-2.0.5-alpha.jar
>
>
>
>
>
>
>
> ---Brahma
>
>
>
>
> --
> *From:* Krishna Kishore Bonagiri [write2kish...@gmail.com]
> *Sent:* Tuesday, December 10, 2013 1:30 PM
>
Hi,
Is there any command for finding which version of Hadoop or YARN we are
running? If so, what is that?
Thanks,
Kishore
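Yes:

hadoop version
yarn version

Both print the release number plus build and checksum details; yarn version
reports the same Hadoop build the YARN scripts run on.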
you have mentioned.
>
>
> On Thu, Dec 5, 2013 at 10:40 PM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
>> Hi Arun,
>>
>> I have copied a shell script to HDFS and trying to execute it on
>> containers. How do I specify my shell script P
Hi Arun,
I have copied a shell script to HDFS and trying to execute it on
containers. How do I specify my shell script PATH in setCommands() call
on ContainerLaunchContext? I am doing it this way
String shellScriptPath =
"hdfs://isredeng:8020/user/kbonagir/KKDummy/list.ksh";
command
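The usual pattern is to register the HDFS file as a LocalResource so YARN
localizes it into the container's working directory, then invoke it by its
local name rather than the hdfs:// path. A minimal sketch (ctx is the
ContainerLaunchContext; the path is the one from the question above):

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.util.ConverterUtils;

public class LocalizeScriptSketch {
  static void setup(ContainerLaunchContext ctx, Configuration conf) throws Exception {
    // localize the HDFS script into the container's working directory ...
    Path script = new Path("hdfs://isredeng:8020/user/kbonagir/KKDummy/list.ksh");
    FileStatus st = FileSystem.get(script.toUri(), conf).getFileStatus(script);
    LocalResource res = LocalResource.newInstance(
        ConverterUtils.getYarnUrlFromPath(script),
        LocalResourceType.FILE, LocalResourceVisibility.APPLICATION,
        st.getLen(), st.getModificationTime());
    ctx.setLocalResources(Collections.singletonMap("list.ksh", res));
    // ... then run it by its local name, not the hdfs:// path
    ctx.setCommands(Collections.singletonList("sh list.ksh 1>stdout 2>stderr"));
  }
}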
om AM to the
Resource Manager?
Thanks,
Kishore
On Tue, Nov 26, 2013 at 3:45 AM, Alejandro Abdelnur wrote:
> Hi Krishna,
>
> Are you starting all AMs from the same JVM? Mind sharing the code you are
> using for your time testing?
>
> Thx
>
>
> On Thu, Nov 21, 2013 at 6
Hi,
I was reading on this link
http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/
that we can implement an Application Master to manage multiple
applications. The text reads like this:
*It’s useful to remember that, in reality, every application has its own
instance of
masters one after another and still seeing
800 to 900 ms being taken for the start() call on AMRMClientAsync in all of
those applications.
Please suggest if you think I am missing something else
Thanks,
Kishore
On Tue, Nov 19, 2013 at 6:07 PM, Krishna Kishore Bonagiri <
write2kish...@gmail.com>
Hi,
I have seen in comments for code in UnmanagedAMLauncher.java that AM can
be in any language. What does that mean? Can AM be written in C++ language?
If so, how would I be able to connect to the RM, and how would I be able to
request containers? I mean, what is the interface for doing these thin
hanks,
> +Vinod
>
> On Oct 21, 2013, at 7:16 AM, Krishna Kishore Bonagiri wrote:
>
> Hi,
> I am seeing the following call to start() on AMRMClientAsync taking from
> 0.9 to 1 second. Why does it take that long? Is there a way to reduce it, I
> mean does it depend on an
AM, Vinod Kumar Vavilapalli <
> vino...@hortonworks.com> wrote:
>
>> It is just creating a connection to RM and shouldn't take that long. Can
>> you please file a ticket so that we can look at it?
>>
>> JVM class loading overhead is one possibility but 1 sec is a bit
Hi Alejandro,
Can you please see if you can answer my question above? I would like to
reduce the time taken by the above calls made by my Application Master, the
way you do.
Thanks,
Kishore
On Tue, Oct 22, 2013 at 3:09 PM, Krishna Kishore Bonagiri <
write2kish...@gmail.com> wrote:
application
connecting to the resource manager would take the same amount of time,
although I don't know why it should take that much.
Thanks,
Kishore
On Mon, Oct 21, 2013 at 9:23 PM, Alejandro Abdelnur wrote:
> Hi Krishna,
>
> Those 900ms seems consistent with the numbers we found wh
Hi,
I am seeing the following call to start() on AMRMClientAsync taking from
0.9 to 1 second. Why does it take that long? Is there a way to reduce it, I
mean does it depend on any of the interval parameters or so in
configuration files? I have tried reducing the value of the first argument
below
Hi Reyane,
Did you try yarn.nodemanager.log.retain-seconds? Increasing that might
help. The default value is 10800 seconds, i.e. 3 hours.
Thanks,
Kishore
On Thu, Oct 10, 2013 at 8:27 PM, Reyane Oukpedjo wrote:
> Hi there,
>
> I was running some mapreduce jobs on hadoop-2.1.0-beta. T
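For reference, in yarn-site.xml that would look like the following (86400 is
an illustrative 24-hour value; this setting applies when log aggregation is
disabled):

<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>86400</value>
</property>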
Hi,
Can we submit multiple applications from the same Client class? It seems
to be allowed; I just tried it with the Distributed Shell example...
Is it OK to do so, or does it have any unwanted implications?
Thanks,
Kishore
Thanks Chris.
Hope someone answers/give pointer to get clear idea about question4.
Regards,
Krishna
On Wed, Oct 2, 2013 at 1:41 PM, Chris Mawata wrote:
> Don't know about question 4 but for the first three -- the metadata is
> in the memory of the namenode at runtime but is also p
it is stored on the permanent storage of the
namenode and brought into main memory on demand? [Krishna]
Based on my understanding, I assume the entire metadata is in main memory,
which is an issue by itself. Please correct me if my understanding is wrong.
2. In case of* federated
> Omkar Joshi
> *Hortonworks Inc.* <http://www.hortonworks.com>
>
>
> On Fri, Sep 27, 2013 at 11:14 AM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
>> Hi Omkar,
>>
>> Thanks for the quick reply. I have a requirement for sets of containers
&
onworks.com>
>
>
> On Fri, Sep 27, 2013 at 8:31 AM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
>> Hi,
>>
>> Can we submit container requests from multiple threads in parallel to
>> the Resource Manager?
>>
>> Thanks,
&g
Hi,
Can we submit container requests from multiple threads in parallel to the
Resource Manager?
Thanks,
Kishore
You can invoke the setNumReduceTasks on the Job object that you use to run the
MR job.
http://hadoop.apache.org/docs/r2.0.6-alpha/api/org/apache/hadoop/mapreduce/Job.html#setNumReduceTasks(int)
Or else you can set the property mapreduce.job.reduces in mapred-site.xml:
<property>
  <name>mapreduce.job.reduces</name>
  <value>1</value>
</property>
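A minimal sketch of the Job-level approach (job name is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class OneReducerSketch {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "one-reducer");
    job.setNumReduceTasks(1); // overrides mapreduce.job.reduces for this job
    // ... set mapper/reducer/paths as usual, then job.waitForCompletion(true)
  }
}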
e a
> map reduce bug.
>
> Thanks,
> Omkar Joshi
> *Hortonworks Inc.* <http://www.hortonworks.com>
>
>
> On Tue, Sep 17, 2013 at 2:47 AM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
>> Hi Omkar,
>>
>> Thanks for the quick
ng around? Can you also try
> applying the patch "https://issues.apache.org/jira/browse/YARN-1053" and
> check if you can see any message?
>
> Thanks,
> Omkar Joshi
> *Hortonworks Inc.* <http://www.hortonworks.com>
>
>
> On Thu, Sep 12, 2013 at 6:15 AM, Kris
Hi,
I am using 2.1.0-beta and have seen container allocation failing randomly
even when running the same application in a loop. I know that the cluster
has enough resources to give, because it gave the resources for the same
application all the other times in the loop and ran it successfully.
's top part isn't always carrying the
> most relevant information, so perhaps HADOOP-9861 may help here once
> it is checked in.
>
> On Tue, Aug 27, 2013 at 10:34 AM, Gopi Krishna M wrote:
> > Hi
> >
> > We are seeing our map-reduce jobs crashing once in a w
Hi
We are seeing our map-reduce jobs crash once in a while, and we have to go
through the logs on all the nodes to figure out what went wrong. Sometimes
it is low resources and sometimes it is a programming error triggered on
specific inputs. The same is true for some of our Hive queries.
Hi,
My application has a group of processes that need to communicate with
each other either through shared memory or TCP/IP, depending on whether the
containers are allocated on the same machine or on different machines.
Obviously I would like to get them allocated on the same node
whenever
l or the one running
> AM container?
>
> Thanks,
> Omkar Joshi
> *Hortonworks Inc.* <http://www.hortonworks.com>
>
>
> On Tue, Aug 6, 2013 at 10:43 PM, Krishna Kishore Bonagiri <
> write2kish...@gmail.com> wrote:
>
>> Hi Harsh, Hitesh & Omkar,
>>
magine it causing degradation if the configuration files are super big /
> some other weird cases.
>
>
> ------
> *From:* Krishna Kishore Bonagiri
> *To:* user@hadoop.apache.org
> *Sent:* Wednesday, August 7, 2013 10:03 AM
> *Subject:* Re: Extra st
rom this is the other load on node manager same? is the
> load on hdfs same? did you see any network bottleneck?
>
> More information will help a lot.
>
>
> Thanks,
> Omkar Joshi
> *Hortonworks Inc.* <http://www.hortonworks.com>
>
>
> On Thu, Aug 1, 2013 at 2:19 AM