performance of Hadoop client in Apache HDFS and the performance of DataNode
> in Hortonworks HDFS. If that's the case, maybe it's a bug introduced by
> Hortonworks?
>
> 2016-08-01 17:47 GMT+08:00 Dejan Menges :
>
>> Hi Shady,
>>
>> We did extensive tests on this
significant
> difference, maybe I should try other ways to tune my HBase.
>
> And Dejan, I've never heard of or noticed what you said. If that's true
> it's really disappointing, and please let us know if there's any progress.
>
> 2016-08-01 15:33 GMT+08:00 Dejan Menges :
Sorry for jumping in, but since performance came up... it took us a while to
figure out why, whatever disk/RAID0 performance you have, when it comes to
HDFS and a replication factor bigger than one, disk write speed drops to
100Mbps... After long, long tests with Hortonworks they found that the issue
is that som
Hello Renjith,
Hortonworks has a self-contained sandbox where you can just download and spin
things up and see how it looks:
http://hortonworks.com/downloads/#sandbox
Cheers,
Dejan
On Wed, Jun 22, 2016 at 6:33 PM Renjith wrote:
> Hello All,
>
> before proceeding, seek expert advice from the
Hi Deepak,
Hadoop is just a platform (Hadoop and everything around it), a toolset to do
what you want to do.
If you write bad code, you can't blame the programming language; that's you
not being able to write good code. There's also nothing wrong with using
commodity hardware (and I'm not sure I understand what's co
Hello Jose,
For YARN's classpath, depending on how you installed everything on Ubuntu,
also take a look into yarn-env.sh, and inside
/etc/default/${whateveryarnorhadoopfile}.
However, I would personally expect it to be in yarn-env.sh.
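If it helps, you can print the classpath the yarn command actually resolves,
and append your own entries from yarn-env.sh; a minimal sketch (the
/opt/extra-libs path is just a placeholder, not from your setup):

  # show the classpath YARN resolves at runtime
  yarn classpath

  # in yarn-env.sh: append extra jars (illustrative path)
  export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/extra-libs/*"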
Cheers
On Sun, Feb 7, 2016 at 2:02 AM José Luis Larroque wrote:
Hi Nick,
I had exactly the same case, and in our case it was that tokens were
expiring too quickly. What we increased was
dfs.client.read.shortcircuit.streams.cache.size
and dfs.client.read.shortcircuit.streams.cache.expiry.ms.
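A sketch of how those two look in hdfs-site.xml (values here are purely
illustrative, not a recommendation; the defaults are 256 entries and
300000 ms):

  <property>
    <name>dfs.client.read.shortcircuit.streams.cache.size</name>
    <value>1024</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name>
    <value>600000</value>
  </property>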
Hope this helps.
Best,
Dejan
On Wed, Jan 20, 2016 at 12:15 AM Nick
Ok, this is not a joke...
On Thu, Nov 5, 2015 at 3:06 PM Bourre, Marc <
marc.bou...@ehealthontario.on.ca> wrote:
> Unsubscribe
>
>
>
> *From:* mark charts [mailto:mcha...@yahoo.com]
> *Sent:* Thursday, November 05, 2015 8:54 AM
> *To:* user@hadoop.apache.org; wadood.chaudh...@instinet.com
> *Subje
Hi Stephen,
In /usr/hdp/version there's an etc/ subfolder with init scripts, among others
one for the DataNode too (hadoop-hdfs-datanode is the name).
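For example (the version directory depends on your install, hence the
wildcard; treat this as a sketch, not exact paths from your box):

  ls /usr/hdp/*/etc/init.d/
  /usr/hdp/*/etc/init.d/hadoop-hdfs-datanode status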
Cheers
On Oct 24, 2015 7:00 PM, "Stephen Boesch" wrote:
> OK I will continue on hdp list: I am already using the hdfs command for
> all of those indiv
Hi,
We had a situation where the RAID controller died in one of our nodes, and we
obviously had to replace it. After replacing it, everything looks good from
the system side, but the DataNode doesn't want to start anymore:
https://gist.github.com/dejo1307/5ca4946275eb81aa96f1
Using HDP 2.2, Hadoop version is 2.6.
Hi,
Does anyone know if there are any plans related to this ticket:
https://issues.apache.org/jira/browse/HIVE-9223
I also asked for an update in the ticket, just to be sure.
Thanks a lot,
Dejan
Found it - yarn.nodemanager.delete.debug-delay-sec
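For the archives, a yarn-site.xml sketch (600 seconds is just an example; the
default is 0, meaning container launch scripts and logs are deleted as soon
as the application finishes):

  <property>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>600</value>
  </property>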
On Mon, Jul 27, 2015 at 2:25 PM Dejan Menges wrote:
> Hi,
>
> I remember there was an option to retain container launch scripts for some
> period of time, but at this moment I can neither remember what parameter it
> was nor find it in the documentation.
Hi,
I remember there was an option to retain container launch scripts for some
period of time, but at this moment I can neither remember what parameter it
was nor find it in the documentation.
Any information would be appreciated!
Cheers,
Dejan
Hi,
We have been using HDP 2.1 for quite some time now (still, until Monday), and
SC local reads were enabled the whole time. In the beginning, we followed
Hortonworks' recommendations and set the SC cache size to 256, with the
default 5 minutes to invalidate entries, and that's where the problems
started.
At some point in time
Hi,
From time to time I see some reduce tasks failing with this:
Error: java.io.IOException: Failed to replace a bad datanode on the
existing pipeline due to no more good datanodes being available to try. The
current failed datanode replacement policy is DEFAULT, and a client may
configure this via
'
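The property the truncated message points at should be
dfs.client.block.write.replace-datanode-on-failure.policy. A client-side
hdfs-site.xml sketch of the settings involved (an example only, not advice;
NEVER is generally only sensible on very small clusters where no spare
DataNode exists):

  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
  </property>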
Hi,
I'm seeing this exception on every HDFS node once in a while on one cluster:
2015-05-26 13:37:31,831 INFO datanode.DataNode
(BlockSender.java:sendPacket(566)) - Failed to send data:
java.net.SocketTimeoutException: 1 millis timeout while waiting for
channel to be ready for write. ch :
ja
Your output says permission denied for SSH to localhost. Try to fix that
first (there are a bunch of tutorials on setting up passwordless SSH).
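A minimal sketch of the usual setup, using standard OpenSSH commands:

  ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys
  ssh localhost   # should no longer ask for a password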
On Apr 13, 2014 7:37 PM, "Ekta Agrawal" wrote:
> Hi,
>
> I started with the "ssh localhost" command.
> Is anything else needed to check SSH?
>
> Then I stopp