To make sure I understand...you've allocated /ten times/ your physical
RAM for containers? If so, I think that's your issue.
For reference, under Hadoop 3.x I didn't have a cluster that would
really do anything until its worker nodes had at least 8GiB.
On 8/14/19 12:10 PM, . . wrote:
Hi all,
I installed a basic 3-node Hadoop 2.9.1 cluster and am playing with YARN
settings.
The 3 nodes have the following configuration:
1 CPU / 1 core / 512 MB RAM
I wonder why I was able to configure yarn-site.xml with the following
settings (higher than the nodes' physical limits) and still successfully
run a mapreduce
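For context, a minimal sketch of the kind of over-allocation being
described (the property names are standard YARN settings; the values are
purely illustrative for a 512 MB node):

```xml
<!-- yarn-site.xml: illustrative values only. On a 512 MB node these
     over-commit memory by roughly 10x; by default YARN does not validate
     them against the machine's physical RAM. -->
<configuration>
  <property>
    <!-- Total memory the NodeManager advertises to the ResourceManager -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>5120</value>
  </property>
  <property>
    <!-- Largest single container allocation the scheduler will grant -->
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>5120</value>
  </property>
</configuration>
```

YARN takes these values at face value, so containers can be granted on
paper even though the OS cannot back them; jobs then run until real memory
pressure (swapping or the OOM killer) intervenes.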
Hi,
Did you try anarchyape ?
Originally: https://github.com/david78k/anarchyape
my fork, which avoids the hard-coded "eth0" network interface:
https://github.com/julienlau/anarchyape
Regards,
JL
Le mer. 14 août 2019 à 15:12, Aleksander Buła <
ab370...@students.mimuw.edu.pl> a écrit :
> Hi,
>
> I woul
Aleksander,
Yes, I am aware of that doc, but I haven't seen anyone maintaining that
piece of code in the last 4 years, and I don't think anyone has ever used
it.
On Wed, Aug 14, 2019 at 5:12 AM Aleksander Buła <
ab370...@students.mimuw.edu.pl> wrote:
> Hi,
>
> I would like to ask whether the F
Hi,
I would like to ask whether the Fault Injection Framework (
https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/FaultInjectFramework.html)
is still supported in Hadoop HDFS?
I think this documentation was created around *v0.23* and has not been
updated since then. Additi