Congratulations on the work, Javi Roman!
This looks like a very interesting tool for accelerating and testing
contributors' work. Personally, I would keep HDFS installed as a system
service to make management and maintenance easier. For demonstrations and
tests, I think it would be worth adding a spark-shell and updating the
README with a small MapReduce example, along the lines of the sketch
below. What do you think?
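
As a rough idea of the kind of snippet the README could carry (the jar
path is only an assumption, and spark-shell is assumed to be configured
for YARN; the examples jar itself ships with upstream Apache Hadoop):

# MapReduce smoke test against the YARN cluster Myriad brings up
$ yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10

# Quick interactive check from spark-shell on YARN
$ spark-shell --master yarn
scala> sc.parallelize(1 to 1000).sum()   // should print 500500.0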

On 9 April 2018 at 19:34, Javi Roman <jroman.espi...@gmail.com> wrote:

> Hi!
>
> I've just created a PR [1] with the first version of a DC/OS setup
> (tested with DC/OS v1.11.0) for Myriad development and deployment.
>
> The cluster created by default looks something like this:
>
> $ vagrant status
> Current machine states:
>
> bt                        running (libvirt)
> m1                        running (libvirt)
> a1                        running (libvirt)
> a2                        running (libvirt)
> a3                        running (libvirt)
> a4                        running (libvirt)
> p1                        running (libvirt)
>
> That is one bootstrap machine (with the DC/OS CLI installed), one master
> node, four private agents, and one public agent.
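>
> Once the bootstrap box is up, the cluster can be sanity-checked from the
> DC/OS CLI with something like:
>
> $ dcos node       # should list the private and public agents
> $ dcos service    # shows the frameworks registered with the cluster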
>
> I haven't tested Myriad yet, because we have several options for it:
>
> - Deploy HDFS as a system service.
> - Deploy HDFS using the Mesosphere Universe (we would have to tune this
> one; the default requirements don't fit in this small Vagrant cluster).
> - Deploy Myriad by means of Marathon and Docker (roughly along the lines
> of the sketch below).
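>
> For the Marathon option, a first cut of the app definition might look
> something like this (the Docker image name and resource sizes are just
> placeholders, not a published package):
>
> $ cat > myriad-scheduler.json <<'EOF'
> {
>   "id": "/myriad-scheduler",
>   "cpus": 0.5,
>   "mem": 1024,
>   "instances": 1,
>   "container": {
>     "type": "DOCKER",
>     "docker": {
>       "image": "example/myriad-scheduler:latest",
>       "network": "HOST"
>     }
>   }
> }
> EOF
> $ dcos marathon app add myriad-scheduler.json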
>
> We have to test the different options for this deployment. One
> interesting idea would be to create a Myriad Universe with HDFS packages
> tuned for this Vagrant cluster (based on upstream Apache Hadoop) and
> packages for Myriad. This could be a good starting point for demos.
>
>
> [1] https://github.com/apache/incubator-myriad/pull/108
> --
> Javi Roman
>
> Twitter: @javiromanrh
> GitHub: github.com/javiroman
> Linkedin: es.linkedin.com/in/javiroman
> Big Data Blog: dataintensive.info
>
