Re: Slider Proposal

2014-04-15 Thread Kanak Biscuitwala
Hi,

I'm a committer/PMC member for Apache Helix, and we've already done some of the 
work integrating with systems like YARN (pretty much ready to go) and Mesos 
(about 1/3 of the way there). For some background, Helix separates cluster 
management logic from application logic by modeling an application's lifecycle 
as a state machine. It uses ZooKeeper for coordination, though ZK is behind a 
layer of abstraction as well.
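
To make the state-machine idea concrete, here is a minimal sketch (not code from
the helix-provisioning branch; the class name is mine) of a Helix participant
state model: the transition callbacks hold the application logic, and the Helix
controller decides which transitions to fire and on which node.

  import org.apache.helix.NotificationContext;
  import org.apache.helix.model.Message;
  import org.apache.helix.participant.statemachine.StateModel;
  import org.apache.helix.participant.statemachine.StateModelInfo;
  import org.apache.helix.participant.statemachine.Transition;

  // Minimal ONLINE/OFFLINE model; a real service plugs its start/stop logic
  // into these callbacks, and Helix decides when they run on each node.
  @StateModelInfo(initialState = "OFFLINE", states = {"ONLINE", "OFFLINE"})
  public class OnlineOfflineTaskModel extends StateModel {

    @Transition(from = "OFFLINE", to = "ONLINE")
    public void onBecomeOnlineFromOffline(Message message, NotificationContext context) {
      // start serving the partition named in the message
    }

    @Transition(from = "ONLINE", to = "OFFLINE")
    public void onBecomeOfflineFromOnline(Message message, NotificationContext context) {
      // stop serving the partition
    }
  }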

In essence, what Helix has done well up until now is solve this problem: given 
something you want to distribute, some constraints on how it should be 
distributed, and a set of live nodes, come up with a mapping of tasks to nodes 
and what state each task should be in (e.g. master, slave, online, offline, 
error, leader, standby). What we were missing was a way to actually affect the 
presence/absence of nodes that we want to assign to. So we came up with a 
general interface for Helix to tell YARN, Mesos, or anything else that it 
should start up a new container so that Helix can assign tasks to it. We did 
this in a general way, and have a working implementation for YARN.
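
The shape of that interface is, in spirit, very small. The sketch below is
purely hypothetical -the real classes and method signatures live in the
helix-provisioning module and will differ- and only illustrates the kind of
contract Helix needs from YARN, Mesos, or anything else:

  // Hypothetical sketch only; see the helix-provisioning module for the actual API.
  public interface ContainerProvider {

    // Ask the underlying resource manager (YARN, Mesos, ...) to allocate
    // a container that Helix can then assign tasks to.
    void requestContainer(String serviceName, int memoryMb, int vcores);

    // Release a container whose tasks Helix has already moved elsewhere.
    void releaseContainer(String containerId);
  }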

Coming from the other side, Helix allows us to be much more fine-grained with 
how we use containers. Helix can dynamically assign/deassign tasks to 
containers based on application requirements, and does this in a general way. 
This allows for potential container reuse, hiding of container restart overhead 
(because we can transition something else to leader/master in the meantime), 
and potentially better container utilization.

Here are the slides for the talk we gave at ApacheCon describing the high-level 
architecture of the integration: 
http://www.slideshare.net/KanakBiscuitwala/finegrained-scheduling-with-helix-apachecon-na-2014

The source code for the integration is on the helix-provisioning branch (see 
source links here: http://helix.apache.org/sources.html). The 
helix-provisioning module contains all of the code needed to make the integration 
work, including the app master code. The helloworld-provisioning-yarn module in 
recipes is a no-op service.

Here is an email with steps on how to make this work end-to-end: 
http://markmail.org/message/cddcdy4iphleyueb

The key classes to look at are:
  - AppLauncher (the client that submits the YAML file describing the service) 
-- this is what you would actually submit to the YARN RM
  - AppMasterLauncher (the deployed app master with Helix controller and 
integration with YARN APIs)
  - ParticipantLauncher (code that is invoked when a container for the app 
starts, runs an instance of the service that uses Helix)

We'd be happy to collaborate to see if there is a way to make Helix, Slider, 
Twill, and others better.

Thanks,
Kanak


On Sat, Apr 12, 2014 at 3:38 PM, Roman Shaposhnik rv...@apache.org wrote:

On Sat, Apr 12, 2014 at 11:58 AM, Andrew Purtell apur...@apache.org
wrote:
The reason I ask is I'm wondering how Slider differentiates from projects
like Apache Twill or Apache Bigtop that are already existing vehicles for
achieving the aims discussed in the Slider proposal.

Twill: handles all the AM logic for running new code packaged as a JAR with
an executor method
Bigtop: stack testing

As a Bigtop committer, I disagree with this narrow interpretation of the
scope of the project, but this is my personal opinion and I am not PMC...

A strong +1 here! Bigtop attacks a problem of packaging and deploying
Hadoop stacks from a classical UNIX packaging background. We are
also slowly moving into container/VM/OSv packaging territory which
could be an extremely exciting way of side-stepping the general installer
issues (something that Ambari struggles mightily with).

Something like Slider tries to leapfrog and side-step UNIX packaging and
deployment altogether. This is an interesting take on the problem, but
ultimately the jury is still out on what the level of adoption for
"everything is now YARN" will be.

At the end of the day, we will need both for a really long time.

For example, we package Hadoop core and ecosystem services both for
deployment, have Puppet based deployment automation (which can be used more
generally than merely for setting up test clusters), and I have been
considering filing JIRAs to tie in cgroups at the whole stack level here.
What is missing of course is a hierarchical model for resource management,
and tools within the components for differentiated service levels, but that
is another discussion.

On that note, I find YARN's attitude towards cgroups to be, how shall I put it,
optimistic. If you look carefully you can see that the Linux community
has completely given up on pretending that one can use naked cgroup
trees for reliable resource partitioning:
http://www.freedesktop.org/wiki/Software/systemd/PaxControlGroups/

It is now clear to me that the path Linux distros are endorsing is via
the brokers such as systemd:

http://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/

As such I'd see Bigtop providing quite a bit of value out of the box
via tight integration with systemd. YARN, at the moment, is in a trickier
situation.

Re: Slider Proposal

2014-04-14 Thread Steve Loughran
On 12 April 2014 23:38, Roman Shaposhnik r...@apache.org wrote:

 On Sat, Apr 12, 2014 at 11:58 AM, Andrew Purtell apurt...@apache.org
 wrote:
   The reason I ask is I'm wondering how Slider differentiates from
 projects
   like Apache Twill or Apache Bigtop that are already existing vehicles
 for
   achieving the aims discussed in the Slider proposal.
 
 
  Twill: handles all the AM logic for running new code packaged as a JAR
 with
  an executor method
  Bigtop: stack testing
 
 
  As a Bigtop committer, I disagree with this narrow interpretation of the
  scope of the project, but this is my personal opinion and I am not PMC...

 A strong +1 here! Bigtop attacks a problem of packaging and deploying
 Hadoop stacks from a classical UNIX packaging background.



I think I'm a Bigtop committer too -- but sorry if I stepped on your toes.
I view it as testing primarily because the main bits of code I've been near
were stuff to do with FS testing, and the script testing code (we use
org.apache.bigtop.itest.shell.Shell as the basis for our remote functional
tests). I'd also like to get


 We are
 also slowly moving into container/VM/OSv packaging territory which
 could be an extremely exciting way of side-stepping the general installer
 issues (something that Ambari struggles mightily with).


Topic for another day. I believe I have some past experience in that area,
hence some opinions.



 Something like Slider tries to leapfrog and side-step UNIX packaging and
 deployment altogether. This is an interesting take on the problem, but
 ultimately the jury is still out on what the level of adoption for
 "everything is now YARN" will be.


What it's trying to do is say that end users are allowed to install and run
things too. It's like in Linux: you can do rpm/deb installations -and
that's the best way to manage a set of machines- but there's nothing to
stop me, a non-root user, from running anything I want even if the admins
haven't installed it. We have formats for that: .tar, .gz, arguably .war and
.ear.


Most of the standard Java server apps come with .tar.gz/.zip distributions
(Hadoop included), so we all know user-side code is a use case -what we're
thinking of here is: what are the problems we need to address to get this
to work in a YARN cluster?

-getting the configuration customised to the destination. This is the
perennial problem of Configuration Management -and I am not going to try
and solve that problem again. There is 20+ years of history there, the
scripted vs desired-state, centralised vs decentralised and static vs dynamic
debates being the perennial ones into which you can divide them all
up. Oh, and there's policy-driven CM, such as CompSolv [Hewson12], which is
not something that's trickled into the OSS tools yet.

What we're thinking of is something far less ambitious than "let's do a new
CM system", and in doing so have something more practical: how to make it
easy to bundle non-YARN code so that YARN can deploy it, with enough
metadata and scripts to bring it up. At the same time, I do want it to be
possible for CM systems, PaaS front ends and other management tools to be
able to not just spawn a Slider app, but integrate better with it. Part of
our thinking here is the idea of a bonded AM, which, rather than an
in-process unmanaged AM, is a remote YARN-deployed AM hooked up so
that only the owner process can work with it -events being published for
the owner to track and control it, and both Slider and the owner
being able to tell any agent what to do. This is work in progress -I'd like
a prototype web front end though.


We are going to have to do some of the dynamic config generation, because
we can't dictate ports and hosts up front, so we will need to get that
information into both the configs used when starting the apps, and into
information published up for clients to pull down -the latter meaning we
are going to have to evolve a cluster registry, based on Curator or
something else -adding artifact sharing/generation too.

A registry is likely to go beyond Slider, as it shouldn't be restricted to
just one YARN app, and indeed, there's no reason why, say, a Bigtop-deployed
HBase cluster can't publish its binding information.
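
As a rough illustration of what publishing binding information could look like
on top of Curator's service discovery recipe (a sketch under assumed names -the
ZK quorum, registry path, service name and port below are made up, and this is
not Slider code):

  import org.apache.curator.framework.CuratorFramework;
  import org.apache.curator.framework.CuratorFrameworkFactory;
  import org.apache.curator.retry.ExponentialBackoffRetry;
  import org.apache.curator.x.discovery.ServiceDiscovery;
  import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
  import org.apache.curator.x.discovery.ServiceInstance;

  public class RegistrySketch {
    public static void main(String[] args) throws Exception {
      CuratorFramework zk = CuratorFrameworkFactory.newClient(
          "zk1:2181", new ExponentialBackoffRetry(1000, 3));
      zk.start();

      // The deployed app publishes where its master is actually listening.
      ServiceInstance<Void> binding = ServiceInstance.<Void>builder()
          .name("hbase-master")
          .address("worker03.example.com")
          .port(16000)
          .build();

      ServiceDiscovery<Void> registry = ServiceDiscoveryBuilder.builder(Void.class)
          .client(zk)
          .basePath("/registry")
          .thisInstance(binding)
          .build();
      registry.start();

      // Any client can then pull the binding down instead of hard-coding it.
      registry.queryForInstances("hbase-master")
          .forEach(i -> System.out.println(i.getAddress() + ":" + i.getPort()));
    }
  }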


YARN-913 https://issues.apache.org/jira/browse/YARN-913 has raised some
of this; I'd like to use Slider to evolve some of the topics with a faster
cycle than Hadoop releases -but I'd also love to have any other YARN apps
involved (given I'm trying to re-use any existing code that works: Apache
Curator, Apache Directory, hopefully bits of Helix, I'd welcome other
contribs too).




 At the end of the day, we will need both for a really long time.


+1. YARN itself depends on a well-set-up cluster, network and all the bits
for which users take an ops team for granted.



  For example, we package Hadoop core and ecosystem services both for
  deployment, have Puppet based deployment automation (which can be used
 more
  generally than merely for setting up test clusters

Re: Slider Proposal

2014-04-14 Thread Steve Loughran
On 14 April 2014 04:43, Andreas Neumann a...@apache.org wrote:

 I'd like to comment on what has been said about Twill.

  Twill: handles all the AM logic for running new code packaged as a JAR
 with
  an executor method

 The goal of Twill is much broader than to support writing new code. Its
 goal is to ease the development and deployment of any distributed service
 in a cluster. Its impact is most significant for the development of new
 applications, but here are some other aspects:


It should please you to know that for my Berlin Buzzwords talks I'm going
to call out Twill as where you should start coding any new YARN app
-stay at the Dijkstra and Knuth layer and avoid going near the Lamport
layer.


- Twill can run existing Java applications, even if they were not
developed against Twill APIs and do not make use of Twill's advanced
features. After all, the existing main() method of Java programs bears a lot
of resemblance to a runnable. For example, we have successfully run Presto
on Twill, and the same should be possible for other distributed services
written in Java (and non-Java language support is on the horizon for Twill
as well).
- Twill does provide very useful features that most distributed apps
need (such as service discovery, lifecycle management, elastic scaling,
etc.). Even if the first step may be to port an existing application
without modification, it is very much possible that these applications -
over time - make use of Twill features.
- Twill is not limited to YARN; in fact its APIs are completely
YARN-independent, and in addition to a YARN-based implementation, there can
well be others - for example, it is not far-fetched to think of a
MesosTwillRunner.

 In this sense, Twill is more generic than Slider, which is optimized
 specifically for a few existing distributed services. At this time, Slider
 is probably more optimized for running - say - HBase or Accumulo than
 Twill. Yet most likely Twill will eventually have all the required
 features,
 and wouldn't it be better if things like ZK-bindings were done in a widely
 used and common way? Twill has the potential to lead us there.


I'd love all YARN apps to collaborate on service registry stuff -I've been playing
with Curator service discovery, which is a good starting point, but I see
where I need to go beyond it. And for it to work, we do need as much
commonality as we can get.

If we can build something driven by the needs of different YARN apps, we
can go to the YARN project itself and say "this is what works for us".

I've just uploaded to Slideshare some slides on Slider, with bits of what I
want from a registry in there
http://www.slideshare.net/steve_l/2014-0414slideroverview



Re: Slider Proposal

2014-04-13 Thread Andreas Neumann
I'd like to comment on what has been said about Twill.

 Twill: handles all the AM logic for running new code packaged as a JAR
with
 an executor method

The goal of Twill is much broader than to support writing new code. Its
goal is to ease the development and deployment of any distributed service
in a cluster. Its impact is most significant for the development of new
applications, but here are some other aspects:

   - Twill can run existing Java applications, even if they were not
   developed against Twill APIs and do not make use of Twill's advanced
   features. After all, the existing main() method of Java programs bears a lot
   of resemblance to a runnable (see the sketch after this list). For example,
   we have successfully run Presto on Twill, and the same should be possible
   for other distributed services written in Java (and non-Java language
   support is on the horizon for Twill as well).
   - Twill does provide very useful features that most distributed apps
   need (such as service discovery, lifecycle management, elastic scaling,
   etc.). Even if the first step may be to port an existing application
   without modification, it is very much possible that these applications -
   over time - make use of Twill features.
   - Twill is not limited to YARN; in fact its APIs are completely
   YARN-independent, and in addition to a YARN-based implementation, there can
   well be others - for example, it is not far-fetched to think of a
   MesosTwillRunner.
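
A minimal sketch of the "existing main() as a runnable" point from the first
bullet (the ZK connect string and class name are made up, and real deployments
need classpath, logging and security details omitted here):

  import org.apache.hadoop.yarn.conf.YarnConfiguration;
  import org.apache.twill.api.AbstractTwillRunnable;
  import org.apache.twill.api.TwillController;
  import org.apache.twill.api.TwillRunnerService;
  import org.apache.twill.yarn.YarnTwillRunnerService;

  public class EchoService extends AbstractTwillRunnable {

    @Override
    public void run() {
      // An ordinary main()-style body goes here; Twill supplies the YARN
      // ApplicationMaster, container launch and lifecycle plumbing around it.
      System.out.println("running inside a Twill-managed container");
    }

    public static void main(String[] args) throws Exception {
      TwillRunnerService runner =
          new YarnTwillRunnerService(new YarnConfiguration(), "zkhost:2181");
      runner.start();

      // Deploy this runnable to the cluster and wait for it to finish.
      TwillController controller = runner.prepare(new EchoService()).start();
      controller.awaitTerminated();
    }
  }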

In this sense, Twill is more generic than Slider, which is optimized
specifically for a few existing distributed services. At this time, Slider
is probably more optimized for running - say - HBase or Accumulo than
Twill. Yet most likely Twill will eventually have all the required features,
and wouldn't it be better if things like ZK-bindings were done in a widely
used and common way? Twill has the potential to lead us there.

In the meantime, Slider can fill the gap.

Cheers -Andreas.


On Sat, Apr 12, 2014 at 3:38 PM, Roman Shaposhnik r...@apache.org wrote:

 On Sat, Apr 12, 2014 at 11:58 AM, Andrew Purtell apurt...@apache.org
 wrote:
   The reason I ask is I'm wondering how Slider differentiates from
 projects
   like Apache Twill or Apache Bigtop that are already existing vehicles
 for
   achieving the aims discussed in the Slider proposal.
 
 
  Twill: handles all the AM logic for running new code packaged as a JAR
 with
  an executor method
  Bigtop: stack testing
 
 
  As a Bigtop committer, I disagree with this narrow interpretation of the
  scope of the project, but this is my personal opinion and I am not PMC...

 A strong +1 here! Bigtop attacks a problem of packaging and deploying
 Hadoop stacks from a classical UNIX packaging background. We are
 also slowly moving into container/VM/OSv packaging territory which
 could be an extremely exciting way of side-stepping the general installer
 issues (something that Ambari struggles mightily with).

 Something like Slider tries to leapfrog and side-step UNIX packaging and
 deployment altogether. This is an interesting take on the problem, but
 ultimately the jury is still out on what the level of adoption for
 "everything is now YARN" will be.

 At the end of the day, we will need both for a really long time.

  For example, we package Hadoop core and ecosystem services both for
  deployment, have Puppet based deployment automation (which can be used
 more
  generally than merely for setting up test clusters), and I have been
  considering filing JIRAs to tie in cgroups at the whole stack level here.
  What is missing of course is a hierarchical model for resource
 management,
  and tools within the components for differentiated service levels, but
 that
  is another discussion.

 On that note, I find YARN's attitude towards cgroups to be, how shall I put it,
 optimistic. If you look carefully you can see that the Linux community
 has completely given up on pretending that one can use naked cgroup
 trees for reliable resource partitioning:
 http://www.freedesktop.org/wiki/Software/systemd/PaxControlGroups/

 It is now clear to me that the path Linux distros are endorsing is via
 the brokers such as systemd:

 http://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/

 As such I'd see Bigtop providing quite a bit of value out of the box
 via tight integration with systemd. YARN, at the moment, is in a trickier
 situation.

  HBase and Accumulo do have their own ZK binding mechanism, so don't really
  need their own registry. But to work with their data you do need the
  relevant client apps. I would like to have some standard for at least
  publishing the core binding information in a way that could be parsed by
  any client app (CLI, web UI, other in-cluster apps)
 
 
  +1 to such a standard.

 I've been known to advocate Apache Helix as a standard API to build
 exactly that type of distributed application architecture. In fact, if only
 I had more spare time on my hands, I'd totally prototype Helix-based
 YARN APIs.

Re: Slider Proposal

2014-04-12 Thread Steve Loughran
On 10 April 2014 16:28, Andrew Purtell apurt...@apache.org wrote:

 Hi Steve,

 Does Slider target the deployment and management of components/projects in
 the Hadoop project itself? Not just the ecosystem examples mentioned in the
 proposal? I don't see this mentioned in the proposal.


no.

That said, some of the stuff I'm prototyping on a service registry should
be usable for existing code -there's no reason why a couple of ZooKeeper
arguments shouldn't be enough to look up the bindings for HDFS, YARN, etc.

I've not done much there -currently seeing how well Curator service
discovery works- so assistance would be welcome.



 The reason I ask is I'm wondering how Slider differentiates from projects
 like Apache Twill or Apache Bigtop that are already existing vehicles for
 achieving the aims discussed in the Slider proposal.


Twill: handles all the AM logic for running new code packaged as a JAR with
an executor method
Bigtop: stack testing


 Tackling
 cross-component resource management issues could certainly be that, but
 only if core Hadoop services are also brought into the deployment and
 management model, because IO pathways extend over multiple layers and
 components. You mention HBase and Accumulo as examples. Both are HDFS
 clients. Would it be insufficient to reserve or restrict resources for e.g.
 the HBase RegionServer without also considering the HDFS DataNode?


IO quotas are a tricky one -you can't cgroup-throttle a container for HDFS
IO as it takes place on local and remote DN processes. Without doing some
priority queuing in the DNs, we can hope for some labelling of nodes in the
YARN cluster so you can at least isolate the high-SLA apps from IO-intensive
but lower-priority code.


 Do the
 HDFS DataNode and HBase RegionServer have exactly the same kind of
 deployment, recovery/restart, and dynamic scaling concerns?


DNs react to loss of the NN by spinning on the cached IP address, or, in
HA, on the defined failover address. Now, if we did support ZK lookup of the NN
IPC and Web ports, we could consider an alternate failure mode where the DNs
intermittently poll the ZK bindings during the spin cycle.

HBase and Accumulo do have their own ZK binding mechanism, so don't really
need their own registry. But to work with their data you do need the
relevant client apps. I would like to have some standard for at least
publishing the core binding information in a way that could be parsed by
any client app (CLI, web UI, other in-cluster apps)



 Or are these
 sort of considerations outside the Slider proposal scope?



Re: Slider Proposal

2014-04-12 Thread Andrew Purtell
On Sat, Apr 12, 2014 at 3:27 AM, Steve Loughran ste...@hortonworks.comwrote:

 On 10 April 2014 16:28, Andrew Purtell apurt...@apache.org wrote:

  Hi Steve,
 
  Does Slider target the deployment and management of components/projects
 in
  the Hadoop project itself? Not just the ecosystem examples mentioned in
 the
  proposal? I don't see this mentioned in the proposal.
 

 no.


It seems what you propose for other Hadoop ecosystem components with Slider
applies to some parts of core.



 That said, some of the stuff I'm prototyping on a service registry should
 be usable for existing code -there's no reason why a couple of ZooKeeper
 arguments shouldn't be enough to look up the bindings for HDFS, YARN, etc.

 I've not done much there -currently seeing how well Curator service
 discovery works- so assistance would be welcome.


 
  The reason I ask is I'm wondering how Slider differentiates from projects
  like Apache Twill or Apache Bigtop that are already existing vehicles for
  achieving the aims discussed in the Slider proposal.


 Twill: handles all the AM logic for running new code packaged as a JAR with
 an executor method
 Bigtop: stack testing


As a Bigtop committer, I disagree with this narrow interpretation of the
scope of the project, but this is my personal opinion and I am not PMC...

For example, we package Hadoop core and ecosystem services both for
deployment, have Puppet based deployment automation (which can be used more
generally than merely for setting up test clusters), and I have been
considering filing JIRAs to tie in cgroups at the whole stack level here.
What is missing of course is a hierarchical model for resource management,
and tools within the components for differentiated service levels, but that
is another discussion.



  Tackling
  cross-component resource management issues could certainly be that, but
  only if core Hadoop services are also brought into the deployment and
  management model, because IO pathways extend over multiple layers and
  components. You mention HBase and Accumulo as examples. Both are HDFS
  clients. Would it be insufficient to reserve or restrict resources for
 e.g.
  the HBase RegionServer without also considering the HDFS DataNode?


 IO quotas are a tricky one -you can't cgroup-throttle a container for HDFS
 IO as it takes place on local and remote DN processes. Without doing some
 priority queuing in the DNs, we can hope for some labelling of nodes in the
 YARN cluster so you can at least isolate the high-SLA apps from IO-intensive
 but lower-priority code.


Yes. Do you see this as something Slider could motivate and drive?



  Do the
  HDFS DataNode and HBase RegionServer have exactly the same kind of
  deployment, recovery/restart, and dynamic scaling concerns?


 DNs react to loss of the NN by spinning on the cached IP address, or, in
 HA, on the defined failover address. Now, if we did support ZK lookup of the NN
 IPC and Web ports, we could consider an alternate failure mode where the DNs
 intermittently poll the ZK bindings during the spin cycle.


Yes. But to my original question, I see a high degree of similarity in
terms of management and operational considerations even if the mechanism isn't
quite there or would need tweaking. Again, do you see this as something
Slider could motivate and drive perhaps?

HBase and Accumulo do have their own ZK binding mechanism, so don't really
 need their own registry. But to work with their data you do need the
 relevant client apps. I would like to have some standard for at least
 publishing the core binding information in a way that could be parsed by
 any client app (CLI, web UI, other in-cluster apps)


+1 to such a standard.



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: Slider Proposal

2014-04-12 Thread Roman Shaposhnik
On Sat, Apr 12, 2014 at 11:58 AM, Andrew Purtell apurt...@apache.org wrote:
  The reason I ask is I'm wondering how Slider differentiates from projects
  like Apache Twill or Apache Bigtop that are already existing vehicles for
  achieving the aims discussed in the Slider proposal.


 Twill: handles all the AM logic for running new code packaged as a JAR with
 an executor method
 Bigtop: stack testing


 As a Bigtop committer, I disagree with this narrow interpretation of the
 scope of the project, but this is my personal opinion and I am not PMC...

A strong +1 here! Bigtop attacks a problem of packaging and deploying
Hadoop stacks from a classical UNIX packaging background. We are
also slowly moving into container/VM/OSv packaging territory which
could be an extremely exciting way of side-stepping the general installer
issues (something that Ambari struggles mightily with).

Something like Slider tries to leapfrog and side-step UNIX packaging and
deployment altogether. This is an interesting take on the problem, but
ultimately the jury is still out on what the level of adoption for
"everything is now YARN" will be.

At the end of the day, we will need both for a really long time.

 For example, we package Hadoop core and ecosystem services both for
 deployment, have Puppet based deployment automation (which can be used more
 generally than merely for setting up test clusters), and I have been
 considering filing JIRAs to tie in cgroups at the whole stack level here.
 What is missing of course is a hierarchical model for resource management,
 and tools within the components for differentiated service levels, but that
 is another discussion.

On that note, I find YARN's attitude towards cgroups to be, how shall I put it,
optimistic. If you look carefully you can see that the Linux community
has completely given up on pretending that one can use naked cgroup
trees for reliable resource partitioning:
http://www.freedesktop.org/wiki/Software/systemd/PaxControlGroups/

It is now clear to me that the path Linux distros are endorsing is via
the brokers such as systemd:
http://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/

As such I'd see Bigtop providing quite a bit of value out of the box
via tight integration with systemd. YARN, at the moment, is in a trickier
situation.

 HBase and Accumulo do have their own ZK binding mechanism, so don't really
 need their own registry. But to work with their data you do need the
 relevant client apps. I would like to have some standard for at least
 publishing the core binding information in a way that could be parsed by
 any client app (CLI, web UI, other in-cluster apps)


 +1 to such a standard.

I've been known to advocate Apache Helix as a standard API to build
exactly that type of distributed application architecture. In fact, if only
I had more spare time on my hands, I'd totally prototype Helix-based
YARN APIs.

Thanks,
Roman.




Re: Slider Proposal

2014-04-10 Thread Andrew Purtell
Hi Steve,

Does Slider target the deployment and management of components/projects in
the Hadoop project itself? Not just the ecosystem examples mentioned in the
proposal? I don't see this mentioned in the proposal.

The reason I ask is I'm wondering how Slider differentiates from projects
like Apache Twill or Apache Bigtop that are already existing vehicles for
achieving the aims discussed in the Slider proposal. Tackling
cross-component resource management issues could certainly be that, but
only if core Hadoop services are also brought into the deployment and
management model, because IO pathways extend over multiple layers and
components. You mention HBase and Accumulo as examples. Both are HDFS
clients. Would it be insufficient to reserve or restrict resources for e.g.
the HBase RegionServer without also considering the HDFS DataNode? Do the
HDFS DataNode and HBase RegionServer have exactly the same kind of
deployment, recovery/restart, and dynamic scaling concerns? Or are these
sort of considerations outside the Slider proposal scope?


On Mon, Mar 31, 2014 at 11:49 AM, Steve Loughran ste...@hortonworks.comwrote:

 Hi


 For people wondering what's been happening with that Hoya proposal, I've
 got a successor proposal up for discussion.


 https://wiki.apache.org/incubator/SliderProposal

 This proposal -as well as having a different name- is a superset of the
 original draft. It emphasises that making the tool usable by other
 applications via a client API is a key need -it's how some people have been
 using Hoya- and that packaging and service registration and discovery are
 key areas for improvement.


 The code is up on github at https://github.com/hortonworks/slider for
 people to download and play with

 If you look closely you'll see that the packaging is still .hoya, as are a
 lot of the classnames. That's something we'll fix during incubation, along
 with some of the code packaging.


 Comments?

 -Steve





-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: Slider Proposal

2014-04-02 Thread Arun C. Murthy
Looks great, thanks for the update Steve.

Arun


 On Mar 31, 2014, at 8:49 PM, Steve Loughran ste...@hortonworks.com wrote:
 
 Hi
 
 
 For people wondering what's been happening with that Hoya proposal, I've
 got a successor proposal up for discussion.
 
 
 https://wiki.apache.org/incubator/SliderProposal
 
 This proposal -as well as having a different name- is a superset of the
 original draft. It emphasises that making the tool usable by other
 applications via a client API is a key need -it's how some people have been
 using Hoya- and that packaging and service registration and discovery are
 key areas for improvement.
 
 
 The code is up on github at https://github.com/hortonworks/slider for
 people to download and play with
 
 If you look closely you'll see that the packaging is still .hoya, as are a
 lot of the classnames. That's something we'll fix during incubation, along
 with some of the code packaging.
 
 
 Comments?
 
 -Steve
 





Slider Proposal

2014-03-31 Thread Steve Loughran
Hi


For people wondering what's been happening with that Hoya proposal, I've
got a successor proposal up for discussion.


https://wiki.apache.org/incubator/SliderProposal

This proposal -as well as having a different name- is a superset of the
original draft. It emphasises that making the tool usable by other
applications via a client API is a key need -it's how some people have been
using Hoya- and that packaging and service registration and discovery are
key areas for improvement.


The code is up on github at https://github.com/hortonworks/slider for
people to download and play with

If you look closely you'll see that the packaging is still .hoya, as are a
lot of the classnames. That's something we'll fix during incubation, along
with some of the code packaging.


Comments?

-Steve
