[ https://issues.apache.org/jira/browse/HADOOP-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12798875#action_12798875 ]

Steve Loughran commented on HADOOP-6483:
----------------------------------------

# You should decouple provisioning/decommissioning from the workflow stuff 
where possible. That said, it helps your scheduler to have a lot of awareness 
of node cost and locality. A good interface is needed there.
# I wasn't aware of a standard RESTy BES. Can you provide links? A sketch of 
one possible mapping follows this list.
# This is a good opportunity to talk to the OGF people and get their input and 
engineering support. 
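
On "RESTy BES": for discussion only, the core BES operations (CreateActivity, 
GetActivityStatuses, TerminateActivities) could plausibly map onto HTTP verbs 
along the lines of the hypothetical JAX-RS sketch below. The resource paths 
and media types are assumptions, not a published binding.

{code:java}
// Hypothetical JAX-RS sketch of how the OGSA-BES port types might map onto
// HTTP verbs. Paths and media types are assumptions for discussion, not a
// published RESTy BES binding.
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;

@Path("/bes/activities")
public interface RestyBes {

    // CreateActivity: POST a JSDL document, get 201 Created plus a Location
    // header pointing at the new activity resource.
    @POST
    @Consumes("application/xml")
    Response createActivity(String jsdlDocument);

    // GetActivityStatuses, one activity at a time: GET its status document.
    @GET
    @Path("{activityId}/status")
    @Produces("application/xml")
    String getActivityStatus(@PathParam("activityId") String activityId);

    // TerminateActivities: DELETE the activity resource.
    @DELETE
    @Path("{activityId}")
    Response terminateActivity(@PathParam("activityId") String activityId);
}
{code}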


> Provide Hadoop as a Service based on standards
> ----------------------------------------------
>
>                 Key: HADOOP-6483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6483
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Yang Zhou
>
> Hadoop as a Service provides a standards-based web services interface that 
> layers on top of Hadoop on Demand and allows Hadoop jobs to be submitted via 
> popular schedulers, such as Sun Grid Engine (SGE), Platform LSF, and Microsoft 
> HPC Server 2008, to local or remote Hadoop clusters.  This allows 
> multiple Hadoop clusters within an organization to be efficiently shared and 
> provides flexibility, allowing remote Hadoop clusters, offered as Cloud 
> services, to be used for experimentation and burst capacity. HaaS hides 
> complexity, allowing users to submit many types of compute or data intensive 
> work via a single scheduler without actually knowing where it will be done. 
> Additionally providing a standards-based front-end to Hadoop means that users 
> would be able to choose HaaS providers easily, without being locked in by 
> proprietary interfaces such as Amazon's MapReduce service.  
> Our HaaS implementation uses the OGF High Performance Computing Basic Profile 
> standard to define interoperable job submission descriptions and management 
> interfaces to Hadoop. It uses Hadoop on Demand to provision capacity. Our 
> HaaS implementation also supports file stage-in and stage-out over protocols 
> such as FTP, SCP and GridFTP.
> It also provides a suite of RESTful interfaces compliant with HPC-BP (a 
> client-side sketch follows below).
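
For concreteness, here is a minimal client-side sketch of what submitting a 
Hadoop job through such a RESTful HPC-BP interface might look like. The host 
name and activities resource layout are assumptions carried over from the 
sketch in the comment above; only the JSDL and HPC Profile Application 
namespaces come from the OGF specifications.

{code:java}
// Illustrative client sketch: submitting a Hadoop job as a JSDL/HPC Profile
// Application activity over a hypothetical RESTful HPC-BP endpoint. The host
// name, resource path and staging URI are invented for this example.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HaasSubmitSketch {
    public static void main(String[] args) throws Exception {
        // Minimal JSDL using the HPC Profile Application schema; the
        // DataStaging element pulls the job jar in over FTP before execution.
        String jsdl =
            "<jsdl:JobDefinition xmlns:jsdl=\"http://schemas.ggf.org/jsdl/2005/11/jsdl\"\n"
          + "    xmlns:hpcpa=\"http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa\">\n"
          + "  <jsdl:JobDescription>\n"
          + "    <jsdl:Application>\n"
          + "      <hpcpa:HPCProfileApplication>\n"
          + "        <hpcpa:Executable>bin/hadoop</hpcpa:Executable>\n"
          + "        <hpcpa:Argument>jar</hpcpa:Argument>\n"
          + "        <hpcpa:Argument>wordcount.jar</hpcpa:Argument>\n"
          + "      </hpcpa:HPCProfileApplication>\n"
          + "    </jsdl:Application>\n"
          + "    <jsdl:DataStaging>\n"
          + "      <jsdl:FileName>wordcount.jar</jsdl:FileName>\n"
          + "      <jsdl:Source>\n"
          + "        <jsdl:URI>ftp://staging.example.org/jobs/wordcount.jar</jsdl:URI>\n"
          + "      </jsdl:Source>\n"
          + "    </jsdl:DataStaging>\n"
          + "  </jsdl:JobDescription>\n"
          + "</jsdl:JobDefinition>\n";

        // POST the JSDL to the activities resource: the REST analogue of
        // the BES CreateActivity operation.
        URL endpoint = new URL("http://haas.example.org/bes/activities");
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/xml");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jsdl.getBytes(StandardCharsets.UTF_8));
        }

        // A 201 Created with a Location header would identify the new
        // activity; subsequent GETs on it would report job status.
        System.out.println("HTTP " + conn.getResponseCode()
                + ", activity: " + conn.getHeaderField("Location"));
    }
}
{code}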

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
