On Thu, Sep 1, 2011 at 7:58 PM, Per Steffensen <st...@designware.dk> wrote:

> Thanks for your response. See comments below.
>
> Regards, Per Steffensen
>
> Alejandro Abdelnur wrote:
>
>  [moving common-user@ to BCC]
>>
>> Oozie is not HA yet, but it would be relatively easy to make it so. It
>> was designed with that in mind; we even did a prototype.
>>
>>
> Ok, so if it isn't HA out-of-the-box I believe Oozie is too big a framework
> for my needs - I don't need all this workflow stuff - just a plain simple
> job trigger that fires every 5th minute. I guess I will try out something
> smaller like Quartz Scheduler. It also only has HA/cluster support through
> JDBC (JobStore), but I guess I could fairly easily make an
> HDFSFilesJobStore which still holds the properties so that Quartz
> clustering works.
>
> But what I would really like is a scheduling framework that is HA
> out-of-the-box. I guess Oozie is not the solution for me. Does anyone know
> of other frameworks?
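For what it's worth, Quartz clustering with the JDBC JobStore is mostly
configuration once a shared database is in place. A minimal sketch of
quartz.properties (the datasource name "myDS" is a placeholder, and the
Quartz tables must already exist in the shared DB):

```properties
# Minimal clustered Quartz setup - every node points at the same database.
org.quartz.scheduler.instanceName = MyClusteredScheduler
org.quartz.scheduler.instanceId = AUTO

org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```

Every node started with these settings joins the same cluster, and Quartz
then guarantees each trigger fires on only one node per firing.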

This is similar to my requirement, except that I already have Quartz
scheduling my jobs and haven't started using Hadoop yet. I plan to wrap
Quartz jobs to internally call Hadoop jobs. I'm still in the design phase
though. Hopefully it will be successful.
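The wrapping itself can be quite thin. A minimal sketch, assuming the
`hadoop` CLI is on the PATH (the jar and class names below are made up;
with Quartz the same body would live in a Job's execute() method driven by
a CronTrigger):

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HadoopJobTrigger {

    // Command line used to launch the MapReduce job out-of-process.
    // "wordcount.jar" and "WordCount" are placeholders, not real artifacts.
    static List<String> buildCommand(String jar, String mainClass) {
        return List.of("hadoop", "jar", jar, mainClass);
    }

    // Fire the job every 5 minutes. Note this only gives HA if the lone
    // machine running it stays up - hence the coordination discussion above.
    static void scheduleEveryFiveMinutes() {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                new ProcessBuilder(buildCommand("wordcount.jar", "WordCount"))
                        .inheritIO().start().waitFor();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 5, TimeUnit.MINUTES);
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", buildCommand("wordcount.jar", "WordCount")));
        // scheduleEveryFiveMinutes();  // uncomment to actually start the timer
    }
}
```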

>
>  Oozie consists of 2 services, a SQL database to store the Oozie jobs state
>> and a servlet container where Oozie app proper runs.
>>
>> The solution for HA for the database, well, it is left to the database.
>> This
>> means, you'll have to get an HA DB.
>>
>>
> I would really like to avoid having to run a relational database. Couldn't
> I just persist the Oozie job state in files on HDFS?
>
>  The solution for HA for the Oozie app is deploying the servlet container
>> with the Oozie app on more than one box (2 or 3), and fronting them with
>> an HTTP load-balancer.
>>
>> The missing part is that the Oozie lock-service is currently an in-memory
>> implementation. It should be replaced with a ZooKeeper implementation.
>> ZooKeeper could run externally or internally on all Oozie servers. This is
>> what was prototyped long ago.
>>
>>
> Yes, but if I have to do ZooKeeper stuff I could just write the scheduler
> myself and make it run on all/many boxes. The only hard part about it is
> the "locking" thing that makes sure only one job-triggering happens in the
> entire cluster when only one is supposed to happen, and that the
> job-triggering happens no matter how many machines might be down.
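The standard ZooKeeper lock recipe covers exactly that "locking" part: each
contender creates an ephemeral sequential znode under a lock path, the
lowest sequence number owns the lock, and each non-owner watches the node
just below its own. When the owner's machine dies, its ephemeral node
vanishes and the next-lowest takes over. The decision rule itself is tiny -
a sketch with the ZooKeeper client calls left out:

```java
import java.util.Comparator;
import java.util.List;

public class ZkLockRule {

    // Children of the lock znode look like "lock-0000000003"; the contender
    // whose node has the lowest sequence number holds the lock. In real code
    // these names would come from a getChildren() call on the lock path.
    static String currentHolder(List<String> children) {
        return children.stream()
                .min(Comparator.comparingInt(ZkLockRule::seq))
                .orElse(null);
    }

    // Parse the sequence number ZooKeeper appends after the last '-'.
    static int seq(String znodeName) {
        return Integer.parseInt(znodeName.substring(znodeName.lastIndexOf('-') + 1));
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("lock-0000000007", "lock-0000000002", "lock-0000000005");
        System.out.println(currentHolder(nodes)); // prints lock-0000000002
    }
}
```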
>
>  Thanks.
>>
>> Alejandro
>>
>>
>> On Thu, Sep 1, 2011 at 4:14 AM, Ronen Itkin <ro...@taykey.com> wrote:
>>
>>
>>
>>> If I get you right you are asking about installing Oozie as a
>>> distributed and/or HA cluster?!
>>> In that case I am not familiar with an out-of-the-box solution by Oozie.
>>> But I think you can make up a solution of your own, for example:
>>> installing Oozie on two servers on the same partition, which will be
>>> synchronized by DRBD.
>>> You can trigger a "failover" using Linux Heartbeat and that way maintain
>>> a virtual IP.
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Sep 1, 2011 at 1:59 PM, Per Steffensen <st...@designware.dk>
>>> wrote:
>>>
>>>
>>>
>>>> Hi
>>>>
>>>> Thanks a lot for pointing me to Oozie. I have looked a little bit into
>>>> Oozie and it seems like the "component" triggering jobs is called the
>>>> "Coordinator Application". But I see nowhere that this Coordinator
>>>> Application doesn't just run on a single machine, and that it will
>>>> therefore not trigger anything if this machine is down. Can you confirm
>>>> that the "Coordinator Application" role is distributed in a distributed
>>>> Oozie setup, so that jobs get triggered even if one or two machines are
>>>> down?
>>>>
>>>> Regards, Per Steffensen
>>>>
>>>> Ronen Itkin wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> Try to use Oozie for job coordination and workflows.
>>>>>
>>>>> On Thu, Sep 1, 2011 at 12:30 PM, Per Steffensen <st...@designware.dk>
>>>>> wrote:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> I use Hadoop for a MapReduce job in my system. I would like to have
>>>>>> the job run every 5th minute. Is there any "distributed" timer job
>>>>>> facility in Hadoop? Of course I could set up a timer in an external
>>>>>> timer framework (CRON or something like that) that invokes the
>>>>>> MapReduce job. But CRON only runs on one particular machine, so if
>>>>>> that machine goes down my job will not be triggered. Then I could set
>>>>>> up the timer on all or many machines, but I would not like the job to
>>>>>> run in more than one instance every 5th minute, so the timer jobs
>>>>>> would need to coordinate who actually starts the job "this time"
>>>>>> while all the rest do nothing. I guess I could come up with a
>>>>>> solution to that - e.g. writing some "lock" stuff using HDFS files or
>>>>>> by using ZooKeeper. But I would really like it if someone had already
>>>>>> solved the problem and provided some kind of "distributed timer
>>>>>> framework" running in a "cluster", so that I could just register a
>>>>>> timer job with the cluster and then be sure that it is invoked every
>>>>>> 5th minute, no matter if one or two particular machines in the
>>>>>> cluster are down.
>>>>>>
>>>>>> Any suggestions are very welcome.
>>>>>>
>>>>>> Regards, Per Steffensen
>>>>>>
>>>>
>>>>
>>> --
>>> Ronen Itkin
>>> Taykey | www.taykey.com
>>>
>>>
>>>
>>
>>
>>
>
>


-- 
Regards,

Tharindu
