> I tried to totally terminate the app role and www roles on my farm
> which I could not do (this means changing min instances to 0 which is
> not allowed).

You can untick the role in the Farm -> Edit tree and all of its instances will be terminated.
> Why was that happening? I can understand scalr starting a new instance
> to compensate on the min instances number that was 1... but why was it
> terminating over and over again. Naturally this created an excess
> charge with AWS due to the fact that a new instance was generated (it
> was probably not just a reboot).

Your script app_make_index_dirs was executing on hostInit with a 1200-second timeout. Scalr is supposed to adjust the role's launch timeout according to the summed timeouts of all scripts assigned to hostInit, but it was not doing so because of a bug that has just been fixed. As a result, your script was still running when the un-adjusted 300-second hostUp timeout expired, so Scalr considered the instance broken and terminated it. You can see this in the logs.

Event log:

21-12-2008 01:46:41 WARN PollerProcess Instance 'i-c26cd4ab' did not send 'hostUp' event in 300 seconds after launch. Considering it broken. Terminating instance.

Scripting log:

2008-12-21 01:44:19 OnHostUp zibabappcom i-c26cd4ab Executing '/usr/local/bin/scalr-scripting.HBO2465/app_make_index_dirs' synchronously, with timeout 1200 seconds.
<nothing here>

On 21.12.08 09:28, "afishler" <afish...@gmail.com> wrote:
>
> I tried to totally terminate the app role and www roles on my farm
> which I could not do (this means changing min instances to 0 which is
> not allowed). What actually happened eventually was that an app server
> entered into a reboot cycle which caused it to keep terminating and
> restarting for the whole weekend.
>
> Why was that happening? I can understand scalr starting a new instance
> to compensate on the min instances number that was 1... but why was it
> terminating over and over again. Naturally this created an excess
> charge with AWS due to the fact that a new instance was generated (it
> was probably not just a reboot).
> On Dec 18, 7:55 pm, Alex Kovalyov <alex.koval...@gmail.com> wrote:
>> I think Scalr should offer you to decrease MinInstances for corresponding
>> role when you terminate instances.
>> You can decrease it by hands anyway - to prevent Scalr from launching
>> replacements.
>>
>> On 18.12.08 19:40, "afishler" <afish...@gmail.com> wrote:
>>
>>> By specifically terminating several instances?
>>
>>> Seems like they keep starting up as new instances... Is my only option
>>> removing them from the farm?