I'm trying to ssh into the base CentOS 32-bit machine as user root with the key file, but it asks for a password. I tried my admin passwords from both Scalr and AWS; permission denied.
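For what it's worth, when key-based auth silently falls back to a password prompt, the key file itself is a common culprit. A minimal sketch (the key path and hostname are placeholders; on some AMIs the login user is `ec2-user` or `ubuntu` rather than `root`):

```shell
# Placeholder key path and host; substitute your own values.
KEY="$HOME/.ssh/scalr-key.pem"
HOST="ec2-xx-xx-xx-xx.compute-1.amazonaws.com"

# ssh refuses a private key that is readable by others and quietly
# falls back to password auth, which looks exactly like this symptom.
chmod 600 "$KEY"

# -v shows which keys are offered and why authentication falls through.
ssh -v -i "$KEY" root@"$HOST"
```

The `-v` output will say whether the key was offered at all and whether the server rejected it, which narrows down whether this is a wrong user, a wrong key, or a permissions problem.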
On Mon, Jun 11, 2012 at 6:57 PM, Sree <[email protected]> wrote:
> Is there a default username and password to ssh into the VMs? I tried
> with root and the .pem file, which fails.

On Mon, Jun 11, 2012 at 6:33 PM, Sree <[email protected]> wrote:
> Yes Nick, I'm trying to ssh into one of these machines in this 15-20
> minute window. My admin password is not working. Will post the
> scalarizr logs shortly.

On Mon, Jun 11, 2012 at 6:17 PM, Nick Toursky <[email protected]> wrote:
> Scalarizr logs can be found at /var/log/scalarizr*

On Mon, Jun 11, 2012 at 3:17 PM, Sree <[email protected]> wrote:
> I'm attaching the system log from the terminated instance herewith
> (system.log). Of this, I notice one entry that looks like an error:
>
>     * Starting Scalarizr scalarizr                          [ OK ]
>     landscape-client is not configured, please run landscape-config.
>
> Any pointers?
>
> Regards,
> Sreehari

On Mon, Jun 11, 2012 at 5:30 PM, Sree <[email protected]> wrote:
> Also I get these entries in the event log:
>
> Jun 11, 2012 02:54:06 | LAMP Farm Jun 8-A | /EventObserver
>     Cannot create volumeConfig for PromoteToMaster message:
>     Scalr_Storage_Volume Volume # not found in database
>
> Jun 11, 2012 02:52:06 | LAMP Farm Jun 8-A | /FarmLog
>     Terminating server 'a07f5f1d-92d2-41d9-8988-3153b97f0897'
>     (Platform: ec2) (Poller).
>
> Jun 11, 2012 02:50:06 | LAMP Farm Jun 8-A | /EventObserver
>     Cannot create volumeConfig for PromoteToMaster message:
>     Scalr_Storage_Volume Volume # not found in database
>
> Jun 11, 2012 02:49:55 | LAMP Farm Jun 8-A | a07f5f1d-92d2-41d9-8988-3153b97f0897/scalarizr.handlers
>     Starting app

On Mon, Jun 11, 2012 at 5:23 PM, Sree <[email protected]> wrote:
> Hi Nick,
> Thank you for the response. The event log gives this message: "Server
> '4569884f-1568-4209-8d11-69f12b285166' did not send 'hostUp' event in
> 2400 seconds after launch (Try increasing timeouts in role settings).
> Considering it broken. Terminating instance."
>
> Regards,
> Sreehari

On Mon, Jun 11, 2012 at 5:10 PM, Nick Toursky <[email protected]> wrote:
> Hi there,
>
> Did you check the system/event logs?

On Mon, Jun 11, 2012 at 2:38 PM, Sreehari <[email protected]> wrote:
> Hi all,
> I've been trying to fix this problem for the last 3 days; some help
> here would be highly appreciated.
>
> Context:
> I downloaded the latest Scalr from GitHub as a zipped archive and
> followed the steps listed at
> http://wiki.scalr.net/pages/viewpage.action?pageId=327743.
> Additionally, just to be up to date, I added the two extra crons
> listed at http://wiki.scalr.net/display/docs/Open-Source+Installation.
> My Scalr is installed on a machine with a public IP and the following
> ports open: 80, 8013, 8014.
> Now when I create and launch a sample LAMP farm, my servers are
> caught in a cycle of start -> initializing -> pending terminate ->
> terminated, each cycle spawning its own server (one server is
> terminated and another is created, then terminated, and so on). For
> each instance created in Scalr, I can see the corresponding one being
> created and then terminated in the AWS management console.
>
> Has anybody met and resolved this issue before?
>
> What am I missing? Is there any DNS setting to be done? Is there a
> good/complete tutorial for a self-hosted/open-source Scalr
> deployment? Please help!

--
You received this message because you are subscribed to the Google Groups "scalr-discuss" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/scalr-discuss?hl=en.
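Since each broken instance only lives for the 2400-second window, it helps to pull any errors out of the scalarizr logs quickly once you do get a shell. A minimal sketch (the /var/log/scalarizr* path comes from Nick's note; the error patterns are assumptions):

```shell
# Show the most recent error-looking lines across all scalarizr logs.
# -h: suppress filename prefixes; -i: case-insensitive; -E: extended regex.
grep -hiE 'error|traceback|exception' /var/log/scalarizr* | tail -n 40
```

Pasting that output into the thread is usually far more useful than the system boot log, since scalarizr is the component that has to report hostUp.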
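The "did not send 'hostUp' event in 2400 seconds" message means scalarizr on the instance never delivered its message back to the Scalr server, and scalarizr talks back over the messaging ports (8013/8014 in the setup described above). A quick reachability check from the launched instance can rule out a security-group or firewall problem before you start raising timeouts. A sketch, assuming bash and a hypothetical hostname:

```shell
# Hypothetical address of the self-hosted Scalr server; substitute your own.
SCALR_HOST="scalr.example.com"

# Attempt a plain TCP connect to each port the installation opened.
# /dev/tcp/<host>/<port> is a bash feature; timeout(1) bounds each try to 5s.
for port in 80 8013 8014; do
  if timeout 5 bash -c "exec 3<>/dev/tcp/$SCALR_HOST/$port" 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port BLOCKED"
  fi
done
```

If any port prints BLOCKED, fix the EC2 security group or host firewall first; if all three are reachable, the problem is more likely in the scalarizr configuration or logs on the instance.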
