INSTALLATION ID: 03480c7d
VERSION: 5.10.21 (Community Edition)
REVISION: 968cbb8 (Sat, 26 Dec 2015 23:50:10 +0300)
FULL REVISION HASH: 968cbb81844c77ac60b1623d6b12706442f5cec0

On Monday, June 13, 2016 at 12:09:11 PM UTC-4, DicsyDel wrote:
>
> Hi, 
>
> What is your Scalr version? 
>
> Regards, 
> Igor 
>
> On 13 June 2016 at 18:50, <[email protected]> wrote: 
> > Kernel version of our Ubuntu 12.04 instance is 3.2.0-40-virtual. 
> > 
> > I saw a comment on an AWS forum saying: 
> > 
> > "For M3 instances, you must specify instance store volumes in the block 
> > device mapping for the instance. 
> > When you launch an M3 instance, we ignore any instance store volumes 
> > specified in the block device 
> > mapping for the AMI." 
> > 
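> > A minimal sketch, not necessarily how Scalr launches instances, of what 
> > that note implies: declare the ephemeral volume in the instance's own 
> > block device mapping at launch time. This assumes boto; the region and 
> > AMI id below are placeholders. 
> > 
> > import boto.ec2 
> > from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType 
> > 
> > # Map ephemeral0 explicitly so an M3 instance actually exposes it 
> > bdm = BlockDeviceMapping() 
> > bdm['/dev/sdb'] = BlockDeviceType(ephemeral_name='ephemeral0') 
> > 
> > conn = boto.ec2.connect_to_region('us-east-1')    # placeholder region 
> > conn.run_instances('ami-xxxxxxxx',                # placeholder AMI id 
> >                    instance_type='m3.medium', 
> >                    block_device_map=bdm) 
> > 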
> > We're also not seeing this error when switching the instance type to 
> > M4.large. Note that when switching instance types (to M4) in the farm 
> > settings we get the following message: 
> > 
> > "Number of ephemeral devices were decreased. Following devices will be 
> > unavailable: ephemeral0" 
> > 
> > I SSHed into the failed instance and noticed that the instance store 
> > volume did show up as /dev/xvdb. 
> > 
> > Looking at the .py code which triggers the error, we see block device 
> > mappings defined by name2device: 
> > 
> > def name2device(name): 
> >     if not name.startswith('/dev'): 
> >         if linux.os.windows: 
> >             return re.sub(r'^sd', 'xvd', name) 
> >         name = os.path.join('/dev', name) 
> >     if name.startswith('/dev/xvd'): 
> >         return name 
> >     if os.path.exists('/dev/sda1') and linux.os.version < (6, 0): 
> >         # see [SCALARIZR-2266] 
> >         return name 
> >     name = name.replace('/sd', '/xvd') 
> >     if storage2.RHEL_DEVICE_ORDERING_BUG: 
> >         name = name[0:8] + chr(ord(name[8])+4) + name[9:] 
> >     return name 
> > 
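> > For illustration, assuming the common path (no /dev/sda1 short-circuit, 
> > no RHEL ordering workaround), the intended translation is: 
> > 
> > >>> name2device('sdb')   # metadata reports bare names like 'sdb' 
> > '/dev/xvdb'              # the Xen block layer exposes it as xvdb 
> > 
> > The error below reports '/dev/sdb' rather than '/dev/xvdb', which suggests 
> > one of the early returns fired instead of the sd -> xvd replacement (an 
> > assumption on our part, not verified). 
> > 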
> > and failure occurs here: 
> > 
> >     def _ensure(self): 
> >         self._check_attr('name') 
> >         try: 
> >             url = 'http://169.254.169.254/latest/meta-data/block-device-mapping/%s' % self.name 
> >             device = urllib2.urlopen(url).read().strip() 
> >         except: 
> >             msg = "Failed to fetch device name for instance store '%s'. %s (%s)" % ( 
> >                 self.name, sys.exc_info()[1], url) 
> >             raise storage2.StorageError, msg, sys.exc_info()[2] 
> >         else: 
> >             device = ebs.name2device(device) 
> >             if fact['os']['name'] != 'windows': 
> >                 if not os.path.exists(device): 
> >                     raise Exception(( 
> >                         "Instance store device {} ({}) doesn't exist. " 
> >                         "Please check that instance type {} supports it").format( 
> >                             device, self.name, 
> >                             __node__['platform'].get_instance_type())) 
> >             self.device = device 
> > 
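> > A standalone sketch, not part of scalarizr; 'ephemeral0' and the sd -> xvd 
> > rewrite are assumptions based on the code above. It reproduces the same 
> > check by hand from inside the instance, using the same Python 2 / urllib2 
> > calls: 
> > 
> > import os 
> > import urllib2 
> > 
> > url = 'http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0' 
> > name = urllib2.urlopen(url).read().strip()   # e.g. 'sdb' 
> > for dev in ('/dev/' + name, '/dev/' + name.replace('sd', 'xvd')): 
> >     print dev, 'exists' if os.path.exists(dev) else 'missing' 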
> > 
> > On Sunday, June 12, 2016 at 2:56:44 AM UTC-4, [email protected] wrote: 
> >> 
> >> We started getting the following error in the "Agent BeforeHostUp phase" 
> >> on an M3.medium instance a few days ago.  Running the same 
> >> farm on a t2.medium does not cause the error (we're using an Ubuntu 12.04 
> >> Precise image).  It seems to have something to do with instance store 
> >> volumes on M3 instances and block device mappings.  Any idea how to fix 
> >> this?  We have RIs so we don't want to move to T2 or other 
> >> instance types. 
> >> 
> >> 2016-06-11 17:37:46,940+00:00 - ERROR - scalarizr.ops.system.init - 
> >> Operation "system.init" (id: 412be8be-66a7-4c98-83ac-0dd6ef2da89b) failed. 
> >> Reason: 
> >> Instance store device /dev/sdb (ephemeral0) doesn't exist. Please check 
> >> that instance type m3.medium supports it 
> >> Traceback (most recent call last): 
> >>   File "/opt/scalarizr/embedded/lib/python2.7/site-packages/scalarizr-4.6.0-py2.7.egg/scalarizr/api/operation.py", line 273, in _in_progress 
> >>     self._completed(self.func(self, *self.func_args, **self.func_kwds)) 
> >>   File "/opt/scalarizr/embedded/lib/python2.7/site-packages/scalarizr-4.6.0-py2.7.egg/scalarizr/handlers/lifecycle.py", line 441, in handler 
> >>     bus.fire("host_init_response", message) 
> >>   File "/opt/scalarizr/embedded/lib/python2.7/site-packages/scalarizr-4.6.0-py2.7.egg/scalarizr/libs/bases.py", line 33, in fire 
> >>     ln(*args, **kwargs) 
> >>   File "/opt/scalarizr/embedded/lib/python2.7/site-packages/scalarizr-4.6.0-py2.7.egg/scalarizr/handlers/block_device.py", line 103, in on_host_init_response 
> >>     self._plug_new_style_volumes(volumes) 
> >>   File "/opt/scalarizr/embedded/lib/python2.7/site-packages/scalarizr-4.6.0-py2.7.egg/scalarizr/handlers/block_device.py", line 127, in _plug_new_style_volumes 
> >>     vol.ensure(mount=bool(vol.mpoint), mkfs=True) 
> >>   File "/opt/scalarizr/embedded/lib/python2.7/site-packages/scalarizr-4.6.0-py2.7.egg/scalarizr/storage2/volumes/base.py", line 88, in ensure 
> >>     self._ensure() 
> >>   File "/opt/scalarizr/embedded/lib/python2.7/site-packages/scalarizr-4.6.0-py2.7.egg/scalarizr/storage2/volumes/ec2_ephemeral.py", line 44, in _ensure 
> >>     device, self.name, __node__['platform'].get_instance_type())) 
> > 
