I am having this same problem myself. I've attached a Python script I wrote to
work around it (it talks to the OpenStack API directly), with the credentials
kept in hidden, environment-level global variables. You add a global variable
"volumes" (a space-separated list of volume sizes) at either the Farm scope
(if you want every system in the farm to have the same number and size of
volumes) or the Farm-role scope.
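To make the usage a bit more concrete, here is roughly how the script (attached at the end of this message) interprets that variable; the sizes and hostname below are made-up examples, but the naming scheme is the one the script uses:

    # Hypothetical values: a global variable volumes = "100 50" on a server named web-1.
    sizes = "100 50".split()        # one Cinder volume per entry -> ['100', '50']
    hostname = "web-1"
    for i, size in enumerate(sizes):
        letter = chr(ord('b') + i)  # volumes are named <hostname>-sdb, <hostname>-sdc, ...
        print("%s-sd%s: %s GB" % (hostname, letter, size))
    # -> web-1-sdb: 100 GB
    # -> web-1-sdc: 50 GB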
On Wednesday, October 5, 2016 at 9:45:04 AM UTC-5, Randy Black wrote:
>
> Is this going to be back ported or is there a work around proposed?
> Either a cinder client upgrade on the scalr host or something of that
> nature?
>
> Thanks!
>
> On Friday, September 2, 2016 at 11:11:22 AM UTC-5, Marc O'Brien wrote:
>>
>> Hi Patrick,
>>
>> It appears that this is a bug with the current version of Open Source
>> Scalr that has since been resolved in Enterprise Scalr as well as Hosted
>> Scalr. When testing using the latest agent and insecure OpenStack this
>> issue does not present in Hosted or Enterprise Scalr.
>>
>> Many thanks,
>> Wm. Marc O'Brien
>> Scalr Technical Support
>>
>> On Wednesday, August 31, 2016 at 9:32:02 AM UTC-6, Patrick Vinas wrote:
>>>
>>> As specified in my original post, the issue persists whether "Enable SSL
>>> certificate verification" is checked or unchecked (it's currently
>>> unchecked).
>>>
>>> On Monday, August 22, 2016 at 1:59:24 PM UTC-5, Marc O'Brien wrote:
>>>>
>>>> Hi Patrick,
>>>>
>>>> Can you provide a screenshot of your Openstack cloud credentials
>>>> configuration? You can mask your credentials. We are looking for the
>>>> current status of the checkbox for "Enable SSL certificate verification for
>>>> Keystone endpoints."
>>>>
>>>> Many thanks,
>>>> Wm. Marc O'Brien
>>>> Scalr Technical Support
>>>>
>>>>
>>>> On Wednesday, August 17, 2016 at 3:11:10 PM UTC-6, Patrick Vinas wrote:
>>>>>
>>>>> Thanks, Igor and Marc. Debug log is attached, with domain obfuscated.
>>>>>
>>>>> On Monday, August 15, 2016 at 4:53:40 PM UTC-5, DicsyDel wrote:
>>>>>>
>>>>>> We will need the /var/log/scalarizr_debug.log file from the failed VM. You
>>>>>> can send it to igor [at] scalr.com
>>>>>>
>>>>>> Thanks,
>>>>>> Igor
>>>>>>
>>>>>> On 15 August 2016 at 08:59, Marc O'Brien <[email protected]> wrote:
>>>>>> > Hi Patrick,
>>>>>> >
>>>>>> > Could you attach the full error log here? There was previously a similar
>>>>>> > issue related to Python version that has since been resolved in Enterprise
>>>>>> > Scalr 6.0.1. Error log should hopefully provide a bit more context for us.
>>>>>> >
>>>>>> > Many thanks,
>>>>>> > Wm. Marc O'Brien
>>>>>> > Scalr Technical Support
>>>>>> >
>>>>>> >
>>>>>> > On Saturday, August 13, 2016 at 11:12:48 AM UTC-6, Patrick Vinas wrote:
>>>>>> >>
>>>>>> >> I've got both Openstack and AWS environments in an account in scalr
>>>>>> >> (5.11.22 Community, scalarizr agent v. 4.8.2 (stable) and 4.9.9 (latest)).
>>>>>> >>
>>>>>> >> The only issue I'm having with launching a farm is with roles that are
>>>>>> >> launching in Openstack with a storage volume attached. All AWS roles, and
>>>>>> >> all Openstack roles without cinder storage volumes, launch successfully.
>>>>>> >>
>>>>>> >> The error in the UI Servers->Initialization progress is "SSL exception
>>>>>> >> connecting to https://<controller>:5000/v3/auth/tokens: [SSL:
>>>>>> >> CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)"
>>>>>> >>
>>>>>> >> First tried adding my internal CA cert to the system trust for the image,
>>>>>> >> to no effect. I can ssh into the failed instances, and I've verified that I
>>>>>> >> can connect to the keystone service at that port (curl
>>>>>> >> https://<controller>:5000/v3 returns as expected). There aren't any errors
>>>>>> >> in the keystone or cinder logs, and the Scalr internal messaging and system
>>>>>> >> logs look fine. If I turn off SSL verification of endpoints in the
>>>>>> >> environment settings, this error persists.
>>>>>> >>
>>>>>> >> Does anyone have any ideas for further troubleshooting?
>>>>>> >
#!/usr/bin/env python
'''
Add a global variable "volumes" (a space-separated string of volume sizes in GB, e.g. "100 150 75").
For an identical number and size of volumes on every system in the farm, add it at the Farm scope.
Otherwise, add it at the Farm-role scope.
'''
import os, requests, json, sys
from time import sleep
volSizes = os.environ.get("volumes", "").split()
hostname = os.environ.get("SCALR_SERVER_HOSTNAME", "")
driveLetter = 'b'
# Keystone v3 password-auth request body ("users" is the domain name used in this deployment)
payload = json.dumps({"auth": {
    "identity": {"methods": ["password"], "password": {"user": {
        "domain": {"name": "users"},
        "name": os.environ.get("OS_API_USER", ""),
        "password": os.environ.get("OS_API_PWD", "")}}},
    "scope": {"project": {
        "domain": {"name": "users"},
        "name": os.environ.get("OS_API_PROJECT", "")}}}})
s = requests.Session()
url = "https://<controller>"  # Replace with the base URL of your OpenStack controller
s.headers.update({'Content-Type': 'application/json', 'Accept': 'application/json'})
# Authenticate against Keystone; keep the token for all later requests
response = s.post(url + ':5000/v3/auth/tokens', data=payload)
if response.status_code == requests.codes.created:
    print("Authentication successful")
    s.headers.update({'X-Auth-Token': response.headers['X-Subject-Token']})
    project_id = response.json()["token"]["project"]["id"]
else:
    sys.exit("Could not authenticate")
# Get the list of existing volumes in the project, keyed by name
volumes = s.get(url + ':8776/v2/{0}/volumes'.format(project_id)).json()
volList = {}
for volume in volumes["volumes"]:
    volList[volume["name"]] = volume["id"]
# Find this server's Nova instance by matching its name against the Scalr hostname
servers = s.get(url + ':8774/v2.1/{0}/servers'.format(project_id),
                params={"name": os.environ.get("BASENAME", "")}).json()["servers"]
for server in servers:
    if server["name"] in hostname:
        server_id = server["id"]
        hostname = server["name"]
# Create (or reuse) and attach one volume per requested size
for volSize in volSizes:
    volName = hostname + '-sd' + driveLetter
    if volName not in volList:
        # Create the volume
        try:
            v_id = s.post(url + ':8776/v2/{0}/volumes'.format(project_id),
                          data=json.dumps({"volume": {"size": int(volSize), "name": volName}})).json()['volume']['id']
        except (TypeError, KeyError, ValueError):
            sys.exit("Failed to create volume")
    else:
        # Reuse the existing volume's id
        v_id = volList[volName]
    # Wait until the volume status is "available"
    v_status = ''
    while v_status != "available":
        sleep(5)
        try:
            v_status = s.get(url + ':8776/v3/{0}/volumes/{1}'.format(project_id, v_id)).json()["volume"]["status"]
        except Exception:
            sys.exit("Error getting volume status")
    # Attach the volume to this server
    vol_data = json.dumps({"volumeAttachment": {"volumeId": v_id}})
    s.post(url + ':8774/v2.1/{0}/servers/{1}/os-volume_attachments'.format(project_id, server_id), data=vol_data)
    driveLetter = chr(ord(driveLetter) + 1)
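
If you want to try the script outside of Scalr first, you can populate the same environment variables it reads before launching it. The variable names below are taken straight from the script; the values are placeholders, and the filename is just whatever you saved the script as:

    import os, subprocess
    # Placeholder values; substitute your own OpenStack credentials and Scalr names.
    os.environ.update({
        "volumes": "100 50",
        "OS_API_USER": "scalr-service",
        "OS_API_PWD": "********",
        "OS_API_PROJECT": "my-project",
        "BASENAME": "myfarm",
        "SCALR_SERVER_HOSTNAME": "myfarm-1",
    })
    subprocess.call(["python", "attach_volumes.py"])  # hypothetical filename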