Hi All,
Apologies to those who saw this on the operators list earlier, there is a
bit of new info here & having gotten no response there thought I'd take it
to a wider audience...
I'm almost through my grizzly upgrade. I'd upgraded everything except
nova-compute before upgrading that (ubuntu 12.
Hi All,
My maintenance window is closing and I haven't yet managed the transition I
planned from nova-network to quantum/neutron with ovs plugin. Using Ubuntu
12.04 Cloud archive packages (and puppetlabs openstack modules, though I
had the same results by hand so likely confusion on my part rathe
to debug with a nova-compute log. Are you willing
> to post one somewhere that people could take a look at?
>
> Thanks,
> Michael
>
> On Thu, Aug 8, 2013 at 7:35 AM, Jonathan Proulx wrote:
> > Hi All,
> >
> > Apologies to those who saw this on the operators lis
ova-0:~# ovs-vsctl del-port br-int tapd2799dad-27
root@nova-0:~# ovs-vsctl add-port trunk tapd2799dad-27 tag=2113
This is what I wanted to happen, but clearly I have something confused. Can
quantum/neutron do this, and if so how do I tell it to?
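The intent above (neutron putting the tap on the trunk with the right VLAN tag itself) can be sketched with the provider-network extension; a hedged example, where the physical network name "physnet1" and its bridge mapping are assumptions, and VLAN 2113 is taken from the commands above:

```shell
# Define the network as a provider VLAN so the plugin does the tagging:
quantum net-create vlan2113 \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 2113
# This assumes a matching bridge mapping in the OVS plugin config, e.g.:
#   bridge_mappings = physnet1:trunk
```

With that in place, ports on the network get the tag without any manual ovs-vsctl surgery.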
On Sun, Aug 11, 2013 at 8:07 AM, Jonathan Proulx wr
This is particularly odd since glance was working fine yesterday and to my
knowledge the only thing I did was turn on more compute nodes...
Now when I try and launch an instance it goes almost immediately to error
state with the fault:
{u'message': u'ImageNotFound', u'code': 404, u'details': u'Im
Thanks Rob,
That was it
-Jon
On Sun, Aug 11, 2013 at 6:31 PM, Robert Collins
wrote:
> You need to connect the exterior network to the integration bridge
> yourself. This is in the deployer docs somewhere, I don't recall
> offhand - sorry.
>
> -Rob
>
> On 12 August 2
CE nova.openstack.common.rpc.amqp raise
exc.from_response(resp, body_str)
2013-08-13 12:28:41.317 9593 TRACE nova.openstack.common.rpc.amqp
ImageNotFound: Image 627b4902-f324-4615-a3bb-76f9fd22207a could not be
found.
On Mon, Aug 12, 2013 at 4:24 PM, John Bresnahan wrote:
>
D'oh, of course after the messy trace post I find my typo:
I'd accidentally defined glance_api_servers to be IP rather than IP:PORT
while refactoring my configuration management (I'd suspected it was in
there but had been looking in all the wrong places)
-Jon
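For anyone hitting the same ImageNotFound symptom, the fix amounts to this; the address below is illustrative, but 9292 is the default glance-api port:

```shell
# In nova.conf, glance_api_servers must include the port, not a bare IP:
#   [DEFAULT]
#   glance_api_servers = 192.0.2.10:9292   # not just 192.0.2.10
# Quick sanity check (stock Ubuntu path assumed):
grep glance_api_servers /etc/nova/nova.conf
```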
On Tue, Aug 13, 2013 at 3:59 AM, Michael Still wrote:
> Jonathan, sorry for the slow reply. I had a baby on Friday last week
> instead of keeping up with email. I promise it won't happen again. ;)
>
Congrats, now what the hell are you doing reading this list?
> Did you manage these instances in
Hi All,
I'm trying to find the cinder DellEQLSanISCSIDriver for either Grizzly or
Havana (grizzly for preference). I thought this was brought into the main
tree at https://github.com/openstack/cinder.git but I seem to be wrong
about that.
I've also looked in a bunch of Dell and Mirantis github b
got a quick answer on IRC though (thanks eharney); for any who find this
thread in the future, the answer is that it is currently under review at
https://review.openstack.org/#/c/43944/
On Wed, Sep 4, 2013 at 8:14 AM, Jonathan Proulx wrote:
> Hi All,
>
> I'm trying to f
On Thu, Sep 5, 2013 at 5:34 AM, pangj wrote:
> I suggest the openstack general list be split into multiple
> special-purpose lists, such as a swift list, nova list, etc. I think it's
> more helpful for the special subjects. Thanks.
>
>
I think that OpenStack projects are too interrelated to always be clear
Hi Clint,
I run an OpenStack cloud for academic research as well (over here
https://tig.csail.mit.edu/wiki/TIG/OpenStack ). Started on Essex just over
a year ago, moved to Folsom just after it came out, and most recently
Grizzly since last month including a move from nova-network to
quantum/neutr
Hi All,
I'm running Grizzly on Ubuntu 12.04 and using quantum ovs.
Periodically it seems the quantum-dhcp-agent stops informing dnsmasq when
new ports are created.
Looking at the instance info from nova or the related port information in
quantum, the MAC address to IP mapping shows up correctly.
On Mon, Sep 9, 2013 at 4:27 PM, Édouard Thuleau wrote:
> Hi Jon,
>
> Effectively, a bug [1] was identified. When the DHCP agent is down because
> it's overloaded, the Neutron agent will not send updates to that agent and
> doesn't raise the problem.
> I submitted a small patch for a warning log but I abandone
Hi Ilkka,
I have the same setup you describe below. You simply need to specify
"--gateway" when running "quantum subnet-create" (see "quantum help
subnet-create"); this doesn't create an L3 router, it just specifies the
gateway DHCP gives out.
-Jon
On Fri, Sep 27, 2013 at 1:07 PM, Ilkka Tengvall
wrote:
> Hi,
>
> how to tel
On Fri, Sep 27, 2013 at 2:06 PM, Jonathan Proulx wrote:
> Hi Ilkka,
>
> I have the same setup you describe below. You simply need to specify
> "--gateway" when running "quantum subnet-create" (see "quantum help
> subnet-create"); this doesn't create an L3 router, it just specifies the
>
> On 10/01/2013 12:24 PM, Ilkka Tengvall wrote:
>>
>> This remains the problem now. If we create a port to a network with fixed
>> ip, it succeeds. but when we attach that to a router, it will stay in DOWN
>> state and will not generate the required route to router namespace. Is there
>> a way to s
On Tue, Oct 01, 2013 at 12:01:24PM +0300, Ilkka Tengvall wrote:
:Thanks, it works, except the metadata service is of course missing
:until there is a quantum router connected.
I see I missed this bit in my last response. Now you mention it, I ran
into much the same problem.
I'll admit I'm probably
Welcome,
Not sure about the docker support question, so I'll leave that to others
On Wed, Oct 16, 2013 at 12:38 AM, Sarah Gerweck wrote:
> Is Havana still on track to be released in two days? Or is the release
> calendar out of date? If Havana will really be out in two days I will
> certainly w
the repositories you seek can be found at
https://wiki.ubuntu.com/ServerTeam/CloudArchive
you may also want to look at the OpenStack ubuntu install guide at
http://docs.openstack.org/havana/install-guide/install/apt/content/
I'm currently running Grizzly on 12.04, but for new install you should
g
I've seen issues where the quota_usages table in the nova database gets
out of sync with the resources actually used.
Try this query and see if the fixed_ips usage matches what is in use:
mysql nova -e 'select * from quota_usages where
project_id="60d776fe573f44a4810cb294b95e09d6" and
resource="fixed_ips"
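A hedged follow-up sketch: recount what is actually allocated and compare it with the usage row. The project id is copied from the query above; the fixed_ips/instances join assumes a Grizzly/Havana-era schema (column names vary by release):

```shell
# Count fixed IPs really held by live instances of this project:
mysql nova -e 'SELECT COUNT(*) AS real_fixed_ips
  FROM fixed_ips f JOIN instances i ON f.instance_uuid = i.uuid
  WHERE i.project_id = "60d776fe573f44a4810cb294b95e09d6"
    AND i.deleted = 0 AND f.deleted = 0;'
```

If the count differs from quota_usages.in_use, an UPDATE on that row (with the API services stopped) is the usual, if inelegant, repair.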
Hi All,
nova-manage db sync is failing for me with:
2013-12-30 10:53:22.795 3323 CRITICAL nova [-] (OperationalError)
(1061, "Duplicate key name
'uniq_task_log0task_
name0host0period_beginning0period_ending'") 'ALTER
TABLE task_log ADD CONSTRAINT
uniq_task_log0task_name0host0period_beginning0pe
s every step I take, so I'm a bit
worried at this point
-Jon
On Mon, Dec 30, 2013 at 11:09 AM, Jonathan Proulx wrote:
> Hi All,
>
> nova-manage db sync is failing for me with:
>
> 2013-12-30 10:53:22.795 3323 CRITICAL nova [-] (OperationalError)
&g
Unfortunately this doesn't seem to be doing it for me.
Working from Ubuntu 12.04 originally installed with Essex and
incrementally upgraded using cloud archive. The Grizzly -> Havana
transition choked, then I discovered your patch and like Blair copied
it in place. After a bit of thrashing found
On Tue, Jan 7, 2014 at 12:22 AM, Ageeleshwar Kandavelu
wrote:
> I am using neutron openvswitch plugin. I successfully created a port using
> neutron port-create, but I do not see the newly created port when I do
> 'ovs-vsctl show'. Is it that the port created is just a logical entity that
> just e
an.pro...@gmail.com [jonathan.pro...@gmail.com] on behalf of
> Jonathan Proulx [j...@jonproulx.com]
> Sent: Tuesday, January 07, 2014 8:33 PM
> To: Ageeleshwar Kandavelu
> Cc: openstack@lists.openstack.org
> Subject: Re: [Openstack] Neutron port-create command
>
> On Tue, Jan 7,
Hi All,
Last week I tried to upgrade my production system and ran into
https://bugs.launchpad.net/nova/+bug/1245502 (after having run the
test upgrade in a clean grizzly which is insufficient). The fix for
this was in head (now backported to stable/havana) and only involved
one file 185_rename_un
really necessary and file bugs for any of it that
was.
-Jon
On Wed, Jan 8, 2014 at 8:39 AM, Jonathan Proulx wrote:
> Hi All,
>
> Last week I tried to upgrade my production system and ran into
> https://bugs.launchpad.net/nova/+bug/1245502 (after having run the
> test upgrade in a c
On Thu, Jan 9, 2014 at 9:17 AM, Mridhul Pax wrote:
> Hello Stackers,
>
> Any one using KVM as hypervisor in the production servers ? We are planning
> to use KVM as hypervisor in our production systems and we are planning to
> use 5 hypervisor nodes (~40 vms per hypervisor).
I have 60 hypervisor
Hi All,
recently upgraded my 1 controller, 60 compute node Ubuntu 12.04 (+cloud
archive) system from Grizzly to Havana. Now even before I let my
users back to the API I'm barely able to do anything due to
authentication timeouts. I am using neutron, which likes to
authenticate *a lot*, I'm not enti
he token
> expiration down to something in the ~3 hour range (if possible) to help keep
> bloat down.
>
> --Morgan
>
> On January 11, 2014 at 10:43:12, Jonathan Proulx (j...@jonproulx.com) wrote:
>
> Hi All,
>
> recently upgraded my 1 controller 60 compute no
wrote:
> Hi Jonathan! I have not yet deployed Havana to production, however I'll
> add some comments and suggestions below that you may want to try in
> order to isolate root cause...
>
> On Sat, 2014-01-11 at 13:34 -0500, Jonathan Proulx wrote:
>> Hi All,
>>
>
On Sat, Jan 11, 2014 at 8:24 PM, Morgan Fainberg wrote:
> Hi Jon,
>
> I have published a patch set that I hope will help to address this issue:
> https://review.openstack.org/#/c/66149/ . If you need this in another
> format, please let me know.
That's a fine format, also I love patches that onl
On Sat, Jan 11, 2014 at 10:57 PM, Morgan Fainberg wrote:
> Sounds good! Just remember that prior to the fix I posted there, for each
> token in the user’s index, it incurred a round-trip to memcached to validate
> the token wasn’t expired. This change makes it so that there are
> significantly l
On Sun, Jan 12, 2014 at 12:31 PM, Jonathan Proulx wrote:
> puzzling side effect?
>
> I just made a small change to neutron.conf (adjusted a default quota)
> and restarted neutron-server, now neutron (but not other services) is
> spewing:
>
> Invalid user token - rejecting
ou
> don't need to maintain it outside of the releases.
>
> Cheers,
> Morgan
>
> Sent from my tablet-like-device
>
>> On Jan 11, 2014, at 11:01 PM, Jonathan Proulx wrote:
>>
>>> On Sat, Jan 11, 2014 at 10:57 PM, Morgan Fainberg
>>> wr
Hi All,
I'm very near to having metadata service working in Havana I think,
but need a little help.
Most of my instances are on a provider network that uses
neutron-dhcp-agent but an external router. I have a single controller
setup using Ubuntu 12.04 and cloud archive.
It looks like the servic
Hi all,
neutron-lbaas-agent is failing with:
neutron.service OperationalError: (OperationalError) (1054, "Unknown
column 'pools.status_description' in 'field list'") 'SELECT
pools.tenant_id AS pools_tenant_id, pools.id AS pools_id, pools.status
AS pools_status, pools.status_description AS pools_s
On Mon, Jan 13, 2014 at 8:05 AM, Darragh O'Reilly
wrote:
> Hi Jon,
>
> the RST is probably because the curl request was to port 8775. Can you try
> again to port 80 and also use -n with tcpdump.
Thanks for the extra eyes on that one; it's been a long weekend and not so
much in the good way. I didn't
looks like a migration issue. I wonder how you performed the
> migration.
> Devstack usually runs migration during the setup and everything works fine
> with the current code.
>
> Could you provide more details about how you ran the migration.
>
> Thanks,
> Eugene.
>
>
>
>
are inline below
Thanks,
-Jon
On Mon, Jan 13, 2014 at 9:06 AM, Darragh O'Reilly
wrote:
>
> On Monday, 13 January 2014, 13:34, Jonathan Proulx wrote:
>>Now I'm getting an internal server error because:
>>
>>connect(13, {sa_family=AF_FILE,
>>path="/
I went ahead and created the column it was complaining about; no
doubt this will come back to haunt me:
mysql> ALTER TABLE pools ADD COLUMN status_description varchar(255)
DEFAULT NULL AFTER status;
but at least neutron-server will run with the lbaas agent enabled
while I try and figure out why
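For the record, the supported way to get that column is to run the neutron migrations rather than hand-ALTERing the table; a hedged sketch, where the config paths assume stock Ubuntu packaging and the OVS plugin:

```shell
# Bring the neutron schema (including the lbaas tables) up to date:
neutron-db-manage \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  upgrade head
```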
How does the nova api map fixed-ips from neutron to running instances?
I've been poking around the nova database but can't seem to guess
where it is.
My issue is that after upgrading to Havana *some* instances no longer
show their network and fixed-ip in the output of 'nova' cli commands
or in the
ate through
all active ports and for each look up whether the device-id is a nova
instance and whether that instance reports correctly, then fix it if
needed, but there must be a *much* simpler way to do this directly in
the database.
-Jon
On Wed, Jan 15, 2014 at 10:08 AM, Jonathan Proulx wrote:
>
Hi Varun,
As Garry said, you can pre-create ports with specific MAC addresses and
then assign them to instances if you are using neutron for networking.
The Horizon dashboard does not provide an interface to either of
these actions but the command line tools do.
the 'port-create' sub command of
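The pre-created-port flow referenced above can be sketched as follows; the network, image, and flavor names and the MAC address are hypothetical:

```shell
# Create a port with a fixed MAC on an existing network:
neutron port-create --mac-address fa:16:3e:00:00:01 my-net
# Note the port id in the output, then hand it to nova at boot time:
nova boot --flavor m1.small --image my-image \
  --nic port-id=<port-id-from-above> my-instance
```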
Hi All,
noticed some (most but not all) of my network namespaces are no longer working:
# ip netns exec qdhcp-0a1d0a27-cffa-4de3-92c5-9d3fd3f2e74d ip addr
seting the network namespace failed: Invalid argument
strace gives only slightly more info that I can't parse and probably
adds no new inform
On Wed, Jan 22, 2014 at 2:32 PM, Remo Mattei wrote:
> I would guess it is the kernel try to boot with the older version and see if
> it works
Older version of what? The system has 160 days of uptime and the problem is
hours old. I'm probably going to need to reboot, which I don't like to do
on my production
On Wed, Jan 22, 2014 at 3:07 PM, gustavo panizzo
wrote:
> On 01/22/2014 04:52 PM, Jonathan Proulx wrote:
>> If I remove the name spaces will restarting the agents that uses them
>> recreate them?
> yes, it should. l3-agent creates the namespace(s)
> if you cant go into it i
change but
who's to say.
Thanks
-Jon
:
:Dave
:
:
:On Wed, Jan 22, 2014 at 2:09 PM, Jonathan Proulx wrote:
:
:> On Wed, Jan 22, 2014 at 3:07 PM, gustavo panizzo
:> wrote:
:> > On 01/22/2014 04:52 PM, Jonathan Proulx wrote:
:> >> If I remove the name spaces will restart
Hi All,
This was working fine until I rebooted to clear an issue with network
namespaces...
glance consistently gives authentication failures if I'm running
keystone in wsgi mode behind apache; all other services (cinder, nova,
neutron, and keystone) respond normally.
changing nothing else, if I
Hi All,
DHCP requests from instances with interfaces on OVS/GRE based tenant
networks are showing up on the tap device on the compute node but
never make it to the physical network device (tcpdump -i ethX proto
gre).
If I manually configure an address all seems well & I can for example
ping from
nt br-tun
however I can't tcpdump on the patch or gre devices
# tcpdump -i patch-tun
tcpdump: patch-tun: No such device exists
is there a way to do this? Right now I can only see what's happening
at the beginning (tap) and end (ethN)
On Wed, Jan 29, 2014 at 10:21 AM,
On Wed, Jan 29, 2014 at 1:49 PM, Joe Topjian wrote:
>
>> however I can't tcpdump on the patch or gre devices
>>
>> # tcpdump -i patch-tun
>> tcpdump: patch-tun: No such device exists
>
>
> I can reproduce this. I suspect because patch-tun and patch-int are OVS
> patch interfaces, they are inte
On Wed, Jan 29, 2014 at 3:39 PM, Robert Collins
wrote:
> On 30 January 2014 08:16, Jonathan Proulx wrote:
>> On Wed, Jan 29, 2014 at 1:49 PM, Joe Topjian wrote:
> Always use ovs-vsctl show on ovs switches - brcompat is super limited.
I usually do, just interesting that what looks the
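One hedged workaround for the patch-port problem above: OVS patch ports aren't kernel devices, but you can mirror a bridge's traffic onto a dummy port that tcpdump can see. The device name "snoop0" is made up; the mirror syntax follows the standard ovs-vsctl pattern:

```shell
# Create a dummy interface and attach it to the bridge:
ip link add snoop0 type dummy
ip link set snoop0 up
ovs-vsctl add-port br-tun snoop0
# Mirror everything crossing br-tun to it:
ovs-vsctl -- set Bridge br-tun mirrors=@m \
  -- --id=@p get Port snoop0 \
  -- --id=@m create Mirror name=snoop select-all=true output-port=@p
tcpdump -ni snoop0
# Clean up afterwards:
#   ovs-vsctl clear Bridge br-tun mirrors && ovs-vsctl del-port br-tun snoop0
```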
Still can't quite sort this out but I am circling in on where the problem is.
To recap bootpc and arp requests from instances using GRE tenant
networks are not making it onto the physical network, I suspect this
is "all broadcast traffic". If IP is configured statically and the
arp cache is set
try these:
>
> net.ipv4.conf.all.arp_announce=1
> net.ipv4.conf.default.arp_announce=1
> net.ipv4.conf.all.arp_notify=1
> net.ipv4.conf.default.arp_notify=1
> net.ipv4.conf.all.rp_filter=0
> net.ipv4.conf.default.rp_filter=0
>
> Just shooting from the hip here, so sorry if I
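Applying the quoted settings at runtime can be done in one go (persist them in /etc/sysctl.d/ only if they actually help; the values are exactly those suggested above):

```shell
# Runtime-only; nothing survives a reboot unless written to /etc/sysctl.d/:
sysctl -w net.ipv4.conf.all.arp_announce=1 net.ipv4.conf.default.arp_announce=1
sysctl -w net.ipv4.conf.all.arp_notify=1 net.ipv4.conf.default.arp_notify=1
sysctl -w net.ipv4.conf.all.rp_filter=0 net.ipv4.conf.default.rp_filter=0
```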
It's most likely you need to look inside the namespaces to see the
traffic on br-int; are you familiar with 'ip netns exec'? I just
added a bit about this to the operators guide:
http://docs.openstack.org/trunk/openstack-ops/content/network_troubleshooting.html#dealing_with_netns
there's a bit more
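A hedged example of the 'ip netns exec' approach: the namespace name below is illustrative (take a real one from 'ip netns list'), and the interface inside it shows up in 'ip addr' run within the namespace:

```shell
# Find the DHCP/router namespaces on this host:
ip netns list
# See what interfaces live inside one:
ip netns exec qdhcp-0a1d0a27-cffa-4de3-92c5-9d3fd3f2e74d ip addr
# Then sniff the namespace side of br-int, e.g. for DHCP traffic
# (interface name is hypothetical):
ip netns exec qdhcp-0a1d0a27-cffa-4de3-92c5-9d3fd3f2e74d \
  tcpdump -ln -i ns-d2799dad-27 port 67 or port 68
```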
There's a very new chapter, just merged Feb 6th, in the Ops Guide on
Upgrades
http://docs.openstack.org/trunk/openstack-ops/content/ch_ops_upgrades.html
Covers testing, upgrade, and rollback if something does go wrong. Be
sure you test before doing a production upgrade, and probably a good
idea
Hi All,
Is it possible to restrict volume types to (or from) specific
projects? I'm using Ubuntu 12.04 + Havana and currently not defining
any volume types.
I'm not looking to do anything particularly creative here, I'd just
like to attach a test volume backend to my production systems, but I want
t
On Wed, Feb 12, 2014 at 1:45 PM, Remo Mattei wrote:
> Hello does anyone have a custom filter steps on how to do it for a custom
> filters like
>
> AggregateRamFilter
Make sure it's listed in "scheduler_default_filters" in nova.conf on
the host running nova-scheduler (by default I think all availa
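A hedged sketch of the relevant nova.conf stanza on the scheduler host; the filter list below is illustrative, so append the custom filter to whatever defaults your release ships rather than copying this verbatim:

```shell
# /etc/nova/nova.conf on the nova-scheduler host:
#   [DEFAULT]
#   scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateRamFilter
# then restart the scheduler (Ubuntu-style service name assumed):
service nova-scheduler restart
```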
Hi All,
Using Havana Neutron on Ubuntu 12.04 with OVS provider VLAN networks,
instances are failing to get a DHCP response on reboot. This seems to be
a new issue; I believe this had been working before.
New instances get DHCP fine, but on hard or soft reboot instances can
go >24hr without getting
Having previously restarted neutron-dhcp-agent on the controller and
watching my test instance continue to fail to get DHCP response, I can
now no longer induce this failure mode. No difference in logs before
or after, no particular difference in load either. ovs-ofctl
dump-flows on br-eth1 was a
Hi All,
Running 2013.2.2 on Ubuntu 12.04. I've recently noticed some (10%
ish) of my compute nodes exhibiting unusually large "system" CPU
utilization over the past few days.
Looking a little more closely I see that there are a number of
'qemu-nbd -c' processes trying to connect deleted instance dr
This is a bit of a religious discussion... so without judgment of
what anyone else is doing:
We just use puppet, though on 12.04 and Havana; not sure of the state of
the community modules for Icehouse. I do think the Havana to Icehouse
difference is likely much larger than the 12.04 to 14.04 difference
ack
>
> Does anyone know who is working on the Icehouse Puppet modules? I'd love to
> help with testing, at least.
>
> On Apr 11, 2014, at 10:04 AM, Jonathan Proulx wrote:
>
>> This is a bit of a religious discussion... so without judgment of
>> what anyone els
The configuration reference at
http://docs.openstack.org/icehouse/config-reference
has explanations of the myriad options you could specify, but doesn't quite
make an example config out of them. I could provide a sanitized version of
my KVM based configs if you'd like.
-Jon
On Thu, Jun 26, 20
Hi all,
I'm testing an upgrade from Havana to Icehouse. This is changing a number
of networking-related things, so I'm not sure which is my problem. My goal
is to maintain my current production-state jumbo frames network.
In Havana production I'm using openvswitch plugin, but not
LibvirtHybridOVSBridgeDr
To answer my own question...
the qbr devices are created by nova-network not neutron, so their
configuration needs to go in nova.conf:
[default]/network_device_mtu=9000
so yay, two places to enter the same value for the same purpose
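Spelled out, the two places look like this (both in their [DEFAULT] sections; 9000 for jumbo frames as above, stock Ubuntu paths assumed):

```shell
#   /etc/nova/nova.conf:        network_device_mtu = 9000
#   /etc/neutron/neutron.conf:  network_device_mtu = 9000
# Quick check that both files agree:
grep -H network_device_mtu /etc/nova/nova.conf /etc/neutron/neutron.conf
```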
On Tue, Jul 22, 2014 at 1:24 PM, Jonathan Proulx wrote:
>
Hi Marcus,
The silence you hear is because Ceph isn't an OpenStack project,
though a lot of us (myself included) do use it heavily with OpenStack.
Your questions are very Ceph specific rather than about how to get it
to work with OpenStack and would be better answered on the ceph-users
mailing l
Hi All,
I'm installing a new storage backend to my cinder environment and
would *really* like to move my existing volumes to it.
I did this once before, moving from LVM on a storage node to a SAN based
solution with the cinder volume service running on the controller
node. For that case:
cinder m
The wiki article is about when the upstream release happened, which does
not include any distro packages.
The cloud-archive is not an OpenStack project; it's how Ubuntu packages
for downstream, so it is up to Ubuntu when new releases end up there, not
OpenStack.
Also the juno cloud-archive is only available fo
main
# deb-src http://ubuntu-cloud.archive.canonical.com/ubuntu
trusty-updates/juno main
If you add by hand you'll also need to add the repo keys:
sudo apt-get install ubuntu-cloud-keyring
-Jon
> On Wed, Nov 12, 2014 at 8:29 PM, Jonathan Proulx wrote:
>>
>> The wiki articl
On Wed, Dec 10, 2014 at 12:19 AM, Venu Murthy
wrote:
> This command should help nova add-fixed-ip or you
> can specify the static ip that has been created in the nova boot command
> this link should help
>
> https://ask.openstack.org/en/question/30690/add-multiple-specific-ips-to-instance/
>
>
Hi All,
I can see the obvious distinction between cinder-snapshot and
cinder-backup being that snapshots would live on the same storage back
end as the active volume (using whatever snapshotting that provides)
where the backup would be to different storage.
We're using Ceph for volume and object
Hi All,
After upgrading from icehouse to juno I get timeouts when trying to
schedule >20 instances at once (20 deterministically works, 30 always
fails; didn't bother to go finer than that)
Note this is a single call with --max-count on the CLI or setting
number of instances in Horizon. If I do p
>
>> It is true that in icehouse instances might get launched individually
>> before scheduler realizes not enough hosts are available, but currently this
>> has been changed and number of available hosts are checked before launching
>> any.
>>
>> Thanks,
Hi All,
Using Juno...
I'm an operator trying to replace the crufty Essex era bash scripts we
use for account registration with something saner. Using the keystone v3
python API seems like the right thing.
Keystone is serving v2 and v3. My 'identity' endpoint points to v2 but
Horizon uses v3 and I can a
| service_type | identityv3               |
| url          | https://SERVER-1:5001/v3 |
+--------------+--------------------------+
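A hedged sketch of exercising the v3 endpoint shown above directly; the user, password, and project names are hypothetical, and the "default" domain is assumed:

```shell
# Request a v3 token and pull out the token header:
curl -sk https://SERVER-1:5001/v3/auth/tokens \
  -H 'Content-Type: application/json' \
  -d '{"auth": {
        "identity": {"methods": ["password"],
          "password": {"user": {"name": "admin",
            "domain": {"id": "default"}, "password": "secret"}}},
        "scope": {"project": {"name": "admin",
          "domain": {"id": "default"}}}}' \
  -i | grep X-Subject-Token
```

If this works but the python client doesn't, the problem is in the client-side endpoint/version plumbing rather than keystone itself.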
On Tue, Mar 17, 2015 at 10:20 AM, Jonathan Proulx wrote:
> Hi All,
>
> U
I see lots of (well, enough) info in the docs on how to update quota
classes, but how do you assign a quota class to a project?
I'm pretty sure I actually did that once before but today I can't
remember and googling the docs site and grepping the source code isn't
helping me...
Thanks,
-Jon
Hi All,
As part of an expansion to a second Region (and 2nd datacenter) I'm
considering building out the new region on Kilo and upgrading the
(shared) Keystone to Kilo while leaving the rest of the current
region at Juno.
Obviously this is not a tested or supported configuration but in my
small d
enstack list in practice.
Thanks,
-Jon
> -dave
>
> On Thu, Jun 11, 2015 at 1:58 PM, Jonathan Proulx wrote:
>>
>> Hi All,
>>
>> As part of an expansion to a second Region (and 2nd datacenter) I'm
>> considering building out the new region on Kilo a
On Fri, Jun 26, 2015 at 1:01 PM, Kevin Benton wrote:
> Yes, networking functions don't conflict with compute functions.
This is true, but it is different from the legacy nova-network multi-host
setup, where network functions for VMs on a physical node were handled
by the network service on that same node
On Fri, Jul 17, 2015 at 1:14 PM, Erdősi Péter wrote:
> Hi!
>
> We are starting to build our production cloud this year, but we need to
> make a big decision:
> Which OS should we use for the nodes on the system?
>
> The truth is, we cannot decide between Ubuntu and CentOS. The Ubuntu is
> widely u
Hi All,
Attempting to upgrade from juno -> kilo I'm getting DB migration
errors from cinder:
2015-08-03 14:31:19.612 16831 ERROR 032_add_volume_type_projects [-]
Table |Table('volume_type_projects',
MetaData(bind=Engine(mysql://cinder:***@10.0.128.15/cinder?charset=utf8)),
Column('id', Integer(),
into this at Time Warner Cable using puppet.)
>
> The work around is to turn the collation order back.
>
> IRC help: look for "clayton" or "med_" in freenode.
>
> On Mon, Aug 3, 2015 at 12:43 PM, Jonathan Proulx wrote:
>>
>> Hi All,
>>
>
Hi All,
I'm hitting a DB migration error while attempting a production upgrade
(despite having successfully run the same upgrade on an only slightly
older copy of the database last week)
in:
INFO [alembic.migration] Running upgrade 38495dc99731 -> 4dbe243cd84d, nsxv
Failing:
sqlalchemy.exc.Op
d it’s because of a
> collation mismatch between the tables.
>
>
>
>
>
> On 8/20/15, 9:25 AM, "Jonathan Proulx" wrote:
>
>>H i All,
>>
>>I'm hitting a DB migration error while attempting a production upgrade
>>(despite having successfully run th
Hi,
I want to create a 'project_admin' role with the ability to add and
remove existing users from the project in which one has this role.
But it's not working as I thought. Here's what I tried in policy.json
(note #comments are not in the json file):
# set up the rules
"project_admin": "pro
up`
> it should work.
>
> Thanks,
>
> Steve Martinelli
> OpenStack Keystone Core
>
> Morgan Fainberg --- 2015/08/24 10:49:22 PM:
> The policy file is not really used for v2 keystone. There
Hi All,
Before I open a bug...
after Kilo upgrade I get 'Error: Flavor's disk is too small for
requested image' when trying to boot from cinder volumes that are
larger than the image-type root volume.
This is a severe issue for me. Has anyone else seen/reported this (I
didn't see it looking at
'll
post to operators list about it.
-Jon
On Wed, Aug 26, 2015 at 1:26 PM, John Griffith
wrote:
>
> On Wed, Aug 26, 2015 at 10:59 AM, Jonathan Proulx wrote:
>>
>> Error: Flavor's disk is too small for
>> requested image
>
>
> Without looking closely, wond
cials, after verification of the electorate status of the
candidate.
-Jon
Jonathan Proulx
Sr. Technical Architect
MIT CSAIL
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstac
on same committee.
Don't worry you can't get rid of me that easily :)
-Jon
:Thanks again for all that you have done!
:
:Sincerely,
:Shamail
:
:> On Aug 3, 2017, at 4:11 PM, Jonathan Proulx wrote:
:>
:> Hello All,
:>
:> It has been an honor and a privilege to serve o
On Wed, Mar 21, 2018 at 08:32:38PM -0400, Paul Belanger wrote:
:6. Spandau loses to Solar by 195–88, loses to Springer by 125–118
Given this is at #6 and formal vetting is yet to come it's probably
not much of an issue, but "Spandau's" first association for many will
be Nazi war criminals via Sp