On Thu, Sep 08, 2016 at 03:52:42PM +0000, Kris G. Lindgren wrote:
> I completely agree about the general rule of thumb. I am only looking at the
> team that specifically supports openstack. For us, frontend support for
> public clouds is handled by another team/org altogether.
in my previous
On Fri, Aug 05, 2016 at 12:09:50PM +0000, kostiantyn.volenbovs...@swisscom.com
wrote:
> 3) The question of cfq vs. deadline vs. noop scheduler (apparently both
> in guest and host), where the decision should be based on
> workloads/recommendations of the OS vendor (which again might be
> release-
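For reference, the runtime switch is just sysfs; a minimal sketch (sda is an
example device, and the right choice really does depend on the workload):

  # show the active scheduler (the one in brackets)
  cat /sys/block/sda/queue/scheduler
  # switch to deadline at runtime
  echo deadline > /sys/block/sda/queue/scheduler
  # persistent alternative: boot with elevator=deadline on the kernel command line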
On Tue, Jun 14, 2016 at 02:36:16PM +0800, Tom Fifield wrote:
>
> Hi all,
>
> We're up for a meeting:
>
> Tuesday, 14 June at 1400 UTC [1]
>
> [1] To see this in your local time - check:
> http://www.timeanddate.com/worldclock/fixedtime.html?msg=Ops+Meetups+Team&iso=20160614T22
corrected link
On Thu, Mar 03, 2016 at 03:52:49PM -0500, Jonathan Proulx wrote:
>
> I have a user who wants to specify their libvirt CPU type to restrict
> performance because they're modeling embedded systems.
>
> I seem to vaguely recall there is/was a way to specify this either in
> the instance type or maybe
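The host-wide knob, at least, lives in nova.conf; a minimal sketch, assuming a
custom model is acceptable for the embedded-modeling case (qemu32 is purely an
example value):

  [libvirt]
  cpu_mode = custom
  # any model libvirt knows about; qemu32 is only an example
  cpu_model = qemu32

Whether there is a supported per-instance-type override I'm not certain.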
+1
On Thu, Oct 29, 2015 at 07:39:33 +0000, Kris G. Lindgren wrote:
> Hello all,
>
> I am not sure if you guys have looked at the schedule for Friday… but it's all
> working groups. I was talking with a few other operators and the idea came
> up around doing an informal ops meetup tomorrow. So
I moved from Tuesday to Wednesday, but I feel like Wednesday is already
packed while Tuesday is not.
On Tuesday I conflict with non-production environments. Besides that I'm
OK.
On Tue, Oct 20, 2015 at 07:48:28PM +, Mike Dorman wrote:
> I’ve gone ahead and (somewhat arbitrarily) scheduled out
On Fri, Sep 18, 2015 at 08:43:16 +0100, Matt Jarvis wrote:
> So judging from the responses so far, I'd say there is an appetite for
> doing this. There are also a lot of folks who don't seem to be on this list
> including most of the other European commercial providers so I've started
> reaching out
On 2015-07-21 22:45, Michael Still wrote:
> We therefore propose the following:
>
> - all operators, when they hit Liberty, will need to add a new
> connection string to their nova.conf which configures this new mysql
> database; there will be a release note to remind you to do this.
> - we will
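Assuming this is the new nova_api database that landed in Liberty, the new
entry would look something like this (host, user and password are placeholders):

  [api_database]
  connection = mysql+pymysql://nova:NOVA_DBPASS@dbhost/nova_api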
On 2015-07-06 15:43, Álvaro López García wrote:
On 02 Jul 2015 (19:26), gustavo panizzo (gfa) wrote:
> Hello
Hi,
> has anybody moved from kvm to xen?
> i see the support for xen on nova's hypervisor support matrix got better
> on latest releases.
We're using Xen from the begin
Thanks!
>
> On Fri, May 29, 2015 at 8:23 AM, gustavo panizzo (gfa) wrote:
>
>
>
> On 2015-05-29 05:16, Daniel Comnea wrote:
>
> Hi folks,
>
> Is anyone using SaltStack to deploy Openstack ? I haven't seen muc
Hello
Has anybody moved from KVM to Xen?
I see the support for Xen on nova's hypervisor support matrix got better
in the latest releases.
We found it hard to isolate noisy VMs on KVM, and the network problem (I
sent it in another email) is killing us.
Besides, Xen being used by Rackspace and aw
A screenshot of collectd from the affected hypervisor:
http://zumbi.com.ar/tmp/irq-tlb.png
On 2015-07-02 10:40, gustavo panizzo (gfa) wrote:
Hello
we are having a problem where our compute nodes, and the VMs running
on them, suddenly and for some seconds lose network connectivity.
the root
Hello
we are having a problem where our compute nodes, and the VMs running on
them, suddenly and for some seconds lose network connectivity.
the root cause appears to be an increase of irq-tlb from low values
(less than 20) to more than 100k; the spike only lasts for a few seconds,
then everythi
On 2015-06-24 13:44, Kris G. Lindgren wrote:
> One more reminder that Mike Dorman and I will be talking about this with
> the devs at the Neutron mid-cycle. If you have a use case for Network
> Segmentation that is not covered and/or you have a different Ideal
> Situation, please update the etherpad
On 2015-05-29 05:16, Daniel Comnea wrote:
Hi folks,
Is anyone using SaltStack to deploy Openstack? I haven't seen much
discussion around this tech, hence my question and maybe a point of
inspiration.
We do.
There is a repo on GitHub by CSScorp, and other projects (I don't
remember right now bu
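For the curious, a Salt state for a single service is only a few lines; a
minimal sketch (package and service names are illustrative, not taken from the
CSScorp repo):

  nova-compute:
    pkg.installed: []
    service.running:
      - require:
        - pkg: nova-compute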
On 2015-05-08 00:39, George Shuklin wrote:
> I wanted to put tenant networks and external networks on the same
> network, but then I realised that there is no way to tell neutron to
> avoid specific vlan_ids once you set up tenant_network_types=vlan and
> add vlan_id to the list of available fo
On 2015-05-07 23:17, gustavo panizzo (gfa) wrote:
>
> neutron net-create vlanN --provider:network_type vlan
> --provider:physical_network blabla --provider:segmentation_id N
>
> ...
>
> neutron net-create vlanN+nn --provider:network_type vlan
> --provider:physical_network blabla --provider:segmentation_id N+nn
On 2015-05-07 22:32, George Shuklin wrote:
> Hello everyone.
>
> Got a problem: we want to use same physical interface for external
> networks and virtual (tenant) networks. All inside vlans with different
> ranges.
>
> My expected config was:
>
> [ml2]
> type_drivers = vlan
> tenant_network_types = vlan
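One workaround sketch (untested; syntax per the ml2 vlan type driver) is to
split network_vlan_ranges so the VLANs you want for external networks are
never allocated to tenants:

  [ml2_type_vlan]
  # tenants allocate from 100-199 and 300-999;
  # 200-299 stay free for provider/external networks
  network_vlan_ranges = physnet1:100:199,physnet1:300:999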
On 2015-04-27 22:59, Mike Spreitzer wrote:
> Uwe Sauter wrote on 04/27/2015 10:54:15 AM:
>>
>> What I suggested later on is that you probably don't need any second
>> level bridge at all. Just create a second/third external
>> network with appropriate CIDR. As long as those networks are
>> exter
hello,
last night our ops live migrated (nova live-migration --block-migrate
$vm) a group of VMs to do hw maintenance. some of the VMs ended up in a
different AZ, making them unusable (we have different upstream network
connectivity in each AZ).
I haven't read all the logs yet, but I remember wh
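One way to keep a migration inside the AZ is to name the target host
explicitly; the host name here is a placeholder:

  nova live-migration --block-migrate $vm compute-az1-03.example.com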
On 03/29/2015 11:19 AM, Joe Topjian wrote:
Hello,
Without specifying a rescue image, Nova will use the image that the
instance is based on when performing a rescue.
I've noticed that this is problematic for "cloud-friendly" images such
as the official Ubuntu images and the newer CentOS 7 imag
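If your novaclient is recent enough, rescue can be pointed at a different
image explicitly; a sketch with placeholder arguments:

  nova rescue --image <rescue-image-uuid> <server>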
On Fri, 2015-03-27 at 08:00 +, Van Leeuwen, Robert wrote:
> If you have a blank slate and no investment in any technology I would
> recommend looking at Saltstack.
I did that at my previous job. No regrets.
At my current job, non-OpenStack infra runs under Puppet; OpenStack runs
on SaltStack. n
On 2015-03-21 02:57, Assaf Muller wrote:
Hello everyone,
The use_namespaces option in the L3 and DHCP Neutron agents controls if you
can create multiple routers and DHCP networks managed by a single L3/DHCP agent,
or if the agent manages only a single resource.
Are the setups out there *not*
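For context, the option sits in the agents' config files; a minimal sketch of
the default:

  # l3_agent.ini / dhcp_agent.ini
  [DEFAULT]
  # True (the default): one agent manages many routers/networks in namespaces
  # False: the agent manages a single resource, without namespaces
  use_namespaces = True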
On 03/06/2015 04:35 PM, Toshikazu Ichikawa wrote:
Therefore, I believe adding a config option such as "start_new_agents"
(default: "true") to neutron's configuration provides a consistent
experience for operators maintaining nodes. The "true" value of
"start_new_agents" makes the agent status of
On 01/28/2015 01:13 AM, Fischer, Matt wrote:
> Our keystone database is clustered across regions, so we have this job
> running on node1 in each site on alternating hours. I don’t think you’d
> want a bunch of cron jobs firing off all at once to clean up tokens on
> multiple clustered nodes. That’
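Presumably the job in question is the usual token cleanup; a cron sketch
(schedule and paths are examples, staggered per node/site as described above):

  # /etc/cron.d/keystone-token-flush
  0 */2 * * * keystone keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1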
On 12/18/2014 09:57 AM, Jeremy Stanley wrote:
4. Set up a service that periodically regenerates sample
configuration and tracks it over time. This attempts to address the
stated desire to be able to see how sample configurations change,
but note that this is a somewhat artificial presentation s
Forgot to send this email before.
On 01/09/2015 12:50 AM, Kris G. Lindgren wrote:
>>> neutron net-create --shared should do the trick
>>
>> I guess the problem is that I was creating *external* _and_ *shared*
>> network, but if I don't want to use floating IPs from that network I
>> probably don't
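For the record, the combination itself is easy to create; the network name is
a placeholder, and on these releases the attribute syntax was the long form:

  neutron net-create ext-net --shared --router:external=True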
On 01/08/2015 07:01 PM, Antonio Messina wrote:
On Thu, Jan 8, 2015 at 11:53 AM, gustavo panizzo (gfa) wrote:
> i may be wrong as i haven't tested that on juno, but in icehouse and havana
> i've set up external/provider networks, one for each tenant
Ah, ok, this is the point. Wh
On 01/08/2015 06:36 PM, Antonio Messina wrote:
Hi all, I'm also interested in this setup.
On Fri, Dec 26, 2014 at 12:31 AM, George Shuklin wrote:
Report on progress so far:
I was able to fix policies (nova/neutron) to allow tenants to plug into 'own'
external networks, and found and reported a few b
On 10/06/2014 04:09 AM, Mike Kolesnik wrote:
> Now, I know the 1st solution seems very appealing, but thinking about it
> further reveals very serious limitations:
> * No HA for DHCP agents is possible (more prone to certain race conditions).
eventually they will be just bugs, and bugs can be fixed
>
Icehouse will be supported for 18 months, IIRC.
I don't have a link here; it was mentioned in Thierry's presentation (mid-cycle
state of the project) a few months ago.
On September 30, 2014 7:39:08 AM GMT+08:00, George Shuklin wrote:
>
>On 09/30/2014 01:55 AM, Jeremy Stanley wrote:
>> On 2014-09-29 21
It happened to me when I upgraded from Grizzly to Havana. I don't remember
having fixed it back in the day; I just recreated them.
On 09/25/2014 04:36 PM, Oliver Böttcher wrote:
Hi all,
we've recently upgraded our setup from grizzly to icehouse. In grizzly,
we created plenty of security-groups w