Re: [Openstack] [Pike] [Nova] Shutting down VMs with Cinder Volumes

2018-03-15 Thread fv

Thank you very much!

What I did in the end was simply to shelve the instances. After the work 
was done on the compute nodes, the VMs launched again flawlessly. I was 
surprised how easy it was!
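
For anyone finding this later, the shelve/unshelve cycle looks like this 
(a minimal sketch with the openstack CLI; server names are placeholders):

    # preserves the instance while freeing the compute node
    openstack server shelve <server>
    # ... rebuild the compute nodes ...
    # boots the instance again (possibly on a different node)
    openstack server unshelve <server>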


FV


From:
Subject: Re: [Openstack] [Pike] [Nova] Shutting down VMs with Cinder 
Volumes

Date: 8 March 2018 at 23:53:31 GMT-8
To: openstack@lists.openstack.org


Hi,

My question is this: Can I shut down the VMs, rebuild the compute nodes, 
and then relaunch the VMs?


Why shut them down? You could just migrate them (cold or live) to other 
compute nodes and maintain your compute nodes one by one; this would 
be possible without downtime.
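
For example, a sketch with the nova CLI (names are placeholders; live 
migration assumes shared storage or block migration):

    # cold migration: the instance is stopped and moved;
    # confirm afterwards with 'nova resize-confirm'
    nova migrate <server>
    # live migration to a specific host, without downtime
    nova live-migration <server> <target-host>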


Depending on your storage backend (if the disks and volumes do not 
reside on the compute nodes), rebooting instances on upgraded compute 
nodes should be no problem at all. The configuration of the instances 
is in the database, and if the compute nodes don't have existing XML 
files, they will simply be recreated.
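
If an instance's XML definition is missing after such a rebuild, one 
hedged way to have nova-compute regenerate it from the database is a 
hard reboot through the API:

    openstack server reboot --hard <server>
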
Before our live migration worked, I had to deal with some compute node 
issues and changed the hosting compute node of some instances directly 
in the database, and the instances came back up. So I don't see an 
issue there, provided of course that the compute configuration is 
correct and the storage backend is accessible by the compute nodes.
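
For the record, the direct-database change described above looks roughly 
like this (a last-resort sketch, not an official procedure; the table 
and column names are from the nova database, and the UUID is a 
placeholder):

    -- point the instance at its new compute node
    UPDATE instances SET host = 'compute2', node = 'compute2'
    WHERE uuid = '<instance-uuid>';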


Hope this helps!



Quoting Father Vlasie:

Hello everyone,

I have a couple of compute nodes that need HD upgrades. They are 
running VMs with Cinder volumes.


My question is this: Can I shut down the VMs, rebuild the compute 
nodes, and then relaunch the VMs?


I am thinking “yes” because the volumes are not ephemeral, but I am not 
sure.


Is there any VM-specific data that I need to save from the compute 
nodes?


Thank you,

FV





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

  Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
 USt-IdNr. DE 814 013 983









Re: [Openstack] [RDO PackStack] Running PackStack multiple times

2018-01-27 Thread fv

Thank you, that is exactly what I needed!

(I know it is a bit naughty but I am planning to use PackStack
for a production deployment. :)

FV

On 2018-01-27 02:14, Marcin Dulak wrote:

Yes - you can run packstack repeatedly until the installation succeeds; 
Puppet is supposed to take care of the desired state.
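
A sketch of that workflow (the answer-file name is arbitrary): generate 
an answer file on the first run and reuse it, so every rerun converges 
on the same desired state:

    packstack --gen-answer-file=packstack-answers.txt
    # edit the answers as needed, then run (and re-run) with the same file
    packstack --answer-file=packstack-answers.txt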

If you experience the contrary, open a bug at
https://bugzilla.redhat.com/enter_bug.cgi?product=RDO

For the purpose of learning OpenStack it's better to use a VM for
packstack - give https://github.com/locationlabs/vagrant-packstack
a try.
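
Assuming the repository's README doesn't prescribe otherwise, the usual 
Vagrant workflow should apply:

    git clone https://github.com/locationlabs/vagrant-packstack
    cd vagrant-packstack
    vagrant up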

Marcin

On Fri, Jan 26, 2018 at 7:06 PM, Remo Mattei  wrote:


What cluster? As far as I know there is no HA mode with PackStack.

Remo


On Jan 26, 2018, at 6:23 PM, fv@spots.school wrote:

Hello!

I am trying to deploy an OpenStack cluster using PackStack but I
am encountering some errors. I am slowly working my way through
them but I have a question:

Is it alright to run the packstack script multiple times?
And if not, is there a way to undo what packstack has done in
order to try again?

Thank you!

FV





Re: [Openstack] [RDO PackStack] Running PackStack multiple times

2018-01-26 Thread fv

Yes, you are right! Wrong terminology on my part, sorry.

FV

On 2018-01-26 10:06, Remo Mattei wrote:

What cluster? As far as I know there is no HA mode with PackStack.

Remo


On Jan 26, 2018, at 6:23 PM, fv@spots.school wrote:

Hello!

I am trying to deploy an OpenStack cluster using PackStack but I am
encountering some errors. I am slowly working my way through them
but I have a question:

Is it alright to run the packstack script multiple times?
And if not, is there a way to undo what packstack has done in order
to try again?

Thank you!

FV





[Openstack] [RDO PackStack] Running PackStack multiple times

2018-01-26 Thread fv

Hello!

I am trying to deploy an OpenStack cluster using PackStack but I am 
encountering some errors. I am slowly working my way through them but I 
have a question:


Is it alright to run the packstack script multiple times?
And if not, is there a way to undo what packstack has done in order to 
try again?


Thank you!

FV



[Openstack] [Openstack-Ansible] Deploy errors on small setup

2017-10-22 Thread fv
9.236.10
 tunnel_bridge: "br-vxlan"
 management_bridge: "br-mgmt"
 provider_networks:
   - network:
       container_bridge: "br-mgmt"
       container_type: "veth"
       container_interface: "eth1"
       ip_from_q: "container"
       type: "raw"
       group_binds:
         - all_containers
         - hosts
       is_container_address: true
       is_ssh_address: true
   - network:
       container_bridge: "br-vxlan"
       container_type: "veth"
       container_interface: "eth10"
       ip_from_q: "tunnel"
       type: "vxlan"
       range: "1:1000"
       net_name: "vxlan"
       group_binds:
         - neutron_linuxbridge_agent
   - network:
       container_bridge: "br-vlan"
       container_type: "veth"
       container_interface: "eth12"
       host_bind_override: "br-vlan"
       type: "flat"
       net_name: "flat"
       group_binds:
         - neutron_linuxbridge_agent
   - network:
       container_bridge: "br-vlan"
       container_type: "veth"
       container_interface: "eth11"
       type: "vlan"
       range: "1:1"
       net_name: "vlan"
       group_binds:
         - neutron_linuxbridge_agent
   - network:
       container_bridge: "br-storage"
       container_type: "veth"
       container_interface: "eth2"
       ip_from_q: "storage"
       type: "raw"
       group_binds:
         - glance_api
         - cinder_api
         - cinder_volume
         - nova_compute

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
 infra1:
   ip: 172.29.236.11

# repository (apt cache, python packages, etc)
repo-infra_hosts:
 infra1:
   ip: 172.29.236.11

# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
 infra1:
   ip: 172.29.236.11

# rsyslog server
log_hosts:
 log1:
   ip: 172.29.236.14

###
### OpenStack
###

# keystone
identity_hosts:
 infra1:
   ip: 172.29.236.11

# cinder api services
storage-infra_hosts:
 infra1:
   ip: 172.29.236.11

# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
 infra1:
   ip: 172.29.236.11
   container_vars:
     limit_container_types: glance
     glance_nfs_client:
       - server: "172.29.244.15"
         remote_path: "/images"
         local_path: "/var/lib/glance/images"
         type: "nfs"
         options: "_netdev,auto"

# nova api, conductor, etc services
compute-infra_hosts:
 infra1:
   ip: 172.29.236.11

# heat
orchestration_hosts:
 infra1:
   ip: 172.29.236.11

# horizon
dashboard_hosts:
 infra1:
   ip: 172.29.236.11

# neutron server, agents (L3, etc)
network_hosts:
 infra1:
   ip: 172.29.236.11

# ceilometer (telemetry data collection)
metering-infra_hosts:
 infra1:
   ip: 172.29.236.11

# aodh (telemetry alarm service)
metering-alarm_hosts:
 infra1:
   ip: 172.29.236.11

# gnocchi (telemetry metrics storage)
metrics_hosts:
 infra1:
   ip: 172.29.236.11

# nova hypervisors
compute_hosts:
 compute1:
   ip: 172.29.236.12

# ceilometer compute agent (telemetry data collection)
metering-compute_hosts:
 compute1:
   ip: 172.29.236.12

---
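
For anyone replaying a config like the one above: the openstack-ansible 
wrapper passes extra arguments through to ansible-playbook, so a 
syntax-only pass (assuming the usual /opt/openstack-ansible checkout) 
is a cheap first check for YAML mistakes:

    cd /opt/openstack-ansible/playbooks
    openstack-ansible setup-hosts.yml --syntax-check
    openstack-ansible setup-infrastructure.yml --syntax-check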

Thanks for any ideas!

FV
