Re: [Openstack-operators] [Openstack] diskimage-builder: prepare ubuntu 17.x images

2018-02-09 Thread Volodymyr Litovka

Hi Tony,

this patch works for me:

--- diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball.orig 2018-02-09 12:20:02.117793234 +
+++ diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball 2018-02-09 13:25:48.654868263 +

@@ -14,7 +14,9 @@
 
 DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-http://cloud-images.ubuntu.com}
 DIB_RELEASE=${DIB_RELEASE:-trusty}
-BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz}
+SUFFIX="-root"
+[[ $DIB_RELEASE =~ (artful|bionic) ]] && SUFFIX=""
+BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-${ARCH}${SUFFIX}.tar.gz}
 
 SHA256SUMS=${SHA256SUMS:-https://${DIB_CLOUD_IMAGES##http?(s)://}/$DIB_RELEASE/current/SHA256SUMS}
 CACHED_FILE=$DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
 CACHED_FILE_LOCK=$DIB_LOCKFILES/$BASE_IMAGE_FILE.lock
@@ -45,9 +47,25 @@
         fi
         popd
     fi
-    # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between
-    # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host)
-    sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
+    if [ -n "$SUFFIX" ]; then
+        # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between
+        # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host)
+        sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
+    else
+        # Unpack image to IDIR, mount it on MDIR, copy it to TARGET_ROOT
+        IDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`"
+        MDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`"
+        sudo mkdir $IDIR $MDIR
+        sudo tar -C $IDIR --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
+        sudo mount -o loop -t auto $IDIR/$DIB_RELEASE-server-cloudimg-${ARCH}.img $MDIR
+        pushd $PWD 2>/dev/null
+        cd $MDIR
+        sudo tar c . | sudo tar x -C $TARGET_ROOT -k --numeric-owner 2>/dev/null
+        popd
+        # Clean up
+        sudo umount $MDIR
+        sudo rm -rf $IDIR $MDIR
+    fi
 }
 
 (
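The release-dependent filename selection the patch introduces can be exercised on its own. A minimal sketch follows; the base_image_file helper name is mine and is not part of diskimage-builder, but the suffix test mirrors the one added to 10-cache-ubuntu-tarball:

```shell
#!/usr/bin/env bash
# Sketch of the patch's naming logic: trusty/xenial publish
# $RELEASE-server-cloudimg-$ARCH-root.tar.gz, while artful/bionic
# publish only $RELEASE-server-cloudimg-$ARCH.tar.gz (no -root).
base_image_file() {
    local release=$1 arch=${2:-amd64} suffix="-root"
    # Same regex test the patch adds before composing BASE_IMAGE_FILE
    [[ $release =~ (artful|bionic) ]] && suffix=""
    echo "${release}-server-cloudimg-${arch}${suffix}.tar.gz"
}

base_image_file trusty   # -> trusty-server-cloudimg-amd64-root.tar.gz
base_image_file artful   # -> artful-server-cloudimg-amd64.tar.gz
```

Running it shows how trusty keeps the -root suffix while artful and bionic drop it.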


On 2/9/18 1:03 PM, Volodymyr Litovka wrote:

Hi Tony,

On 2/9/18 6:01 AM, Tony Breeds wrote:

On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote:

Hi colleagues,

does anybody here know how to prepare an Ubuntu Artful (17.10) image using
diskimage-builder?

diskimage-builder uses the following naming convention for downloads:
$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz

and while "-root" archives exist for trusty/amd64 and xenial/amd64, for
artful (and bionic) they are absent from cloud-images.ubuntu.com. Only
various kinds of disk images are published there, not a root tree like
the -root archives contain.

I would appreciate any ideas on how to customize a 17.10-based image
using diskimage-builder, or in a diskimage-builder-like fashion.

You might like to investigate the ubuntu-minimal DIB element which will
build your ubuntu image from apt rather than starting with the pre-built
image.

good idea, but with

export DIST="ubuntu-minimal"
export DIB_RELEASE=artful

diskimage-builder fails with the following debug:

2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 | + source /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 |  dirname /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.428 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..'
2018-02-09 10:33:22.428 | +++ dib-init-system
2018-02-09 10:33:22.429 | + set -eu
2018-02-09 10:33:22.429 | + set -o pipefail
2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']'
2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]]
2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]]
2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]]
2018-02-09 10:33:22.429 | + echo 'Unknown init system'
2018-02-09 10:36:54.852 | + exit 1
2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system

while earlier it found systemd:

2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 | + source /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 |  dirname /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.224 | +++ PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/..
2018-02-09 10:33:22.224 | +++ dib-init-system

Re: [Openstack-operators] [Openstack] diskimage-builder: prepare ubuntu 17.x images

2018-02-09 Thread Volodymyr Litovka

Hi Tony,

On 2/9/18 6:01 AM, Tony Breeds wrote:

On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote:

Hi colleagues,

does anybody here know how to prepare an Ubuntu Artful (17.10) image using
diskimage-builder?

diskimage-builder uses the following naming convention for downloads:
$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz

and while "-root" archives exist for trusty/amd64 and xenial/amd64, for
artful (and bionic) they are absent from cloud-images.ubuntu.com. Only
various kinds of disk images are published there, not a root tree like
the -root archives contain.

I would appreciate any ideas on how to customize a 17.10-based image
using diskimage-builder, or in a diskimage-builder-like fashion.

You might like to investigate the ubuntu-minimal DIB element which will
build your ubuntu image from apt rather than starting with the pre-built
image.

good idea, but with

export DIST="ubuntu-minimal"
export DIB_RELEASE=artful

diskimage-builder fails with the following debug:

2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 | + source /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.427 |  dirname /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.428 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..'
2018-02-09 10:33:22.428 | +++ dib-init-system
2018-02-09 10:33:22.429 | + set -eu
2018-02-09 10:33:22.429 | + set -o pipefail
2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']'
2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]]
2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]]
2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]]
2018-02-09 10:33:22.429 | + echo 'Unknown init system'
2018-02-09 10:36:54.852 | + exit 1
2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system

while earlier it found systemd:

2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 | + source /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.223 |  dirname /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash
2018-02-09 10:33:22.224 | +++ PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/..
2018-02-09 10:33:22.224 | +++ dib-init-system
2018-02-09 10:33:22.225 | + set -eu
2018-02-09 10:33:22.225 | + set -o pipefail
2018-02-09 10:33:22.225 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']'
2018-02-09 10:33:22.225 | + echo systemd
2018-02-09 10:33:22.226 | ++ DIB_INIT_SYSTEM=systemd
2018-02-09 10:33:22.226 | ++ export DIB_INIT_SYSTEM

it seems that somewhere in the middle something happens to the systemd package

In the meantime I'll look at how we can consume the .img file, which is
similar to what we'd need to do for Fedora
The script diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball
contains the function get_ubuntu_tarball() which, after all checks, does
the following:

sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE

Probably the easiest hack around the issue is to change the above to something like

sudo (
    mount -o loop 
    tar cv  | tar xv -C $TARGET_ROOT ...
)

Will try this.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] diskimage-builder: prepare ubuntu 17.x images

2018-02-08 Thread Volodymyr Litovka

Hi colleagues,

does anybody here know how to prepare an Ubuntu Artful (17.10) image using
diskimage-builder?

diskimage-builder uses the following naming convention for downloads:
$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz

and while "-root" archives exist for trusty/amd64 and xenial/amd64, for
artful (and bionic) they are absent from cloud-images.ubuntu.com. Only
various kinds of disk images are published there, not a root tree like
the -root archives contain.

I would appreciate any ideas on how to customize a 17.10-based image
using diskimage-builder, or in a diskimage-builder-like fashion.


Thanks!

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] Octavia LBaaS - networking requirements

2018-02-06 Thread Volodymyr Litovka

Hi Flint,

I think Octavia expects reachability between its components over the
management network, regardless of the network's technology.


On 2/6/18 11:41 AM, Flint WALRUS wrote:
Hi guys, I’m wondering if the Octavia lb-mgmt-net can be an L2 provider
network instead of a neutron L3 VXLAN?


Is Octavia specifically relying on L3 networking, or can it operate
without neutron L3 features?


I didn't find anything specifically related to the network 
requirements except for the network itself.


Thanks guys!




--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] Newton LBaaS v2 settings

2017-12-15 Thread Volodymyr Litovka

Hi Grant,

in the case of Octavia, when you create a healthmonitor with monitoring
parameters:

$ openstack loadbalancer healthmonitor create
usage: openstack loadbalancer healthmonitor create [-h]
                                                   [-f {json,shell,table,value,yaml}]
                                                   [-c COLUMN]
                                                   [--max-width ]
                                                   [--fit-width]
                                                   [--print-empty]
                                                   [--noindent]
                                                   [--prefix PREFIX]
                                                   [--name ] --delay 
                                                   [--expected-codes ]
                                                   [--http_method {GET,POST,DELETE,PUT,HEAD,OPTIONS,PATCH,CONNECT,TRACE}]
                                                   --timeout 
                                                   --max-retries 
                                                   [--url-path ]
                                                   --type {PING,HTTP,TCP,HTTPS,TLS-HELLO}
                                                   [--max-retries-down ]
                                                   [--enable | --disable]


Octavia pushes these parameters into the haproxy config on the Amphora
agent (/var/lib/octavia//haproxy.cfg) like this:

backend f30f2586-a387-40f4-a7b7-9718aebf49d4
    mode tcp
    balance roundrobin
    timeout check 1s
    server 26ae7b5c-4ec4-4bb3-ba21-6c8bccd9cdf8 10.1.4.11:80 weight 1 check inter 5s fall 3 rise 3
    server 611a645e-9b47-40cd-a26a-b0b2a6348959 10.1.4.14:80 weight 1 check inter 5s fall 3 rise 3

So, if you suspect a problem with the backend servers, you can play with
the healthmonitor parameters in order to set appropriate timeouts for
the backend servers in this pool.
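Judging from the haproxy lines above, delay maps to "inter" and max-retries to "fall"/"rise". That mapping can be sketched as follows; the helper name is mine, and the mapping is inferred from the shown config rather than taken from Octavia's actual jinja templates:

```shell
#!/usr/bin/env bash
# Hypothetical helper: render the per-server haproxy check options that
# correspond to a healthmonitor's --delay and --max-retries values.
hm_check_options() {
    local delay=$1 max_retries=$2
    echo "check inter ${delay}s fall ${max_retries} rise ${max_retries}"
}

# The config above (delay=5, max-retries=3) yields:
hm_check_options 5 3   # -> check inter 5s fall 3 rise 3
```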


On 12/15/17 12:11 PM, Grant Morley wrote:


Hi All,

I wonder if anyone would be able to help with some settings I might be
obviously missing for LBaaS. We have a client that uses the service,
but they keep running into issues with their app randomly not working.
Basically, if their app takes longer than 20 seconds to process a
request, it looks like LBaaS times out the connection.


I have had a look and I can't seem to find any default settings for
either "server" or "tunnel", and wondered if there was a way I could
increase or view any default timeout settings through the neutron CLI?


I can only see timeout settings for the "Health Monitor"

Any help will be much appreciated.

Regards,

--
Grant Morley
Senior Cloud Engineer
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io <http://www.absolutedevops.io/> 
gr...@absolutedevops.io <mailto:gr...@absolutedevops.io> 0845 874 0580





--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] [Openstack] Certifying SDKs

2017-12-14 Thread Volodymyr Litovka

Hi Melvin,

isn't an SDK essentially the same as the OpenStack REST API? In my
opinion (which may be wrong), an SDK should simply support everything
the API supports, providing some basic parameter checks (e.g. verifying
that a passed parameter conforms to the IP address format) before
calling the API, in order to reduce load on OpenStack by eliminating
obviously broken requests.


Thanks.

On 12/11/17 8:35 AM, Melvin Hillsman wrote:

Hey everyone,

On the path to potentially certifying SDKs we would like to gather a 
list of scenarios folks would like to see "guaranteed" by an SDK.


Some examples - boot instance from image, boot instance from volume, 
attach volume to instance, reboot instance; very much like InterOp 
works to ensure OpenStack clouds provide specific functionality.


Here is a document we can share to do this - 
https://docs.google.com/spreadsheets/d/1cdzFeV5I4Wk9FK57yqQmp5JJdGfKzEOdB3Vtt9vnVJM/edit#gid=0


--
Kind regards,

Melvin Hillsman
mrhills...@gmail.com <mailto:mrhills...@gmail.com>
mobile: (832) 264-2646


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?

2017-09-25 Thread Volodymyr Litovka

Hi Matt,

On 9/22/17 7:10 PM, Matt Riedemann wrote:

while this approach is ok in general, some comments from my side -
1. For a new instance, if the neutron network has a dns_domain set, 
use it. I'm not totally sure how we tell from the metadata API if it's 
a new instance or not, except when we're building the config drive, 
but that could be sorted out.
In some scenarios, ports can be created for a VM but stay detached until
the "right" time. For example, at the moment Nova doesn't reflect
Neutron's port admin state to the VM (I was long going to report this
and, thanks to this discussion, just filed a bug:
https://bugs.launchpad.net/nova/+bug/1719261 ). So, if you need a VM
with predefined port roles (with corresponding iptables rules) but, for
some reason, these ports should be DOWN, you need to:

- create them before the VM is created
- pass their MAC addresses to the VM in order to create the
corresponding udev naming rules and subsequent configuration
- but not attach them

In such a scenario, a network with the "dns_domain" property can be
unavailable to the VM, since there are no attached ports from this
network at VM creation time.


And a second point: what I wanted to say is that "dns_domain" is a
property which is available only when Designate is in use. It could
instead be an inherent property of the network, used with dnsmasq's
"--domain" parameter in order to get useful responses to DHCP "domain"
queries. Not too critical, but full DNS integration is not always
required when simple DHCP functionality is enough.



2. Otherwise use the dhcp_domain config option in nova.
A crazy idea is to allow customization right here: if the instance's
"name" is itself an FQDN (e.g. myhost.some.domain.here), then:

- ignore "dhcp_domain" and pass "name" unchanged as the hostname to the VM
- but use the "hostname" part of the name (e.g. myhost) to register the VM in OpenStack
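The proposed split could look like this; a sketch under the assumption that any name containing a dot is treated as an FQDN (the function names are mine, purely for illustration):

```shell
#!/usr/bin/env bash
# If the instance name looks like an FQDN, pass it through as the guest
# hostname but register only the short host part in OpenStack.
is_fqdn()          { [[ $1 == *.* ]]; }
guest_hostname()   { echo "$1"; }        # full name, unchanged
registered_name()  { echo "${1%%.*}"; }  # part before the first dot

name="myhost.some.domain.here"
if is_fqdn "$name"; then
    echo "hostname: $(guest_hostname "$name")"   # myhost.some.domain.here
    echo "register: $(registered_name "$name")"  # myhost
fi
```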

Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] [nova] [neutron] Should we continue providing FQDNs for instance hostnames?

2017-09-22 Thread Volodymyr Litovka

Hi Stephen,

I think it's useful to have the hostname in Nova's metadata - it
provides some initial information for cloud-init to configure a newly
created VM, so I would not drop this method. A bit confusing is the
domain part of the hostname (novalocal), which is derived from the
OpenStack-wide, now-deprecated parameter "dhcp_domain":

$ curl http://169.254.169.254/latest/meta-data/hostname
jex-n1.novalocal

cloud-init qualifies this as an FQDN and prepares the configuration
accordingly. Not too critical, but if there were a way to use a
user-defined domain part in the metadata, it would not break backward
compatibility with cloud-init and would reduce bustle during initial VM
configuration :)
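The observed value is just the instance name joined with dhcp_domain; a minimal sketch of that composition (the helper name is mine, and the "novalocal" default is inferred from the output shown above):

```shell
#!/usr/bin/env bash
# Compose the metadata hostname the way the observed output suggests:
# <instance name>.<dhcp_domain>, with dhcp_domain defaulting to novalocal.
metadata_hostname() {
    local name=$1 dhcp_domain=${2:-novalocal}
    echo "${name}.${dhcp_domain}"
}

metadata_hostname jex-n1            # -> jex-n1.novalocal
metadata_hostname jex-n1 corp.lan   # -> jex-n1.corp.lan
```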


And another topic, in Neutron, regarding the domain name. Any DHCP
server created by Neutron will return a "domain" derived from the
system-wide "dns_name" parameter (defined in neutron.conf and used
explicitly in dnsmasq's "--domain" argument). There is no way to
customize this parameter on a per-network basis (the "dns_domain"
attribute is in effect only with Designate; there is no other way to use
it). Again, it would be great if it were possible to set a per-network
domain name in order to handle DHCP/DNS queries from connected VMs.


Thank you.

On 9/8/17 12:12 PM, Stephen Finucane wrote:

[Re-posting (in edited from) from openstack-dev]

Nova has a feature whereby it will provide instance host names that cloud-init
can extract and use inside the guest, i.e. this won't happen without cloud-
init. These host names are fully qualified domain names (FQDNs) based upon the
instance name and local domain name. However, as noted in bug #1698010 [1], the
domain name part of this is based up nova's 'dhcp_domain' option, which is a
nova-network option that has been deprecated [2].

My idea to fix this bug was to start consuming this information from neutron
instead, via the . However, per the feedback in the (WIP) fix [3], this
requires that the 'DNS Integration' extension works and will introduce
a regression for users currently relying on the 'dhcp_domain' option. This
suggests it might not be the best approach to take but, alas, I don't have any
cleverer ones yet.

My initial question to openstack-dev was "are FQDNs a valid thing to use as a
hostname in a guest" and it seems they definitely are, even if they're not
consistently used [4][5]. However, based on other comments [6], it seems there
are alternative approaches and even openstack-infra don't use this
functionality (preferring instead to configure hostnames using their
orchestration software, if that's what nodepool could be seen as?). As a
result, I have a new question: "should nova be in the business of providing
this information (via cloud-init and the metadata service) at all"?

I don't actually have any clever ideas regarding how we can solve this. As
such, I'm open to any and all input.

Cheers,
Stephen

[1] https://bugs.launchpad.net/nova/+bug/1698010
[2] https://review.openstack.org/#/c/395683/
[3] https://review.openstack.org/#/c/480616/
[4] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121948.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121794.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121877.html



--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)

2017-08-11 Thread Volodymyr Litovka


On 8/8/17 7:18 PM, Kevin Benton wrote:

The best way to completely ensure the instance isn't trying to use it 
is to detach it from the instance using the 'nova interface-detach' 
command.
Sure, but this introduces additional complexity in complex environments
where predictable interface naming according to roles is required (e.g.
eth0 is always the WAN connection, eth1 is always LAN1, eth2 is always
control/mgmt, etc.). Attaching/detaching interfaces changes this and
requires managing udev rules, which adds issues when creating a new VM
from a snapshot, ... :-)

Not too critical, everything can be handled using more or less complex
workarounds, but since libvirt supports setting an interface's link
state (using '*virsh domif-setlink domain interface-device state*'), why
not use this call to reflect the interface state according to
OpenStack's settings?


Thanks.


On Tue, Aug 8, 2017 at 7:49 AM, Volodymyr Litovka <doka...@gmx.com 
<mailto:doka...@gmx.com>> wrote:


Hi Kevin,

see below

On 8/8/17 1:06 AM, Kevin Benton wrote:

What backend are you using? That bug is about the port showing
ACTIVE when admin_state_up=False but it's still being
disconnected from the dataplane. If you are seeing dataplane
traffic with admin_state_up=False, then that is a separate bug.

I'm using OVS

Also, keep in mind that marking the port down will still not be
reflected inside of the VM via ifconfig or ethtool. It will
always show active in there. So even after we fix bug 1672629,
you are going to see the port is connected inside of the VM.

Is there a way to disconnect a port, thus putting it into the DOWN
state on the VM, using the OpenStack API? This is important for
*public clouds*, where it can be necessary to shut down a port of an
unmanaged (customer's) VM. The only idea I have is to set
admin_state_up to False and, actually, it's the only command which
controls the port state.

As I mentioned earlier, it seems it was working in Kilo ("I have
checked the behavior of admin_state_up of Kilo version, when port
admin-state-up is set to False, the port status will be DOWN.")
but Ocata shows different behaviour, ignoring this parameter.

So, any ideas on how to shut down a port on a VM using the OpenStack API?

Thank you!


On Mon, Aug 7, 2017 at 5:21 AM, Volodymyr Litovka
<doka...@gmx.com <mailto:doka...@gmx.com>> wrote:

Hi colleagues,

am I the only one who cares about this case? -
https://bugs.launchpad.net/neutron/+bug/1672629
<https://bugs.launchpad.net/neutron/+bug/1672629>

The problem is that when I set a port's admin_state_up to False, it
is still UP on the VM, thus continuing to route statically
configured networks (e.g. received from DHCP host_routes),
sending DHCP reqs, etc

As people discovered, in Kilo everything was ok - "I have
checked the behavior of admin_state_up of Kilo version, when
port admin-state-up is set to False, the port status will be
DOWN." - but at least in Ocata it is broken.

Anybody facing this problem too? Any ideas on how to work
around it?

    Thank you.

-- 
Volodymyr Litovka

   "Vision without Execution is Hallucination." -- Thomas Edison






-- 
Volodymyr Litovka

   "Vision without Execution is Hallucination." -- Thomas Edison




--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)

2017-08-08 Thread Volodymyr Litovka

Hi Kevin,

see below

On 8/8/17 1:06 AM, Kevin Benton wrote:
What backend are you using? That bug is about the port showing ACTIVE 
when admin_state_up=False but it's still being disconnected from the 
dataplane. If you are seeing dataplane traffic with 
admin_state_up=False, then that is a separate bug.

I'm using OVS
Also, keep in mind that marking the port down will still not be 
reflected inside of the VM via ifconfig or ethtool. It will always 
show active in there. So even after we fix bug 1672629, you are going 
to see the port is connected inside of the VM.
Is there a way to disconnect a port, thus putting it into the DOWN state
on the VM, using the OpenStack API? This is important for *public
clouds*, where it can be necessary to shut down a port of an unmanaged
(customer's) VM. The only idea I have is to set admin_state_up to False
and, actually, it's the only command which controls the port state.

As I mentioned earlier, it seems it was working in Kilo ("I have checked
the behavior of admin_state_up of Kilo version, when port admin-state-up
is set to False, the port status will be DOWN.") but Ocata shows
different behaviour, ignoring this parameter.

So, any ideas on how to shut down a port on a VM using the OpenStack API?

Thank you!


On Mon, Aug 7, 2017 at 5:21 AM, Volodymyr Litovka <doka...@gmx.com 
<mailto:doka...@gmx.com>> wrote:


Hi colleagues,

am I the only one who cares about this case? -
https://bugs.launchpad.net/neutron/+bug/1672629
<https://bugs.launchpad.net/neutron/+bug/1672629>

The problem is that when I set a port's admin_state_up to False, it is
still UP on the VM, thus continuing to route statically configured
networks (e.g. received from DHCP host_routes), sending DHCP reqs, etc

As people discovered, in Kilo everything was ok - "I have checked
the behavior of admin_state_up of Kilo version, when port
admin-state-up is set to False, the port status will be DOWN." -
but at least in Ocata it is broken.

Anybody facing this problem too? Any ideas on how to work around it?

Thank you.

-- 
Volodymyr Litovka

   "Vision without Execution is Hallucination." -- Thomas Edison






--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] [nova]

2017-08-07 Thread Volodymyr Litovka
If you don't recreate the Neutron ports (just destroying the VM,
creating it anew and attaching the old ports), then you can distinguish
between interfaces by their MAC addresses and store this in udev rules.
You can do this on first boot (e.g. in cloud-init's "startcmd" command),
using information from the /sys/class/net directory.
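A minimal sketch of that udev approach; the rule text is standard udev net matching, while the MAC address, the interface name, and the helper name are examples of mine:

```shell
#!/usr/bin/env bash
# Emit a persistent-net udev rule that pins an interface name to a MAC.
# On first boot, one could generate such lines from the MACs found under
# /sys/class/net/*/address and write them to e.g.
# /etc/udev/rules.d/70-persistent-net.rules (path is an assumption).
mac_to_udev_rule() {
    local mac=$1 name=$2
    printf 'SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="%s", NAME="%s"\n' \
        "$mac" "$name"
}

mac_to_udev_rule fa:16:3e:00:00:01 eth0
```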



On 7/31/17 9:14 PM, Morgenstern, Chad wrote:


Hi,

I am trying to programmatically rebuild a nova instance that has a 
persistent volume for its root device.


I am specifically trying to rebuild an instance that has multiple 
network interfaces and a floating ip.


I have observed that the order in which the network interfaces are
attached matters; the floating IP attachment is eth0-based.


How do I figure out which of the interfaces currently attached is 
associated with eth0?






--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



[Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)

2017-08-07 Thread Volodymyr Litovka

Hi colleagues,

am I the only one who cares about this case? -
https://bugs.launchpad.net/neutron/+bug/1672629

The problem: when I set a port's admin_state_up to False, it is still UP
on the VM, thus continuing to route statically configured networks (e.g.
received from DHCP host_routes), sending DHCP requests, etc.

As people discovered, in Kilo everything was OK - "I have checked the
behavior of admin_state_up of Kilo version, when port admin-state-up is
set to False, the port status will be DOWN." - but at least in Ocata it
is broken.

Is anybody facing this problem too? Any ideas on how to work around it?

Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] Ocata diskimage-builder heat issues

2017-06-30 Thread Volodymyr Litovka

Hi Amit,

Please check whether this could be the issue -

https://bugs.launchpad.net/openstack-manuals/+bug/1661759

you should use the 'v2.0' path in both the ec2authtoken and
keystone_authtoken sections of heat.conf.
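For reference, a hedged sketch of the resulting heat.conf fragments; the controller host and port are placeholders, and the point per the bug above is the explicit 'v2.0' in the auth URI:

```ini
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0

[ec2authtoken]
auth_uri = http://controller:5000/v2.0
```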


On 6/30/17 11:21 AM, Amit Kumar wrote:

Hello,

Yes, my instance had the os-collect-config service running.
Unfortunately, I no longer have the same setup to see whether
/var/lib/heat-config/ contained the deployment scripts, but I remember I
checked that /var/lib/cloud/instance/scripts/ had nothing in it.


Regards,
Amit


On Thu, Jun 29, 2017 at 7:54 PM, Ignazio Cassano 
<ignaziocass...@gmail.com <mailto:ignaziocass...@gmail.com>> wrote:


Hello Amit, tomorrow I'll try with trusty. Centos7 is working.
Some questions:
does your instance, created by heat with SoftwareDeployment, have
os-collect-config running?
If yes, after launching os-refresh-config and going under
/var/lib/heat-config/ on your instance, you should see the deployment
scripts

Regards
Ignazio


2017-06-29 13:47 GMT+02:00 Amit Kumar <ebiib...@gmail.com
<mailto:ebiib...@gmail.com>>:

Hi,

I tried to create an Ubuntu Trusty image using diskimage-builder
tag 1.28.0; dib-run-parts got included in the VM, so os-refresh-config
should have worked, but SoftwareDeployment still didn't work
with the cloud image.

Regards,
Amit


On Jun 29, 2017 5:08 PM, "Matteo Panella"
<matteo.pane...@cnaf.infn.it
<mailto:matteo.pane...@cnaf.infn.it>> wrote:

On 29/06/2017 12:11, Ignazio Cassano wrote:
> Hello all,
> the new version of diskimage-builder (I am testing for
centos 7) does
> not install dib-utils and jq in the image.
> The above are required by os-refresh-config .

Yup, I reverted diskimage-builder to 2.2.0 (the last tag before
dib-run-parts stopped being injected) and os-refresh-config works
correctly.

os-refresh-config should probably be modified to depend on
dib-run-parts; however:
a) dib-run-parts provides package-style installation with an
RPM-specific package name
b) the package does not exist for Ubuntu 14.04 and there are no
source-style installation scripts

Regards,
--
Matteo Panella
INFN CNAF
Via Ranzani 13/2 c - 40127 Bologna, Italy
Phone: +39 051 609 2903 <tel:+39%20051%20609%202903>








--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



Re: [Openstack-operators] Ocata diskimage-builder heat issues

2017-06-30 Thread Volodymyr Litovka

Hi Ignazio,

recently I've opened bug - https://bugs.launchpad.net/heat/+bug/1691964 
- with instructions on how to build images with Software Deployment 
using diskimage-builder.


Hope this'll help.


On 6/28/17 7:52 PM, Ignazio Cassano wrote:
Hi All, last March I used the diskimage-builder tool to generate a
centos7 image with tools for heat software deployment. That image worked
fine with heat software deployment in Newton and still works today with
Ocata.
After upgrading diskimage-builder and creating the image again, heat
software deployment does not work because of some errors in
os-refresh-config.
I do not remember which version of diskimage-builder I used in March.
At this time I am trying 2.6.1 and 2.6.2, with the same issues.

Does anyone know what has changed?
Regards
Ignazio





--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison



[Openstack-operators] control guest VMs in isolated network

2017-05-23 Thread Volodymyr Litovka

Hi colleagues,

are there ways to control guest VMs which reside in an isolated network?

In general, two methods are available:

1. use Heat's SoftwareDeployment method
2. use Qemu Guest Agent

The first method requires that Keystone/Heat be reachable
(os-collect-config authenticates against Keystone, receives the endpoint
list and uses the public Heat endpoint to deploy changes), but since the
network is isolated, these addresses are inaccessible. It could work if
Neutron provided proxying the way it does for the metadata server, but I
didn't find this kind of functionality in Neutron's documentation or in
other sources. And I don't want to add another NIC to the VM for access
to Keystone/Heat, since that violates the customer's rules (this is, by
design, an isolated network with just a VPN connection to the premises).
So the first question is: *can Neutron proxy requests to Keystone/Heat
the way it does for metadata*?

The second method (using the QEMU guest agent) gives some control over
the VM but, again, I wasn't able to find how this can be achieved using
Nova. There are some mentions of this functionality, but no details and
no examples. So, the second question: *does Nova support the QEMU guest
agent and allow use of the available QEMU-GA protocol calls, including
'guest-exec'*?

And maybe there are other methods, or ways to use the methods mentioned
above, to bypass the isolation while keeping it?
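On the QEMU side, the guest agent protocol itself does provide guest-exec. Below is a sketch of the raw QGA message one could send via 'virsh qemu-agent-command', i.e. direct libvirt access rather than a Nova API (the helper name is mine; whether Nova exposes this is exactly the open question):

```shell
#!/usr/bin/env bash
# Build the qemu-guest-agent guest-exec request for a given command path.
# This is the raw QGA protocol message; it assumes direct agent access
# through libvirt, not a Nova API.
qga_guest_exec() {
    printf '{"execute": "guest-exec", "arguments": {"path": "%s", "capture-output": true}}' "$1"
}

# Example invocation (domain name is a placeholder):
#   virsh qemu-agent-command <domain> "$(qga_guest_exec /usr/bin/uptime)"
qga_guest_exec /usr/bin/uptime
```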


Thank you!

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison
