Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Dan Kenigsberg
On Thu, Apr 6, 2017 at 5:27 PM, Petr Horacek  wrote:
> I started the basic-hc test with latest VDSM and it passed OK:
>
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/205/
>
> Are we good?

"we" as in "vdsm networking" are good.

"We" as devel@ovirt still have a riddle regarding why hc-basic tends
to get stuck.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Firewalld migration.

2017-04-06 Thread Yaniv Kaul
On Thu, Apr 6, 2017 at 4:03 PM, Leon Goldberg  wrote:

> p.s., we've begun implementing option #2 using the following design and
> approach:
>

I'm missing the design @ https://github.com/oVirt/ovirt-site/pulls ...


>
> Beginning with a configurable threshold for cluster compatibility levels
> (defaulting to 4.2), instead of using/deploying iptables' deploy unit, we
> set a firewalld boolean in vdsm.conf's deploy unit (similarly to iptables;
> only if the firewall override is set).
>

This is exactly what I prefer we do - continue to extend VDSM to
perform deployment, service management and configuration...


>
> Using a new dedicated vdsm configurator for firewalld, the required
> services are added to the active zone(s) (currently just the public
> zone) and become operational. This only takes place if firewalld's boolean
> is set to true in vdsm.conf. We determine what non-baseline services should
> be added based on what is installed on the host (e.g. gluster packages).
>
> This approach guarantees that neither upgrading Engine nor a host
> separately will cause unwarranted firewall related modifications (more
> specifically, custom rules/iptables' service remain intact). Explicitly
> installing/re-installing hosts in compatible clusters via an upgraded
> Engine is the only way to override custom rules/enable firewalld's
> service over iptables' service (barring manual alterations to
> vdsm.conf...). We're also going to warn users during engine-setup and add
> alerts during host (re)installations.
>

I would not pursue this direction until we are convinced using Ansible is
not a better, easier, more user-friendly approach.
Ansible can do all the checks and understand whether it needs to keep iptables or
switch to firewalld.
Y.


>
> On Thu, Apr 6, 2017 at 2:56 PM, Leon Goldberg  wrote:
>
>> Hey,
>>
>> There seems to be a growing consensus towards moving custom rules out of
>> Engine. It is believed that Engine shouldn't have assumed the role of a
>> centralized firewall management system in the first place, and that using a
>> proper 3rd party solution will be both favorable to the users (allowing
>> better functionality and usability) and will allow us to simplify our
>> firewall deployment process.
>>
>> Considering we don't have to manage custom rules, a host will be able to
>> derive all the information regarding its firewalld services from its own
>> configuration. Consequently, option #2 becomes a forerunner with Engine's
>> involvement being even further diminished.
>>
>>
>>
>> On Sun, Mar 26, 2017 at 1:33 PM, Leon Goldberg 
>> wrote:
>>
>>>
>>> Hey,
>>>
>>> We're looking to migrate from iptables to firewalld. We came up with a
>>> couple of possible approaches we'd like opinions on. I'll list the options
>>> first, and will
>>>
>>> 1) Replicate existing flow:
>>>
>>> As of today, iptables rules are inserted into the database via SQL config
>>> files. During host deployment, VdsDeployIptablesUnit adds the required
>>> rules (based on cluster/firewall configuration) to the deployment
>>> configuration, en route to being deployed on the host via otopi and its
>>> iptables plugin.
>>>
>>> Pros:
>>>
>>> - Reuse of existing infrastructure.
>>>
>>> Cons:
>>>
>>> - Current infrastructure is overly complex...
>>> - Many of the required services are provided by firewalld. Rewriting
>>> them is wasteful; specifying them (instead of providing actual service .xml
>>> content) will require adaptations on both (engine/host) sides. More on that
>>> later.
>>>
>>>
>>> 2) Host side based configuration:
>>>
>>> Essentially, all the required logic (aforementioned cluster/firewall
>>> configuration) to determine if/how firewalld should be deployed could be
>>> passed on to the host via ohd. Vdsm could take on the responsibility of
>>> examining the relevant configuration, and then creating and/or adding the
>>> required services (using vdsm.conf and vdsm-tool).
>>>
>>> Pros:
>>>
>>>  - Engine side involvement is greatly diminished.
>>>  - Simple(r).
>>>
>>> Cons:
>>>
>>>  - Custom services/rules capabilities will have to be rethought and
>>> re-implemented (current infrastructure supports custom iptables rules by
>>> being specified in the SQL config file).
>>>
>>>
>>> 3) Some other hybrid approach:
>>>
>>> If we're able to guarantee all the required firewalld services are
>>> statically provided one way or the other, the current procedure could be
>>> replicated and made simpler. Instead of providing xml content in
>>> the form of strings, service names could be supplied. The responsibility of
>>> actual service deployment becomes easier, and could be left to otopi (with
>>> the appropriate modifications) or switched over to vdsm.
>>>
>>> --
>>>
>>> Regardless, usage of statically provided vs. dynamically created
>>> services remains an open question. I think we'd like to avoid implementing
>>> logic that asks whether some service is provided (and then write it if it
>>> isn't...), and so choosing between the dynamic and static approaches is also needed.

Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Petr Horacek
I started the basic-hc test with latest VDSM and it passed OK:

http://jenkins.ovirt.org/job/ovirt-system-tests_manual/205/

Are we good?

2017-04-06 14:21 GMT+02:00 Dan Kenigsberg :
> On Thu, Apr 6, 2017 at 3:02 PM, Sahina Bose  wrote:
>>
>>
>> On Thu, Apr 6, 2017 at 2:24 PM, Dan Kenigsberg  wrote:
>>>
>>> On Thu, Apr 6, 2017 at 11:31 AM, Sahina Bose  wrote:
>>> >
>>> >
>>> > On Thu, Apr 6, 2017 at 1:31 PM, Dan Kenigsberg 
>>> > wrote:
>>> >>
>>> >> I've merged the fix of https://gerrit.ovirt.org/#/c/75134/
>>> >>
>>> >> With it, the hc-basic suite no longer fails - it hangs for hours, and I
>>> >> don't know why.
>>> >>
>>> >> Sahina, can you look at
>>> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/
>>> >
>>> >
>>> > The gluster setup and HE install does take around 15-20 minutes. Looks
>>> > like
>>> > you aborted in between?
>>>
>>>
>>> What does the log say?
>>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/console
>>>
>>> I aborted the job after 1 hour and 47 minutes.
>>
>>
>> Sorry, missed that. I could not make out much from logs apart from the fact
>> that it's stuck on starting gluster services. Since there are no gluster
>> logs available from the host VMs, cannot dig further.
>
> Can you check by running the hc suite on-premise?
>
> I believe that we've solved the network-related bug, but I do not have
> proof, as there's something else broken there.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Firewalld migration.

2017-04-06 Thread Leon Goldberg
p.s., we've begun implementing option #2 using the following design and
approach:

Beginning with a configurable threshold for cluster compatibility levels
(defaulting to 4.2), instead of using/deploying iptables' deploy unit, we
set a firewalld boolean in vdsm.conf's deploy unit (similarly to iptables;
only if the firewall override is set).

Using a new dedicated vdsm configurator for firewalld, the required
services are added to the active zone(s) (currently just the public
zone) and become operational. This only takes place if firewalld's boolean
is set to true in vdsm.conf. We determine what non-baseline services should
be added based on what is installed on the host (e.g. gluster packages).
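
For illustration only, here is a minimal sketch of what such a configurator
step could look like (the helper names, baseline service list and
package-to-service mapping are assumptions, not the actual vdsm code):

# Illustrative sketch -- not the actual vdsm firewalld configurator.
import subprocess

BASE_SERVICES = ['vdsm', 'libvirt-tls', 'ovirt-imageio']   # assumed baseline set
PACKAGE_SERVICES = {'glusterfs-server': 'glusterfs'}       # package -> firewalld service

def _installed(package):
    # rpm -q exits with 0 when the package is installed
    return subprocess.call(['rpm', '-q', package]) == 0

def configure_firewalld(enabled):
    """Permanently add the required services to the default (active) zone."""
    if not enabled:   # e.g. the firewalld boolean read from vdsm.conf
        return
    services = list(BASE_SERVICES)
    services += [svc for pkg, svc in PACKAGE_SERVICES.items() if _installed(pkg)]
    for svc in services:
        subprocess.check_call(['firewall-cmd', '--permanent', '--add-service', svc])
    subprocess.check_call(['firewall-cmd', '--reload'])   # activate the permanent config

Such a step could then be wired in, for example, as a vdsm-tool configurator
module that runs during host deploy.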

This approach guarantees that neither upgrading Engine nor a host separately
will cause unwarranted firewall related modifications (more specifically,
custom rules/iptables' service remain intact). Explicitly
installing/re-installing hosts in compatible clusters via an upgraded
Engine is the only way to override custom rules/enable firewalld's
service over iptables' service (barring manual alterations to
vdsm.conf...). We're also going to warn users during engine-setup and add
alerts during host (re)installations.

On Thu, Apr 6, 2017 at 2:56 PM, Leon Goldberg  wrote:

> Hey,
>
> There seems to be a growing consensus towards moving custom rules out of
> Engine. It is believed that Engine shouldn't have assumed the role of a
> centralized firewall management system in the first place, and that using a
> proper 3rd party solution will be both favorable to the users (allowing
> better functionality and usability) and will allow us to simplify our
> firewall deployment process.
>
> Considering we don't have to manage custom rules, a host will be able to
> derive all the information regarding its firewalld services from its own
> configuration. Consequently, option #2 becomes a forerunner with Engine's
> involvement being even further diminished.
>
>
>
> On Sun, Mar 26, 2017 at 1:33 PM, Leon Goldberg 
> wrote:
>
>>
>> Hey,
>>
>> We're looking to migrate from iptables to firewalld. We came up with a
>> couple of possible approaches we'd like opinions on. I'll list the options
>> first, and will
>>
>> 1) Replicate existing flow:
>>
>> As of today, iptables rules are inserted into the database via SQL config
>> files. During host deployment, VdsDeployIptablesUnit adds the required
>> rules (based on cluster/firewall configuration) to the deployment
>> configuration, en route to being deployed on the host via otopi and its
>> iptables plugin.
>>
>> Pros:
>>
>> - Reuse of existing infrastructure.
>>
>> Cons:
>>
>> - Current infrastructure is overly complex...
>> - Many of the required services are provided by firewalld. Rewriting them
>> is wasteful; specifying them (instead of providing actual service .xml
>> content) will require adaptations on both (engine/host) sides. More on that
>> later.
>>
>>
>> 2) Host side based configuration:
>>
>> Essentially, all the required logic (aforementioned cluster/firewall
>> configuration) to determine if/how firewalld should be deployed could be
>> passed on to the host via ohd. Vdsm could take on the responsibility of
>> examining the relevant configuration, and then creating and/or adding the
>> required services (using vdsm.conf and vdsm-tool).
>>
>> Pros:
>>
>>  - Engine side involvement is greatly diminished.
>>  - Simple(r).
>>
>> Cons:
>>
>>  - Custom services/rules capabilities will have to be rethought and
>> re-implemented (current infrastructure supports custom iptables rules by
>> being specified in the SQL config file).
>>
>>
>> 3) Some other hybrid approach:
>>
>> If we're able to guarantee all the required firewalld services are
>> statically provided one way or the other, the current procedure could be
>> replicated and made simpler. Instead of providing xml content in
>> the form of strings, service names could be supplied. The responsibility of
>> actual service deployment becomes easier, and could be left to otopi (with
>> the appropriate modifications) or switched over to vdsm.
>>
>> --
>>
>> Regardless, usage of statically provided vs. dynamically created services
>> remains an open question. I think we'd like to avoid implementing logic
>> that asks whether some service is provided (and then write it if it
>> isn't...), and so choosing between the dynamic and static approaches is
>> also needed. Using the static approach, guaranteeing *all* services are
>> provided will be required.
>>
>> I do believe guaranteeing the presence of all required services is worth
>> it, however custom services aren't going to be naively compatible, and
>> we'll still have to use similar mechanism as described in #1 (service
>> string -> .xml -> addition of service name to active zone).
>>
>>
>> Your thoughts are welcome.
>>
>> Thanks,
>> Leon
>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Firewalld migration.

2017-04-06 Thread Martin Sivak
> Engine - no, running from RHVM - yes - if you are using Ansible, I think it
> makes sense to use a single common script or possibly per cluster.

Exactly my point. The engine should not manage those, but it should
still know how to execute them to perform a proper host deploy.

Martin

On Thu, Apr 6, 2017 at 2:13 PM, Yaniv Kaul  wrote:
>
>
> On Thu, Apr 6, 2017 at 2:56 PM, Leon Goldberg  wrote:
>>
>> Hey,
>>
>> There seems to be a growing consensus towards moving custom rules out of
>> Engine. It is believed that Engine shouldn't have assumed the role of a
>> centralized firewall management system in the first place, and that using a
>> proper 3rd party solution will be both favorable to the users (allowing
>> better functionality and usability) and will allow us to simplify our
>> firewall deployment process.
>>
>> Considering we don't have to manage custom rules, a host will be able to
>> derive all the information regarding its firewalld services from its own
>> configuration. Consequently, option #2 becomes a forerunner with Engine's
>> involvement being even further diminished.
>
>
> Engine - no, running from RHVM - yes - if you are using Ansible, I think it
> makes sense to use a single common script or possibly per cluster.
> Y.
>
>>
>>
>>
>> On Sun, Mar 26, 2017 at 1:33 PM, Leon Goldberg 
>> wrote:
>>>
>>>
>>> Hey,
>>>
>>> We're looking to migrate from iptables to firewalld. We came up with a
>>> couple of possible approaches we'd like opinions on. I'll list the options
>>> first, and will
>>>
>>> 1) Replicate existing flow:
>>>
>>> As of today, iptables rules are inserted into the database via SQL config
>>> files. During host deployment, VdsDeployIptablesUnit adds the required rules
>>> (based on cluster/firewall configuration) to the deployment configuration,
>>> en route to being deployed on the host via otopi and its iptables plugin.
>>>
>>> Pros:
>>>
>>> - Reuse of existing infrastructure.
>>>
>>> Cons:
>>>
>>> - Current infrastructure is overly complex...
>>> - Many of the required services are provided by firewalld. Rewriting them
>>> is wasteful; specifying them (instead of providing actual service .xml
>>> content) will require adaptations on both (engine/host) sides. More on that
>>> later.
>>>
>>>
>>> 2) Host side based configuration:
>>>
>>> Essentially, all the required logic (aforementioned cluster/firewall
>>> configuration) to determine if/how firewalld should be deployed could be
>>> passed on to the host via ohd. Vdsm could take on the responsibility of
>>> examining the relevant configuration, and then creating and/or adding the
>>> required services (using vdsm.conf and vdsm-tool).
>>>
>>> Pros:
>>>
>>>  - Engine side involvement is greatly diminished.
>>>  - Simple(r).
>>>
>>> Cons:
>>>
>>>  - Custom services/rules capabilities will have to be rethought and
>>> re-implemented (current infrastructure supports custom iptables rules by
>>> being specified in the SQL config file).
>>>
>>>
>>> 3) Some other hybrid approach:
>>>
>>> If we're able to guarantee all the required firewalld services are
>>> statically provided one way or the other, the current procedure could be
>>> replicated and made simpler. Instead of providing xml content in the
>>> form of strings, service names could be supplied. The responsibility of
>>> actual service deployment becomes easier, and could be left to otopi (with
>>> the appropriate modifications) or switched over to vdsm.
>>>
>>> --
>>>
>>> Regardless, usage of statically provided vs. dynamically created services
>>> remains an open question. I think we'd like to avoid implementing logic that
>>> asks whether some service is provided (and then write it if it isn't...), and
>>> so choosing between the dynamic and static approaches is also needed. Using
>>> the static approach, guaranteeing all services are provided will be
>>> required.
>>>
>>> I do believe guaranteeing the presence of all required services is worth
>>> it, however custom services aren't going to be naively compatible, and we'll
>>> still have to use similar mechanism as described in #1 (service string ->
>>> .xml -> addition of service name to active zone).
>>>
>>>
>>> Your thoughts are welcome.
>>>
>>> Thanks,
>>> Leon
>>>
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Dan Kenigsberg
On Thu, Apr 6, 2017 at 3:02 PM, Sahina Bose  wrote:
>
>
> On Thu, Apr 6, 2017 at 2:24 PM, Dan Kenigsberg  wrote:
>>
>> On Thu, Apr 6, 2017 at 11:31 AM, Sahina Bose  wrote:
>> >
>> >
>> > On Thu, Apr 6, 2017 at 1:31 PM, Dan Kenigsberg 
>> > wrote:
>> >>
>> >> I've merged the fix of https://gerrit.ovirt.org/#/c/75134/
>> >>
>> >> With it, the hc-basic suite no longer fails - it hangs for hours, and I
>> >> don't know why.
>> >>
>> >> Sahina, can you look at
>> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/
>> >
>> >
>> > The gluster setup and HE install does take around 15-20 minutes. Looks
>> > like
>> > you aborted in between?
>>
>>
>> What does the log say?
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/console
>>
>> I aborted the job after 1 hour and 47 minutes.
>
>
> Sorry, missed that. I could not make out much from logs apart from the fact
> that it's stuck on starting gluster services. Since there are no gluster
> logs available from the host VMs, cannot dig further.

Can you check by running the hc suite on-premise?

I believe that we've solved the network-related bug, but I do not have
proof, as there's something else broken there.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Sahina Bose
On Thu, Apr 6, 2017 at 5:41 PM, Yaniv Kaul  wrote:

>
>
> On Thu, Apr 6, 2017 at 3:02 PM, Sahina Bose  wrote:
>
>>
>>
>> On Thu, Apr 6, 2017 at 2:24 PM, Dan Kenigsberg  wrote:
>>
>>> On Thu, Apr 6, 2017 at 11:31 AM, Sahina Bose  wrote:
>>> >
>>> >
>>> > On Thu, Apr 6, 2017 at 1:31 PM, Dan Kenigsberg 
>>> wrote:
>>> >>
>>> >> I've merged the fix of https://gerrit.ovirt.org/#/c/75134/
>>> >>
>>> >> With it, the hc-basic suite no longer fails - it hangs for hours, and I
>>> >> don't know why.
>>> >>
>>> >> Sahina, can you look at
>>> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/
>>> >
>>> >
>>> > The gluster setup and HE install does take around 15-20 minutes. Looks
>>> like
>>> > you aborted in between?
>>>
>>>
>>> What does the log say?
>>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/console
>>>
>>> I aborted the job after 1 hour and 47 minutes.
>>>
>>
>> Sorry, missed that. I could not make out much from logs apart from the
>> fact that it's stuck on starting gluster services. Since there are no
>> gluster logs available from the host VMs, cannot dig further.
>>
>
> Which logs are needed? I thought we collect everything from /var/log ?
>

I think logs are collected only for the tests?
This happens prior to running the tests in the test scenario.


> Y.
>
>
>>
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Firewalld migration.

2017-04-06 Thread Yaniv Kaul
On Thu, Apr 6, 2017 at 2:56 PM, Leon Goldberg  wrote:

> Hey,
>
> There seems to be a growing consensus towards moving custom rules out of
> Engine. It is believed that Engine shouldn't have assumed the role of a
> centralized firewall management system in the first place, and that using a
> proper 3rd party solution will be both favorable to the users (allowing
> better functionality and usability) and will allow us to simplify our
> firewall deployment process.
>
> Considering we don't have to manage custom rules, a host will be able to
> derive all the information regarding its firewalld services from its own
> configuration. Consequently, option #2 becomes a forerunner with Engine's
> involvement being even further diminished.
>

Engine - no, running from RHVM - yes - if you are using Ansible, I think
it makes sense to use a single common script or possibly per cluster.
Y.


>
>
> On Sun, Mar 26, 2017 at 1:33 PM, Leon Goldberg 
> wrote:
>
>>
>> Hey,
>>
>> We're looking to migrate from iptables to firewalld. We came up with a
>> couple of possible approaches we'd like opinions on. I'll list the options
>> first, and will
>>
>> 1) Replicate existing flow:
>>
>> As of today, iptables rules are inserted into the database via SQL config
>> files. During host deployment, VdsDeployIptablesUnit adds the required
>> rules (based on cluster/firewall configuration) to the deployment
>> configuration, en route to being deployed on the host via otopi and its
>> iptables plugin.
>>
>> Pros:
>>
>> - Reuse of existing infrastructure.
>>
>> Cons:
>>
>> - Current infrastructure is overly complex...
>> - Many of the required services are provided by firewalld. Rewriting them
>> is wasteful; specifying them (instead of providing actual service .xml
>> content) will require adaptations on both (engine/host) sides. More on that
>> later.
>>
>>
>> 2) Host side based configuration:
>>
>> Essentially, all the required logic (aforementioned cluster/firewall
>> configuration) to determine if/how firewalld should be deployed could be
>> passed on to the host via ohd. Vdsm could take on the responsibility of
>> examining the relevant configuration, and then creating and/or adding the
>> required services (using vdsm.conf and vdsm-tool).
>>
>> Pros:
>>
>>  - Engine side involvement is greatly diminished.
>>  - Simple(r).
>>
>> Cons:
>>
>>  - Custom services/rules capabilities will have to be rethought and
>> re-implemented (current infrastructure supports custom iptables rules by
>> being specified in the SQL config file).
>>
>>
>> 3) Some other hybrid approach:
>>
>> If we're able to guarantee all the required firewalld services are
>> statically provided one way or the other, the current procedure could be
>> replicated and made simpler. Instead of providing xml content in
>> the form of strings, service names could be supplied. The responsibility of
>> actual service deployment becomes easier, and could be left to otopi (with
>> the appropriate modifications) or switched over to vdsm.
>>
>> --
>>
>> Regardless, usage of statically provided vs. dynamically created services
>> remains an open question. I think we'd like to avoid implementing logic
>> that asks whether some service is provided (and then write it if it
>> isn't...), and so choosing between the dynamic and static approaches is
>> also needed. Using the static approach, guaranteeing *all* services are
>> provided will be required.
>>
>> I do believe guaranteeing the presence of all required services is worth
>> it, however custom services aren't going to be naively compatible, and
>> we'll still have to use similar mechanism as described in #1 (service
>> string -> .xml -> addition of service name to active zone).
>>
>>
>> Your thoughts are welcome.
>>
>> Thanks,
>> Leon
>>
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Yaniv Kaul
On Thu, Apr 6, 2017 at 3:02 PM, Sahina Bose  wrote:

>
>
> On Thu, Apr 6, 2017 at 2:24 PM, Dan Kenigsberg  wrote:
>
>> On Thu, Apr 6, 2017 at 11:31 AM, Sahina Bose  wrote:
>> >
>> >
>> > On Thu, Apr 6, 2017 at 1:31 PM, Dan Kenigsberg 
>> wrote:
>> >>
>> >> I've merged the fix of https://gerrit.ovirt.org/#/c/75134/
>> >>
>> >> With it, the hc-basic suite no longer fails - it hangs for hours, and I
>> >> don't know why.
>> >>
>> >> Sahina, can you look at
>> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/
>> >
>> >
>> > The gluster setup and HE install does take around 15-20 minutes. Looks
>> like
>> > you aborted in between?
>>
>>
>> What does the log say?
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/console
>>
>> I aborted the job after 1 hour and 47 minutes.
>>
>
> Sorry, missed that. I could not make out much from logs apart from the
> fact that it's stuck on starting gluster services. Since there are no
> gluster logs available from the host VMs, cannot dig further.
>

Which logs are needed? I thought we collect everything from /var/log ?
Y.


>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Sahina Bose
On Thu, Apr 6, 2017 at 2:24 PM, Dan Kenigsberg  wrote:

> On Thu, Apr 6, 2017 at 11:31 AM, Sahina Bose  wrote:
> >
> >
> > On Thu, Apr 6, 2017 at 1:31 PM, Dan Kenigsberg 
> wrote:
> >>
> >> I've merged the fix of https://gerrit.ovirt.org/#/c/75134/
> >>
> >> With it, the hc-basic suite no longer fails - it hangs for hours, and I
> >> don't know why.
> >>
> >> Sahina, can you look at
> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/
> >
> >
> > The gluster setup and HE install does take around 15-20 minutes. Looks
> like
> > you aborted in between?
>
>
> What does the log say?
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/console
>
> I aborted the job after 1 hour and 47 minutes.
>

Sorry, missed that. I could not make out much from logs apart from the fact
that it's stuck on starting gluster services. Since there are no gluster
logs available from the host VMs, cannot dig further.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Firewalld migration.

2017-04-06 Thread Leon Goldberg
Hey,

There seems to be a growing consensus towards moving custom rules out of
Engine. It is believed that Engine shouldn't have assumed the role of a
centralized firewall management system in the first place, and that using a
proper 3rd party solution will be both favorable to the users (allowing
better functionality and usability) and will allow us to simplify our
firewall deployment process.

Considering we don't have to manage custom rules, a host will be able to
derive all the information regarding its firewalld services from its own
configuration. Consequently, option #2 becomes a forerunner with Engine's
involvement being even further diminished.
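
As a rough illustration of the "service string -> .xml -> addition of service
name to active zone" flow referenced below (all names, paths and ports here are
assumptions, not the agreed design):

# Illustrative sketch only: drop a firewalld service definition on the host
# and enable it in the default zone.
import subprocess

SERVICE_XML = '''<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>%(name)s</short>
  <description>Service derived from host-side configuration</description>
  %(ports)s
</service>
'''

def deploy_service(name, ports):
    port_tags = '\n  '.join('<port protocol="%s" port="%s"/>' % (proto, port)
                            for proto, port in ports)
    with open('/etc/firewalld/services/%s.xml' % name, 'w') as f:
        f.write(SERVICE_XML % {'name': name, 'ports': port_tags})
    # reload so firewalld picks up the new definition, then enable it permanently
    subprocess.check_call(['firewall-cmd', '--reload'])
    subprocess.check_call(['firewall-cmd', '--permanent', '--add-service', name])
    subprocess.check_call(['firewall-cmd', '--reload'])

# e.g. deploy_service('ovirt-custom', [('tcp', '54321')])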



On Sun, Mar 26, 2017 at 1:33 PM, Leon Goldberg  wrote:

>
> Hey,
>
> We're looking to migrate from iptables to firewalld. We came up with a
> couple of possible approaches we'd like opinions on. I'll list the options
> first, and will
>
> 1) Replicate existing flow:
>
> As of today, iptables rules are inserted into the database via SQL config
> files. During host deployment, VdsDeployIptablesUnit adds the required
> rules (based on cluster/firewall configuration) to the deployment
> configuration, en route to being deployed on the host via otopi and its
> iptables plugin.
>
> Pros:
>
> - Reuse of existing infrastructure.
>
> Cons:
>
> - Current infrastructure is overly complex...
> - Many of the required services are provided by firewalld. Rewriting them
> is wasteful; specifying them (instead of providing actual service .xml
> content) will require adaptations on both (engine/host) sides. More on that
> later.
>
>
> 2) Host side based configuration:
>
> Essentially, all the required logic (aforementioned cluster/firewall
> configuration) to determine if/how firewalld should be deployed could be
> passed on to the host via ohd. Vdsm could take on the responsibility of
> examining the relevant configuration, and then creating and/or adding the
> required services (using vdsm.conf and vdsm-tool).
>
> Pros:
>
>  - Engine side involvement is greatly diminished.
>  - Simple(r).
>
> Cons:
>
>  - Custom services/rules capabilities will have to be rethought and
> re-implemented (current infrastructure supports custom iptables rules by
> being specified in the SQL config file).
>
>
> 3) Some other hybrid approach:
>
> If we're able to guarantee all the required firewalld services are
> statically provided one way or the other, the current procedure could be
> replicated and made simpler. Instead of providing xml content in
> the form of strings, service names could be supplied. The responsibility of
> actual service deployment becomes easier, and could be left to otopi (with
> the appropriate modifications) or switched over to vdsm.
>
> --
>
> Regardless, usage of statically provided vs. dynamically created services
> remains an open question. I think we'd like to avoid implementing logic
> that asks whether some service is provided (and then write it if it
> isn't...), and so choosing between the dynamic and static approaches is
> also needed. Using the static approach, guaranteeing *all* services are
> provided will be required.
>
> I do believe guaranteeing the presence of all required services is worth
> it, however custom services aren't going to be naively compatible, and
> we'll still have to use similar mechanism as described in #1 (service
> string -> .xml -> addition of service name to active zone).
>
>
> Your thoughts are welcome.
>
> Thanks,
> Leon
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Dan Kenigsberg
On Thu, Apr 6, 2017 at 11:31 AM, Sahina Bose  wrote:
>
>
> On Thu, Apr 6, 2017 at 1:31 PM, Dan Kenigsberg  wrote:
>>
>> I've merged the fix of https://gerrit.ovirt.org/#/c/75134/
>>
>> With it, the hc-basic suite no longer fails - it hangs for hours, and I
>> don't know why.
>>
>> Sahina, can you look at
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/
>
>
> The gluster setup and HE install does take around 15-20 minutes. Looks like
> you aborted in between?


What does the log say?
http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/console

I aborted the job after 1 hour and 47 minutes.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Sahina Bose
On Thu, Apr 6, 2017 at 1:31 PM, Dan Kenigsberg  wrote:

> I've merged the fix of https://gerrit.ovirt.org/#/c/75134/
>
> With it, the hc-basic suite no longer fails - it hangs for hours, and I
> don't know why.
>
> Sahina, can you look at
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/


The gluster setup and HE install does take around 15-20 minutes. Looks like
you aborted in between?


>
> ?
>
> On Tue, Apr 4, 2017 at 2:11 PM, Petr Horacek  wrote:
> > Hello Sahina,
> >
> > I think I have a fix for that. Can I somehow trigger the test with
> > VDSM refspec/custom RPMs?
> >
> > Thanks,
> > Petr
> >
> > 2017-04-04 8:52 GMT+02:00 Dan Kenigsberg :
> >> On Tue, Apr 4, 2017 at 9:28 AM, Sahina Bose  wrote:
> >>> Job's still failing on master.
> >>> Could this be related to network patches that got merged on Mar 28, for
> >>> instance https://gerrit.ovirt.org/#/c/74390/ ?
> >>>
> >>> On Thu, Mar 30, 2017 at 11:41 AM, Sahina Bose 
> wrote:
> 
>  The error in vdsm.log
> 
>  Traceback (most recent call last):
>    File "/usr/share/vdsm/virt/vm.py", line 2016, in _setup_devices
>  dev_object.setup()
>    File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/graphics.py",
>  line 63, in setup
>  net_api.create_libvirt_network(display_network, self.conf['vmId'])
>    File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 90, in
>  create_libvirt_network
>  libvirt.create_network(netname, user_reference)
>    File "/usr/lib/python2.7/site-packages/vdsm/network/libvirt.py", line
>  94, in create_network
>  if not is_libvirt_network(netname):
>    File "/usr/lib/python2.7/site-packages/vdsm/network/libvirt.py", line
>  159, in is_libvirt_network
>  netname = LIBVIRT_NET_PREFIX + netname
>  TypeError: cannot concatenate 'str' and 'NoneType' objects
>  2017-03-29 22:58:39,559-0400 ERROR (vm/d71bdf4e) [virt.vm]
>  (vmId='d71bdf4e-1eb3-4762-bd0e-05bb9f5e43ef') The vm start process failed
>  (vm:659)
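
In essence, the failure reduces to concatenating the libvirt network name prefix
with a display network that is None; a minimal illustration (values assumed for
clarity, not the actual vdsm code path):

LIBVIRT_NET_PREFIX = 'vdsm-'   # assumed prefix, for illustration
display_network = None         # no display network resolved for the VM
netname = LIBVIRT_NET_PREFIX + display_network
# TypeError: cannot concatenate 'str' and 'NoneType' objects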
> 
>  The tests last passed on Mar 28. Did a recent patch break this?
> 
>  The full build logs at
>  http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/52/artifact/exported-artifacts/test_logs/
> 
>  thanks
>  sahina
> >>
> >> Sorry Sahina for having missed your email. Indeed, it seems that
> >> Eddy's topic branch of creating libvirt networks just-in-time causes
> >> the failure.
> >> Do note that the topic branch is quite long, and reverting parts of it
> >> might take a little while.
> >>
> >> Petr,  can you make hc-system-tests green again?
> >>
> >> Regards,
> >> Dan.
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Abusing injection and DB access

2017-04-06 Thread Roy Golan
On Wed, Apr 5, 2017 at 11:34 PM Moti Asayag  wrote:

> On Wed, Apr 5, 2017 at 11:17 PM, Roy Golan  wrote:
>
>
>
> On Wed, Apr 5, 2017 at 9:06 PM Moti Asayag  wrote:
>
> Hi All,
>
> ATM, there are 78 occurrences of "Injector.injectMembers(new
> AuditLogableBase())" in ovirt-engine project, which their main purpose is
> to ease the resolve of the placeholders of the audit log message while
> logging an event.
>
> For instance AuditLogType.MAC_ADDRESS_IS_EXTERNAL is being used from
> ImportVmCommandBase.java in the following way:
>
> private AuditLogableBase createExternalMacsAuditLog(VM vm, Set
> externalMacs) {
> AuditLogableBase logable = *Injector.injectMembers*(new
> AuditLogableBase());
> logable.setVmId(vm.getId());
> logable.setCustomCommaSeparatedValues("MACAddr", externalMacs);
> return logable;
> }
>
> The entry in the properties file is:
> MAC_ADDRESS_IS_EXTERNAL=VM ${*VmName*} has MAC address(es) ${MACAddr},
> which is/are out of its MAC pool definitions.
>
> Therefore the only purpose of the injection is to allow the
> AuditLogDirector to resolve the ${*VmName*} which is already known at the
> time of creating the AuditLogableBase entry.
>
> The result is injecting the DAOs for the AuditLogableBase instance and
> using the VM dao to retrieve the VM entry from the DB.
> This is just a waste of injection and DB access, while both can be spared.
>
> This could have been easily replaced by one of the following:
>
>- auditLogableBase.setVmName(vm.getName());
>
> - setVmName is protected so not usable as is
>
>
> It will become public if we agree on
>
>
> https://gerrit.ovirt.org/#/c/75244/2/backend/manager/modules/dal/src/main/java/org/ovirt/engine/core/dal/dbbroker/auditloghandling/AuditLogableBase.java
>
>
>
>- auditLogableBase.addCustomValue("VmName", vm.getName());
>
> I prefer this, it is readable. And BTW it is fluent, it returns 'this' so
> use
>
>   AuditLogDirector(new AuditLogableBase(type)
>   .addCustomValue("VmName", vm.getName()));
>
>
> I'm okay with this as well.
>
>
> Please pick up any occurrence from your domain and send a patch to replace
> it where possible.
> Thanks in advance,
> Moti
>
>
> +1
>
> Frankly, the fact that AuditLogableBase gives protected access to all the
> DAOs is a total abuse. Every command should declare its own
> deps.
>
>
> That will require a huge effort.
>

Removed them all, https://gerrit.ovirt.org/75262 compile +1
Now need to fix the tests - I'd appreciate help here


>
> ___
>
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
>
> --
> Regards,
> Moti
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Abusing injection and DB access

2017-04-06 Thread Yevgeny Zaspitsky
On Wed, Apr 5, 2017 at 11:17 PM, Roy Golan  wrote:

>
>
> On Wed, Apr 5, 2017 at 9:06 PM Moti Asayag  wrote:
>
>> Hi All,
>>
>> ATM, there are 78 occurrences of "Injector.injectMembers(new
>> AuditLogableBase())" in ovirt-engine project, which their main purpose is
>> to ease the resolve of the placeholders of the audit log message while
>> logging an event.
>>
>> For instance AuditLogType.MAC_ADDRESS_IS_EXTERNAL is being used from
>> ImportVmCommandBase.java in the following way:
>>
>> private AuditLogableBase createExternalMacsAuditLog(VM vm, Set
>> externalMacs) {
>> AuditLogableBase logable = *Injector.injectMembers*(new
>> AuditLogableBase());
>> logable.setVmId(vm.getId());
>> logable.setCustomCommaSeparatedValues("MACAddr", externalMacs);
>> return logable;
>> }
>>
>> The entry in the properties file is:
>> MAC_ADDRESS_IS_EXTERNAL=VM ${*VmName*} has MAC address(es) ${MACAddr},
>> which is/are out of its MAC pool definitions.
>>
>> Therefore the only purpose of the injection is to allow the
>> AuditLogDirector to resolve the ${*VmName*} which is already known at
>> the time of creating the AuditLogableBase entry.
>>
>> The result is injecting the DAOs for the AuditLogableBase instance and
>> using the VM dao to retrieve the VM entry from the DB.
>> This is just a waste of injection and DB access, while both can be spared.
>>
>> This could have been easily replaced by one of the following:
>>
>>- auditLogableBase.setVmName(vm.getName());
>>
>> - setVmName is protected so not usable as is
>
>>
>>- auditLogableBase.addCustomValue("VmName", vm.getName());
>>
>> I prefer this, it is readable. And BTW it is fluent, it returns 'this' so
> use
>
>   AuditLogDirector(new AuditLogableBase(type)
>   .addCustomValue("VmName", vm.getName()));
>
>> Please pick up any occurrence from your domain and send a patch to
>> replace it where possible.
>> Thanks in advance,
>> Moti
>>
>
> +1
>
+1

>
> Frankly, the fact that AuditLogableBase gives protected access to all the
> DAOs is a total abuse. Every command should declare its own
> deps.
>

+100 Can't agree more with declaring fine-grained dependencies on each
bean/command.

>
> ___
>
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [OST] [HC] HE VM fails to start

2017-04-06 Thread Dan Kenigsberg
I've merged the fix of https://gerrit.ovirt.org/#/c/75134/

With it, the hc-basic suite no longer fails - it hangs for hours, and I
don't know why.

Sahina, can you look at
http://jenkins.ovirt.org/job/ovirt-system-tests_manual/199/parameters/
?

On Tue, Apr 4, 2017 at 2:11 PM, Petr Horacek  wrote:
> Hello Sahina,
>
> I think I have a fix for that. Can I somehow trigger the test with
> VDSM refspec/custom RPMs?
>
> Thanks,
> Petr
>
> 2017-04-04 8:52 GMT+02:00 Dan Kenigsberg :
>> On Tue, Apr 4, 2017 at 9:28 AM, Sahina Bose  wrote:
>>> Job's still failing on master.
>>> Could this be related to network patches that got merged on Mar 28, for
>>> instance https://gerrit.ovirt.org/#/c/74390/ ?
>>>
>>> On Thu, Mar 30, 2017 at 11:41 AM, Sahina Bose  wrote:

 The error in vdsm.log

 Traceback (most recent call last):
   File "/usr/share/vdsm/virt/vm.py", line 2016, in _setup_devices
 dev_object.setup()
   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/graphics.py",
 line 63, in setup
 net_api.create_libvirt_network(display_network, self.conf['vmId'])
   File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 90, in
 create_libvirt_network
 libvirt.create_network(netname, user_reference)
   File "/usr/lib/python2.7/site-packages/vdsm/network/libvirt.py", line
 94, in create_network
 if not is_libvirt_network(netname):
   File "/usr/lib/python2.7/site-packages/vdsm/network/libvirt.py", line
 159, in is_libvirt_network
 netname = LIBVIRT_NET_PREFIX + netname
 TypeError: cannot concatenate 'str' and 'NoneType' objects
 2017-03-29 22:58:39,559-0400 ERROR (vm/d71bdf4e) [virt.vm]
 (vmId='d71bdf4e-1eb3-4762-bd0e-05bb9f5e43ef') The vm start process failed
 (vm:659)

 The tests last passed on Mar 28. Did a recent patch break this?

 The full build logs at
 http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/52/artifact/exported-artifacts/test_logs/

 thanks
 sahina
>>
>> Sorry Sahina for having missed your email. Indeed, it seems that
>> Eddy's topic branch of creating libvirt networks just-in-time causes
>> the failure.
>> Do note that the topic branch is quite long, and reverting parts of it
>> might take a little while.
>>
>> Petr,  can you make hc-system-tests green again?
>>
>> Regards,
>> Dan.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 28-03-2017 ] [ repoclosure ]

2017-04-06 Thread Sandro Bonazzola
On Tue, Mar 28, 2017 at 5:17 PM, Pavel Zhukov  wrote:

>
> Hi,
>
> The newly introduced repoclosure test failed [1] on the gdeploy package
> (GlusterFS) [2]
>
> As we hadn't run this test before, the only suspected patch is the patch
> to add the test itself:
> 959e735c1c02a74c209fffc03d76f67ccf86606e ost: added repo closure test
>
> [1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6066/
>
> [2]
>
> 14:18:36 [basic_suit_el7] Repos looked at: 10
> 14:18:36 [basic_suit_el7]centos-base-el7
> 14:18:36 [basic_suit_el7]centos-extras-el7
> 14:18:36 [basic_suit_el7]centos-opstools-testing-el7
> 14:18:36 [basic_suit_el7]centos-ovirt-4.0-el7
> 14:18:36 [basic_suit_el7]centos-updates-el7
> 14:18:36 [basic_suit_el7]epel-el7
> 14:18:36 [basic_suit_el7]glusterfs-3.10-el7
> 14:18:36 [basic_suit_el7]internal_repo
> 14:18:36 [basic_suit_el7]ovirt-master-snapshot-static-el7
> 14:18:36 [basic_suit_el7]ovirt-master-tested-el7
> 14:18:36 [basic_suit_el7] Num Packages in Repos: 38045
> 14:18:36 [basic_suit_el7] package: ovirt-release-host-node-4.2.0-0.3.master.2017032835.git67870d2.el7.centos.noarch from internal_repo
> 14:18:36 [basic_suit_el7]   unresolved deps:
> 14:18:36 [basic_suit_el7]  gdeploy
>
>

This is a bug within the test suite: the gdeploy repo should be added to the
ones used to compose the internal_repo.
I understand that using ovirt-release rpm to get the right repos is not
possible right now, but it would be nice to have just a proxy instead of
mirroring repos with a subset of packages.



> --
> Pavel Zhukov
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel