[ovirt-devel] Re: Can we drop fc27 jobs / rpms from CI/tested repos?

2019-01-13 Thread Eyal Edri
Almost all were removed, still remaining:

- jenkins_master_check-patch-fc27-x86_64
- system-sync_mirrors-fedora-base-fc27-x86_64
- system-sync_mirrors-fedora-updates-fc27-x86_64
- lago-ost-plugin_master_github_check-patch-fc27-x86_64
- lago-ost-plugin_master_github_check-merged-fc27-x86_64
- lago-ost-plugin_master_github_build-artifacts-fc27-x86_64

Galit/Ehud - please see if these can be dropped.

On Thu, Jan 10, 2019 at 1:04 PM Sandro Bonazzola 
wrote:

>
>
> Il giorno mar 25 dic 2018 alle ore 15:44 Eyal Edri  ha
> scritto:
>
>> I see the following jobs still exist:
>>
>>
>>- jenkins_master_check-patch-fc27-x86_64
>>- repoman_master_check-patch-fc27-x86_64
>>- repoman_master_check-merged-fc27-x86_64
>>- pthreading_master_check-patch-fc27-x86_64
>>- repoman_master_build-artifacts-fc27-x86_64
>>- system-sync_mirrors-fedora-base-fc27-x86_64
>>- pthreading_master_build-artifacts-fc27-x86_64
>>- system-sync_mirrors-fedora-updates-fc27-x86_64
>>- ovirt-engine-api-model_4.1_check-patch-fc27-x86_64
>>- ovirt-engine-api-model_4.2_check-patch-fc27-x86_64
>>- lago-ost-plugin_master_github_check-patch-fc27-x86_64
>>- ovirt-engine-api-model_master_check-patch-fc27-x86_64
>>- lago-ost-plugin_master_github_check-merged-fc27-x86_64
>>- lago-ost-plugin_master_github_build-artifacts-fc27-x86_64
>>
>> This means also dropping the mirrors for it.
>>
>>
> fc27 is EOL so yes, I think it's safe to drop them.
>
>
>
>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV/CNV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>> ___
>> Infra mailing list -- in...@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/in...@ovirt.org/message/HTALCNOHRLEVEULQZW37NHW4ZURHRVZ5/
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>



[ovirt-devel] Re: Can we drop fc27 jobs / rpms from CI/tested repos?

2019-01-13 Thread Eyal Edri
Adding Galit.

On Sun, Jan 13, 2019 at 10:08 AM Eyal Edri  wrote:

> Almost all were removed, still remaining:
>
> - jenkins_master_check-patch-fc27-x86_64
> - system-sync_mirrors-fedora-base-fc27-x86_64
> - system-sync_mirrors-fedora-updates-fc27-x86_64
> - lago-ost-plugin_master_github_check-patch-fc27-x86_64
> - lago-ost-plugin_master_github_check-merged-fc27-x86_64
> - lago-ost-plugin_master_github_build-artifacts-fc27-x86_64
>
> Galit/Ehud - please see if these can be dropped.
>
> On Thu, Jan 10, 2019 at 1:04 PM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno mar 25 dic 2018 alle ore 15:44 Eyal Edri  ha
>> scritto:
>>
>>> I see the following jobs still exist:
>>>
>>>
>>>- jenkins_master_check-patch-fc27-x86_64
>>>- repoman_master_check-patch-fc27-x86_64
>>>- repoman_master_check-merged-fc27-x86_64
>>>- pthreading_master_check-patch-fc27-x86_64
>>>- repoman_master_build-artifacts-fc27-x86_64
>>>- system-sync_mirrors-fedora-base-fc27-x86_64
>>>- pthreading_master_build-artifacts-fc27-x86_64
>>>- system-sync_mirrors-fedora-updates-fc27-x86_64
>>>- ovirt-engine-api-model_4.1_check-patch-fc27-x86_64
>>>- ovirt-engine-api-model_4.2_check-patch-fc27-x86_64
>>>- lago-ost-plugin_master_github_check-patch-fc27-x86_64
>>>- ovirt-engine-api-model_master_check-patch-fc27-x86_64
>>>- lago-ost-plugin_master_github_check-merged-fc27-x86_64
>>>- lago-ost-plugin_master_github_build-artifacts-fc27-x86_64
>>>
>>>
>>> This means also dropping the mirrors for it.
>>>
>>>
>> fc27 is EOL so yes, I think it's safe to drop them.

[ovirt-devel] Re: [VDSM] FAIL: test_add_delete_ipv6 (network.ip_address_test.IPAddressTest) fail again

2019-01-13 Thread Edward Haas
Thank you, Nir.
I added an assert message to this last one.
It should not happen at all; it is very strange.

https://gerrit.ovirt.org/#/c/96849/
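
For illustration, an assertion message of the kind mentioned above can be attached like this (a sketch of the idea only, not the actual Gerrit change; names and addresses are illustrative):

```python
import unittest


class AddressAssertionSketch(unittest.TestCase):
    def test_stale_address_is_reported_clearly(self):
        # Addresses as reported by the kernel for a test device.
        addresses = ['2001:99::1/64', 'fe80::6:23ff:fead:ed34/64']
        # The msg argument makes an eventual failure self-describing,
        # which helps when the failure is rare and hard to reproduce.
        self.assertNotIn(
            '2002:99::1/64', addresses,
            msg='address was deleted but is still reported; the kernel '
                'may not have processed the removal yet')


if __name__ == '__main__':
    unittest.main()
```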

Thanks,
Edy.

On Sun, Jan 13, 2019 at 9:21 AM Nir Soffer  wrote:

> Another network test that had not failed for a long time failed again today.
>
> Build:
>
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_standard-check-patch/runs/1484/nodes/127/steps/407/log/?start=0
>
> ==
> FAIL: test_local_auto_with_static_address_without_ra_server 
> (network.netinfo_test.TestIPv6Addresses)
> --
> Traceback (most recent call last):
>   File 
> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/testValidation.py",
>  line 333, in wrapper
> return f(*args, **kwargs)
>   File 
> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/testValidation.py",
>  line 194, in wrapper
> return f(*args, **kwargs)
>   File 
> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/network/netinfo_test.py",
>  line 399, in test_local_auto_with_static_address_without_ra_server
> self.assertEqual(2, len(ip_addrs))
> AssertionError: 2 != 4
>  >> begin captured logging << 
> 2019-01-13 07:08:43,598 DEBUG (MainThread) [root] /sbin/ip link add name 
> dummy_PU7TA type dummy (cwd None) (cmdutils:133)
> 2019-01-13 07:08:43,619 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:141)
> 2019-01-13 07:08:43,624 DEBUG (netlink/events) [root] START thread 
>  (func= Monitor._scan of  0x7f0817ca1610>>, args=(), kwargs={}) (concurrent:193)
> 2019-01-13 07:08:43,627 DEBUG (MainThread) [root] /sbin/ip link set dev 
> dummy_PU7TA up (cwd None) (cmdutils:133)
> 2019-01-13 07:08:43,647 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:141)
> 2019-01-13 07:08:43,653 DEBUG (netlink/events) [root] FINISH thread 
>  (concurrent:196)
> 2019-01-13 07:08:43,655 DEBUG (MainThread) [root] /sbin/ip -6 addr add dev 
> dummy_PU7TA 2001::88/64 (cwd None) (cmdutils:133)
> 2019-01-13 07:08:43,669 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:141)
> 2019-01-13 07:08:43,677 DEBUG (MainThread) [root] /sbin/ip link del dev 
> dummy_PU7TA (cwd None) (cmdutils:133)
> 2019-01-13 07:08:43,696 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:141)
> - >> end captured logging << -
>
>
>
> On Fri, Jan 4, 2019 at 7:30 PM Nir Soffer  wrote:
>
>> We had this failure a lot in the past and it seemed to be resolved, but I
>> see it again today:
>>
>> ==
>> FAIL: test_add_delete_ipv6 (network.ip_address_test.IPAddressTest)
>> --
>> Traceback (most recent call last):
>>   File 
>> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/testValidation.py",
>>  line 333, in wrapper
>> return f(*args, **kwargs)
>>   File 
>> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/network/ip_address_test.py",
>>  line 226, in test_add_delete_ipv6
>> self._test_add_delete(IPV6_A_WITH_PREFIXLEN, IPV6_B_WITH_PREFIXLEN)
>>   File 
>> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/network/ip_address_test.py",
>>  line 247, in _test_add_delete
>> self._assert_has_no_address(nic, ip_b)
>>   File 
>> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/network/ip_address_test.py",
>>  line 344, in _assert_has_no_address
>> self._assert_address_not_in(address_with_prefixlen, addresses)
>>   File 
>> "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/network/ip_address_test.py",
>>  line 352, in _assert_address_not_in
>> self.assertNotIn(address_with_prefixlen, addresses_list)
>> AssertionError: '2002:99::1/64' unexpectedly found in ['2002:99::1/64', 
>> '2001:99::1/64', 'fe80::6:23ff:fead:ed34/64']
>>  >> begin captured logging << 
>> 2019-01-04 16:01:53,543 DEBUG (MainThread) [root] /sbin/ip link add name 
>> dummy_NKzY1 type dummy (cwd None) (cmdutils:133)
>> 2019-01-04 16:01:53,559 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:141)
>> 2019-01-04 16:01:53,563 DEBUG (netlink/events) [root] START thread 
>>  (func=> Monitor._scan of > 0x7f271d654cd0>>, args=(), kwargs={}) (concurrent:193)
>> 2019-01-04 16:01:53,567 DEBUG (MainThread) [root] /sbin/ip link set dev 
>> dummy_NKzY1 up (cwd None) (cmdutils:133)
>> 2019-01-04 16:01:53,586 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:141)
>> 2019-01-04 16:01:53,593 DEBUG (netlink/events) [root] FINISH thread 
>>  (concurrent:196)
>> 2019-01-04 16:01:53,597 DEBUG (MainThread) [root] /sbin/ip -6 addr add dev 
>> dummy_NKzY1 2001:99::1/64 (cwd None) (cmdutils:133)
>> 2019-01-04 16:01:53,615 DEBUG (MainThread) [root] SUCCESS:  = '';  
>

[ovirt-devel] Hard disk requirement

2019-01-13 Thread Hetz Ben Hamo
Hi,

The old oVirt (3.x?) ISO image didn't require a big hard disk to install
the node part on a physical machine, since the HE and other parts ran
from NFS/iSCSI etc.

oVirt 4.x, if I understand correctly, does require a hard disk on the node.
Can this requirement be avoided by just using an SD card? What's the minimum
storage for the local node?

Thanks,
Hetz
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DLQVINP6TFCQMJEEY7FAF3S5HOEZFXDX/


[ovirt-devel] Re: Hard disk requirement

2019-01-13 Thread Nir Soffer
On Sun, Jan 13, 2019 at 6:18 PM Hetz Ben Hamo  wrote:

> Hi,
>
> The old oVirt (3.x?) ISO image wasn't requiring a big hard disk in order
> to install the node part on a physical machine, since the HE and other
> parts were running using NFS/iSCSI etc.
>
> oVirt 4.X if I understand correctly - does require hard disk on the node.
>

I think you can set up temporary storage on a diskless host using NFS, or
by connecting to a temporary LUN and setting up a file system on it.

Once the bootstrap engine is ready on the "local" storage, we move the
engine disk to shared storage, and you can remove the local storage.

Adding Simone to add more info.


> Can this requirement be avoided and just use an SD Card? Whats the minimum
> storage for the local node?
>

Another option is to use /dev/shm, if you have a server with a lot of memory.
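
As a rough illustration of the /dev/shm idea (a sketch only: a real hosted-engine bootstrap needs far more space plus a filesystem or NFS export on top, and all paths and sizes here are illustrative):

```python
import os
import tempfile

# /dev/shm is a RAM-backed tmpfs on typical Linux systems, so files
# created there consume memory rather than disk.
scratch = tempfile.mkdtemp(prefix='he-bootstrap.', dir='/dev/shm')
backing = os.path.join(scratch, 'backing.img')

# Create a sparse backing file to act as scratch storage (8 MiB here;
# a real bootstrap would need tens of GB).
with open(backing, 'wb') as f:
    f.truncate(8 * 1024 * 1024)

print(os.path.getsize(backing))  # 8388608

# Once the engine disk has been moved to shared storage, the
# temporary scratch area can simply be removed.
os.remove(backing)
os.rmdir(scratch)
```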

Note that this list is for oVirt developers. This question is more about
using oVirt, so the users mailing list is a better fit. Other users may
have already solved this issue and can help more than developers, who
have much less experience with actual deployment.

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DNQDWN5GR7T2VKFK5A4ISYJLLL4JKXMF/